
FAQ

For a non-technical overview, see the FAQ on micelclaw.com.

What is the tech stack?

PostgreSQL with pgvector for storage and vector search. Fastify (Node.js) for the REST API. React 19 with Vite and Tailwind CSS for the dashboard. Ollama for local AI inference (embeddings, entity extraction, voice). Docker for managed services (Jellyfin, Mailu, Firefly III, etc.). The project is a monorepo managed with pnpm workspaces.

Do the AI features need a GPU?

Ollama runs quantized models on CPU. The embedding model is qwen3-embedding (0.6B parameters, 1024 dimensions) — small enough to embed a record in under 200ms on a modern CPU. Entity extraction uses qwen3 (1.7B parameters). These models fit comfortably in 4 GB of RAM. No GPU, no CUDA, no cloud API keys required.
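Embedding requests go over Ollama's local REST API. A minimal TypeScript sketch, assuming Ollama's standard `/api/embeddings` endpoint on its default port; the helper names are illustrative, not from the codebase:

```typescript
// Hypothetical helper for embedding a record via a local Ollama instance.
interface EmbedRequest {
  model: string;
  prompt: string;
}

// Build the request payload for the qwen3-embedding model.
function buildEmbedRequest(text: string): EmbedRequest {
  return { model: "qwen3-embedding", prompt: text };
}

// POST the text to Ollama and return the 1024-dim embedding vector.
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildEmbedRequest(text)),
  });
  const { embedding } = (await res.json()) as { embedding: number[] };
  return embedding;
}
```

The returned vector can be written straight into a pgvector `vector(1024)` column.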

What database does it use?

PostgreSQL with the pgvector extension. One database for everything: notes, calendar events, emails, contacts, file metadata, photo metadata, embeddings, the knowledge graph, and search indexes (tsvector). No Elasticsearch, no Redis, no separate search service. One backup strategy, one connection pool, one operational concern.

What are the system requirements?

| Requirement | Minimum | Recommended |
| --- | --- | --- |
| OS | Linux (Debian, Ubuntu, WSL2) | Debian 13 (Trixie) |
| Node.js | >=20 | 22 LTS |
| PostgreSQL | >=15 with pgvector | 16 with pgvector |
| pnpm | >=9 | Latest |
| RAM | 4 GB | 8 GB |
| Docker | Optional (for managed services) | Docker Engine 24+ |
| Ollama | Optional (for AI features) | Latest |

How is the application deployed?

The core application (Fastify API + scheduler + sync engine) runs on bare metal as a Node.js process. Managed services — Jellyfin, Mailu, qBittorrent, Firefly III, Bitcoin Core, and 35+ others — run as Docker containers. The Service Lifecycle Manager orchestrates them: RAM budgeting, on-demand start/stop, scheduled windows, drain guards, and health monitoring. Docker Compose files are auto-generated from the service registry.
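RAM budgeting amounts to an admission check: an on-demand service may start only if its footprint fits within the remaining budget. A minimal sketch — the types and function names are invented for illustration, and the real Service Lifecycle Manager does considerably more (drain guards, scheduled windows, health checks):

```typescript
// Hypothetical shape of a managed service entry.
interface ManagedService {
  name: string;
  ramMb: number;     // declared RAM footprint
  running: boolean;
}

// Admit a service only if current usage plus its footprint fits the budget.
function canStart(
  candidate: ManagedService,
  services: ManagedService[],
  budgetMb: number,
): boolean {
  const usedMb = services
    .filter((s) => s.running)
    .reduce((sum, s) => sum + s.ramMb, 0);
  return usedMb + candidate.ramMb <= budgetMb;
}
```

In practice a manager like this would also decide *which* idle service to stop when the check fails, rather than simply refusing the start.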

How do I update to a new version?

```sh
git pull origin main
pnpm install
pnpm db:migrate
pnpm build
```

Migrations are idempotent and versioned with Drizzle ORM. No manual schema changes needed. The migration count is at 137+ and growing — each one is tested against the production schema before merging.

Can I write my own skills?

Yes. Skills are markdown files (SKILL.md) that describe API endpoints, expected behavior, and examples. The AI agent reads the skill at runtime and learns to call the described APIs. Skills support metadata flags like always:true (loaded on every message) and always:false (activated by context). See Creating a Skill for a step-by-step guide.
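As a rough illustration, a SKILL.md might look like the following. This is a hypothetical sketch: the frontmatter layout, the `name` field, and the `/api/weather` endpoint are invented for this example — only the `always` flag is described above.

```md
---
name: weather
always: false
---

# Weather Skill

When the user asks about weather, call the (hypothetical) endpoint:

    GET /api/weather?city=<name>

The response is JSON with `temperature` and `conditions` fields.
Summarize it in one sentence; do not read raw JSON back to the user.
```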

Can I add my own apps?

Yes. Apps are Docker containers registered in the service registry with a lifecycle policy (always-on, on-demand, or scheduled). They integrate into the dashboard via iframe embed or API proxy. The Service Lifecycle Manager handles starting, stopping, and health checking. See Creating an App for details.
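A registry entry can be pictured as a small record pairing a container image with a lifecycle policy. The three policy values come from the answer above; every field name below is hypothetical, not the actual registry schema:

```typescript
// The three lifecycle policies named in the docs.
type Lifecycle = "always-on" | "on-demand" | "scheduled";

// Hypothetical shape of a service-registry entry.
interface AppEntry {
  name: string;
  image: string;          // Docker image reference
  lifecycle: Lifecycle;
  ramMb: number;          // footprint used for RAM budgeting
  healthCheckPath: string;
}

const jellyfin: AppEntry = {
  name: "jellyfin",
  image: "jellyfin/jellyfin:latest",
  lifecycle: "on-demand",
  ramMb: 2048,
  healthCheckPath: "/health",
};
```

From entries like this, the Docker Compose files are generated rather than hand-written.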

How does search work?

Hybrid search with four signals fused via Reciprocal Rank Fusion (RRF):

  1. Semantic search — pgvector cosine similarity against 1024-dim embeddings
  2. Full-text search — tsvector with GIN indexes, weighted columns (title > content > tags), UNION ALL across all domain tables
  3. Knowledge graph — entity overlap via entity_links table
  4. Heat scoring — temporal relevance from the record_heat table

Signals are rank-normalized, degenerate signals are auto-suppressed, and multi-signal matches get a confidence bonus. Total latency: 30–80 ms, all inside PostgreSQL. Two endpoints: GET /search (standard RRF) and GET /search/advanced (user-tunable weights with provenance breakdown).
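The fusion step itself is compact: each signal contributes 1/(k + rank) for every record it returns, and the sums are sorted. A self-contained sketch — k = 60 is the conventional RRF default, and the rank normalization, auto-suppression, and confidence bonus described above are deliberately omitted:

```typescript
// Fuse several ranked lists of record IDs with Reciprocal Rank Fusion.
// Each signal's list is ordered best-first; rank is 1-based.
function rrfFuse(rankings: string[][], k = 60): [string, number][] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  // Highest fused score first.
  return Array.from(scores.entries()).sort((a, b) => b[1] - a[1]);
}
```

A record ranked moderately well by several signals naturally outscores one ranked first by a single signal, which is the property the confidence bonus then amplifies.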

Can I export my data?

Yes. All data lives in a single PostgreSQL database — standard pg_dump works. All files are on your filesystem in predictable paths. There are no proprietary formats, no binary blobs in the database (files are stored on disk, not in PostgreSQL), and no vendor lock-in. If you leave Micelclaw, your data comes with you.
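For example, a full backup is one command (the database name `micelclaw` is assumed here; substitute your own):

```sh
# Dump the whole database in custom format (compressed, restorable with pg_restore)
pg_dump --format=custom --file=micelclaw.backup micelclaw

# Restore into a fresh database later:
# pg_restore --dbname=micelclaw micelclaw.backup
```

Files referenced by the database are already plain files on disk, so a filesystem copy of their directories completes the backup.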