FAQ
For a non-technical overview, see the FAQ on micelclaw.com.
What’s the tech stack?
PostgreSQL with pgvector for storage and vector search. Fastify (Node.js) for the REST API. React 19 with Vite and Tailwind CSS for the dashboard. Ollama for local AI inference (embeddings, entity extraction, voice). Docker for managed services (Jellyfin, Mailu, Firefly III, etc.). The project is a monorepo managed with pnpm workspaces.
How does the AI work without a GPU?
Ollama runs quantized models on CPU. The embedding model is qwen3-embedding (0.6B parameters, 1024 dimensions) — small enough to embed a record in under 200 ms on a modern CPU. Entity extraction uses qwen3 (1.7B parameters). These models fit comfortably in 4 GB of RAM. No GPU, no CUDA, no cloud API keys required.
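To make the embedding flow concrete, here is a minimal TypeScript sketch. The `embed` helper targets Ollama's REST API (`POST /api/embed`) with the model named above; the host and port are an assumption (Ollama's defaults), and the cosine-similarity helper shows the measure that pgvector's cosine distance is built on.

```typescript
// Sketch only: embed() calls Ollama's /api/embed endpoint; localhost:11434
// is Ollama's default address (an assumption, not Micelclaw configuration).
async function embed(text: string): Promise<number[]> {
  const res = await fetch("http://localhost:11434/api/embed", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "qwen3-embedding", input: text }),
  });
  const { embeddings } = (await res.json()) as { embeddings: number[][] };
  return embeddings[0]; // 1024-dimensional vector
}

// Cosine similarity between two embedding vectors: 1 = same direction,
// 0 = unrelated. pgvector ranks semantic matches by the same measure.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```

In practice the vectors are stored in a pgvector column and compared inside PostgreSQL rather than in application code; the helper above is only to show what the comparison computes.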
What database does it use?
PostgreSQL with the pgvector extension. One database for everything: notes, calendar events, emails, contacts, file metadata, photo metadata, embeddings, the knowledge graph, and search indexes (tsvector). No Elasticsearch, no Redis, no separate search service. One backup strategy, one connection pool, one operational concern.
What are the system requirements?
| Requirement | Minimum | Recommended |
|---|---|---|
| OS | Linux (Debian, Ubuntu, WSL2) | Debian 13 (Trixie) |
| Node.js | >=20 | 22 LTS |
| PostgreSQL | >=15 with pgvector | 16 with pgvector |
| pnpm | >=9 | Latest |
| RAM | 4 GB | 8 GB |
| Docker | Optional (for managed services) | Docker Engine 24+ |
| Ollama | Optional (for AI features) | Latest |
Can I run it on Docker?
The core application (Fastify API + scheduler + sync engine) runs on bare metal as a Node.js process. Managed services — Jellyfin, Mailu, qBittorrent, Firefly III, Bitcoin Core, and 35+ others — run as Docker containers. The Service Lifecycle Manager orchestrates them: RAM budgeting, on-demand start/stop, scheduled windows, drain guards, and health monitoring. Docker Compose files are auto-generated from the service registry.
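The RAM-budgeting decision can be sketched as a pure function: before an on-demand container starts, pick idle on-demand services to stop so the new one fits in the budget. This is a hypothetical illustration of the idea; the names, fields, and logic are not Micelclaw's actual API.

```typescript
// Hypothetical sketch of a lifecycle manager's RAM-budgeting step.
interface Service {
  name: string;
  ramMb: number;
  policy: "always-on" | "on-demand" | "scheduled";
  idle: boolean; // true when the drain guard reports no active connections
}

// Returns the names of idle on-demand services to stop so that `incoming`
// fits inside `budgetMb`, or null if the budget cannot be met.
function planStart(running: Service[], incoming: Service, budgetMb: number): string[] | null {
  const used = running.reduce((sum, s) => sum + s.ramMb, 0);
  let freeNeeded = used + incoming.ramMb - budgetMb;
  if (freeNeeded <= 0) return []; // already fits, stop nothing
  const toStop: string[] = [];
  // Only idle on-demand services are eligible; always-on services are never evicted.
  for (const s of running.filter((s) => s.policy === "on-demand" && s.idle)) {
    toStop.push(s.name);
    freeNeeded -= s.ramMb;
    if (freeNeeded <= 0) return toStop;
  }
  return null; // not enough reclaimable RAM
}
```

The drain guard mentioned above corresponds to the `idle` flag here: a service with active users is never a candidate for eviction.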
How do I update?
```sh
git pull origin main
pnpm install
pnpm db:migrate
pnpm build
```

Migrations are idempotent and versioned with Drizzle ORM. No manual schema changes needed. The migration count is at 137+ and growing — each one is tested against the production schema before merging.
Can I build custom skills?
Yes. Skills are markdown files (SKILL.md) that describe API endpoints, expected behavior, and examples. The AI agent reads the skill at runtime and learns to call the described APIs. Skills support metadata flags like always:true (loaded on every message) and always:false (activated by context). See Creating a Skill for a step-by-step guide.
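As a rough illustration of the idea (the exact SKILL.md schema is documented in Creating a Skill; the endpoint and fields below are hypothetical, not part of Micelclaw):

```markdown
<!-- Hypothetical SKILL.md sketch; the endpoint is illustrative only. -->
---
name: weather
always: false
---
# Weather Skill
Call `GET /api/weather?city=<name>` to fetch the current forecast.
The response is JSON with a temperature field and a short summary.
```

With `always: false`, the agent would only load this skill when the conversation context makes it relevant.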
Can I build custom apps?
Yes. Apps are Docker containers registered in the service registry with a lifecycle policy (always-on, on-demand, or scheduled). They integrate into the dashboard via iframe embed or API proxy. The Service Lifecycle Manager handles starting, stopping, and health checking. See Creating an App for details.
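A registry entry might look roughly like the following. This is a hypothetical shape to show the moving parts (image, lifecycle policy, dashboard integration, health check); the field names are illustrative, not the actual Micelclaw schema.

```typescript
// Hypothetical service-registry entry; all field names are illustrative.
const app = {
  name: "grafana",
  image: "grafana/grafana:latest",
  lifecycle: "on-demand" as const, // or "always-on" | "scheduled"
  ramBudgetMb: 512,                // counted against the RAM budget
  dashboard: { embed: "iframe", path: "/apps/grafana" },
  healthCheck: { path: "/api/health", intervalSec: 30 },
};
```

From an entry like this, the Docker Compose definition and lifecycle behavior described above would be derived automatically.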
How does search work?
Hybrid search with four signals fused via Reciprocal Rank Fusion (RRF):
- Semantic search — pgvector cosine similarity against 1024-dim embeddings
- Full-text search — tsvector with GIN indexes, weighted columns (title > content > tags), `UNION ALL` across all domain tables
- Knowledge graph — entity overlap via the `entity_links` table
- Heat scoring — temporal relevance from the `record_heat` table
Signals are rank-normalized, degenerate signals are auto-suppressed, and multi-signal matches get a confidence bonus. Total latency: 30–80 ms, all inside PostgreSQL. Two endpoints: GET /search (standard RRF) and GET /search/advanced (user-tunable weights with provenance breakdown).
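The fusion step can be sketched in a few lines. RRF scores each record by summing `1 / (k + rank)` over every signal that returned it, so records surfaced by multiple signals naturally rise, which is the intuition behind the multi-signal confidence bonus mentioned above. The constant `k = 60` is the value from the original RRF paper; whether Micelclaw uses the same constant is an assumption.

```typescript
// Reciprocal Rank Fusion: each input array is one signal's ranked list of
// record IDs (best first). A record's fused score is the sum of
// 1 / (k + rank) across all signals that ranked it.
function rrf(rankings: string[][], k = 60): { id: string; score: number }[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((a, b) => b.score - a.score);
}
```

Because only ranks matter, RRF needs no score calibration between signals: a cosine similarity and a tsvector rank never have to share a scale, which is what makes fusing four heterogeneous signals cheap.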
Can I export my data?
Yes. All data lives in a single PostgreSQL database — standard pg_dump works. All files are on your filesystem in predictable paths. There are no proprietary formats, no binary blobs in the database (files are stored on disk, not in PostgreSQL), and no vendor lock-in. If you leave Micelclaw, your data comes with you.