
Your LLM's Local Source of Truth

A local-first knowledge base and verification engine that sits between you and your AI tools. It answers from proven knowledge first, validates through executable code when needed, and gets smarter with every query.

pip install factq


Docs-First Resolution

Your project already has a knowledge base — it's called docs/. factq indexes your existing markdown files and makes them the first place any AI tool looks before generating an answer.

Executable Verification

When the KB can't answer, factq doesn't guess. It writes Python code to fetch, compute, and verify the answer in a secure sandbox. Only proven facts reach the user.
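A minimal sketch of the validate-before-trust idea — execute the candidate snippet with restricted builtins and only accept a result it actually produced. This is not factq's actual sandbox; `verify` and its builtin whitelist are illustrative:

```python
def verify(snippet: str) -> tuple[bool, str]:
    """Execute a candidate fact-checking snippet; the snippet must set a
    variable named `answer`. Any exception means the fact is unproven."""
    safe_globals = {"__builtins__": {"len": len, "sum": sum, "min": min, "max": max}}
    scope: dict = {}
    try:
        exec(compile(snippet, "<candidate>", "exec"), safe_globals, scope)
    except Exception as exc:
        # The error text is returned so a self-correction loop can use it.
        return False, f"error: {exc}"
    return ("answer" in scope), str(scope.get("answer"))
```

Because `__import__` is absent from the builtins, `import` statements fail at execution time even before any AST-level checks.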

Self-Growing Knowledge

Every verified fact can be saved back to your docs/ folder as a new markdown file — version-controlled in git, shared with your team automatically. The KB compounds over time.
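The commit-back step might look something like this — write the verified fact as a dated markdown note under docs/ so git picks it up. The path layout and note format here are assumptions, not factq's documented output:

```python
from datetime import date
from pathlib import Path

def commit_fact(docs_dir: str, slug: str, question: str, answer: str) -> Path:
    """Persist a verified fact as a markdown note under docs/research/,
    ready to be committed to git and shared with the team."""
    path = Path(docs_dir) / "research" / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        f"# {question}\n\n{answer}\n\n_Verified: {date.today().isoformat()}_\n"
    )
    return path
```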

MCP Server

Runs as a local MCP server so Claude Code, Cursor, and Windsurf can call factq.query() as a tool. Entirely local — no cloud hop.

RLM Integration

Built on the Recursive Language Model paradigm. The LLM writes code to navigate and verify knowledge instead of relying on embedding-based retrieval. Zero VRAM overhead.

Sandbox Security

Five-layer defense-in-depth: safe builtins, AST validation, function whitelisting, execution timeouts, and output truncation. Production deployments use Docker, Modal, or E2B.
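As an illustration of the AST-validation and function-whitelisting layers — a toy static check that rejects code before it ever runs; the rule set is illustrative, not factq's actual policy:

```python
import ast

ALLOWED_CALLS = {"len", "sum", "min", "max", "sorted", "print"}

def validate(snippet: str) -> bool:
    """Reject imports, dunder attribute access, and calls outside a
    whitelist, all before anything executes."""
    tree = ast.parse(snippet)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            return False  # blocks sandbox escapes via ().__class__ chains
        if isinstance(node, ast.Call):
            fn = node.func
            if isinstance(fn, ast.Name) and fn.id not in ALLOWED_CALLS:
                return False
    return True
```

Static AST checks are only one layer: they catch obvious escapes cheaply, while restricted builtins, timeouts, and output truncation handle what slips through at runtime.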


How It Works

Phase 0 — KB Lookup

Query hits your project's docs/ folder first. Known facts return in milliseconds.

Phase 1 — Generate

If the KB has no answer, the LLM writes Python code to retrieve and verify the fact from external sources.

Phase 2 — Validate

Code runs in a sandbox. Errors trigger self-correction, and the loop retries for up to five iterations, stopping as soon as the answer is proven.

Phase 3 — Commit

Verified results are offered for commit to docs/. Your knowledge base grows with every query.
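The four phases above can be sketched as a single resolution loop. The callbacks (`kb_lookup`, `generate_code`, `run_sandboxed`, `commit`) stand in for factq internals that aren't documented here:

```python
def resolve(query, kb_lookup, generate_code, run_sandboxed, commit, max_iters=5):
    """Phases 0-3: KB lookup first, then generate-and-validate for up to
    max_iters attempts, then offer the proven result for commit."""
    cached = kb_lookup(query)               # Phase 0: docs/ lookup
    if cached is not None:
        return {"source": "docs", "answer": cached}
    feedback = None
    for _ in range(max_iters):              # Phases 1-2: generate, validate
        code = generate_code(query, feedback)
        ok, result = run_sandboxed(code)
        if ok:
            commit(query, result)           # Phase 3: offer for docs/ commit
            return {"source": "deepthink", "answer": result}
        feedback = result                   # error text drives self-correction
    return {"source": "deepthink", "answer": None}
```

Feeding the sandbox's error text back into the next generation attempt is what makes the loop self-correcting rather than a blind retry.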


Quick Example

import factq

# Initialize from project root — auto-indexes docs/ folder
fq = factq.init(".")

# Query checks project docs first, then KB, then Deepthink
result = fq.query("What version of Pydantic does our project use?")
# result.source: 'docs' | 'kb' | 'deepthink'

# Save a verified finding to the project's docs/ folder
fq.save("docs/research/pydantic_v2_migration.md", result)

Or from the CLI:

factq init                                    # index current project
factq query "What is our database schema?"    # docs-first resolution
factq save docs/research/schema_notes.md      # commit a finding

Why factq?

|                        | Traditional RAG  | Cloud Search (Perplexity) | Doc Fetchers (Context7) | factq                 |
|------------------------|------------------|---------------------------|-------------------------|-----------------------|
| Verification           | Text similarity  | Citation links            | None                    | Executable code proof |
| Infrastructure         | 3-6 GB VRAM      | Cloud API                 | Cloud API               | Zero VRAM, ~28 MB     |
| Knowledge Persistence  | Ephemeral chunks | None                      | None                    | Git-versioned docs/   |
| Offline                | Partial          | No                        | No                      | Fully offline         |
| Gets Smarter Over Time | No               | No                        | No                      | Yes                   |