SYSTEM_DOCUMENTATION_01

How It Works

A high-fidelity pipeline for technical intelligence. Termiflow bridges the gap between raw web data and terminal-native execution.

[ ARCHITECTURE_OVERVIEW ]

REF: DIAGRAM_V2.0_STABLE
MANAGED: One API key. Zero configuration. We handle the infrastructure.
+-----------------+       +-----------------------+
|  YOUR_TERMINAL  | ----> |    TERMIFLOW_CLOUD    |
|  (termiflow     | <---- |                       |
|   CLI)          |       |  AI curation, search, |
+-----------------+       |  scoring, summaries   |
     tf_xxx key           +-----------------------+

30-Second Setup

Run termiflow config init, paste your tf_xxx key, and you're done. No provider keys to manage.

Managed AI

We handle the LLM scoring, web crawling, and summarization. You get curated results without managing API keys or costs.

Synced State

Subscriptions and feed state sync across devices. Local SQLite cache for offline reading.

SELF-HOSTED: Your keys. Your infrastructure. Full data sovereignty.
+-----------------+       +-----------------------+       +-------------------+
|  YOUR_TERMINAL  | ----> |   TERMIFLOW_ENGINE    | ----> |  YOUR_LLM_KEYS    |
|  (termiflow     | <---- |   (local Go library)  | <---- |  (Anthropic,      |
|   CLI)          |       |                       |       |   OpenAI, Ollama) |
+-----------------+       +-----------+-----------+       +-------------------+
                                      |
                                      v
                          +-----------------------+
                          |   YOUR_SEARCH_KEYS    |
                          |   (Tavily, Google,    |
                          |    Bing, Brave)       |
                          +-----------------------+

Full Sovereignty

All data stays on your machine: local SQLite storage, and no calls to Termiflow servers. Cached feeds remain readable offline; fetching new content still goes through your own provider APIs.

Bring Your Own

Use your own Anthropic, OpenAI, or local Ollama keys. Pair with Tavily, Google, Bing, or Brave for search.
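The comparison table below mentions a config.yaml definition for the self-hosted edition. As a rough illustration of what wiring in your own providers might look like, here is a hypothetical sketch — the key names and structure are assumptions, not the project's documented schema:

```yaml
# config.yaml — illustrative sketch only; field names are hypothetical.
llm:
  provider: anthropic          # anthropic | openai | ollama
  api_key: ${ANTHROPIC_API_KEY}
search:
  provider: tavily             # tavily | google | bing | brave
  api_key: ${TAVILY_API_KEY}
```

Referencing environment variables rather than pasting keys inline keeps secrets out of version control.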

Open Engine

termiflow-engine is a reusable Go library for search, scoring, filtering, and summarization. MIT licensed.
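The engine's four stages compose naturally as a pipeline. A minimal sketch of that shape in Python — the real termiflow-engine is a Go library, and every name below is hypothetical, with stub functions standing in for the actual search and LLM backends:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Source:
    """One crawled document moving through the pipeline."""
    url: str
    text: str
    score: float = 0.0


def run_pipeline(
    query: str,
    search: Callable[[str], List[Source]],
    score: Callable[[Source], float],
    min_score: float,
    summarize: Callable[[List[Source]], str],
) -> str:
    """Search -> score -> filter -> summarize, mirroring the engine's stages."""
    sources = search(query)
    for s in sources:
        s.score = score(s)                 # in the real engine, an LLM relevance call
    kept = [s for s in sources if s.score >= min_score]
    kept.sort(key=lambda s: s.score, reverse=True)
    return summarize(kept)


# Stub stages standing in for real backends.
def fake_search(q: str) -> List[Source]:
    return [Source("a", "kernel networking"), Source("b", "cat pictures")]

def fake_score(s: Source) -> float:
    return 0.9 if "kernel" in s.text else 0.1

def join_urls(sources: List[Source]) -> str:
    return ", ".join(s.url for s in sources)


digest = run_pipeline("linux kernel", fake_search, fake_score, 0.5, join_urls)
# digest == "a": only the relevant source survives the score cutoff
```

Keeping each stage behind a plain function signature is what makes the library reusable: callers can swap in any search provider or scorer.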

[ THE_PIPELINE ]

01. Search (Tavily)

Deep-crawls developer documentation, GitHub repos, and Stack Overflow threads.

$ tavily.search(q="linux kernel networking stacks", detail="high")
> [SCANNING] 452 sources identified...

02. Score (Claude)

An LLM evaluates the relevance of each source against your technical profile.

$ model.score(content, schema="tech_relevance_v1")
> Result: Source_A(0.98), Source_B(0.42)...
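The scoring step returns structured, schema-constrained output like `Source_A(0.98)`. A small sketch of what validating and ranking such responses could look like — the JSON field name and cutoff are assumptions, not the documented `tech_relevance_v1` schema:

```python
import json


def parse_relevance(raw: str) -> float:
    """Parse and range-check one structured scoring response.

    Assumes a single hypothetical "score" field in [0, 1].
    """
    score = float(json.loads(raw)["score"])
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"score out of range: {score}")
    return score


def rank_sources(scored: dict, cutoff: float = 0.5) -> list:
    """Return source names above the cutoff, best first."""
    return sorted(
        (name for name, s in scored.items() if s >= cutoff),
        key=scored.get,
        reverse=True,
    )


ranked = rank_sources({"Source_A": 0.98, "Source_B": 0.42})
# ranked == ["Source_A"]: Source_B falls below the cutoff
```

Validating LLM output before acting on it matters because model responses are not guaranteed to conform to the requested schema.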

03. Filter

Removes duplicates, outdated documentation, and low-confidence snippets.

$ kernel.drop_duplicates(threshold=0.95)
> Filtering complete. 12 valid entries retained.
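The duplicate-dropping above works on a similarity threshold. One possible sketch of that idea, using token-set Jaccard similarity as an illustrative metric (the engine's actual similarity measure is not documented here, and this ignores the outdated-doc and low-confidence filters):

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two snippets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def drop_duplicates(snippets: list, threshold: float = 0.95) -> list:
    """Keep each snippet only if it is not near-identical to one already kept."""
    kept = []
    for s in snippets:
        if all(jaccard(s, k) < threshold for k in kept):
            kept.append(s)
    return kept


unique = drop_duplicates(["a b c", "a b c", "x y"])
# unique == ["a b c", "x y"]: the exact repeat is dropped
```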

04. Summarize (Claude)

Distills multi-source data into actionable terminal commands or technical summaries.

$ llm.summarize(content, format="terminal_digest")
> Generating markdown buffer... [OK]
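The summarization itself is an LLM call, but the "markdown buffer" side of the step can be sketched: formatting summarized items into a digest suitable for a terminal pager. The function name and (title, summary) shape are hypothetical:

```python
def to_terminal_digest(items: list) -> str:
    """Format (title, summary) pairs as a small markdown buffer.

    The output is plain markdown, ready to pipe into a pager or render
    in the terminal.
    """
    lines = []
    for title, summary in items:
        lines.append(f"## {title}")
        lines.append(summary)
        lines.append("")          # blank line between entries
    return "\n".join(lines).rstrip() + "\n"


buf = to_terminal_digest([("eBPF primer", "Kernel-level tracing overview.")])
# buf == "## eBPF primer\nKernel-level tracing overview.\n"
```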

05. Deliver

Final output is streamed to stdout or stored in your local SQLite search cache.

$ termiflow feed --refresh
> [DONE] 8 items ready — streaming to stdout
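Delivery writes to a local SQLite cache and streams to stdout. A minimal sketch of that pattern with Python's standard sqlite3 module — the table name and columns are assumptions, not Termiflow's actual cache schema:

```python
import sqlite3


def cache_and_stream(db_path: str, items: list) -> int:
    """Upsert (url, title, summary) rows into a local cache, then stream
    titles to stdout. Returns the number of items delivered."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS feed_cache ("
        " url TEXT PRIMARY KEY, title TEXT, summary TEXT)"
    )
    # INSERT OR REPLACE makes refreshes idempotent per URL.
    conn.executemany(
        "INSERT OR REPLACE INTO feed_cache VALUES (?, ?, ?)", items
    )
    conn.commit()
    conn.close()
    for url, title, _summary in items:
        print(f"{title}  <{url}>")
    return len(items)


n = cache_and_stream(":memory:", [("u1", "Post A", "..."), ("u2", "Post B", "...")])
# n == 2, with two lines printed to stdout
```

Caching before streaming is what makes previously fetched feeds readable offline.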

[ EDITIONS_COMPARISON ]

PARAMETER   | MANAGED_CLI                  | SELF_HOSTED_CLI
SETUP       | Zero-config (auth hook)      | config.yaml definition
API_KEYS    | Handled by cloud proxy       | User-supplied keys
LLM_OPTIONS | Managed (latest Claude)      | Anthropic, OpenAI, Ollama (local)
SEARCH      | Optimized Tavily engine      | Google, Bing, Brave, Tavily
DATA        | Encrypted cache (7 days)     | Persistent local SQLite
COST        | $12 / month (all APIs incl.) | Free (pay your providers)
SOURCE      | Same CLI (managed config)    | MIT-licensed Go repo

[ PRIVACY_MANIFESTO ]

Local-First Storage

Your search history and context never touch our servers. Everything is stored in an encrypted local SQLite database: ~/.termiflow/vault.db

Transparent Proxy

We act purely as a proxy for requests to LLM providers. We do not train models on your queries or retain metadata for longer than 24 hours.

Opt-In Telemetry

Zero tracking by default. Metrics are collected only if you explicitly enable them in your termiflow.toml.

Build The Future Interface

Contribute to our core engine and community plugins.