Omegon

Terminal-native AI agent harness — for operators who build. Single binary. Ten providers. Zero runtime dependencies.

What is Omegon?

Omegon is a ~19MB Rust binary that provides a complete AI-assisted development environment. It connects to 10 inference providers through native Rust clients (no subprocess shims, no Node.js at runtime), manages persistent project memory across sessions, decomposes work into parallel git worktree children, and tracks design decisions in a knowledge graph — all from your terminal.
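
The worktree decomposition above can be pictured concretely. As a rough illustration only (not Omegon's actual code), a harness could give each child task an isolated working copy by building a `git worktree add` invocation; the `worktree_cmd` helper and its naming scheme here are hypothetical:

```rust
use std::process::Command;

/// Sketch only: build (but do not run) the `git worktree add` command a
/// harness could use to give each child task an isolated working copy on
/// its own branch. Helper name and path layout are hypothetical.
fn worktree_cmd(repo_dir: &str, child_name: &str) -> Command {
    let mut cmd = Command::new("git");
    cmd.current_dir(repo_dir)
        // New worktree beside the repo, on a fresh branch named after the child.
        .args(["worktree", "add", &format!("../{child_name}"), "-b", child_name]);
    cmd
}

fn main() {
    let cmd = worktree_cmd(".", "child-task");
    let args: Vec<_> = cmd.get_args().map(|a| a.to_string_lossy()).collect();
    println!("git {}", args.join(" "));
}
```

Because each child lives in its own worktree and branch, children can edit files in parallel without clobbering one another, and their results merge back through ordinary git operations.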

It is not a wrapper around curl. It is not an IDE plugin. It is not a chatbot with a file picker. It is a systems engineering harness that happens to use LLMs as one of its subsystems.

Three-Axis Routing Model

Omegon organizes inference along three independent axes, giving operators fine-grained control over capability, cost, and context:

| Axis | Values | Controls |
| --- | --- | --- |
| Capability Tier | local → retribution → victory → gloriana | Which model family to use |
| Thinking Level | off → minimal → low → medium → high | Extended reasoning budget |
| Context Class | Squad (128k) → Maniple (272k) → Clan (400k) → Legion (1M) | Context window capacity |

The agent adjusts these axes autonomously based on task complexity, or operators can override with /model, /think, and /context. See Three-Axis Model for details.
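
Because the axes are independent, a routing decision is just one value per axis. This Rust sketch models them as enums; the type and variant names are inferred from the table above and are not Omegon's actual internals:

```rust
// Illustrative model of the three routing axes; names inferred from the
// documented table, not taken from Omegon's source.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq)]
enum Tier { Local, Retribution, Victory, Gloriana }

// Declaration order gives Off < Minimal < ... < High via PartialOrd.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
enum ThinkingLevel { Off, Minimal, Low, Medium, High }

#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq)]
enum ContextClass { Squad, Maniple, Clan, Legion }

impl ContextClass {
    /// Context window capacity in tokens, per the documented table.
    fn tokens(self) -> u64 {
        match self {
            ContextClass::Squad => 128_000,
            ContextClass::Maniple => 272_000,
            ContextClass::Clan => 400_000,
            ContextClass::Legion => 1_000_000,
        }
    }
}

/// One routing decision: a value chosen on each independent axis.
struct Route { tier: Tier, thinking: ThinkingLevel, context: ContextClass }

fn main() {
    // A route the agent might pick for a mid-complexity task; an operator
    // could override any single axis with /model, /think, or /context.
    let route = Route {
        tier: Tier::Victory,
        thinking: ThinkingLevel::Medium,
        context: ContextClass::Clan,
    };
    println!("{:?} / {:?} / {} tokens", route.tier, route.thinking, route.context.tokens());
}
```

Keeping the axes as separate values means overriding one (say, raising the thinking level) never forces a change on the others.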

Key Features

Get Working Fast

If your goal is a first successful run rather than a deep tour of Omegon internals, start with the provider happy paths.

See Installation for all methods, and Quick Start if you want a broader first-session walkthrough.

At a Glance

| | |
| --- | --- |
| Runtime | Single ~19 MB Rust binary, 6 crates |
| Providers | 10 inference + 3 search, native Rust clients |
| Agent tools | 53 structured tools (file ops, memory, design, search, codebase retrieval, and inference control) |
| License | BSL 1.1 (converts to MIT on 2031-03-19) |