Local Ollama Path

Use this path to get Omegon working with local inference first. It is the best fit for offline evaluation, cost-free experimentation, and environments where setting up an external provider would be a distraction.

What you need

  - Ollama installed, with its server running or startable
  - Disk space and memory for the model you plan to pull (a 32B model needs tens of gigabytes of disk and substantial RAM or VRAM)
  - Omegon installed and on your PATH
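
A quick way to confirm the prerequisites before starting, assuming Ollama's default local port of 11434; the omegon check is just a generic PATH lookup:

    # Confirm the Ollama CLI is installed and the server answers.
    # The default endpoint replies with a plain "Ollama is running" line.
    ollama --version
    curl http://localhost:11434

    # Confirm the Omegon binary is on your PATH.
    which omegon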

Steps

  1. Install or start Ollama
  2. Pull a model, for example ollama pull qwen3:32b
  3. Run omegon in your project directory
  4. If needed, use /model and select a local/Ollama-backed model
  5. Send this prompt: Read README.md and summarize the project layout. (The sketch after this list walks through the whole sequence.)
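
A minimal end-to-end session following the steps above. The model name matches the example in step 2; the project path is a placeholder, and the exact look of the Omegon prompt may differ from what the comments show:

    # Steps 1-2: start the server (skip if it already runs as a service) and pull a model.
    ollama serve &
    ollama pull qwen3:32b

    # Step 3: launch Omegon from the project root.
    cd /path/to/your/project
    omegon

    # Steps 4-5, inside the Omegon session:
    #   /model    (select the Ollama-backed qwen3:32b entry if it is not already active)
    #   Read README.md and summarize the project layout.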

Success signal

Omegon answers with a coherent summary of the project layout drawn from README.md, produced entirely by the local model: no provider API keys are involved, and the only network traffic is to the local Ollama server.

If it fails

  - Confirm the Ollama server is running and the model you chose was actually pulled; the commands below cover both checks.
  - Inside Omegon, re-run /model and make sure a local, Ollama-backed model is selected.
  - If responses are very slow or the model fails to load at all, try a smaller model; a 32B model needs substantial RAM or VRAM.
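
Two quick checks from a shell, assuming the default Ollama install:

    # List the models available locally; the one you pulled should appear here.
    ollama list

    # Start the server in the foreground to watch its logs while you retry the prompt.
    ollama serve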