Local Ollama Path
Use this path to get Omegon working with local inference first. It is the best path for offline evaluation, cost-free experimentation, and environments where setting up an external provider would be a distraction.
What you need
- Ollama installed and running
- At least one model pulled locally
- Omegon installed
Steps
- Install or start Ollama
- Pull a model, for example:
  `ollama pull qwen3:32b`
- Run `omegon` in your project directory
- If needed, use `/model` and select a local/Ollama-backed model
- Send this prompt:
  `Read README.md and summarize the project layout.`
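Before launching Omegon, the first two steps can be verified programmatically. This is a minimal Python sketch against Ollama's HTTP API: the daemon listens on `localhost:11434` by default, and `GET /api/tags` returns the locally installed models. The function names here are illustrative, not part of Omegon or Ollama.

```python
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address


def installed_models(tags: dict) -> list[str]:
    """Extract model names from an /api/tags response payload."""
    return [m["name"] for m in tags.get("models", [])]


def check_local_path(base_url: str = OLLAMA_URL, timeout: float = 2.0) -> list[str]:
    """Return the locally pulled models; raises if the daemon is unreachable."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
        return installed_models(json.load(resp))


if __name__ == "__main__":
    try:
        models = check_local_path()
    except (urllib.error.URLError, OSError) as exc:
        print(f"Ollama daemon not reachable: {exc}")
    else:
        print("Installed models:", models or "none -- run `ollama pull <model>` first")
```

If this prints at least one model name, the prerequisites for the steps above are in place.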
Success signal
- Omegon responds without asking for an external provider key
- The model/provider metadata indicates a local path
- The agent successfully reads local project files
If it fails
- Confirm `ollama list` shows at least one installed model
- Confirm the Ollama daemon is reachable from the same machine
- If Omegon still does not detect it, file a "Setup problem" issue and note that you followed the local Ollama path
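When filing that issue, it helps to attach the two facts the checklist above asks for. This is a hedged Python sketch that collects them; the report wording and helper names are illustrative, while `ollama list` and the daemon's default address are standard Ollama.

```python
import shutil
import subprocess
import urllib.error
import urllib.request


def daemon_reachable(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """True if the Ollama daemon answers on its default address."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False


def ollama_list_output() -> str:
    """Capture `ollama list` output, or a note if the CLI is missing."""
    if shutil.which("ollama") is None:
        return "ollama CLI not found on PATH"
    return subprocess.run(["ollama", "list"], capture_output=True, text=True).stdout


def format_report(list_output: str, reachable: bool) -> str:
    """Assemble a short summary to paste into a Setup problem issue."""
    return (
        "Followed the local Ollama path.\n"
        f"Daemon reachable: {reachable}\n"
        f"ollama list output:\n{list_output}"
    )


if __name__ == "__main__":
    print(format_report(ollama_list_output(), daemon_reachable()))
```

Paste the printed summary into the issue so the failure can be reproduced without back-and-forth.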