Ollama (Local Models)
Use Ollama to run open-source models locally with RadarOS. No API key is required, which makes it ideal for development, privacy-sensitive workloads, and cost-free experimentation.

Setup
Install Ollama
Download and install Ollama from ollama.ai. Start the Ollama service (it runs on http://localhost:11434 by default).

Factory
The Ollama model name (e.g., llama3.1, codellama, mistral).

Optional configuration. See Config below.
Config
Ollama server URL. Defaults to http://localhost:11434. Use this for remote Ollama instances or custom ports.

No API key needed
Ollama runs locally and does not require an API key. Just ensure the Ollama service is running.
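Whether the service is up can be checked programmatically. A minimal sketch in Python, assuming only Ollama's standard GET /api/tags endpoint as a liveness probe (the RadarOS-side wiring is omitted, and the helper name is ours):

```python
import urllib.error
import urllib.request

def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at base_url.

    GET /api/tags lists installed models and is a cheap liveness probe.
    Any connection error is treated as "service not running".
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, start the service with `ollama serve` and retry.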
Example
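The RadarOS-specific example is not reproduced here. As a stand-in, the sketch below talks directly to Ollama's /api/generate endpoint, which is what any Ollama integration ultimately calls; the helper names are ours, but the request body fields (`model`, `prompt`, `stream`) are Ollama's documented API:

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "llama3.1") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks Ollama to return one complete JSON object
    # instead of a stream of partial responses.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama3.1",
             base_url: str = "http://localhost:11434") -> str:
    """Send a one-shot, non-streaming completion request to Ollama."""
    data = json.dumps(build_generate_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("Why is the sky blue?")` returns the model's full reply as a string, provided the Ollama service is running and the model is installed.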
Popular Models
| Model | Use Case |
|---|---|
| llama3.1 | General purpose, strong all-around performance |
| codellama | Code generation and understanding |
| mistral | Fast, efficient, good for chat |
| mixtral | Mixture of experts, higher capability |
| phi3 | Small, fast, good for edge devices |
Discover more models
Run ollama list to see installed models. Browse ollama.com/library for the full catalog.
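Installed models can also be listed over the HTTP API rather than the CLI. A short sketch that extracts model names from the GET /api/tags response (the response shape `{"models": [{"name": ...}, ...]}` is Ollama's documented format; the function names are ours):

```python
import json
import urllib.request

def extract_model_names(tags_response: dict) -> list[str]:
    """Pull model names out of an /api/tags payload."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_installed_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Return the names of locally installed models via GET /api/tags."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return extract_model_names(json.loads(resp.read()))
```

This is equivalent to `ollama list`, and is useful when RadarOS needs to validate a configured model name before sending requests.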