Models
Model Agnosticism
RadarOS is model-agnostic. You can use OpenAI, Anthropic, Google Gemini, Vertex AI, Ollama, or any custom provider through a unified `ModelProvider` interface. Switch models with a single-line change; no refactoring required.
Unified Interface

All providers implement the same `generate()` and `stream()` methods. Your agent code stays identical.

Easy Switching

Swap `openai("gpt-4o")` for `anthropic("claude-sonnet-4-20250514")` without touching the rest of your app.

ModelProvider Interface

Every model in RadarOS implements the `ModelProvider` interface:
- Unique identifier for the provider (e.g., `"openai"`, `"anthropic"`).
- The specific model identifier (e.g., `"gpt-4o"`, `"claude-sonnet-4-20250514"`).
- `generate()`: non-streaming completion. Returns the full response with message, usage, and finish reason.
- `stream()`: streaming completion. Yields text deltas, tool call chunks, and finish events.
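The `ModelProvider` contract described above can be sketched as follows. The exact field and type names here are assumptions drawn from the descriptions; consult the RadarOS type definitions for the authoritative interface.

```typescript
// Sketch of the ModelProvider contract. Names like GenerateResult and
// finishReason are illustrative assumptions, not the exact RadarOS types.
interface GenerateResult {
  message: string;                                       // full assistant reply
  usage: { promptTokens: number; completionTokens: number };
  finishReason: "stop" | "length" | "tool_call";
}

interface ModelProvider {
  provider: string;                                      // e.g. "openai", "anthropic"
  modelId: string;                                       // e.g. "gpt-4o"
  generate(prompt: string): Promise<GenerateResult>;     // non-streaming completion
  stream(prompt: string): AsyncIterable<string>;         // yields text deltas
}

// Minimal stub that satisfies the interface, useful for local testing.
const echoProvider: ModelProvider = {
  provider: "echo",
  modelId: "echo-1",
  async generate(prompt) {
    return {
      message: `echo: ${prompt}`,
      usage: { promptTokens: prompt.length, completionTokens: prompt.length },
      finishReason: "stop",
    };
  },
  async *stream(prompt) {
    for (const word of prompt.split(" ")) yield word;    // one delta per word
  },
};
```

Because agents only depend on this interface, a stub like `echoProvider` can stand in for a real model in unit tests.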
Factory Functions
Use factory functions to create model instances. Each returns a `ModelProvider` ready for agents, teams, and workflows.
| Factory | Provider | Returns | Config |
|---|---|---|---|
| `openai(modelId, config?)` | OpenAI | `ModelProvider` | `{ apiKey?, baseURL? }` |
| `anthropic(modelId, config?)` | Anthropic | `ModelProvider` | `{ apiKey? }` |
| `google(modelId, config?)` | Google Gemini | `ModelProvider` | `{ apiKey? }` |
| `vertex(modelId, config?)` | Google Vertex AI | `ModelProvider` | `{ project?, location?, credentials? }` |
| `ollama(modelId, config?)` | Ollama (local) | `ModelProvider` | `{ host? }` (default: `http://localhost:11434`) |
| `openaiRealtime(modelId?, config?)` | OpenAI Realtime | `RealtimeProvider` | `{ apiKey?, baseURL? }` |
| `googleLive(modelId?, config?)` | Gemini Live | `RealtimeProvider` | `{ apiKey? }` |
Switching Models in One Line
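Because every factory returns the same `ModelProvider` shape, swapping providers is a one-line change. The sketch below uses local stand-in factories to make the pattern runnable; in RadarOS you would import the real `openai` and `anthropic` factories instead.

```typescript
// Stand-in types and factories for illustration only; RadarOS's real
// openai()/anthropic() factories return the same ModelProvider shape.
interface ModelProvider {
  provider: string;
  modelId: string;
  generate(prompt: string): Promise<string>;
}

const openai = (modelId: string): ModelProvider => ({
  provider: "openai",
  modelId,
  async generate(p) { return `[openai:${modelId}] ${p}`; },
});

const anthropic = (modelId: string): ModelProvider => ({
  provider: "anthropic",
  modelId,
  async generate(p) { return `[anthropic:${modelId}] ${p}`; },
});

// The only line that changes when you switch providers:
const model: ModelProvider = openai("gpt-4o");
// const model: ModelProvider = anthropic("claude-sonnet-4-20250514");

// Everything downstream is provider-agnostic.
async function runAgent(model: ModelProvider, task: string): Promise<string> {
  return model.generate(task);
}
```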
Realtime Providers (Voice)
For voice agents, use the realtime helpers that return a `RealtimeProvider`:
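A minimal sketch of calling a realtime factory, using a local stand-in since the full `RealtimeProvider` surface is not shown here. Per the table above, the model ID is optional for realtime factories; the default used below is a hypothetical placeholder, not a documented RadarOS default.

```typescript
// Stand-in for RadarOS's openaiRealtime() factory, for illustration only.
interface RealtimeProvider {
  provider: string;
  modelId: string;
}

function openaiRealtime(
  modelId = "default-realtime-model",              // hypothetical default
  config?: { apiKey?: string; baseURL?: string },
): RealtimeProvider {
  return { provider: "openai-realtime", modelId };
}

// modelId and config are both optional:
const voice = openaiRealtime();
```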
ModelConfig Options
Pass these options to `generate()` or `stream()` (or set them on agents):
- Sampling temperature (0–2). Lower = more deterministic. Typical: 0.7.
- Maximum tokens in the completion. Provider-specific limits apply.
- Nucleus sampling (top-p). An alternative to temperature for some providers.
- Stop sequences. Generation stops when any of these strings is produced.
- Output format: `"text"` (default), `"json"` (JSON object), or `{ type: "json_schema", schema, name? }` for structured output.
- Per-request API key override. Use when you need to override the key set at construction (e.g., multi-tenant).
Example: ModelConfig Usage
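The options above can be sketched as a config object like the following. The field names (`temperature`, `maxTokens`, `topP`, `stop`, `responseFormat`, `apiKey`) are assumptions inferred from the descriptions; check the RadarOS types for the exact names.

```typescript
// Illustrative ModelConfig shape; field names are assumptions, not the
// authoritative RadarOS definition.
interface ModelConfig {
  temperature?: number;       // 0–2, lower = more deterministic
  maxTokens?: number;         // completion token cap (provider limits apply)
  topP?: number;              // nucleus sampling
  stop?: string[];            // stop sequences
  responseFormat?:
    | "text"
    | "json"
    | { type: "json_schema"; schema: object; name?: string };
  apiKey?: string;            // per-request override (e.g. multi-tenant)
}

const config: ModelConfig = {
  temperature: 0.2,           // near-deterministic output
  maxTokens: 512,
  stop: ["\n\n"],
  responseFormat: "json",     // ask the provider for a JSON object
};

// A call would then pass the prompt plus this config, e.g.:
// await model.generate(messages, config);
```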
Next Steps
OpenAI
GPT-4o, GPT-4o-mini, GPT-4-turbo, o1-preview.
Anthropic
Claude Sonnet, Claude Haiku.
Google Gemini
Gemini 2.5 Flash, Gemini 2.5 Pro. Multi-modal support.
Vertex AI
Enterprise Gemini via Google Cloud. IAM auth, VPC, compliance.
Ollama
Local models: Llama, CodeLlama, Mistral.
Custom Provider
Implement your own ModelProvider.