Agents

What is an Agent?

An Agent in RadarOS is a conversational AI unit that processes user input, optionally calls tools, and produces text or structured output. Agents are model-agnostic—you plug in any supported LLM provider (OpenAI, Anthropic, Google Gemini, Ollama)—and can be extended with tools, memory, sessions, and guardrails.

Synchronous

Use run(input, opts?) for a single request-response cycle.

Streaming

Use stream(input, opts?) for token-by-token or chunk-by-chunk output.

AgentConfig

Configure an agent by passing an AgentConfig object to the Agent constructor. All properties except name and model are optional.
name
string
required
Display name for the agent. Used in logs and events.
model
ModelProvider
required
The LLM provider instance. Use factory functions like openai("gpt-4o"), anthropic("claude-sonnet-4-20250514"), or google("gemini-2.0-flash").
instructions
string | (ctx) => string
System instructions for the agent. Can be a static string or a function that receives RunContext and returns a string (useful for dynamic prompts).
tools
ToolDef[]
Array of tool definitions created with defineTool(). Enables function calling.
memory
Memory
Memory instance for short-term and optional long-term conversation context. See Memory.
storage
StorageDriver
Storage driver for sessions. Defaults to InMemoryStorage if not provided.
sessionId
string
Default session ID for this agent. Can be overridden per-run via RunOpts.
userId
string
Default user ID for this agent. Can be overridden per-run via RunOpts.
addHistoryToMessages
boolean
default: true
Whether to include session history in the messages sent to the LLM. Set to false to disable.
numHistoryRuns
number
Limits how many prior turns to include; each turn counts as two messages (user + assistant). If unset, the last 20 messages are included.
maxToolRoundtrips
number
default: 10
Maximum number of tool-call rounds before stopping. Prevents infinite loops.
temperature
number
Sampling temperature passed to the model (0–2 typically). Lower = more deterministic.
structuredOutput
ZodSchema
Zod schema to enforce structured JSON output. See Structured Output.
hooks
AgentHooks
Lifecycle hooks: beforeRun, afterRun, onToolCall, onError. See Hooks & Guardrails.
guardrails
{ input, output }
Input and output guardrails. Each is an array of validators. See Hooks & Guardrails.
eventBus
EventBus
Custom event bus for emitting agent events. Defaults to a new EventBus if not provided.
reasoning
ReasoningConfig
Enable extended thinking / chain-of-thought reasoning. See Reasoning.
userMemory
UserMemory
User memory instance for cross-session personalization. See User Memory.
logLevel
string
default: "silent"
Logging level: "debug" | "info" | "warn" | "error" | "silent".
retry
Partial<RetryConfig>
Retry configuration for transient LLM API failures (429, 5xx, network errors). Default: 3 retries with exponential backoff.
maxContextTokens
number
Maximum context window tokens. When set, conversation history is automatically trimmed (oldest messages first) to fit within this limit.
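Several of these options can be combined in one constructor call. A sketch of a fuller configuration follows; the `defineTool` field names, the `RetryConfig` key `maxRetries`, and the Zod usage are illustrative assumptions, so check the Tools, Structured Output, and Retry pages for exact signatures:

```typescript
import { z } from "zod";
import { Agent, openai, defineTool } from "@radaros/core";

// Illustrative tool definition; the exact defineTool shape may differ.
const getWeather = defineTool({
  name: "get_weather",
  description: "Get the current weather for a city",
  parameters: z.object({ city: z.string() }),
  execute: async ({ city }) => ({ city, tempC: 21 }),
});

const agent = new Agent({
  name: "WeatherBot",
  model: openai("gpt-4o"),
  // Dynamic instructions: rebuilt from RunContext on every run.
  instructions: () => `You are a weather assistant. Today is ${new Date().toDateString()}.`,
  tools: [getWeather],
  temperature: 0.2,
  maxToolRoundtrips: 5,
  numHistoryRuns: 10,
  logLevel: "info",
  retry: { maxRetries: 5 },   // assumed RetryConfig field name
  maxContextTokens: 8000,     // history trimmed oldest-first to fit
});
```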

Methods

run(input, opts?) → RunOutput

Processes user input and returns the full response.
const result = await agent.run("What is the capital of France?");
console.log(result.text);
console.log(result.usage.totalTokens);

stream(input, opts?) → AsyncGenerator<StreamChunk>

Streams the response as chunks. Use for real-time UIs or SSE.
for await (const chunk of agent.stream("Tell me a short story.")) {
  if (chunk.type === "text") process.stdout.write(chunk.text);
  if (chunk.type === "finish") console.log("\nDone.", chunk.usage);
}

RunOpts

Options passed to run() or stream():
sessionId
string
Session ID for multi-turn conversations. Auto-generated if omitted.
userId
string
User identifier.
metadata
Record<string, unknown>
Arbitrary metadata available in RunContext.
apiKey
string
Per-request API key override for the model provider.
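For example, pinning a run to an explicit session and user (the metadata keys here are illustrative):

```typescript
const result = await agent.run("Remind me what we discussed earlier.", {
  sessionId: "session-42",
  userId: "user-7",
  metadata: { channel: "web" }, // surfaced in RunContext for hooks and dynamic instructions
});
```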

RunOutput

The object returned by run():
text
string
The assistant’s text response.
toolCalls
ToolCallResult[]
Results from any tool calls made during the run.
usage
TokenUsage
{ promptTokens, completionTokens, totalTokens, reasoningTokens? }.
structured
unknown
Parsed structured output when a structuredOutput schema is set.
thinking
string
Model’s internal reasoning content (when reasoning is enabled).
durationMs
number
Wall-clock duration of the run in milliseconds.
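A sketch of reading RunOutput fields when a structuredOutput schema is configured; the schema shape is illustrative, and structured is typed unknown so it is narrowed here with a cast:

```typescript
import { z } from "zod";
import { Agent, openai } from "@radaros/core";

const agent = new Agent({
  name: "Extractor",
  model: openai("gpt-4o"),
  structuredOutput: z.object({ city: z.string(), country: z.string() }),
});

const result = await agent.run("Paris is the capital of France.");

// structured is `unknown`; narrow it to the schema's shape.
const parsed = result.structured as { city: string; country: string };
console.log(parsed.city, parsed.country);
console.log(`${result.usage.totalTokens} tokens in ${result.durationMs} ms`);
```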

Basic Example

import { Agent, openai } from "@radaros/core";

const agent = new Agent({
  name: "Assistant",
  model: openai("gpt-4o"),
  instructions: "You are a helpful assistant. Be concise.",
});

const result = await agent.run("What is the capital of France?");
console.log(result.text);