Agent

import { Agent } from "@radaros/core";

Constructor

new Agent(config: AgentConfig)
name
string
required
Agent name identifier.
model
ModelProvider
required
LLM provider instance, e.g. openai("gpt-4o").
instructions
string | ((ctx: RunContext) => string)
System prompt. Can be static or dynamic based on context.
tools
ToolDef[]
Tools the agent can call during execution.
memory
Memory
Memory instance for short/long-term message storage.
storage
StorageDriver
Persistent storage for sessions. Defaults to InMemoryStorage.
sessionId
string
Default session ID. Overridden by RunOpts.sessionId.
userId
string
Default user ID.
addHistoryToMessages
boolean
default: true
Include session history in LLM messages.
numHistoryRuns
number
Number of past conversation turns to include.
maxToolRoundtrips
number
default: 10
Maximum tool call iterations per run.
temperature
number
LLM sampling temperature.
structuredOutput
ZodSchema
Zod schema to enforce structured JSON responses.
hooks
AgentHooks
Lifecycle hooks: beforeRun, afterRun, onToolCall, onError.
guardrails
{ input?: InputGuardrail[]; output?: OutputGuardrail[] }
Input/output validation guardrails.
eventBus
EventBus
Custom event bus for event-driven workflows.
reasoning
ReasoningConfig
Enable extended thinking / chain-of-thought reasoning. See Reasoning.
userMemory
UserMemory
User memory instance for cross-session personalization. See User Memory.
logLevel
LogLevel
default: "silent"
Log verbosity: "debug", "info", "warn", "error", "silent".
retry
Partial<RetryConfig>
Retry configuration for transient LLM API failures (429, 5xx, network errors). See RetryConfig.
maxContextTokens
number
Maximum context window tokens. History is auto-trimmed (oldest first) to fit within this limit.
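The oldest-first trimming behind maxContextTokens can be sketched as follows. This is an illustrative simplification, not the library's internals: the message shape and the 4-characters-per-token estimate are assumptions.

```typescript
// Sketch of oldest-first history trimming to fit a token budget.
interface Msg { role: string; content: string }

// Rough token estimate; an illustrative assumption, not the real tokenizer.
function estimateTokens(m: Msg): number {
  return Math.ceil(m.content.length / 4);
}

// Drops the oldest messages until the remaining history fits the budget.
function trimHistory(history: Msg[], maxContextTokens: number): Msg[] {
  const kept: Msg[] = [];
  let used = 0;
  // Walk from newest to oldest, keeping messages while the budget allows.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i]);
    if (used + cost > maxContextTokens) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```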

Methods

run(input, opts?)

async run(input: MessageContent, opts?: RunOpts): Promise<RunOutput>
Execute the agent and return the complete response.

stream(input, opts?)

async *stream(input: MessageContent, opts?: RunOpts): AsyncGenerator<StreamChunk>
Stream the agent’s response token by token.

Properties

name
string
Agent name
eventBus
EventBus
Event emitter
tools
ToolDef[]
Configured tools
modelId
string
Model identifier
providerId
string
Provider identifier
hasStructuredOutput
boolean
Whether structured output is enabled

RunOpts

interface RunOpts {
  sessionId?: string;
  userId?: string;
  metadata?: Record<string, unknown>;
  apiKey?: string;
}

RunOutput

interface RunOutput {
  text: string;
  toolCalls: ToolCallResult[];
  usage: TokenUsage;
  structured?: unknown;
  thinking?: string;
  durationMs?: number;
}

TokenUsage

interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
  reasoningTokens?: number;
}

StreamChunk

type StreamChunk =
  | { type: "text"; text: string }
  | { type: "thinking"; text: string }
  | { type: "tool_call_start"; toolCall: { id: string; name: string } }
  | { type: "tool_call_delta"; toolCallId: string; argumentsDelta: string }
  | { type: "tool_call_end"; toolCallId: string }
  | { type: "finish"; finishReason: string; usage?: TokenUsage };
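Consuming the stream amounts to folding each chunk into an accumulator. The sketch below shows one way to do that; the accumulator shape is an illustrative assumption (in practice the chunks come from agent.stream()):

```typescript
// The chunk union from the reference above (usage fields trimmed for brevity).
type StreamChunk =
  | { type: "text"; text: string }
  | { type: "thinking"; text: string }
  | { type: "tool_call_start"; toolCall: { id: string; name: string } }
  | { type: "tool_call_delta"; toolCallId: string; argumentsDelta: string }
  | { type: "tool_call_end"; toolCallId: string }
  | { type: "finish"; finishReason: string };

interface Accumulated { text: string; toolArgs: Record<string, string> }

// Folds one chunk into the accumulator; call once per chunk while iterating.
function fold(acc: Accumulated, chunk: StreamChunk): Accumulated {
  switch (chunk.type) {
    case "text":
      return { ...acc, text: acc.text + chunk.text };
    case "tool_call_start":
      return { ...acc, toolArgs: { ...acc.toolArgs, [chunk.toolCall.id]: "" } };
    case "tool_call_delta":
      return {
        ...acc,
        toolArgs: {
          ...acc.toolArgs,
          [chunk.toolCallId]: (acc.toolArgs[chunk.toolCallId] ?? "") + chunk.argumentsDelta,
        },
      };
    default:
      return acc; // thinking, tool_call_end, and finish need no accumulation here
  }
}
```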

Model Providers

Factory Functions

import { openai, anthropic, google, vertex, ollama } from "@radaros/core";

openai(modelId: string, config?: { apiKey?: string; baseURL?: string }): ModelProvider
anthropic(modelId: string, config?: { apiKey?: string }): ModelProvider
google(modelId: string, config?: { apiKey?: string }): ModelProvider
vertex(modelId: string, config?: { project?: string; location?: string; credentials?: string }): ModelProvider
ollama(modelId: string, config?: { baseURL?: string }): ModelProvider

ModelProvider Interface

interface ModelProvider {
  readonly providerId: string;
  readonly modelId: string;
  generate(messages: ChatMessage[], options?: ModelConfig): Promise<ModelResponse>;
  stream(messages: ChatMessage[], options?: ModelConfig): AsyncGenerator<StreamChunk>;
}

ModelConfig

interface ModelConfig {
  temperature?: number;
  maxTokens?: number;
  topP?: number;
  stop?: string[];
  responseFormat?: "text" | "json" | { type: "json_schema"; schema: object; name?: string };
  apiKey?: string;
  reasoning?: ReasoningConfig;
}

ReasoningConfig

interface ReasoningConfig {
  enabled: boolean;
  effort?: "low" | "medium" | "high";  // OpenAI only
  budgetTokens?: number;               // Anthropic, Google, Vertex AI
}

Retry

RetryConfig

interface RetryConfig {
  maxRetries: number;       // default: 3
  initialDelayMs: number;   // default: 500
  maxDelayMs: number;       // default: 10000
  retryableErrors?: (error: unknown) => boolean;
}
Retryable errors are automatically detected: HTTP 429 (rate limit), 5xx (server errors), and network errors (ECONNRESET, ETIMEDOUT).

withRetry

import { withRetry } from "@radaros/core";

const result = await withRetry(
  () => someApiCall(),
  { maxRetries: 5, initialDelayMs: 1000 }
);
Standalone retry utility with exponential backoff and jitter. It is used internally by the LLM loop and is also exported for custom use.
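The backoff schedule can be sketched like this. It is a simplified illustration of exponential backoff with optional jitter under the documented defaults, not the library's exact implementation:

```typescript
interface RetryConfig { maxRetries: number; initialDelayMs: number; maxDelayMs: number }

// Delay before retry attempt n (0-based): initialDelayMs * 2^n, capped at maxDelayMs.
// jitter in [0, 1] randomizes the delay downward to avoid thundering herds.
function delayForAttempt(attempt: number, cfg: RetryConfig, jitter = 0): number {
  const base = Math.min(cfg.initialDelayMs * 2 ** attempt, cfg.maxDelayMs);
  return Math.round(base * (1 - jitter * Math.random()));
}
```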

Tools

defineTool

import { defineTool } from "@radaros/core";

defineTool<T extends z.ZodObject>(config: {
  name: string;
  description: string;
  parameters: T;
  execute: (args: z.infer<T>, ctx: RunContext) => Promise<string | ToolResult>;
  cache?: ToolCacheConfig;
}): ToolDef

ToolResult

interface ToolResult {
  content: string;
  artifacts?: Artifact[];
}

interface Artifact {
  type: string;
  data: unknown;
  mimeType?: string;
}

ToolCacheConfig

interface ToolCacheConfig {
  ttl: number; // milliseconds
}
See Tool Caching for details.
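The ttl semantics can be illustrated with a minimal in-memory cache keyed by serialized tool arguments. This is a sketch with an injectable clock for testability; the library's actual cache may differ:

```typescript
// Minimal TTL cache: entries expire ttlMs after being set.
class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (this.now() > e.expiresAt) {
      this.entries.delete(key); // lazily evict expired entries
      return undefined;
    }
    return e.value;
  }

  set(key: string, value: V): void {
    this.entries.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}
```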

Team

import { Team, TeamMode } from "@radaros/core";

Constructor

new Team(config: TeamConfig)
name
string
required
Team name.
mode
TeamMode
required
Execution mode: coordinate, route, broadcast, collaborate.
model
ModelProvider
required
Orchestrator model.
members
Agent[]
required
Member agents.
instructions
string
Team-level instructions.
maxRounds
number
Max rounds for collaborate mode.
storage
StorageDriver
Persistent storage.

Methods

async run(input: string, opts?: RunOpts): Promise<RunOutput>
async *stream(input: string, opts?: RunOpts): AsyncGenerator<StreamChunk>

TeamMode

enum TeamMode {
  Coordinate = "coordinate",
  Route = "route",
  Broadcast = "broadcast",
  Collaborate = "collaborate"
}

Workflow

import { Workflow } from "@radaros/core";

Constructor

new Workflow<TState>(config: WorkflowConfig<TState>)
name
string
required
Workflow name.
initialState
TState
required
Initial state object.
steps
StepDef<TState>[]
required
Ordered list of steps.
retryPolicy
{ maxRetries: number; backoffMs: number }
Retry configuration.

Methods

async run(opts?: { sessionId?: string; userId?: string }): Promise<WorkflowResult<TState>>

Step Types

// Agent step
{ name: string; agent: Agent; inputFrom?: (state: TState) => string }

// Function step
{ name: string; run: (state: TState, ctx: RunContext) => Promise<Partial<TState>> }

// Condition step
{ name: string; condition: (state: TState) => boolean; steps: StepDef<TState>[] }

// Parallel step
{ name: string; parallel: StepDef<TState>[] }

Memory

import { Memory } from "@radaros/core";

Constructor

new Memory(config?: MemoryConfig)
storage
StorageDriver
Storage backend. Defaults to InMemoryStorage.
maxShortTermMessages
number
default: 50
Max messages before overflow to long-term.
enableLongTerm
boolean
default: false
Enable long-term summarization.

Methods

addMessages(sessionId, messages)
Promise<void>
Add messages to short-term memory
getMessages(sessionId)
Promise<ChatMessage[]>
Get short-term messages
getSummaries(sessionId)
Promise<string[]>
Get long-term summaries
getContextString(sessionId)
Promise<string>
Formatted context string
clear(sessionId)
Promise<void>
Clear all memory for a session

UserMemory

import { UserMemory } from "@radaros/core";

Constructor

new UserMemory(config?: UserMemoryConfig)
storage
StorageDriver
Storage backend for facts. Defaults to InMemoryStorage.
model
ModelProvider
LLM for auto-extraction. Falls back to the agent’s model.
maxFacts
number
default: 100
Maximum facts per user.
enabled
boolean
default: true
Enable/disable auto-extraction.

Methods

getFacts(userId)
Promise<UserFact[]>
Get all facts for a user
addFacts(userId, facts, source?)
Promise<void>
Add facts manually
removeFact(userId, factId)
Promise<void>
Remove a fact by ID
clear(userId)
Promise<void>
Clear all facts for a user
getContextString(userId)
Promise<string>
Formatted string for the system prompt
extractAndStore(userId, messages, model?)
Promise<void>
Extract and store facts from messages
asTool(config?)
ToolDef
Create a tool for the agent to recall user facts on demand

UserFact

interface UserFact {
  id: string;
  fact: string;
  createdAt: Date;
  source: "auto" | "manual";
}
See User Memory for usage guide.

Storage Drivers

import {
  InMemoryStorage,
  SqliteStorage,
  PostgresStorage,
  MongoDBStorage,
} from "@radaros/core";

StorageDriver Interface

interface StorageDriver {
  get<T>(namespace: string, key: string): Promise<T | null>;
  set<T>(namespace: string, key: string, value: T): Promise<void>;
  delete(namespace: string, key: string): Promise<void>;
  list<T>(namespace: string, prefix?: string): Promise<Array<{ key: string; value: T }>>;
  close(): Promise<void>;
}
InMemoryStorage
new InMemoryStorage()
Requires: none
SqliteStorage
new SqliteStorage(filepath)
Requires: better-sqlite3
PostgresStorage
new PostgresStorage(connectionString)
Requires: pg
MongoDBStorage
new MongoDBStorage(uri, dbName?, collectionName?)
Requires: mongodb
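The StorageDriver contract can be satisfied with a plain Map. The sketch below is roughly what InMemoryStorage might look like; it is an illustration of the interface, not the shipped implementation:

```typescript
// Map-backed driver satisfying the StorageDriver interface above.
class MapStorage {
  private data = new Map<string, unknown>();
  private key(ns: string, k: string) { return `${ns}:${k}`; }

  async get<T>(namespace: string, key: string): Promise<T | null> {
    const v = this.data.get(this.key(namespace, key));
    return v === undefined ? null : (v as T);
  }
  async set<T>(namespace: string, key: string, value: T): Promise<void> {
    this.data.set(this.key(namespace, key), value);
  }
  async delete(namespace: string, key: string): Promise<void> {
    this.data.delete(this.key(namespace, key));
  }
  async list<T>(namespace: string, prefix = ""): Promise<Array<{ key: string; value: T }>> {
    const out: Array<{ key: string; value: T }> = [];
    for (const [k, v] of this.data) {
      // Match keys in this namespace whose key part starts with the prefix.
      if (k.startsWith(`${namespace}:${prefix}`)) {
        out.push({ key: k.slice(namespace.length + 1), value: v as T });
      }
    }
    return out;
  }
  async close(): Promise<void> { /* nothing to release in-memory */ }
}
```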

Vector Stores

import {
  InMemoryVectorStore,
  PgVectorStore,
  QdrantVectorStore,
  MongoDBVectorStore,
} from "@radaros/core";

VectorStore Interface

interface VectorStore {
  upsert(doc: VectorDocument): Promise<void>;
  search(embedding: number[], options?: VectorSearchOptions): Promise<VectorSearchResult[]>;
  get(id: string): Promise<VectorDocument | null>;
  delete(id: string): Promise<void>;
  clear(): Promise<void>;
  close(): Promise<void>;
}

Types

interface VectorDocument {
  id: string;
  content: string;
  embedding?: number[];
  metadata?: Record<string, unknown>;
}

interface VectorSearchOptions {
  topK?: number;
  minScore?: number;
  filter?: Record<string, unknown>;
}

interface VectorSearchResult {
  document: VectorDocument;
  score: number;
}
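The search contract can be illustrated with a brute-force cosine-similarity scan, roughly what InMemoryVectorStore's search might do. This is a sketch; topK and minScore follow VectorSearchOptions above:

```typescript
interface Doc { id: string; content: string; embedding: number[] }

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Scores every document, filters by minScore, returns the topK best matches.
function search(docs: Doc[], query: number[], topK = 5, minScore = 0) {
  return docs
    .map((document) => ({ document, score: cosine(query, document.embedding) }))
    .filter((r) => r.score >= minScore)
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}
```

Real stores replace the linear scan with an index, but the ranking semantics are the same.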

Embeddings

import { OpenAIEmbedding, GoogleEmbedding } from "@radaros/core";

EmbeddingProvider Interface

interface EmbeddingProvider {
  embed(text: string): Promise<number[]>;
  embedBatch(texts: string[]): Promise<number[][]>;
}
OpenAIEmbedding
new OpenAIEmbedding({ apiKey?, model? })
Default model: text-embedding-3-small
GoogleEmbedding
new GoogleEmbedding({ apiKey?, model? })
Default model: text-embedding-004

KnowledgeBase

import { KnowledgeBase } from "@radaros/core";

Constructor

new KnowledgeBase(config: {
  name: string;
  vectorStore: VectorStore;
  collection?: string;
})

Methods

initialize()
Promise<void>
Initialize the vector store
add(doc)
Promise<void>
Add a single document
addDocuments(docs)
Promise<void>
Add multiple documents
search(query, options?)
Promise<VectorSearchResult[]>
Semantic search
get(id)
Promise<VectorDocument | null>
Get document by ID
delete(id)
Promise<void>
Delete document
clear()
Promise<void>
Clear all documents
close()
Promise<void>
Close connections
asTool(config?)
ToolDef
Create a tool for RAG

Message Types

type MessageRole = "system" | "user" | "assistant" | "tool";
type MessageContent = string | ContentPart[];

interface ChatMessage {
  role: MessageRole;
  content: MessageContent | null;
  toolCalls?: ToolCall[];
  toolCallId?: string;
  name?: string;
}

type ContentPart = TextPart | ImagePart | AudioPart | FilePart;

interface TextPart   { type: "text"; text: string }
interface ImagePart  { type: "image"; data: string; mimeType?: string }
interface AudioPart  { type: "audio"; data: string; mimeType?: string }
interface FilePart   { type: "file"; data: string; mimeType: string; filename?: string }

Helpers

getTextContent(content: MessageContent | null): string
isMultiModal(content: MessageContent | null): content is ContentPart[]
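Given the types above, these helpers can be sketched as follows. The implementations are illustrative, consistent with the documented signatures but not necessarily the library's exact code:

```typescript
// Content-part types from the reference above.
type ContentPart =
  | { type: "text"; text: string }
  | { type: "image"; data: string; mimeType?: string }
  | { type: "audio"; data: string; mimeType?: string }
  | { type: "file"; data: string; mimeType: string; filename?: string };
type MessageContent = string | ContentPart[];

// Concatenates the text parts of a message; non-text parts contribute nothing.
function getTextContent(content: MessageContent | null): string {
  if (content === null) return "";
  if (typeof content === "string") return content;
  return content
    .filter((p): p is Extract<ContentPart, { type: "text" }> => p.type === "text")
    .map((p) => p.text)
    .join("");
}

// Type guard narrowing MessageContent to the multi-part form.
function isMultiModal(content: MessageContent | null): content is ContentPart[] {
  return Array.isArray(content);
}
```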

EventBus

import { EventBus } from "@radaros/core";

const bus = new EventBus();
bus.on("run.start", (data) => { /* { runId, agentName, input } */ });
bus.on("run.complete", (data) => { /* { runId, output } */ });
bus.on("run.error", (data) => { /* { runId, error } */ });
bus.on("run.stream.chunk", (data) => { /* { runId, chunk } */ });
bus.on("tool.call", (data) => { /* { runId, toolName, args } */ });
bus.on("tool.result", (data) => { /* { runId, toolName, result } */ });

Logger

import { Logger } from "@radaros/core";

const logger = new Logger({
  level: "info",    // "debug" | "info" | "warn" | "error" | "silent"
  color: true,      // auto-detect TTY
  prefix: "myapp",
});

MCPToolProvider

Connect to an MCP server and use its tools as native ToolDef[].
import { MCPToolProvider } from "@radaros/core";

const mcp = new MCPToolProvider({
  name: "github",
  transport: "stdio",          // "stdio" | "http"
  command: "npx",              // stdio only
  args: ["-y", "@modelcontextprotocol/server-github"],
  env: { GITHUB_TOKEN: "..." },
  // url: "http://...",        // http only
  // headers: {},              // http only
});

await mcp.connect();
const tools = await mcp.getTools();  // ToolDef[]
await mcp.refresh();                 // re-discover tools
await mcp.close();                   // disconnect
name
string
Unique name for this MCP server. Used to namespace tool names.
transport
"stdio" | "http"
Transport protocol.
command
string
Command to spawn (stdio only).
args
string[]
Arguments for the command (stdio only).
env
Record<string, string>
Environment variables (stdio only).
url
string
MCP server URL (http only).
headers
Record<string, string>
Custom HTTP headers (http only).
Requires peer dependency: @modelcontextprotocol/sdk

A2ARemoteAgent

Connect to a remote A2A-compliant agent.
import { A2ARemoteAgent } from "@radaros/core";

const remote = new A2ARemoteAgent({
  url: "http://remote-service:3001",
  headers: { Authorization: "Bearer ..." },  // optional
  name: "remote-agent",                       // optional override
  timeoutMs: 60000,                           // optional (default 60s)
});

// Discover agent capabilities
const card = await remote.discover();

// Synchronous call
const result = await remote.run("Hello");

// Streaming call
for await (const chunk of remote.stream("Tell me a story")) {
  if (chunk.type === "text") process.stdout.write(chunk.text);
}

// Use as a tool for another agent
const tool = remote.asTool();

// Get cached Agent Card
const agentCard = remote.getAgentCard();
url
string
Base URL of the remote A2A agent.
headers
Record<string, string>
Custom headers for auth.
name
string
Override the discovered name.
timeoutMs
number
Request timeout in ms (default 60000).

Toolkits

Pre-built tool collections. Each returns ToolDef[] via getTools().
import {
  DuckDuckGoToolkit,
  HackerNewsToolkit,
  WebSearchToolkit,
  GmailToolkit,
  WhatsAppToolkit,
} from "@radaros/core";
DuckDuckGoToolkit
{ enableSearch?, enableNews?, maxResults? }
Tools: duckduckgo_search, duckduckgo_news. No API key required.
HackerNewsToolkit
{ enableGetTopStories?, enableGetUserDetails? }
Tools: hackernews_top_stories, hackernews_user. No API key required.
WebSearchToolkit
{ provider, apiKey?, maxResults? }
Tools: web_search. API key required (Tavily/SerpAPI).
GmailToolkit
{ credentialsPath?, tokenPath?, authClient? }
Tools: gmail_send, gmail_search, gmail_read. Requires OAuth2 credentials.
WhatsAppToolkit
{ accessToken?, phoneNumberId?, version?, recipientWaid? }
Tools: whatsapp_send_text, whatsapp_send_template. Requires a Meta access token.