## Agent

```ts
import { Agent } from "@radaros/core";
```
### Constructor

```ts
new Agent(config: AgentConfig)
```

`AgentConfig` accepts:

- LLM provider instance, e.g. `openai("gpt-4o")`.
- `instructions` (`string | (ctx: RunContext) => string`): system prompt; static, or dynamic based on context.
- Tools the agent can call during execution.
- Memory instance for short- and long-term message storage.
- Persistent storage for sessions. Defaults to `InMemoryStorage`.
- Default session ID; overridden by `RunOpts.sessionId`.
- Whether to include session history in LLM messages.
- Number of past conversation turns to include.
- Maximum tool-call iterations per run.
- LLM sampling temperature.
- Zod schema to enforce structured JSON responses.
- Lifecycle hooks: `beforeRun`, `afterRun`, `onToolCall`, `onError`.
- `guardrails` (`{ input?: InputGuardrail[]; output?: OutputGuardrail[] }`): input/output validation guardrails.
- Custom event bus for event-driven workflows.
- Enable extended thinking / chain-of-thought reasoning. See Reasoning.
- User memory instance for cross-session personalization. See User Memory.
- Log verbosity: `"debug"`, `"info"`, `"warn"`, `"error"`, or `"silent"`.
- Retry configuration for transient LLM API failures (429, 5xx, network errors). See RetryConfig.
- Maximum context-window tokens; history is auto-trimmed (oldest first) to fit within this limit.
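The oldest-first trimming behavior described for the context-window limit can be sketched as below. `trimToFit`, the `Msg` shape, and the characters-per-token estimate are illustrative assumptions, not the library's actual tokenizer or internals.

```typescript
// Hypothetical sketch of oldest-first history trimming (not the library's code).
type Msg = { role: string; content: string };

// Crude token estimate (~4 chars per token) -- an assumption for illustration.
const countTokens = (m: Msg): number => Math.ceil(m.content.length / 4);

function trimToFit(history: Msg[], maxContextTokens: number): Msg[] {
  const kept: Msg[] = [];
  let total = 0;
  // Walk from newest to oldest, keeping messages while they still fit.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = countTokens(history[i]);
    if (total + cost > maxContextTokens) break; // everything older is dropped
    kept.unshift(history[i]);
    total += cost;
  }
  return kept;
}
```

The newest turns survive; once the budget is exhausted, all older messages are cut.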
### Methods

```ts
async run(input: MessageContent, opts?: RunOpts): Promise<RunOutput>
```

Execute the agent and return the complete response.

```ts
async *stream(input: MessageContent, opts?: RunOpts): AsyncGenerator<StreamChunk>
```

Stream the agent's response token by token.
### Properties

| Property | Type | Description |
|---|---|---|
| `name` | `string` | Agent name |
| `eventBus` | `EventBus` | Event emitter |
| `tools` | `ToolDef[]` | Configured tools |
| `modelId` | `string` | Model identifier |
| `providerId` | `string` | Provider identifier |
| `hasStructuredOutput` | `boolean` | Whether structured output is enabled |
### RunOpts

```ts
interface RunOpts {
  sessionId?: string;
  userId?: string;
  metadata?: Record<string, unknown>;
  apiKey?: string;
}
```
### RunOutput

```ts
interface RunOutput {
  text: string;
  toolCalls: ToolCallResult[];
  usage: TokenUsage;
  structured?: unknown;
  thinking?: string;
  durationMs?: number;
}
```
### TokenUsage

```ts
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
  reasoningTokens?: number;
}
```
### StreamChunk

```ts
type StreamChunk =
  | { type: "text"; text: string }
  | { type: "thinking"; text: string }
  | { type: "tool_call_start"; toolCall: { id: string; name: string } }
  | { type: "tool_call_delta"; toolCallId: string; argumentsDelta: string }
  | { type: "tool_call_end"; toolCallId: string }
  | { type: "finish"; finishReason: string; usage?: TokenUsage };
```
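A consumer of this union typically switches on `type` and concatenates the text deltas. The sketch below reassembles a final answer from a stand-in stream; `fakeStream` and `collectText` are illustrative names, not part of the library.

```typescript
// Sketch: reassembling a final answer from StreamChunk events.
type StreamChunk =
  | { type: "text"; text: string }
  | { type: "thinking"; text: string }
  | { type: "tool_call_start"; toolCall: { id: string; name: string } }
  | { type: "tool_call_delta"; toolCallId: string; argumentsDelta: string }
  | { type: "tool_call_end"; toolCallId: string }
  | { type: "finish"; finishReason: string; usage?: { totalTokens: number } };

// A stand-in stream for illustration; agent.stream(...) would yield the same shape.
async function* fakeStream(): AsyncGenerator<StreamChunk> {
  yield { type: "thinking", text: "considering..." };
  yield { type: "text", text: "Hello, " };
  yield { type: "text", text: "world." };
  yield { type: "finish", finishReason: "stop" };
}

async function collectText(stream: AsyncGenerator<StreamChunk>): Promise<string> {
  let out = "";
  for await (const chunk of stream) {
    if (chunk.type === "text") out += chunk.text; // ignore thinking/tool events here
  }
  return out;
}
```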
## Model Providers

### Factory Functions

```ts
import { openai, anthropic, google, vertex, ollama } from "@radaros/core";

openai(modelId: string, config?: { apiKey?: string; baseURL?: string }): ModelProvider
anthropic(modelId: string, config?: { apiKey?: string }): ModelProvider
google(modelId: string, config?: { apiKey?: string }): ModelProvider
vertex(modelId: string, config?: { project?: string; location?: string; credentials?: string }): ModelProvider
ollama(modelId: string, config?: { baseURL?: string }): ModelProvider
```
### ModelProvider Interface

```ts
interface ModelProvider {
  readonly providerId: string;
  readonly modelId: string;
  generate(messages: ChatMessage[], options?: ModelConfig): Promise<ModelResponse>;
  stream(messages: ChatMessage[], options?: ModelConfig): AsyncGenerator<StreamChunk>;
}
```
### ModelConfig

```ts
interface ModelConfig {
  temperature?: number;
  maxTokens?: number;
  topP?: number;
  stop?: string[];
  responseFormat?: "text" | "json" | { type: "json_schema"; schema: object; name?: string };
  apiKey?: string;
  reasoning?: ReasoningConfig;
}
```
### ReasoningConfig

```ts
interface ReasoningConfig {
  enabled: boolean;
  effort?: "low" | "medium" | "high"; // OpenAI only
  budgetTokens?: number;              // Anthropic, Google, Vertex AI
}
```
## Retry

### RetryConfig

```ts
interface RetryConfig {
  maxRetries: number;     // default: 3
  initialDelayMs: number; // default: 500
  maxDelayMs: number;     // default: 10000
  retryableErrors?: (error: unknown) => boolean;
}
```

Retryable errors are detected automatically: HTTP 429 (rate limit), 5xx (server errors), and network errors (`ECONNRESET`, `ETIMEDOUT`).
### withRetry

```ts
import { withRetry } from "@radaros/core";

const result = await withRetry(
  () => someApiCall(),
  { maxRetries: 5, initialDelayMs: 1000 }
);
```

Standalone retry utility with exponential backoff and jitter. It is used internally by the LLM loop and is also exported for custom use.
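The semantics of exponential backoff with jitter and a retryability check can be sketched as follows. This mirrors the `RetryConfig` shape but is NOT the library's implementation; `retrySketch` is a hypothetical name.

```typescript
// Minimal sketch of retry-with-exponential-backoff-and-jitter semantics.
interface RetryOpts {
  maxRetries: number;
  initialDelayMs: number;
  maxDelayMs: number;
  retryableErrors?: (error: unknown) => boolean;
}

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function retrySketch<T>(fn: () => Promise<T>, opts: RetryOpts): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= opts.maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const retryable = opts.retryableErrors ? opts.retryableErrors(err) : true;
      if (!retryable || attempt === opts.maxRetries) throw err;
      // Exponential backoff capped at maxDelayMs, plus up to 25% random jitter.
      const base = Math.min(opts.initialDelayMs * 2 ** attempt, opts.maxDelayMs);
      await sleep(base + Math.random() * base * 0.25);
    }
  }
  throw lastError;
}
```

The jitter spreads out retries from concurrent callers so they do not hammer a rate-limited API in lockstep.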
## defineTool

```ts
import { defineTool } from "@radaros/core";

defineTool<T extends z.ZodObject>(config: {
  name: string;
  description: string;
  parameters: T;
  execute: (args: z.infer<T>, ctx: RunContext) => Promise<string | ToolResult>;
  cache?: ToolCacheConfig;
}): ToolDef
```

```ts
interface ToolResult {
  content: string;
  artifacts?: Artifact[];
}

interface Artifact {
  type: string;
  data: unknown;
  mimeType?: string;
}

interface ToolCacheConfig {
  ttl: number; // milliseconds
}
```
See Tool Caching for details.
## Team

```ts
import { Team, TeamMode } from "@radaros/core";
```

### Constructor

```ts
new Team(config: TeamConfig)
```

- Execution mode: `coordinate`, `route`, `broadcast`, or `collaborate`.
- Max rounds for `collaborate` mode.

### Methods

```ts
async run(input: string, opts?: RunOpts): Promise<RunOutput>
async *stream(input: string, opts?: RunOpts): AsyncGenerator<StreamChunk>
```
### TeamMode

```ts
enum TeamMode {
  Coordinate = "coordinate",
  Route = "route",
  Broadcast = "broadcast",
  Collaborate = "collaborate"
}
```
## Workflow

```ts
import { Workflow } from "@radaros/core";
```

### Constructor

```ts
new Workflow<TState>(config: WorkflowConfig<TState>)
```

- `steps` (`StepDef<TState>[]`, required): ordered list of steps.
- `retryPolicy` (`{ maxRetries: number; backoffMs: number }`): retry configuration.

### Methods

```ts
async run(opts?: { sessionId?: string; userId?: string }): Promise<WorkflowResult<TState>>
```

### Step Types

```ts
// Agent step
{ name: string; agent: Agent; inputFrom?: (state: TState) => string }

// Function step
{ name: string; run: (state: TState, ctx: RunContext) => Promise<Partial<TState>> }

// Condition step
{ name: string; condition: (state: TState) => boolean; steps: StepDef<TState>[] }

// Parallel step
{ name: string; parallel: StepDef<TState>[] }
```
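The core of sequential execution is that each step returns a `Partial<TState>` that is merged into the accumulated state. The sketch below models only the function-step shape; `runSteps` and `FnStep` are illustrative names, not the library's runner.

```typescript
// Sketch of sequential function-step execution with Partial<TState> merging.
type FnStep<S> = { name: string; run: (state: S) => Promise<Partial<S>> };

async function runSteps<S extends object>(initial: S, steps: FnStep<S>[]): Promise<S> {
  let state = initial;
  for (const step of steps) {
    const patch = await step.run(state);       // each step returns a partial update
    state = Object.assign({}, state, patch);   // merged into the accumulated state
  }
  return state;
}
```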
## Memory

```ts
import { Memory } from "@radaros/core";
```

### Constructor

```ts
new Memory(config?: MemoryConfig)
```

- Storage backend. Defaults to `InMemoryStorage`.
- Max messages before overflow to long-term memory.
- Enable long-term summarization.
### Methods

| Method | Returns | Description |
|---|---|---|
| `addMessages(sessionId, messages)` | `Promise<void>` | Add messages to short-term memory |
| `getMessages(sessionId)` | `Promise<ChatMessage[]>` | Get short-term messages |
| `getSummaries(sessionId)` | `Promise<string[]>` | Get long-term summaries |
| `getContextString(sessionId)` | `Promise<string>` | Formatted context string |
| `clear(sessionId)` | `Promise<void>` | Clear all memory for a session |
## UserMemory

```ts
import { UserMemory } from "@radaros/core";
```

### Constructor

```ts
new UserMemory(config?: UserMemoryConfig)
```

- Storage backend for facts. Defaults to `InMemoryStorage`.
- LLM for auto-extraction. Falls back to the agent's model.
- Enable/disable auto-extraction.
### Methods

| Method | Returns | Description |
|---|---|---|
| `getFacts(userId)` | `Promise<UserFact[]>` | Get all facts for a user |
| `addFacts(userId, facts, source?)` | `Promise<void>` | Add facts manually |
| `removeFact(userId, factId)` | `Promise<void>` | Remove a fact by ID |
| `clear(userId)` | `Promise<void>` | Clear all facts for a user |
| `getContextString(userId)` | `Promise<string>` | Formatted string for the system prompt |
| `extractAndStore(userId, messages, model?)` | `Promise<void>` | Extract and store facts from messages |
| `asTool(config?)` | `ToolDef` | Create a tool so the agent can recall user facts on demand |
### UserFact

```ts
interface UserFact {
  id: string;
  fact: string;
  createdAt: Date;
  source: "auto" | "manual";
}
```

See User Memory for a usage guide.
## Storage Drivers

```ts
import {
  InMemoryStorage,
  SqliteStorage,
  PostgresStorage,
  MongoDBStorage,
} from "@radaros/core";
```
### StorageDriver Interface

```ts
interface StorageDriver {
  get<T>(namespace: string, key: string): Promise<T | null>;
  set<T>(namespace: string, key: string, value: T): Promise<void>;
  delete(namespace: string, key: string): Promise<void>;
  list<T>(namespace: string, prefix?: string): Promise<Array<{ key: string; value: T }>>;
  close(): Promise<void>;
}
```
| Driver | Constructor | Requires |
|---|---|---|
| `InMemoryStorage` | `new InMemoryStorage()` | None |
| `SqliteStorage` | `new SqliteStorage(filepath)` | `better-sqlite3` |
| `PostgresStorage` | `new PostgresStorage(connectionString)` | `pg` |
| `MongoDBStorage` | `new MongoDBStorage(uri, dbName?, collectionName?)` | `mongodb` |
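The `StorageDriver` interface can be satisfied by a simple driver backed by nested Maps, which is plausibly how `InMemoryStorage` behaves; the `MapStorage` class below is an illustrative sketch, not the shipped implementation.

```typescript
// Sketch of a StorageDriver backed by nested Maps.
interface StorageDriver {
  get<T>(namespace: string, key: string): Promise<T | null>;
  set<T>(namespace: string, key: string, value: T): Promise<void>;
  delete(namespace: string, key: string): Promise<void>;
  list<T>(namespace: string, prefix?: string): Promise<Array<{ key: string; value: T }>>;
  close(): Promise<void>;
}

class MapStorage implements StorageDriver {
  private data = new Map<string, Map<string, unknown>>();

  // Lazily create the per-namespace map.
  private ns(namespace: string): Map<string, unknown> {
    let m = this.data.get(namespace);
    if (!m) { m = new Map(); this.data.set(namespace, m); }
    return m;
  }

  async get<T>(namespace: string, key: string): Promise<T | null> {
    return (this.ns(namespace).get(key) as T | undefined) ?? null;
  }
  async set<T>(namespace: string, key: string, value: T): Promise<void> {
    this.ns(namespace).set(key, value);
  }
  async delete(namespace: string, key: string): Promise<void> {
    this.ns(namespace).delete(key);
  }
  async list<T>(namespace: string, prefix = ""): Promise<Array<{ key: string; value: T }>> {
    return [...this.ns(namespace).entries()]
      .filter(([k]) => k.startsWith(prefix))
      .map(([key, value]) => ({ key, value: value as T }));
  }
  async close(): Promise<void> { this.data.clear(); }
}
```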
## Vector Stores

```ts
import {
  InMemoryVectorStore,
  PgVectorStore,
  QdrantVectorStore,
  MongoDBVectorStore,
} from "@radaros/core";
```
### VectorStore Interface

```ts
interface VectorStore {
  upsert(doc: VectorDocument): Promise<void>;
  search(embedding: number[], options?: VectorSearchOptions): Promise<VectorSearchResult[]>;
  get(id: string): Promise<VectorDocument | null>;
  delete(id: string): Promise<void>;
  clear(): Promise<void>;
  close(): Promise<void>;
}
```
### Types

```ts
interface VectorDocument {
  id: string;
  content: string;
  embedding?: number[];
  metadata?: Record<string, unknown>;
}

interface VectorSearchOptions {
  topK?: number;
  minScore?: number;
  filter?: Record<string, unknown>;
}

interface VectorSearchResult {
  document: VectorDocument;
  score: number;
}
```
## Embeddings

```ts
import { OpenAIEmbedding, GoogleEmbedding } from "@radaros/core";
```

### EmbeddingProvider Interface

```ts
interface EmbeddingProvider {
  embed(text: string): Promise<number[]>;
  embedBatch(texts: string[]): Promise<number[][]>;
}
```
| Provider | Constructor | Default Model |
|---|---|---|
| `OpenAIEmbedding` | `new OpenAIEmbedding({ apiKey?, model? })` | `text-embedding-3-small` |
| `GoogleEmbedding` | `new GoogleEmbedding({ apiKey?, model? })` | `text-embedding-004` |
## KnowledgeBase

```ts
import { KnowledgeBase } from "@radaros/core";
```

### Constructor

```ts
new KnowledgeBase(config: {
  name: string;
  vectorStore: VectorStore;
  collection?: string;
})
```
### Methods

| Method | Returns | Description |
|---|---|---|
| `initialize()` | `Promise<void>` | Initialize the vector store |
| `add(doc)` | `Promise<void>` | Add a single document |
| `addDocuments(docs)` | `Promise<void>` | Add multiple documents |
| `search(query, options?)` | `Promise<VectorSearchResult[]>` | Semantic search |
| `get(id)` | `Promise<VectorDocument \| null>` | Get a document by ID |
| `delete(id)` | `Promise<void>` | Delete a document |
| `clear()` | `Promise<void>` | Clear all documents |
| `close()` | `Promise<void>` | Close connections |
| `asTool(config?)` | `ToolDef` | Create a tool for RAG |
## Message Types

```ts
type MessageRole = "system" | "user" | "assistant" | "tool";
type MessageContent = string | ContentPart[];

interface ChatMessage {
  role: MessageRole;
  content: MessageContent | null;
  toolCalls?: ToolCall[];
  toolCallId?: string;
  name?: string;
}

type ContentPart = TextPart | ImagePart | AudioPart | FilePart;

interface TextPart { type: "text"; text: string }
interface ImagePart { type: "image"; data: string; mimeType?: string }
interface AudioPart { type: "audio"; data: string; mimeType?: string }
interface FilePart { type: "file"; data: string; mimeType: string; filename?: string }
```
### Helpers

```ts
getTextContent(content: MessageContent | null): string
isMultiModal(content: MessageContent | null): content is ContentPart[]
```
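A plausible implementation of these two helpers, inferred from the type signatures above (the library's actual bodies may differ): multi-modal content is an array of parts, and the text extraction concatenates only the text parts.

```typescript
// Sketch implementations of the two helpers (behavior inferred from the types).
type ContentPart =
  | { type: "text"; text: string }
  | { type: "image"; data: string; mimeType?: string };
type MessageContent = string | ContentPart[];

function isMultiModal(content: MessageContent | null): content is ContentPart[] {
  return Array.isArray(content);
}

function getTextContent(content: MessageContent | null): string {
  if (content == null) return "";
  if (typeof content === "string") return content;
  // For multi-modal content, concatenate only the text parts.
  return content
    .filter((p): p is { type: "text"; text: string } => p.type === "text")
    .map((p) => p.text)
    .join("");
}
```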
## EventBus

```ts
import { EventBus } from "@radaros/core";

const bus = new EventBus();

bus.on("run.start", (data) => { /* { runId, agentName, input } */ });
bus.on("run.complete", (data) => { /* { runId, output } */ });
bus.on("run.error", (data) => { /* { runId, error } */ });
bus.on("run.stream.chunk", (data) => { /* { runId, chunk } */ });
bus.on("tool.call", (data) => { /* { runId, toolName, args } */ });
bus.on("tool.result", (data) => { /* { runId, toolName, result } */ });
```
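The `on(event, handler)` shape used above corresponds to a plain multi-handler emitter. The `MiniBus` below is an illustrative minimal sketch of that pattern, not the library's `EventBus`.

```typescript
// Minimal emitter sketch with the same on(...) shape used above.
type Handler = (data: unknown) => void;

class MiniBus {
  private handlers = new Map<string, Handler[]>();

  on(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  emit(event: string, data: unknown): void {
    // Synchronously fan out to every handler registered for this event.
    for (const handler of this.handlers.get(event) ?? []) handler(data);
  }
}
```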
## Logger

```ts
import { Logger } from "@radaros/core";

const logger = new Logger({
  level: "info",   // "debug" | "info" | "warn" | "error" | "silent"
  color: true,     // auto-detect TTY
  prefix: "myapp",
});
```
## MCPToolProvider

Connect to an MCP server and use its tools as native `ToolDef[]`.

```ts
import { MCPToolProvider } from "@radaros/core";

const mcp = new MCPToolProvider({
  name: "github",
  transport: "stdio",  // "stdio" | "http"
  command: "npx",      // stdio only
  args: ["-y", "@modelcontextprotocol/server-github"],
  env: { GITHUB_TOKEN: "..." },
  // url: "http://...", // http only
  // headers: {},       // http only
});

await mcp.connect();
const tools = await mcp.getTools(); // ToolDef[]
await mcp.refresh();                // re-discover tools
await mcp.close();                  // disconnect
```
| Config | Type | Description |
|---|---|---|
| `name` | `string` | Unique name for this MCP server; used to namespace tool names. |
| `transport` | `"stdio" \| "http"` | Transport protocol. |
| `command` | `string` | Command to spawn (stdio only). |
| `args` | `string[]` | Arguments for the command (stdio only). |
| `env` | `Record<string, string>` | Environment variables (stdio only). |
| `url` | `string` | MCP server URL (http only). |
| `headers` | `Record<string, string>` | Custom HTTP headers (http only). |

Requires the peer dependency `@modelcontextprotocol/sdk`.
## A2ARemoteAgent

Connect to a remote A2A-compliant agent.

```ts
import { A2ARemoteAgent } from "@radaros/core";

const remote = new A2ARemoteAgent({
  url: "http://remote-service:3001",
  headers: { Authorization: "Bearer ..." }, // optional
  name: "remote-agent",                     // optional override
  timeoutMs: 60000,                         // optional (default 60s)
});

// Discover agent capabilities
const card = await remote.discover();

// Synchronous call
const result = await remote.run("Hello");

// Streaming call
for await (const chunk of remote.stream("Tell me a story")) {
  if (chunk.type === "text") process.stdout.write(chunk.text);
}

// Use as a tool for another agent
const tool = remote.asTool();

// Get cached Agent Card
const agentCard = remote.getAgentCard();
```
| Config | Type | Description |
|---|---|---|
| `url` | `string` | Base URL of the remote A2A agent. |
| `headers` | `Record<string, string>` | Custom headers, e.g. for auth. |
| `name` | `string` | Override the discovered name. |
| `timeoutMs` | `number` | Request timeout (default 60000). |
## Toolkits

Pre-built tool collections. Each toolkit returns `ToolDef[]` via `getTools()`.

```ts
import {
  DuckDuckGoToolkit,
  HackerNewsToolkit,
  WebSearchToolkit,
  GmailToolkit,
  WhatsAppToolkit,
} from "@radaros/core";
```
| Toolkit | Config | Tools | API Key Required |
|---|---|---|---|
| `DuckDuckGoToolkit` | `{ enableSearch?, enableNews?, maxResults? }` | `duckduckgo_search`, `duckduckgo_news` | No |
| `HackerNewsToolkit` | `{ enableGetTopStories?, enableGetUserDetails? }` | `hackernews_top_stories`, `hackernews_user` | No |
| `WebSearchToolkit` | `{ provider, apiKey?, maxResults? }` | `web_search` | Yes (Tavily/SerpAPI) |
| `GmailToolkit` | `{ credentialsPath?, tokenPath?, authClient? }` | `gmail_send`, `gmail_search`, `gmail_read` | Yes (OAuth2) |
| `WhatsAppToolkit` | `{ accessToken?, phoneNumberId?, version?, recipientWaid? }` | `whatsapp_send_text`, `whatsapp_send_template` | Yes (Meta) |