Reasoning

RadarOS supports extended thinking (reasoning) across all major model providers. When enabled, the model produces an internal chain-of-thought before generating the final answer — improving performance on complex math, logic, and multi-step problems.

Quick Start

import { Agent, google } from "@radaros/core";

const agent = new Agent({
  name: "Thinker",
  model: google("gemini-2.5-flash"),
  instructions: "Solve problems step by step.",
  reasoning: {
    enabled: true,
    budgetTokens: 8000,
  },
  logLevel: "info",
});

const result = await agent.run("What is the 15th prime number?");
console.log(result.text);

if (result.thinking) {
  console.log("Reasoning:", result.thinking);
}

console.log("Reasoning tokens:", result.usage.reasoningTokens);

Configuration

reasoning.enabled (boolean, required)
Enable or disable reasoning for this agent.

reasoning.effort ('low' | 'medium' | 'high')
Reasoning effort level. OpenAI only; maps to the reasoning_effort parameter. Default: "medium".

reasoning.budgetTokens (number)
Maximum tokens allocated for the thinking phase. Anthropic, Google, and Vertex AI only; maps to budget_tokens / thinkingBudget. Default: 10000.
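The two optional fields target different providers: effort applies to OpenAI models, while budgetTokens applies to Anthropic, Google, and Vertex AI. A minimal sketch of both styles as standalone objects (the interface is reproduced inline so the snippet is self-contained; see the ReasoningConfig Type section for the canonical definition):

```typescript
// Reproduced from the ReasoningConfig Type section.
interface ReasoningConfig {
  enabled: boolean;
  effort?: "low" | "medium" | "high";
  budgetTokens?: number;
}

// OpenAI-style configuration: depth is controlled via effort.
const openaiReasoning: ReasoningConfig = {
  enabled: true,
  effort: "high",
};

// Anthropic/Google-style configuration: the thinking phase is capped
// by a token budget instead.
const anthropicReasoning: ReasoningConfig = {
  enabled: true,
  budgetTokens: 8000,
};
```

Both objects type-check against the same interface; fields irrelevant to a given provider are simply left unset.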

Provider Support

Each provider maps ReasoningConfig to its native API:
| Provider | API Parameter | Effort | Budget | Models |
| --- | --- | --- | --- | --- |
| OpenAI | reasoning_effort | effort field | N/A | o1, o1-mini, o3-mini |
| Anthropic | thinking.budget_tokens | N/A | budgetTokens field | claude-sonnet-4-20250514 |
| Google | thinkingConfig.thinkingBudget | N/A | budgetTokens field | gemini-2.5-flash, gemini-2.5-pro |
| Vertex AI | thinkingConfig.thinkingBudget | N/A | budgetTokens field | gemini-2.5-flash, gemini-2.5-pro |
When reasoning is enabled for OpenAI, temperature is ignored (the API does not allow both). For Anthropic, the temperature parameter is removed when thinking is active.
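To illustrate the mappings in the table, here is a hypothetical helper (not part of the RadarOS API) that translates a ReasoningConfig into provider-native request parameters, dropping temperature for OpenAI and Anthropic when thinking is active as described above:

```typescript
interface ReasoningConfig {
  enabled: boolean;
  effort?: "low" | "medium" | "high";
  budgetTokens?: number;
}

type Provider = "openai" | "anthropic" | "google" | "vertex";

// Hypothetical sketch of the mapping; RadarOS's internal translation
// is not shown in the docs, so names here are illustrative only.
function toProviderParams(
  provider: Provider,
  config: ReasoningConfig,
  temperature?: number
): Record<string, unknown> {
  if (!config.enabled) {
    return temperature !== undefined ? { temperature } : {};
  }
  switch (provider) {
    case "openai":
      // temperature is ignored when reasoning is enabled.
      return { reasoning_effort: config.effort ?? "medium" };
    case "anthropic":
      // temperature is removed when thinking is active.
      return { thinking: { budget_tokens: config.budgetTokens ?? 10000 } };
    case "google":
    case "vertex":
      // Assumption: temperature passes through unchanged for Gemini.
      return {
        ...(temperature !== undefined ? { temperature } : {}),
        thinkingConfig: { thinkingBudget: config.budgetTokens ?? 10000 },
      };
  }
}
```

The defaults ("medium", 10000) mirror the ones listed in the Configuration section.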

Output

When reasoning is enabled, RunOutput includes:
| Field | Type | Description |
| --- | --- | --- |
| thinking | string \| undefined | The model's internal chain-of-thought |
| usage.reasoningTokens | number \| undefined | Tokens consumed by the reasoning phase |

Streaming

During streaming, reasoning content is delivered as thinking chunks:
for await (const chunk of agent.stream("Solve this puzzle...")) {
  if (chunk.type === "thinking") {
    process.stdout.write(`[think] ${chunk.text}`);
  } else if (chunk.type === "text") {
    process.stdout.write(chunk.text);
  }
}
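When the full chain-of-thought is needed after the stream completes, the chunks can be accumulated into separate buffers. A self-contained sketch using a mock async generator in place of agent.stream (the chunk shape matches the loop above):

```typescript
interface StreamChunk {
  type: "thinking" | "text";
  text: string;
}

// Mock stand-in for agent.stream(), yielding the documented chunk shape.
async function* mockStream(): AsyncGenerator<StreamChunk> {
  yield { type: "thinking", text: "Try small cases first. " };
  yield { type: "thinking", text: "Then generalize the pattern." };
  yield { type: "text", text: "The answer is 47." };
}

// Route thinking and text chunks into separate buffers.
async function collect(stream: AsyncGenerator<StreamChunk>) {
  let thinking = "";
  let answer = "";
  for await (const chunk of stream) {
    if (chunk.type === "thinking") thinking += chunk.text;
    else answer += chunk.text;
  }
  return { thinking, answer };
}
```

The same collect function works unchanged on a real agent.stream(...) call, since it only depends on the chunk's type and text fields.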

ReasoningConfig Type

interface ReasoningConfig {
  enabled: boolean;
  effort?: "low" | "medium" | "high";
  budgetTokens?: number;
}

Logging

When logLevel is "info" or higher, the agent logger displays:
  • Thinking content (truncated, italic, dimmed)
  • Reasoning token count with a brain icon in the usage summary