Agent

Table of contents

  1. Creating an Agent
  2. Running Conversations
    1. Turn methods
    2. Single-turn (no history)
    3. Multi-turn streaming
    4. Resetting the conversation
  3. AgentResponse
  4. AgentOptions Reference
  5. Agent Events
    1. Logging instead of OnEvent
  6. Model Selection
  7. Using Tools with an Agent
  8. Dependency Injection

The Agent class drives multi-turn conversations with an LLM. It accepts any ILLMBackend, maintains conversation history, streams responses token-by-token, and automatically orchestrates MCP tool calls.

Creating an Agent

var agent = new Agent(lm, new AgentOptions
{
    SystemPrompt = "You are a helpful assistant.",
    OnEvent      = e =>
    {
        if (e.Kind == AgentEventKind.TextDelta)
            Console.Write(e.Text);
    },
});

Running Conversations

Turn methods

| Method          | History | Streaming |
|-----------------|---------|-----------|
| RunAsync        | No      | No        |
| RunStreamAsync  | No      | Yes       |
| ChatAsync       | Yes     | No        |
| ChatStreamAsync | Yes     | Yes       |

Every method has a (text, images, ...) overload for multimodal input. All return Task<AgentResponse>.
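The four methods differ only in whether history is kept and whether output streams through OnEvent. A quick side-by-side (prompts are placeholders):

```csharp
// Fresh turn each time: no history is kept between calls
var summary = await agent.RunAsync("Summarise this report: ...");
var haiku   = await agent.RunStreamAsync("Write a haiku.");    // streams via OnEvent

// Conversational: each call appends to the shared history
var first  = await agent.ChatAsync("What is the weather in Paris?");
var second = await agent.ChatStreamAsync("And in London?");    // streams via OnEvent
```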

Single-turn (no history)

var response = await agent.RunAsync(
    "Summarise the following text: ...",
    mcpServerUrl: "http://localhost:5100/mcp");   // optional

Multi-turn streaming

// Each call appends to the conversation history
await agent.ChatStreamAsync("What is the weather in Paris?");
await agent.ChatStreamAsync("What about London?");
await agent.ChatStreamAsync("Compare both cities.");

Resetting the conversation

// Clears history without compaction
agent.ResetConversation();

AgentResponse

| Property        | Description                                                       |
|-----------------|-------------------------------------------------------------------|
| Text            | Final model text for this turn                                    |
| ToolInvocations | All tool calls executed during the turn, in order                 |
| Usage           | Token usage (InputTokens, OutputTokens, TotalTokens); may be null |
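A typical turn reads all three properties. The element type of ToolInvocations is not documented here, so this sketch only takes its count:

```csharp
var response = await agent.RunAsync(
    "What is the weather in Tokyo?",
    mcpServerUrl: "http://localhost:5100/mcp");

Console.WriteLine(response.Text);                                  // final model text
Console.WriteLine($"Tools called: {response.ToolInvocations.Count}");

if (response.Usage is not null)                                    // Usage may be null
    Console.WriteLine(
        $"{response.Usage.InputTokens} in / {response.Usage.OutputTokens} out " +
        $"({response.Usage.TotalTokens} total)");
```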

AgentOptions Reference

| Property     | Default | Description                                                         |
|--------------|---------|---------------------------------------------------------------------|
| SystemPrompt | null    | Instruction text prepended to every request                         |
| OnEvent      | null    | Callback fired for every AgentEvent                                 |
| Reasoning    | null    | Per-agent reasoning effort (see Reasoning Control)                  |
| Inference    | null    | Per-agent sampling parameters (see Inference Config)                |
| Model        | null    | Per-agent model default (see Model Selection)                       |
| Logger       | null    | ILogger; events are written automatically, no OnEvent wiring needed |
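The options compose freely. A sketch combining several of them, assuming Model takes a plain string alias as in the per-request override shown under Model Selection:

```csharp
var agent = new Agent(lm, new AgentOptions
{
    SystemPrompt = "You are a helpful assistant.",
    Model        = "advanced",          // per-agent default; per-request model: wins
    Logger       = loggerFactory.CreateLogger<Agent>(),
    OnEvent      = e =>
    {
        if (e.Kind == AgentEventKind.TextDelta)
            Console.Write(e.Text);
    },
});
```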

Agent Events

The OnEvent callback receives AgentEvent instances as the agent processes a request:

| AgentEventKind    | Description                                     |
|-------------------|-------------------------------------------------|
| UserInput         | User message was received                       |
| SystemPrompt      | System prompt dispatched to the model           |
| ToolDeclaration   | An MCP server was declared for this request     |
| Reasoning         | Model emitted a thinking / reasoning chunk      |
| TextDelta         | A streaming text delta arrived from the model   |
| ToolCall          | The model invoked a tool                        |
| ToolResult        | A tool returned its result                      |
| Answer            | The model produced its final answer for this turn |
| StepCompleted     | A workflow step was verified and completed      |
| WorkflowCompleted | All workflow steps were completed               |

var agent = new Agent(lm, new AgentOptions
{
    // A switch *statement* is used here: the handlers return void, so a
    // switch expression (whose arms must produce a value) would not compile.
    OnEvent = e =>
    {
        switch (e.Kind)
        {
            case AgentEventKind.TextDelta:
                Console.Write(e.Text);
                break;
            case AgentEventKind.ToolCall:
                Console.WriteLine($"\n[Tool] {e.ToolName}({e.Arguments})");
                break;
            case AgentEventKind.ToolResult:
                Console.WriteLine($"[Result] {e.Text}");
                break;
            case AgentEventKind.Answer:
                Console.WriteLine("\n[Done]");
                break;
        }
    },
});

Logging instead of OnEvent

Pass an ILogger to have all events written automatically without wiring OnEvent manually:

var agent = new Agent(lm, new AgentOptions
{
    SystemPrompt = "You are a helpful assistant.",
    Logger       = loggerFactory.CreateLogger<Agent>(),
});

TextDelta is written at Trace level; all other events are written at Debug or Information, so standard log-level filters keep the output clean.
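A minimal console setup that exploits those levels, using Microsoft.Extensions.Logging (raise the minimum level to Trace when you want the raw TextDelta stream in the log):

```csharp
using Microsoft.Extensions.Logging;

using var loggerFactory = LoggerFactory.Create(builder => builder
    .AddConsole()
    .SetMinimumLevel(LogLevel.Debug));   // Debug hides TextDelta; Trace shows it

var agent = new Agent(lm, new AgentOptions
{
    SystemPrompt = "You are a helpful assistant.",
    Logger       = loggerFactory.CreateLogger<Agent>(),
});
```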

Model Selection

The model to use is resolved in order of precedence (highest wins):

  1. Per-request model: parameter
  2. AgentOptions.Model
  3. LMConfig.ModelName

// Per-request override
var response = await agent.ChatStreamAsync(
    "Quick summary please.",
    model: "advanced");   // resolved via LMConfig.Models alias map

Using Tools with an Agent

Point the agent at a running MCP server via the mcpServerUrl parameter:

var response = await agent.RunAsync(
    "What is the weather in Tokyo?",
    mcpServerUrl: "http://localhost:5100/mcp");

The agent will automatically discover available tools, call them as needed, and continue the conversation with the results.

Dependency Injection

Register the agent in an ASP.NET Core or generic host:

builder.Services.AddSingleton<ILLMBackend>(_ => new OpenAIBackend(new LMConfig
{
    Endpoint  = builder.Configuration["LM:Endpoint"]!,
    ModelName = builder.Configuration["LM:Model"]!,
}));

builder.Services.AddScoped<Agent>(sp => new Agent(
    sp.GetRequiredService<ILLMBackend>(),
    new AgentOptions { SystemPrompt = "You are a helpful assistant." }));
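With the registrations above, a minimal-API endpoint can take the scoped Agent straight from the container. The /chat route and the ChatRequest record are illustrative, not part of the library:

```csharp
app.MapPost("/chat", async (Agent agent, ChatRequest request) =>
{
    // ChatAsync keeps history for the lifetime of the scoped Agent instance
    var response = await agent.ChatAsync(request.Prompt);
    return Results.Text(response.Text);
});

public record ChatRequest(string Prompt);
```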