Getting Started
Table of contents
- Installation
- Step 1 — Choose a backend
- Step 2 — Chat with the agent
- Step 3 — Define tools
- Step 4 — Host an MCP server
- What’s Next?
Installation
Add the NuGet package to your project:
```shell
dotnet add package Theoistic.Agentic
```
Requirements: .NET 10 · ASP.NET Core (included via the Microsoft.AspNetCore.App framework reference)
Step 1 — Choose a backend
Agentic uses the ILLMBackend abstraction so you can swap between a remote OpenAI-compatible API and a locally-running llama.cpp model without changing any agent code.
Remote API (OpenAIBackend)
Connects to LM Studio, OpenRouter, OpenAI, or any OpenAI-compatible /v1/responses endpoint:
```csharp
using Agentic;

var lm = new OpenAIBackend(new LMConfig
{
    Endpoint = "http://localhost:1234",      // LM Studio or any OpenAI-compatible server
    ModelName = "your-model-name",
    ApiKey = "sk-...",                       // optional Bearer token
    EmbeddingModel = "your-embedding-model", // optional — needed for vector storage
});
```
Compatible with:
- LM Studio
- Ollama (via OpenAI-compatible endpoint)
- OpenAI API
- Any OpenAI-compatible REST API (/v1/responses)
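Switching providers only means changing the LMConfig values. For example, a hosted endpoint might be configured like this (the endpoint, model name, and environment variable below are placeholders for illustration, not values the library prescribes):

```csharp
using Agentic;

// Hypothetical hosted-provider config; substitute your own
// endpoint, model name, and API key source.
var cloud = new OpenAIBackend(new LMConfig
{
    Endpoint = "https://api.openai.com",
    ModelName = "gpt-4o-mini",
    ApiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY"),
});
```

Because the rest of the agent code only sees the backend abstraction, nothing else changes when you swap configs.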
Local inference (NativeBackend)
Runs inference locally using llama.cpp. The runtime is downloaded and installed automatically on first use:
```csharp
using Agentic;
using Agentic.Runtime.Core;

var sessionOptions = new Mantle.LmSessionOptions
{
    ModelPath = @"/path/to/model.gguf",
    ToolRegistry = new Mantle.ToolRegistry(),
    Compaction = new Mantle.ConversationCompactionOptions(MaxInputTokens: 4096),
};

await using var lm = new NativeBackend(
    sessionOptions,
    backend: LlamaBackend.Cuda,
    cudaVersion: "12.4",
    installProgress: new Progress<(string msg, double pct)>(
        p => Console.Write($"\r[{p.pct:F0}%] {p.msg}")));
```
See NativeBackend for full details including CPU, CUDA, and Vulkan options.
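On machines without a supported GPU, the same constructor can be used without the CUDA arguments. A sketch, assuming a LlamaBackend.Cpu value exists alongside the CUDA and Vulkan options mentioned above:

```csharp
using Agentic;
using Agentic.Runtime.Core;

// CPU-only variant: no cudaVersion or GPU drivers required.
// LlamaBackend.Cpu is assumed here; check the NativeBackend docs
// for the exact enum values your version ships.
var sessionOptions = new Mantle.LmSessionOptions
{
    ModelPath = @"/path/to/model.gguf",
    ToolRegistry = new Mantle.ToolRegistry(),
};

await using var lm = new NativeBackend(sessionOptions, backend: LlamaBackend.Cpu);
```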
Step 2 — Chat with the agent
```csharp
var agent = new Agent(lm, new AgentOptions
{
    SystemPrompt = "You are a helpful assistant.",
    OnEvent = e =>
    {
        if (e.Kind == AgentEventKind.TextDelta)
            Console.Write(e.Text);
    },
});

// Single-turn (no history)
var response = await agent.RunAsync("Hello!");

// Multi-turn streaming (maintains conversation history)
await agent.ChatStreamAsync("What did I just say?");
```
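Combined with the OnEvent handler above, ChatStreamAsync is enough for a minimal console REPL. A sketch (the loop and input handling are illustrative, not part of the library):

```csharp
// Minimal REPL: ChatStreamAsync maintains conversation history,
// so follow-up questions can refer back to earlier turns.
while (true)
{
    Console.Write("\n> ");
    var input = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(input))
        break;

    // Streamed text deltas arrive through the OnEvent handler above.
    await agent.ChatStreamAsync(input);
}
```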
Step 3 — Define tools
Use the [Tool] and [ToolParam] attributes to expose methods to the model with zero boilerplate:
```csharp
using System.ComponentModel;

public class WeatherTools : IAgentToolSet
{
    [Tool, Description("Get the current weather for a city.")]
    public Task<string> GetWeather(
        [ToolParam("City name")] string city,
        [ToolParam("Unit: celsius or fahrenheit")] string unit = "celsius")
    {
        return Task.FromResult(
            $"The weather in {city} is 22 °{(unit == "fahrenheit" ? "F" : "C")} and sunny.");
    }
}
```
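How a tool set reaches the model depends on the backend. For the native backend, the LmSessionOptions from Step 1 carries a ToolRegistry, so one way to wire WeatherTools in is the following sketch (it reuses the Register call that also appears in the MCP hosting step):

```csharp
// Register the tool set so the model can invoke GetWeather.
var tools = new Mantle.ToolRegistry();
tools.Register(new WeatherTools());

// Attach the registry to the native backend's session options.
var sessionOptions = new Mantle.LmSessionOptions
{
    ModelPath = @"/path/to/model.gguf",
    ToolRegistry = tools,
};
```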
Step 4 — Host an MCP server
Expose your tools over HTTP so any MCP-compatible client can call them:
```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddAgenticMcp(opt =>
{
    opt.ApiKey = "my-secret-key"; // optional Bearer-token auth
    opt.ToolCallTimeout = TimeSpan.FromSeconds(55);
});

var app = builder.Build();
app.MapMcpServer("/mcp");

var tools = app.Services.GetRequiredService<ToolRegistry>();
tools.Register(new WeatherTools());

await app.RunAsync();
```
The MCP server exposes all registered tools over SSE + JSON-RPC so any MCP-compatible client (LM Studio, Claude Desktop, etc.) can call them.
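On the wire, MCP clients speak JSON-RPC 2.0, so a request listing the registered tools looks roughly like this. Illustrative only: the port is the common ASP.NET Core development default, and real MCP clients first perform an initialize handshake over the SSE transport before issuing requests.

```shell
# tools/list is the standard MCP method for enumerating tools.
# The Authorization header applies only when ApiKey is configured.
curl -X POST http://localhost:5000/mcp \
  -H "Authorization: Bearer my-secret-key" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
```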
What’s Next?
Explore the individual feature documentation or jump straight into examples:
- OpenAI Backend — configuration options and direct API calls
- Native Backend — local llama.cpp inference with auto-install
- Agent — multi-turn chat, events, and options
- Workflows — multi-step agent pipelines
- Tool System — building tools with attributes
- MCP Server — hosting tools over HTTP
- Context Compaction — managing long conversations
- Vector Storage — semantic search and embeddings
- Examples — real-world usage patterns