Colmena gives you a single, clean API to call any LLM, chain agents into workflows, and deploy anywhere — in just a few lines of code.
Works with all major LLM providers
Colmena's core is written in Rust and compiled directly into native bindings for Python and Node.js. No wrappers, no interpretation layers — just your code at full speed.
PyO3 and NAPI-RS compile Rust directly into your runtime. There is nothing in the way between you and the model.
Swap between OpenAI, Gemini, and Anthropic by changing a single string. Your logic never changes.
From a single .call() to complex multi-step pipelines — the same library handles both.
```python
import json

import colmena  # needed for colmena.run_dag below
from colmena import ColmenaLlm

llm = ColmenaLlm()

# Synchronous call — simple as it gets
response = llm.call(
    messages=[{"role": "user", "content": "Explain quantum computing."}],
    provider="openai",
    model="gpt-4o",
    temperature=0.7,
)
print(response)

# Real-time streaming
for chunk in llm.stream(
    messages=[{"role": "user", "content": "Write a Rust summary."}],
    provider="anthropic",
    model="claude-3-sonnet-20240229",
):
    print(chunk, end="", flush=True)

# Run a full workflow
result = json.loads(colmena.run_dag("workflow.json"))
```
```typescript
import { ColmenaLlm, runDag, serveDag } from 'colmena-ai';

const llm = new ColmenaLlm();

// Full TypeScript types — no casting, no @types needed
const response = await llm.call(
  [{ role: 'user', content: 'Describe native bindings.' }],
  'openai',
  { model: 'gpt-4o', temperature: 0.5 }
);

// healthCheck and getProviders built in
const ok = await llm.healthCheck('gemini');

// Execute a full workflow from a JSON file
const result = await runDag('workflow.json');

// Or expose it instantly as a REST endpoint
await serveDag('workflow.json', '0.0.0.0', 3000);
```
```json
{
  "nodes": [
    { "id": "fetch_data", "type": "http", "url": "https://api.example.com/data" },
    { "id": "analyze", "type": "llm", "provider": "gemini", "model": "gemini-2.0-flash" },
    { "id": "process", "type": "python", "script": "transform.py" }
  ],
  "edges": [
    ["fetch_data", "analyze"],
    ["analyze", "process"]
  ]
}
```
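To make the execution order concrete, here is a minimal standard-library sketch of how an edge list like the one above resolves into a run order via topological sorting. The node ids come from the JSON workflow; the sorting logic is illustrative of what any DAG runner does, not Colmena's actual implementation:

```python
from graphlib import TopologicalSorter

# (upstream, downstream) pairs, taken from the "edges" array above.
edges = [("fetch_data", "analyze"), ("analyze", "process")]

# TopologicalSorter expects a mapping of node -> set of predecessors.
graph: dict[str, set[str]] = {}
for upstream, downstream in edges:
    graph.setdefault(downstream, set()).add(upstream)
    graph.setdefault(upstream, set())

# static_order() yields nodes so every node appears after its dependencies.
order = list(TopologicalSorter(graph).static_order())
print(order)  # ['fetch_data', 'analyze', 'process']
```

Because the edges form a simple chain here, the order is fully determined: `fetch_data` runs first, its output feeds `analyze`, and `process` consumes the result.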
No SDK setup, no per-provider boilerplate. Install once, call anything.
The most useful features in the smallest possible API surface. No boilerplate, no learning curve.
Change one string and your agent switches providers. No new imports, no new clients, no refactoring.
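As a sketch of why a one-string change is enough (hypothetical helper, not part of Colmena's API; the model names are the ones used in the examples above), the request shape can stay identical across providers:

```python
# Hypothetical illustration: only the provider string and its default model vary.
DEFAULT_MODELS = {
    "openai": "gpt-4o",
    "anthropic": "claude-3-sonnet-20240229",
    "gemini": "gemini-2.0-flash",
}

def build_call(provider: str, prompt: str) -> dict:
    """Assemble the same request shape regardless of provider."""
    return {
        "provider": provider,
        "model": DEFAULT_MODELS[provider],
        "messages": [{"role": "user", "content": prompt}],
    }

# Switching providers is literally a one-string change:
openai_call = build_call("openai", "Summarize this repo.")
gemini_call = build_call("gemini", "Summarize this repo.")
```

The surrounding agent logic never touches the provider-specific parts, which is the property the library is claiming.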
Pick your language. Install. Build.
Colmena is open source, MIT licensed, and built by the Startti team. Start for free — no credit card, no setup, just code.