Quick Start
Five lines from install to a working consensus call.
1. Install
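Assuming the distribution name matches the import name used throughout this page:

```shell
pip install executionkit
```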
2. Your first pattern
```python
import asyncio
import os

from executionkit import Provider, consensus


async def main() -> None:
    async with Provider(
        base_url="https://api.openai.com/v1",
        api_key=os.environ["OPENAI_API_KEY"],
        model="gpt-4o-mini",
    ) as provider:
        result = await consensus(
            provider,
            "What is the capital of France? Answer in one word.",
            num_samples=3,
        )
        print(result.value)                        # Paris
        print(result.metadata["agreement_ratio"])  # 1.0
        print(result.cost)  # TokenUsage(input_tokens=..., output_tokens=..., llm_calls=3)


asyncio.run(main())
```
`Provider` opens an HTTP client on first use; the async context manager closes it cleanly. You can also call `await provider.aclose()` directly.
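The `agreement_ratio` above comes from simple majority voting over the sampled answers. Conceptually (a hypothetical illustration, not the library's implementation):

```python
from collections import Counter

# Sample several candidate answers, keep the majority answer, and report
# what share of the samples agreed with it.
samples = ["Paris", "Paris", "Paris"]
winner, count = Counter(samples).most_common(1)[0]
agreement = count / len(samples)
print(winner, agreement)  # Paris 1.0
```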
3. Pick a different pattern
```python
from executionkit import Tool, react_loop


async def get_weather(city: str) -> str:
    return f"Weather in {city}: 18°C, light rain."


weather = Tool(
    name="get_weather",
    description="Look up current weather for a city.",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
        "additionalProperties": False,
    },
    execute=get_weather,
)

# Inside the `async with Provider(...) as provider:` block from step 2:
result = await react_loop(provider, "What's the weather in Paris?", tools=[weather])
print(result.value)
print(result.metadata["tool_calls_made"])  # 1
```
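Under the hood, a ReAct loop repeatedly matches the model's proposed tool call to a registered tool by name, runs it, and feeds the result back. The dispatch step can be sketched like this (a self-contained illustration, not the library's code — `dispatch` and `TOOLS` are hypothetical names):

```python
import asyncio
import json


async def get_weather(city: str) -> str:
    return f"Weather in {city}: 18°C, light rain."

# Registry mapping tool names to their async implementations.
TOOLS = {"get_weather": get_weather}


async def dispatch(tool_call: dict) -> str:
    # Look up the tool the model asked for and call it with the
    # JSON-encoded arguments the model supplied.
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return await fn(**args)


call = {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})}
print(asyncio.run(dispatch(call)))  # Weather in Paris: 18°C, light rain.
```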
Patterns also compose. `pipe` feeds each step's result into the next:

```python
from functools import partial

from executionkit import pipe, consensus, refine_loop

# Inside the same async context:
result = await pipe(
    provider,
    "Explain gradient descent in simple terms.",
    partial(consensus, num_samples=3),
    partial(refine_loop, target_score=0.9),
)
print(result.value)
print(result.cost)  # Cumulative across both steps
```
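The chaining idea behind `pipe` is simple: each step is an async callable taking the provider plus the previous step's value, with `partial` pre-binding per-step options. A minimal self-contained sketch (`pipe_sketch` and `shout` are hypothetical stand-ins):

```python
import asyncio
from functools import partial


async def pipe_sketch(provider, prompt, *steps):
    # Thread the value through each step in order.
    value = prompt
    for step in steps:
        value = await step(provider, value)
    return value


async def shout(provider, text, suffix=""):
    return text.upper() + suffix


result = asyncio.run(pipe_sketch(None, "hi", partial(shout, suffix="!"), shout))
print(result)  # HI!
```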
4. Track cost across calls
```python
from executionkit import Kit

kit = Kit(provider)
await kit.consensus("Classify: ...", num_samples=3)
await kit.refine("Summarise: ...")
print(kit.usage)  # TokenUsage(input_tokens=..., output_tokens=..., llm_calls=...)
```
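Accumulating usage across calls amounts to component-wise addition of token counts. A hypothetical stand-in for `TokenUsage` shows the idea:

```python
from dataclasses import dataclass


@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0
    llm_calls: int = 0

    def __add__(self, other: "Usage") -> "Usage":
        # Sum each counter independently.
        return Usage(
            self.input_tokens + other.input_tokens,
            self.output_tokens + other.output_tokens,
            self.llm_calls + other.llm_calls,
        )


# Two calls' worth of usage rolled into one running total.
total = Usage() + Usage(120, 30, 3) + Usage(80, 45, 1)
print(total)  # Usage(input_tokens=200, output_tokens=75, llm_calls=4)
```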
5. Sync wrappers (outside async)
```python
from executionkit import consensus_sync

result = consensus_sync(provider, "What is 2 + 2?")
print(result.value)
```
The sync wrappers raise `RuntimeError` when called inside a running event loop (e.g. Jupyter) — use `await` directly there.
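The reason: a sync wrapper can only block on a coroutine when no event loop is already running in its thread. A minimal sketch of the check such wrappers typically perform (`run_sync` is a hypothetical name, not the library's implementation):

```python
import asyncio


def run_sync(coro):
    # If there's no running loop, we can safely block with asyncio.run();
    # if a loop is already running (e.g. in Jupyter), blocking here would
    # deadlock it, so refuse and tell the caller to use `await`.
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro)
    raise RuntimeError("already inside an event loop; use `await` instead")


async def answer():
    return "4"


print(run_sync(answer()))  # 4
```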
What next
- Provider Setup — configure OpenAI, Ollama, Groq, Together, GitHub Models, Azure.
- Patterns Overview — pick the right pattern for your problem.
- Recipes — failover, cost-aware routing, pattern chaining.