Adapters¶
The provider layer: HTTP client, structural protocols, and response types.
Provider¶
The default OpenAI-compatible HTTP client. Speaks /chat/completions JSON. Uses stdlib urllib by default; switches to httpx.AsyncClient (with connection pooling) when httpx is installed.
executionkit.provider.Provider
dataclass
¶
Provider(base_url: str, model: str, api_key: str = '', default_temperature: float = 0.7, default_max_tokens: int = 4096, timeout: float = 120.0)
Universal LLM provider. Posts JSON, parses JSON. No SDK needed.
Works with any OpenAI-compatible endpoint: OpenAI, Azure, Ollama, Together, Groq, GitHub Models, etc.
aclose
async
¶
Release the underlying HTTP client.
Call this when the provider is no longer needed (or use it as an async context manager instead).
Source code in executionkit/provider.py
complete
async
¶
complete(messages: Sequence[dict[str, Any]], *, temperature: float | None = None, max_tokens: int | None = None, tools: Sequence[dict[str, Any]] | None = None, **kwargs: Any) -> LLMResponse
POST to {base_url}/chat/completions and parse the JSON response.
Source code in executionkit/provider.py
Protocols¶
LLMProvider and ToolCallingProvider are @runtime_checkable Protocols. Any object matching the interface satisfies the protocol — no inheritance required.
executionkit.provider.LLMProvider ¶
Bases: Protocol
Structural protocol for any LLM backend.
Any class with a matching complete signature satisfies this protocol
via structural subtyping (PEP 544) — no explicit inheritance required.
executionkit.provider.ToolCallingProvider ¶
Bases: LLMProvider, Protocol
Extension of LLMProvider for providers that support tool calling.
The built-in `Provider` satisfies this protocol via its
`supports_tools` attribute. Pass to `react_loop` to unlock
tool-calling patterns.
Response types¶
executionkit.provider.LLMResponse
dataclass
¶
LLMResponse(content: str, tool_calls: tuple[ToolCall, ...] = tuple(), finish_reason: str = 'stop', usage: MappingProxyType[str, Any] = (lambda: MappingProxyType({}))(), raw: Any = None)
Parsed LLM completion response.
Handles both OpenAI (prompt_tokens / completion_tokens) and
Anthropic (input_tokens / output_tokens) usage key formats.
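The key normalization described above can be sketched as follows (an illustrative helper with a made-up name, not executionkit's actual code):

```python
from typing import Any


# Hypothetical helper: accept either OpenAI-style or Anthropic-style
# usage keys and return a uniform (input, output) token pair.
def normalize_usage(usage: dict[str, Any]) -> tuple[int, int]:
    input_tokens = usage.get("input_tokens", usage.get("prompt_tokens", 0))
    output_tokens = usage.get("output_tokens", usage.get("completion_tokens", 0))
    return input_tokens, output_tokens


print(normalize_usage({"prompt_tokens": 12, "completion_tokens": 5}))  # (12, 5)
print(normalize_usage({"input_tokens": 12, "output_tokens": 5}))       # (12, 5)
```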
executionkit.provider.ToolCall
dataclass
¶
A single tool invocation extracted from an LLM response.
MockProvider¶
For unit tests. Returns canned responses, records every call, and never touches the network.
executionkit._mock.MockProvider
dataclass
¶
Test double implementing LLMProvider and ToolCallingProvider.
Accepts a list of responses (strings or LLMResponse objects) and
returns them in order, cycling when exhausted. Optionally raises a
configured exception to test error paths.
last_call
property
¶
Most recent call record, or None if no calls yet.
complete
async
¶
complete(messages: Sequence[dict[str, Any]], *, temperature: float | None = None, max_tokens: int | None = None, tools: Sequence[dict[str, Any]] | None = None, **kwargs: Any) -> LLMResponse
Return the next pre-configured response or raise the configured exception.
Source code in executionkit/_mock.py
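The ordering-and-cycling behavior can be sketched with a stripped-down stand-in (illustrative only; the real `MockProvider` lives in `executionkit._mock` and returns `LLMResponse` objects rather than plain strings):

```python
import asyncio
from dataclasses import dataclass, field
from itertools import cycle
from typing import Any, Sequence


# Stripped-down stand-in for MockProvider's documented behavior:
# canned responses returned in order, cycling when exhausted, and
# every call recorded for later inspection.
@dataclass
class TinyMock:
    responses: Sequence[str]
    calls: list[dict[str, Any]] = field(default_factory=list)

    def __post_init__(self) -> None:
        self._iter = cycle(self.responses)

    async def complete(self, messages, **kwargs) -> str:
        self.calls.append({"messages": messages, **kwargs})
        return next(self._iter)


async def demo() -> None:
    mock = TinyMock(responses=["yes", "no"])
    for prompt in ("a?", "b?", "c?"):
        print(await mock.complete([{"role": "user", "content": prompt}]))
    print(len(mock.calls))


asyncio.run(demo())  # prints yes, no, yes, 3
```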
Custom adapter checklist¶
Implement a custom provider in three steps:
- Define a class with an async `complete` method matching `LLMProvider`:

  ```python
  from executionkit.provider import LLMResponse

  class MyProvider:
      async def complete(
          self,
          messages,
          *,
          temperature=None,
          max_tokens=None,
          tools=None,
          **kwargs,
      ) -> LLMResponse:
          ...
  ```
- Return `LLMResponse(content=..., usage={...})`. The `usage` mapping should contain at least `input_tokens` and `output_tokens` so cost tracking works; an empty dict is acceptable (cost will be `0`).
- For tool calling, set `supports_tools = True` and populate `LLMResponse.tool_calls` from the upstream response. `react_loop` will refuse providers without `supports_tools=True`.
The structural-protocol design means no registration step — pass your provider directly to any pattern.
Notes on the default Provider¶
- **API key masking.** `Provider.__repr__` always shows `api_key='***'` regardless of the actual key length or prefix. Keys are never written to repr output, log lines, or exception messages.
- **Credential redaction in errors.** HTTP error messages are scanned for credential-shaped substrings (matching `sk-...`, `bearer ...`, `token=...`, etc.) and redacted to `[REDACTED]` before being raised.
- **Connection lifecycle.** `Provider` supports `async with` and `await provider.aclose()`. With the `httpx` backend, this closes the underlying `AsyncClient` cleanly.
- **Retries are at the call layer, not the HTTP layer.** Use `RetryConfig` on the pattern call (e.g. `consensus(..., retry=RetryConfig(...))`).
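The credential redaction described above can be approximated with a small regex pass (an illustrative pattern covering the listed credential shapes; executionkit's actual implementation may differ):

```python
import re

# Illustrative pattern for the credential shapes listed above
# (sk-..., bearer ..., token=...); not the library's real regex.
_CREDENTIAL = re.compile(r"sk-[\w-]+|[Bb]earer\s+\S+|token=\S+")


def redact(message: str) -> str:
    return _CREDENTIAL.sub("[REDACTED]", message)


print(redact("401 Unauthorized: key sk-test123 was rejected"))
# 401 Unauthorized: key [REDACTED] was rejected
```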