react_loop() runs the standard think → act → observe loop with tool calling. Each round, the LLM may either return a final answer or request one or more tool calls. Tool calls are executed (with timeout and JSON-Schema argument validation), their results appended to the conversation, and the loop continues until the model answers or max_rounds is hit.
sequenceDiagram
    participant App
    participant react
    participant Provider
    participant Tool
    App->>react: react_loop(provider, prompt, tools, max_rounds=8)
    loop until final answer or max_rounds
        react->>Provider: complete(messages, tools=schemas)
        Provider-->>react: response (may contain tool_calls)
        alt no tool_calls
            react-->>App: PatternResult(value=content, ...)
        else has tool_calls
            react->>react: append assistant msg with tool_calls
            loop each tool_call
                react->>react: validate args against JSON Schema
                react->>Tool: execute(**args) with timeout
                Tool-->>react: result string (truncated to max_observation_chars)
                react->>react: append role="tool" message
            end
        end
    end
    react-->>App: raise MaxIterationsError
import asyncio
import ast
import operator
import os

from executionkit import Provider, Tool, react_loop

# Safe AST-based math evaluator — never use eval() on LLM output.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def _safe_eval(node: ast.AST) -> float:
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp):
        return _OPS[type(node.op)](_safe_eval(node.left), _safe_eval(node.right))
    if isinstance(node, ast.UnaryOp):
        return _OPS[type(node.op)](_safe_eval(node.operand))
    raise ValueError(f"Unsupported expression node: {type(node).__name__}")

async def calculator(expression: str) -> str:
    tree = ast.parse(expression, mode="eval")
    return str(_safe_eval(tree.body))

calc = Tool(
    name="calculator",
    description="Evaluate an arithmetic expression. Supports + - * / ** and unary -.",
    parameters={
        "type": "object",
        "properties": {"expression": {"type": "string"}},
        "required": ["expression"],
        "additionalProperties": False,
    },
    execute=calculator,
    timeout=2.0,
)

async def main() -> None:
    async with Provider(
        base_url="https://api.openai.com/v1",
        api_key=os.environ["OPENAI_API_KEY"],
        model="gpt-4o-mini",
    ) as provider:
        result = await react_loop(
            provider,
            "What is (17 * 83) + (12 ** 3)? Use the calculator.",
            tools=[calc],
            max_rounds=4,
        )
        print(result.value)                           # 3139
        print(result.metadata["rounds"])              # 2
        print(result.metadata["tool_calls_made"])     # 2

asyncio.run(main())
@dataclass(frozen=True, slots=True)
class Tool:
    name: str
    description: str
    parameters: Mapping[str, Any]            # JSON Schema for arguments
    execute: Callable[..., Awaitable[str]]   # async function returning a string
    timeout: float = 30.0
Arguments are validated against parameters (JSON Schema) before execute is called. Validation covers required, additionalProperties: false, and primitive type checks (string, integer, number, boolean, array, object) — using stdlib only, no jsonschema dependency.
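To make the validation behavior concrete, here is a minimal sketch of what a stdlib-only validator covering those three checks could look like. This is an illustration, not executionkit's actual code; the helper name `validate_args` is hypothetical.

```python
from typing import Any, Mapping

# Hypothetical sketch of stdlib-only argument validation: required fields,
# additionalProperties: false, and primitive type tags. Not executionkit's
# actual implementation.
_PRIMITIVES = {
    "string": str,
    "integer": int,
    "number": (int, float),
    "boolean": bool,
    "array": list,
    "object": dict,
}

def validate_args(args: Mapping[str, Any], schema: Mapping[str, Any]) -> list[str]:
    """Return a list of error messages; an empty list means the args pass."""
    errors: list[str] = []
    props = schema.get("properties", {})
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required field: {key}")
    if schema.get("additionalProperties") is False:
        for key in args:
            if key not in props:
                errors.append(f"unexpected field: {key}")
    for key, value in args.items():
        expected = props.get(key, {}).get("type")
        py_type = _PRIMITIVES.get(expected)
        if py_type is None:
            continue
        # bool is a subclass of int in Python, so guard numeric checks explicitly
        if expected in ("integer", "number") and isinstance(value, bool):
            errors.append(f"field '{key}' should be {expected}, got boolean")
        elif not isinstance(value, py_type):
            errors.append(f"field '{key}' should be {expected}")
    return errors
```

Returning a list of messages rather than raising lets the caller fold all violations into a single error observation for the model.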
execute must be async and return a string. Convert non-string results yourself.
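For a tool whose natural result is structured data, a thin async wrapper that serializes the result is enough. `fetch_weather` below is a made-up stand-in for whatever your underlying function is:

```python
import asyncio
import json

# Hypothetical underlying function that returns structured data.
async def fetch_weather(city: str) -> dict:
    return {"city": city, "temp_c": 21}

# Tool-facing wrapper: react_loop requires a string, so serialize it yourself.
async def weather_tool(city: str) -> str:
    result = await fetch_weather(city)
    return json.dumps(result)

print(asyncio.run(weather_tool("Oslo")))  # {"city": "Oslo", "temp_c": 21}
```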
O(rounds) LLM calls. Bounded by max_rounds. Each round = one completion regardless of how many tools are called.
Sequential. Each round depends on prior tool outputs — no parallelism across rounds. (Tools within a round run sequentially in the current implementation.)
Context grows with every round unless max_history_messages is set. For loops > ~20 rounds or with verbose tools, set max_history_messages to bound the prompt size.
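One plausible trimming strategy behind a setting like max_history_messages is "keep the system message plus the last N messages" — sketched below as a hypothetical helper, not executionkit's actual code:

```python
# Hypothetical sketch: retain the system message (if present) plus the
# most recent max_history_messages entries.
def trim_history(messages: list[dict], max_history_messages: int) -> list[dict]:
    head = messages[:1] if messages and messages[0].get("role") == "system" else []
    tail = messages[len(head):]
    return head + tail[-max_history_messages:]
```

Note that naive trimming like this can separate an assistant message carrying tool_calls from its matching role="tool" results, which some provider APIs reject; a real implementation has to trim at round boundaries.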
Tool failures don't crash the loop. Unknown tools, schema violations, timeouts, and exceptions return an error string as the observation; the LLM gets a chance to recover.
Never eval() LLM output. The example above uses a safe AST walker. Treat all tool inputs as adversarial.
Tool errors return only the exception class name to the LLM (e.g. "Tool 'X' failed: TimeoutError"), not the full message — this prevents leaking internal details to the model.
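The formatting that behavior implies is simple; a sketch (the helper name is hypothetical):

```python
# Surface only the exception class name in the observation — the message
# itself may contain paths, hostnames, or other internals the model
# should not see.
def format_tool_error(tool_name: str, exc: BaseException) -> str:
    return f"Tool '{tool_name}' failed: {type(exc).__name__}"

print(format_tool_error("calculator", TimeoutError("read /etc/secrets timed out")))
# Tool 'calculator' failed: TimeoutError
```

The trade-off is debuggability: log the full exception on your side, since the model only ever sees the class name.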
JSON-Schema validation runs before execute is called. Tools with additionalProperties: false reject unknown keys; missing required fields are caught.
Tool timeout defaults to 30 s but is overridable per-call via tool_timeout=. Set short timeouts for network tools.
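The behavior of a per-tool timeout can be illustrated with plain asyncio.wait_for, which is presumably the kind of mechanism the loop uses (the tool and wrapper below are made up for illustration):

```python
import asyncio

# A deliberately slow hypothetical tool.
async def slow_tool(query: str) -> str:
    await asyncio.sleep(10)
    return "done"

# What a timeout looks like from the loop's perspective: the tool call is
# cancelled and the failure becomes an error-string observation.
async def run_with_timeout() -> str:
    try:
        return await asyncio.wait_for(slow_tool("x"), timeout=0.05)
    except asyncio.TimeoutError:
        return "Tool 'slow_tool' failed: TimeoutError"

print(asyncio.run(run_with_timeout()))  # Tool 'slow_tool' failed: TimeoutError
```

A 0.05 s budget here fails immediately; for real network tools a short timeout of a few seconds keeps one stuck call from stalling the whole round.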