Autonomous agents are powerful precisely because they act without waiting to be asked. They call APIs, run queries, post messages, and trigger workflows on their own. That speed is the point.
Until it isn't.
As soon as an agent interacts with something consequential, whether that is a financial transaction, a production database, or a customer-facing message, the priorities shift. Speed becomes less important than control. The question shifts from "how fast can this run?" to "who approved this?"
This is where many frameworks fall short. Human oversight is often treated as something to layer on later, rather than something designed into the system from the beginning.
Agno takes a different approach: human oversight is a first-class design primitive, built directly into tools, workflows, and the approvals system. The result is a system where you get the full power of autonomous agents, along with the ability to draw a clear line between what runs freely and what waits for a human.
What are the three levels of human-in-the-loop oversight for AI agents?
Agno's Human-in-the-Loop (HITL) capabilities operate across three distinct layers: tools, workflows, and approvals. Understanding where to apply each one is what separates a well-governed agent from one that's just hoping nothing goes wrong.
Tool-level oversight
At the most granular level, tool-level HITL allows you to pause execution right before a specific function runs. This gives a human the opportunity to review the action, approve it, reject it, or provide additional input before anything actually happens. It is ideal for situations where a single action carries risk, such as sending a message or executing a transaction.
Workflow-level oversight
At a broader level, workflow-level oversight applies to entire steps within a multi-stage process. Instead of reviewing individual actions, you can pause the workflow at key points and evaluate progress before allowing it to continue. This is especially useful when the outcome depends on a sequence of decisions rather than a single call.
Approval-level oversight
Built on top of tool-level confirmation, approvals introduce a formal sign-off layer for actions that require more than a user's consent. By using the @approval decorator, you can mark tools as requiring administrator or compliance review before execution when type="required", or designate them for audit logging without blocking execution when type="audit". Pending approvals are exposed through the Approvals API, where they can be reviewed and resolved, creating a persistent record that remains available for auditing over time.
In practice, most production systems rely on all three layers working together. Each is applied with a level of granularity that matches the risk of the operation, creating a balance between autonomy and control.
What types of human input can interrupt an AI agent mid-run?
Not every pause in an agent's execution is about a simple yes or no decision. In practice, there are a few different ways a human might need to step in, depending on what the agent is trying to do and what is missing.
Agno supports three distinct modes of interaction during a run, each designed for a different kind of intervention.
User Confirmation
The approve/reject model. The agent presents what it wants to do; the human says yes or no.
User Input
The agent pauses and requests the specific information it needs to continue, whether that is a value it could not infer, a preference the user needs to provide, or additional context that was not included in the original prompt.
External Tool Execution
A pause that represents a handoff. The agent identifies that a step must be completed outside its own capabilities, such as a human performing an action in an external system. It pauses the run, delegates that step, and then resumes once the action has been completed.
Together, these three modes cover the full spectrum of human oversight patterns, from quick approvals to deeper input and, when needed, full handoffs for actions that cannot or should not be automated.
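As a framework-agnostic illustration, the three modes can be modeled as distinct requirement types that a host application dispatches on. This is plain Python, not Agno's actual classes; Agno's real RunRequirement API differs, and the shapes below only sketch the idea:

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical models of the three intervention modes (not Agno's API).

@dataclass
class Confirmation:          # approve/reject a proposed tool call
    tool_name: str
    approved: bool | None = None

@dataclass
class UserInput:             # agent asks the human for missing values
    fields: list[str]
    values: dict = field(default_factory=dict)

@dataclass
class ExternalExecution:     # a step the human completes outside the agent
    instructions: str
    result: str | None = None

def handle(requirement):
    """Dispatch on the kind of intervention the paused run needs."""
    if isinstance(requirement, Confirmation):
        requirement.approved = True                       # e.g. from a UI prompt
    elif isinstance(requirement, UserInput):
        requirement.values = {f: "..." for f in requirement.fields}
    elif isinstance(requirement, ExternalExecution):
        requirement.result = "done"                       # reported back by the human
    return requirement
```

The point of the sketch is that each mode resolves differently: a confirmation carries a decision, user input carries values, and an external handoff carries a completion result.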
Tool-level HITL: How to add human confirmation to a tool before it executes
The most direct way to introduce human oversight in Agno is at the tool level. By adding requires_confirmation=True to a tool function, you create a checkpoint right before that function is allowed to run. When an agent reaches that point, execution pauses and a RunRequirement is surfaced.
This is a structured request for human input that clearly shows what the agent is about to do. Nothing proceeds until someone reviews the request and decides how to handle it.
```python
import httpx
from agno.agent import Agent
from agno.tools import tool

@tool(requires_confirmation=True)
def send_customer_email(customer_id: str, subject: str, body: str) -> str:
    """Send an email to a customer."""
    # This will not execute until a human confirms
    response = httpx.post(
        "https://api.yourservice.com/emails",
        json={"customer_id": customer_id, "subject": subject, "body": body},
    )
    return response.json()

agent = Agent(tools=[send_customer_email])
```
How this pause appears depends on how the agent is running. In a streaming flow, it shows up as a RunPaused event. In a synchronous flow, it is returned directly in the agent's response. In both cases, the behavior is consistent. The agent stops, presents the intended action, and waits for a decision.
How to handle a paused agent run in your application code
When an agent reaches a point that requires confirmation, your application receives a structured RunRequirement object that clearly describes the pending action. From that moment, control shifts to your application to decide how to proceed.
```python
from agno.agent import Agent, RunResponse
from agno.tools import tool
from rich.prompt import Prompt

@tool(requires_confirmation=True)
def delete_record(record_id: str) -> str:
    """Permanently delete a record from the database."""
    # deletion logic here
    return f"Record {record_id} deleted."

agent = Agent(tools=[delete_record])
response: RunResponse = agent.run("Delete record ID 8821")

while response.is_paused:
    for requirement in response.requirements:
        print(f"\nAgent wants to call: {requirement.tool_name}")
        print(f"With arguments: {requirement.tool_args}")
        decision = Prompt.ask("Allow this action?", choices=["yes", "no"])
        if decision == "yes":
            requirement.confirm()
        else:
            requirement.reject()
    response = agent.continue_run(response)
```
The confirm() / reject() pattern keeps your application code clean. You do not need to manage execution state yourself or restart the agent from the beginning. Once a decision is made, the agent resumes exactly where it paused, continuing the flow as if it had never been interrupted.
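The decision does not have to come from an interactive prompt. One common design is to route it through a policy function that auto-approves known-safe tools and escalates everything else to a human. A plain-Python sketch of that idea (the tool names are illustrative, not part of Agno):

```python
# Illustrative policy layer: auto-approve known-safe tools, escalate the rest.
SAFE_TOOLS = {"search_docs", "summarize_text"}

def decide(tool_name: str, ask_human) -> bool:
    """Return True to confirm the pending tool call, False to reject it."""
    if tool_name in SAFE_TOOLS:
        return True                    # low-risk: no human needed
    return ask_human(tool_name)        # everything else escalates

# Inside the pause-handling loop you would then call
# requirement.confirm() if the policy returns True, requirement.reject() otherwise.
print(decide("search_docs", lambda name: False))  # → True
```

This keeps the approval rules in one place, so tightening or loosening them does not require touching the pause-handling loop.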
How to stream a HITL flow
If you're streaming agent responses, HITL works the same way. You handle the pause at the event level instead of on the response object.
```python
for run_event in agent.run("Delete record ID 8821", stream=True):
    if run_event.is_paused:
        for requirement in run_event.active_requirements:
            decision = Prompt.ask("Allow this action?", choices=["yes", "no"])
            if decision == "yes":
                requirement.confirm()
            else:
                requirement.reject()
        response = agent.continue_run(
            run_id=run_event.run_id,
            requirements=run_event.requirements,
            stream=True,
        )
```
How to use HITL in multi-agent teams
HITL works identically when your agent is part of a team. If a member agent calls a tool that requires confirmation, the entire team run pauses. Each requirement includes member_agent_name so you know exactly which agent triggered it:
```python
run_response = team.run("Process refunds for churned users")

if run_response.is_paused:
    for requirement in run_response.active_requirements:
        if requirement.needs_confirmation:
            print(
                f"Agent '{requirement.member_agent_name}' wants to call "
                f"'{requirement.tool_execution.tool_name}'"
            )
            requirement.confirm()
    run_response = team.continue_run(run_response)
```
Tools attached directly to the team leader, not just member agents, also support HITL in the same way.
How to require human approval for MCP tools without modifying each one
The same confirmation model applies to Model Context Protocol (MCP) tools, with controls available at the toolkit level. Instead of modifying each tool one by one, you can define approval requirements across an entire set of tools in a single place.
```python
from agno.tools.mcp import MCPTools

mcp_tools = MCPTools(url="https://your-mcp-server.com/sse")

# Set requires_confirmation after initialization
# (MCP tools aren't loaded until build_tools() is called)
mcp_tools.requires_confirmation_tools = ["delete_file", "send_message", "update_record"]
```
This becomes especially valuable when working with third-party MCP servers, where you do not control how the tools are defined. You can still take advantage of the full toolkit while enforcing clear approval boundaries around the actions that matter most.
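The mechanism can be pictured as a dispatcher that checks each tool's name against one confirmation list before executing it. This is a simplified plain-Python sketch of the idea, not Agno's internals:

```python
# Simplified sketch of toolkit-level gating: one confirmation list
# guarding many tools, regardless of how each tool was defined.
REQUIRES_CONFIRMATION = {"delete_file", "send_message", "update_record"}

def call_tool(name: str, run_tool, confirm) -> str:
    """Run a tool, pausing for human confirmation if its name is listed."""
    if name in REQUIRES_CONFIRMATION and not confirm(name):
        return f"{name}: rejected by reviewer"
    return run_tool()

result = call_tool("delete_file", lambda: "deleted", confirm=lambda n: False)
print(result)  # → delete_file: rejected by reviewer
```

Because the gate keys on the tool's name rather than its definition, it works even for tools you cannot modify.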
Workflow-level HITL: How to insert a human approval checkpoint between workflow steps
Tool-level confirmation gives you precise control over individual actions, but not every situation comes down to a single function call. In many cases, you need a deliberate pause between stages of a larger process, after one step completes and before the next begins.
Agno supports this through workflow-level confirmation. By setting requires_confirmation=True on a workflow Step, you introduce a checkpoint at that exact point in the sequence. When the workflow reaches it, execution pauses and returns a WorkflowRunOutput with is_paused=True, signaling that it is waiting for human input before continuing.
```python
from agno.workflows import Workflow, Step

data_collection_step = Step(name="Collect Data", agent=data_agent)

review_step = Step(
    name="Send to Customers",
    agent=send_agent,
    requires_confirmation=True,
    confirmation_message="Review the data above before sending to customers.",
)

workflow = Workflow(steps=[data_collection_step, review_step])
run_output = workflow.run("Run the weekly customer report")

if run_output.is_paused:
    for requirement in run_output.steps_requiring_confirmation:
        print(f"\nStep '{requirement.step_name}' requires confirmation")
        print(f"Message: {requirement.confirmation_message}")
        user_input = input("Continue? (yes/no): ").strip().lower()
        if user_input in ("yes", "y"):
            requirement.confirm()
        else:
            requirement.reject()
    run_output = workflow.continue_run(run_output)
```
This workflow-level approval model is particularly effective in multi-agent pipelines where one agent produces output that a human needs to review before a downstream agent acts on it. You create a pause at the natural boundary between those stages, ensuring that the right decisions are made at the right moment.
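The stage-boundary pattern itself is simple enough to sketch without any framework. The following is a framework-agnostic illustration, not Agno's Workflow API: stage one runs freely, and a reviewer gates stage two.

```python
# Framework-agnostic sketch of a checkpoint between pipeline stages.
def run_pipeline(collect, send, review) -> str:
    data = collect()              # upstream agent produces output autonomously
    if not review(data):          # human checkpoint at the stage boundary
        return "halted at review"
    return send(data)             # downstream agent acts only after sign-off

print(run_pipeline(lambda: "report", lambda d: f"sent {d}", review=lambda d: True))  # → sent report
```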
Runtime-level approvals: How to add admin approval gates on top of tool confirmation
User confirmation gives individuals control over whether a specific tool call proceeds, but some actions require more than a simple yes from a user. They may need formal sign-off from an administrator, a compliance reviewer, or an external approvals system. Agno's approvals layer is designed to handle these cases by providing a structured way to manage and enforce those additional approval requirements.
Approvals work on top of tool-level oversight. Where requires_confirmation=True pauses a run and asks the user, the @approval decorator routes the request to a dedicated approvals system before execution can proceed. The run stays paused—and the state is persisted to your database—until a human explicitly resolves it through the Approvals API.
There are two modes:
Setting type="required" creates a blocking gate where the tool does not execute until an administrator approves or rejects the request through the Approvals API. This approach is appropriate for high-stakes operations such as fund transfers, schema changes, or any action that requires a formal audit trail.
Setting type="audit" is non-blocking, so the tool executes immediately while a record is created in the approvals system for compliance logging. This approach does not introduce a gate and instead provides a traceable record of the action.
```python
from agno.agent import Agent
from agno.tools import tool, approval
from agno.models.openai import OpenAIChat

@tool
@approval(type="required")
def process_refund(customer_id: str, amount: float) -> str:
    """Process a refund for a customer. Requires admin approval."""
    # Will not execute until approved via the Approvals API
    return f"Refund of ${amount} processed for customer {customer_id}"

@tool
@approval(type="audit")
def export_customer_data(customer_id: str) -> str:
    """Export customer data. Executes immediately but creates an audit record."""
    return f"Data exported for customer {customer_id}"

agent = Agent(
    model=OpenAIChat(id="gpt-4o"),
    tools=[process_refund, export_customer_data],
)
```
Resolving an approval
An admin resolves the approval by updating the record through the database provider. The expected_status="pending" guard prevents race conditions if two admins try to resolve the same request at the same time, so only one update succeeds.
```python
db.update_approval(
    approval_id,
    expected_status="pending",  # Prevents race conditions
    status="approved",          # or "rejected"
    resolved_by="admin_user_id",
    resolved_at=int(time.time()),
)
```
Once the approval is resolved, continue the run using the original run_id. The SDK checks the resolution before proceeding, and if the record is missing or still pending, continue_run raises a RuntimeError.
```python
run = agent.continue_run(run_id=run.run_id, requirements=run.requirements)
```
AgentOS: the admin panel, built in
The @approval decorator handles persistence and the resolution protocol, but you still have to build your own admin UI for querying and resolving pending approvals. AgentOS removes that burden entirely.
The code stays nearly identical. You point to Postgres and wrap your agent in AgentOS.
```python
from agno.agent import Agent
from agno.approval import approval
from agno.db.postgres import PostgresDb
from agno.models.openai import OpenAIChat
from agno.os import AgentOS
from agno.tools import tool

db = PostgresDb(db_url="postgresql+psycopg://ai:ai@localhost:5532/ai")

@approval
@tool(requires_confirmation=True)
def delete_user_data(user_id: str) -> str:
    """Permanently delete all data for a user. Requires admin approval."""
    return f"All data for user {user_id} has been deleted."

agent = Agent(
    id="data-manager",
    model=OpenAIChat(id="gpt-4o-mini"),
    tools=[delete_user_data],
    instructions=["You help users manage data operations."],
    db=db,
)

app = AgentOS(
    agents=[agent],
    db=db,
).get_app()
```
The AgentOS Control Panel automatically surfaces all pending approvals—the agent, tool, arguments, and requesting user—and lets admins approve or reject with a click. Full resolution history is tracked.
If you want to wire approvals into an existing internal tool or Slack bot, AgentOS also exposes a complete REST API for listing and resolving pending approvals.
This is what separates a prototype from a production system in regulated environments. User confirmation handles the "should this specific user proceed?" question. Approvals handle the "does this action have the organizational sign-off it needs?" question. The two layers are complementary: one governs individual tool calls in the moment, and the other enforces formal governance with persistent records that can be audited long after the fact.
When to use each layer
As a rule of thumb: reach for tool-level confirmation when a single action carries risk, workflow-level confirmation when a human should review progress at the boundary between stages, and runtime-level approvals when an action needs formal sign-off or a persistent audit trail.
What can you build with human-in-the-loop agent controls?
A customer communication agent with a review gate. It can draft emails, summaries, or reports entirely on its own, handling the bulk of the work without intervention. But before anything is actually sent, a human steps in to review and approve the final output. The agent moves quickly, but the human remains in control of what reaches the customer.
A data pipeline with a checkpoint. An agent can collect, transform, and prepare data end to end, but the final step that pushes results into production is gated behind confirmation. That pause creates a natural checkpoint where issues can be caught and corrected before they have downstream impact.
A finance agent with approval tiers. Routine queries and low-impact actions can run without interruption, while higher-stakes operations such as transfers, refunds, or account changes require approval. These thresholds are defined in code, but enforced consistently at runtime, ensuring that the right actions are always reviewed.
An infrastructure agent with a hard stop. An agent can analyze systems, generate plans, and propose changes, but anything that would modify production resources is stopped until a human explicitly approves it. This allows the agent to operate as a powerful assistant without ever taking irreversible action on its own.
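The approval-tier idea from the finance example can be sketched as a simple threshold policy. The limit and return values here are illustrative, not Agno configuration:

```python
# Illustrative approval tiers for a finance agent.
AUTO_LIMIT = 100.0   # refunds at or below this amount run without review

def refund_policy(amount: float) -> str:
    """Classify a refund as auto-executable or requiring admin approval."""
    if amount <= AUTO_LIMIT:
        return "auto"                # low-impact: executes immediately
    return "requires_approval"       # high-stakes: blocks for admin sign-off

print(refund_policy(25.0), refund_policy(5000.0))  # → auto requires_approval
```

In Agno terms, the "auto" tier maps to an unguarded tool (or type="audit" for logging), while the "requires_approval" tier maps to @approval(type="required").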
Why human oversight becomes more critical as AI agents get more capable
Agents are getting more capable, which means the consequences of unchecked actions are also growing. The same properties that make an agent useful—autonomy, speed, the ability to chain many actions together—are also what make an unreviewed agent risky in sensitive contexts.
This is why human oversight has become more important, not less. Human-in-the-loop isn't a limitation on what your agent can do. It's what makes it safe to give your agent more to do.
When approval boundaries are clearly defined in code and consistently enforced at the runtime level, you get the best of both: agents that move fast in the places where speed is safe, and agents that stop and wait in the places where oversight is required. The line between those two zones is yours to draw, based on the level of risk you are willing to accept and the level of control you want to maintain.
How to get started
Mark any tool function with requires_confirmation=True:
```python
from agno.tools import tool

@tool(requires_confirmation=True)
def your_sensitive_tool(param: str) -> str:
    """Your tool description."""
    ...
```
For MCP tools, set requires_confirmation_tools on the MCPTools instance after initialization. For workflows, add requires_confirmation=True to any Step that should pause before running. For admin-mediated approvals with a persistent audit trail, add @approval and point your agent at AgentOS.
The full Human-in-the-Loop documentation covers the complete RunRequirement API, streaming flows, workflow continuation, and AgentOS runtime enforcement, including ready-to-run cookbook examples.