When an agent needs to accomplish a task that's more involved than a single tool can handle, it often chains tools together. In these multi-step sequences, the output of one tool frequently becomes a necessary input for a subsequent tool. This flow of information creates dependencies: Tool B cannot do its job until Tool A provides the required data. Effectively managing these dependencies is fundamental to building sophisticated agents that can execute complex plans.

Imagine an agent tasked with planning a weekend trip. It might first use a `get_flight_prices` tool. The output of this tool, say the cheapest flight option with its dates and times, then becomes a significant input for a `book_hotel` tool, which needs the arrival and departure dates to find suitable accommodation. Without the flight details, the hotel booking tool is stuck. This is a common pattern: the successful execution of one step enables the next.

## Identifying and Passing Dependent Data

The agent, or the underlying orchestration logic you design, needs a way to understand and manage these data handoffs. There are generally two ways these dependencies are handled:

- **Agent-Driven Data Flow:** The LLM itself, as part of its reasoning process, can determine that the output of `tool_A` should be used as the `parameter_x` for `tool_B`. This relies heavily on well-written tool descriptions (as discussed in Chapter 1) that clearly specify what a tool outputs and what inputs it expects. The LLM generates the call to Tool B with the necessary data mapped from Tool A's result.
- **Orchestrator-Managed Data Flow:** In more structured agent frameworks or custom-built orchestrators, you might explicitly define how data flows between tools.
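For example, an orchestrator can declare these handoffs as plain data rather than leaving them to the LLM's reasoning. The sketch below is illustrative only; the tools are hypothetical stubs, and the pipeline format is one of many possible designs.

```python
# A minimal sketch of orchestrator-managed data flow.
# The tools here are hypothetical stand-ins for real tool implementations.

def fetch_user_profile(user_name):
    # Stub tool: pretend to look up a user.
    return {"user_id": "u123", "email": f"{user_name.lower()}@example.com"}

def fetch_order_history(user_id):
    # Stub tool: pretend to fetch orders for the given user.
    return {"orders": [{"id": "o1", "user": user_id}]}

# The pipeline declares, as data, which output field of an earlier step
# feeds which input parameter of a later step.
PIPELINE = [
    {"tool": fetch_user_profile, "inputs": {"user_name": "Alice"}, "save_as": "profile"},
    {"tool": fetch_order_history,
     # map profile.user_id -> the user_id parameter of fetch_order_history
     "inputs_from": {"user_id": ("profile", "user_id")}},
]

def run_pipeline(steps):
    results = {}
    last = None
    for step in steps:
        kwargs = dict(step.get("inputs", {}))
        # Resolve declared dependencies from earlier results.
        for param, (source, field) in step.get("inputs_from", {}).items():
            kwargs[param] = results[source][field]
        last = step["tool"](**kwargs)
        if "save_as" in step:
            results[step["save_as"]] = last
    return last

print(run_pipeline(PIPELINE))
```

Because the mapping lives in the pipeline definition rather than in a prompt, it is deterministic and easy to inspect when debugging.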
The orchestrator executes Tool A, captures its output, and then programmatically passes the relevant parts of that output to Tool B when it's time for Tool B to run.

Regardless of whether the LLM or an orchestrator is primarily managing the flow, the mechanism for passing data usually involves one of the following approaches.

## Direct Output-to-Input Mapping

This is the most straightforward method. The agent (or orchestrator) takes the direct output of a preceding tool and feeds it as an argument to a parameter of the next tool.

For instance, if `tool_A` returns a JSON object like `{"user_id": "123", "email": "user@example.com"}`, and `tool_B` needs a `user_identifier`, the system would map `tool_A_output.user_id` to the `user_identifier` parameter of `tool_B`.

```python
# Illustrative Python-like pseudocode
user_data_from_tool_A = agent.execute_tool("fetch_user_profile", user_name="Alice")
# user_data_from_tool_A might be: {"id": "u456", "preferences": ["music", "hiking"]}

if user_data_from_tool_A and user_data_from_tool_A.get("id"):
    recommendations = agent.execute_tool(
        "get_recommendations",
        user_id=user_data_from_tool_A["id"],
        categories=user_data_from_tool_A.get("preferences", [])
    )
    # Process recommendations
else:
    # Handle missing user_id or failed tool_A execution
    print("Could not retrieve user ID to get recommendations.")
```

In this snippet, the `id` field from `user_data_from_tool_A` is directly used as the `user_id` argument for `get_recommendations`.

## Using a Shared Context or Scratchpad

For more intricate sequences, or when multiple prior tool outputs contribute to a later tool's input, a shared context (sometimes called a "scratchpad" or "memory") can be very effective. Each tool can write its results to a well-defined location in this shared space.
Subsequent tools can then read from this context to gather their necessary inputs.

Consider an agent helping a user analyze sales data:

- `Tool_LoadData`: Loads sales figures from a CSV into the context as `context["sales_data"]`.
- `Tool_FilterData`: Takes `context["sales_data"]`, applies a filter (e.g., for a specific region), and writes the result to `context["filtered_sales_data"]`.
- `Tool_CalculateTotal`: Reads `context["filtered_sales_data"]`, calculates the total, and writes it to `context["total_sales_for_region"]`.
- `Tool_GenerateReport`: Reads `context["total_sales_for_region"]` and `context["filtered_sales_data"]` to create a summary.

```python
# Illustrative context usage
agent_context = {}

# Step 1: Get user location
location_data = agent.execute_tool("get_user_location")
# e.g., {"city": "London", "country": "UK"}
if location_data and location_data.get("city"):
    agent_context["user_city"] = location_data["city"]
else:
    # Handle failure to get location
    agent_context["user_city"] = "default_city"  # Fallback or error

# Step 2: Fetch weather based on location from context
if "user_city" in agent_context:
    weather_report = agent.execute_tool("get_weather_forecast", city=agent_context["user_city"])
    # weather_report might be: {"temperature_celsius": 15, "condition": "Cloudy"}
    if weather_report:
        agent_context["current_temp_celsius"] = weather_report.get("temperature_celsius")
        agent_context["current_condition"] = weather_report.get("condition")

# Step 3: Suggest activity based on weather from context
if "current_temp_celsius" in agent_context and "current_condition" in agent_context:
    activity_suggestion = agent.execute_tool(
        "suggest_activity",
        temperature=agent_context["current_temp_celsius"],
        weather_condition=agent_context["current_condition"]
    )
    print(f"Suggested activity: {activity_suggestion}")
```

While flexible, a shared context requires careful management to avoid naming collisions between keys and to ensure data isn't unintentionally overwritten or allowed to become stale.

## Data Transformation Between Tools

Sometimes the output format of one tool doesn't perfectly align with the input format required by the next. For example, `Tool_A` might output a temperature in Celsius, but `Tool_B` expects Fahrenheit. Or `Tool_A` returns a complex object, and `Tool_B` only needs a single field from it.

In such cases, a transformation step is needed. This transformation can be:

- **Performed by the LLM:** If the LLM is orchestrating the calls, it might be instructed (or infer) to reformat or extract data. For example: "Take the `temp_c` field from the weather tool's output, convert it to Fahrenheit, and use it as the `temperature` input for the clothing suggestion tool."
- **Handled by the orchestrator:** Your agent's orchestrating code can include small utility functions or steps to perform these transformations. This is often more reliable for precise numerical conversions or structural changes.
- **Built into the consuming tool:** Tool B could be designed to accept multiple formats or to perform common transformations internally, but this can make Tool B more complex.

The objective is to ensure that the data passed to a tool is in a structure and format the tool can reliably process. Clear input and output schemas for your tools, as detailed in Chapter 1, significantly simplify this.

## Visualizing Data Flow in Tool Chains

Understanding dependencies becomes easier when you can visualize the flow of data.
For a sequence of tools, you can think of it as a directed graph where nodes are tools and edges represent data being passed.

```dot
digraph G {
    rankdir=TB;
    node [shape=box, style="filled", fillcolor="#a5d8ff", fontname="Arial"];
    edge [fontname="Arial", fontsize=10];

    ToolA [label="Get User Profile\n(Outputs: user_id, email)"];
    ToolB [label="Fetch Order History\n(Inputs: user_id)"];
    ToolC [label="Summarize Recent Orders\n(Inputs: order_list)"];

    Data1 [label="user_id", shape=oval, style="filled", fillcolor="#ffec99"];
    Data2 [label="order_list", shape=oval, style="filled", fillcolor="#ffec99"];

    ToolA -> Data1 [label="provides"];
    Data1 -> ToolB [label="input to"];
    ToolB -> Data2 [label="provides"];
    Data2 -> ToolC [label="input to"];
}
```

A simple data flow diagram showing `ToolA` providing `user_id` to `ToolB`, which then provides an `order_list` to `ToolC`. This visual representation helps in designing and debugging complex tool interactions, ensuring that each tool in the chain receives the necessary inputs from its predecessors.

## Handling Failures in Dependent Calls

A significant aspect of managing dependencies is deciding what to do if a preceding tool in a chain fails or doesn't return the expected data. If `Tool_A` fails, `Tool_B` (which depends on `Tool_A`'s output) cannot proceed as planned.

Strategies for handling such failures include:

- **Retrying the failed tool:** Perhaps the failure was transient.
- **Using a fallback value or default:** If appropriate for the task.
- **Invoking an alternative tool:** If another tool can provide similar data.
- **Terminating the current sub-task:** And reporting the failure to the agent or user.
- **Re-planning:** The agent might need to devise a new sequence of tool calls.

We'll discuss error recovery in more detail in the "Recovering from Failures in Tool Chains" section later in this chapter.
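The first two strategies (retry, then fall back to a default) can be sketched as a small wrapper around a tool call. The tool below is a hypothetical stub whose transient failures are simulated; a real implementation would also log attempts and distinguish retryable from non-retryable errors.

```python
# Minimal retry-then-fallback guard for an upstream tool in a chain.
# make_flaky_tool and its failure behavior are hypothetical, for illustration.

def call_with_retry(tool, retries=2, fallback=None):
    """Run `tool`; retry on failure; return `fallback` if every attempt fails."""
    for attempt in range(retries + 1):
        try:
            return tool()
        except RuntimeError:
            continue  # transient failure: try again
    return fallback

def make_flaky_tool(fail_first_n):
    # Build a stub tool that raises on its first `fail_first_n` calls.
    state = {"calls": 0}
    def tool():
        state["calls"] += 1
        if state["calls"] <= fail_first_n:
            raise RuntimeError("temporary network error")
        return {"city": "London"}
    return tool

# Succeeds on the second attempt, so the dependent weather call can proceed.
location = call_with_retry(make_flaky_tool(fail_first_n=1), retries=2)
print(location)  # {'city': 'London'}

# Fails every attempt; the agent falls back to a default instead of stalling.
location = call_with_retry(make_flaky_tool(fail_first_n=10), retries=2,
                           fallback={"city": "default_city"})
print(location)  # {'city': 'default_city'}
```

Whether a fallback is acceptable depends on the task: a default city may be fine for an activity suggestion, but not for booking a flight.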
For now, it's important to recognize that a dependency management strategy must account for potential upstream failures.

## Best Practices for Managing Dependencies

As you design agents that use sequences of tools, consider these practices:

- **Clear Tool Signatures:** Ensure your tool definitions (descriptions, input parameters, output schemas) are precise. This makes it easier for both the LLM and any orchestrating code to understand what data is produced and consumed.
- **Standardized Data Formats:** Prefer common, easily parsable data formats like JSON for tool outputs. This simplifies data exchange and transformation.
- **Isolate Transformation Logic:** If data transformations are needed, try to isolate this logic. It could be a small, dedicated function in your orchestrator or a specific instruction to the LLM, rather than burdening primary tools with excessive input flexibility.
- **Minimize Complex Interdependencies:** If you find yourself with an overly tangled web of dependencies, it might be a signal that your tools are too granular or that the task decomposition could be improved. Aim for a clear, mostly linear flow where possible, or well-contained branches.
- **Explicit vs. Implicit Dependencies:** Be mindful of whether dependencies are explicitly declared in your orchestration logic or implicitly handled by the LLM. Explicit declarations are generally more dependable and easier to debug, while LLM-handled dependencies offer more flexibility but can be harder to predict.

By thoughtfully managing how data flows between tools, you empower your LLM agent to perform more sophisticated, multi-step tasks, moving from simple tool calls to coordinated workflows. This ability to connect outputs to inputs is what allows an agent to build upon previous results and achieve more significant goals.
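As a closing illustration of the "Isolate Transformation Logic" practice, the Celsius-to-Fahrenheit case from earlier can live in a small adapter function between two tools. The function and field names here are hypothetical; the point is that the conversion is a pure, separately testable unit rather than logic buried inside either tool.

```python
# A small, isolated transformation utility sitting between two tools.
# Field names (temp_c, temperature_f, condition) are hypothetical.

def celsius_to_fahrenheit(temp_c):
    """Pure, easily testable conversion kept out of the tools themselves."""
    return temp_c * 9 / 5 + 32

def adapt_weather_for_clothing_tool(weather_output):
    """Adapter: reshape one tool's output into the next tool's expected input."""
    return {
        "temperature_f": celsius_to_fahrenheit(weather_output["temp_c"]),
        "condition": weather_output.get("condition", "unknown"),
    }

weather = {"temp_c": 15, "condition": "Cloudy"}
print(adapt_weather_for_clothing_tool(weather))
# {'temperature_f': 59.0, 'condition': 'Cloudy'}
```

Because the adapter is a plain function, the orchestrator can unit-test it in isolation, and neither tool needs to know about the other's format.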