LangChain Agents in 2025 | Full Tutorial for v0.3

The tutorial introduces LangChain agents as intelligent components that extend language models’ capabilities by using tools—structured Python functions with clear metadata—to perform tasks like calculations and web searches, managed through an agent executor that handles reasoning, tool execution, and conversational memory. It also demonstrates integrating external APIs and custom tools to create versatile, context-aware agents capable of multi-step problem solving and dynamic information retrieval.

In this chapter, the tutorial introduces the concept of agents within the LangChain framework, emphasizing their importance in AI applications. Agents serve as intelligent components that can perform tasks beyond the capabilities of a standalone language model (LM), such as searching the web via external tools like SerpAPI. The chapter begins by setting up prerequisites and explaining the role of tools, which are essentially code functions that the agent can invoke to augment the LM’s abilities. Tools are designed with clear, natural-language docstrings, descriptive parameter names, and type annotations so the LM understands when and how to use them effectively.
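
The pattern looks roughly like the sketch below. It is a minimal, hypothetical example (the unit-conversion function is not from the tutorial), assuming the langchain-core package; the point is that the docstring, parameter name, and type annotations are exactly what the LM reads when deciding whether and how to call a tool.

```python
from langchain_core.tools import tool

@tool
def convert_celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    # Hypothetical illustrative tool; the tutorial's own tools follow below.
    return celsius * 9 / 5 + 32
```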

The tutorial then demonstrates how to create simple calculator tools (addition, multiplication, exponentiation, subtraction) using LangChain’s @tool decorator. The decorator converts regular Python functions into structured tool objects that include metadata such as names, descriptions, and JSON schemas. These schemas tell the LM which parameters are required and what types they take, enabling it to generate appropriately formatted JSON strings to invoke the tools. The process of converting the LM’s output into function calls and executing the tools is explained as part of the agent’s execution logic.
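
A sketch of those calculator tools, assuming langchain-core, with a look at the metadata the decorator generates and at how a JSON argument payload from the LM turns into an actual function call:

```python
import json

from langchain_core.tools import tool

@tool
def add(x: float, y: float) -> float:
    """Add x and y and return the sum."""
    return x + y

@tool
def subtract(x: float, y: float) -> float:
    """Subtract y from x and return the difference."""
    return x - y

@tool
def multiply(x: float, y: float) -> float:
    """Multiply x and y and return the product."""
    return x * y

@tool
def exponentiate(x: float, y: float) -> float:
    """Raise x to the power of y."""
    return x ** y

# The decorator attaches the metadata the LM sees when choosing a tool.
print(multiply.name)         # "multiply"
print(multiply.description)  # "Multiply x and y and return the product."
print(multiply.args)         # JSON-schema fields for x and y

# The LM emits its arguments as a JSON string; executing the tool is then
# just a matter of parsing that string and invoking the tool with it.
llm_args = json.loads('{"x": 7.5, "y": 2.0}')
print(multiply.invoke(llm_args))  # 15.0
```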

Next, the video covers constructing a basic tool-calling agent using the LangChain Expression Language (LCEL). The agent’s prompt includes placeholders for the user input, the chat history, and an agent scratchpad, which stores the agent’s internal reasoning and tool-usage steps. The tutorial uses an older conversational memory class to maintain chat history and demonstrates how the agent decides which tool to use based on the user’s query. However, the initial agent setup only generates tool-usage instructions without executing them, so the AgentExecutor class is introduced to handle the full agentic flow: executing tools, managing memory, and iterating through multiple reasoning steps.
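
A sketch of that wiring, reusing the calculator tools from the previous snippet and assuming the langchain and langchain-openai packages; the model name is an assumption, and ConversationBufferMemory is assumed to be the older memory class the tutorial refers to.

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.memory import ConversationBufferMemory  # older, deprecated memory class
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # model name is an assumption
tools = [add, subtract, multiply, exponentiate]       # from the previous snippet

# Placeholders for chat history, user input, and the agent scratchpad.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use the tools when they help."),
    MessagesPlaceholder(variable_name="chat_history"),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# The bare agent only *plans* tool calls; AgentExecutor actually runs them,
# feeds results back through the scratchpad, and loops until a final answer.
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True)
```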

The tutorial showcases the agent executor in action with examples, such as multiplying precise decimal numbers that a standard LM would struggle to calculate accurately. It highlights how the agent can issue multiple tool calls in parallel and maintain conversational memory, recalling user details such as names across interactions. The video also points out some limitations, such as the agent occasionally applying the operations of a multi-step calculation in the wrong order, which may call for better prompting or few-shot examples to improve accuracy. Overall, the agent executor simplifies managing intermediate steps and memory updates, making the agent more robust and conversational.
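
A usage sketch against the agent_executor built above; the numbers and the name are illustrative, not taken from the video.

```python
# Precise decimal arithmetic is delegated to the multiply tool rather than
# left to the LM's own (approximate) arithmetic.
result = agent_executor.invoke({"input": "What is 7.814 multiplied by 394.271?"})
print(result["output"])

# Conversational memory carries details across turns.
agent_executor.invoke({"input": "Hi, my name is Ada."})
follow_up = agent_executor.invoke({"input": "What is my name?"})
print(follow_up["output"])  # should recall "Ada" via the stored chat history
```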

Finally, the chapter explores integrating external APIs by loading pre-built LangChain tools such as SerpAPI for Google search and by creating custom tools that fetch the current location and time based on the user’s IP address. The agent is redefined to use these tools in a one-shot prompt scenario, answering questions about the current date, time, and local weather conditions. The agent successfully combines information from multiple tools, converting units when necessary, and provides detailed, context-aware responses. The tutorial concludes by noting that the next chapter will dive deeper into agents and cover more advanced implementations in LangChain version 0.3.
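
A sketch of that final tool set, assuming the search tool is SerpAPI (loaded via langchain-community, with a SERPAPI_API_KEY in the environment) and using ipinfo.io as a stand-in IP-geolocation endpoint; the tutorial's actual endpoint and tool names may differ.

```python
from datetime import datetime

import requests
from langchain_community.agent_toolkits.load_tools import load_tools
from langchain_core.tools import tool

@tool
def get_location_and_time() -> str:
    """Return the caller's approximate city and region, plus the current
    local time, based on their IP address."""
    # ipinfo.io is an assumed endpoint for illustration; any IP-geolocation API works.
    info = requests.get("https://ipinfo.io/json", timeout=10).json()
    now = datetime.now().strftime("%Y-%m-%d %H:%M")
    return f"Location: {info.get('city')}, {info.get('region')}; local time: {now}"

# "serpapi" wraps Google search via SerpAPI (requires SERPAPI_API_KEY).
# Rebuilding the agent with these tools lets a one-shot prompt answer
# questions about the current date, time, and local weather.
tools = load_tools(["serpapi"]) + [get_location_and_time]
```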