The video “Build Hour: Built-In Tools” features Christine and Katya from OpenAI demonstrating how built-in tools such as web search, file search, and the code interpreter let large language models perform complex, real-time tasks without developers having to host or execute the tool logic themselves. Through live demos and a guest appearance, the session shows how these tools simplify AI application development by providing seamless access to external APIs and data sources, letting developers build sophisticated multi-tool workflows efficiently.
The session features Christine from the startup marketing team and Katya from the developer experience team at OpenAI. They introduce built-in tools, which extend the capabilities of large language models (LLMs) without requiring developers to write and run the tool-execution code themselves. These tools allow models to interact with live data, perform complex tasks, and access external APIs seamlessly, enhancing the development of AI-powered applications. The session aims to empower developers by demonstrating how to use these tools effectively in the OpenAI playground and how to integrate them into real-world applications.
Katya explains the distinction between built-in tools and function calling. While function calling requires developers to define functions, execute them on their infrastructure, and feed results back to the model, built-in tools operate entirely on OpenAI’s infrastructure. This means the model can autonomously execute tools like web search or file search and incorporate the results into its responses without developer intervention. This approach simplifies development and leverages OpenAI’s expertise and infrastructure to handle complex tasks such as real-time web searches or retrieval-augmented generation (RAG) from private data sources.
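To make the distinction concrete, the following is a minimal sketch using the OpenAI Python SDK’s Responses API. The `get_weather` function, the model name, and the `web_search_preview` tool type string are illustrative assumptions rather than code from the video, and exact names may differ across API versions.

```python
from openai import OpenAI

client = OpenAI()

# Function calling: the developer declares a schema, runs the function on
# their own infrastructure, and sends the result back in a follow-up request.
weather_tool = {
    "type": "function",
    "name": "get_weather",  # hypothetical function, for illustration only
    "description": "Return the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
first = client.responses.create(
    model="gpt-4.1",  # assumed model name
    tools=[weather_tool],
    input="What's the weather in Paris?",
)
# The developer must now find the function call in first.output, execute
# get_weather() themselves, and send the result back in another request.

# Built-in tool: OpenAI hosts and executes the tool, so one request suffices.
response = client.responses.create(
    model="gpt-4.1",
    tools=[{"type": "web_search_preview"}],
    input="What changed in the latest OpenAI API release?",
)
print(response.output_text)
```

With the built-in tool, the model decides when to search and folds the results into its answer within a single request, which is the hands-off behavior described above.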
The video highlights six key built-in tools currently available: web search, file search, the MCP (Model Context Protocol) tool, the code interpreter, computer use, and image generation. Web search gives models access to up-to-date information beyond their training cutoff, while file search allows querying private documents without fine-tuning. The MCP tool connects models to remote MCP servers exposed by services such as Shopify or Stripe, enabling dynamic interactions with those third-party APIs. The code interpreter executes code for tasks like data analysis and visualization, running securely on OpenAI’s infrastructure. The computer use and image generation tools are mentioned but not covered in detail.
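The sketch below shows how several of these tools might be enabled in a single Responses API request. The vector store ID, the MCP server URL, and the model name are placeholders, and the exact tool type strings should be checked against the current API reference.

```python
from openai import OpenAI

client = OpenAI()

# Each built-in tool is enabled by adding an entry to the `tools` list;
# execution happens on OpenAI's side, so no callback loop is required.
tools = [
    {"type": "web_search_preview"},  # live web results past the training cutoff
    {
        "type": "file_search",
        "vector_store_ids": ["vs_offline_sales"],  # placeholder vector store ID
    },
    {
        "type": "mcp",
        "server_label": "shopify",
        "server_url": "https://example-store.myshopify.com/api/mcp",  # placeholder URL
        "require_approval": "never",
    },
    {
        "type": "code_interpreter",
        "container": {"type": "auto"},  # sandboxed execution environment
    },
]

response = client.responses.create(
    model="gpt-4.1",  # assumed model name
    tools=tools,
    input="Compare last month's online and offline sales and chart the result.",
)
print(response.output_text)
```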
A live demonstration showcases how these tools can be used in the OpenAI playground and integrated into a Python application. Katya uploads files for file search, performs live web searches, interacts with Shopify’s MCP server to find products, and uses the code interpreter to analyze and visualize data. She then builds a data exploration dashboard combining Stripe data, offline sales data, and web search results, illustrating how multiple tools can work together in a multi-turn conversation to provide rich, actionable insights with minimal coding effort.
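A rough sketch of how such a multi-turn, multi-tool exchange could be wired up is shown below; the Stripe MCP URL, vector store ID, prompts, and model name are illustrative placeholders rather than the actual demo code.

```python
from openai import OpenAI

client = OpenAI()

# Tools available for the whole conversation: private files, Stripe's MCP
# server, live web search, and a code interpreter for analysis and plots.
tools = [
    {"type": "file_search", "vector_store_ids": ["vs_offline_sales"]},  # placeholder ID
    {
        "type": "mcp",
        "server_label": "stripe",
        "server_url": "https://mcp.stripe.com",  # placeholder; confirm the real endpoint
        "require_approval": "never",
    },
    {"type": "web_search_preview"},
    {"type": "code_interpreter", "container": {"type": "auto"}},
]

# Turn 1: pull online revenue via the MCP server and offline sales from files.
turn1 = client.responses.create(
    model="gpt-4.1",  # assumed model name
    tools=tools,
    input="Summarize this quarter's online (Stripe) and offline sales.",
)

# Turn 2: chain to the previous response so the model keeps context, then let
# the code interpreter produce a comparison chart with web-sourced benchmarks.
turn2 = client.responses.create(
    model="gpt-4.1",
    tools=tools,
    previous_response_id=turn1.id,
    input=(
        "Plot both series on one chart and call out notable trends, "
        "using recent industry benchmarks from the web for context."
    ),
)
print(turn2.output_text)
```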
The session concludes with a guest appearance by Will, the technical lead at Hebbia, a startup specializing in AI-powered search for financial and legal services. Will discusses how Hebbia leverages built-in tools, particularly web search, to overcome LLM limitations like knowledge cutoffs and to provide up-to-date, contextually relevant information at scale. He demonstrates Hebbia’s products, which combine web search with private data sources to deliver deep research and insights. The video ends with a brief Q&A addressing best practices for managing multiple tools and customization options, emphasizing the ease and power of built-in tools for building sophisticated AI applications quickly.
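As an example of the kind of customization the Q&A touches on, web search accepts options such as an approximate user location and a search context size. The sketch below assumes the parameter names used by the Responses API at the time of writing and should be verified against the current documentation.

```python
from openai import OpenAI

client = OpenAI()

# Web search customization: an approximate user location and a search context
# size; parameter names are assumed from the Responses API docs and may change.
response = client.responses.create(
    model="gpt-4.1",  # assumed model name
    tools=[{
        "type": "web_search_preview",
        "user_location": {
            "type": "approximate",
            "country": "US",
            "city": "New York",
        },
        "search_context_size": "high",  # "low" | "medium" | "high"
    }],
    input="Summarize today's regulatory news relevant to asset managers.",
)
print(response.output_text)
```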