David Cox - VP, AI Models at IBM; Director, MIT-IBM Watson AI Lab

David Cox, VP of AI Models at IBM, outlines IBM's strategic focus on building transparent, trustworthy AI technologies like watsonx that integrate with enterprise hardware to deliver reliable solutions in complex, regulated industries such as healthcare and finance. He emphasizes practical, tool-oriented AI applications over speculative AGI, advocating continuous learning and adaptability while fostering innovation through a balance of open-source experimentation and enterprise-grade deployment.

In this conversation, David Cox, Vice President of AI Models at IBM and Director of the MIT-IBM Watson AI Lab, gives a comprehensive overview of IBM's role and strategy in the evolving AI landscape. He explains that IBM Research, a global industrial research lab, focuses on developing the large language models (LLMs) and generative AI technologies that power IBM's products and anticipate future directions. Cox traces the evolution of IBM's Watson from its Jeopardy!-winning origins to its current incarnation as watsonx, a suite of AI tools built for the generative AI era, and notes the challenges of marketing and deploying AI in complex industries like healthcare.

Cox discusses the distinction between early AI systems like Watson and modern LLMs: while both synthesize information to answer complex queries, today's models rely on massive training datasets and probabilistic modeling, predicting likely next tokens rather than following hand-built pipelines. He highlights IBM's commitment to transparency and trustworthiness, with openly documented, curated training data and model details, in contrast to many open-weight models whose training data is undisclosed. This approach aligns with IBM's focus on dependable, enterprise-grade AI, which matters most in regulated sectors such as finance and healthcare.
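To make "probabilistic modeling" concrete, the minimal sketch below, which is purely illustrative and not an IBM example, uses the small open GPT-2 checkpoint from Hugging Face transformers to show that a language model's raw output is a probability distribution over candidate next tokens:

```python
# Illustrative sketch: an LLM assigns a probability to every candidate
# next token; generation samples from (or maximizes over) this distribution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Turn the final position's logits into a probability distribution
# and print the five most likely continuations.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```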

The conversation also delves into IBM's integration of AI with its hardware, including the mainframes that process a significant portion of the world's transactions. Cox explains how IBM builds AI acceleration directly into its processors so that models can run in real time on transaction workloads, such as scoring payments for fraud as they occur, while preserving the company's hallmark reliability and trust. He stresses the importance of bridging open-source innovation with enterprise-grade solutions, enabling developers to experiment with open models like Granite (see the sketch below) while providing scalable, secure products for large organizations.
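As one illustration of the open-source side, IBM publishes Granite models with open weights on Hugging Face, so a developer can try one locally in a few lines of Python. This is a minimal sketch assuming the illustrative `ibm-granite/granite-3.0-2b-instruct` checkpoint and the standard transformers chat-template API:

```python
# Minimal sketch: running an open-weight IBM Granite model locally.
# The checkpoint name is illustrative; other Granite instruct variants
# under the `ibm-granite` organization follow the same pattern.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.0-2b-instruct"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build a chat-style prompt using the model's own chat template.
messages = [{"role": "user", "content": "In one sentence, what is an LLM?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(inputs, max_new_tokens=64)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```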

Addressing the broader AI ecosystem, Cox reflects on the challenges of defining and regulating AI, including the problematic use of terms like “hallucination” and “agent,” which can mislead public understanding. He expresses skepticism about the immediate need for artificial general intelligence (AGI), advocating instead for practical, tool-oriented AI applications that augment human work without unnecessary complexity. Cox also highlights cultural differences in AI adoption globally and the trend toward localized, sovereign AI models that reflect diverse languages and values.

Finally, Cox offers career advice for those entering the AI and tech fields, emphasizing adaptability, continuous learning, and the enduring value of computer science skills despite rapid technological change. He acknowledges the current hype and uncertainty but remains optimistic about AI’s potential to enhance productivity and software development. Cox encourages treating AI as a powerful software tool rather than a mystical entity, underscoring IBM’s mission to build safe, transparent, and trustworthy AI systems that empower users and foster innovation across industries.