How I See AI Evolving in 2026 (as an AI Engineer)

Dave Abal, an AI engineer, discusses the current state and near-future evolution of AI, noting incremental improvements in large language models, Google’s rising dominance due to its integrated technology stack, and the growing importance of agentic coding and context engineering for developers. He predicts that voice interfaces will become increasingly central, and encourages aspiring AI professionals to focus on practical skills and their areas of interest as the field continues to expand.

In this video, Dave Abal, an experienced AI engineer and founder of Dat Luminina, shares his perspective on how AI is evolving as of 2026. He begins by discussing the current limitations of large language models (LLMs), particularly their tendency to hallucinate, and observes that there have been no significant breakthroughs in their underlying architecture. While LLMs have improved incrementally in tasks like coding and tool use, Abal notes that the fundamental way developers extract value from these models hasn’t changed much in recent years. He highlights ongoing research, such as new paradigms proposed by leading scientists like Yann LeCun and techniques like recursive language models, but emphasizes that for now, LLMs remain the best available tools, and practical improvements are focused on maximizing their current effectiveness.

Abal then turns his attention to Google’s growing influence in the AI space. After a period of relative quiet, Google has emerged with state-of-the-art models across language, image, and video, and stands out for owning its entire technology stack, including proprietary TPUs for model training. He finds Google’s A2A protocol particularly noteworthy, as it enables complex agent workflows and interoperability, potentially positioning Google as a major player in the coming years. Abal suggests that Google’s integration of models, data, compute, and protocols could make it a dominant force in AI by 2026.

The video also addresses the ongoing debate between using deterministic workflows (DAGs) versus agentic approaches in AI applications. Abal references industry opinions, such as those from Anthropic, which advocate for simple, reliable workflow patterns over complex agentic systems for most use cases. However, he argues that the choice depends on the specific application: deterministic workflows are preferable for high-stakes, error-sensitive processes, while agentic approaches can be effective in human-in-the-loop scenarios where some flexibility is acceptable. He advises engineers to start with the simplest solution and only add complexity as needed.
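The distinction above can be sketched in a few lines of code. This is a minimal illustration, not anything from the video: `call_llm` is a hypothetical stand-in for a real model API, and the function names are invented for the example. The point is structural: the deterministic pipeline always runs the same auditable steps, while the agent loop lets the model choose its next step within a hard iteration cap.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.

    Returns a canned string so the example runs without any provider SDK.
    """
    return f"response to: {prompt}"


def summarize_pipeline(document: str) -> str:
    """Deterministic workflow (DAG-style): a fixed sequence of steps.

    Every run executes the same two calls in the same order, which makes
    behavior predictable and easy to audit in error-sensitive settings.
    """
    outline = call_llm(f"Outline this document: {document}")
    return call_llm(f"Summarize this outline: {outline}")


def agent_loop(task: str, max_steps: int = 5) -> str:
    """Agentic approach: the model's output drives the next step.

    Flexibility comes at the cost of predictability, so the loop is bounded
    by max_steps and a completion signal rather than trusted to converge.
    """
    history = [task]
    for _ in range(max_steps):
        action = call_llm("\n".join(history))
        history.append(action)
        if "DONE" in action:  # the model signals it has finished
            break
    return history[-1]
```

Starting with the pipeline and moving to the loop only when the task genuinely requires open-ended steps mirrors the "simplest solution first" advice.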

A significant area of progress, according to Abal, is agentic coding. He observes that LLMs excel at coding tasks, and that ongoing improvements in both models and supporting tools (like Cursor and Claude Code) are rapidly enhancing developer productivity. He recommends learning best practices such as spec-driven development to get the most out of these tools, and stresses the importance of mastering both traditional coding skills and the effective use of AI-powered coding assistants. Abal also highlights context engineering as a crucial skill for developers, involving the strategic management of information provided to LLMs to optimize their performance.
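One common context-engineering tactic is packing the most relevant material into a fixed context budget. The sketch below is an illustrative assumption, not a technique Abal describes: relevance scores are taken as given (e.g. from a retrieval step), and "tokens" are approximated by whitespace-separated words to keep the example dependency-free.

```python
def pack_context(snippets: list[tuple[float, str]], budget: int) -> str:
    """Greedily fill a word budget with the highest-relevance snippets.

    snippets: (relevance_score, text) pairs; budget: max total words.
    Real systems would use a proper tokenizer instead of word counts.
    """
    chosen: list[str] = []
    used = 0
    # Visit snippets from most to least relevant.
    for _score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = len(text.split())
        if used + cost <= budget:
            chosen.append(text)
            used += cost
    return "\n\n".join(chosen)


# Usage: with a 6-word budget, only the two most relevant snippets fit.
snippets = [
    (0.9, "relevant fact one"),
    (0.2, "barely related detail here"),
    (0.7, "second useful point"),
]
packed = pack_context(snippets, budget=6)
```

The design choice to drop low-relevance snippets entirely, rather than truncate everything, reflects the idea that what you leave out of an LLM's context matters as much as what you include.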

Finally, Abal discusses the future of voice as a primary interface for interacting with technology, predicting that voice-driven applications will become increasingly prevalent over the next decade. He encourages viewers to become comfortable with voice interfaces and mentions his own work on a voice-to-text application. Abal concludes by reassuring viewers that the AI field is still in its early stages, with most companies just beginning to adopt AI solutions. He encourages aspiring AI professionals to focus on their interests and specialties, emphasizing that there are many opportunities to contribute and succeed as the field continues to grow.