Anthropic AI is KILLING IBM - Holy Crap, an Actual Valuable Use Case

In the video, Eli the Computer Guy critiques the tech industry’s hype around AI, highlighting both its practical benefits—like using AI to modernize legacy code and assist IT professionals—and its risks, such as poorly managed “shadow AI” projects by non-experts. He emphasizes the need for professional standards, documentation, and ethical leadership in tech, while sharing personal experiences and advocating for continued learning and responsible system design.

In this video, Eli the Computer Guy delivers a wide-ranging, energetic monologue covering his experiences teaching technology classes, thoughts on artificial intelligence (AI), and frustrations with the tech industry. He begins by humorously lamenting his long commutes to teach free technology classes in North Carolina, reflecting on how his original intention to avoid commuting has been upended by his current dedication to Silicon Dojo. Eli discusses the practical labs he has developed for his classes, such as using Python scripts to analyze ARP tables and ping results, and how AI can be leveraged to make network management and troubleshooting more efficient. He emphasizes that while AI won't replace IT jobs, it can serve as a valuable tool—like a spell checker for networks—by helping professionals spot issues they might otherwise miss.
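The video does not show the lab code itself, but the "spell checker for networks" idea can be sketched in a few lines of Python: parse the output of `arp -a` and flag any MAC address claimed by more than one IP, a common sign of ARP spoofing or a misconfigured device. The regex and sample output below assume Linux/macOS-style `arp -a` formatting, which varies by OS.

```python
import re
from collections import defaultdict

# Rough shape of one "arp -a" line on Linux/macOS (format varies by OS):
#   gateway (192.168.1.1) at aa:bb:cc:dd:ee:ff [ether] on eth0
ARP_LINE = re.compile(r"\((?P<ip>[\d.]+)\) at (?P<mac>[0-9a-f:]{17})", re.I)

def parse_arp_table(text):
    """Return a list of (ip, mac) pairs parsed from `arp -a` output."""
    return [(m["ip"], m["mac"].lower()) for m in ARP_LINE.finditer(text)]

def duplicate_macs(entries):
    """Flag MAC addresses claimed by more than one IP -- a classic
    red flag worth a human's attention."""
    by_mac = defaultdict(list)
    for ip, mac in entries:
        by_mac[mac].append(ip)
    return {mac: ips for mac, ips in by_mac.items() if len(ips) > 1}

sample = """\
gateway (192.168.1.1) at aa:bb:cc:dd:ee:ff [ether] on eth0
nas (192.168.1.20) at 11:22:33:44:55:66 [ether] on eth0
printer (192.168.1.30) at aa:bb:cc:dd:ee:ff [ether] on eth0
"""
print(duplicate_macs(parse_arp_table(sample)))
# -> {'aa:bb:cc:dd:ee:ff': ['192.168.1.1', '192.168.1.30']}
```

In a live lab, `sample` would come from `subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout`; the point, as in the video, is that the tool surfaces anomalies for a professional to investigate rather than replacing their judgment.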

Eli then shifts to a critique of the current AI landscape, particularly targeting the hypocrisy of major AI companies like Anthropic and OpenAI. He mocks Anthropic’s CEO for complaining about Chinese companies scraping their AI models, pointing out that these same companies built their own models by scraping the world’s intellectual property without permission. Eli expresses deep cynicism toward Silicon Valley leaders such as Sam Altman and Elon Musk, accusing them of lacking ethics and being more concerned with profit than societal impact. He also rails against the AI industry’s tendency to overhype its products and ignore the real-world consequences of their actions, such as overwhelming websites with bot traffic.

The discussion moves into the dangers of “vibe coding”—the trend of non-technical people using AI tools to build business-critical applications without proper understanding of security, maintenance, or documentation. Eli draws parallels to the early days of PHP, when secretaries could build web apps but often neglected essential IT practices. He warns that this new wave of shadow IT and “shadow AI” will lead to disasters, such as lost data due to expired credit cards or undocumented systems that no one can maintain after the original creator leaves or passes away. Eli stresses the importance of professional standards, documentation, and robust system design to avoid these pitfalls.

Eli also touches on the history of technology outsourcing and automation, noting that fears about AI or foreign workers taking all tech jobs are overblown. He recounts the Y2K era and the rise of outsourcing to India, arguing that while some tasks can be automated or outsourced, the core value of IT professionals lies in their ability to manage complexity, ensure reliability, and adapt to new tools. He believes that AI will change workflows and eliminate some roles, but skilled technology professionals who keep learning will continue to thrive.

Finally, Eli discusses a recent news story about IBM’s stock dropping due to fears that Anthropic’s AI tools could disrupt legacy code modernization, particularly for COBOL systems. He sees real value in using large language models (LLMs) to analyze, document, and refactor decades-old codebases, making modernization more feasible and cost-effective. Eli concludes by reiterating the importance of building systems that are maintainable by others, so that when a professional leaves or passes away, the business can continue to function smoothly. Throughout the video, he intersperses personal anecdotes, critiques of tech culture, and appeals for support for his free educational initiatives.
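Eli does not describe a specific pipeline, but one plausible first step in LLM-assisted COBOL modernization is mechanical: split the source along its DIVISION headers so each chunk fits comfortably in a model's context window, then build one documentation prompt per chunk. The sketch below is a hypothetical illustration of that preprocessing step, not any vendor's actual tooling; the prompt wording and function names are made up for the example.

```python
import re

# COBOL programs are organized into DIVISIONs (IDENTIFICATION, ENVIRONMENT,
# DATA, PROCEDURE). Splitting on those headers yields natural chunks to
# feed an LLM one at a time.
DIVISION = re.compile(r"^\s*([A-Z-]+)\s+DIVISION\s*\.", re.M)

def split_divisions(source):
    """Return {division_name: text} for each DIVISION in a COBOL source."""
    matches = list(DIVISION.finditer(source))
    chunks = {}
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(source)
        chunks[m.group(1)] = source[m.start():end].strip()
    return chunks

def make_prompts(source):
    """One documentation prompt per division, ready to send to an LLM."""
    return [
        f"Explain in plain English what this COBOL {name} DIVISION does:"
        f"\n\n{text}"
        for name, text in split_divisions(source).items()
    ]

sample = """\
IDENTIFICATION DIVISION.
PROGRAM-ID. PAYROLL.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 WS-TOTAL PIC 9(7)V99.
PROCEDURE DIVISION.
    COMPUTE WS-TOTAL = 0.
    STOP RUN.
"""
for prompt in make_prompts(sample):
    print(prompt.splitlines()[0])
```

Collecting the model's answers per division yields exactly the kind of documentation Eli argues these undocumented decades-old systems are missing, and gives a human maintainer a map before any refactoring begins.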