Definitions of AI and How Companies Use Them to Lie

The video explains that the term “AI” is ambiguous and used by companies to make vague or exaggerated claims, with definitions ranging from sci-fi concepts to narrow task-specific systems and generative models like ChatGPT. The speaker, Carl, emphasizes the importance of understanding these distinctions and maintaining skepticism to avoid being misled by hype and to make informed decisions about AI technologies.

Carl, a software professional with over 35 years of experience, explains that the term “AI” (artificial intelligence) is problematic because it carries many competing definitions, which AI companies often exploit to make vague claims that are difficult to challenge. The term was popularized by science fiction, such as William Gibson’s Neuromancer and the Matrix films, but those portrayals do not provide a clear or consistent picture of what AI actually means in real-world applications. This ambiguity benefits AI companies and the media, since it allows exaggerated or misleading statements to go unchallenged.

Carl outlines several distinct definitions of AI. One is sci-fi or theoretical future AI: an advanced intelligence that might exist billions of years from now, capable of unimaginable feats. Another is the “current AI bubble” definition, meaning whatever capabilities companies claim AI will reach before the current hype and investment bubble bursts. CEOs and companies use this definition strategically to attract investment while maintaining plausible deniability about what their products can actually do.

The video also covers narrow or specialized AI, such as reinforcement learning systems that excel at specific tasks like playing chess or Go, or predicting how proteins fold. These systems are highly effective in well-defined environments with clear rules and measurable outcomes. However, Carl points out that such AI is fundamentally different from the generalized, human-like intelligence that many people fear or expect. For example, “killing all humans” is not a feasible task for a reinforcement learning system, because the success criteria are unclear and the environment is unpredictable.
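To make the “well-defined environment” point concrete, here is a minimal tabular Q-learning sketch (not from the video; the corridor environment, reward values, and hyperparameters are illustrative assumptions). The agent can learn only because every state, action, and success signal below is explicit and measurable:

```python
# Minimal sketch, assuming a tiny corridor environment: states 0..4, goal at 4.
import random

N_STATES = 5          # corridor positions 0..4
ACTIONS = [-1, +1]    # step left or right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: expected future reward for each (state, action) pair
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        # The reward is the crux: success is unambiguous and measurable here.
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# Best learned action per non-goal state: each should be +1 ("move right")
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```

Delete the line that assigns the reward and nothing can be learned, which is exactly Carl’s point: an objective like “kill all humans” has no such measurable success signal to train against.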

Generative AI, exemplified by ChatGPT and similar models, is another definition Carl discusses. This type of AI generates text, images, code, and other content and is currently the most visible form of AI in everyday use. While generative AI is impressive, it is not perfect and often produces approximate or flexible answers rather than guaranteed correct ones. Beyond this, Carl explains the concepts of AGI (artificial general intelligence), which would be as smart as a human and capable of performing any intellectual task, and ASI (artificial superintelligence), a hypothetical intelligence vastly superior to humans. Neither AGI nor ASI currently exists, and their arrival remains speculative.
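To illustrate why generative output is approximate rather than guaranteed, here is a toy sketch (not from the video; the token probabilities are invented) of the sampling step at the heart of such models. Instead of looking up one correct answer, the model draws the next token from a probability distribution:

```python
# Toy sketch with hypothetical next-token probabilities after a prompt like
# "The capital of France is". Real models work over vast vocabularies.
import random

next_token_probs = {"Paris": 0.90, "Lyon": 0.06, "beautiful": 0.04}

def generate(probs, temperature=1.0):
    # Temperature reshaping: p ** (1/T) is equivalent to scaling log-probs by 1/T.
    # Low temperature -> near-deterministic; high temperature -> more random.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# Even with a strong favorite, repeated runs are not guaranteed to be identical:
print([generate(next_token_probs, temperature=1.0) for _ in range(5)])
print([generate(next_token_probs, temperature=0.1) for _ in range(5)])  # almost always "Paris"
```

Because the output is sampled rather than retrieved, the same prompt can yield different answers on different runs, which is why generative AI gives flexible, plausible text rather than guaranteed correct results.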

In conclusion, Carl urges viewers to be cautious when hearing the term “AI” because it can mean very different things depending on the context. He encourages skepticism, especially when multiple definitions are conflated to create misleading impressions. Reporters and the public should demand clarity from AI companies and their leaders to avoid being misled by vague or exaggerated claims. Ultimately, understanding these distinctions can help people make better-informed decisions about AI products, investments, and the future impact of AI technology.