Carl shares how he was scammed out of $10,000 by a YouTube group selling low-value, AI-generated online courses without proper disclosure, highlighting the importance of transparency when using AI in products. He advises viewers to research providers thoroughly, ask about AI use upfront, and be cautious to avoid similar scams in an increasingly AI-driven market.
In this video, Carl shares his experience of being scammed out of $10,000 by a group on YouTube that used AI, specifically ChatGPT, to deliver low-value online courses and coaching. He clarifies that this is not a plea for money but a cautionary tale aimed at helping others recognize and avoid similar scams. Carl explains that the scam involved selling AI-generated content as valuable, customized advice without proper disclosure, leading to wasted time and money. He emphasizes that while AI can be a useful tool, transparency about its use is crucial, and hiding AI involvement is a red flag.
Carl outlines three scenarios when paying for AI-generated content: the output could be useless, in which case you've been ripped off; it could be useful, but something you could have generated yourself for free with ChatGPT; or it could be genuinely valuable, if domain experts vet and customize the AI output for critical projects. He argues that vendors should always disclose their use of AI, especially when it forms a core part of the product or service. Failing to do so, as in his case, can lead to poor-quality, unreliable deliverables, which he dubs the "Internet of Bugs."
He recounts his own attempt to create an online course and community for software developers, for which he enrolled in an online class that promised to teach him how to package his career experience into an education business. Despite the group's long-standing YouTube presence, they turned out to be repackaging AI-generated content with little real expertise or value. Carl discovered that their "customized" plans were inconsistent and largely AI-generated, and after months of involvement the original program was discontinued and replaced with a more limited, AI-heavy version that required additional ongoing payments.
Carl broadens the discussion to the wider societal implications of AI, noting how it breaks the long-standing link between polished presentation and the effort and quality behind it. He compares this to the video game industry, where impressive graphics no longer guarantee a good game. Similarly, AI-generated content can look polished yet lack depth, rigor, or accuracy, complicating trust in online products, job applications, academic work, and news. He warns that AI's influence will only grow, making it harder to distinguish genuine expertise from superficial AI output.
To avoid falling victim to such scams, Carl recommends a four-step approach: first, decide which uses of AI you find acceptable; second, research the provider's history and content for signs of heavy AI reliance; third, ask directly about their AI policies and get any commitments in writing; and fourth, understand their refund and cancellation policies in case AI use turns out to be undisclosed or unacceptable. He stresses the importance of awareness and transparency about AI's role in products and services, encouraging viewers to be cautious, informed consumers in an increasingly AI-driven world.