Forget GPT-4o's voice -- the real problem with AI is us

Recent advances in AI include Google's new text-to-video tool and an OpenAI GPT model that can speak, reflecting the growing capabilities of, and investment in, generative AI. At the same time, concerns have been raised about the ethical use and potential misuse of the technology, particularly its persuasive power and the influence wielded by those who control it.

The past weeks have seen significant advances in AI, with Google introducing a new text-to-video tool and OpenAI releasing a new GPT model that can speak, with at least one person claiming it sounds like their own voice. Studies have shown that AI is adept at understanding humans, and investment in generative AI models like GPT has surged, exceeding $25 billion in 2023. Developing large AI models is expensive: OpenAI's new GPT version and Google's Gemini Ultra each cost millions of dollars to train. Google's text-to-video tool, Veo, can generate high-definition video, though availability is still limited. GPT-4o, where the "o" stands for "omni," can process text, audio, images, and video, underscoring its versatility.
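To give a rough sense of what that "omni" multimodality looks like in practice, here is a minimal sketch of a single request that mixes text and an image using the OpenAI Python SDK. The image URL is a placeholder, and the sketch assumes an `OPENAI_API_KEY` environment variable is set; audio input and output go through separate audio endpoints and are not shown here.

```python
# Minimal sketch: one chat request combining text and an image with GPT-4o.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is happening in this picture?"},
                # Placeholder URL -- swap in any publicly reachable image.
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```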

There have been internal changes at OpenAI, with the disbanding of the superalignment team, which was responsible for keeping AI goals aligned with human values. Several key members, including the team's head and a company co-founder, have left, citing concerns about the organization's leadership. Meanwhile, studies have shown that AI models like GPT-4 are highly convincing in debates, particularly when provided with anonymized personal information about their opponent. Another study found that large language models excel at theory-of-mind tasks, often outperforming humans.

Looking ahead, the prospect of artificial general intelligence (AGI) is on the horizon, though the timeline remains uncertain. Current AI models are impressive but lack critical information about the physical world, posing challenges for achieving AGI. The primary concern, however, lies not in AGI itself but in the power and influence wielded by those who control it. AI’s ability to persuade and manipulate based on data poses a significant risk, especially given humans’ predictability and susceptibility to influence.

There are apprehensions about the potential misuse of AI, with scenarios in which governments or individuals exploit AI's persuasive capabilities to push agendas or control populations. Concerns about AGI alignment are acknowledged, but the immediate worry is the ethical use of AI technology that already exists. The video concludes by highlighting how widespread AI has become across fields and recommends exploring courses on platforms like brilliant.org to gain a deeper understanding of AI concepts, quantum mechanics, and other scientific topics. Viewers are offered a 30-day trial of Brilliant's courses along with a discount on the annual premium subscription.