Ex-OpenAI genius launches new “Super Intelligence” company

The video explores the resurgence of Ilya Sutskever, former OpenAI co-founder, as the founder of Safe Superintelligence (SSI), a startup focused on developing safe artificial superintelligence (ASI). It raises doubts about SSI's intentions and emphasizes the ethical considerations and potential risks of advancing AI toward superintelligence.

The video recounts the rise and fall of Ilya Sutskever, a former co-founder of OpenAI known for his contributions to AI research. Sutskever was involved in a controversy in which he voted to fire CEO Sam Altman, which tarnished his reputation, and he subsequently withdrew from the public eye. He recently resurfaced as the founder of a new startup, Safe Superintelligence (SSI), which aims to develop superintelligence that is safe and beneficial for humanity.

SSI's goal is to create artificial superintelligence (ASI), a hypothetical form of intelligence far surpassing human intelligence. The video highlights the potential dangers of ASI if it falls into the wrong hands, drawing a parallel between how humans treat carrots and how a superintelligence might treat humans. It also discusses the current state of AI, noting that we have not yet achieved artificial general intelligence (AGI); models like GPT-4 and Gemini are closer to sophisticated search engines than to true intelligence.

The video questions the legitimacy of SSI, asking whether it is a genuine effort to achieve ASI or simply a hype-driven venture. It mentions co-founder Daniel Gross, an AI investor, and raises concerns about the company's lack of breakthrough announcements. The narrator suggests that SSI may be leveraging its founders' reputations to attract attention and talent rather than presenting genuine advances in AI technology.

The video then speculates about the motives behind SSI's creation, referencing the concept of solid state intelligence (also abbreviated SSI) described in a book by John C. Lilly. The narrator hints at a darker undertone, implying that the company's name, Safe Superintelligence, may be misleading and could point to more sinister goals. The video also touches on the military's use of AI and the risks of superintelligence falling into the wrong hands.

Overall, the video is skeptical of SSI's claims and speculates about the implications of achieving artificial superintelligence. It warns of the dangers of AI technology if it is not carefully controlled and stresses the need for ethical considerations in the development of superintelligence, leaving viewers with a cautionary message about the risks of advancing AI without proper safeguards in place.