From AGI to ASI? 45% say it'll take YEARS!

In the video, the creator polls their audience on the development of artificial general intelligence (AGI) and artificial superintelligence (ASI). The results suggest most respondents believe the U.S. is leading the race to AGI, are skeptical of regulation, and prefer accelerating AI progress. Additionally, 45% think the transition from AGI to ASI will take years, indicating a cautious outlook on rapid advancement, and a significant majority support international collaboration in AI research.

In a recent video, the creator engages their audience through a series of polls focused on the development of artificial general intelligence (AGI) and artificial superintelligence (ASI). A significant portion of the audience believes the United States is leading the race for AGI, despite arguments that China is making substantial progress. The creator notes that while China produces a higher volume of research papers, the U.S. holds the advantage in computational resources and research quality, which may explain the surprisingly broad consensus among viewers.

The creator also explores opinions on whether AI development should be paused. A large majority of respondents (82%) said they would not press a hypothetical button pausing AI for three years, and when asked about the optimal strategy for AI development, 50% favored accelerating progress. The results also hint at a gap between hypothetical preferences and practical strategy: 18% would take the hypothetical pause, yet only 4% endorsed deceleration or a pause as an actual policy. Reflecting on the audience's skepticism towards government and corporate regulation, the creator notes that many prefer to let AI evolve without intervention.

Another poll asked whether it would be detrimental if the U.S. and China achieved AGI simultaneously. Some 26% believed it would not be so bad, with a few arguing that mutual deterrence could maintain stability, much as it has with nuclear arms. On existential risk, only a small percentage (7%) were certain AI would lead to human extinction, while a larger portion (44%) deemed extinction unlikely, indicating a moderate level of concern among the audience.

The creator further examined the timeline from AGI to ASI, with 45% of respondents believing it would take years, suggesting a more cautious outlook on rapid advancements. This aligns with the creator’s own view that various constraints will slow down the transition to superintelligence. Additionally, a significant majority (65%) expressed support for international AI research collaborations, indicating a shift in perspective compared to previous polls where skepticism was more prevalent.

Finally, the creator discusses the implications of their findings, emphasizing the importance of understanding audience sentiment on AI development and regulation. They plan to collaborate with academic researchers to refine polling techniques and gather more rigorous data on public perceptions of AI risks. The creator concludes with their current assessment of the probability of existential risks from AI, suggesting a relatively low level of concern, and expresses optimism about the future of AI development while remaining open to ongoing discussions and research.
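As a rough illustration of the statistical rigor the creator hopes to bring to future polls, here is a minimal sketch of the margin of error for a single poll proportion. The sample size of 1,000 is a hypothetical assumption for the example; the video does not report how many people voted in each poll.

```python
import math

def poll_margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Normal-approximation margin of error for a poll proportion.

    p -- observed proportion (e.g. 0.45 for the "45% say years" result)
    n -- number of respondents (hypothetical here; not reported in the video)
    z -- critical value; 1.96 corresponds to a 95% confidence interval
    """
    return z * math.sqrt(p * (1 - p) / n)

# Example: the headline "45% say it'll take years" result,
# assuming a hypothetical 1,000 respondents.
moe = poll_margin_of_error(0.45, 1000)
print(f"45% +/- {moe * 100:.1f} points at 95% confidence")
# -> 45% +/- 3.1 points
```

At that sample size the headline numbers would be stable to within a few percentage points, but for smaller audience polls the interval widens quickly, which is one reason more rigorous data collection on public perceptions of AI risk matters.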