Sir David Attenborough says AI clone of his voice is 'disturbing' | BBC News

In a BBC News segment, Sir David Attenborough voiced his concerns about AI-generated clones of his voice, calling the technology “disturbing” and highlighting its implications for identity and authenticity. The discussion, featuring insights from AI audio researcher Dr. Jennifer Williams, emphasized the potential misuse of voice cloning and the need for public awareness and regulatory measures to protect individuals from identity theft.

In a recent BBC News segment, Sir David Attenborough expressed his concerns regarding AI-generated clones of his voice, describing the practice as “disturbing.” The segment compared Attenborough’s authentic voice with an AI-generated imitation, showcasing how similar the two sounded. The AI clone was created using a clip from Attenborough’s narration of his series “Asia,” raising questions about the technology’s implications for identity and authenticity.

The discussion highlighted the ease with which AI can replicate voices by scraping data from the internet. Dr. Jennifer Williams, an AI audio researcher from the University of Southampton, explained that voice cloning can occur through various methods, including collecting enough audio samples to create a model of a person’s voice. This raises significant concerns about the potential misuse of such technology, as it can be employed for both creative and nefarious purposes.

Attenborough said he was profoundly disturbed by the cloning, emphasizing the importance of truth in his work and objecting to others using his identity without consent. The segment also featured a statement from a website offering AI-generated voices, clarifying that it has no affiliation with Attenborough and that its technology is available for anyone to use. This underscores how accessible voice cloning technology has become, and with it the potential for misuse.

Dr. Williams pointed out that while some may use voice cloning for humor or parody, there are serious risks associated with creating authoritative voices for misinformation or disinformation. The conversation emphasized the need for public awareness regarding the existence and implications of AI-generated voices, as well as the importance of developing legal and regulatory frameworks to protect individuals from identity theft through this technology.

To navigate the challenges posed by AI-generated content, Dr. Williams recommended the “SIFT” method: stop, investigate the source of the information, find other coverage, and trace claims back to their original context. The segment concluded by reiterating the importance of staying vigilant about the authenticity of voices and messages in an era when AI technology is becoming increasingly sophisticated and accessible.