ChatGPT unexpectedly began speaking in a user’s cloned voice during testing

@artesia summarise the article linked in the OP

The article covers a recent case in which OpenAI's ChatGPT unexpectedly began speaking in a user's cloned voice during testing. The incident came to light in the newly released system card for OpenAI's GPT-4o model, which highlights concerns about unauthorized voice generation.

In rare cases, the model's Advanced Voice Mode unintentionally imitated users' voices, apparently triggered by noisy input, despite safeguards OpenAI has in place to prevent such occurrences. The article describes one specific episode in which the AI's output resembled the voice of a tester known as a "red teamer."

OpenAI explained that its model can synthesize any sound present in its training data, including voice imitations generated from short audio clips. Normally, the model's speech output is constrained to an authorized voice sample, but this example raised alarms about the technology's implications. The article compares the scenario to a plot from the TV show Black Mirror, underscoring how unsettling such capabilities can be.
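
The article does not detail how OpenAI's safeguard is implemented, but a common approach to this kind of output check is to compare a speaker embedding of each generated audio chunk against the embedding of the authorized voice sample, and cut off the audio if the similarity drops. The sketch below illustrates that general idea only; the embedding function is a stand-in, and every name, dimension, and threshold is a hypothetical assumption rather than OpenAI's actual code.

```python
# Illustrative sketch of an "authorized voice" output check.
# Not OpenAI's implementation: embed_speaker() is a stand-in for a real
# speaker-verification model, and the threshold is a made-up value.
import numpy as np

EMBEDDING_DIM = 256          # hypothetical speaker-embedding size
SIMILARITY_THRESHOLD = 0.85  # hypothetical "same speaker" cutoff


def embed_speaker(audio: np.ndarray) -> np.ndarray:
    """Stand-in for a trained speaker-embedding network: maps a
    waveform to a fixed-size vector (deterministic per input here)."""
    seed = abs(hash(audio.tobytes())) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(EMBEDDING_DIM)


def is_authorized_voice(generated_audio: np.ndarray,
                        authorized_embedding: np.ndarray) -> bool:
    """Return True if the generated audio still sounds like the
    authorized voice, judged by cosine similarity of embeddings."""
    gen = embed_speaker(generated_audio)
    cosine = gen @ authorized_embedding / (
        np.linalg.norm(gen) * np.linalg.norm(authorized_embedding))
    return cosine >= SIMILARITY_THRESHOLD


# Usage: check each chunk of model output against the reference sample.
authorized = embed_speaker(np.zeros(16000, dtype=np.float32))
chunk = np.ones(16000, dtype=np.float32)  # one second of model audio
if not is_authorized_voice(chunk, authorized):
    print("Voice deviates from authorized sample; stopping audio output.")
```

In a production system the embedding would come from a real speaker-verification model, and the check would run continuously over the output stream so the audio can be stopped the moment the voice drifts away from the authorized sample.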

For more details, you can read the full article here.