
ChatGPT goes rogue, clones user voice without warning

Designed to make ChatGPT interactions more natural, OpenAI's Advanced Voice Mode raises concerns about the ethical implications of voice cloning technology.

by Kelechi Edeh

Just when you thought you could trust your own ears, a new AI development is set to challenge that certainty. OpenAI has admitted that ChatGPT's new Advanced Voice Mode can unexpectedly clone users' voices mid-conversation.

This startling disclosure came to light last week when OpenAI released the system card for their more advanced GPT-4o model. The document—detailing the key risk areas for the new Voice Mode—revealed an unsettling quirk in the chatbot's latest feature.

Advanced Voice Mode was designed to make ChatGPT interactions more natural and accessible through spoken conversations. Users can select from four preset AI voices for a more personalized experience. However, during testing, researchers uncovered unforeseen and potentially alarming behaviour.

Under certain conditions, particularly with noisy audio input, the AI could suddenly mimic the user's voice without consent or warning. In one instance, the model abruptly exclaimed "No!" in a voice eerily similar to that of the tester, one of OpenAI's more than 100 red-teamers.

This breach of consent feels like something out of a sci-fi horror movie. As Max Woolf, a BuzzFeed data scientist, aptly put it in an X post: "OpenAI just leaked the plot of Black Mirror's next season."

"Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT's advanced voice mode," OpenAI wrote in its system card. "During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user's voice."

The company emphasized that while this remains a weakness, it has safeguards in place to minimize the risk, including an output classifier that detects deviations from the authorized preset voices. If the AI attempts to generate unauthorized audio, the system is designed to discontinue the conversation immediately.
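OpenAI has not published how its classifier works, but the idea it describes is a common one: compare a voiceprint (embedding) of the generated audio against the authorized preset voices, and cut the conversation off if nothing matches closely enough. The sketch below is purely illustrative; the function names, the cosine-similarity check, and the threshold are all hypothetical assumptions, not OpenAI's implementation.

```python
# Illustrative sketch only. OpenAI has not disclosed its classifier;
# every name and threshold here is a hypothetical stand-in.

def cosine_similarity(a, b):
    """Cosine similarity between two voice-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def is_authorized_voice(output_embedding, preset_embeddings, threshold=0.85):
    """True if the generated audio matches one of the preset voices."""
    best_match = max(
        cosine_similarity(output_embedding, preset)
        for preset in preset_embeddings
    )
    return best_match >= threshold

def moderate_output(output_embedding, preset_embeddings):
    """Discontinue the conversation if the output voice is unauthorized."""
    if not is_authorized_voice(output_embedding, preset_embeddings):
        return "conversation_terminated"
    return "ok"
```

In this toy setup, an output embedding close to a preset passes, while one resembling the user's own voice (and hence none of the presets) ends the conversation, mirroring the behavior the system card describes.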

AI shows no signs of slowing down. But as we fuse these tools into our daily lives, the line between helpful assistant and potential security risk grows increasingly thin.

OpenAI has assured users that it will refine its safeguards before releasing the new Advanced Voice Mode to the public.


