The ChatGPT creator says Voice Engine can replicate a person’s voice from a 15-second audio sample.
OpenAI has unveiled a tool for cloning people’s voices but is holding back on its public release due to concerns about possible misuse in a key election year.
Voice Engine can replicate a person’s voice based on a 15-second audio sample, according to an OpenAI blog post demonstrating the tool.
But the ChatGPT creator is “taking a cautious and informed approach” to the technology and hopes to start a dialogue on “the responsible deployment of synthetic voices”, the company said in the blog post published on Friday.
“We recognize that generating speech that resembles people’s voices has serious risks, which are especially top of mind in an election year,” the San Francisco-based start-up said.
“We are engaging with U.S. and international partners from across government, media, entertainment, education, civil society and beyond to ensure we are incorporating their feedback as we build.”
We’re sharing our learnings from a small-scale preview of Voice Engine, a model which uses text input and a single 15-second audio sample to generate natural-sounding speech that closely resembles the original speaker. https://t.co/yLsfGaVtrZ
— OpenAI (@OpenAI) March 29, 2024
OpenAI said it would make “a more informed decision” about deploying the technology at scale based on testing and public debate.
The company added that it believes the technology should only be rolled out with measures that ensure the “original speaker is knowingly adding their voice to the service” and prevent the “creation of voices that are too similar to prominent figures”.
The misuse of AI has emerged as a major concern ahead of elections this year in countries representing about half the world’s population.
Voters in more than 80 countries, including Mexico, South Africa and the United States, are going to the polls in 2024, which has been dubbed the biggest election year in history.
The influence of AI on voters has already come under scrutiny in several elections.
Pakistan’s jailed former Prime Minister Imran Khan used AI-generated speeches to appeal to supporters in the run-up to the country’s parliamentary elections in February.
In January, a political operative for the long-shot US presidential candidate Dean Phillips put out an AI-generated robocall impersonating US President Joe Biden that urged voters not to cast their ballots in New Hampshire’s Democratic Party primary.
OpenAI said it had implemented several safety measures for its partners testing Voice Engine, “including watermarking to trace the origin of any audio generated by Voice Engine, as well as proactive monitoring of how it’s being used”.