In today's episode of What Could Go Wrong: AI Edition, a tech company that uses artificial intelligence to mimic voices is adding more "safeguards" to its tech after it was used to generate clips of celebrities reading offensive content. ElevenLabs is a research company specializing in AI speech software that generates realistic-sounding voices to voice-over audiobooks, games, and articles in any language.
ElevenLabs says it has been taking steps to keep Voice Lab from being used for "malicious purposes," outlining in a lengthy post how it plans to keep its tech out of the wrong hands. ElevenLabs claims it has "always had the ability to trace any generated audio clip back to a specific user." Next week it will release a tool that will allow anyone to confirm that a clip was generated using its technology and report it.
The company says the malicious content was created by "free anonymous accounts," so it will add a new layer of identity verification. Voice Lab will be made available only on paid tiers, and the free version will be removed from the site immediately. ElevenLabs is currently tracking and banning any account that creates harmful content in violation of its policies.
One step closer to Terminators....
Never rely on your users making the right and ethical choices
We are going to use AI to kill each other well before it becomes sentient
Tech companies will make the internet unusable and then wonder why people are leaving
Boo! jannies had to ruin everyone's fun
We have gone too far, this tech hell will only get worse with time 🙄
at least we can still count on trump to say shit without needing AI