Tech Companies' New Favorite Solution for the AI Content Crisis Isn't Enough

  • 📰 sciam

From college plagiarism to cybercrime scams, generative AI is eroding trust in online content. Digital watermarking is no quick fix for the problem

Thanks to a bevy of easily accessible online tools, just about anyone with a computer can now pump out, with the click of a button, artificial-intelligence-generated images, text, audio and videos that convincingly resemble those created by humans. One big result is an online content crisis, an enormous and growing glut of unchecked, machine-made material riddled with potentially dangerous errors, misinformation and criminal scams.

And public figures can now lean on the mere possibility of deepfakes—videos in which AI is used to make someone appear to say or do something they never did—to try to dodge responsibility for things they really say and do. In a recent filing for a lawsuit over the death of a driver, lawyers for electric car company Tesla attempted to claim that a real 2016 recording in which its CEO Elon Musk made unfounded claims about the safety of self-driving cars could have been a deepfake.

Adding a digital watermark to an AI-produced item isn’t as simple as, say, overlaying visible copyright information on a photograph. To digitally mark images and videos, small clusters of pixels can be slightly color-adjusted at random to embed a sort of barcode—one that is detectable by a machine but effectively invisible to most people. For audio material, similar trace signals can be embedded in the sound waveform.
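As a rough illustration of the idea—not any vendor's actual scheme—here is a minimal Python sketch of that kind of marking. It nudges pseudorandomly chosen pixel values by a tiny, imperceptible amount to encode a bit pattern; a secret key determines which positions carry the bits. The function names are hypothetical, and this toy version is "non-blind" (detection needs the unmarked original for comparison), whereas production systems typically detect blindly and add error-correcting codes and frequency-domain embedding to survive compression and editing.

```python
import random

def embed_watermark(pixels, key, payload_bits, strength=2):
    """Encode payload_bits by nudging pixel values at key-selected positions.

    `pixels` is a flat list of 0-255 intensity values (a stand-in for an
    image). A nudge of +/-2 out of 255 is effectively invisible to the eye.
    """
    rng = random.Random(key)  # the key makes the positions reproducible
    marked = list(pixels)
    positions = rng.sample(range(len(pixels)), len(payload_bits))
    for pos, bit in zip(positions, payload_bits):
        delta = strength if bit else -strength
        marked[pos] = max(0, min(255, pixels[pos] + delta))  # clamp to range
    return marked

def detect_watermark(marked, original, key, n_bits):
    """Recover the bits by comparing marked pixels to the original
    at the same key-selected positions (non-blind detection)."""
    rng = random.Random(key)
    positions = rng.sample(range(len(original)), n_bits)
    return [1 if marked[p] >= original[p] else 0 for p in positions]
```

Even this toy version shows the core trade-off the article describes: a larger `strength` makes the mark easier to detect after edits but also easier to see, while a smaller one is invisible but fragile.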

There are other difficulties, too. “It becomes a humongous engineering challenge,” Kerschbaum says. Watermarks must be robust enough to withstand general editing, as well as adversarial attacks, but they can’t be so disruptive that they noticeably degrade the quality of the generated content. Tools built to detect watermarks also need to be kept relatively secure so that bad actors can’t use them to reverse-engineer the watermarking protocol.

Ultimately, building an infallible watermarking system seems impossible—and every expert Scientific American interviewed on the topic says watermarking alone isn’t enough. When it comes to misinformation and other AI abuse, watermarking “is not an elimination strategy,” Farid says. “It’s a mitigation strategy.” He compares watermarking to locking the front door of a house. Yes, a burglar could break down the door, but the lock still adds a layer of protection.

 
