Thanks to a bevy of easily accessible online tools, just about anyone with a computer can now pump out, with the click of a button, artificial-intelligence-generated images, text, audio and videos that convincingly resemble those created by humans. One big result is an online content crisis, an enormous and growing glut of unchecked, machine-made material riddled with potentially dangerous errors, misinformation and criminal scams.
And public figures can now lean on the mere possibility of deepfakes, videos in which AI is used to make someone appear to say or do something they never did, to try to dodge responsibility for things they really say and do. In a recent filing for a lawsuit over the death of a driver, lawyers for electric car company Tesla attempted to claim that a real 2016 recording, in which CEO Elon Musk made unfounded claims about the safety of self-driving cars, could have been a deepfake.
Adding a digital watermark to an AI-produced item isn’t as simple as, say, overlaying visible copyright information on a photograph. To digitally mark images and videos, small clusters of pixels can be slightly color-adjusted at pseudorandom locations to embed a sort of barcode, one that is detectable by a machine but effectively invisible to most people. For audio material, similar trace signals can be embedded directly in the waveform.
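The pixel-cluster idea can be sketched in a few lines of Python. Everything below, including the 4×4 cluster size, the choice of the blue channel, and the key-seeded cluster positions, is a hypothetical toy scheme for illustration, not any vendor's actual watermark. Note also that this sketch compares against the original image to read the mark back (a "non-blind" scheme); production watermark detectors must work without the original.

```python
import numpy as np

CLUSTER = 4  # side length of each marked pixel cluster (assumption)

def embed_watermark(image, bits, key=42, strength=2):
    """Toy sketch: nudge the blue channel of key-chosen, non-overlapping
    4x4 clusters up (bit 1) or down (bit 0) by `strength` levels.
    The shift is far too small for most viewers to notice."""
    rng = np.random.default_rng(key)  # the secret key seeds the positions
    h, w, _ = image.shape
    by, bx = h // CLUSTER, w // CLUSTER
    # pick one distinct grid cell per bit so clusters never overlap
    cells = rng.choice(by * bx, size=len(bits), replace=False)
    marked = image.astype(np.int16).copy()
    for bit, cell in zip(bits, cells):
        y, x = (cell // bx) * CLUSTER, (cell % bx) * CLUSTER
        marked[y:y + CLUSTER, x:x + CLUSTER, 2] += strength if bit else -strength
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect_watermark(original, suspect, num_bits, key=42):
    """Recover the bits by comparing per-cluster means; the same key
    must be used to regenerate the cluster positions."""
    rng = np.random.default_rng(key)
    h, w, _ = original.shape
    by, bx = h // CLUSTER, w // CLUSTER
    cells = rng.choice(by * bx, size=num_bits, replace=False)
    bits = []
    for cell in cells:
        y, x = (cell // bx) * CLUSTER, (cell % bx) * CLUSTER
        diff = (suspect[y:y + CLUSTER, x:x + CLUSTER, 2].astype(int).mean()
                - original[y:y + CLUSTER, x:x + CLUSTER, 2].astype(int).mean())
        bits.append(1 if diff > 0 else 0)
    return bits
```

Averaging over a whole cluster, rather than reading single pixels, is what gives even this toy version a little robustness: mild noise or compression perturbs individual pixels but tends to leave the cluster mean's sign intact.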
There are other difficulties, too. “It becomes a humongous engineering challenge,” Kerschbaum says. Watermarks must be robust enough to withstand general editing, as well as adversarial attacks, but they can’t be so disruptive that they noticeably degrade the quality of the generated content. Tools built to detect watermarks also need to be kept relatively secure so that bad actors can’t use them to reverse-engineer the watermarking protocol.
Ultimately, building an infallible watermarking system seems impossible, and every expert Scientific American interviewed on the topic says watermarking alone isn’t enough. When it comes to misinformation and other AI abuse, watermarking “is not an elimination strategy,” Farid says. “It’s a mitigation strategy.” He compares watermarking to locking the front door of a house. Yes, a burglar could break down the door, but the lock still adds a layer of protection.