Tech Companies Are Taking Action on AI Election Misinformation. Will it Matter?

  • 📰 TIME


A mobile phone over a keyboard is seen, with the logo of Meta on its screen.

Meta announced Wednesday that it would require labels for political ads that have been digitally altered, using AI or other technology, in ways that could be misleading. The announcement followed a similar move by Microsoft, which said it was also taking a number of steps to protect elections, including offering tools to watermark AI-generated content and deploying a “Campaign Success Team” to advise political campaigns on AI, cybersecurity, and other related issues.

People often overestimate the effects of misinformation because they overestimate both how easy it is to change people’s views on charged issues such as voting behavior and how capable misinformation-enabling technologies such as AI actually are, says Jungherr. The risk can also cut the other way: the mere possibility of AI fakery can be used to cast doubt on genuine material, and this has already happened. In 2019, an allegation that a video of Ali Bongo, then the president of Gabon, was fake was used to help justify an attempted coup.

Watermarking and provenance measures by AI developers are likely to be ineffective because malicious actors can easily access AI models that have been released openly, such as Meta’s Llama 2, says Jungherr. “I would argue that this is an attempt by these companies to avoid negative coverage,” he says. “I'm not necessarily sure that they expect that these tools will shift an election.”
