The new voluntary agreements include commitments to external, third-party testing before an AI product is released and to developing watermarking systems that inform the public when a piece of audio or video material was generated by AI. The Biden administration said these voluntary commitments, which essentially amount to self-policing by tech firms, mark just the first of several steps needed to properly manage AI risk.
For security, the companies are committing to invest in additional cybersecurity and insider-threat safeguards and to support third parties in discovering and reporting vulnerabilities in their systems. Perhaps most interestingly, the tech firms say they will all develop technical mechanisms, such as watermarks, to ensure users know when content is AI-generated. A White House official, speaking on a call with reporters, said these commitments were intended to push back against the threat of deepfakes and build trust among the public. The AI makers have also agreed to prioritize research into the risks of bias, discrimination, and privacy harms their products can pose.
The White House official speaking with Gizmodo and other reporters said the commitments were intended to bring each of these seven companies together under the same set of agreements. And while this does not officially affect the hundreds of other smaller companies working on AI systems, the White House hopes the baseline set here could encourage others in the industry to follow a similar path.
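Neither the White House nor the companies have said how those watermarking mechanisms would actually work. As a rough, purely illustrative sketch of the underlying idea (binding a verifiable provenance claim to a specific piece of content), the Python snippet below attaches an HMAC-signed record to generated media. Every name and key in it is hypothetical, and real watermarks are generally embedded in the audio or pixels themselves rather than carried as separate metadata.

```python
import hashlib
import hmac
import json

# Hypothetical illustration only: a signed provenance record attached to
# generated media. Production watermarking schemes embed signals in the
# content itself; this sketch just shows the signing/verification idea.

SECRET_KEY = b"provider-held-signing-key"  # assumed to be held by the AI provider

def tag_content(media_bytes: bytes, model_name: str) -> dict:
    """Produce a signed provenance record for a piece of generated media."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    record = {"content_sha256": digest, "generator": model_name}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_tag(media_bytes: bytes, record: dict) -> bool:
    """Check that the media matches the record and that the signature is valid."""
    if hashlib.sha256(media_bytes).hexdigest() != record.get("content_sha256"):
        return False
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))

if __name__ == "__main__":
    fake_image = b"\x89PNG...generated pixels..."
    tag = tag_content(fake_image, "example-image-model")
    print(verify_tag(fake_image, tag))        # True: content matches the record
    print(verify_tag(b"edited pixels", tag))  # False: content no longer matches
```

A detached record like this can be stripped as soon as a file is re-encoded or screenshotted, which is part of why robust watermarking research focuses on signals embedded directly in the generated audio or imagery.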