The new voluntary agreements include commitments to external, third-party testing prior to releasing an AI product and the development of watermarking systems to inform the public when a piece of audio or video material was generated using AI systems. The Biden administration said these voluntary commitments, which essentially amount to self-policing by tech firms, mark just the first of several steps needed to properly manage AI risk.
For security, the companies are committing to invest in additional cybersecurity and insider-threat safeguards and to support third parties in discovering and reporting vulnerabilities in their systems. Perhaps most interestingly, the tech firms say they will all develop technical mechanisms, such as watermarks, to ensure users know when content is AI-generated. A White House official, speaking on a press call, said these commitments were intended to push back against the threat of deepfakes and build trust among the public. Similarly, the AI makers have agreed to prioritize research into the risks of bias, discrimination, and privacy harms their products can pose.
The White House official, speaking with Gizmodo and other reporters, said the commitments were intended to bring each of these seven companies together under the same set of agreements. And while the deal does not formally bind the hundreds of other, smaller companies working on AI systems, the White House hopes the baseline set here could encourage others in the industry to follow a similar path.