The new voluntary agreements include commitments to external, third-party testing before an AI product is released and to developing watermarking systems that inform the public when a piece of audio or video was generated by AI. The Biden administration said these voluntary commitments, which essentially amount to self-policing by tech firms, mark just the first of several steps needed to properly manage AI risk.
For security, the companies are committing to invest in additional cybersecurity and insider threat safeguards, and they agree to support third parties in discovering and reporting vulnerabilities in their systems. Perhaps most interestingly, the tech firms say they will all develop technical mechanisms, like watermarks, to ensure users know when content is AI-generated. A White House official speaking on the phone said these commitments were intended to push back against the threat of deepfakes and build trust among the public. Similarly, the AI makers have to prioritize research into the risks of bias, discrimination, and privacy violations their products can pose.
The White House official speaking with Gizmodo and other reporters said the commitments were intended to bring each of these seven companies together under the same set of agreements. And while this does not officially affect the hundreds of other smaller companies working on AI systems, the White House hopes the baseline set here could encourage others in the industry to follow a similar path.