Arati Prabhakar, left, Director of the White House Office of Science and Technology Policy, and Jennifer Klein, Director of the White House Gender Policy Council, are shown in 2023 file photos. Klein and Prabhakar are co-authors of a Thursday announcement calling on the tech industry and financial institutions to commit to new measures to curb the creation of AI-generated nonconsensual sexual imagery.
“As generative AI broke on the scene, everyone was speculating about where the first real harms would come. And I think we have the answer,” said Biden’s chief science adviser Arati Prabhakar, director of the White House’s Office of Science and Technology Policy. A document shared with the AP ahead of its Thursday release calls for action not just from AI developers but also from payment processors, financial institutions, cloud computing providers, search engines and the gatekeepers — namely Apple and Google — that control what makes it onto mobile app stores.
“But sometimes it’s not enforced; sometimes they don’t have those terms of service,” she said. “And so that’s an example of something that could be done much more rigorously.”

The most widely known victim of pornographic deepfake images is Taylor Swift, whose ardent fanbase fought back in January when abusive AI-generated images of the singer-songwriter began circulating on social media. Microsoft promised to strengthen its safeguards after some of the Swift images were traced to its AI visual design tool.

Schools across the country have also grappled with AI-generated deepfake nudes depicting their students. In some cases, fellow teenagers were found to be creating AI-manipulated images and sharing them with classmates.