The Federal Trade Commission will come down hard on businesses whose AI tools harm consumers, officials said Tuesday.
Regulators are scrutinizing AI tools that businesses have used to make hiring decisions or to decide who gets a loan. They're also watching tools that can generate text, images, voice and even video, working to ensure consumers don't fall prey to mass deception or closely targeted misinformation.
FTC Chair Lina Khan also said during a press event that the FTC will keep big companies honest as they race for resources in the growing AI field. "A handful of powerful firms today control the necessary raw materials, not only the vast stores of data, but also the cloud services and computing power that startups and other businesses rely on to develop and deploy AI products," Khan said. She warned that while AI may offer novel capabilities, regulators will still hold it to the same standards as other business tools.
Regulators elsewhere in the world are also trying to keep up with AI's progress.
The EU's proposed AI Act would sort the growing uses of AI tools into different risk categories, banning those considered most dangerous outright and subjecting other high-risk applications, such as AI-powered hiring decisions, to stricter requirements.