Such responses are possible because, while AI can generate coherent text free of grammatical errors, it does not understand language, he opined. It does not know what is real and what is not, and so generative AI often fabricates facts and produces inappropriate responses.
Christina Montgomery, IBM's chief privacy and trust officer, urged Congress to establish rules governing the deployment of AI in specific use cases rather than wide-ranging policies that could restrict innovation. She suggested the US government apply "different rules for different risks." Senator John Kennedy even went so far as to ask Altman whether he was qualified to lead such a group himself. "I love my current job," he replied.
Hawley said he wanted to make it easier for consumers harmed by AI (he suggested harms such as the generation of medical or election misinformation) to launch class-action lawsuits against the companies that built the technology. Montgomery was more optimistic about the US government's ability to keep the technology in check, and expressed her belief that current regulators can address current concerns. "I think we don't want to slow down regulation to address real risks right now. We have existing regulatory authorities in place, who have been clear that they have the ability to regulate in their respective domains."