AI is the subject of constant controversy. Some are quick to argue that recent events highlight a need for government intervention, but in fact, they show the opposite: the market working to discipline AI, prompting companies to improve their models and make them more useful and appropriate.
In both Google's and Meta's cases, the models failed and feedback from the market catalyzed the companies to deploy a solution. Unfortunately, both failures resulted from the companies' teams aiming to make their models "woke." Google's "Objectives for AI" list includes all sorts of do-good phrases, such as "be socially beneficial" and "avoid creating or reinforcing unfair bias," but nowhere on the list does "be truthful and accurate" appear.
Neither Google's nor Meta's model was answering prompts accurately, and feedback from market participants prompted the companies to correct their mistakes and override much of the models' flawed behavior.

Some models, such as Microsoft's image creator, have also had problems with producing violent or sexual material. An engineer from Microsoft has publicly raised these concerns about the company's image applications.