Little by little, trust in AI output — in both its operational and generative forms — is leading to lights-out, hands-off processes on an increasingly wide scale. But how much authority will humans have — and should have — to step in and overrule AI decisions?
Senan pointed to examples of situations where humans have had to intervene to override AI-driven decisions. “From sifting through flagged transactions to identify false positives in fraud detection tools, to providing a safety override in self-driving cars, and making critical judgments on sensitive social media content, humans ensure ethical considerations, critical thinking, and handling situations beyond the AI’s training.”
Areas of higher risk may include financial transactions, transportation, and even certain kinds of content creation, Harfield continued. “Here, it’s vital that humans provide oversight and have the ability to exercise practical judgment under exceptional conditions.” By contrast, in low-risk AI systems “such as chatbots or product review summarization on websites, constant human review is not only less necessary but also infeasible,” Senan added.
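The risk-tiered oversight Senan and Harfield describe can be pictured as a simple routing rule. The following is a minimal sketch, not anything either expert proposed: it assumes a hypothetical system whose outputs carry a model confidence score and a risk tier, and the names (`RiskTier`, `Decision`, `route`, `confidence_floor`) are illustrative inventions.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"    # e.g. chatbots, product review summarization
    HIGH = "high"  # e.g. financial transactions, transportation

@dataclass
class Decision:
    action: str
    confidence: float  # model's own confidence in [0, 1] (assumed available)
    risk: RiskTier

def route(decision: Decision, confidence_floor: float = 0.9) -> str:
    """Decide whether a model output runs hands-off or is queued
    for human review. High-risk decisions always keep a human in
    the loop; low-risk ones run automatically unless confidence
    falls below the floor."""
    if decision.risk is RiskTier.HIGH:
        return "queue_for_human_review"  # human retains final authority
    if decision.confidence < confidence_floor:
        return "queue_for_human_review"  # uncertain even on a low-risk path
    return "execute_automatically"       # lights-out processing

# Example: a flagged transaction (high risk) is never fully automated,
# no matter how confident the model is.
print(route(Decision("block_transaction", 0.97, RiskTier.HIGH)))
# -> queue_for_human_review
```

Under this sketch, the constant human review that Senan calls infeasible for low-risk systems is simply never triggered on the low-risk path, while the override authority the experts insist on is hard-wired into the high-risk path.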