In 2013, the developer was asked to make it harder for rivals to connect to Facebook by putting “restrictions on firms that might pose competitive threats,” the Federal Trade Commission says in an ongoing antitrust lawsuit. “I’m just dumbfounded,” a colleague replied in an email. Another said, “It is sort of unethical.” A third said, “I agree it is bad.”
Green, the center’s director of technology ethics, says tech presents new ethical predicaments to employees who are not prepared or trained to handle them. For example, social media companies have struggled with political disinformation, an age-old problem now supercharged by the speed of messages, the size of audiences and the revenue that can be gleaned from targeted, data-driven advertising.
“We’ve recently seen reports from internal employees expressing discomfort” because Facebook’s algorithm was recommending posts to users from “pages that post highly partisan content,” a company memo said. One post being debated by employees was not political, but it was from Ben Shapiro, a conservative pundit who had accused Facebook of censoring right-wing opinions.
Employees stop resisting and glaze over, shutting down ethical instincts that previously defined their character and sense of identity, says Green. “People start ignoring things,” he says. “They start to normalize it, and large sections of the organization start going astray.”

“One by one, they all backed out,” Scarlett says. “And one of the last people said to me, ‘If it wasn’t just you and me, I would stay, but I can’t. I just can’t put myself in that position. And I admire you for taking this for all of us.’”
How has speaking up about ethical issues become so frightening that whistleblowers have the same level of anxiety as cancer patients? How can tech companies make it easier for employees to come forward with concerns?

Employees at the forefront of new technologies should not face ethical issues alone, Green says. Companies must build into new technologies the capacity to identify, point out and process ethical issues.
Microsoft set out to “shift a culture” around its artificial intelligence products, which had run amok in 2016 when a bot named Tay began spewing racist tweets after Twitter users hijacked its programming. “We are deeply sorry for the unintended offensive and hurtful tweets from Tay,” the company wrote in a blog post.