IBM is trying to demystify the questions around the technology in a number of ways. But one problem remains: defining what a fair model is. To address that issue, IBM introduced "AI Fairness 360," a library of algorithms that can be used to check whether a data set is biased.
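AI Fairness 360 bundles many such metrics; one of the simplest is disparate impact, the ratio of favorable-outcome rates between groups. The sketch below computes it in plain Python rather than through the library itself, and the toy loan records and the "sex" attribute are invented for illustration.

```python
# Minimal sketch of one fairness metric -- disparate impact -- of the
# kind that toolkits like AI Fairness 360 compute over a labeled data set.
# The records below are toy data, not drawn from any real data set.

def disparate_impact(records, protected_attr, privileged_value, label_key="approved"):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group.

    A value far below 1.0 (a common rule of thumb is < 0.8) suggests the
    data set is biased against the unprivileged group.
    """
    priv = [r for r in records if r[protected_attr] == privileged_value]
    unpriv = [r for r in records if r[protected_attr] != privileged_value]
    rate = lambda group: sum(r[label_key] for r in group) / len(group)
    return rate(unpriv) / rate(priv)

# Toy loan data: approval rates differ sharply between the two groups.
data = [
    {"sex": "male", "approved": 1}, {"sex": "male", "approved": 1},
    {"sex": "male", "approved": 1}, {"sex": "male", "approved": 0},
    {"sex": "female", "approved": 1}, {"sex": "female", "approved": 0},
    {"sex": "female", "approved": 0}, {"sex": "female", "approved": 0},
]

print(round(disparate_impact(data, "sex", "male"), 2))  # 0.25 / 0.75 -> 0.33
```

A check like this runs before a model is ever trained: if the training data itself skews against a group, any model fit to it is likely to inherit that skew.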
"You actually grow this culture of understanding AI biases. And as we all evolve then eventually, maybe one day, it's not going to be a problem," Saska Mojsilovic, who heads the Foundations of Trusted AI group at IBM, told Business Insider.
Explaining the AI is also a challenge. Say a financial institution uses an algorithm to determine whether an individual qualifies for a loan. If the application is denied, that company needs to be able to outline to the customer the reasoning behind the decision. IBM recently introduced a set of algorithms known as "AI Explainability 360" that provide insight into how AI models come to a final conclusion, including one that outlines what information was used to reach the decision. It also shows which features, had they been present, would have reversed the choice. So if a loan application is denied, the algorithm could provide a route for a customer to improve their chances the next time.
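The "what would have reversed the choice" idea can be sketched as a contrastive explanation: for a denied application, report how much each feature would need to improve to flip the decision. The scoring weights, threshold, and applicant below are hypothetical, and a real credit model would be far more complex than this linear rule.

```python
# Toy contrastive explanation in the spirit the article describes: given
# a denied application and a simple (entirely hypothetical) linear scoring
# rule, report the single-feature increase that would flip the decision.

WEIGHTS = {"income_k": 0.5, "credit_score": 0.1, "years_employed": 2.0}
THRESHOLD = 100.0  # score >= THRESHOLD -> approved

def score(applicant):
    """Weighted sum of the applicant's features."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def contrastive_explanation(applicant):
    """For each feature, how much it alone would need to rise to approve."""
    shortfall = THRESHOLD - score(applicant)
    if shortfall <= 0:
        return {}  # already approved; nothing to change
    return {feat: round(shortfall / w, 2) for feat, w in WEIGHTS.items()}

applicant = {"income_k": 40, "credit_score": 600, "years_employed": 1}
print(score(applicant))                   # 40*0.5 + 600*0.1 + 1*2.0 = 82.0
print(contrastive_explanation(applicant))
# {'income_k': 36.0, 'credit_score': 180.0, 'years_employed': 9.0}
```

The output reads directly as actionable advice: earn $36k more, raise the credit score by 180 points, or stay employed 9 more years, and the application would clear the bar.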