Explain Yourself, AI

As use of AI grows (27% of executives in a PwC study said their companies have already implemented AI), so do calls for ways to interpret how AI models make decisions. This has given rise to a new buzzword: explainable AI, which refers to algorithms whose decisions humans can explain. PwC, for example, says explainable AI "integrates risk mitigation and ethical concerns into algorithms and data sets from the start."
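The idea is easiest to see with a model whose output can be traced back to its inputs. The sketch below is a minimal, hypothetical illustration, not tied to PwC, Fiddler, or any vendor's product: the feature names and weights are invented, and it simply shows how a linear scoring model's decision can be broken into per-feature contributions a person can read.

```python
# Minimal illustration of an "explainable" decision: a linear credit-scoring
# model whose output can be decomposed into per-feature contributions.
# All feature names and weights here are hypothetical, for illustration only.

WEIGHTS = {"income": 0.6, "debt_ratio": -1.2, "years_employed": 0.3}
BIAS = -0.5

def score(applicant):
    """Return the raw score plus a per-feature breakdown of how it was reached."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    total = BIAS + sum(contributions.values())
    return total, contributions

applicant = {"income": 1.4, "debt_ratio": 0.8, "years_employed": 2.0}
total, contributions = score(applicant)

print(f"decision: {'approve' if total > 0 else 'decline'} (score={total:.2f})")
for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name:>15}: {value:+.2f}")
```

With more complex models, such as deep neural networks, the breakdown is not this direct, which is why tooling that attributes a model's decisions to its inputs has become a product category of its own.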

Sixty-one percent of executives surveyed by PwC said creating transparent, explainable, and provable AI methods was a step they planned to take in 2019.

Krishna Gade, a former engineering manager on Facebook’s News Feed, and Amit Paka, who worked on Samsung’s shopping apps, saw a need for a platform that could clearly explain to company stakeholders how AI models make decisions. They founded Fiddler Labs to do just that.

Samsung used machine-learning systems to recommend products to users, but it was difficult to measure return on investment and compare new models to older ones, Paka told CNBC. At Facebook, Gade’s challenge was measuring how well the News Feed was working on any given day. "We needed to build tools and platforms to unlock this thing and provide those insights to an engineer all the way to an executive within Facebook," he said.

A February 2018 survey from McKinsey & Co. indicates that the new company is on to something. Of the 1,646 professionals surveyed, 24% thought that uncertain or low expectations for return on AI investment were significant barriers to their organization’s adoption of AI.

The arrival of the European Union’s General Data Protection Regulation (GDPR) further complicates things. Article 22 of the regulation maintains that Europeans have a right to know how an automated decision involving them was reached, and the right to know if and how an automated process is using their personal information.

"You have to start thinking about, 'How do I deploy AI knowingly, giving it ownership of data rights and make sure it’s compliant with rules and regulations?'" said Ganesh Padmanabhan, vice president and head of marketing and business development at Cognitive Scale.

Not knowing how an AI system reaches its decisions could be a costly mistake. The PwC report found that 34% of executives surveyed were concerned about the new liabilities AI presented, and 37% said ensuring AI systems were trustworthy was their top priority.

"Behind the Numbers" Podcast