Training an artificial intelligence (AI) algorithm requires data—lots of data. But staying GDPR-compliant while acquiring that data can be almost impossible.
Here’s the problem: To make a decision about someone (say, that they like the color blue and should be targeted with blue advertisements), an AI algorithm combines their personal data with other data inside its big black box and spits out an answer. To get the data the AI needs, GDPR requires companies to obtain consent to use that personal data, tell the person exactly what it will be used for, and guarantee it won’t be used for anything else. But companies have no idea what’s happening inside that black box, so true consent becomes a myth.
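To make the black-box point concrete, here is a minimal sketch in Python, assuming a scikit-learn model stands in for whatever a company actually runs; the user features, synthetic data, and the “likes blue” label are all hypothetical:

```python
# Hedged sketch of the black-box problem: an opaque model trained on personal
# data emits a decision with no human-readable rationale. All feature names,
# data, and labels here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical personal data: ten anonymous features per user.
X = rng.random((500, 10))
# Synthetic stand-in for the label "likes blue" (1) or not (0).
y = (X @ rng.random(10) > 2.5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_user = rng.random((1, 10))
decision = model.predict(new_user)[0]  # the box "spits out the answer"
print("Target with blue ads?", bool(decision))

# The answer came from hundreds of trees voting over thousands of learned
# thresholds; nothing here maps to a purpose a person could meaningfully
# consent to in advance.
```

The decision comes back instantly; a reason a person could evaluate, and consent to, does not.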
Article 22 of GDPR complicates the issue by giving consumers the right not to be subject to a decision made solely by an automated process when that decision has legal effects or otherwise “significantly affects” them. It also states that if someone asks how a decision was reached, the company must explain the reasoning. But once again, only the algorithm itself can account for its decision-making.
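And if a company tries to comply by asking the model for its reasoning, the best the standard tooling offers is numbers. Continuing the hypothetical sketch above (restated here so it runs on its own):

```python
# Self-contained continuation of the hypothetical sketch: a model's built-in
# introspection yields importance scores, not reasons.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 10))                    # hypothetical user features
y = (X @ rng.random(10) > 2.5).astype(int)   # synthetic "likes blue" label
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The closest thing to an "explanation": one importance score per feature.
for i, score in enumerate(model.feature_importances_):
    print(f"feature_{i}: {score:.3f}")

# This ranking says which inputs mattered statistically, not why this person
# was classified this way; it is nowhere near the reasoning Article 22 expects
# a company to hand back.
```

Importance scores tell a data scientist which inputs carried weight; they do not give a consumer the “we decided X because of Y” account the regulation contemplates.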
“It's one thing to look at a picture and say it's a cat or a dog. It's a totally different thing when you reject a medical claim as an insurance provider, the patient dies, and you're looking at a $40 million lawsuit,” said Ganesh Padmanabhan, vice president and head of marketing and business development at CognitiveScale.
Adobe’s product marketing manager Tatiana Mejia echoed his sentiment: “It can be particularly sensitive depending on the industry or geography, and there’s a responsibility to give insight into what can be in a black box.”