The adoption of artificial intelligence (AI) is accelerating, with many industries already using it in their business operations. AI plays an increasingly influential role in business and daily life, making decisions, recommendations, and predictions for people. However, AI has also raised a range of concerns, and black box AI is one of them.
Many AI applications operate in ways that are unexplainable and opaque to humans, as if their work were done inside a black box. People cannot explain why an AI system arrives at a specific decision. This gives rise to the black box problem: a lack of trust and confidence in AI algorithms and the decisions they make.
Take online bank loans as an example. AI is growing popular for processing loan applications online, and its automated processing may refuse a person's application without offering any clues as to why it was rejected.
According to a 2017 global PwC survey, 76% of CEOs are most concerned about the potential for bias and the lack of transparency when it comes to AI adoption.
Some AI experts say the black box AI conundrum mostly arises from the way people train AI systems, particularly in deep learning. Deep learning requires minimal human intervention, as the system learns by identifying patterns in the data and information it can access.
Problematic for finance & insurance sectors
“The problem of black box AI is in general quite serious, especially with the current deep learning trend,” said Andy Chun, convener of the artificial intelligence specialist group of the Hong Kong Computer Society and associate professor in the department of computer science at the City University of Hong Kong (CityU).
In Hong Kong, the finance and insurance industries are probably the most likely to be affected by the black box AI problem, according to Chun. These industries are increasingly using machine learning in fraud detection, investment advice, portfolio management, algorithmic trading, and loan or insurance underwriting. Bias can also result from these machine learning and AI systems.
Machine learning learns from data, so it’s going to replicate any biases in the data set. “If the data models themselves contain biases then the results from AI machine learning will potentially also be biased,” he said.
“The black box AI phenomenon is particularly problematic for consumer facing applications,” Chun added. “For example, if a loan or insurance policy got rejected because of AI recommendations, the consumer would want to know why.” In these situations, humans have to be involved in reviewing the AI algorithm or offering explanations to the consumer.
Echoing the same sentiment, Samson Tai, IBM Hong Kong’s distinguished engineer and CTO, said, “Biases arise mainly from problems in data processes rather than training algorithms. It’s important to be aware of the issue of biases in data sets.”
Tai cited the example of training an image recognition system to identify CEOs. If images of white, male CEOs dominate the training data set, the system will likely associate CEOs with white men more strongly than with black men or women.
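Tai's point can be made concrete with a small sketch. The toy data set below is entirely hypothetical, with an imbalance invented for illustration: a naive frequency-based "model" trained on it ends up associating the CEO label far more strongly with one group than the other, purely because of how the data were collected.

```python
from collections import Counter

# Hypothetical toy training set: each record is (gender, is_ceo).
# The imbalance below is an assumption made purely for illustration.
training_data = (
    [("male", True)] * 90 + [("male", False)] * 60 +
    [("female", True)] * 10 + [("female", False)] * 140
)

# A naive frequency-based "model": P(CEO | gender) estimated from counts.
counts = Counter(training_data)
totals = Counter(gender for gender, _ in training_data)

def p_ceo(gender):
    return counts[(gender, True)] / totals[gender]

print(f"P(CEO | male)   = {p_ceo('male'):.2f}")    # 0.60
print(f"P(CEO | female) = {p_ceo('female'):.2f}")  # 0.07
```

Nothing in the learning step is "unfair" in itself; the skew comes entirely from the data, which is why Tai stresses auditing data sets rather than only the training algorithms.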
Impact of GDPR
Chun added that the European Union’s General Data Protection Regulation (GDPR) will have a great impact on the black box AI problem. Provisions such as the right to explanation and the right to be forgotten will pose a challenge to companies.
“If certain decisions are made by automated processing, including AI, that has legal implications, the data subject has a right to an explanation on the reasons and logic involved in making those decisions,” Chun said. “Obviously, this is problematic for ‘black box’ AI, as the name implies.”
GDPR also gives data subjects the right to be forgotten. They have the right to request a company to erase their own personal data from its systems. If those data were used in machine learning, Chun doubts whether removing just the raw data will be sufficient.
“If face recognition software has already learned how to recognize a particular person, it is unclear how a ‘black box’ AI can ‘unlearn’ that face,” Chun said.
Industry experts offer advice on how enterprises can unleash AI’s potential while keeping it responsible and transparent.
“Technologies such as Explainable AI (EAI) will one day provide ways to generate explanations from different AI approaches,” said Chun. “However, most of that work is still in the research phase. This is particularly troubling for applications that make use of AI deep learning techniques.”
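As a rough illustration of what an explainable approach might produce, the sketch below uses a hypothetical interpretable loan-scoring model; the feature names, weights, and threshold are invented for illustration and do not represent any real lender's model. Because the model is linear, each feature's contribution to the score can be reported back as the "reasons and logic" behind a decision:

```python
# Hypothetical interpretable loan-scoring model: all weights, features,
# and the threshold are illustrative assumptions, not a real bank's model.
WEIGHTS = {"income_hkd_k": 0.8, "years_employed": 1.5, "missed_payments": -4.0}
THRESHOLD = 30.0

def score_and_explain(applicant):
    # Per-feature contributions to the score, which double as the explanation.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "rejected"
    # Rank features by how much they pulled the score down.
    reasons = sorted(contributions.items(), key=lambda kv: kv[1])
    return decision, score, reasons

decision, score, reasons = score_and_explain(
    {"income_hkd_k": 25, "years_employed": 2, "missed_payments": 3}
)
print(decision, score)  # rejected 11.0
for feature, contribution in reasons:
    print(f"  {feature}: {contribution:+.1f}")
```

A deep learning model offers no such per-feature breakdown out of the box, which is precisely the gap that explainability research tries to close.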
IBM has been working continuously to deliver AI services that are responsible, unbiased, and explainable.
“It is necessary to enhance machine learning algorithms while making sure data are unbiased,” said Tai. “We are developing algorithms to address dataset biases. In the next four to five years, we expect to see more mature algorithms to minimize bias in data and AI models.”
PwC has developed a responsible AI framework that aims to help companies build trust and confidence in AI while unleashing its potential. The framework provides a mechanism for companies to ensure effective monitoring and stewardship of AI outcomes.
Companies that use third-party tools or services to build their AI models have to be cautious to avoid bias while protecting data privacy.
“For companies that leverage third-party services, they have to assess whether those services are biased or not,” said Tai.
“If an organization is using third-party AI tools and libraries, it will need to make sure whether those tools and libraries provide features to support GDPR requirements,” said Chun.
Chun added, “Companies might consider using ‘privacy preserving’ approaches to AI processing and learning, where an individual’s personal data or identity need not be uploaded to the cloud for processing.”
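One way to read Chun's suggestion is a federated-style training pattern, sketched below with invented numbers: each device computes a model update on the user's own data locally, and only the aggregated updates, never the raw personal data, reach the server.

```python
# Sketch of a "privacy preserving" pattern: each device computes a model
# update locally, so raw personal data never leaves the device. The data
# values and the one-parameter model are illustrative assumptions.

def local_update(personal_data, weight):
    # Runs on the user's device; personal_data is never uploaded.
    gradient = sum(x - weight for x in personal_data) / len(personal_data)
    return gradient

# Three users' private data stay on their own devices.
devices = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
weight = 0.0

# The server only ever sees the per-device updates, not the data itself.
updates = [local_update(data, weight) for data in devices]
weight += sum(updates) / len(updates)
print(round(weight, 2))  # 3.5
```

The design choice is that aggregation happens over model updates rather than over data, which is what lets the learning step proceed without centralizing anyone's personal records.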