Unconscious bias is one of the ways in which the human brain makes sense of a complex environment. These biases sort information into categories for quicker decision-making.
In a world where survival was at stake, unconscious bias often helped human beings stay alive. An unconscious affinity for healthy green plants over wilted, yellowing ones, for example, steered people toward more nutritious food while freeing up mental energy for tasks other than assessing each plant individually.
Today, however, unconscious bias can hinder effective decision-making. Trained correctly, artificial intelligence (AI) can help insurance professionals sidestep the negative effects of underlying bias.
Media coverage of biased behavior by artificial intelligence algorithms makes for interesting reading, but it also tends to obscure or distort the current state of AI and bias.
AI does risk replicating biases present in its training data. Studies of early AI use in hiring, for instance, found that when a business primarily hired workers from a handful of schools, artificial intelligence was more likely to report that attending one of those schools was an essential factor in hiring success. Unlike humans, AI doesn’t have a store of life experience to draw from; it knows only what its training data can tell it.
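The hiring example can be sketched in a few lines. The data and school names below are hypothetical, and the "model" is simply a tally of historical hire rates, but it shows how a feature like school can look predictive merely because past decisions favored it:

```python
# A minimal sketch, using invented data, of how a model inherits bias
# from its training data: if past hiring favored one school, "school"
# appears predictive even though it only proxies for past decisions.
from collections import defaultdict

# Hypothetical historical hiring records: (school, was_hired)
training_data = [
    ("State U", True), ("State U", True), ("State U", True),
    ("State U", False),
    ("City College", True), ("City College", False),
    ("City College", False), ("City College", False),
]

def hire_rate_by_school(records):
    """Estimate P(hired | school) from historical records."""
    counts = defaultdict(lambda: [0, 0])  # school -> [hired, total]
    for school, hired in records:
        counts[school][0] += int(hired)
        counts[school][1] += 1
    return {s: hired / total for s, (hired, total) in counts.items()}

rates = hire_rate_by_school(training_data)
# A naive model built on these rates "learns" that State U applicants
# succeed far more often -- not because the school causes success, but
# because past decisions favored it.
print(rates)  # {'State U': 0.75, 'City College': 0.25}
```

Any real learning algorithm trained on such records would pick up the same skew; the tally just makes the mechanism visible.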
Aware of the bias risks in AI, researchers continue to address sources of algorithmic bias. Tom Bigham and fellow researchers at Deloitte recommend that insurers consider three key points when weighing the use of AI versus other technologies for a task.
Staying aware of bias concerns is one of the best ways human users can prepare for and address bias in AI, and it positions them to use the tool to combat their own unconscious biases.
Combating bias in artificial intelligence isn’t as simple as creating unbiased data sets or monitoring the creation of machine learning processes. Instead, insurers must consider the entire context in which AI tools operate — including how these tools intersect with human efforts.
“If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology,” says Reva Schwartz at the National Institute of Standards and Technology.
AI systems operate in context. In insurance, that context is the world of insurance professionals and agents, decisions made by customers, and the vast stores of data that support underwriting and distribution decisions. Insurance professionals must continually monitor AI systems to ensure that their analyses and recommendations adhere both to good data science practices and to the organization’s business ethics.
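That kind of monitoring can be partly automated. The sketch below is a simple demographic-parity style check, with hypothetical group labels and a tolerance the insurer would choose: it compares a model's approval rates across applicant groups and flags disparities that warrant human review.

```python
# A minimal sketch of an ongoing bias check, assuming hypothetical
# group labels and a disparity tolerance set by the organization.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(decisions, tolerance=0.10):
    """Return True if any two groups' approval rates differ by more
    than `tolerance`, signalling the model's output needs review."""
    rates = approval_rates(decisions).values()
    return (max(rates) - min(rates)) > tolerance

# Hypothetical underwriting decisions: (applicant group, approved)
log = [("A", True), ("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False), ("B", False)]
print(flag_disparity(log))  # True: 0.75 vs. 0.25 exceeds the 0.10 tolerance
```

A check like this does not decide whether a disparity is justified; it only surfaces the pattern so that insurance professionals can apply their data-science practices and business ethics to the answer.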
Effective use of artificial intelligence can help combat humans’ unconscious biases in several ways.
Used effectively, AI can counter human tendencies toward unconscious bias, helping insurers provide needed coverage while balancing costs and claims, write researchers Martin Mullins, Christopher P. Holland and Martin Cunneen in a 2021 issue of Patterns. As insurance becomes more accessible, the insurance gap shrinks, reducing the catastrophic impact of losses.
All AI is created by humans. As a result, it may replicate our own biases. Used thoughtfully, however, AI can also provide safeguards against our own unconscious behavior and decision-making shortcuts.