Racial Bias in Artificial Intelligence Restricts Vital Access to Healthcare and Financial Services, Says Data Scientist - Taylor & Francis Newsroom


Book publication announcement

9th March 2023

Terrelle and Lorraine were turned down for a home loan they could easily afford, despite each earning a six-figure salary and having an excellent credit history. 

Frederick was discharged from the hospital as fit and healthy, only to later suffer a heart attack. 

Eva was preparing to vote in a hotly contested election for the first time in her life but was targeted with fake Facebook posts telling her the polling station was closed. 

What do all these people have in common? They are People of Color, and victims of racial bias in AI. 

These are just some examples given by a leading data science expert who has analyzed the depths of systemic racism in AI and suggested the ways in which the biases can be confronted. 

A pervasive threat 

Artificial intelligence is a pervasive part of modern-day life and is used by vital institutions from banks to police forces. 

But a growing mountain of evidence suggests that the AI used by these organizations can entrench systemic racism. 

This can negatively impact Black and ethnic minority groups when applying for a mortgage or seeking healthcare, according to an industry expert. 

Confronting biases 

Calvin D Lawrence is a Distinguished Engineer at IBM. He has gathered evidence showing that technology used by policing and judicial systems contains in-built biases stemming from human prejudices and systemic or institutional preferences. But, he says, there are steps AI developers and technologists can take to redress the balance. 

Lawrence said: “AI is an inescapable mechanism of modern society, and it affects everyone, yet its internal biases are rarely confronted – and I think it is time we address that.” 

In his new book, Hidden in White Sight, published today, Lawrence explores the breadth of AI use in the United States and Europe, including in healthcare, policing, advertising, banking, education, and lending.

Hidden in White Sight reveals the sobering reality that AI outcomes can restrict those most in need of these services.

He added: “Artificial Intelligence was meant to be the great social equalizer that helps promote fairness by removing human bias, but in fact I have found in my research and in my own life that this is far from the case.” 

A tool for society 

Lawrence has been designing and developing software for the last thirty years, working on many AI-based systems for the U.S. Army, NASA, Sun Microsystems, and IBM. 

Drawing on this expertise and experience, Lawrence advises readers on what they can do to fight algorithmic bias, and on how developers and technologists can build fairer systems. 

These recommendations include rigorous quality testing of AI systems, full transparency of datasets, viable opt-outs, and an in-built ‘right to be forgotten’. Lawrence also suggests that people should be able to easily check what data is held against their names and be given clear access to recourse if that data is inaccurate. 

Lawrence added: “This is not a problem that affects just one group of people; this is a societal issue. It is about who we want to be as a society, and whether we want to be in control of technology or let it control us. 

“I would urge anyone who has a seat at the table, whether you’re a CEO or tech developer or somebody who uses AI in your daily life, to be intentional with how you use this powerful tool.”