
Dr. Anu Gokhale discusses the implications of algorithmic bias during Tufts event

The Tufts Alumnae Lounge is pictured.

Dr. Anu Gokhale, professor and chair of the Department of Computer Information Systems at Saint Augustine’s University, gave a lecture at Alumnae Hall titled “Algorithmic Bias: Myth or Reality?” on April 4. Gokhale was invited to speak about her insights and research as a leader in a STEM field.

Gokhale began her talk by discussing a case study in algorithmic bias, explaining how a government-created system that uses algorithms to assess an individual’s creditworthiness was found to be biased.

“Automated Underwriting Systems are created by the government and are race-blind … yet are discriminatory,” Gokhale said.

A 2022 study conducted by the Federal Reserve found that the algorithm was just as likely as human loan assessors to deny loan applications from Black and Hispanic individuals.

“Racial and ethnic discrimination by mortgage lenders continues,” Gokhale concluded.

Gokhale noted that there are two main types of algorithms: supervised and unsupervised.

“A supervised learning algorithm analyzes the training data and produces an inferred function, which is then used to map new examples,” Gokhale said. “The unsupervised [algorithm’s] learning task is to find hidden structure in unlabeled data.”

Essentially, this means supervised algorithms use pre-labeled data to make decisions while unsupervised algorithms do not. Gokhale noted that optimizing artificial intelligence models is challenging because of the complexity of the algorithms on which they are built.
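
To make that distinction concrete, here is a minimal sketch in Python using scikit-learn; the data is synthetic and the code is an illustration of the two paradigms, not something presented at the talk.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # labels, available only in the supervised case

# Supervised: analyze labeled training data, produce an inferred function,
# then use that function to map new examples to predictions.
clf = LogisticRegression().fit(X, y)
print(clf.predict(rng.normal(size=(3, 2))))

# Unsupervised: no labels; the task is to find hidden structure
# (here, two clusters) in the unlabeled data itself.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:10])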

“AI models are basically a black box, then: a system using inputs and outputs to create useful information … with minimal knowledge of its inner workings,” Gokhale said. “How many of us users, when we do something, know how it works in the background? … We don’t know, and yet we rely on it and we use it.”

Gokhale identified the desire to save time and resources as a primary motive for preferring algorithms over human decision-making.

“Algorithms are replacing human decision-making because they are perceived to be unbiased while saving money and time,” Gokhale said. “That’s huge for any organization, correct? If you can save money and time, you’re going to do it.”

Gokhale identified multiplying bias, the self-reinforcing cycle in which an algorithm amplifies the bias in its own outputs, as a key challenge in algorithmic design. She spoke about the presence of multiplying bias in the use of Correctional Offender Management Profiling for Alternative Sanctions, an algorithm used in the criminal justice system to predict an individual’s likelihood of recidivism.

“Due to the data that was used, the model that was chosen and the process of creating the algorithm overall, COMPAS predicted twice as many false positives for recidivism for Black offenders than white offenders,” Gokhale said. “This [algorithm] is widely used. … The more it gets used, the more it feels empowered to do the same thing over again, multiplying bias.”
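
That feedback dynamic can be sketched in a few lines of Python. The toy simulation below is purely illustrative, not the COMPAS model: a small, recurring measurement bias against one group compounds each time the system is retrained on its own skewed records.

# Two groups start with identical flag rates.
flag_rate = {"group_a": 0.5, "group_b": 0.5}

for cycle in range(5):
    recorded = dict(flag_rate)
    recorded["group_a"] *= 1.2   # biased data collection: group_a is over-recorded
    total = sum(recorded.values())
    # "Retraining" on the skewed records shifts the rates further apart.
    flag_rate = {g: r / total for g, r in recorded.items()}
    print(cycle, {g: round(r, 3) for g, r in flag_rate.items()})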

Gokhale continued sharing examples of algorithmic bias to suggest that bias is human-engineered. She discussed 2019 findings about an algorithm used on over 200 million U.S. hospital patients that incorrectly predicted that white patients have more healthcare needs than Black patients because white patients were found to incur greater healthcare costs on average.

“The rationale was that cost summarizes how many health care needs a particular person has,” Gokhale said. “On average, Black patients incurred lower healthcare costs than white patients with the same conditions.”

Therefore, the algorithm incorrectly concluded, white patients would need more medical care than equally sick Black patients.
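
The proxy problem is easy to reproduce with synthetic numbers. The Python sketch below is my own illustration, not the study’s data or model: patients are ranked by cost, the biased stand-in for need, and the top 10% are enrolled in an extra-care program.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000
sickness = rng.uniform(0, 10, n)   # true health need
group = rng.integers(0, 2, n)      # 1 = group that incurs lower costs
# Equal sickness, but group 1's costs run about 30% lower (e.g., access barriers).
cost = sickness * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 0.5, n)

# The "algorithm": rank by cost and enroll the top 10%.
enrolled = cost >= np.quantile(cost, 0.9)
for g in (0, 1):
    picked = enrolled & (group == g)
    print(f"group {g}: {picked.sum()} enrolled, "
          f"mean sickness of enrolled = {sickness[picked].mean():.2f}")

In this toy setup, far fewer group-1 patients are enrolled, and those who are must be sicker than their group-0 counterparts to clear the cost threshold, mirroring the study’s finding.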

Gokhale shared another example of gender bias found in algorithms used for hiring in tech-related fields. In particular, she talked about Amazon’s realization in 2015 that its hiring algorithm was biased against female applicants.

“The firm realized that … the algorithm was based on the number of resumes submitted over the past 10 years, and since most of the applicants were men, it was trained to favor men over women,” Gokhale said.
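
A toy reconstruction of that failure mode, using invented resumes rather than Amazon’s data or system, shows how a classifier trained on historically skewed outcomes learns to penalize tokens that merely correlate with gender.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical outcomes: resumes mentioning "women's" were rarely marked as hires.
resumes = [
    ("captain of women's chess club, software intern", 0),
    ("software intern, hackathon winner", 1),
    ("women's coding society lead, backend projects", 0),
    ("backend projects, open source contributor", 1),
] * 50

vec = CountVectorizer()
X = vec.fit_transform(text for text, _ in resumes)
y = [label for _, label in resumes]
clf = LogisticRegression().fit(X, y)

# The most negative weights reveal what the model learned to penalize.
weights = sorted(zip(vec.get_feature_names_out(), clf.coef_[0]), key=lambda w: w[1])
print(weights[:3])

Here “women” surfaces among the most heavily penalized features even though gender was never an explicit input, the same pattern reported in the Amazon case.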

To conclude her discussion, Gokhale encouraged students in the audience to consider strategies to reduce bias in algorithmic design and to support the creation of responsible AI. A central lesson was to use evidence-centered design when building algorithms.

“Add more representative data. This will increase the overall accuracy,” Gokhale said. “Be conscious of features to include in data, what features can directly or indirectly signal race, gender, or socioeconomic status. … [Be] cognizant of the features that are being used and their implications.”
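
One way to act on that advice, sketched below as a minimal illustration rather than anything presented at the talk, is to screen candidate features for how strongly they correlate with a held-out protected attribute before any model is trained.

import numpy as np

rng = np.random.default_rng(2)
n = 5_000
protected = rng.integers(0, 2, n)   # e.g., a race or gender indicator, excluded from training
candidate_features = {
    "zip_code_income": protected * 1.5 + rng.normal(0, 1, n),  # indirect signal
    "years_experience": rng.normal(5, 2, n),                   # unrelated to the attribute
}

for name, values in candidate_features.items():
    r = np.corrcoef(values, protected)[0, 1]
    verdict = "REVIEW" if abs(r) > 0.3 else "ok"
    print(f"{name}: correlation with protected attribute = {r:+.2f} [{verdict}]")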

Responding to a student audience member’s question about whether algorithmic bias can ever be fully eliminated, Gokhale acknowledged that improving algorithmic design requires ongoing effort.

“I think nothing can ever be eliminated. It’s basically a moving target,” Gokhale said. “We are constantly evolving, doing better, definitely than what we were doing five years ago, 10 years ago, but I think … we can do better.”