
Bias and Fairness: Striking a Balance


By AI Trends Staff

AI relies on large datasets to train machine learning models, and those datasets can be highly skewed along characteristics such as race, wealth and gender. And while bias may be detected numerically, fairness is a social construct; it’s unlikely every decision will be fair to all parties. In this environment, it’s important for data scientists to build what business managers want, and for both to be cognizant of bias and fairness issues.

In a recent account in Forbes, Dr. Rebecca Parsons, chief technology officer of ThoughtWorks, spoke about how the company with 6,000 employees in 14 countries worldwide addresses the issue of bias.

Dr. Parsons noted that the infusion of bias into an AI application is usually unintentional, a function of the environment the data scientist grew up in. And the teams building the biggest AI systems are not representative of society at large. Some 2.5 percent of Google’s workforce is black and 10 percent of AI research staff at Google is female, according to recent research from NYU. “This lack of representation is what leads to biased datasets and ultimately algorithms that are much more likely to perpetuate systemic biases,” Dr. Parsons was quoted as saying.

Dr. Rebecca Parsons, CTO, ThoughtWorks

Dr. Parsons recommended that developers cross-check their algorithms for unintended patterns. One place to start is testing for under- or over-representation in the data. For example, one widely used facial recognition training dataset was estimated to be more than 75% male and more than 80% white. As a result, models trained on it were much less accurate at identifying darker-skinned females. The fix was to add more faces to the training data, and results improved.
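In practice, such a representation check can be as simple as tallying group proportions in the dataset’s metadata. Below is a minimal sketch in Python; the dataset and its column names (skin_tone, gender) are hypothetical stand-ins for illustration, not drawn from any real benchmark.

import pandas as pd

# Hypothetical training-set metadata; the columns and proportions are
# illustrative only, not taken from a real benchmark.
faces = pd.DataFrame({
    "skin_tone": ["lighter"] * 80 + ["darker"] * 20,
    "gender":    ["male"] * 76 + ["female"] * 24,
})

# Flag any group whose share of the data falls well below a naive
# uniform expectation.
for col in ["skin_tone", "gender"]:
    shares = faces[col].value_counts(normalize=True)
    print(f"{col} representation:")
    for group, share in shares.items():
        flag = "  <-- under-represented" if share < 0.4 else ""
        print(f"  {group}: {share:.0%}{flag}")

A real audit would compare these shares against the population the system will serve, rather than against a flat 50/50 split.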

The impact of biased datasets in healthcare could be life or death; in criminal justice, it could mean unfair prison terms. In the law, establishing liability around AI is a coming blood sport; rules must be established for cross-examining the algorithm or its creators.

Work is picking up in the area of fighting bias, and awareness is being raised. The Algorithmic Justice League, founded by Joy Buolamwini, a computer scientist with the MIT Media Lab, aims to highlight bias in code that can lead to discrimination against under-represented groups. Her research project on bias in facial recognition data, called Gender Shades, prompted responses by IBM and Microsoft to improve their software.

Fairness is a Social Construct

While bias can be identified by statistical correlations in a dataset, fairness is a social construct with many definitions, suggests an article in strategy+business. A paper from the 2018 ACM/IEEE International Workshop on Software Fairness included some 20 definitions of fairness for algorithmic classification. (See Fairness Definitions Explained.)
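To make one of those definitions concrete: statistical parity (also called demographic parity) asks that the rate of favorable predictions be roughly equal across groups. Here is a minimal sketch, with made-up predictions for two hypothetical groups:

import numpy as np

# Made-up model outputs: 1 = favorable prediction.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

# Statistical parity: P(prediction = 1) should be similar for each group.
rate_a = y_pred[group == "a"].mean()
rate_b = y_pred[group == "b"].mean()
print(f"positive rate, group a: {rate_a:.2f}")
print(f"positive rate, group b: {rate_b:.2f}")
print(f"statistical parity difference: {rate_a - rate_b:+.2f}")

Other definitions in the paper, such as equal opportunity, also condition on the true outcome, and the various definitions can conflict with one another, which is why choosing among them is a business decision as much as a technical one.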

Systems can be designed to meet fairness goals. Some companies are striving for responsible AI, in which considerations include ethical concerns about AI algorithms, risk mitigation, workforce issues and the general good. However, the authors suggest that data scientists must speak the strategic language of the business. “At most organizations, there exists a gap between what the data scientists are building and what the company leaders want to achieve with an AI implementation,” say the authors.

The business leaders and data scientists need to agree on the right notion of fairness for the decision being made, so that it can be designed into the algorithm that drives the application.

In an application to assess creditworthiness, for example, a company would see both true positives and false positives in the results. In a true positive, the model correctly picks who would be a good credit risk. In a false positive, a bad-risk customer is assigned a good credit score. Efforts to minimize losses need to be careful not to discriminate based on gender or race.
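One way to check for that kind of discrimination is to compute the true positive and false positive rates separately for each group and compare them. The sketch below uses fabricated labels purely for illustration:

import numpy as np

# Fabricated outcomes: y_true = 1 means the customer actually repaid
# (a good risk); y_pred = 1 means the model approved them.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array(["m", "m", "m", "m", "m", "m",
                   "f", "f", "f", "f", "f", "f"])

for g in ["m", "f"]:
    t, p = y_true[group == g], y_pred[group == g]
    tpr = ((p == 1) & (t == 1)).sum() / (t == 1).sum()  # good risks approved
    fpr = ((p == 1) & (t == 0)).sum() / (t == 0).sum()  # bad risks approved
    print(f"group {g}: TPR={tpr:.2f}  FPR={fpr:.2f}")

A large gap in either rate between groups is a signal that the model’s errors are not being distributed fairly.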

In this example, the marketing team may want to maximize the number of credit cards issued, while the risk management team may want to minimize potential losses. The teams cannot both be fully satisfied; they need to strike a balance. The exploration of these issues should result in a more responsible AI.
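That balance often comes down to where the approval threshold on the model’s score is set. The toy sketch below, on synthetic data, shows the trade-off: raising the threshold shrinks the number of approvals (marketing’s metric) but also the share of approved customers who default (risk’s metric). The numbers are illustrative only.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic credit scores: higher means the model believes repayment
# is more likely. Repayment is simulated to track the score.
scores = rng.uniform(0, 1, 1000)
repaid = (rng.uniform(0, 1, 1000) < scores).astype(int)

# Sweep the approval threshold to expose the marketing/risk trade-off.
for threshold in [0.3, 0.5, 0.7]:
    approved = scores >= threshold
    n_approved = int(approved.sum())
    default_rate = 1 - repaid[approved].mean()
    print(f"threshold {threshold:.1f}: {n_approved} approved, "
          f"{default_rate:.0%} of approvals default")

Neither endpoint is “right”; picking a threshold, and checking its impact across demographic groups, is exactly the joint decision the business leaders and data scientists need to make.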

Read the source articles in Forbes and strategy+business.

