Computing Bias Homework

Popcorn Hack #1

Biased System:

A facial recognition system used in security applications has difficulty accurately identifying individuals with darker skin tones. It misidentifies them, or fails to recognize them entirely, more often than individuals with lighter skin tones.

Type of Bias:

This represents Pre-existing Social Bias because the training data used for the facial recognition system contained more images of lighter-skinned individuals, leading to uneven performance across different racial groups.

Solution:

To reduce this bias, developers should ensure that the training dataset includes a diverse and representative sample of faces from all racial and ethnic backgrounds. This will help the system recognize all individuals more accurately.
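
One concrete way to act on this is to audit the group counts in the training set and oversample underrepresented groups until every group contributes equally. Below is a minimal sketch in plain Python; the file names and group labels are hypothetical, and a real pipeline would also collect genuinely new images rather than only resampling existing ones.

```python
import random
from collections import defaultdict

# Hypothetical training records: (image file, skin-tone group) pairs.
dataset = [
    ("img_001.jpg", "lighter"), ("img_002.jpg", "lighter"),
    ("img_003.jpg", "lighter"), ("img_004.jpg", "darker"),
]

def balance_by_group(records, seed=0):
    """Oversample smaller groups so every group appears equally often."""
    by_group = defaultdict(list)
    for path, group in records:
        by_group[group].append((path, group))
    target = max(len(items) for items in by_group.values())
    rng = random.Random(seed)
    balanced = []
    for items in by_group.values():
        balanced.extend(items)
        # Draw with replacement to top the group up to the target size.
        balanced.extend(rng.choices(items, k=target - len(items)))
    return balanced

print(balance_by_group(dataset))  # each group now contributes 3 records
```

Oversampling with replacement is only a stopgap: it balances the statistics the model sees during training, but it cannot add the visual diversity that new, representative data would.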


Popcorn Hack #2

Ways to Mitigate Bias in AI Loan Approval:

  1. Account for Historical Discrimination in the Model – Women were historically denied access to financial opportunities, including loans. Since AI learns from past data, it may unfairly favor male applicants simply because men received more loan approvals in the past. To fix this, financial institutions should adjust the model to account for these historical disparities by applying fairness constraints or reweighting the data so that women aren’t penalized for past discrimination (a minimal reweighting sketch follows this list).

  2. Regular Bias Audits and Transparency – Financial institutions should conduct frequent fairness audits and use bias-detection tools to identify and correct discriminatory patterns in the AI system’s decisions. If the model disproportionately favors male applicants, adjustments such as rebalancing the training data or modifying decision thresholds should be made to produce a more equitable system (see the audit sketch below).
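
As a minimal sketch of the reweighting idea in point 1, the snippet below computes Kamiran and Calders-style reweighing factors, which weight each (group, outcome) cell so that group and outcome look statistically independent in the training data. The loan records here are made up for illustration; in practice, the resulting weights would be passed to the training routine as per-sample weights.

```python
from collections import Counter

# Hypothetical historical loan records: (gender, approved) pairs.
records = [
    ("male", 1), ("male", 1), ("male", 1), ("male", 0),
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
]

def reweighing_factors(data):
    """Weight each (group, outcome) cell so that group and outcome
    look statistically independent in the reweighted training data."""
    n = len(data)
    group_counts = Counter(group for group, _ in data)
    label_counts = Counter(label for _, label in data)
    cell_counts = Counter(data)
    # expected P(group) * P(outcome), divided by observed P(group, outcome)
    return {
        (group, label):
            (group_counts[group] / n) * (label_counts[label] / n)
            / (cell_counts[(group, label)] / n)
        for (group, label) in cell_counts
    }

for cell, weight in sorted(reweighing_factors(records).items()):
    print(cell, round(weight, 3))  # approved women get weight > 1 here
```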
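
And for point 2, a fairness audit can start as simply as comparing approval rates by group on the model's recent decisions. The sketch below uses a hypothetical decision log and an illustrative 10-percentage-point gap threshold; a real audit would use whatever disparity metric and threshold the institution's policy or applicable regulation specifies.

```python
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs from the deployed model."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + ok
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical audit log of recent model decisions: (gender, approved).
log = ([("male", 1)] * 70 + [("male", 0)] * 30
       + [("female", 1)] * 50 + [("female", 0)] * 50)

rates = approval_rates(log)
gap = max(rates.values()) - min(rates.values())
print(rates, "gap:", round(gap, 2))
if gap > 0.10:  # illustrative threshold, not a legal or regulatory standard
    print("Flag for review: rebalance training data or adjust thresholds.")
```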


Homework Hack: Understanding Bias in Computing

System:

A voice assistant (e.g., Siri, Alexa, Google Assistant) that struggles to recognize and accurately interpret commands from people with strong accents.

Bias:

This is an example of Technical Bias because the system may have been trained primarily on speech data from speakers with standard or commonly used accents, making it less effective for those with regional or non-native accents.

Solution:

To fix this bias, developers should expand the dataset by incorporating voice samples from speakers with various accents and dialects. Additionally, they can implement adaptive learning techniques so the system improves its understanding of diverse speech patterns over time.
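
To make "improves over time" measurable, developers can track recognition accuracy per accent group, for example with word error rate (WER), and use the gap to decide where more training data is needed. Below is a minimal, dependency-free sketch of that measurement; the accent labels and transcript pairs are made up for illustration.

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(len(ref), 1)

# Hypothetical test set: (accent group, what was said, what was transcribed).
samples = [
    ("widely represented", "turn on the lights", "turn on the lights"),
    ("regional", "turn on the lights", "turn on the flights"),
]

for accent, said, heard in samples:
    print(accent, "WER:", round(word_error_rate(said, heard), 2))
```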