Ethics

Bias

In the context of AI, bias refers to systematic and repeatable errors in a model that consistently favor certain outcomes or groups over others. This can lead to unfair or discriminatory results, even if unintended.

Explanation

AI bias arises when the data used to train a model is not representative of the real world, or when the model's design inadvertently amplifies existing societal biases. Common sources of bias include:

1. Biased training data reflecting historical prejudices or skewed sampling.
2. Algorithmic bias due to flaws in the model's architecture or learning process.
3. Human bias in data labeling or feature selection, reflecting subjective judgments.
4. Societal bias, where the AI system is deployed in a context that is itself biased.

Addressing bias requires careful data curation, algorithmic fairness techniques, and ongoing monitoring to ensure equitable outcomes. Common mitigation strategies include data augmentation, re-weighting, adversarial debiasing, and fairness-aware evaluation metrics. The consequences of unchecked bias range from subtle inaccuracies to significant social harms, particularly in applications that shape sensitive decisions such as hiring, lending, and criminal justice.
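To make two of these ideas concrete, the sketch below shows a simple fairness-aware evaluation metric (demographic parity difference: the gap in positive-prediction rates between groups) and one re-weighting scheme in the style of Kamiran and Calders, which weights each (group, label) cell so that group membership and outcome become statistically independent in the training data. The toy dataset, group labels, and function names are illustrative assumptions, not any particular library's API.

```python
from collections import Counter

def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between the two groups."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, gr in zip(preds, groups) if gr == g]
        rates[g] = sum(group_preds) / len(group_preds)
    lo, hi = sorted(rates.values())
    return hi - lo

def reweighting_weights(labels, groups):
    """Per-instance weights (Kamiran-Calders style re-weighting):
    expected cell frequency under independence / observed cell frequency."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    cell_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] * label_counts[y]) / (n * cell_counts[(g, y)])
        for g, y in zip(groups, labels)
    ]

# Toy example: binary predictions and labels for two groups, "A" and "B".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(preds, groups))  # 0.75 vs 0.25 -> 0.5
print(reweighting_weights(preds, groups))  # under-represented cells get weight > 1
```

Re-weighting of this kind is a pre-processing step: the resulting weights would be passed to a learner that supports per-sample weighting, leaving the feature values themselves untouched.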

Related Terms