Artificial Intelligence Ethics and Governance

Explainable AI (XAI)

A set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.

Explanation

Explainable AI (XAI) refers to the field of artificial intelligence focused on making the internal mechanics of AI systems transparent and understandable to humans. Traditional black-box models, such as deep neural networks, often produce accurate predictions but offer little insight into how a specific conclusion was reached. XAI aims to bridge this gap by providing justifications for decisions, identifying which input features influenced an outcome, and checking that the model's logic aligns with human reasoning. This is critical in high-stakes industries such as healthcare, finance, and law, where accountability, fairness, and regulatory compliance are essential.
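One common XAI technique for "identifying which features influenced an outcome" is permutation importance: shuffle one feature's values and measure how much the model's error grows. The sketch below is a minimal, self-contained illustration using a toy model (the `model` function and data here are invented for the example, not from any particular library):

```python
import random

# Toy "black box": depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
def model(row):
    return 3.0 * row[0] + 0.5 * row[1]

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature, trials=20, seed=0):
    """Average increase in MSE when one feature's column is shuffled."""
    rng = random.Random(seed)
    baseline = mse(rows, targets)
    increases = []
    for _ in range(trials):
        col = [r[feature] for r in rows]
        rng.shuffle(col)
        shuffled = [r[:feature] + [v] + r[feature + 1:]
                    for r, v in zip(rows, col)]
        increases.append(mse(shuffled, targets) - baseline)
    return sum(increases) / trials

rng = random.Random(42)
rows = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
targets = [model(r) for r in rows]

scores = [permutation_importance(rows, targets, f) for f in range(3)]
```

A larger score means the model relies more on that feature; here feature 0 scores highest and the unused feature 2 scores zero. Libraries such as scikit-learn and SHAP provide production-grade versions of this idea.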

Related Terms