AI Safety and Ethics

existential risk

A potential future event that could result in the permanent destruction of humanity's potential or the extinction of the human species.

Explanation

In the context of artificial intelligence, existential risk (often abbreviated as x-risk) refers to the possibility that a superintelligent AI system could act in ways that lead to human extinction or the irreversible loss of human agency. This could occur through goal misalignment, in which an AI competently pursues an objective whose optimization produces catastrophic side effects its designers did not intend, or through loss of control over a system whose cognitive abilities surpass those of humans. Researchers and philosophers such as Nick Bostrom and Stuart Russell argue that without robust safety measures and alignment techniques, the rapid advancement of AGI (Artificial General Intelligence) poses a non-negligible threat to the long-term survival of civilization.

Related Terms