AI Glossary

A comprehensive encyclopedia of artificial intelligence and context management terminology — with definitions, in-depth articles, and authoritative sources.

AI Alignment

Also known as: Value Alignment, AI Safety Alignment

The research field focused on ensuring that AI systems' goals, behaviors, and values are compatible with human intentions and societal well-being throughout their operation.

AI Governance

Also known as: AI Policy, AI Regulation, AI Oversight

The frameworks, policies, standards, and oversight mechanisms that guide the development, deployment, and use of AI systems within organizations and across society.

Bias in AI

Also known as: Algorithmic Bias, AI Bias, Machine Learning Bias

Systematic errors in AI system outputs that create unfair outcomes for certain groups, typically arising from biased training data, flawed model design, or biased evaluation metrics.
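
One common way to quantify this kind of unfairness is the demographic parity difference: the gap in positive-outcome rates between two groups. The sketch below illustrates the idea in plain Python; the loan-approval data and group labels are invented for illustration.

```python
# Hypothetical example: measuring one simple form of algorithmic bias,
# the demographic parity difference between two groups.
# All data below is made up for illustration.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates between two groups.
    0.0 means the model approves both groups at the same rate."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# A toy loan-approval model's decisions (1 = approve) for two groups:
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25% approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.2f}")  # 0.50
```

Demographic parity is only one of several fairness criteria (others include equalized odds and calibration), and they can conflict with one another, so the appropriate metric depends on the application.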

Explainability

Also known as: XAI, Interpretability, Explainable AI

The degree to which the internal workings and decision-making processes of an AI system can be understood, interpreted, and explained to humans in meaningful terms.
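
A simple family of explainability techniques probes the model from the outside: ablate one input feature at a time and measure how much predictive accuracy drops. The sketch below does this with an invented two-feature "black-box" model and made-up data; feature names and numbers are hypothetical.

```python
# Hypothetical sketch: explaining a black-box model by leave-one-feature-out
# ablation. The "model" and data are invented for illustration.

def model(features):
    """Toy black-box scorer: income matters far more than zip code."""
    income, zip_code = features
    return 1 if income * 0.9 + zip_code * 0.1 > 0.5 else 0

# (features, true label) pairs
dataset = [([0.9, 0.1], 1), ([0.8, 0.9], 1), ([0.2, 0.8], 0), ([0.1, 0.2], 0)]

def accuracy(mask):
    """Accuracy when masked-out features (mask[i] == 0) are zeroed."""
    correct = 0
    for features, label in dataset:
        masked = [f * m for f, m in zip(features, mask)]
        correct += model(masked) == label
    return correct / len(dataset)

baseline = accuracy([1, 1])
for i, name in enumerate(["income", "zip_code"]):
    mask = [1, 1]
    mask[i] = 0
    print(f"{name}: importance = {baseline - accuracy(mask):.2f}")
```

Here zeroing out income costs the model accuracy while zeroing out zip code costs nothing, which "explains" the model as income-driven. More principled methods (permutation importance, SHAP, LIME) refine this same perturb-and-measure idea.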

Hallucination

Also known as: AI Hallucination, Confabulation, Model Hallucination

The phenomenon in which an AI model generates information that sounds plausible but is factually incorrect, fabricated, or not supported by its training data or provided context.
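
Because hallucinations are unsupported by the provided context, one crude mitigation is a grounding check: test whether a generated claim's content words actually appear in the source material. The token-overlap heuristic below is only an illustrative sketch (real systems use entailment or retrieval-based verification); the stopword list, threshold, and example sentences are all invented.

```python
# Hypothetical sketch: flagging possible hallucinations by checking whether
# a generated claim shares enough content words with the provided context.
# A toy heuristic, not a production fact-checker.

def content_words(text):
    stopwords = {"the", "a", "an", "in", "of", "is", "was", "and", "to", "by"}
    return {w.strip(".,").lower() for w in text.split()} - stopwords

def is_grounded(claim, context, threshold=0.5):
    """True if at least `threshold` of the claim's content words
    appear in the context."""
    claim_words = content_words(claim)
    overlap = claim_words & content_words(context)
    return len(overlap) / len(claim_words) >= threshold

context = "The Eiffel Tower was completed in 1889 in Paris."
print(is_grounded("The Eiffel Tower was completed in 1889.", context))           # True
print(is_grounded("The Eiffel Tower was designed by Leonardo da Vinci.", context))  # False
```

The second claim shares only "Eiffel Tower" with the context, so the check flags it as potentially hallucinated even though it reads fluently.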

Responsible AI

Also known as: Ethical AI, Trustworthy AI, AI Ethics

The practice of designing, developing, deploying, and using AI systems in ways that are ethical, transparent, fair, accountable, and aligned with human rights and societal values.

All six terms above are categorized under "AI Safety".