AI Glossary
A comprehensive glossary of artificial intelligence and context management terminology, with definitions, in-depth articles, and authoritative sources.
AI Alignment
Also known as: Value Alignment, AI Safety Alignment
The research field focused on ensuring that AI systems' goals, behaviors, and values are compatible with human intentions and societal well-being throughout their operation.
AI Governance
Also known as: AI Policy, AI Regulation, AI Oversight
The frameworks, policies, standards, and oversight mechanisms that guide the development, deployment, and use of AI systems within organizations and across society.
Artificial General Intelligence
Also known as: AGI, Strong AI, Human-Level AI
A hypothetical form of AI that possesses the ability to understand, learn, and apply intelligence across any intellectual task that a human being can perform, exhibiting flexibility and adaptability across domains.
Artificial Intelligence (AI)
Also known as: AI, Machine Intelligence
The simulation of human intelligence processes by computer systems, including learning, reasoning, self-correction, and the ability to perform tasks that typically require human cognition.
Attention Mechanism
Also known as: Self-Attention, Scaled Dot-Product Attention, Multi-Head Attention
A neural network component that allows models to selectively focus on the most relevant parts of their input, dynamically weighting the importance of different elements in a sequence.
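The weighting step can be shown concretely. Below is a minimal sketch of scaled dot-product attention for a single query, written in plain Python (real implementations operate on batched matrices); the vectors and function names are illustrative, not from any particular library.

```python
import math

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    Scores each key against the query, normalizes the scores with
    softmax, and returns the weighted sum of the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted combination of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

A query that closely matches one key receives nearly all of the attention weight, so the output is dominated by that key's value vector.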
Chain-of-Thought
Also known as: CoT, Chain-of-Thought Prompting, Step-by-Step Reasoning
A prompting technique that improves AI reasoning by instructing the model to decompose complex problems into intermediate reasoning steps before arriving at a final answer.
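In its simplest ("zero-shot") form, the technique is just a prompt template that asks the model to reason before answering. A purely illustrative sketch:

```python
def cot_prompt(question):
    """Wrap a question in a zero-shot chain-of-thought template.

    The trailing instruction nudges the model to emit intermediate
    reasoning steps before its final answer.
    """
    return (
        f"Q: {question}\n"
        "A: Let's think step by step."
    )
```

More elaborate variants prepend worked examples that demonstrate the desired step-by-step reasoning format.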
Context Compression
Also known as: Prompt Compression, Context Condensation
Techniques for reducing the token count of context provided to language models while preserving the most essential information, enabling more efficient use of limited context windows.
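One simple family of such techniques drops the middle of an over-long context while keeping its beginning and end, since the start often carries instructions and the end carries the most recent turns. A minimal sketch (the marker string and the head/tail split are arbitrary choices, not a standard):

```python
def compress_middle(tokens, max_tokens, marker="[...]"):
    """Truncate a token list to max_tokens by removing its middle.

    Keeps the head and tail halves and inserts a single marker
    token where material was dropped.
    """
    if len(tokens) <= max_tokens:
        return tokens
    keep = max_tokens - 1       # reserve one slot for the marker
    head = keep // 2
    tail = keep - head
    return tokens[:head] + [marker] + tokens[-tail:]
```

Summarization-based compression is the other common approach: replace the dropped span with a model-generated summary instead of a marker.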
Context Window
Also known as: Context Length, Token Limit, Context Size
The maximum amount of text (measured in tokens) that a language model can process in a single interaction, determining how much information the model can consider when generating a response.
Embeddings
Also known as: Vector Embeddings, Text Embeddings, Semantic Embeddings
Dense numerical vector representations of data (text, images, audio) that capture semantic meaning, enabling similarity comparisons and machine learning operations in a continuous vector space.
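The similarity comparison usually means cosine similarity: the cosine of the angle between two embedding vectors, which is 1 for identical directions and 0 for unrelated (orthogonal) ones. A self-contained sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

In practice the vectors come from an embedding model and have hundreds or thousands of dimensions; the arithmetic is the same.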
Explainability
Also known as: XAI, Interpretability, Explainable AI
The degree to which the internal workings and decision-making processes of an AI system can be understood, interpreted, and explained to humans in meaningful terms.
Few-Shot Learning
Also known as: Few-Shot, In-Context Learning, k-Shot Learning
A machine learning approach where models learn to perform tasks from only a small number of examples, typically provided within the prompt or during a brief adaptation phase.
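When the examples are provided within the prompt, the mechanics amount to string assembly: labeled input/output pairs followed by the new input. A minimal sketch (the "Input:/Output:" labels are an arbitrary convention):

```python
def build_few_shot_prompt(examples, query):
    """Assemble a k-shot prompt from (input, output) example pairs.

    The model is expected to continue the pattern and complete the
    final, unanswered "Output:" line.
    """
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)
```

With k = 0 this degenerates to zero-shot prompting; larger k trades context-window space for more demonstrations of the task.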
Fine-Tuning
Also known as: Model Fine-Tuning, Transfer Learning, Domain Adaptation
The process of further training a pre-trained AI model on a specialized dataset to adapt its behavior, knowledge, or output style for a specific domain or task.
Function Calling
Also known as: Tool Use, Tool Calling, AI Actions
A capability of AI models to generate structured outputs that invoke predefined functions or APIs, enabling AI systems to take actions, retrieve data, and interact with external systems.
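On the application side, function calling reduces to parsing the model's structured output and dispatching it to real code. The sketch below assumes the model emits a JSON object with "name" and "arguments" fields, a shape several LLM APIs use; the tool name and dispatch helper are hypothetical.

```python
import json

def dispatch(tool_call_json, implementations):
    """Parse a model-emitted tool call and invoke the matching function.

    tool_call_json: a JSON string like
        {"name": "get_weather", "arguments": {"city": "Oslo"}}
    implementations: a dict mapping tool names to Python callables.
    """
    call = json.loads(tool_call_json)
    name = call["name"]
    args = call.get("arguments", {})
    if name not in implementations:
        raise ValueError(f"unknown tool: {name}")
    return implementations[name](**args)
```

The function's result is typically fed back into the model's context so it can incorporate the data into its next response.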
Machine Learning
Also known as: ML
A subset of artificial intelligence that enables systems to learn and improve from experience without being explicitly programmed, using algorithms that identify patterns in data.
Model Context Protocol
Also known as: MCP
An open standard developed by Anthropic that standardizes how AI applications connect to external data sources, tools, and context providers through a unified protocol.
Natural Language Processing
Also known as: NLP, Computational Linguistics
A field of AI focused on enabling computers to understand, interpret, generate, and meaningfully interact with human language in both text and speech forms.
Neural Network
Also known as: ANN, Artificial Neural Network
A computing system inspired by biological neural networks, consisting of interconnected nodes (neurons) organized in layers that process information using learnable weights and activation functions.
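A single layer of such a network is just weighted sums plus a bias, passed through a nonlinear activation. A minimal sketch using tanh as the activation (any nonlinearity would do):

```python
import math

def dense_layer(inputs, weights, biases, activation=math.tanh):
    """One fully connected layer.

    weights: one weight list per output neuron; biases: one per neuron.
    Each neuron computes activation(sum(w_i * x_i) + b).
    """
    return [
        activation(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]
```

Stacking such layers, and learning the weights and biases by gradient descent, is what turns this arithmetic into a trainable network.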
Reinforcement Learning from Human Feedback
Also known as: RLHF, Human Feedback Training
A training technique that uses human evaluations of AI outputs to train a reward model, which then guides the AI system to produce outputs more aligned with human preferences.
Responsible AI
Also known as: Ethical AI, Trustworthy AI, AI Ethics
The practice of designing, developing, deploying, and using AI systems in ways that are ethical, transparent, fair, accountable, and aligned with human rights and societal values.
Retrieval-Augmented Generation
Also known as: RAG
A technique that enhances AI model outputs by retrieving relevant information from external knowledge sources and incorporating it into the model's context before generating a response.
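The pipeline can be sketched in a few lines: embed the query, rank documents by vector similarity, and splice the top matches into the prompt. Everything here is illustrative, with toy embeddings standing in for a real embedding model.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, docs, k=2):
    """docs: list of (text, embedding) pairs. Return the k most similar texts."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_rag_prompt(question, query_vec, docs, k=2):
    # Splice retrieved passages into the model's context before the question.
    context = "\n".join(retrieve(query_vec, docs, k))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

Production systems replace the linear scan with an approximate nearest-neighbor index and often add reranking, but the retrieve-then-generate shape is the same.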
Semantic Search
Also known as: Vector Search, Neural Search, Meaning-Based Search
A search methodology that understands the contextual meaning and intent behind a query rather than matching exact keywords, using embeddings and vector similarity to find semantically relevant results.
Supervised Learning
Also known as: Supervised ML
A machine learning paradigm where models are trained on labeled datasets containing input-output pairs, learning to map inputs to correct outputs for prediction and classification tasks.
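The smallest concrete instance is fitting a line to labeled (x, y) pairs with the closed-form least-squares solution, a sketch rather than anything production-grade:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b from labeled (x, y) pairs.

    The labeled outputs ys are the supervision signal the model
    learns to reproduce.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b
```

Classification works the same way in spirit: the labels are classes instead of numbers, and the model is judged on how well its predicted outputs match them.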
Tokens
Also known as: Token, Subword Token, BPE Token
The basic units of text that language models process, typically representing words, subwords, or characters. Token counts determine context window usage and API costs.
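For budgeting purposes, a common rule of thumb is that one token corresponds to roughly four characters of English text; real tokenizers vary by model and language, so this is only an estimate.

```python
def estimate_tokens(text, chars_per_token=4):
    """Rough token estimate: ~4 characters per token for English.

    Actual counts depend on the model's tokenizer; use the model
    vendor's tokenizer library when precision matters.
    """
    return max(1, round(len(text) / chars_per_token))
```

Such estimates are useful for quick context-window budgeting and cost projections before an exact tokenizer is in the loop.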
Training Data
Also known as: Training Dataset, Training Corpus, Training Set
The curated dataset used to train machine learning models, whose quality, diversity, size, and representativeness directly determine the model's capabilities and limitations.
Transformer
Also known as: Transformer Architecture, Transformer Model
A neural network architecture based on self-attention mechanisms that processes input sequences in parallel, forming the foundation of virtually all modern large language models.