AI Governance
Also known as: AI Policy, AI Regulation, AI Oversight
The frameworks, policies, standards, and oversight mechanisms that guide the development, deployment, and use of AI systems within organizations and across society.
Overview
AI governance encompasses the rules, practices, and organizational structures that ensure AI systems are developed and used responsibly. It operates at multiple levels — organizational, national, and international — and addresses questions of accountability, risk management, compliance, and ethical standards.
Organizational AI Governance
- AI Ethics Boards: Cross-functional committees that review AI projects for ethical risks
- Model Risk Management: Frameworks for assessing and managing risks associated with AI models
- Documentation Standards: Requirements for documenting AI systems, including model cards and datasheets
- Monitoring and Auditing: Ongoing monitoring of AI system performance and periodic audits
- Incident Response: Procedures for addressing AI system failures or harmful outcomes
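As a concrete illustration of the documentation standards mentioned above, a minimal model card can be kept as a structured record that travels with the model. This is a hedged sketch: the field names and the example system (`loan-approval-classifier`) are illustrative assumptions, not a mandated schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal model card record. Fields are illustrative; real
    programs adapt them to their own documentation standard."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data_summary: str = ""
    known_limitations: list = field(default_factory=list)
    evaluation_metrics: dict = field(default_factory=dict)

    def to_json(self) -> str:
        # Serialize for storage alongside the model artifact.
        return json.dumps(asdict(self), indent=2)

# Hypothetical example system, for illustration only.
card = ModelCard(
    model_name="loan-approval-classifier",
    version="2.1.0",
    intended_use="Pre-screening of consumer loan applications for human review",
    out_of_scope_uses=["Fully automated credit decisions"],
    training_data_summary="Anonymized loan applications, 2019-2023",
    known_limitations=["Under-represents applicants without credit history"],
    evaluation_metrics={"auc": 0.91, "demographic_parity_gap": 0.03},
)
print(card.to_json())
```

Keeping the card machine-readable lets monitoring and audit tooling check that every deployed model has up-to-date documentation.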
Regulatory Frameworks
EU AI Act
The world's first comprehensive AI regulation. It establishes a risk-based framework: minimal-risk systems face few obligations, limited-risk systems face transparency requirements, high-risk systems must meet strict conformity and oversight requirements, and uses posing unacceptable risk are prohibited outright.
U.S. Approach
A combination of executive orders, sector-specific regulations, and voluntary frameworks like the NIST AI RMF. The approach emphasizes innovation while establishing guardrails for high-risk applications.
International Coordination
International bodies such as the OECD, the G7, and the UN are working to coordinate AI governance globally through shared principles and interoperable standards.
Context Management and Governance
AI governance directly impacts context management practices. Data governance requirements determine what context can be provided to AI systems, data retention policies affect how long context is stored, and privacy regulations constrain how personal information can be used as context.
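The retention and privacy constraints described above can be enforced in code before any stored context reaches a model. The sketch below is a minimal illustration under stated assumptions: the 30-day retention window is a hypothetical policy value, and the email-only redaction stands in for a real PII-detection step; actual limits come from the applicable regulation and the organization's data-governance policy.

```python
import re
from datetime import datetime, timedelta, timezone

# Hypothetical policy value; real retention periods are set by
# the organization's data-governance policy and applicable law.
RETENTION = timedelta(days=30)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def is_retainable(stored_at: datetime, now: datetime) -> bool:
    """True if a stored context item is still inside the retention window."""
    return now - stored_at <= RETENTION

def redact_pii(context: str) -> str:
    """Strip email addresses before text is passed to a model as context.
    A real pipeline would cover more PII categories than this."""
    return EMAIL_RE.sub("[REDACTED]", context)

now = datetime.now(timezone.utc)
item = {
    "text": "Contact alice@example.com about the audit.",
    "stored_at": now - timedelta(days=45),  # older than the retention window
}

if is_retainable(item["stored_at"], now):
    prompt_context = redact_pii(item["text"])
else:
    prompt_context = ""  # expired context is dropped, never sent to the model
```

Placing these checks at the single point where context is assembled gives auditors one place to verify that governance policy is actually applied.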
Sources & Further Reading
OECD AI Policy Observatory
Organisation for Economic Co-operation and Development
EU AI Act Full Text
European Union
AI Risk Management Framework (AI RMF)
National Institute of Standards and Technology
UN Global Digital Compact - AI Governance
United Nations
Related Terms
AI Alignment
The research field focused on ensuring that AI systems' goals, behaviors, and values are compatible with human intentions and societal well-being throughout their operation.
Bias in AI
Systematic errors in AI system outputs that create unfair outcomes for certain groups, typically arising from biased training data, flawed model design, or biased evaluation metrics.
Responsible AI
The practice of designing, developing, deploying, and using AI systems in ways that are ethical, transparent, fair, accountable, and aligned with human rights and societal values.