AI governance is no longer a future concern. The EU AI Act entered into force in August 2024 and has been applying in stages, with high-risk system obligations fully in force by August 2026. In the US, NIST's AI Risk Management Framework has become the de facto reference for federal contractors and is cited in state-level AI regulations. Engineering teams that deferred governance while building are now scrambling to retrofit it.
Retrofitting is expensive. Governance designed into a system from the start costs a fraction of what it costs to add later. This post is about understanding both frameworks well enough to make architectural decisions that satisfy both — and about where the two frameworks genuinely diverge.
## EU AI Act: Risk Tiers and What They Mean
The EU AI Act uses a four-tier risk classification: unacceptable risk (banned), high risk (heavily regulated), limited risk (transparency obligations), and minimal risk (largely unregulated). The tier your system falls into determines the compliance burden.
Unacceptable-risk systems are prohibited outright. These include real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), social scoring by public authorities, and systems that exploit psychological vulnerabilities. If your product falls here, there is no compliance path: it cannot be deployed in the EU.
High-risk systems face the most significant obligations. The list includes AI in critical infrastructure, education and vocational training, employment decisions, access to essential private and public services, law enforcement, migration, and the administration of justice. High-risk systems must satisfy seven core obligations:
- Risk management system: Documented, tested, continuously monitored throughout the lifecycle.
- Data governance: Training and test data must be relevant, representative, and, to the best extent possible, free of errors. Data provenance must be documented.
- Technical documentation: Full documentation of system purpose, architecture, training methodology, performance metrics.
- Transparency: Users must know they are interacting with an AI system. Clear instructions for use.
- Human oversight: Systems must be designed to allow human intervention and override. Cannot be designed to circumvent oversight.
- Accuracy and robustness: Performance metrics must meet defined thresholds. Robustness against adversarial inputs required.
- Logging: Automatic logging sufficient to ensure traceability across system lifetime.
## NIST AI RMF: The Voluntary Framework Built for Adoption
The NIST AI RMF is organized around four functions: GOVERN, MAP, MEASURE, and MANAGE. GOVERN establishes the organizational policies and accountability structures. MAP identifies and categorizes AI risks in context. MEASURE quantifies those risks using qualitative and quantitative methods. MANAGE deploys responses to identified risks.
What makes the RMF useful is its specificity about process without being prescriptive about technology. It does not tell you which bias mitigation algorithm to use — it tells you that you need a process for identifying, measuring, and addressing bias. This leaves room for engineering judgment while ensuring governance gaps do not persist.
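One lightweight way to operationalize the four functions is to attach concrete engineering activities to each and report gaps. The activities below are illustrative examples of this pattern, not NIST's own wording:

```python
# Illustrative activities per RMF function; a real team substitutes its own.
RMF_ACTIVITIES = {
    "GOVERN":  ["assign an accountable owner", "publish an AI use policy"],
    "MAP":     ["document intended use", "identify affected groups"],
    "MEASURE": ["run bias evaluations", "track accuracy against thresholds"],
    "MANAGE":  ["triage identified risks", "monitor mitigations in production"],
}

def coverage_gaps(done: dict[str, set[str]]) -> dict[str, list[str]]:
    """Per RMF function, list the activities not yet completed."""
    return {
        fn: [a for a in acts if a not in done.get(fn, set())]
        for fn, acts in RMF_ACTIVITIES.items()
    }
```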
| Dimension | NIST AI RMF | EU AI Act |
|---|---|---|
| Legal status | Voluntary (US); state laws reference it | Mandatory law in EU |
| Scope | All AI systems | Risk-tiered; obligations scale with risk |
| Prescriptiveness | Process-oriented, technology-neutral | Specific technical requirements for high-risk |
| Enforcement | Market pressure + emerging regulation | National market surveillance authorities |
| Penalties | None directly | Up to €35M or 7% of global annual turnover, whichever is higher |
| Documentation | Guidance-based, flexible format | Mandatory standardized technical documentation |
| Best for | US organizations building AI governance culture | Organizations deploying AI in EU markets |
## Where the Frameworks Align
Despite different origins and legal force, NIST RMF and the EU AI Act converge on several key principles. Both require continuous risk assessment, not point-in-time evaluation. Both emphasize human oversight as a design requirement. Both require documented testing against performance metrics. Both address bias and fairness explicitly. Both require transparency — about the system's nature and its limitations.
This alignment means that a governance program built around NIST RMF provides substantial coverage for EU AI Act compliance, particularly for the documentation and risk management requirements. The EU Act adds specific legal requirements around conformity assessment and CE marking for high-risk systems that NIST does not address, but the underlying governance practices overlap heavily.
## Engineering for Governance: Practical Steps
### Building a Governance-Ready AI System
Before writing a line of code, determine whether your system falls under an EU AI Act high-risk category, and use the NIST RMF's MAP function to identify and categorize its risks in context (the RMF has no risk tiers of its own). This determination shapes architecture choices, not just documentation choices. If you are near a high-risk boundary, design as if you are inside it.
Both frameworks require audit trails. Logging sufficient for governance means capturing: inputs to the model, outputs from the model, confidence scores or uncertainty estimates, any human overrides, and the version of the model used. Design the log schema before you design the feature.
Document your model's intended use, limitations, performance across demographic groups, and known failure modes. This is required by the EU AI Act's technical documentation requirement and aligns with NIST RMF's MAP function. Update these documents when the model changes.
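A lightweight guard against incomplete documentation is to validate the model card in CI. The required fields below mirror the items named in this section; the schema itself is an assumption, not a format either framework prescribes:

```python
REQUIRED_FIELDS = {
    "intended_use",            # what the system is for, and what it is not for
    "limitations",             # known conditions under which performance degrades
    "performance_by_group",    # metrics broken out across demographic groups
    "known_failure_modes",
    "model_version",           # ties the card to a specific model artifact
    "last_updated",
}

def missing_fields(model_card: dict) -> set[str]:
    """Fields the card must contain before the model ships."""
    return REQUIRED_FIELDS - model_card.keys()

# Hypothetical example card for a lending-assistance model.
example_card = {
    "intended_use": "rank loan applications for human review",
    "limitations": "not validated for applicants under 21",
    "performance_by_group": {"group_a": 0.91, "group_b": 0.88},
    "known_failure_modes": ["sparse credit history"],
    "model_version": "v1.2.0",
    "last_updated": "2025-01-15",
}
```

Failing the build when a field is missing turns "update the docs when the model changes" from a norm into a gate.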
Every AI decision that affects a person must have a path for that person to contest or escalate. This is both an EU AI Act requirement for high-risk systems and a NIST RMF best practice. Build the UI and process for human review before launch, not in response to complaints after.
Bias does not stay constant. Model performance across demographic groups shifts as real-world data distributions shift. Schedule regular bias evaluations — quarterly at minimum — and define the thresholds that trigger remediation. Document both the methodology and the results.
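A scheduled evaluation can be as simple as recomputing a per-group metric and flagging when the gap between the best- and worst-served groups crosses a documented threshold. The 5-point default gap below is illustrative, not a number from either framework:

```python
def accuracy_by_group(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (demographic_group, prediction_was_correct) pairs."""
    totals: dict[str, int] = {}
    hits: dict[str, int] = {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(correct)
    return {g: hits[g] / totals[g] for g in totals}

def needs_remediation(acc: dict[str, float], max_gap: float = 0.05) -> bool:
    """True when the best-to-worst group accuracy gap exceeds the threshold."""
    return (max(acc.values()) - min(acc.values())) > max_gap
```

Whatever metric and threshold you choose, write both down: the documented methodology is itself part of the compliance evidence.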
## The Governance Gap Most Teams Miss
Most engineering teams focus on the model: they document it, test it, measure its bias. The governance gap is usually in the system surrounding the model — the data pipeline that feeds it, the post-processing logic that transforms its outputs, the human interfaces that display those outputs. A well-governed model embedded in a poorly governed system does not satisfy either framework.
> "Governance is not a model problem. It is a system problem. Everything the model touches is in scope."
Third-party model use creates additional complexity that teams frequently underestimate. If you use a foundation model API for a high-risk application, you are responsible for the governance of the resulting system even though you do not control the model. The EU AI Act places obligations on the deployer, not just the developer. Understand what your model provider can and cannot attest to — and document where the governance responsibility boundary sits.