The science and practice of proportionality in AI risk evaluations
A global challenge in artificial intelligence (AI) regulation lies in achieving effective risk management without compromising innovation and technical progress (1). The European Union (EU) Artificial Intelligence Act (2) represents the first regulatory attempt worldwide to navigate this tension in the form of a binding, risk-based framework. In August 2025, obligations for providers of general-purpose AI (GPAI) models under the EU AI Act entered into application. They require providers of the most advanced GPAI models to evaluate possible systemic risks stemming from their models (3). This raises the regulatory challenge of ensuring that the evaluations provide meaningful risk information without imposing excessive burden on providers. The principle of proportionality, a binding requirement under EU law, requires the regulator to calibrate its actions to their intended objectives. The application of proportionality to model evaluations for AI risk opens opportunities to develop scientific methods that operationalize such calibration within concrete evaluation practices.
MOUGAN Carlos;
MORLOCK Lauritz;
AGUIRRE Jair;
BLACK James;
BRAUNER Jan;
CAMPOS Simeon;
DEV Sunishchal;
FERNANDEZ LLORCA David;
FRANZIN Alberto;
FRITZ Mario;
GOMEZ Emilia;
GROSSE HOLZ Friederike;
HAMILTON Eloise;
HASIN Max;
HERNANDEZ-ORALLO Jose;
LAHAV Dan;
MASSARELLI Luca;
MAVROUDIS Vasilios;
MURRAY Malcolm;
PASKOV Patricia;
RALDUA Jaime;
SCHELLAERT Wout;
2026-03-06
American Association for the Advancement of Science
JRC143876
ISSN 1095-9203 (online)
https://www.science.org/doi/10.1126/science.aea3835
https://publications.jrc.ec.europa.eu/repository/handle/JRC143876
DOI 10.1126/science.aea3835 (online)