AI evidence pathway for operationalising trustworthy AI in health: An ontology unfolding ethical principles into translational and fundamental concepts
Health, inherently rich in multi-modal data, could profit significantly from artificial intelligence (AI). Yet adoption of AI in health remains challenging due to three key issues. (1) The “trust barrier”: although a plethora of documents based on (ethical) AI principles is available, a significant interpretation gap persists between high-level desiderata and detailed, actionable concepts. This hampers determination of both the type and the level of evidence that would render AI tools sufficiently trustworthy for adoption and integration into their use contexts and environments. The problem is compounded by the heterogeneous landscape of principles used by different organisations, despite robust evidence of convergence towards approximately 10 principles. (2) The “complexity barrier”: health is complex in terms of life cycles and value chains, involving specialised communities that need to develop and translate AI governance into pragmatic approaches integrating evidence requirements across upstream and downstream life cycle stages. This requires networked thinking, forward-looking planning and the bridging of disciplines and domains; however, out-of-domain literacy is typically limited, impeding effective collaboration for trustworthy AI. (3) The “technical barrier”: interoperability and infrastructure needs may collide with the underfunding of health systems. To tackle these issues, we propose an ‘AI evidence pathway for health’ aimed at collaboration on evidence for trustworthy AI. The present ontology is its cornerstone. It lays out a pathway for evidence identification, using 10 consensus ethical principles that are unfolded into 42 high-level ‘translational concepts’, which branch into a further 110 lower-level concepts (part A of the ontology). The translational concepts connect to 12 clusters comprising 179 fundamental socio-ethical, scientific, technical and clinical concepts relevant for AI design, development, evaluation, use and monitoring (part B). Relationships between individual concepts are indicated throughout. The ontology defines user communities for AI innovation in health and outlines a comprehensive life cycle and value chain framework. We introduce the concept of the “algorithm-to-model transition” to capture all decisions that may affect the benefits and risks of a model throughout the life cycle and across value chains. The ontology embraces the benefit-risk ratio concept, emphasising the need for robust real-world evidence on the possible benefits of AI tools. The concept descriptions are enriched by a total of approximately 900 publication references. The ontology provides an innovative and comprehensive knowledge resource to support the bridging of relevant actor communities and foster collaboration towards ‘operationalising’ trustworthy AI in health.
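To make the two-part structure described above more concrete, the following is a minimal illustrative sketch of how such an ontology could be represented as a data model. All class and attribute names here are hypothetical and are not the report's actual ontology encoding, which would typically be expressed in a formal ontology language such as OWL/RDF.

```python
# Illustrative sketch only: hypothetical data model for an ontology in which
# ethical principles unfold into translational concepts (part A) that link to
# clusters of fundamental concepts (part B). Not the report's actual encoding.
from dataclasses import dataclass, field


@dataclass
class Concept:
    """A node in the ontology: a principle, a translational or a fundamental concept."""
    identifier: str
    label: str
    description: str = ""
    references: list[str] = field(default_factory=list)   # publication references
    related_to: list[str] = field(default_factory=list)   # cross-concept relationships


@dataclass
class TranslationalConcept(Concept):
    """Part A: high-level translational concept unfolding an ethical principle."""
    lower_level_concepts: list[Concept] = field(default_factory=list)
    fundamental_clusters: list[str] = field(default_factory=list)  # links into part B


@dataclass
class EthicalPrinciple(Concept):
    """One of the consensus ethical principles at the top of the pathway."""
    translational_concepts: list[TranslationalConcept] = field(default_factory=list)


@dataclass
class FundamentalCluster:
    """Part B: a cluster of fundamental socio-ethical, scientific,
    technical and clinical concepts."""
    identifier: str
    label: str
    concepts: list[Concept] = field(default_factory=list)


# Example wiring with invented identifiers and labels:
transparency = EthicalPrinciple("P01", "Transparency")
traceability = TranslationalConcept("A07", "Traceability",
                                     fundamental_clusters=["B03"])
transparency.translational_concepts.append(traceability)
```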
GRIESINGER Claudius Benedict; REINA Vittorio; PANIDIS Dimitrios; CHASSAIGNE Hubert
2025-07-11
Publications Office of the European Union
JRC140726
ISBN 978-92-68-29680-6 (online)
ISSN 1831-9424 (online)
EUR 40379
OP KJ-01-25-369-EN-N (online)
https://publications.jrc.ec.europa.eu/repository/handle/JRC140726
DOI 10.2760/8107037 (online)