Explainability can foster trust in artificial intelligence in geoscience
Uptake of explainable artificial intelligence (XAI) methods in geoscience is currently limited. We argue that methods revealing the decision processes of AI models can foster trust in their results and facilitate the broader adoption of AI.
Jesper Dramsch, Monique Kuglitsch, Miguel Angel Fernández-Torres, Andrea Toreti, Arif Albayrak, Lorenzo Nava, Saman Ghaffarian, Ximeng Cheng, Jackie Ma, Wojciech Samek, Rudy Venguswamy, Anirudh Koul, Raghavan Muthuregunathan, Arthur Hrast Essenfelder
2025-05-23
NATURE PUBLISHING GROUP
JRC137352
1752-0894 (online)
https://doi.org/10.1038/s41561-025-01639-x
https://publications.jrc.ec.europa.eu/repository/handle/JRC137352
10.1038/s41561-025-01639-x (online)