GNN’s FAME: Fairness-Aware MEssages for Graph Neural Networks
Graph Neural Networks (GNNs) have shown success in various domains but often inherit societal biases from training data, limiting their real-world applicability. Historical data can contain patterns of discrimination related to sensitive attributes such as age or gender. GNNs can even amplify these biases due to their topology and message-passing mechanism, since nodes with similar sensitive attributes tend to connect more frequently. While many studies have addressed algorithmic fairness in machine learning through pre-processing and post-processing techniques, few have focused on bias mitigation within the GNN training process. In this paper, we propose FAME (Fairness-Aware MEssages), an in-processing bias mitigation technique that modifies the message-passing algorithm during GNN training to promote fairness. By incorporating a bias correction term, the FAME layer adjusts messages based on the difference between the sensitive attributes of connected nodes. FAME is compatible with Graph Convolutional Networks, and a variant called A-FAME is designed for attention-based GNNs. Experiments conducted on three datasets evaluate the effectiveness of our approach against three classes of algorithms and six models, considering two notions of algorithmic fairness. Results show that the proposed approaches produce accurate and fair node classifications. These results provide a strong foundation for further exploration and validation of this methodology. The source code is available at https://github.com/HannanJaved/FAME.
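The abstract describes a message-passing layer whose messages are adjusted by a bias correction term derived from the difference between the sensitive attributes of connected nodes. The sketch below is an illustrative NumPy approximation of that idea, not the paper's actual implementation: the correction form `1 + lam * |s_i - s_j|` and the weight `lam` are assumptions chosen to show the mechanism on top of a standard GCN-style normalised propagation step.

```python
import numpy as np

def fame_style_propagation(H, A, s, lam=0.5):
    """One FAME-style propagation step (illustrative sketch only).

    H:   (n, d) node feature matrix
    A:   (n, n) binary adjacency matrix (undirected, no self-loops)
    s:   (n,)   binary sensitive attribute per node
    lam: hypothetical strength of the bias correction term
    """
    n = H.shape[0]
    # Standard GCN-style symmetric normalisation with self-loops.
    A_hat = A + np.eye(n)
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    # Assumed bias correction: up-weight messages across sensitive-group
    # boundaries, where |s_i - s_j| = 1 for cross-group edges.
    S_diff = np.abs(s[:, None] - s[None, :])
    W_fair = A_norm * (1.0 + lam * S_diff)
    # Renormalise rows so aggregated message magnitudes stay comparable.
    W_fair = W_fair / W_fair.sum(axis=1, keepdims=True)
    return W_fair @ H
```

Because same-group edges dominate in homophilous graphs, boosting cross-group messages counteracts the tendency of plain aggregation to segregate representations by sensitive attribute; the paper's exact correction term may differ.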
PURIFICATO Erasmo;
MAHADIK Hannan;
BORATTO Ludovico;
DE LUCA Ernesto William;
2025-07-25
Association for Computing Machinery (ACM)
JRC141233
979-8-4007-1313-2 (online)
https://doi.org/10.1145/3699682.3728324
https://publications.jrc.ec.europa.eu/repository/handle/JRC141233
10.1145/3699682.3728324 (online)