FairUP

The official implementation of "FairUP: a Framework for Fairness Analysis of Graph Neural Network-Based User Profiling Models"

[Figure: FairUP framework architecture]

The framework currently supports three state-of-the-art GNN-based user profiling models.

Abstract

Modern user profiling approaches capture different forms of interactions with the data, from user-item to user-user relationships. Graph Neural Networks (GNNs) have become a natural way to model these behaviours and build efficient and effective user profiles. However, each GNN-based user profiling approach has its own way of processing information, thus creating heterogeneity that does not favour the benchmarking of these techniques. To overcome this issue, we present FairUP, a framework that standardises the input needed to run three state-of-the-art GNN-based models for user profiling tasks. Moreover, given the importance that algorithmic fairness is getting in the evaluation of machine learning systems, FairUP includes two additional components to (1) analyse pre-processing and post-processing fairness and (2) mitigate the potential presence of unfairness in the original datasets through three pre-processing debiasing techniques. The framework, while extensible in multiple directions, in its first version allows users to conduct experiments on four real-world datasets.

Description

FairUP is a standardised framework that empowers researchers and practitioners to simultaneously analyse state-of-the-art Graph Neural Network-based models for user profiling tasks, in terms of both classification performance and fairness metric scores.

The framework, whose architecture is shown above, presents several components, which allow end-users to:

  • compute the fairness of the input dataset by means of a pre-processing fairness metric, i.e. disparate impact;
  • mitigate the unfairness of the dataset, if needed, by applying different debiasing methods, i.e. sampling, reweighting and disparate impact remover;
  • standardise the input (a graph in Neo4J or NetworkX format) for each of the included GNNs;
  • train one or more GNN models, specifying the parameters for each of them;
  • evaluate post-hoc fairness by exploiting four metrics, i.e. statistical parity, equal opportunity, overall accuracy equality, treatment equality.
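To make the fairness components above concrete, the following is a minimal sketch of two of the metrics the framework relies on: disparate impact (the pre-processing metric) and statistical parity (one of the four post-hoc metrics). The function names and the NumPy-based interface are illustrative assumptions, not FairUP's actual API.

```python
import numpy as np

def disparate_impact(y, sensitive):
    """Pre-processing metric: ratio of positive-outcome rates between the
    unprivileged (sensitive == 0) and privileged (sensitive == 1) groups.
    Values close to 1 indicate a fair dataset; the common "four-fifths
    rule" flags ratios below 0.8. Hypothetical helper, not FairUP's API."""
    priv_rate = y[sensitive == 1].mean()
    unpriv_rate = y[sensitive == 0].mean()
    return unpriv_rate / priv_rate

def statistical_parity(y_pred, sensitive):
    """Post-hoc metric: absolute difference in positive prediction rates
    between the two groups; 0 means the classifier treats them equally."""
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

# Toy example: 8 users, binary labels and a binary sensitive attribute.
labels = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([1, 1, 1, 1, 0, 0, 0, 0])

print(disparate_impact(labels, group))    # 0.25 / 0.75 ≈ 0.333 -> biased
print(statistical_parity(labels, group))  # |0.25 - 0.75| = 0.5
```

In the framework, the pre-processing score decides whether a debiasing method (sampling, reweighting, or disparate impact remover) should be applied before training, while the post-hoc metrics are computed on each trained model's predictions.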

Requirements

The code has been executed under Python 3.8.1, with the dependencies listed below.

dgl==0.6.1
dgl_cu113==0.7.2
fasttext==0.9.2
fitlog==0.9.13
hickle==4.0.4
matplotlib==3.5.1
metis==0.2a5
networkx==2.6.3
numpy==1.22.0
pandas==1.3.5
scikit_learn==1.0.2
scipy==1.7.3
texttable==1.6.4
torch==1.10.1+cu113
torch_geometric==2.0.3
torch_scatter==2.0.9
tqdm==4.62.3

Notes:

  • the file requirements.txt installs all dependencies for both models;
  • the dependencies including cu113 are meant to run on CUDA 11.3 (install the correct package based on your version of CUDA).
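As a sketch, the dependencies can be installed as follows; the wheel index URLs are the standard PyTorch and DGL ones for CUDA builds, and the `cu113` suffix should be swapped to match your installed CUDA version:

```shell
# Install all pinned dependencies at once.
pip install -r requirements.txt

# Or install the CUDA 11.3 builds explicitly (adjust cu113 to your CUDA version).
pip install torch==1.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
pip install dgl-cu113==0.7.2 -f https://data.dgl.ai/wheels/repo.html
```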

Demonstration

Next steps

  • Adding new GNN models.
  • Adding more datasets and fairness metrics.

Contact
