Tutorial on Graph Rewiring: From Theory to Applications in Fairness
This page provides information and materials about “Graph Rewiring: From Theory to Applications in Fairness”, following the tutorial presented at Learning on Graphs 2022 on the 11th of December, 2022. For the corresponding resources, check out the following links:
Here are some additional resources. We expect to keep updating and improving the content in the future, so stay tuned!
Organizers
Panel
Outline
| Section | Content |
|---|---|
| Motivation (10am ET / 16:00 CET) | |
| Introduction to Spectral Theory | Code Available |
| Transductive Rewiring | Code Available |
| Inductive Rewiring | Code Available |
| Graph Fairness | |
| Panel (12pm-1pm ET / 18:00-19:00 CET) | |
| Q&A | Questions |
Citation
- If you use the code or the tutorial from the Introduction to Spectral Theory, Introduction to the Lovász Bound, Transductive Rewiring or Inductive Rewiring (DiffWire) parts, please cite the original sources and:
@inproceedings{arnaiz2022diffwire,
title={DiffWire: Inductive Graph Rewiring via the Lovasz Bound},
author={Adrian Arnaiz-Rodriguez and Ahmed Begga and Francisco Escolano and Nuria Oliver},
booktitle={The First Learning on Graphs Conference},
year={2022},
url={https://openreview.net/pdf?id=IXvfIex0mX6f}
}
- If you use the code or the tutorial from the Graph Fairness part, please cite the original sources: To Be Added.
Tutorial Overview
Graph Neural Networks (GNNs) have been shown to achieve competitive results on graph-related tasks, such as node and graph classification, link prediction, and node and graph clustering, in a variety of domains. Despite their promising results, GNNs have been reported to suffer from over-smoothing, over-squashing and under-reaching. Graph rewiring and graph pooling have been proposed in the literature as solutions to address these limitations. Graph rewiring consists of modifying (editing and/or re-weighting) the edges of a graph so that the information flow is optimized for a specific task, such as graph/node classification or link prediction. Many graph rewiring methods rely on edge sampling strategies: first, the edges are assigned new weights according to a relevance function, and then they are re-sampled according to these weights so that the most relevant edges (i.e., those with larger weights) are retained. Edge relevance may be computed in different ways, including randomly, based on similarity, or based on the edge's curvature.

This tutorial provides an overview of the most relevant techniques proposed in the literature for graph rewiring based on diffusion, curvature or spectral concepts. It explains how these techniques relate to each other, presents the most relevant state-of-the-art methods, and discusses their application to different domains. The tutorial also outlines open questions in this field from theoretical, empirical and ethical perspectives.
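To make the edge-sampling strategy described above concrete, here is a minimal sketch, not any specific method covered in the tutorial: the graph, the node features, the similarity-based relevance function and the retention budget are all assumptions chosen purely for illustration. Edges are first re-weighted by a relevance function and then re-sampled proportionally to those weights.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Toy graph with random node features (both are assumptions for the example).
G = nx.karate_club_graph()
features = {v: rng.normal(size=8) for v in G.nodes}

# 1) Assign each edge a relevance weight, here the cosine similarity of endpoint features.
def relevance(u, v):
    fu, fv = features[u], features[v]
    return float(fu @ fv / (np.linalg.norm(fu) * np.linalg.norm(fv) + 1e-9))

edges = list(G.edges)
weights = np.array([relevance(u, v) for u, v in edges])
probs = np.exp(weights) / np.exp(weights).sum()   # normalize weights to a distribution

# 2) Re-sample edges proportionally to their new weights, keeping a fixed budget.
budget = int(0.8 * len(edges))                    # retain ~80% of edges (assumption)
kept_idx = rng.choice(len(edges), size=budget, replace=False, p=probs)

G_rewired = nx.Graph()
G_rewired.add_nodes_from(G.nodes)
G_rewired.add_edges_from(edges[i] for i in kept_idx)
print(G.number_of_edges(), "->", G_rewired.number_of_edges())
```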
Goals
The main goal of this tutorial is to teach the fundamentals of graph rewiring and its current challenges. We will motivate the need for mathematically sound graph rewiring methods as a solution to address the main limitations of GNNs: under-reaching, over-smoothing and over-squashing. We will explain the two main approaches proposed in the literature to achieve graph rewiring:
- Transductive methods compute a new convolution matrix for each graph as a pre-processing step in order to improve performance on the task at hand. Examples are parameterized diffusion or curvature-based approaches (a diffusion-based sketch follows this list).
- Inductive methods learn new convolution matrices by training on subgraphs/graphs and then predict them for unseen graphs. Ideally, this process is fully differentiable and parameter-free. We will delve into the implementation of these methods in a hands-on section.
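As a rough sketch of the diffusion-based transductive flavour mentioned above, in the spirit of personalized-PageRank diffusion, one can pre-compute a dense diffusion matrix from the adjacency matrix and sparsify it to obtain the new convolution matrix. The teleport probability, sparsification threshold and toy graph below are illustrative assumptions, not values prescribed by any particular paper.

```python
import numpy as np

def ppr_diffusion_rewiring(A, alpha=0.15, eps=1e-3):
    """Pre-compute a personalized-PageRank diffusion matrix from adjacency A
    and sparsify it; the result replaces A as the convolution matrix.
    alpha (teleport probability) and eps (sparsification threshold) are
    illustrative defaults."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    D_inv = np.diag(1.0 / np.maximum(deg, 1e-12))
    T = A @ D_inv                                            # column-stochastic transition matrix
    S = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * T)   # closed-form PPR diffusion
    S[S < eps] = 0.0                                         # sparsify: drop weak diffused edges
    return S

# Toy usage on a 4-node path graph (assumption for the example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = ppr_diffusion_rewiring(A)
print(np.round(S, 3))
```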
In addition, we will discuss the potential that graph rewiring has to address social and ethical challenges posed by AI, and particularly as a tool to achieve algorithmic fairness.
Target Audience
This tutorial has a good balance of intermediate and advanced material. Attendees should have knowledge of Graph Theory and Machine Learning, particularly GNNs. Intermediate algebra, probability theory and programming knowledge are recommended, as we will show programming libraries and specific examples in the form of Jupyter notebooks to ease the understanding of key concepts. This tutorial will be of particular interest to those who would like to deepen their knowledge of GNNs for a variety of purposes, including building more expressive GNN architectures, improving their robustness, enhancing their explainability or developing trustworthy graph methods. As the tutorial will be held virtually, the maximum number of participants is not a strong constraint (ideally up to 200).
Expected learning outcomes
Attendees of this tutorial will acquire an understanding of the essential concepts in the areas below (some of them are illustrated in short code sketches after the list):
- Spectral Graph Theory
- Laplacians
- Dirichlet energies
- Isoperimetric problems
- Random walks
- Spectral gap
- Commute Times (aka effective resistance)
- Rewiring
- Transductive Methods
- Inductive Methods
- Theory and hands-on content
- Graph Algorithmic Fairness
- Basic concepts of algorithmic fairness
- Dyadic fairness
- Edge editing methods
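To make some of the spectral quantities above concrete, the following numerical sketch computes the combinatorial Laplacian, the spectral gap of the normalized Laplacian, and the effective resistance and commute time between two nodes. The example graph and the dense linear algebra are assumptions made purely for illustration.

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                 # toy example graph (assumption)
A = nx.to_numpy_array(G)
deg = A.sum(axis=1)
L = np.diag(deg) - A                       # combinatorial Laplacian L = D - A

# Spectral gap: second-smallest eigenvalue of the normalized Laplacian.
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_norm = D_inv_sqrt @ L @ D_inv_sqrt
eigvals = np.sort(np.linalg.eigvalsh(L_norm))
spectral_gap = eigvals[1]

# Effective resistance between nodes u, v via the Laplacian pseudo-inverse,
# and commute time CT(u, v) = vol(G) * R_eff(u, v).
L_pinv = np.linalg.pinv(L)
u, v = 0, 33
r_eff = L_pinv[u, u] + L_pinv[v, v] - 2 * L_pinv[u, v]
commute_time = A.sum() * r_eff             # vol(G) = sum of degrees = 2|E|

print(f"spectral gap: {spectral_gap:.4f}")
print(f"effective resistance R(0,33): {r_eff:.4f}, commute time: {commute_time:.2f}")
```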
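Similarly, as a toy pointer to the dyadic fairness concept listed above, one can measure a demographic-parity style gap between the link scores assigned to intra-group and inter-group node pairs. The synthetic sensitive attributes and random scores below are assumptions, not data or code from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumptions): a binary sensitive attribute per node and
# link-prediction scores for all candidate node pairs.
n = 100
sensitive = rng.integers(0, 2, size=n)            # e.g. group membership
pairs = [(u, v) for u in range(n) for v in range(u + 1, n)]
scores = rng.random(len(pairs))                   # stand-in for a model's link scores

# Dyadic gap: difference between the mean score of intra-group pairs and the
# mean score of inter-group pairs. A dyadically fair link predictor drives
# this gap towards zero.
intra = [s for (u, v), s in zip(pairs, scores) if sensitive[u] == sensitive[v]]
inter = [s for (u, v), s in zip(pairs, scores) if sensitive[u] != sensitive[v]]
dyadic_gap = abs(np.mean(intra) - np.mean(inter))
print(f"dyadic parity gap: {dyadic_gap:.4f}")
```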
Last but not least, attendees will acquire technical skills to perform inductive rewiring in the hands-on part of the tutorial, as well as useful insights from recognized experts in the field during the panel session.
This combination of methodologies will provide attendees with the necessary knowledge and tools to use GNNs to address problems in a variety of domains, such as medicine, biology, chemistry, social and communication network analysis and computational social sciences.
Learning on Graphs Conference 2022
LoG 2022 was the first edition of the Learning on Graphs conference. There were 70+ attendees during the 3 hours of the tutorial on Sunday, 11th December. The conference was held in a virtual format, and the talks were recorded and uploaded to the conference website. The conference was a great success, and we are looking forward to the next edition.
The posts promoting the event received (at least) 25,000 views across different social media platforms.