
Mathematics for Graphs

Explore the mathematical foundations necessary to process graph-structured data within graph neural networks. Understand key concepts like degree matrices, normalized graph Laplacians, eigenvalues, and eigenvectors. Learn how these elements enable stable gradient-based training and support applications such as spectral image segmentation and varied graph representations.

The basic maths for processing graph-structured data

We already defined the graph signal $X \in R^{N \times F}$ and the adjacency matrix $A \in R^{N \times N}$. A very important and practical feature is the degree of each node, which is simply the number of nodes it is connected to. For instance, every interior (non-border) pixel in an image has a degree of 8, corresponding to its 8 surrounding pixels.

If $A$ is binary, the degree corresponds to the number of neighbors in the graph. In general, we calculate the degree vector by summing the rows of $A$. Since the degree is a per-node quantity, it is convenient to place the degree vector on the diagonal of an $N \times N$ matrix:

Python
import torch

def calc_degree_matrix(a):
    # Sum each row of A to get the node degrees,
    # then place them on the diagonal of an N x N matrix
    return torch.diag(a.sum(dim=-1))

# Random binary adjacency matrix (for illustration; not necessarily symmetric)
a = torch.rand(3, 3)
a[a > 0.5] = 1
a[a <= 0.5] = 0

d = calc_degree_matrix(a)
print("A:", a)
print("D:", d)

The degree matrix $D$ is fundamental in graph theory because it summarizes the connectivity of each node in a single value. It is also used in the computation of the most important graph operator: the graph Laplacian!

The graph Laplacian

The graph Laplacian is defined as:

$L = D - A$

In fact, if $A$ has no self-loops, the diagonal elements of $L$ equal the degrees of the nodes: $L_{ii} = D_{ii}$. The off-diagonal elements are $L_{ij} = -1$ when an edge connects nodes $i$ and $j$, and $L_{ij} = 0$ otherwise (for a binary $A$).
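
Following this definition, here is a minimal sketch of the Laplacian computation. The helper name create_graph_lapl is ours for illustration, and calc_degree_matrix is repeated from the earlier snippet so the example is self-contained:

Python
import torch

def calc_degree_matrix(a):
    # Repeated from the earlier snippet
    return torch.diag(a.sum(dim=-1))

def create_graph_lapl(a):
    # L = D - A, following the definition above
    return calc_degree_matrix(a) - a

# A small, fixed binary adjacency matrix without self-loops
a = torch.tensor([[0., 1., 1.],
                  [1., 0., 0.],
                  [1., 0., 0.]])
print(create_graph_lapl(a))
# tensor([[ 2., -1., -1.],
#         [-1.,  1.,  0.],
#         [-1.,  0.,  1.]])

The diagonal reproduces the node degrees (2, 1, 1), while each connected pair of nodes contributes a $-1$ off the diagonal, matching the description above.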