Mathematics for Graphs
Explore the mathematical foundations necessary to process graph-structured data within graph neural networks. Understand key concepts like degree matrices, normalized graph Laplacians, eigenvalues, and eigenvectors. Learn how these elements enable stable gradient-based training and support applications such as spectral image segmentation and varied graph representations.
The basic maths for processing graph-structured data
We already defined the graph signal $X$ and the adjacency matrix $A$. Another very important and practical feature is the degree of each node, which is simply the number of nodes it is connected to. For instance, every interior (non-border) pixel in an image has a degree of 8: its eight surrounding pixels.
If $A$ is binary, the degree corresponds to the number of neighbors in the graph. In general, we calculate the degree vector $d$ by summing the rows of $A$. Since the degree is a feature linked to each node, it is more convenient to place the degree vector on the diagonal of a matrix:

$$D = \mathrm{diag}(d), \qquad D_{ii} = \sum_{j} A_{ij}$$
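The degree computation can be sketched in a few lines of NumPy. The 3-node adjacency matrix below is a made-up example, not one from the text:

```python
import numpy as np

# Hypothetical binary adjacency matrix of a small undirected graph:
# node 0 is connected to nodes 1 and 2; nodes 1 and 2 are not connected.
A = np.array([
    [0, 1, 1],
    [1, 0, 0],
    [1, 0, 0],
])

d = A.sum(axis=1)  # degree vector: one row sum per node
D = np.diag(d)     # degree matrix: degrees placed on the diagonal
print(d)           # degree of each node
print(D)
```

Note that `np.diag` turns the degree vector into the diagonal matrix $D$, which is the form used in the Laplacian computation that follows.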
The degree matrix is fundamental in graph theory because it summarizes the connectivity of each node in a single value. It is also used in the computation of the most important graph operator: the graph Laplacian!
The graph Laplacian
The graph Laplacian is defined as:

$$L = D - A$$

In fact, the diagonal elements $L_{ii}$ will have the degree of node $i$ if $A$ has no self-loops. On the other hand, the non-diagonal elements $L_{ij} = -A_{ij}$ ...
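The Laplacian $L = D - A$ can be sketched directly from the definitions above. The 4-node graph here is a hypothetical example; the printed checks illustrate two standard properties (rows sum to zero, and $L$ is symmetric for undirected graphs):

```python
import numpy as np

# Toy undirected graph with 4 nodes (made-up example).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
])

d = A.sum(axis=1)  # degree vector
D = np.diag(d)     # degree matrix
L = D - A          # graph Laplacian

print(L)
# Diagonal entries equal the node degrees; off-diagonal entries are -A[i, j].
print(L.sum(axis=1))  # every row sums to zero
```

Because each row of $A$ sums to the node's degree, subtracting $A$ from $D$ makes every row of $L$ sum to zero, which is why the constant vector is always an eigenvector of $L$ with eigenvalue 0.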