Graph Convolutional Networks
Explore the fundamental concepts of graph convolutional networks within graph neural networks. Understand graph and node classification tasks, the role of the graph Laplacian, spectral filtering, and the use of Chebyshev expansion to efficiently perform convolutions on graphs.
Types of graph tasks: graph and node classification
We discussed a bit about the input representation. But what about the target (output)? The most basic tasks in graph neural networks are:
- Graph classification: We have many graphs and want to predict a single label for each individual graph (similar to image classification). This task is cast as a standard supervised problem. In the graph literature, you will see the term inductive learning for this task.
- Node classification: Usually, in this type of task, we have one huge graph (>5000 nodes) and we try to find a label for each node (similar to image segmentation). Importantly, only very few nodes are labeled for training (for instance, <5%). The aim is to predict the missing labels for all the other nodes in the graph. That is why this task is formulated as semi-supervised learning, or equivalently transductive learning. It is called semi-supervised because, even though most nodes do not have labels, we feed the whole graph (with all the nodes) into the neural network and formulate a supervised loss term for the labeled nodes only.
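To make the semi-supervised setup concrete, here is a minimal numpy sketch of the loss masking described above. The logits, labels, and sizes are made up for illustration; the key idea is that the model produces outputs for all nodes, but the supervised loss is computed over the labeled nodes only.

```python
import numpy as np

# Hypothetical setup: 6 nodes, 3 classes; only 2 of the 6 nodes are labeled.
num_nodes, num_classes = 6, 3
rng = np.random.default_rng(0)
logits = rng.normal(size=(num_nodes, num_classes))  # model output for ALL nodes
labels = np.array([0, 2, -1, -1, -1, -1])           # -1 marks unlabeled nodes
mask = labels >= 0                                  # loss uses labeled nodes only

# Numerically stable softmax cross-entropy, restricted to the labeled nodes.
z = logits - logits.max(axis=1, keepdims=True)
log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
loss = -log_probs[mask, labels[mask]].mean()
print(loss)
```

Gradients of this loss still flow through the whole graph, because every node's logits depend on its neighbors' features; only the loss term itself is restricted to the labeled subset.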
Next, you will be provided with some minimal theory on how graph data is processed.
How are graph convolution layers formed?
Principle: Convolution in the vertex domain is equivalent to multiplication in the graph spectral domain.
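This principle can be demonstrated on a toy graph. The sketch below (a hypothetical 4-node path graph and a hand-picked polynomial filter) builds the graph Laplacian $L = D - A$, eigendecomposes it, and shows that scaling the signal's spectral coefficients by $g(\lambda) = 1 - \lambda$ gives exactly the same result as applying $g(L) = I - L$ directly in the vertex domain.

```python
import numpy as np

# Toy undirected graph: a path of 4 nodes (hypothetical example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                            # combinatorial graph Laplacian

# Spectral decomposition: L = U diag(lam) U^T
lam, U = np.linalg.eigh(L)

x = np.array([1.0, 0.0, 0.0, 0.0])   # a signal living on the vertices

# Filtering in the spectral domain: transform, scale each frequency, transform back.
# Here the filter is the polynomial g(lam) = 1 - lam.
g = 1.0 - lam
y_spectral = U @ (g * (U.T @ x))

# The same filter applied directly in the vertex domain: g(L) x = (I - L) x.
y_vertex = (np.eye(4) - L) @ x
print(np.allclose(y_spectral, y_vertex))  # → True: the two views agree
```

The equivalence holds for any polynomial filter, which is exactly what the Chebyshev expansion exploits to avoid the expensive eigendecomposition on large graphs.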
The most straightforward implementation of a graph neural network would be something like this:

$Y = (AX)W$

where $W$ is a trainable parameter and ...
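This one-liner can be sketched in numpy. The sizes below are made up for illustration: each node's new representation aggregates its neighbors' features via $AX$, then a shared linear transform $W$ is applied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 nodes, 3 input features, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency matrix
X = rng.normal(size=(4, 3))                 # node feature matrix
W = rng.normal(size=(3, 2))                 # trainable weight matrix

# Aggregate neighbor features (AX), then transform them (W).
Y = (A @ X) @ W
print(Y.shape)  # → (4, 2)
```

Note that with a plain adjacency matrix a node's own features are ignored (the diagonal of $A$ is zero), which is one of the shortcomings that more refined graph convolutions address.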