[1910.10866] DFNets: Spectral CNNs for Graphs with Feedback-Looped Filters
In addition, we establish several desirable properties of feedback-looped filters, such as guaranteed convergence, linear convergence time, and universal design.

Abstract: We propose a novel spectral convolutional neural network (CNN) model on
graph-structured data, namely Distributed Feedback-Looped Networks (DFNets). This
model incorporates a robust class of spectral graph filters, called
feedback-looped filters, to provide better localization on vertices, while
still attaining fast convergence and linear memory requirements. Theoretically,
feedback-looped filters can guarantee convergence w.r.t. a specified error
bound, and be applied universally to any graph without knowing its structure.
Furthermore, the propagation rule of this model can diversify features from the
preceding layers to produce strong gradient flows. We have evaluated our model
using two benchmark tasks: semi-supervised document classification on citation
networks and semi-supervised entity classification on a knowledge graph. The
experimental results show that our model considerably outperforms the
state-of-the-art methods in both benchmark tasks over all datasets.
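The feedback-looped filters described above combine a feedforward (polynomial) part with a feedback part that corrects errors in the filtered frequencies. A minimal sketch of this idea, assuming a rational spectral filter of the form y = (I + Σ_j φ_j L^j)⁻¹ (Σ_i ψ_i L^i) x applied via a dense linear solve — the coefficient names ψ/φ, the helper name, and the dense formulation are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def feedback_looped_filter(L, x, psi, phi):
    """Hypothetical sketch: apply a rational ("feedback-looped") graph filter
    y = (I + sum_{j>=1} phi_j L^j)^{-1} (sum_{i>=0} psi_i L^i) x,
    where L is a graph Laplacian and x a signal on the vertices."""
    n = L.shape[0]
    # Feedforward part: polynomial in L of order q = len(psi) - 1
    forward = np.zeros((n, n))
    Lp = np.eye(n)
    for c in psi:
        forward += c * Lp
        Lp = Lp @ L
    # Feedback part: I plus a polynomial in L of order p = len(phi)
    feedback = np.eye(n)
    Lp = L.copy()
    for c in phi:
        feedback += c * Lp
        Lp = Lp @ L
    # Solving the linear system realizes the feedback loop in one step
    return np.linalg.solve(feedback, forward @ x)

# Toy example: unnormalized Laplacian of a 4-vertex path graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
x = np.array([1.0, 0.0, 0.0, 0.0])
y = feedback_looped_filter(L, x, psi=[0.5, 0.3], phi=[0.1])
```

With an empty feedback coefficient list the filter degenerates to an ordinary polynomial (feedforward-only) spectral filter, which is one way to see how the feedback terms extend standard polynomial filtering.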

Figure 1: A simplified example illustrating feedback-looped filters, where v1 is the current vertex and colour similarity indicates the correlation between vertices (e.g., v1 and v5 are highly correlated, while v2 and v6 are less correlated with v1): (a) an input graph, where λi is the original frequency of vertex vi; (b) feedforward filtering, which attenuates some low-order frequencies (e.g., λ2) and amplifies others (e.g., λ5 and λ6); (c) feedback filtering, which reduces the error in the frequencies produced by (b) (e.g., λ6). (Introduction)

Figure 5: Accuracy (%) of DFNet under different polynomial orders p and q. (Comparison under Different Polynomial Orders)

Figure 7: Accuracy (%) of our models in three cases: (1) using both scaled-normalization and cut-off frequency, (2) using only cut-off frequency, and (3) using only scaled-normalization. (Evaluation of Scaled-Normalization and Cut-off Frequency)

Figure 8: The t-SNE visualization of the 2-D node embedding space for the Pubmed dataset in GCN, GAT, and our method. (Node Embeddings)

Figure 16: The t-SNE visualization of the 2-D node embedding space for the Cora dataset in GCN, GAT, and our method. (Node Embeddings)