[1911.03584v1] On the Relationship between Self-Attention and Convolutional Layers
We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that learned fully-attentional models do behave similarly to CNNs in practice. More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters, similar to deformable convolutions.

Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, recent work has shown that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, that they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with a sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that this phenomenon also occurs in practice, corroborating our analysis. Our code is publicly available at https://github.com/epfml/attention-cnn/tree/arxiv-v1.
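The core expressivity claim can be illustrated with a minimal numpy sketch (an illustration written for this summary, not the authors' released code): assign one attention head to each of the K×K kernel positions, let each head attend with a hard one-hot pattern to the pixel at its fixed relative shift, and fold the kernel weights into that head's output projection. The sum over heads then reproduces a stride-1, zero-padded convolution exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 6            # image height and width
D_in, D_out = 4, 5   # input / output channels
K = 3                # kernel size -> K*K attention heads
pad = K // 2

X = rng.standard_normal((H, W, D_in))
kernel = rng.standard_normal((K, K, D_in, D_out))

# Reference: direct convolution (stride 1, zero padding).
Xp = np.pad(X, ((pad, pad), (pad, pad), (0, 0)))
conv = np.zeros((H, W, D_out))
for i in range(K):
    for j in range(K):
        conv += Xp[i:i + H, j:j + W] @ kernel[i, j]

# Attention formulation: head (i, j) uses a one-hot attention map so
# that query pixel (qi, qj) attends only to key pixel (qi+i, qj+j) in
# the padded image; the kernel slice kernel[i, j] plays the role of
# that head's value/output projection.
Hp, Wp = H + 2 * pad, W + 2 * pad
keys = Xp.reshape(Hp * Wp, D_in)
attn_out = np.zeros((H, W, D_out))
for i in range(K):
    for j in range(K):
        A = np.zeros((H * W, Hp * Wp))          # one-hot attention matrix
        for qi in range(H):
            for qj in range(W):
                A[qi * W + qj, (qi + i) * Wp + (qj + j)] = 1.0
        gathered = A @ keys                      # (N, D_in) values per query
        attn_out += (gathered @ kernel[i, j]).reshape(H, W, D_out)

# The hard-attention layer matches the convolution exactly.
assert np.allclose(conv, attn_out)
```

The paper's theorem shows such one-hot attention patterns are realizable in the limit by softmax attention with quadratic relative positional encodings; the sketch above hard-codes the limiting patterns to make the equivalence concrete.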