[1910.11093v1] Scale-Equivariant Steerable Networks
We derive the exact formula for scale-equivariant mappings and demonstrate how it can be implemented for discretized signals.

The effectiveness of Convolutional Neural Networks (CNNs) has been largely attributed to their built-in property of translation equivariance. However, CNNs have no embedded mechanisms to handle other types of transformations. In this work, we focus on scale changes, which regularly appear in various tasks due to the changing distances between objects and the camera. First, we introduce a general theory for building scale-equivariant convolutional networks with steerable filters. We develop scale-convolution and generalize other common blocks to be scale-equivariant. We demonstrate the computational efficiency and numerical stability of the proposed method, and compare the proposed models to previously developed methods for scale equivariance and local scale invariance. We demonstrate state-of-the-art results on the MNIST-scale dataset and on the STL-10 dataset in the supervised learning setting.
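The core construction can be sketched as follows: each filter is a fixed linear combination of a precomputed steerable basis rendered at several scales, so the same learned weights produce the filter at every scale, and scale-convolution applies the per-scale filters to the input. This is a minimal NumPy sketch under assumed choices (a Gaussian-derivative-like basis, hypothetical function names); it is not the authors' implementation.

```python
import numpy as np

def make_basis(size, scales, n_funcs=4):
    """Illustrative steerable basis: Gaussian and its low-order moments,
    rendered at each scale with 1/s^2 normalization (an assumed convention)."""
    xs = np.linspace(-1.0, 1.0, size)
    X, Y = np.meshgrid(xs, xs)
    basis = np.zeros((len(scales), n_funcs, size, size))
    for si, s in enumerate(scales):
        g = np.exp(-(X**2 + Y**2) / (2.0 * s**2))
        funcs = [g, X * g, Y * g, X * Y * g]
        for fi in range(n_funcs):
            basis[si, fi] = funcs[fi] / s**2
    return basis

def steerable_filters(w, basis):
    """kappa[s] = sum_i w[i] * psi_i at scale s: one learned weight vector
    shared across all scales yields one filter per scale."""
    return np.einsum('i,sixy->sxy', w, basis)

def conv2d(img, k):
    """Plain 'valid' 2D correlation (loop-based for clarity)."""
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def scale_conv(f, filters):
    """Scale-convolution without interscale interaction: apply the filter
    rendered at each scale to the same input; output gains a scale axis."""
    return np.stack([conv2d(f, k) for k in filters])
```

Because the weights `w` are shared across scales, rescaling the input approximately corresponds to a shift along the scale axis of the output, which is the equivariance property the paper builds on.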

Figure 1: Left: how steerable filters are computed from a steerable filter basis. Middle and right: a representation of scale-convolution using Equation ?? and Equation ??. As an example, we use an input signal f with 3 channels; it has 1 scale on T and 4 scales on H. It is convolved with a filter κ = w × Ψ without scale interaction, which produces an output with 2 channels and 4 scales. Here we show only the channels of the signals and the filter; spatial components are hidden for simplicity. (Implementation)

Figure 2: Equivariance error ∆ as a function of the number of layers (left), the downscaling applied to the input image (middle), and the number of scales in interscale interactions (right). The bars indicate the standard deviation. (Experiments)
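The equivariance error ∆ in Figure 2 is typically measured as a relative discrepancy between transforming then mapping versus mapping then transforming, i.e. ∆ = ‖Φ(L f) − L Φ(f)‖ / ‖L Φ(f)‖. A minimal sketch of such a measurement, with hypothetical helper names and a simple subsampling operator standing in for the scaling transform:

```python
import numpy as np

def equivariance_error(phi, transform, f):
    """Relative equivariance error of map phi under a transform L:
    Delta = ||phi(L f) - L phi(f)|| / ||L phi(f)||."""
    a = phi(transform(f))
    b = transform(phi(f))
    return np.linalg.norm(a - b) / np.linalg.norm(b)

# Example: pointwise maps commute with subsampling (error ~ 0),
# while maps that mix spatial positions generally do not.
downscale = lambda x: x[::2, ::2]          # crude stand-in for L_s
pointwise = np.abs                          # commutes with subsampling
mixing = lambda x: x + np.roll(x, 1, axis=0)  # does not commute
```

In the paper's setting, Φ would be a (stack of) scale-equivariant layers and L a proper image rescaling, and the plots show how ∆ grows with depth, downscaling factor, and the number of interacting scales.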