Recently hyped ML content linked in one simple page
Sources: reddit/r/{MachineLearning,datasets}, arxiv-sanity, twitter, kaggle/kernels, hackernews, awesome-datasets, SOTA changes
Made by: Deep Phrase HK Limited


[1602.02660] Exploiting Cyclic Symmetry in Convolutional Neural Networks
We would like to extend our work to volumetric data, where reducing the number of parameters is even more important and where a larger number of symmetries can be exploited without requiring costly interpolation.
Abstract: Many classes of images exhibit rotational symmetry. Convolutional neural networks are sometimes trained using data augmentation to exploit this, but they are still required to learn the rotation equivariance properties from the data. Encoding these properties into the network architecture, as we are already used to doing for translation equivariance by using convolutional layers, could result in a more efficient use of the parameter budget by relieving the model from learning them. We introduce four operations which can be inserted into neural network models as layers, and which can be combined to make these models partially equivariant to rotations. They also enable parameter sharing across different orientations. We evaluate the effect of these architectural modifications on three datasets which exhibit rotational symmetry and demonstrate improved performance with smaller models.
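Two of the operations the abstract refers to, cyclic slicing and cyclic pooling, can be sketched in a few lines of NumPy. This is an illustrative sketch of the idea (stack the four 90° rotations of a minibatch along the batch axis, run the network, then pool over the four orientation copies), not the authors' implementation; in the paper, network layers sit between the slice and the pool.

```python
import numpy as np

def cyclic_slice(x):
    """Stack the four 90-degree rotations of each feature map along the
    batch axis: (N, H, W) -> (4N, H, W)."""
    return np.concatenate([np.rot90(x, k, axes=(1, 2)) for k in range(4)], axis=0)

def cyclic_pool(x, reduce=np.mean):
    """Pool over the four orientation copies produced by cyclic_slice:
    (4N, H, W) -> (N, H, W). Because the four copies are a complete orbit
    under 90-degree rotation, the pooled result is invariant to rotating
    the original input by 90 degrees."""
    n = x.shape[0] // 4
    return reduce(x.reshape(4, n, *x.shape[1:]), axis=0)

# Rotating the input by 90 degrees only permutes the orientation copies,
# so the pooled output is unchanged.
x = np.random.rand(2, 5, 5)
a = cyclic_pool(cyclic_slice(x))
b = cyclic_pool(cyclic_slice(np.rot90(x, 1, axes=(1, 2))))
assert np.allclose(a, b)
```

Here the "network" between slicing and pooling is the identity; with real layers in between, the same permutation argument is what makes the full stack rotation invariant.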
Figure captions:
Figure 1. The four operations that constitute our proposed framework for building rotation equivariant neural networks. (Introduction)
Figure 2. Convolving an image with a rotated filter (middle) and inversely rotating the result is the same as convolving the inversely rotated image with the unrotated filter (bottom). This follows from the fact that rotation is distributive w.r.t. convolution. (Cyclic symmetry)
Figure 3. Schematic representation of the effect of the cyclic slice, roll and pool operations on the feature maps in a CNN. Arrows represent network layers. Each square represents a minibatch of feature maps. The letter 'R' is used to clearly distinguish orientations. Different colours indicate that feature maps are qualitatively different, i.e. they are not rotations of each other. Feature maps in a column are stacked along the batch dimension in practice; feature maps in a row are stacked along the feature dimension. (Encoding equivariance in neural nets)
Figures 4–6. Example images for the Plankton and Galaxies datasets, which are rotation invariant. (Datasets)
Figures 7–9. Example tile from the Massachusetts buildings dataset, which is same-equivariant to rotation, and corresponding labels. (Datasets)
Figure 10. Baseline architectures for plankton (left), galaxies (middle) and Massachusetts buildings (right). Conv. layers are shown in red, pooling layers in blue, dense layers in orange. The numbers of units are indicated on the left, filter sizes on the right. ReLUs are used throughout. Dropout with p = 0.5 is applied before all dense layers. (Experimental setup)
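The identity stated in the Figure 2 caption (rotation distributes over convolution) can be checked numerically. The sketch below uses a naive "full" 2-D convolution written from scratch; `conv2d_full` is a hypothetical helper for this check, not a function from the paper.

```python
import numpy as np

def conv2d_full(x, w):
    """Naive 'full' 2-D convolution:
    out[p, q] = sum_{i, j} w[i, j] * x[p - i, q - j]."""
    H, W = x.shape
    h, ww = w.shape
    out = np.zeros((H + h - 1, W + ww - 1))
    for i in range(h):
        for j in range(ww):
            # Each filter tap contributes a shifted, scaled copy of the image.
            out[i:i + H, j:j + W] += w[i, j] * x
    return out

x = np.random.rand(6, 6)  # square image, so rot90 keeps shapes compatible
w = np.random.rand(3, 3)  # square filter

# Convolving with a rotated filter and inversely rotating the result ...
lhs = np.rot90(conv2d_full(x, np.rot90(w)), -1)
# ... equals convolving the inversely rotated image with the unrotated filter.
rhs = conv2d_full(np.rot90(x, -1), w)
assert np.allclose(lhs, rhs)
```

This is the property that lets the paper share one set of filter parameters across four orientations instead of learning a rotated copy of each filter.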



Related: TF-IDF
[1612.09346] Rotation equivariant vector field networks
[1803.06253] Land cover mapping at very high resolution with rotation equivariant CNNs: towards small yet accurate models
[1602.07576] Group Equivariant Convolutional Networks
[1904.09472] ChoiceNet: CNN learning through choice of multiple feature map representations
[1612.04642] Harmonic Networks: Deep Translation and Rotation Equivariance
[1707.09873] Representation Learning on Large and Small Data
[1604.06720] Learning rotation invariant convolutional filters for texture classification
[1604.06318] TI-POOLING: transformation-invariant pooling for feature learning in Convolutional Neural Networks
[1412.5104] Locally Scale-Invariant Convolutional Neural Networks
[1707.09725] Analysis and Optimization of Convolutional Neural Network Architectures
Mentions
[1402.4437] Learning the Irreducible Representations of Commutative Lie Groups
[1602.07576] Group Equivariant Convolutional Networks
[1512.03385] Deep Residual Learning for Image Recognition
[1412.6980] Adam: A Method for Stochastic Optimization
[1411.5908] Understanding image representations by measuring their equivariance and equivalence
[1409.1556] Very Deep Convolutional Networks for Large-Scale Image Recognition
[1409.4842] Going Deeper with Convolutions
[1601.07532] Learning to Extract Motion from Videos in Convolutional Neural Networks
[1507.08754] Flip-Rotate-Pooling Convolution and Split Dropout on Convolution Neural Networks for Image Classification
