[1911.05856v1] SpiralNet++: A Fast and Highly Efficient Mesh Convolution Operator
We introduce SpiralNet++ for 3D shape meshes, where data are generally aligned rather than of varied topology, which allows SpiralNet++ to efficiently fuse neighboring vertex features with local geometric structure information.

Abstract Intrinsic graph convolution operators with differentiable kernel functions play a crucial role in analyzing 3D shape meshes. In this paper, we present a fast and efficient intrinsic mesh convolution operator that does not rely on the intricate design of a kernel function. We explicitly formulate the order in which neighboring vertices are aggregated, instead of learning weights between nodes, and a fully connected layer then fuses local geometric structure information with vertex features. We provide extensive evidence that models based on this convolution operator are easier to train and can efficiently learn invariant shape features. Specifically, we evaluate our method on three different tasks: dense shape correspondence, 3D facial expression classification, and 3D shape reconstruction, and show that it significantly outperforms state-of-the-art approaches while being much faster, without relying on shape descriptors. Our source code is available on GitHub: https://github.com/sw-gong/spiralnet_plus.
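The operator described above can be sketched in a few lines: for each vertex, gather neighbor features in a precomputed spiral order, concatenate them, and apply one fully connected layer. The following is a minimal NumPy illustration; the toy mesh, the spiral index matrix, and all dimensions are hypothetical and chosen only to show the shapes involved, not the authors' implementation.

```python
import numpy as np

# Hypothetical toy setup: 5 vertices, 3-dim input features, spiral length 4.
num_vertices, in_dim, spiral_len, out_dim = 5, 3, 4, 2
rng = np.random.default_rng(0)

# Vertex features and a made-up spiral index matrix: each row lists the
# center vertex followed by its neighbors in a fixed spiral order.
x = rng.standard_normal((num_vertices, in_dim))
spirals = np.array([[0, 1, 2, 3],
                    [1, 2, 3, 4],
                    [2, 3, 4, 0],
                    [3, 4, 0, 1],
                    [4, 0, 1, 2]])

# One fully connected layer acting on the concatenated spiral features.
W = rng.standard_normal((spiral_len * in_dim, out_dim))
b = np.zeros(out_dim)

# Spiral convolution: gather neighbor features in the explicit spiral
# order, flatten per vertex, then apply the linear layer.
gathered = x[spirals].reshape(num_vertices, spiral_len * in_dim)
out = gathered @ W + b
print(out.shape)  # (5, 2)
```

Because the aggregation order is fixed by the mesh topology, the gather and matrix multiply replace any learned kernel function, which is what makes the operator fast.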
Figure 1. Examples of texture transfer from a reference shape in neutral pose (left) using shape correspondences predicted by SpiralNet++ (middle) and SpiralNet (right) [25]. Note that we use only 3D coordinates as input features for both methods. (Introduction)
Figure 2. Examples of Spiral++ and DilatedSpiral++ on a triangle mesh. Note that the dilated version supports exponential expansion of the receptive field without increasing the spiral length. (Spiral Sequence)
Figure 3. Visualization of pointwise geodesic errors (in % geodesic diameter) of our method and SpiralNet [25] on the test shapes of the FAUST [3] human dataset. The error values are saturated at 7.5% of the geodesic diameter, which corresponds to approximately 15 cm. Hot colors represent large errors. (Spiral Convolution)
Figure 4. Geodesic error plot of the shape correspondence experiments on the FAUST [3] humans dataset. Geodesic error is measured according to the Princeton benchmark protocol [18]. The x axis displays the geodesic error in % of diameter and the y axis shows the percentage of correspondences that lie within a given geodesic error around the correct node. (Experiments)
Figure 5. Qualitative results of 3D shape reconstruction on the CoMA [31] dataset. Pointwise error (Euclidean distance from the ground truth) is computed for visualization. The error values are saturated at 10 millimeters. Hot colors represent large errors. (3D Facial Expression Classification)
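The dilated variant mentioned in the Figure 2 caption can be illustrated by subsampling a longer spiral walk with a fixed stride, so the sequence length stays constant while the receptive field grows. This is a hedged sketch; the helper name `dilated_spiral` and the toy walk are hypothetical, not the paper's API.

```python
# Hypothetical dilated spiral sampling: take every `dilation`-th vertex
# from a longer spiral walk around a center vertex, keeping the output
# length fixed while expanding the receptive field.
def dilated_spiral(walk, length, dilation):
    # walk: full spiral sequence of vertex indices (center first)
    return [walk[i * dilation] for i in range(length)]

full_walk = list(range(12))             # toy spiral walk of 12 vertex indices
print(dilated_spiral(full_walk, 4, 1))  # [0, 1, 2, 3]  (standard spiral)
print(dilated_spiral(full_walk, 4, 3))  # [0, 3, 6, 9]  (dilated)
```

With dilation d, stacking layers expands coverage roughly geometrically in d without increasing the per-layer cost, which matches the caption's claim of exponential receptive-field growth at constant spiral length.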