[1911.00482v1] High-dimensional Nonlinear Profile Monitoring based on Deep Probabilistic Autoencoders
We find that the AAE outperforms benchmark methods such as the AE, VAE, and PCA in both image reconstruction accuracy and process monitoring accuracy, owing to its flexible prior regularization.

Note to Practitioners The wide availability of imaging and profile sensors in modern industrial systems has created an abundance of high-dimensional sensing variables, leading to growing research interest in high-dimensional process monitoring. However, most approaches in the literature assume the in-control population lies on a linear manifold with either a given basis (e.g., spline, wavelet, or kernel bases) or an unknown basis (e.g., principal component analysis and its variants). Such models cannot efficiently represent profiles lying on a nonlinear manifold, which is common in many real-life cases. We propose deep probabilistic autoencoders as a viable unsupervised learning approach to model such manifolds. To do so, we formulate nonlinear and probabilistic extensions of the monitoring statistics from classical approaches: the expected reconstruction error (ERE) and the KL-divergence (KLD) based monitoring statistics. Through an extensive simulation study, we provide insights into why latent-space based statistics are unreliable and why residual-space based ones typically perform much better for deep learning based approaches. Finally, we demonstrate the superiority of deep probabilistic models via both a simulation study and a real-life case study involving images of defects from a hot steel rolling process.
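As a concrete illustration of a latent-space monitoring statistic, the sketch below assumes the KLD statistic is the KL divergence between a diagonal-Gaussian encoder posterior q_phi(z|x) and the standard-normal prior N(0, I), which has a well-known closed form. The function name and inputs are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def kld_statistic(mu, log_var):
    """Closed-form KL(q_phi(z|x) || N(0, I)) for a diagonal-Gaussian
    encoder with mean mu and log-variance log_var (a sketch of a
    latent-space monitoring statistic; names are illustrative)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# An encoding matching the prior scores 0; one far from it scores higher.
print(kld_statistic(np.zeros(2), np.zeros(2)))  # 0.0
print(kld_statistic(np.ones(2), np.zeros(2)) > 0.0)  # True
```

A sample whose encoding drifts away from the prior would raise this statistic, though, as the paper notes, such latent-space statistics can be unreliable compared to residual-space ones.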
Figure 1: Generalized depiction of 1D profiles (a) and real-life examples of 2D profiles from a hot steel rolling process (b). (Introduction)
Figure 2: Illustrations of linear (a) and nonlinear (b) 2D subspaces in 3D. In (a), the Euclidean and geodesic distances overlap and are depicted with a dotted line between points q and p. In (b), both distances are depicted separately. The coloring aids the representation of geodesic closeness of points on the subspace. (Introduction)
Figure 3: Graphical depiction of the proposed monitoring statistics with probabilistic autoencoders. (Decomposability of the Proposed Monitoring Statistics)
Figure 6: Ratio of negative-valued correlation coefficients and ratio of significant (≤ 0.05) two-tailed p-values from Pearson's test. (Hyperparameter Tuning)

$$
\mathbb{E}_{\mathbf{z}\sim q_{\boldsymbol{\phi}}(\mathbf{z}\mid\mathbf{x})}\bigl[\log p_{\boldsymbol{\theta}}(\mathbf{x}\mid\mathbf{z})\bigr] \approx -\frac{1}{m\sigma^{2}}\sum_{i=1}^{m}\bigl\|\mathbf{x}-f_{\boldsymbol{\theta}}(\mathbf{z}_{i})\bigr\|^{2},\qquad \mathbf{z}_{i}\sim q_{\boldsymbol{\phi}}(\mathbf{z}\mid\mathbf{x})
$$
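The Monte Carlo estimate above can be sketched in a few lines: draw m latent samples from the encoder posterior, decode each, and average the squared reconstruction errors. The decoder `f_theta` here is a fixed linear map standing in for a trained network, and the encoder outputs are assumed known; all names are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoder f_theta: a fixed random linear map standing in
# for a trained deep decoder (assumption for this sketch).
W = rng.normal(size=(16, 2))
def f_theta(z):
    return W @ z

def expected_reconstruction_error(x, mu, std, m=100, sigma2=1.0):
    """Monte Carlo estimate of the ERE statistic: draw m samples
    z_i ~ q_phi(z|x) = N(mu, diag(std^2)) and average the squared
    reconstruction errors ||x - f_theta(z_i)||^2 / sigma^2."""
    total = 0.0
    for _ in range(m):
        z_i = mu + std * rng.normal(size=mu.shape)  # reparameterized draw
        total += np.sum((x - f_theta(z_i)) ** 2)
    return total / (m * sigma2)

# Usage: an on-manifold (in-control) profile should score lower than a
# mean-shifted (out-of-control) one.
z_true = rng.normal(size=2)
x_ic = f_theta(z_true)               # on-manifold sample
x_oc = x_ic + 5.0                    # mean-shifted sample
mu, std = z_true, 0.1 * np.ones(2)   # assumed encoder outputs
print(expected_reconstruction_error(x_ic, mu, std) <
      expected_reconstruction_error(x_oc, mu, std))  # True
```

Because the statistic lives in the residual space, a shift that moves the profile off the learned manifold inflates it directly, in line with the paper's finding that residual-space statistics are the more reliable choice.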

Figure 4: Illustrations of simulated gasket bead images. From left to right, the horizontal component of the center location c0 shifts; from top to bottom, the horizontal width a increases. (Gasket Bead Simulation Setup)
Figure 5: (a) and (e) show original and reconstructed versions of a sample of IC gaskets taken from the test partition, with each image positioned at its inferred mean location µ(x). The coloring in (b) and (c) shows how the actual center location c0 changes over the inferred locations; the coloring in (f) and (g) shows the change in the actual width a. (c) and (g) depict IC and OC samples together for comparison, adding a moderate case of OC samples on top of (b) and (f), respectively. (d) and (h) depict the encodings for a moderate mean shift and magnitude shift, respectively, contrasted with IC samples. (Comparison Between T2, KLD, and ERE Statistics)
Figure 8: Mean estimates of the latent code and Gaussian kernel density estimates of the test statistics for an AAE model with latent dimension 2. Thresholds found from the validation set are shown with dashed black lines. (Case Study and Results)
Figure 10: Example reconstructions produced by each method for each OC class, along with the original version. The first column is the original image; the second to fifth columns are the AAE, AE, VAE, and PCA reconstructions, respectively. Each row is a different OC class, in the same order as in ??. (Case Study and Results)
Figure 7: Detection-power comparison of AAE, AE, VAE, and PCA for varying intensities of all OC behaviors. The error bars represent 95% confidence intervals. (Hyperparameter Tuning)
Figure 9: Example reconstructions produced by the AAE, AE, VAE, and PCA methods for a randomly selected IC sample, compared to the original image shown in the first column. (Case Study and Results)