Sparse Sensor Placement Optimization for Classification
Choosing a limited set of sensor locations to characterize or classify a high-dimensional system is an important challenge in engineering design. Traditionally, optimizing the sensor locations involves a brute-force, combinatorial search, which is NP-hard and computationally intractable for even moderately large problems. Using recent advances in sparsity-promoting techniques, we present a novel algorithm for sparse sensor placement optimization for classification (SSPOC) that exploits the low-dimensional structure exhibited by many high-dimensional systems. Our approach is inspired by compressed sensing, a framework that reconstructs data from few measurements. If only classification is required, reconstruction can be circumvented, and the number of measurements needed is smaller still by orders of magnitude. Our algorithm solves an $\ell_1$ minimization to find the fewest nonzero entries of the full measurement vector that exactly reconstruct the discriminant vector in feature space; these entries represent sensor locations that best inform the decision task. We demonstrate the SSPOC algorithm on five classification tasks, using datasets from a diverse set of examples, including physical dynamical systems, image recognition, and microarray cancer identification. Once training identifies sensor locations, data taken at these locations form a low-dimensional measurement space, and we perform computationally efficient classification with accuracy approaching that of classification using full-state data. The algorithm also works when trained on heavily subsampled data, eliminating the need for unrealistic full-state training data.
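To make the selection step concrete, the following is a minimal sketch, not the authors' implementation. It assumes a feature basis Psi (one row per candidate sensor, one column per feature) and a discriminant vector w in feature space, both hypothetical inputs obtained elsewhere (e.g., from PCA and LDA on training data). Rather than the exact-reconstruction $\ell_1$ program described above, it solves the relaxed lasso form, minimizing $\tfrac{1}{2}\|\Psi^{T}s - w\|_2^2 + \lambda\|s\|_1$ by iterative soft-thresholding (ISTA); the nonzero entries of s mark the selected sensor locations.

```python
def sspoc_sensors(Psi, w, lam=0.05, step=0.1, iters=5000):
    """Sparse sensor weights s with Psi^T s ~= w, via ISTA on a lasso
    relaxation of the SSPOC l1 problem (illustrative sketch only)."""
    n, r = len(Psi), len(w)          # n candidate sensors, r features
    s = [0.0] * n
    for _ in range(iters):
        # residual in feature space: Psi^T s - w
        resid = [sum(Psi[i][j] * s[i] for i in range(n)) - w[j]
                 for j in range(r)]
        for i in range(n):
            # gradient of the quadratic term w.r.t. s[i], then a
            # gradient step followed by soft-thresholding
            g = sum(Psi[i][j] * resid[j] for j in range(r))
            z = s[i] - step * g
            s[i] = max(abs(z) - step * lam, 0.0) * (1.0 if z >= 0 else -1.0)
    return s


# Toy example (hypothetical data): 6 candidate pixels, 2 features.
# Pixels 0 and 1 carry the discriminative information.
Psi = [
    [1.00,  0.00],
    [0.00,  1.00],
    [0.10,  0.10],
    [0.05, -0.05],
    [0.00,  0.00],
    [0.02,  0.03],
]
w = [1.0, -0.5]

s = sspoc_sensors(Psi, w)
selected = [i for i, v in enumerate(s) if abs(v) > 0.1]
print("selected sensors:", selected)
```

On this toy input the $\ell_1$ penalty drives the weights of the uninformative pixels to zero, so only the two informative pixels survive; classification would then use measurements at those locations only. A convex solver applied to the exact equality-constrained program would serve the same role.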
Cross-validated accuracy of classification in cat/dog recognition. Panel (a) compares using r learned sensors/features (solid red and green lines) against using r random pixel sensors (dashed blue line) and projections onto the first r principal components of the full image (solid blue line). Each data point summarizes 400 random iterations. At each iteration, a different 90% subsample was used to train the classifier, whose accuracy was assessed on the remaining 10% of images. Error bars are standard deviations. (b) A summary of mean cross-validated accuracy as a function of p, the number of pixels used in the random subsample, and r, the number of features/sensors used to construct the classifier.