Bartolo, Alexandra, McGuire, Patrick C., Camilleri, Kenneth P., Spiteri, Christopher, Borg, Jonathan C., Farrugia, Philip J., Ormö, Jens, Gómez-Elvira, Javier, Rodríguez-Manfredi, José Antonio, Díaz-Martínez, Enrique, Ritter, Helge, Haschke, Robert, Oesker, Markus, Ontrup, Jörg
We have used a simple camera phone to significantly improve an "exploration system" for astrobiology and geology. This camera phone will make it much easier to develop and test computer-vision algorithms for future planetary exploration. We envision that the "Astrobiology Phone-cam" exploration system can be fruitfully used in other problem domains as well.
Here, novelty detection identifies salient image features to guide autonomous robotic exploration. There is little advance knowledge of the features in the scene or of the proportion that should count as outliers. A new algorithm addresses this ambiguity by modeling novel data in advance and characterizing regular data at run time. Detection thresholds adapt dynamically to reduce misclassification risk while accommodating both homogeneous and heterogeneous scenes. Experiments demonstrate the technique on a representative set of navigation images from the Mars Exploration Rover "Opportunity." An efficient image analysis procedure filters each image using an integral-image transform. Pixel-level features are aggregated into covariance descriptors that represent larger regions. Finally, a distance metric derived from generalized eigenvalues permits novelty detection with kernel density estimation. Results suggest that exploiting training examples of novel data can improve performance in this domain.
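The pipeline above (integral-image filtering, region covariance descriptors, a generalized-eigenvalue distance, and kernel density estimation) can be sketched in a few functions. This is a minimal illustration, not the paper's implementation: the choice of per-pixel features, the regularization constant, and the Gaussian-kernel bandwidth are all assumptions, and the covariance here is computed directly for clarity rather than via the integral-image trick used for speed in practice.

```python
import numpy as np

def integral_image(channel):
    """Summed-area table: any rectangular region sum in O(1) after O(N) setup."""
    return channel.cumsum(axis=0).cumsum(axis=1)

def region_sum(ii, r0, r1, c0, c1):
    """Sum over rows r0..r1-1, cols c0..c1-1 using four table lookups."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def pixel_features(img):
    """Per-pixel feature vectors: intensity plus x/y gradients (illustrative choice)."""
    gy, gx = np.gradient(img.astype(float))
    return np.stack([img.astype(float), gx, gy], axis=-1)  # H x W x 3

def covariance_descriptor(feats, r0, r1, c0, c1):
    """Covariance of pixel features over a rectangular region (regularized to stay SPD)."""
    region = feats[r0:r1, c0:c1].reshape(-1, feats.shape[-1])
    return np.cov(region.T) + 1e-6 * np.eye(feats.shape[-1])

def cov_distance(A, B):
    """Metric on covariance matrices from the generalized eigenvalues of (A, B)."""
    L = np.linalg.cholesky(B)
    Li = np.linalg.inv(L)
    lam = np.linalg.eigvalsh(Li @ A @ Li.T)  # generalized eigenvalues of (A, B)
    return np.sqrt(np.sum(np.log(lam) ** 2))

def novelty_score(desc, typical_descs, bandwidth=1.0):
    """Gaussian kernel density estimate over distances to 'typical' descriptors;
    low density (large score) marks a region as novel."""
    d = np.array([cov_distance(desc, t) for t in typical_descs])
    return -np.log(np.mean(np.exp(-0.5 * (d / bandwidth) ** 2)) + 1e-12)
```

A detection threshold on `novelty_score` could then be adapted at run time as more "typical" descriptors accumulate, in the spirit of the adaptive thresholds the abstract describes.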
Wettergreen, David (Carnegie Mellon University) | Foil, Greydon (Carnegie Mellon University) | Furlong, Michael (Carnegie Mellon University) | Thompson, David R. (Jet Propulsion Laboratory, California Institute of Technology)
As planetary rovers expand their capabilities, traveling longer distances, deploying complex tools, and collecting voluminous scientific data, the requirements for intelligent guidance and control also grow. This, coupled with limited bandwidth and latencies, motivates onboard autonomy that ensures the quality of the science data return. Increasing the quality of the data involves better sample selection, data validation, and data reduction. Robotic studies in Mars-like desert terrain have advanced autonomy for long-distance exploration and seeded technologies for planetary rover missions. In these field experiments the remote science team uses a novel control strategy that intersperses preplanned activities with autonomous decision making. The robot performs automatic data collection, interpretation, and response at multiple spatial scales. Specific capabilities include instrument calibration, visual targeting of selected features, an onboard database of collected data, and a long-range path planner that guides the robot using analysis of current surface and prior satellite data. Field experiments in the Atacama Desert of Chile over the past decade demonstrate these capabilities and illustrate current challenges and future directions.
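The long-range planning idea, guiding traverse using a prior satellite cost map refined by current surface sensing, can be illustrated with a small grid search. This is only a sketch under stated assumptions: the grid, the costs, the way onboard sensing overwrites the satellite prior, and the use of plain Dijkstra search are all illustrative choices, not the mission planner described in the abstract.

```python
import heapq
import numpy as np

def plan_path(cost, start, goal):
    """Dijkstra search over a 2-D traverse-cost grid; returns a cell path or None."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]  # cost accrues on the cell being entered
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    if not np.isfinite(dist[goal]):
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Prior satellite cost map, refined where onboard surface sensing disagrees.
satellite = np.ones((6, 6))
onboard = satellite.copy()
onboard[1:5, 3] = 50.0  # rover discovers terrain far rougher than the orbital prior
path = plan_path(onboard, (0, 0), (5, 5))
```

Replanning then amounts to rerunning the search whenever new surface observations change the fused cost map along the remaining route.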
Automated detection of new, interesting, unusual, or anomalous images within large data sets has great value for applications from surveillance (e.g., airport security) to science (observations that don't fit a given theory can lead to new discoveries). Many image data analysis systems are turning to convolutional neural networks (CNNs) to represent image content due to their success in achieving high classification accuracy rates. However, CNN representations are notoriously difficult for humans to interpret. We describe a new strategy that combines novelty detection with CNN image features to achieve rapid discovery with interpretable explanations of novel image content. We applied this technique to familiar images from ImageNet as well as to a scientific image collection from planetary science.
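One simple way to get interpretable novelty scores from learned feature vectors is reconstruction error against a low-dimensional model of the typical data, with the per-dimension residual indicating which features made an input look novel. The sketch below uses PCA over random stand-in vectors; in the setting the abstract describes, the vectors would instead come from a CNN layer, and the model class names and parameters here are illustrative assumptions, not the paper's method.

```python
import numpy as np

class ReconstructionNovelty:
    """Novelty via PCA reconstruction error on feature vectors.

    fit() learns the mean and top-k principal directions of 'typical' data;
    score() measures how badly that subspace reconstructs a new vector;
    explain() reports the feature dimension contributing most to the error.
    """
    def __init__(self, n_components=3):
        self.k = n_components

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.k]  # top-k principal directions
        return self

    def residual(self, x):
        c = x - self.mean_
        return c - self.components_.T @ (self.components_ @ c)

    def score(self, x):
        """Large reconstruction error = novel."""
        return float(np.linalg.norm(self.residual(x)))

    def explain(self, x):
        """Index of the feature dimension contributing most to the novelty score."""
        return int(np.argmax(np.abs(self.residual(x))))

# Stand-in features: variance concentrated in dimensions 0-2.
rng = np.random.default_rng(0)
scales = np.array([5, 4, 3, .1, .1, .1, .1, .1, .1, .1])
X = rng.normal(size=(200, 10)) * scales
model = ReconstructionNovelty(n_components=3).fit(X)
novel = X.mean(axis=0) + 10 * np.eye(10)[7]  # large excursion along dimension 7
```

For image features, mapping the explanatory dimension (here, index 7) back to its receptive field or activation map is what turns the score into a human-readable account of the novel content.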