Autonomous spacecraft relative navigation technology has been planned for and applied to many notable space missions. The development of on-board electronics has enabled vision-based and LiDAR-based methods to achieve better performance. Meanwhile, deep learning has achieved great success in many areas, especially in computer vision, which has also attracted the attention of space researchers. However, spacecraft navigation differs from ground-based tasks because of its high reliability requirements and the lack of large datasets. This survey systematically investigates current deep learning-based autonomous spacecraft relative navigation methods, focusing on concrete orbital applications such as spacecraft rendezvous and landing on small bodies or the Moon. The fundamental characteristics, primary motivations, and contributions of deep learning-based relative navigation algorithms are first summarised from the three perspectives of spacecraft rendezvous, asteroid exploration, and terrain navigation. Popular visual tracking benchmarks and their respective properties are then compared and summarised. Finally, potential applications are discussed, along with expected impediments.
Researchers often spend weeks sifting through decades of unlabeled satellite imagery (on NASA Worldview) before they can assemble the datasets needed to begin their research. We developed an interactive, scalable, and fast image similarity search engine, which can take one or more images as the query, that automatically sifts through this unlabeled imagery and reduces dataset generation time from weeks to minutes. In this work, we describe the key components of the end-to-end pipeline. The system identifies images similar to a query image within a potentially petabyte-scale database. To do so, each query image is reduced to a feature vector produced by a supervised-trained CNN with its classification layer stripped off. Storing and searching these features efficiently required several scalability improvements. To improve speed and shrink the storage and memory requirements of embedding search, we add a fully connected layer to our CNN that maps each image to a 128-dimensional vector before the classification layers. This compresses our image features from 2048 dimensions (for ResNet, initially tried as our featurizer) to 128 for our custom model. Additionally, we use existing approximate nearest neighbor search libraries to significantly speed up embedding search. Our system currently searches the entire image database at 5 seconds per query on a single virtual machine in the cloud. In the future, we would like to incorporate a SimCLR-based featurizing model, which could be trained without any human labelling, since the classification head is irrelevant to this use case.
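As a concrete illustration of the embedding-search step, here is a minimal sketch (our own construction, not the authors' code) that represents each image as a 128-dimensional vector and answers a query by exact cosine-similarity search; a production system would swap the brute-force scan for an approximate nearest neighbor library as described above.

```python
import numpy as np

def normalize(v):
    # L2-normalise along the last axis so dot products become cosine similarities
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def top_k_similar(query_emb, db_embs, k=5):
    # cosine similarity = dot product of unit vectors; exact (brute-force) scan
    sims = normalize(db_embs) @ normalize(query_emb)
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 128))             # 1000 image embeddings, 128-d
query = db[42] + 0.01 * rng.standard_normal(128)  # query close to image 42
hits = top_k_similar(query, db)                   # image 42 should rank first
```

The exact scan is O(N) per query; approximate nearest neighbor indexes trade a little recall for sub-linear query time, which is what makes petabyte-scale search feasible.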
Artificial intelligence and machine learning have had a profound influence on a wide range of fields and businesses, paving the way for the automation and optimization of operations as well as the development of new business opportunities. Thanks to rapid advances, these technologies are now being applied to research and development beyond our atmosphere, in space. Let's take a quick look at how NASA uses AI and machine learning for various space projects and Earth science. NASA is constantly advancing its AI applications for space research: automating image analysis for galaxy, planet, and star classification; developing autonomous space probes that can avoid space junk without human involvement; and using AI-based radio technology to make communication networks more effective and interference-free. One of NASA's most critical AI applications, however, is the creation of autonomous landers (robots) that roam the surfaces of other planets.
A group of researchers is using artificial intelligence techniques to calibrate some of NASA's images of the Sun, helping improve the data that scientists use for solar research. A solar telescope has a tough job. Staring at the Sun takes a harsh toll, with a constant bombardment by a never-ending stream of solar particles and intense sunlight. Over time, the sensitive lenses and sensors of solar telescopes begin to degrade. To ensure the data such instruments send back is still accurate, scientists recalibrate periodically to make sure they understand just how the instrument is changing.
The abundance of clouds, in both space and time, often makes remote sensing (RS) applications with optical images difficult or even impossible. Traditional cloud-removal techniques have been studied for years, and recently Machine Learning (ML)-based approaches have also been considered. In this manuscript, a novel method for restoring cloud-corrupted optical images is presented. It generates the whole optical scene of interest, not only the cloudy pixels, and is based on a Joint Data Fusion paradigm in which three deep neural networks are hierarchically combined. Spatio-temporal features are extracted separately by a conditional Generative Adversarial Network (cGAN) from Synthetic Aperture Radar (SAR) data and by a Convolutional Long Short-Term Memory (ConvLSTM) from optical time series, and are then combined with a U-shaped network. The use of time series has rarely been explored in the state of the art for this objective, and existing models do not combine both spatio-temporal domains and SAR-optical imagery. Quantitative and qualitative results show that the proposed method produces cloud-free images while preserving detail, outperforming the cGAN and the ConvLSTM used individually. Both the code and the dataset have been implemented from scratch and made available to interested researchers for further analysis and investigation.
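To make the hierarchical wiring concrete, the following is a purely structural sketch (ours, with dummy feature extractors standing in for the cGAN, the ConvLSTM, and the U-shaped network) showing how spatial features from SAR and temporal features from an optical time series can be fused into a single cloud-free prediction.

```python
import numpy as np

def spatial_branch(sar):
    # placeholder for the cGAN: SAR image (H, W) -> spatial feature map (H, W, 8)
    return sar[..., None].repeat(8, -1)

def temporal_branch(optical_ts):
    # placeholder for the ConvLSTM: time series (T, H, W) -> temporal features (H, W, 8)
    return optical_ts.mean(axis=0)[..., None].repeat(8, -1)

def fusion_head(f_spatial, f_temporal):
    # placeholder for the U-shaped network: concatenate both feature maps
    # along the channel axis and reduce to a single predicted image (H, W)
    fused = np.concatenate([f_spatial, f_temporal], axis=-1)
    return fused.mean(axis=-1)

H, W, T = 32, 32, 6
sar = np.random.rand(H, W)            # one SAR acquisition
optical_ts = np.random.rand(T, H, W)  # T past optical acquisitions
cloud_free = fusion_head(spatial_branch(sar), temporal_branch(optical_ts))
```

The point of the sketch is only the data flow: two independent branches produce co-registered feature maps, and a third network consumes their concatenation, which is why each branch can be trained and evaluated in isolation before fusion.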
Since it launched on February 11, 2010, NASA's Solar Dynamics Observatory, or SDO, has provided high-definition images of the Sun for over a decade, giving a detailed look at a variety of solar phenomena. SDO's Atmospheric Imaging Assembly (AIA) continuously observes the Sun, taking images across 10 wavelengths every 12 seconds and creating a wealth of information about our Sun never previously possible. Because of this constant staring, AIA degrades over time, and its data need frequent calibration.
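As a toy illustration of what such calibration amounts to (a hypothetical multiplicative dimming model of our own, not the actual AIA pipeline, which uses a learned model), an instrument whose sensitivity decays over time can be corrected by dividing observations by an estimated degradation factor:

```python
import numpy as np

def degradation_factor(t_years, rate=0.12):
    # assumed exponential dimming model; the rate is purely illustrative
    return np.exp(-rate * t_years)

def calibrate(observed, t_years):
    # undo the estimated dimming to recover physical brightness
    return observed / degradation_factor(t_years)

true_img = np.full((4, 4), 100.0)                 # "true" brightness
t = 5.0                                           # years since launch
observed = true_img * degradation_factor(t)       # what the aged sensor records
restored = calibrate(observed, t)                 # recalibrated image
```

The hard part in practice is estimating the degradation factor itself, which is why periodic cross-calibration (or, as described here, a learned model) is needed.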
The researchers' new AI-based calibration technique was published in the journal Astronomy & Astrophysics on April 13, 2021.
NASA has begun adopting AI technologies from Silicon Valley tech giants such as Google, IBM, and Intel to further advance space science. NASA aims to step up its study of life in outer space with Artificial Intelligence in the near future. Leveraging Artificial Intelligence can surface previously unknown data and enable more accurate predictions of the universe's behaviour. Silicon Valley tech giants are world-renowned for their constant innovation in Artificial Intelligence, enhancing traditional systems efficiently and effectively, so NASA is partnering with these reputed companies to apply advanced machine learning algorithms to complex problems in space science.
Data imbalance is a ubiquitous problem in machine learning. In large-scale collected and annotated datasets, data imbalance is either mitigated manually, by undersampling frequent classes and oversampling rare classes, or planned for with imputation and augmentation techniques. In both cases, balancing data requires labels; in other words, only annotated data can be balanced. Collecting fully annotated datasets is challenging, especially for large satellite systems such as NASA's unlabeled 35 PB Earth Imagery dataset. Although that dataset is unlabeled, implicit properties of the data source let us hypothesize about its imbalance, such as the distribution of land and water in the case of Earth imagery. We present a new iterative method to balance unlabeled data. Our method uses image embeddings as a proxy for image labels; balancing the data on these embeddings ultimately increases overall accuracy when the model is trained.
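One plausible instantiation of the idea, sketched below with names and clustering choices that are ours rather than the paper's exact method: cluster the unlabeled embeddings (here with a tiny k-means) and resample each cluster to the same size, using cluster membership as a label proxy.

```python
import numpy as np

def kmeans(X, k, iters=20):
    # minimal k-means; centers seeded deterministically from spread-out rows
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def balance(X, labels, per_cluster, seed=0):
    rng = np.random.default_rng(seed)
    idx = []
    for j in np.unique(labels):
        members = np.flatnonzero(labels == j)
        # oversample rare clusters (with replacement), undersample frequent ones
        idx.append(rng.choice(members, per_cluster,
                              replace=len(members) < per_cluster))
    return X[np.concatenate(idx)]

rng = np.random.default_rng(1)
# imbalanced toy "embeddings": 90 points near (0, 0), 10 points near (5, 5)
X = np.vstack([rng.normal(0, 0.5, (90, 2)), rng.normal(5, 0.5, (10, 2))])
labels = kmeans(X, k=2)
balanced = balance(X, labels, per_cluster=50)  # 50 samples from each cluster
```

With real embeddings the clusters would come from the 128-dimensional feature space, and the resampling could be iterated as the model and its embeddings improve.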
Our brains have long relied on machines to help with mathematics – calculators being the most obvious example. Waterloo, Ont.-based Maplesoft offers AI-backed technology that can help students solve math problems, check their homework and explore graphs in 3-D within seconds. "Our mission is to just make math more accessible," says Karishma Punwani, Maplesoft's director of academic products. "We want to change the way students view, learn and access math to help them see the awe in it." The Canadian-built technology isn't only used in the classroom: Maplesoft's software is also used by engineers and researchers at organizations such as Google, NASA, the Canadian Space Agency and research labs around the world.