Machine Learning for Exoplanet Detection in High-Contrast Spectroscopy: Revealing Exoplanets by Leveraging Hidden Molecular Signatures in Cross-Correlated Spectra with Convolutional Neural Networks
Garvin, Emily O., Bonse, Markus J., Hayoz, Jean, Cugno, Gabriele, Spiller, Jonas, Patapis, Polychronis A., de la Roche, Dominique Petit Dit, Nath-Ranga, Rakesh, Absil, Olivier, Meinshausen, Nicolai F., Quanz, Sascha P.
The new generation of observatories and instruments (VLT/ERIS, JWST, ELT) motivates the development of robust methods to detect and characterise faint and close-in exoplanets. Molecular mapping and cross-correlation for spectroscopy use molecular templates to isolate a planet's spectrum from its host star. However, reliance on signal-to-noise ratio (S/N) metrics can lead to missed discoveries due to strong assumptions of Gaussian, independent and identically distributed noise. We introduce machine learning for cross-correlation spectroscopy (MLCCS); the method aims to leverage weak assumptions on exoplanet characterisation, such as the presence of specific molecules in atmospheres, to improve detection sensitivity for exoplanets. MLCCS methods, including a perceptron and unidimensional convolutional neural networks, operate in the cross-correlated spectral dimension, in which patterns from molecules can be identified. We test on mock datasets of synthetic planets inserted into real noise from SINFONI at K-band. The results from MLCCS show outstanding improvements. The outcome on a grid of faint synthetic gas giants shows that for a false discovery rate up to 5%, a perceptron can detect about 26 times as many planets as an S/N metric. This factor increases to 77 with convolutional neural networks, with a statistical sensitivity shift from 0.7% to 55.5%. In addition, MLCCS methods show a drastic improvement in detection confidence and conspicuity on imaging spectroscopy. Once trained, MLCCS methods offer sensitive and rapid detection of exoplanets and their molecular species in the spectral dimension. They handle systematic noise and challenging seeing conditions, can adapt to many spectroscopic instruments and modes, and are versatile regarding atmospheric characteristics, which can enable identification of various planets in archival and future data.
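The core operation the abstract builds on — cross-correlating an observed spectrum with a molecular template over a grid of radial-velocity shifts — can be sketched as follows. This is a generic toy illustration, not the MLCCS pipeline; all function and variable names here are assumptions for illustration.

```python
import numpy as np

def cross_correlate(spectrum, template, wavelengths, velocities):
    """Cross-correlate a spectrum with a molecular template over a grid
    of radial-velocity shifts (km/s). Illustrative sketch only."""
    c = 299792.458  # speed of light, km/s
    ccf = np.zeros(len(velocities))
    for i, v in enumerate(velocities):
        # Doppler-shift the template's wavelength grid, then resample
        # it back onto the observed grid before correlating.
        shifted = np.interp(wavelengths, wavelengths * (1 + v / c), template)
        ccf[i] = np.dot(spectrum - spectrum.mean(), shifted - shifted.mean())
    return ccf

# Toy example: a K-band spectrum with one absorption line that matches
# the template exactly, so the CCF should peak near zero velocity.
wl = np.linspace(2.0, 2.4, 1000)  # wavelength in microns
template = 1 - 0.5 * np.exp(-((wl - 2.2) / 0.001) ** 2)
spectrum = template.copy()
v_grid = np.linspace(-200, 200, 81)
ccf = cross_correlate(spectrum, template, wl, v_grid)
print(v_grid[np.argmax(ccf)])  # peak near 0 km/s for an unshifted line
```

MLCCS then operates on vectors like `ccf` (the cross-correlated spectral dimension), where molecular patterns appear, rather than on a single S/N value at the peak.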
High Frequency, High Accuracy Pointing onboard Nanosats using Neuromorphic Event Sensing and Piezoelectric Actuation
Latif, Yasir, Anastasiou, Peter, Ng, Yonhon, Prime, Zebb, Lu, Tien-Fu, Tetlow, Matthew, Mahony, Robert, Chin, Tat-Jun
As satellites become smaller, the ability to maintain stable pointing decreases as external forces acting on the satellite come into play. At the same time, reaction wheels used in the attitude determination and control system (ADCS) introduce high-frequency jitter which can disrupt pointing stability. For space domain awareness (SDA) tasks that track objects tens of thousands of kilometres away, the pointing accuracy offered by current nanosats, typically in the range of 10 to 100 arcseconds, is not sufficient. In this work, we develop a novel payload that utilises a neuromorphic event sensor (for high-frequency and highly accurate relative attitude estimation) paired in a closed loop with a piezoelectric stage (for active attitude corrections) to provide highly stable sensor-specific pointing. Event sensors are especially suited for space applications due to their desirable characteristics of low power consumption, asynchronous operation, and high dynamic range. We use the event sensor to first estimate a reference background star field from which instantaneous relative attitude is estimated at high frequency. The piezoelectric stage works in a closed control loop with the event sensor to perform attitude corrections based on the discrepancy between the current and desired attitude. Results in a controlled setting show that we can achieve a pointing accuracy in the range of 1-5 arcseconds using our novel payload at an operating frequency of up to 50 Hz using a prototype built from commercial-off-the-shelf components. Further details can be found at https://ylatif.github.io/ultrafinestabilisation
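The closed loop described above — the piezo stage commanded by the discrepancy between current and desired attitude — can be sketched with a simple proportional controller. This is a minimal toy model, not the authors' controller; the gain and convergence behaviour are assumptions for illustration.

```python
def correction_step(current_attitude, desired_attitude, gain=0.8):
    """One closed-loop update: command a correction proportional to the
    measured attitude error (illustrative proportional control only)."""
    error = desired_attitude - current_attitude
    return gain * error

# Simulate the pointing error shrinking over successive loop iterations
# (e.g. at a 50 Hz update rate), starting from a 30-arcsecond offset.
attitude, target = 30.0, 0.0  # arcseconds
for _ in range(10):
    attitude += correction_step(attitude, target)
print(round(attitude, 6))  # residual error shrinks by (1 - gain) each step
```

In the real payload the "measurement" side of this loop is the event-sensor attitude estimate against the reference star field, and the "actuation" side is the piezoelectric stage.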
Selection functions of strong lens finding neural networks
Herle, A., O'Riordan, C. M., Vegetti, S.
Convolutional Neural Networks trained for the task of lens finding with similar architecture and training data as is commonly found in the literature are biased classifiers. An understanding of the selection function of lens finding neural networks will be key to fully realising the potential of the large samples of strong gravitational lens systems that will be found in upcoming wide-field surveys. We use three training datasets, representative of those used to train galaxy-galaxy and galaxy-quasar lens finding neural networks. The networks preferentially select systems with larger Einstein radii and larger sources with more concentrated source-light distributions. Increasing the detection significance threshold from 8$\sigma$ to 12$\sigma$ results in 50 per cent of the selected strong lens systems having Einstein radii $\theta_\mathrm{E}$ $\ge$ 1.04 arcsec from $\theta_\mathrm{E}$ $\ge$ 0.879 arcsec, source radii $R_S$ $\ge$ 0.194 arcsec from $R_S$ $\ge$ 0.178 arcsec and source S\'ersic indices $n_{\mathrm{Sc}}^{\mathrm{S}}$ $\ge$ 2.62 from $n_{\mathrm{Sc}}^{\mathrm{S}}$ $\ge$ 2.55. The model trained to find lensed quasars shows a stronger preference for higher lens ellipticities than those trained to find lensed galaxies. The selection function is independent of the power-law slope of the mass profiles, hence measurements of this quantity will be unaffected. The lens finder selection function reinforces that of the lensing cross-section, and thus we expect our findings to be a general result for all galaxy-galaxy and galaxy-quasar lens finding neural networks.
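The selection effect described above — raising the detection significance threshold shifts the selected population toward larger Einstein radii — can be demonstrated on a toy population. The numbers below are synthetic and illustrative only; they do not reproduce the paper's measurements.

```python
import numpy as np

# Toy population: detection significance increases with Einstein radius
# plus noise, so a higher cut preferentially keeps larger-radius systems.
rng = np.random.default_rng(0)
theta_E = rng.uniform(0.5, 2.0, 10000)          # Einstein radii, arcsec
significance = 6 * theta_E + rng.normal(0, 1, 10000)  # toy sigma values

medians = {}
for cut in (8, 12):
    selected = theta_E[significance >= cut]
    medians[cut] = float(np.median(selected))
print(medians[8], medians[12])  # the 12-sigma cut yields a larger median
```

The same mechanism, with realistic simulations in place of this toy model, is what makes the networks' selection function quantifiable as a function of the significance threshold.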
Desaturating EUV observations of solar flaring storms
Guastavino, Sabrina, Piana, Michele, Massone, Anna Maria, Schwartz, Richard, Benvenuto, Federico
The three steps of the DESAT pipeline rely heavily on an estimate of the image background, and this is the main drawback of the approach. Background estimation is in general a difficult problem in solar imaging, and DESAT addresses it by exploiting a specific aspect of the AIA hardware. This telescope is equipped with a feedback system that automatically reduces the exposure time in response to intense emission. It follows that a typical AIA observation spanning a few minutes includes some unsaturated frames that can be used for background estimation. Specifically, for each saturated image, DESAT interpolates the pixel content of the two unsaturated maps recorded just before and just after it, and assigns the resulting signal to the background pixels. But what if AIA is observing an extremely intense flaring storm, so that the feedback system becomes ineffective and strong saturation effects occur for a whole time series of acquired images?
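The interpolation step described above — estimating the background of a saturated frame from the two unsaturated frames bracketing it in time — can be sketched as a per-pixel linear interpolation. This is a minimal illustration of the idea, not the DESAT implementation; function and variable names are assumptions.

```python
import numpy as np

def estimate_background(frame_before, frame_after, t_before, t_after, t):
    """Per-pixel linear interpolation in time between the two unsaturated
    frames bracketing a saturated one (illustrative sketch only)."""
    w = (t - t_before) / (t_after - t_before)
    return (1 - w) * frame_before + w * frame_after

# Toy 2x2 frames at t = 0 s and t = 12 s; the saturated frame sits at t = 6 s,
# so its background estimate is the midpoint of the two bracketing frames.
before = np.array([[10.0, 12.0], [8.0, 9.0]])
after = np.array([[14.0, 16.0], [12.0, 13.0]])
bg = estimate_background(before, after, 0.0, 12.0, 6.0)
print(bg)
```

The failure mode the closing question raises is exactly when this sketch breaks down: during an intense flaring storm there may be no unsaturated `frame_before` or `frame_after` to interpolate between.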