Machine-learning method


Improving the discovery of near-Earth objects with machine-learning methods

Vereš, Peter, Cloete, Richard, Payne, Matthew J., Loeb, Abraham

arXiv.org Artificial Intelligence

We present a comprehensive analysis of the digest2 parameters for candidates on the Near-Earth Object Confirmation Page (NEOCP) that were reported between 2019 and 2024. Our study proposes methods for significantly reducing the inclusion of non-NEO objects on the NEOCP. Despite the substantial increase in near-Earth object (NEO) discoveries in recent years, only about half of the NEOCP candidates are ultimately confirmed as NEOs, so much observing time is spent following up on non-NEOs. Furthermore, approximately 11% of the candidates, nearly 600 cases per year, remain unconfirmed because the follow-up observations are insufficient. To reduce false positives and minimize the resources wasted on non-NEOs, we refine the posting criteria for the NEOCP based on a detailed analysis of all digest2 scores. We investigated 30 distinct digest2 parameter categories for candidates that were confirmed as NEOs and non-NEOs. From this analysis, we derived a filtering mechanism based on selected digest2 parameters that was able to exclude 20% of the non-NEOs from the NEOCP while maintaining a minimal loss of true NEOs. We also investigated the application of four machine-learning (ML) techniques to classify NEOCP candidates as NEOs or non-NEOs: the gradient-boosting machine (GBM), the random-forest (RF) classifier, the stochastic-gradient-descent (SGD) classifier, and neural networks (NN). With digest2 parameters as input, our ML models achieved a precision of approximately 95% in distinguishing between NEOs and non-NEOs. By combining the digest2 parameter filter with an ML-based classification model, we demonstrate a reduction in non-NEOs on the NEOCP that exceeds 80%, while limiting the loss of NEO discovery tracklets to 5.5%. Importantly, we show that most follow-up tracklets of initially misclassified NEOs are later correctly identified as NEOs.
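As an illustration of the classification step described in the abstract, the following minimal sketch (not the authors' pipeline) trains a random-forest classifier on synthetic stand-ins for digest2 category scores and reports its precision. The five-feature layout and the score distributions are assumptions made for the example; the real analysis uses 30 digest2 parameter categories and observed tracklets.

```python
# Sketch: random-forest NEO/non-NEO classification on digest2-style scores.
# Data here is synthetic and illustrative, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical digest2-like score columns: NEO-like candidates score high,
# non-NEO-like candidates score low.
X_neo = rng.normal(70, 15, size=(n // 2, 5))
X_other = rng.normal(30, 15, size=(n // 2, 5))
X = np.vstack([X_neo, X_other])
y = np.array([1] * (n // 2) + [0] * (n // 2))   # 1 = NEO, 0 = non-NEO

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

precision = precision_score(y_te, clf.predict(X_te))
print(round(precision, 2))   # high precision on this well-separated toy data
```

On real digest2 scores the classes overlap far more than in this toy setup, which is why the paper combines the classifier with a parameter filter.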


Machine-learning method used for self-driving cars could improve lives of type-1 diabetes patients

Robohub

Scientists at the University of Bristol have shown that reinforcement learning, a type of machine learning in which a computer program learns to make decisions by trying different actions, significantly outperforms commercial blood glucose controllers in terms of safety and effectiveness. By using offline reinforcement learning, where the algorithm learns from patient records, the researchers improve on prior work, showing that good blood glucose control can be achieved by learning from the decisions of the patient rather than by trial and error. Type 1 diabetes is one of the most prevalent auto-immune conditions in the UK and is characterised by an insufficiency of the hormone insulin, which is responsible for blood glucose regulation. Many factors affect a person's blood glucose and therefore it can be a challenging and burdensome task to select the correct insulin dose for a given scenario. Current artificial pancreas devices provide automated insulin dosing but are limited by their simplistic decision-making algorithms.
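The distinction the article draws, learning offline from logged records rather than by trial and error, can be sketched with tabular Q-learning applied to a fixed batch of transitions. Everything below (the three states, two actions, reward, and dynamics) is a toy stand-in, not the Bristol group's algorithm and not a clinical glucose model.

```python
# Sketch of offline (batch) reinforcement learning: Q-learning over a fixed
# log of transitions, with no further interaction with the environment.
import random

random.seed(0)
STATES, ACTIONS = 3, 2          # e.g. low/normal/high glucose; two dose sizes
GAMMA, LR = 0.9, 0.1

def behaviour_policy(s):        # the logged decisions that generated the data
    return random.randrange(ACTIONS)

def step(s, a):                 # toy dynamics: action 1 reaches the "normal" state
    s2 = 1 if a == 1 else random.randrange(STATES)
    r = 1.0 if s2 == 1 else -1.0   # reward time spent in the target range
    return r, s2

# Collect a fixed log of (state, action, reward, next_state) tuples.
log, s = [], 0
for _ in range(2000):
    a = behaviour_policy(s)
    r, s2 = step(s, a)
    log.append((s, a, r, s2))
    s = s2

# Offline learning: sweep the frozen log repeatedly; no new experiments.
Q = [[0.0] * ACTIONS for _ in range(STATES)]
for _ in range(50):
    for (s, a, r, s2) in log:
        Q[s][a] += LR * (r + GAMMA * max(Q[s2]) - Q[s][a])

greedy = [max(range(ACTIONS), key=lambda a: Q[s][a]) for s in range(STATES)]
print(greedy)   # the learned policy should prefer action 1 in every state
```

The key property mirrored here is that the learner only replays recorded decisions, which is what makes the approach suitable for safety-critical settings where trial-and-error exploration on patients is not acceptable.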


Delivering Document Conversion as a Cloud Service with High Throughput and Responsiveness

#artificialintelligence

Document understanding is a key business process in the data-driven economy since documents are central to knowledge discovery and business insights. Converting documents into a machine-processable format is a particular challenge here due to their huge variability in formats and complex structure. Accordingly, many algorithms and machine-learning methods emerged to solve particular tasks such as Optical Character Recognition (OCR), layout analysis, table-structure recovery, figure understanding, etc. We observe the adoption of such methods in document understanding solutions offered by all major cloud providers. Yet, publications outlining how such services are designed and optimized to scale in the cloud are scarce.


A Mathematical Framework for MRI "Hallucinations"

#artificialintelligence

Machine-learning methods are being actively developed for computed imaging systems like MRI. However, these methods occasionally introduce false, unexplainable structures in images, known as hallucinations, that can lead to incorrect diagnoses. Researchers at the Beckman Institute for Advanced Science and Technology and the Computational Imaging Science Laboratory have defined a mathematical framework for identifying hallucinations, a first step toward reducing their frequency. This work, "On hallucinations in tomographic image reconstruction," is published in IEEE Transactions on Medical Imaging in a special issue on machine learning methods for image reconstruction. Most modern medical imaging devices -- such as MRI, computed tomography, and PET -- do not record images directly.


How AI is helping the natural sciences

#artificialintelligence

The impact of climate change on Brazil's Atlantic coastline is a research focus at the University of São Paulo's machine-intelligence centre. Credit: Antonello Veneri/AFP via Getty

Artificial intelligence (AI) is increasingly becoming a tool for researchers in other science and technology fields, forging collaborations across disciplines. Stanford University in California, which produces an index that tracks AI-related data, finds in its 2021 report that the number of AI journal publications grew by 34.5% from 2019 to 2020, up from 19.6% growth between 2018 and 2019 (see go.nature.com/3mdt2yq). AI publications represented 3.8% of all peer-reviewed scientific publications worldwide in 2019, up from 1.3% in 2011. Five AI researchers describe the fruits of these collaborations, beyond journal publications, and talk about how they are helping to break down barriers between disciplines. At the University of São Paulo in Brazil, where I lead the Center for Artificial Intelligence (C4AI), our main goal is to produce machine-intelligence research that has a direct impact on society and industry.


Machine learning made easy for optimizing chemical reactions

Nature

The optimization of reactions used to synthesize target compounds is pivotal to chemical research and discovery, whether in developing a route for manufacturing a life-saving medicine[1] or unlocking the potential of a new material[2]. But reaction optimization requires iterative experiments to balance the often conflicting effects of numerous coupled variables, and frequently involves finding the sweet spot among thousands of possible sets of experimental conditions. Expert synthetic chemists currently navigate this expansive experimental void using simplified model reactions, heuristic approaches and intuition derived from observation of experimental data[3]. Writing in Nature, Shields et al.[4] report machine-learning software that can optimize diverse classes of reaction with fewer iterations, on average, than are needed by humans. Machine learning has emerged as a useful tool for various aspects of chemical synthesis, because it is ideally suited to extrapolating predictive models that are used to solve synthetic problems by recognizing patterns in multidimensional data sets[5].
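Software of this kind is typically built around Bayesian optimization: fit a probabilistic surrogate model to the experiments run so far, then choose the next conditions by maximizing an acquisition function such as expected improvement. The sketch below shows that loop on a one-dimensional toy problem; the Gaussian-process surrogate, the synthetic yield function, and the temperature grid are illustrative assumptions, not the published software.

```python
# Sketch of a Bayesian-optimization loop for reaction conditions.
# The "experiment" is a hypothetical yield-vs-temperature function.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def yield_fn(t):
    # Hypothetical reaction yield (%) peaking at 65 degrees C.
    return 80.0 * np.exp(-((t - 65.0) ** 2) / 800.0)

candidates = np.linspace(20, 120, 101).reshape(-1, 1)  # condition grid (deg C)
X = [[30.0], [100.0]]                                  # two initial experiments
y = [yield_fn(x[0]) for x in X]

for _ in range(8):                                     # iterative optimization
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0),
                                  alpha=1e-6, optimizer=None,
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = max(y)
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = candidates[int(np.argmax(ei))]               # next experiment
    X.append(list(x_next))
    y.append(yield_fn(x_next[0]))

print(round(max(y), 1))   # should approach the 80% optimum near 65 deg C
```

The point of the loop is sample efficiency: each "experiment" is chosen where the model expects the largest improvement, which is what lets such tools beat human optimizers in the average number of iterations.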


Analyzing Thermal Spectra with Machine Learning

#artificialintelligence

Editor's note: Astrobites is a graduate-student-run organization that digests astrophysical literature for undergraduate students. We hope you enjoy this post from astrobites; the original can be viewed at astrobites.org. Galaxy clusters are among the largest gravitationally bound structures in the universe. One of their defining characteristics is that they tend to be embedded within a large reservoir of superheated gas, known as the intracluster medium (ICM). With temperatures up to 10^8 Kelvin, the ICM is a strong emitter of X-ray radiation.


Using machine learning to estimate COVID-19's seasonal cycle

#artificialintelligence

One of the many unanswered scientific questions about COVID-19 is whether it is seasonal like the flu -- waning in warm summer months then resurging in the fall and winter. Now scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) are launching a project to apply machine-learning methods to a plethora of health and environmental datasets, combined with high-resolution climate models and seasonal forecasts, to tease out the answer. "Environmental variables, such as temperature, humidity, and UV [ultraviolet radiation] exposure, can have an effect on the virus directly, in terms of its viability. They can also affect the transmission of the virus and the formation of aerosols," said Berkeley Lab scientist Eoin Brodie, the project lead. "We will use state-of-the-art machine-learning methods to separate the contributions of social factors from the environmental factors to attempt to identify those environmental variables to which disease dynamics are most sensitive."


Interview: How artificial intelligence will change medicine

#artificialintelligence

Question: You lead the "Scientific Data Management" research group at TIB – Leibniz Information Centre for Science and Technology. You focus your research on how big data technologies can be used in the health sector to improve health care. What exactly are you researching? The amount of available big data has grown drastically in the last decade, and an even faster growth rate is expected in the coming years. Specifically, in the biomedical domain, there are a wide variety of methods, e.g.


Structure-based AI tool can predict wide range of very different reactions

#artificialintelligence

New software has been created that can predict a wide range of reaction outcomes but is also more flexible than other programs when it comes to dealing with completely different chemical problems. The machine-learning platform, which uses structure-based molecular representations instead of big reaction-based datasets, could find diverse applications in organic chemistry. Although machine-learning methods have been widely used to predict the molecular properties and biological activities of target molecules, their application in predicting reaction outcomes has been limited because current models usually can't be transferred to different problems. Instead, complex parameterisation is required for each individual case to achieve good results. Researchers in Germany are now reporting a general approach that overcomes this limitation.