
Beyond Sequence: Impact of Geometric Context for RNA Property Prediction

Xu, Junjie, Moskalev, Artem, Mansi, Tommaso, Prakash, Mangal, Liao, Rui

arXiv.org Artificial Intelligence

Accurate prediction of RNA properties, such as stability and interactions, is crucial for advancing our understanding of biological processes and developing RNA-based therapeutics. RNA structures can be represented as 1D sequences, 2D topological graphs, or 3D all-atom models, each offering different insights into their function. Existing works predominantly focus on 1D sequence-based models, which overlook the geometric context provided by 2D and 3D geometries. This study presents the first systematic evaluation of incorporating explicit 2D and 3D geometric information into RNA property prediction, considering not only performance but also real-world challenges such as limited data availability, partial labeling, sequencing noise, and computational efficiency. To this end, we introduce a newly curated set of RNA datasets with enhanced 2D and 3D structural annotations, providing a resource for model evaluation on RNA data. Our findings reveal that models with explicit geometry encoding generally outperform sequence-based models, with an average prediction RMSE reduction of around 12% across various RNA tasks, and excel in low-data and partial-labeling regimes, underscoring the value of explicitly incorporating geometric context. On the other hand, geometry-unaware sequence-based models are more robust under sequencing noise but often require around 2-5x the training data to match the performance of geometry-aware models. Our study offers further insights into the trade-offs between different RNA representations in practical applications and addresses a significant gap in evaluating deep learning models for RNA tasks.
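To illustrate the sequence-vs-geometry distinction the abstract draws, here is a minimal PyTorch sketch contrasting a 1D sequence encoder with a 2D variant that passes messages over a base-pairing adjacency matrix. All class names, dimensions, and the toy data are hypothetical; the paper's actual architectures are not reproduced here.

```python
# Hypothetical sketch: 1D sequence encoder vs. 2D graph-aware encoder for
# RNA property regression. Toy dimensions and data, not the paper's models.
import torch
import torch.nn as nn

class SequenceEncoder(nn.Module):
    """1D baseline: embed nucleotides, run a GRU, pool to a scalar property."""
    def __init__(self, vocab_size=4, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, tokens):                  # tokens: (batch, length)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h.mean(dim=1))         # pooled scalar prediction

class GraphEncoder(nn.Module):
    """2D variant: one round of message passing over base-pairing adjacency."""
    def __init__(self, vocab_size=4, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.msg = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, tokens, adj):             # adj: (batch, length, length)
        h = self.embed(tokens)
        h = torch.relu(h + adj @ self.msg(h))   # aggregate paired-base messages
        return self.head(h.mean(dim=1))

tokens = torch.randint(0, 4, (2, 30))           # toy batch of RNA sequences
adj = torch.zeros(2, 30, 30)                    # toy base-pairing graph
print(SequenceEncoder()(tokens).shape, GraphEncoder()(tokens, adj).shape)
```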


Predicting Postoperative Nausea And Vomiting Using Machine Learning: A Model Development and Validation Study

Glebov, Maxim, Lazebnik, Teddy, Orkin, Boris, Berkenstadt, Haim, Bunimovich-Mendrazitsky, Svetlana

arXiv.org Artificial Intelligence

Background: Postoperative nausea and vomiting (PONV) is a frequently observed complication in patients undergoing surgery under general anesthesia. Moreover, it is a frequent cause of distress and dissatisfaction during the early postoperative period. The tools currently used for predicting PONV have not yielded satisfactory results. Therefore, prognostic tools for the prediction of early and delayed PONV were developed in this study with the aim of achieving satisfactory predictive performance. Methods: The retrospective data of adult patients admitted to the post-anesthesia care unit after undergoing surgical procedures under general anesthesia at the Sheba Medical Center, Israel, between September 1, 2018, and September 1, 2023, were used in this study. An ensemble model of machine learning algorithms trained on the data of 54,848 patients was developed. The k-fold cross-validation method was used, followed by splitting the data into train and test sets that optimally preserve the sociodemographic features of the patients, such as age, sex, and smoking habits, using the Bee Colony algorithm. Findings: Among the 54,848 patients, early and delayed PONV were observed in 2,706 (4.93%) and 8,218 (14.98%) patients, respectively. The proposed PONV prediction tools could correctly predict early and delayed PONV in 84.0% and 77.3% of cases, respectively, outperforming the second-best PONV prediction tool (Koivuranta score) by 13.4% and 12.9%, respectively. Feature importance analysis revealed that the performance of the proposed prediction tools aligned with previous clinical knowledge, indicating their utility. Interpretation: The machine learning-based tools developed in this study enabled improved PONV prediction, thereby facilitating personalized care and improved patient outcomes.
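The pipeline described, an ensemble validated with k-fold cross-validation on a demographically balanced split, can be sketched as follows. This is a toy illustration on synthetic, imbalanced data: scikit-learn's stratified split stands in for the paper's Bee Colony optimization, and the ensemble members are hypothetical choices, not the study's models.

```python
# Hypothetical sketch of the evaluation pipeline: stratified split (stand-in
# for the Bee Colony optimization) + soft-voting ensemble + k-fold CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic data with a ~5% positive rate, roughly mimicking early PONV.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95], random_state=0)
sex = np.random.default_rng(0).integers(0, 2, size=len(y))  # toy stratum

# Stratify jointly on label and a demographic feature so both splits
# share the same outcome and sociodemographic mix.
strata = y * 2 + sex
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=strata,
                                          random_state=0)

ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
], voting="soft")

print("CV AUC:", cross_val_score(ensemble, X_tr, y_tr,
                                 cv=5, scoring="roc_auc").mean())
print("Test accuracy:", ensemble.fit(X_tr, y_tr).score(X_te, y_te))
```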


Mind the Gap: Dialogs on Artificial Intelligence: Episode 2: AI as a Prediction Tool - Business Law Today from ABA

#artificialintelligence

So far, advances in AI are not bringing us real "intelligence." Rather, these advances are bringing us a key part of intelligence: prediction. This enables businesses to make faster and more precise predictions, improving their business models and marketplace advantage. In this episode of Mind the Gap, Avi Goldfarb, an economist at the University of Toronto's Rotman School of Management and one of the authors of "Prediction Machines: The Simple Economics of Artificial Intelligence," will explain the economics of AI and how it can lead to better and cheaper predictions.


Do I need a brolly? Google uses AI to try to improve two-hour rain forecasts

The Guardian

Weather forecasts are notoriously bad at predicting the chances of impending rain – as anyone who has been drenched after leaving the house without an umbrella can testify. Now, scientists at Google DeepMind have developed an artificial intelligence-based forecasting system which they claim can more accurately predict the likelihood of rain within the next two hours than existing systems. Today's weather forecasts are largely driven by powerful numerical weather prediction (NWP) systems, which use equations that describe the movement of fluids in the atmosphere to predict the likelihood of rain and other types of weather. "These models are really amazing from six hours up to about two weeks in terms of weather prediction, but there is an area – especially around zero to two hours – in which the models perform particularly poorly," said Suman Ravuri, a staff research scientist at DeepMind in London and co-lead of the project. "Precipitation nowcasting" is an attempt to fill this blind spot.


Renewables make it into the grid better with AI

#artificialintelligence

In a highly competitive market, all energy generators rely on accurate predictions of how much electricity they'll be able to make. Australian researchers have figured out a way to improve these predictions for wind and solar farms, using artificial intelligence. The National Electricity Market – "the grid" – requires automatic forecasts every five minutes from electricity generators. This ensures that electricity generation meets demand. It can be very costly if those five-minute forecasts prove to be incorrect.


AI leverages Fugaku's power to develop a Tsunami prediction tool

#artificialintelligence

It was last summer that I wrote about the Japanese computing giant 'Fugaku' surpassing the American reigning champion Summit to become the fastest supercomputer in the world. Since then, Fugaku has solidified its position at the top spot -- according to the 56th edition of the TOP500 list published in Nov. 2020, its capacity has increased from 7,299,072 cores to 7,630,848 cores, posting a new world-record 442 petaflops result on HPL. The most powerful supercomputer, built by the RIKEN Center for Computational Science and Fujitsu, has now been engaged in developing a real-world prediction tool. In a multinational collaborative endeavor, the International Research Institute of Disaster Science at Tohoku University, the Earthquake Research Institute at the University of Tokyo, and Fujitsu Laboratories have come together to develop an AI model that will be able to predict tsunami flooding in coastal areas in near real-time. This could be a really handy tool for disaster management teams.


Individual dynamic prediction of clinical endpoint from large dimensional longitudinal biomarker history: a landmark approach

Devaux, Anthony, Genuer, Robin, Pérès, Karine, Proust-Lima, Cécile

arXiv.org Machine Learning

The individual data collected throughout patient follow-up constitute crucial information for assessing the risk of a clinical event, and eventually for adapting a therapeutic strategy. Joint models and landmark models have been proposed to compute individual dynamic predictions from repeated measures of one or two markers. However, they hardly extend to the case where the complete patient history includes a much larger number of repeated markers. Our objective was thus to propose a solution for the dynamic prediction of a health event that may exploit repeated measures of a possibly large number of markers. We combined a landmark approach extended to endogenous marker history with machine learning methods adapted to survival data. Each marker trajectory is modeled using the information collected up to the landmark time, and summary variables that best capture the individual trajectories are derived. These summaries and additional covariates are then included in different prediction methods. To handle a possibly large-dimensional history, we rely on machine learning methods adapted to survival data, namely regularized regressions and random survival forests, to predict the event from the landmark time, and we show how they can be combined into a superlearner. The performances are then evaluated by cross-validation using estimators of the Brier score and the area under the receiver operating characteristic curve adapted to censored data. We demonstrate in a simulation study the benefits of machine learning survival methods over standard survival models, especially in the case of numerous and/or nonlinear relationships between the predictors and the event. We then applied the methodology in two prediction contexts: a clinical context with the prediction of death for patients with primary biliary cholangitis, and a public health context with the prediction of death in the general elderly population at different ages. Our methodology, implemented in R, enables the prediction of an event using the entire longitudinal patient history, even when the number of repeated markers is large. Although introduced with mixed models for the repeated markers and methods for a single right-censored time-to-event, our method can be used with any other appropriate modeling technique for the markers and can easily be extended to the competing-risks setting.
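The landmark idea, summarizing each marker's history up to a landmark time and feeding the summaries to a survival learner, can be sketched as follows. This is a toy illustration using simulated trajectories and scikit-survival's random survival forest; the paper models trajectories with mixed models and combines several learners into a superlearner, neither of which is reproduced here.

```python
# Toy landmark sketch: summarize each simulated biomarker history up to a
# landmark time (mean + slope), then fit a random survival forest on the
# summaries. Simulated data and feature choices are illustrative only.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(0)
n, landmark = 300, 5.0

def summarize(times, values):
    """Mean and least-squares slope of one marker history up to landmark."""
    keep = times <= landmark
    slope = np.polyfit(times[keep], values[keep], 1)[0]
    return values[keep].mean(), slope

features = []
for _ in range(n):
    t = np.sort(rng.uniform(0, landmark, size=8))        # visit times
    marker = 1.0 + 0.3 * t + rng.normal(0, 0.2, size=8)  # noisy trajectory
    features.append(summarize(t, marker))
X = np.asarray(features)

# Simulated time-to-event after the landmark, with random censoring.
event_time = landmark + rng.exponential(3.0, size=n) / (1 + X[:, 1])
censor_time = landmark + rng.exponential(5.0, size=n)
y = Surv.from_arrays(event=event_time <= censor_time,
                     time=np.minimum(event_time, censor_time))

rsf = RandomSurvivalForest(n_estimators=100, random_state=0).fit(X, y)
print("Concordance on training data:", rsf.score(X, y))
```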


Police built an AI to predict violent crime. It was seriously flawed

#artificialintelligence

A flagship artificial intelligence system designed to predict gun and knife violence before it happens had serious flaws that made it unusable, police have admitted. The error led to large drops in accuracy, and the system was ultimately rejected by all of the experts reviewing it for ethical problems. The prediction system, known as Most Serious Violence (MSV), is part of the National Data Analytics Solution (NDAS) project. The Home Office has funded NDAS with at least £10 million during the last two years with the aim of creating machine learning systems that can be used across England and Wales. As a result of the failure of MSV, police have stopped developing the prediction system in its current form.


How artificial intelligence could guide drug discovery

#artificialintelligence

Researchers can use AI to reduce the number of experiments needed to develop new medications. Drug discovery is traditionally a high-risk and resource-intensive process -- so much so that it has drawn comparisons to gambling. Brendan Frey, a U of T professor, put it bluntly: "It's like the Big Pharma companies come into a casino, put a million-dollar coin into a slot machine, and with some probability like 10 per cent or something, they get a win." But recently, a growing trend in the field is reducing uncertainty around drug discovery by using artificial intelligence (AI) as a prediction tool. Dr. Christine Allen, a professor at the Leslie Dan Faculty of Pharmacy, together with post-doctoral researcher Pauric Bannigan, recently published a review paper on the subject in the Journal of Controlled Release.


MIT's new interactive machine learning prediction tool could give everyone AI superpowers – TechCrunch

#artificialintelligence

Soon, you might not need anything more specialized than a readily accessible touchscreen device and any existing data sets you have access to in order to build powerful prediction tools. A new experiment from MIT and Brown University researchers has added a capability to their 'Northstar' interactive data system that can "instantly generate machine-learning models" to use with existing data sets in order to generate useful predictions. One example the researchers provide is that doctors could make use of the system to make predictions about the likelihood their patients have of contracting specific diseases based on their medical history. Or, they suggest, a business owner could use their historical sales data to develop more accurate forecasts, quickly and without a ton of manual analytics work. Researchers are calling this feature the Northstar system's "virtual data scientist" (or VDS), and it sounds like it could actually replace the human equivalent, especially in settings where one would never actually be readily available or resourced anyway.