Drowning in Data

#artificialintelligence

In 1945, the volume of human knowledge doubled every 25 years. Now, the doubling time is estimated at 12 hours [1]. Our rapidly increasing collective computational power, vast amounts of data, and our growing ability to assimilate them have seeded unprecedentedly fertile ground for innovation. Healthtech companies are sprouting from this data-rich soil at exponential rates. Cell-free DNA companies, once a rarity, are becoming ubiquitous. The genomics landscape, once dominated by a few players, is being inundated by a slew of competitors. Grandiose claims of diagnosing 50 different cancers from a single blood sample, or of using AI to best dermatologists, radiologists, and pathologists, are being made at an alarming rate. Accordingly, it is imperative to know how to assess these claims as fact or fiction, particularly when claimants may employ "statistical misdirection". In this installment of "The Insider's Guide to Translational Medicine" we disarm perpetrators of statistical warfare of their greatest ...


AI transformer models touted to help design new drugs

#artificialintelligence

Special report: AI can study chemical molecules in ways scientists can't comprehend, automatically predicting complex protein structures and designing new drugs, despite having no real understanding of science. The power to design new drugs at scale is no longer limited to Big Pharma. Startups armed with the right algorithms, data, and compute can invent tens of thousands of molecules in just a few hours. New machine learning architectures, including transformers, are automating parts of the design process, helping scientists develop drugs for difficult diseases such as Alzheimer's, cancer, and rare genetic conditions. In 2017, researchers at Google came up with a method for building increasingly large and powerful neural networks.


How A.I. Is Finding New Cures in Old Drugs

#artificialintelligence

In the elegant quiet of the café at the Church of Sweden, a narrow Gothic-style building in Midtown Manhattan, Daniel Cohen is taking a break from explaining genetics. He moves toward the creaky piano positioned near the front door, sits down, and plays a flowing, flawless rendition of "Over the Rainbow." If human biology is the scientific equivalent of a complicated score, Cohen has learned how to navigate it like a virtuoso. Cohen was the driving force behind Généthon, the French laboratory that in December 1993 produced the first-ever "map" of the human genome. He essentially introduced Big Data and automation to the study of genomics, as he and his team demonstrated for the first time that it was possible to use super-fast computing to speed up the processing of DNA samples.


What's next for AlphaFold and the AI protein-folding revolution

#artificialintelligence

For more than a decade, molecular biologist Martin Beck and his colleagues have been trying to piece together one of the world's hardest jigsaw puzzles: a detailed model of the largest molecular machine in human cells. This behemoth, called the nuclear pore complex, controls the flow of molecules in and out of the nucleus of the cell, where the genome sits. Hundreds of these complexes exist in every cell. Each is made up of more than 1,000 proteins that together form rings around a hole through the nuclear membrane. These 1,000 puzzle pieces are drawn from more than 30 protein building blocks that interlace in myriad ways. Making the puzzle even harder, the experimentally determined 3D shapes of these building blocks are a potpourri of structures gathered from many species, so they don't always mesh together well. And the picture on the puzzle's box -- a low-resolution 3D view of the nuclear pore complex -- lacks sufficient detail to know how many of the pieces precisely fit together. In 2016, a team led by Beck, who is based at the Max Planck Institute of Biophysics (MPIBP) in Frankfurt, Germany, reported a model [1] that covered about 30% of the nuclear pore complex and around half of the 30 building blocks, called Nup proteins.


Better data for better therapies: The case for building health data platforms

#artificialintelligence

The past decade has seen an important and, for many patients, life-changing rise in the number of innovative new drugs reaching the market to treat diseases such as multiple sclerosis, malaria, and subtypes of certain cancers (such as melanoma or leukemia). In the United States, the Food and Drug Administration approved an average of 41 new molecular entities (including biologic license applications) each year from 2011 to 2020 -- almost double the number in the previous decade. These achievements come at immense cost: an estimated US $2.6 billion per new drug (Asher Mullard, "New drugs cost US $2.6 billion to develop," Nature Reviews Drug Discovery, December 1, 2014). A major barrier is the daunting challenge of understanding the multifactorial nature of many diseases, coupled with the vast set of variables in therapy design. Very few diseases, such as cystic fibrosis, are linked to variants in single genes. Drug development therefore tends to rely on a reductionist, hypothesis-driven approach that narrows the focus to individual cell types or pathways. Focused assays, often based on partial information or informed by animal models that never perfectly reflect human disease, then attempt to identify single molecules that will benefit patients.


How artificial intelligence is changing drug discovery

#artificialintelligence

An enormous figure looms over scientists searching for new drugs: the estimated US$2.6-billion price tag of developing a treatment. A lot of that effectively goes down the drain, because it includes money spent on the nine out of ten candidate therapies that fail somewhere between phase I trials and regulatory approval. Few people in the field doubt the need to do things differently. Leading biopharmaceutical companies believe a solution is at hand. Pfizer is using IBM Watson, a system that uses machine learning, to power its search for immuno-oncology drugs.


The great puzzle of the body and disease is beginning to yield to AI, says Recursion CEO

ZDNet

One way to think about artificial intelligence, in its modern deep learning form, is as a jigsaw puzzle. You have a picture on the box, and you begin to organize your pieces. "I usually start by finding the edge pieces, matching the colors, seeing here's a white cat, say," says Chris Gibson of his approach to working on jigsaw puzzles. Gibson is the co-founder and CEO of a nine-year-old company called Recursion Pharmaceuticals, which uses deep learning to hunt for novel therapeutic approaches to disease. Gibson is, in fact, sorting what might be pieces to a very big puzzle.


Edge AI and its Benefits

#artificialintelligence

Talk about your background and experience, and anything you think would be relevant for our audience to get a better sense of who they are listening to.


Artificial intelligence and machine learning show promise in cancer diagnosis and treatment

#artificialintelligence

Amsterdam, March 1, 2022 – Artificial intelligence (AI), deep learning (DL), and machine learning (ML) have transformed many industries and areas of science. Now, these tools are being applied to address the challenges of cancer biomarker discovery, where the analysis of vast amounts of imaging and molecular data is beyond the ability of traditional statistical analyses and tools. In a special issue of Cancer Biomarkers, researchers propose various approaches and explore some of the unique challenges of using AI, DL, and ML to improve the accuracy and predictive power of biomarkers for cancer and other diseases. "The biomarker field is blessed with a plethora of imaging and molecular-based data, and at the same time, plagued with so much data that no one individual can comprehend it all," explained Guest Editor Karin Rodland, PhD, Pacific Northwest National Laboratory, Richland, WA, and Oregon Health and Science University, Portland, OR, USA. "AI offers a solution to that problem, and it has the potential to uncover novel interactions that more accurately reflect the biology of cancer and other diseases."
