If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Scientists and researchers have long extolled the extraordinary potential capabilities of universal quantum computers, like simulating physical and natural processes or breaking cryptographic codes in practical time frames. Yet important developments in the technology--the ability to fabricate the necessary number of high-quality qubits (the basic units of quantum information) and gates (elementary operations between qubits)--are most likely still decades away. However, there is a class of quantum devices--ones that currently exist--that could address otherwise intractable problems much sooner than that. These near-term quantum devices, coined Noisy Intermediate-Scale Quantum (NISQ) by Caltech professor John Preskill, are single-purpose, highly imperfect, and modestly sized. Dr. Anton Toutov is the cofounder and chief science officer of Fuzionaire and holds a PhD in organic chemistry from Caltech.
They used an autoencoder, as illustrated in the diagram below, which consists of an encoder and a decoder. The encoder converts a molecule (represented as a SMILES string) into a continuous probabilistic representation. This continuous representation of molecules constitutes the latent space, which is of lower dimensionality than the starting space (thus, the input molecule is compressed into its latent representation). The decoder is able to generate a molecular structure (as a SMILES string) from any point in the continuous latent space. Additionally, the latent representation of a molecule can be used to predict its properties, such as its drug-likeness or synthetic accessibility, using a neural network.
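The encode-compress-decode idea can be illustrated with a toy linear autoencoder in NumPy. This is only a sketch: the character set, the example SMILES strings, and the tiny latent dimension are all made up for illustration, and the actual model described above is a deep neural network trained on large molecular datasets, not a linear map.

```python
import numpy as np

# Toy character vocabulary and example SMILES strings (illustrative only).
CHARS = "CcNnOo()=#1"
CHAR_TO_IDX = {ch: i for i, ch in enumerate(CHARS)}
MAX_LEN = 8

def one_hot(smiles):
    """Encode a SMILES string as a flat, padded one-hot vector."""
    x = np.zeros((MAX_LEN, len(CHARS)))
    for i, ch in enumerate(smiles[:MAX_LEN]):
        x[i, CHAR_TO_IDX[ch]] = 1.0
    return x.reshape(-1)

data = np.stack([one_hot(s) for s in ["CCO", "c1ccccc1", "CC(=O)O", "CN"]])

rng = np.random.default_rng(0)
input_dim, latent_dim = data.shape[1], 4   # 88 dims compressed to 4
W_enc = rng.normal(0, 0.1, (input_dim, latent_dim))
W_dec = rng.normal(0, 0.1, (latent_dim, input_dim))

def forward(X):
    z = X @ W_enc           # encoder: molecule -> point in latent space
    return z, z @ W_dec     # decoder: latent point -> reconstruction

z, X_hat = forward(data)
initial_loss = ((X_hat - data) ** 2).mean()

# Train by gradient descent on the mean-squared reconstruction error.
lr = 1.0
for _ in range(2000):
    z, X_hat = forward(data)
    err = X_hat - data
    grad_dec = 2 * z.T @ err / err.size
    grad_enc = 2 * data.T @ (err @ W_dec.T) / err.size
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

z, X_hat = forward(data)
final_loss = ((X_hat - data) ** 2).mean()
```

After training, the reconstruction error drops, showing that the 4-dimensional latent vectors retain enough information to rebuild the 88-dimensional inputs; a property predictor would take these latent vectors `z` as its input features.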
Using artificial intelligence in drug design would give pharmaceutical research a boost, says Gisbert Schneider. In the medium term, computers could even carry out experiments autonomously. Designing drugs is a complex and challenging task. How do you create effective new medicines without adverse side effects to address the world's most pressing health issues? Medicinal chemists have to consider an array of interactions: drugs interact with cells and organs in the human body in many ways, and these often differ widely from one patient to another.
Finding the best light-harvesting chemicals for use in solar cells can feel like searching for a needle in a haystack. Over the years, researchers have developed and tested thousands of different dyes and pigments to see how they absorb sunlight and convert it to electricity. Sorting through all of them requires an innovative approach. Now, thanks to a study that combines the power of supercomputing with data science and experimental methods, researchers at the U.S. Department of Energy's (DOE) Argonne National Laboratory and the University of Cambridge in England have developed a novel "design to device" approach to identify promising materials for dye-sensitized solar cells (DSSCs). DSSCs can be manufactured with low-cost, scalable techniques, allowing them to reach competitive performance-to-price ratios.
In 2017, the "digital medicine" Spinraza was released to the public, after years of drug development to cure Spinal Muscular Atrophy (SMA), at a price of $750,000 initially and $375,000 annually after that. The cause of SMA was a simple mutation on the SMN1 gene on chromosome 5. One altered nucleotide sequence in the exon of the SMN1 gene changed the complete life trajectory for children born with this disease, many dying before the end of infancy. However, the price of the drug, which many governments and insurance companies refuse to pay, has left children unable to acquire treatment. All the medicine does is take the reverse complement sequence of a neighbouring intronic sequence and bind to it.
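The reverse complement operation at the heart of this mechanism is simple to express in code. The sketch below uses DNA bases and a made-up input sequence purely for illustration; an antisense drug like Spinraza is a chemically modified oligonucleotide designed against an RNA target, not a plain DNA string.

```python
# Watson-Crick base pairing for DNA (for RNA, U would pair with A).
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence.

    A strand binds its target antiparallel, so we complement each
    base and reverse the order.
    """
    return "".join(COMPLEMENT[base] for base in reversed(seq))

print(reverse_complement("ATGC"))  # GCAT
```

A sequence and its reverse complement base-pair perfectly when aligned antiparallel, which is what lets an antisense oligonucleotide find and bind its intronic target.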
In 2013, the machine learning (ML) research community demonstrated the uncanny ability of deep neural networks trained with backpropagation on graphics processing units to solve complex computer vision tasks. The same year, I wrapped up my PhD in cancer research that investigated the genetic regulatory circuitry of cancer metastasis. Over the 6 years that followed, I've noticed more and more computer scientists (we call them bioinformaticians) and software engineers move into the life sciences. This influx is both natural and extremely welcome. The life sciences have become increasingly quantitative disciplines thanks to high-throughput omics assays such as sequencing and high-content screening assays such as multi-spectral, time-series microscopy. If we are to achieve a step-change in experimental productivity and discovery in the life sciences, I think it's uncontroversial to posit that we desperately need software-augmented workflows. This is the era of empirical computation (more on that here). But what life science problems should we tackle and what software approaches should we develop?
Drug discovery can be viewed as a multi-parameter optimisation problem that stretches over vast length scales. Successful drugs are those that exhibit desirable molecular, pharmacokinetic and target binding properties. These pharmacokinetic and pharmacology properties are expressed as absorption, distribution, metabolism, and excretion (ADME), as well as toxicity in humans and protein-ligand binding. Traditionally, these features are examined empirically in vitro using chemical assays and in vivo using animal models. To do so, most academic labs will rely on lab scientists endlessly pipetting and transferring small amounts of liquids between plastic vials, tissue culture and various pieces of analytical equipment.
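One common way to frame such a multi-parameter optimisation is to collapse the individual properties into a single weighted desirability score. The sketch below is a toy illustration: the property names, values, and weights are all invented, and real scoring schemes use carefully calibrated desirability functions rather than a plain weighted average.

```python
def desirability(props, weights):
    """Weighted average of normalised property scores in [0, 1].

    Each property has already been mapped to [0, 1], where 1 is ideal;
    the weights encode how much each criterion matters.
    """
    total_weight = sum(weights.values())
    return sum(weights[k] * props[k] for k in weights) / total_weight

# Hypothetical candidate molecule with made-up normalised ADME/tox scores.
candidate = {"absorption": 0.8, "metabolic_stability": 0.6,
             "toxicity_margin": 0.9, "binding_affinity": 0.7}

# Illustrative weights: safety and potency weighted twice as heavily.
weights = {"absorption": 1.0, "metabolic_stability": 1.0,
           "toxicity_margin": 2.0, "binding_affinity": 2.0}

score = desirability(candidate, weights)
print(round(score, 3))  # 0.767
```

Candidates can then be ranked by this single score, which is exactly the kind of objective a computational screening pipeline optimises before anything reaches a pipette.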
Graphs and their study have long received a lot of attention due to their ability to represent the real world in a fashion that can be analysed objectively. Indeed, graphs can be used to represent a lot of useful, real-world datasets such as social networks, web link data, molecular structures, geographical maps, etc. Apart from these cases, which have a natural structure to them, non-structured data such as images and text can also be modelled in the form of graphs in order to perform graph analysis on them. Due to the expressiveness of graphs and a tremendous increase in the available computational power in recent times, a good amount of attention has been directed towards machine learning approaches to analysing graphs. According to this paper, graph neural networks (GNNs) are connectionist models that capture the dependence of graphs via message passing between the nodes of graphs. They are extensions of the neural network model to capture the information represented as graphs.
Scientists have developed a deep neural network that sidesteps a problem that has bedeviled efforts to apply artificial intelligence to tackle complex chemistry--a shortage of precisely labeled chemical data. The new method gives scientists an additional tool to apply deep learning to explore drug discovery, new materials for manufacturing, and a swath of other applications. Predicting chemical properties and reactions among millions upon millions of compounds is one of the most daunting tasks that scientists face. There is no source of complete information on which a deep learning program could draw. Usually, such a shortage of a vast amount of clean data is a show-stopper for a deep learning project.
Calcium is a critical signaling molecule for most cells, and it is especially important in neurons. Imaging calcium in brain cells can reveal how neurons communicate with each other; however, current imaging techniques can only penetrate a few millimeters into the brain. MIT researchers have now devised a new way to image calcium activity that is based on magnetic resonance imaging (MRI) and allows them to peer much deeper into the brain. Using this technique, they can track signaling processes inside the neurons of living animals, enabling them to link neural activity with specific behaviors. "This paper describes the first MRI-based detection of intracellular calcium signaling, which is directly analogous to powerful optical approaches used widely in neuroscience but now enables such measurements to be performed in vivo in deep tissue," says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, and an associate member of MIT's McGovern Institute for Brain Research.