New computational algorithms make it possible to build neural networks with many input nodes and many layers; the term "deep learning" distinguishes these networks from earlier work on artificial neural nets.
Researchers have developed a deep learning network capable of detecting disease biomarkers with a much higher degree of accuracy. Experts at the University of Waterloo's Cheriton School of Computer Science have created a deep neural network that achieves 98 per cent detection of peptide features in a dataset. That means scientists and medical practitioners have a greater chance of discovering possible diseases through tissue sample analysis. There are multiple existing techniques for detecting diseases by analyzing the protein structure of bio-samples. Computer programs increasingly play a part in this process by examining the large amounts of data produced in such tests to pinpoint specific markers of disease.
As a hereditary disease, breast cancer has affected hundreds of families throughout the state. Annually, an average of 1,190 women are diagnosed with breast cancer in Hawaiʻi. As October, National Breast Cancer Awareness Month, approaches, new public impact research from the University of Hawaiʻi Cancer Center is using artificial intelligence (AI) to improve risk assessment for breast cancer, aiding prevention and early detection and improving breast cancer outcomes for women all over the world. To reduce unnecessary imaging for breast cancer and the costs associated with it, UH Cancer Center researcher John Shepherd and his colleagues found that AI is able to distinguish between the mammograms of women who are more likely to develop breast cancer later on and those who are not. The study was published in Radiology.
Quantifying the pathogenicity of protein variants in human disease-related genes would have a marked effect on clinical decisions, yet the overwhelming majority (over 98%) of these variants still have unknown consequences [1–3]. In principle, computational methods could support the large-scale interpretation of genetic variants. However, state-of-the-art methods [4–10] have relied on training machine learning models on known disease labels. As these labels are sparse, biased and of variable quality, the resulting models have been considered insufficiently reliable [11]. Here we propose an approach that leverages deep generative models to predict variant pathogenicity without relying on labels. By modelling the distribution of sequence variation across organisms, we implicitly capture constraints on the protein sequences that maintain fitness. Our model EVE (evolutionary model of variant effect) not only outperforms computational approaches that rely on labelled data but also performs on par with, if not better than, predictions from high-throughput experiments, which are increasingly used as evidence for variant classification [12–16]. We predict the pathogenicity of more than 36 million variants across 3,219 disease genes and provide evidence for the classification of more than 256,000 variants of unknown significance. Our work suggests that models of evolutionary information can provide valuable independent evidence for variant interpretation that will be widely useful in research and clinical settings. A new computational method, EVE, classifies human genetic variants in disease genes using deep generative models trained solely on evolutionary sequences.
This post walks through our submission to the recent Kaggle competition, RSNA-MICCAI Brain Tumor Radiogenomic Classification, which aims at brain tumor detection from 3D MRI scans. I briefly describe the competition and the data provided. I then design a simple training workflow, building on several well-established frameworks, to produce a robust baseline solution. This baseline received a bronze medal on the private leaderboard even though the public leaderboard score was lower! My takeaway is that "simple is better than complex" and it's not wise to overfit the public leaderboards. The RSNA-MICCAI Brain Tumor Radiogenomic Classification competition addresses a fundamental medical screening challenge: detecting a malignant tumor in the brain.
Abacus.AI, the two-year-old startup that is developing "hybrid" neural network forms of deep learning, on Wednesday announced the company has obtained $50 million in venture capital financing in a Series C round, led by private equity firm Tiger Global Management. The company has now received $90.3 million in financing. Tiger Global is joined by returning investors Coatue Management and Index Ventures, as well as Alkeon. "A large chunk of it will go to R&D and engineering and science," explained Bindu Reddy, co-founder and CEO of the company, in an interview with ZDNet via Zoom. "We continue to want to be the best of breed in AI and ML platforms."
I recently started an AI-focused educational newsletter that already has over 100,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Quantifying trust and fairness is one of the most important challenges to ensure the mainstream adoption of deep learning systems. But what does trust truly mean in the context of deep learning systems?
Artificial Intelligence (AI) is beginning to impact sciences such as physics by solving some of the most complex, time-consuming, or even seemingly impossible problems humans face. This post discusses some of the applications of artificial intelligence in physics that have been extensively researched. Physicists are also tasked with deciphering deep learning itself. Deep neural networks are being used in a growing number of applications for automated learning from data, but core theoretical questions regarding how they function remain unanswered. A physics-based approach may assist in closing the gap.
Welcome to our October 2021 monthly digest, where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. In this edition we cover our latest focus issue, the concept of foundation models, 100 days of machine learning, Beethoven's 10th symphony, and more. Our latest focus series, Life on Land (part of our wider series on the UN sustainable development goals), was launched this month. We spoke to Lily Xu about her work in green security. Lily and her colleagues apply machine learning and game theory techniques to wildlife conservation.
As a result, you may send a photo to a deep neural network that has been trained to recognise dogs and cats and get an output that tells you whether the photo contains a dog or a cat. The network outputs the chance of the photo containing a dog or a cat (the two classes you trained it to identify), and the output sums to 100 per cent if the last network layer is a softmax layer. When the last layer is a sigmoid-activated layer, you instead get scores that you can interpret as independent probabilities of the content belonging to each class; these scores will not always add up to 100 per cent. Because its architecture classifies the entire image as belonging to a single class, a simple CNN cannot separate the individual instances below.
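To make the softmax/sigmoid distinction concrete, here is a minimal sketch in plain Python (the logit values for the hypothetical "dog" and "cat" classes are made up for illustration): softmax normalises the scores so they sum to 1, while a sigmoid squashes each score independently, so the per-class probabilities need not sum to anything in particular.

```python
import math

def softmax(logits):
    # Exponentiate and normalise, so the resulting scores
    # always sum to 1 (i.e. 100 per cent across classes).
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    # Squash a single logit independently into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical raw network outputs for the classes ("dog", "cat").
logits = [2.0, 0.5]

soft_scores = softmax(logits)
sig_scores = [sigmoid(x) for x in logits]

print("softmax:", soft_scores, "sum =", sum(soft_scores))
print("sigmoid:", sig_scores, "sum =", sum(sig_scores))
```

Running this shows the softmax scores summing to exactly 1.0, while the sigmoid scores sum to roughly 1.5 here: each sigmoid score answers "how likely is this class on its own?", which is why a sigmoid output layer suits multi-label problems where a photo could contain both a dog and a cat.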
Music artists, composers and producers today swim in massive amounts of musical notes to test the barriers of what melodies, harmonies and symphonies they can create and what works best with their songs. Although advances in technology have significantly simplified and streamlined the process, it is still a long and challenging one for everyone involved in music creation. However, a technological revolution may be about to change music creation as we know it. A team of computer scientists was able to use AI to complete the unfinished 10th symphony, originally begun over 250 years ago by Ludwig van Beethoven. This project has provoked interesting discussions, such as whether the now completed symphony is what Beethoven was originally trying to create, and has also raised an important question: what can Artificial Intelligence (AI) and Machine Learning (ML) do for music production in the music entertainment industry? The team at Brainpool have been pondering the answer to the latter, so we took the time to test a few of the readily available AI music demos and reflected on how they could help transform the music industry.