One of the central challenges of modern machine learning systems is their heavy dependence on large quantities of data. This is especially true of deep neural networks, where many layers mean many neural connections, which in turn require enormous amounts of data and training before the system can deliver results at acceptable levels of accuracy and precision. Indeed, the ultimate implementation of this massive-data, massive-network vision is the currently much-vaunted OpenAI GPT-3, which is so large that it can predict and generate almost any text with seemingly magical fluency. In many ways, however, GPT-3 is still a big-data magic trick. Indeed, Professor Luis Perez-Breva makes this exact point when he says that what we call machine learning isn't really learning at all.
The resurgence of artificial intelligence (AI) is largely due to advances in pattern recognition driven by deep learning, a form of machine learning that does not require explicit hard-coding. The architecture of deep neural networks is loosely inspired by the biological brain and neuroscience. And like the biological brain, the inner workings of deep networks are largely unexplained: there is no single unifying theory of why they work. Recently, researchers at the Massachusetts Institute of Technology (MIT) revealed new insights into how deep learning networks function, helping to further demystify the black box of AI machine learning. The MIT research trio of Tomaso Poggio, Andrzej Banburski, and Qianli Liao at the Center for Brains, Minds, and Machines developed a new theory of why deep networks work, publishing their study on June 9, 2020 in PNAS (Proceedings of the National Academy of Sciences of the United States of America).
This past spring, as billions of people languished at home under lockdown and stared at gloomy graphs, Linda Wang and Alexander Wong, scientists at DarwinAI, a Canadian startup that works in the field of artificial intelligence, took advantage of their enforced break: In collaboration with the University of Waterloo, they helped develop a tool to detect COVID-19 infection by means of X-rays. Using a database of thousands of images of lungs, COVID-Net – as they called the open-access artificial neural network – can detect with 91 percent certainty who is ill with the virus. In the past, we would undoubtedly have been suspicious of, or at least surprised by, a young company (DarwinAI was established in 2018) with no connection to radiology, having devised such an ambitious tool within mere weeks. But these days, we know it can be done. Networks that draw on an analysis of visual data using a technique known as "deep learning" can, with relative flexibility, adapt themselves to decipher any type of image and provide results that often surpass those obtained by expert radiologists.
See also the article by Pan et al. in this issue. Safwan S. Halabi, MD, is a clinical associate professor of radiology at the Stanford University School of Medicine and serves as the medical director for radiology informatics at Stanford Children's Health. Dr Halabi's clinical and administrative leadership roles are directed at improving quality of care, efficiency, and patient safety. His current academic and research interests include imaging informatics, deep/machine learning in imaging, artificial intelligence in medicine, clinical decision support, and patient-centric health care delivery. Bone age assessment became an early AI "poster child" that demonstrated the power of applying regression and machine learning techniques to a mundane and monotonous radiologic diagnostic task.
"Working on a real-life project that will introduce students to how algorithms work in applications with crucial outcomes will provide them with the important skills that can transfer to other areas of computer and data science." As the race for a COVID-19 vaccine continues, Moataz Khalifa, assistant professor and director of Data Education at Washington and Lee University, is involved in an equally promising research project that focuses on a non-invasive, early detection system of the virus. In March, just as the numbers of cases were climbing around the world, Khalifa was invited by Wu Feng, Elizabeth & James Turner Fellow, professor of computer science at Virginia Tech and director of its SyNeRGy lab, to join his research lab to develop a deep-learning algorithm to enhance low-radiation CT scans of people's lungs. Feng's current research was already investigating similar applications in CT scans of brain tumors, and he received two National Science Foundation grants totaling $250,000 to expand his project to work on the COVID-19 early detection system. Currently, the genetic-based RT-PCR tests available to detect COVID-19 rely on swabbing the nasal cavity.
The ability to accurately identify cancer--and classify cancer types--using machine learning would provide a tremendous advance in cancer diagnostics for both physicians and patients. But that is just one role of many that machine learning can play in cancer. Another application is to predict genomic alterations from morphological characteristics learned from digital slides. A team at the University of Chicago (UChicago) Medicine Comprehensive Cancer Center, working with colleagues in Europe, created a deep learning algorithm that can infer molecular alterations directly from routine histology images across multiple common tumor types. It also provides spatially resolved distinction between tumor and normal tissue.
Sometimes it's tempting to think of every technological advancement as the brave first step on new shores, a fresh chance to shape the future rationally. In reality, every new tool enters the same old world with its same unresolved issues. In a moment where society is collectively reckoning with just how deep the roots of racism reach, a new paper from researchers at DeepMind -- the AI lab and sister company to Google -- and the University of Oxford presents a vision to "decolonize" artificial intelligence. The aim is to keep society's ugly prejudices from being reproduced and amplified by today's powerful machine learning systems. The paper, published this month in the journal Philosophy & Technology, has at heart the idea that you have to understand historical context to understand why technology can be biased.
A team led by researchers in the Pritzker School of Molecular Engineering (PME) at the University of Chicago reports that it has developed an artificial intelligence-led process that uses big data to design new proteins that could have implications across the healthcare, agriculture, and energy sectors. By developing machine-learning models that can review protein information culled from genome databases, the scientists say they found relatively simple design rules for building artificial proteins. When the team constructed these artificial proteins in the lab, they found that the proteins carried out their chemistry so effectively that they rivaled proteins found in nature. "We have all wondered how a simple process like evolution can lead to such a high-performance material as a protein," said Rama Ranganathan, PhD, Joseph Regenstein Professor in the Department of Biochemistry and Molecular Biology, Pritzker Molecular Engineering, and the College. "We found that genome data contains enormous amounts of information about the basic rules of protein structure and function, and now we've been able to bottle nature's rules to create proteins ourselves."
NTT Research, Inc., a division of NTT (TYO:9432), today announced that a research scientist in its Physics & Informatics (PHI) Lab, Dr. Hidenori Tanaka, was the lead author on a technical paper that advances basic understanding of biological neural networks in the brain through artificial neural networks. Titled "From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction," the paper was presented at NeurIPS 2019, a leading machine-learning, artificial intelligence (AI) and computational neuroscience conference, and published in Advances in Neural Information Processing Systems 32 (NeurIPS 2019). Work on the paper originated at Stanford University, the academic home of the paper's six authors when the research was performed. Dr. Tanaka, at the time a post-doctoral fellow and visiting scholar at Stanford University, joined NTT Research in December 2019. The underlying research aligns with the PHI Lab's mission to rethink the computer by drawing inspiration from the computational principles of neural networks in the brain.