The Accident That Led to Machines That Can See - Issue 107: The Edge


For something so effortless and automatic, vision is a tough job for the brain. It's remarkable that we can transform electromagnetic radiation--light--into a meaningful world of objects and scenes. After all, light focused into an eye is merely a stream of photons with different wave properties, projecting continuously onto our retinas, the layers of cells at the back of our eyes. Before it's transduced by our eyes, light has no brightness or color; those are properties of animal perception. Our retinas transform this energy into electrical impulses that propagate through our nervous system. Somehow this comes out as a world: skies, children, art, auroras, and occasionally ghosts and UFOs.

Scientists show how AI may spot unseen signs of heart failure


An artificial intelligence (AI)-based computer algorithm created by Mount Sinai researchers learned to identify subtle changes in electrocardiograms (also known as ECGs or EKGs) and predict whether a patient was experiencing heart failure. "We showed that deep-learning algorithms can recognize blood pumping problems on both sides of the heart from ECG waveform data," said Benjamin S. Glicksberg, Ph.D., Assistant Professor of Genetics and Genomic Sciences, a member of the Hasso Plattner Institute for Digital Health at Mount Sinai, and a senior author of the study published in the Journal of the American College of Cardiology: Cardiovascular Imaging. "Ordinarily, diagnosing these types of heart conditions requires expensive and time-consuming procedures. We hope that this algorithm will enable quicker diagnosis of heart failure." The study was led by Akhil Vaid, MD, a postdoctoral scholar who works in both the Glicksberg lab and one led by Girish N. Nadkarni, MD, MPH, CPH, Associate Professor of Medicine at the Icahn School of Medicine at Mount Sinai, Chief of the Division of Data-Driven and Digital Medicine (D3M), and a senior author of the study.
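The study's actual architecture is not described here, but the general idea of a deep network scoring raw ECG waveforms can be sketched as a single 1D convolutional layer followed by pooling and a sigmoid. Everything below (layer sizes, the random untrained weights, the synthetic waveform) is an illustrative assumption, not the Mount Sinai model:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation, the core operation of a CNN layer."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def ecg_score(waveform, kernel, bias=0.0):
    """Score one ECG lead: convolve, ReLU, global-average-pool, sigmoid."""
    feature_map = np.maximum(conv1d(waveform, kernel), 0.0)   # ReLU activation
    pooled = feature_map.mean()                               # global average pooling
    return 1.0 / (1.0 + np.exp(-(pooled + bias)))             # probability in (0, 1)

# Illustrative 500-sample "waveform" and an untrained 16-tap filter.
rng = np.random.default_rng(0)
waveform = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * rng.standard_normal(500)
kernel = 0.1 * rng.standard_normal(16)
p = ecg_score(waveform, kernel)   # heart-failure probability (untrained, so arbitrary)
```

A real classifier would stack many such layers and learn the kernels from labeled ECGs; this sketch only shows how a waveform becomes a single risk score.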

Artificial Intelligence, and the Future of Work – Should We Be Worried?


Artificial intelligence is at the top of many lists of the most important skills in today's job market. In the last decade or so we have seen a dramatic transition from the "AI winter" (when AI had not lived up to its hype) to an "AI spring" (in which machines can now outperform humans in a wide range of tasks). Having spent the last 25 years as an AI researcher and practitioner, I'm often asked about the implications of this technology for the workforce. I'm quite often disheartened by the amount of misinformation on the internet on this topic, so I've decided to share some of my own thoughts. What I am about to write may differ from what you have read elsewhere because of an inherent bias in my perspective. Rather than being a pure "AI" practitioner, my PhD and background are in Cognitive Science - the scientific study of how the mind works, spanning such areas as psychology, neuroscience, philosophy, and artificial intelligence. My research has been to look explicitly at how the human mind works, and to reverse engineer these processes in the development of artificial intelligence platforms.

Artificial Intelligence (AI) in Healthcare Market to Grow at a CAGR of 49.8% to reach US$ 107,797.82 Million from 2020 to 2027


Artificial intelligence in healthcare is the use of machine-learning algorithms and software to analyze, process, and present complex medical and health care data. It has been widely used to support clinical decisions, improve workflows, and predict health outcomes. Thus, wide application of AI in the healthcare sector is likely to propel the growth of the market. The growth of the artificial intelligence in healthcare market is attributed to the rising application of artificial intelligence in healthcare, growing investment in AI healthcare start-ups, and increasing cross-industry partnerships and collaborations. However, the dearth of a skilled AI workforce and imprecise regulatory guidelines for medical software are the major factors hindering market growth.
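The headline figures imply a specific base-year value, which compound-growth arithmetic can recover. The 2020 figure below is derived from the stated CAGR and 2027 value, not taken from the report:

```python
# Compound annual growth rate: future = base * (1 + r) ** years
cagr = 0.498                 # 49.8% per year, from the headline
years = 2027 - 2020          # 7 growth periods
value_2027 = 107_797.82      # US$ million, from the headline

# Implied 2020 market size, solving base = future / (1 + r) ** years
implied_2020 = value_2027 / (1 + cagr) ** years   # roughly US$ 6.4 billion

# Round-trip check: growing the implied base at 49.8% for 7 years
# recovers the stated 2027 value.
roundtrip = implied_2020 * (1 + cagr) ** years
```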

New AI Detects Breast Cancer from Ultrasounds


Artificial intelligence (AI) machine learning is rapidly transforming how physicians, clinicians, pathologists, and health care providers diagnose patient conditions. A recent NYU Langone Health study published in Nature Communications shows how AI applied to ultrasound images can identify breast cancer with radiologist-level accuracy, reduce requested biopsies by 27.8 percent, and significantly decrease false-positive rates by 37 percent. "In this work, we present an AI system that achieves radiologist-level accuracy in identifying breast cancer in ultrasound images," wrote Krzysztof Geras, PhD, the study's senior investigator and assistant professor at NYU Grossman School of Medicine, in collaboration with co-investigator and radiologist Linda Moy, MD, a professor at NYU Grossman School of Medicine, and their research colleagues. Both Geras and Moy are members of the Perlmutter Cancer Center. Breast cancer is a leading cause of death among women worldwide.
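To make the reported reductions concrete, they can be applied to a hypothetical baseline. The cohort size (1,000 requested biopsies) and baseline false-positive rate (10%) below are illustrative assumptions, not numbers from the paper; only the two percentage reductions come from the study:

```python
# Headline reductions from the NYU Langone study.
biopsy_reduction = 0.278        # 27.8% fewer requested biopsies
fpr_reduction = 0.37            # 37% lower false-positive rate

# Hypothetical baseline practice (illustrative values).
baseline_biopsies = 1_000       # biopsies requested without AI assistance
baseline_fpr = 0.10             # assumed false-positive rate without AI

# Applying the relative reductions.
biopsies_with_ai = baseline_biopsies * (1 - biopsy_reduction)   # 722 requests
fpr_with_ai = baseline_fpr * (1 - fpr_reduction)                # 0.063
```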

Postdoc Innovative Deep Learning Techniques for Image-guided Surgery


Cancer is a leading cause of death worldwide. In many cancers, surgery, in which the surgeon removes malignant tissue, plays a pivotal role. A significant development in surgical practice is tracking surgical instruments in conjunction with preoperative MRI/CT imaging to guide the procedure, making it more accurate, safer, and less invasive. This project will develop novel deep learning-based image-guided surgery techniques for procedures in the abdomen. In this region, intraoperative motility, proximity to major vessels, and critical structures such as nerves are important considerations for using image-guided techniques.
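Image-guided surgery of this kind depends on registering tracked instrument coordinates to the preoperative MRI/CT frame. The classic rigid-alignment step for paired landmarks is the Kabsch/SVD algorithm; the sketch below is a generic illustration of that step, not this project's code, and all point data are synthetic:

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid transform (R, t) mapping source points onto target.

    Both inputs are (N, 3) corresponding landmark sets, e.g. tracker fiducials
    and their locations in a preoperative CT volume (Kabsch algorithm).
    """
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Synthetic check: rotate/translate known points, then recover the transform.
rng = np.random.default_rng(1)
pts = rng.standard_normal((6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -2.0, 5.0])
moved = pts @ R_true.T + t_true                   # target frame landmarks
R_est, t_est = rigid_register(pts, moved)
```

Deep learning enters this pipeline where rigid assumptions fail, for instance deformable abdominal anatomy, which is exactly the project's focus.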

Deep Learning Enhances Cancer Diagnostic Tools


Yi "Edwin" Sun, a Ph.D. candidate in electrical and computer engineering at the University of Illinois Urbana-Champaign and member of the Beckman Institute's Biophotonics Imaging Laboratory headed by Stephen Boppart, explored how deep learning methods can make polarization-sensitive optical coherence tomography, or PS-OCT, more cost-effective and better equipped to diagnose cancer in biological tissues. The paper, titled "Synthetic polarization-sensitive optical coherence tomography by deep learning," was published in npj Digital Medicine. OCT systems are common clinically and are used to generate high-resolution cross-sectional images of regions in the human body. Sun and his team developed a groundbreaking method of applying software to the OCT tool to provide polarization-sensitive capabilities -- without the cost and complexity that accompany hardware-based PS-OCT imaging systems. "We're trying to replace the hardware associated with PS-OCT," Sun said.
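The core of "synthetic PS-OCT" is supervised image-to-image learning: a model is trained on co-registered pairs of standard OCT intensity data and hardware-measured polarization data, then predicts the polarization channel from intensity alone. The paper uses a deep network; the sketch below substitutes a per-pixel linear least-squares fit on synthetic paired data purely to show that supervised setup, and every variable in it is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic paired training data: intensity-derived features per pixel
# (stand-ins for OCT measurements) and a "measured" polarization value
# (stand-in for the hardware PS-OCT channel).
n_pixels, n_features = 2_000, 5
X = rng.standard_normal((n_pixels, n_features))
w_true = np.array([0.8, -0.3, 0.5, 0.0, 0.1])          # hidden ground truth
y = X @ w_true + 0.01 * rng.standard_normal(n_pixels)  # noisy polarization

# Fit the intensity -> polarization mapping from the paired data.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Synthesize the polarization channel for unseen pixels: no PS hardware needed.
X_new = rng.standard_normal((4, n_features))
pred = X_new @ w_hat
```

Replacing this linear fit with a trained deep network is what lets the real system capture the nonlinear tissue contrast that hardware PS-OCT measures.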

Open data key to cracking the protein structure prediction problem


Proteins are the building blocks for all living things, providing structure and managing processes in cells. Understanding how these molecules fold into specific 3D shapes is key to understanding their function but requires expensive equipment and lots of time, limiting the progress of research and development. A new artificial intelligence programme called AlphaFold has been shown to accurately predict protein structure in minutes, solving a decades old challenge. Its success is built on the availability of thousands of experimentally determined protein structures, a result of long-term research funding, infrastructure investment and data-sharing policies. DeepMind, the developers of AlphaFold, have made the AlphaFold code and protein structure predictions openly available to the global scientific community.



To develop a proof-of-concept convolutional neural network (CNN) to synthesize T2 maps of right lateral femoral condyle articular cartilage from anatomic MR images by using a conditional generative adversarial network (cGAN). In this retrospective study, anatomic images (from turbo spin-echo and double-echo in steady-state scans) of the right knee of 4621 patients included in the 2004–2006 Osteoarthritis Initiative were used as input to a cGAN-based CNN, and a predicted CNN T2 map was generated as output. These patients included men and women of all ethnicities, aged 45–79 years, with or at high risk for knee osteoarthritis incidence or progression, who were recruited at four separate centers in the United States. The data were split into 3703 patients (80%) for training, 462 (10%) for validation, and 456 (10%) for testing. Linear regression analysis was performed between the multiecho spin-echo (MESE) T2 and the CNN T2 in the test dataset.
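The abstract's evaluation design, a patient-level 80/10/10 split followed by linear regression between reference and predicted T2, can be sketched directly. The split sizes match the abstract (3703 + 462 + 456 = 4621), but the T2 values below are synthetic stand-ins, not study data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Patient-level split: 4621 patients -> 3703 train / 462 validation / 456 test.
patient_ids = rng.permutation(4621)
train, val, test = np.split(patient_ids, [3703, 3703 + 462])

# Agreement between reference MESE T2 and CNN-predicted T2 in the test set,
# assessed with simple linear regression (synthetic values for illustration).
mese_t2 = 30 + 10 * rng.random(len(test))                     # reference T2, ms
cnn_t2 = 0.9 * mese_t2 + 3 + rng.standard_normal(len(test))   # mock predictions
slope, intercept = np.polyfit(mese_t2, cnn_t2, 1)
r = np.corrcoef(mese_t2, cnn_t2)[0, 1]                        # correlation
```

Splitting by patient rather than by image is the important design choice here: it prevents scans from one knee leaking between training and test sets.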

Google's DeepMind faces suit over UK health data


DeepMind, the Google-owned AI research company, is the subject of a class-action lawsuit. The suit focuses on the company's use of the personal records of 1.6 million UK National Health Service (NHS) patients, including confidential medical records. According to PC Gamer, DeepMind received the records to create a health application the company calls Streams, which was intended as an AI-based assistant to help healthcare workers and was previously used by the British National Health Service.