This article is a response to a piece arguing that an AI Winter may be inevitable. However, I believe there are fundamental differences between what happened in the 1970s (the first AI winter) and the late 1980s (the second AI winter, with the fall of Expert Systems) and the situation today. The arrival and growth of the internet, smartphones and social media mean that the volume and velocity of data being generated are constantly increasing, and Machine Learning and Deep Learning are required to make sense of the Big Data we generate. For those wishing to see more detail about what AI is, I suggest reading an Intro to AI; for the purposes of this article I will treat Machine Learning and Deep Learning as subsets of Artificial Intelligence (AI). AI deals with developing computing systems capable of performing tasks that humans are very good at, for example recognising objects, recognising and making sense of speech, and decision making in a constrained environment. The rapid growth in Big Data has driven much of the growth in AI, alongside the reduced cost of data storage (Cloud Servers) and Graphical Processing Units (GPUs) making Deep Learning more scalable.
Researchers working to identify anti-cancer therapeutics that can selectively target tumour cells without damaging normal cells have combined machine learning with therapeutic T-cell engineering to develop 'smart' cell therapies that can selectively and effectively target and destroy solid tumours. The research is split across two papers. In the first, published in Cell Systems, researchers in the lab of Dr Wendell Lim at the University of California, San Francisco (UCSF) Cell Design Initiative and Center for Synthetic Immunology, both US, teamed up with a group of computer scientists led by Dr Olga Troyanskaya at Princeton University's Lewis-Sigler Institute for Integrative Genomics and the Simons Foundation's Flatiron Institute. The scientists used computational approaches to examine the expression profiles of more than 2,300 genes in normal and tumour cells to see which antigens could help discriminate between diseased and healthy cells. They then used machine learning techniques to propose antigen combinations and determined whether these could significantly improve how T cells recognise tumours while ignoring normal tissue.
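To make the idea of antigen combinations concrete, here is a minimal, purely illustrative sketch (not the authors' actual pipeline, and with invented toy expression data): a pair of antigens can act as a Boolean AND-gate, so an engineered T cell fires only when both antigens are present, and the safest pairs are those whose co-expression on normal tissue is rarest.

```python
# Hypothetical sketch: rank candidate antigen PAIRS as AND-gates.
# The antigen names and tissue profiles below are invented toy data,
# not results from the Cell Systems paper.
from itertools import combinations

# antigens observed on the tumour cells in this toy example
tumour_antigens = {"AXL", "CD19", "HER2", "MET"}

# toy map: antigen -> normal tissues that also express it
normal_expression = {
    "AXL": {"platelets"},
    "CD19": {"B cells"},
    "HER2": {"cardiac", "epithelial"},
    "MET": {"liver", "epithelial"},
}

def and_gate_score(a, b):
    """Normal tissues hit by an AND-gate = tissues expressing BOTH antigens.

    A lower score means fewer healthy tissues would be attacked."""
    off_tumour = normal_expression[a] & normal_expression[b]
    return len(off_tumour)

# rank all pairs; the safest AND-gates come first
ranked = sorted(combinations(sorted(tumour_antigens), 2),
                key=lambda pair: and_gate_score(*pair))
best = ranked[0]
```

In this toy data the HER2/MET pair scores worst because both antigens appear on epithelial tissue, while pairs whose normal-tissue profiles do not overlap score zero; the real study searched a far larger antigen space with learned, rather than hand-coded, discrimination rules.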
The U.S. property market has proven more resilient than you might have assumed in the midst of a coronavirus pandemic, and today a startup that's built a computer vision tool to help owners assess and fix those properties more easily is announcing a significant round of funding as it sees a surge of growth in usage. Hover -- which has built a platform that uses eight basic smartphone photos to stitch together a 3D image of your home that can then be used by contractors, insurance companies and others to assess a repair, price out the job and then order the parts to do the work -- has raised $60 million in new funding. The Series D values the company at $490 million post-money, and significantly, it included a number of strategic investors. Three of the biggest insurance companies in the U.S. -- Travelers, State Farm Ventures and Nationwide -- led the round, with building materials giant Standard Industries, and other unnamed building tech firms, also participating. Past financial backers Menlo Ventures, GV (formerly Google Ventures) and Alsop Louie Partners, as well as new backer Guidewire Software, were also in this round.
Researchers at Massachusetts General Hospital (MGH) have developed a deep learning model that identifies imaging biomarkers on screening mammograms to predict a patient's risk for developing breast cancer with greater accuracy than traditional risk assessment tools. Results of the study are being presented at the annual meeting of the Radiological Society of North America (RSNA). "Traditional risk assessment models do not leverage the level of detail that is contained within a mammogram," said Leslie Lamb, M.D., M.Sc., breast radiologist at MGH. "Even the best existing traditional risk models may separate sub-groups of patients but are not as precise on the individual level." Currently available risk assessment models incorporate only a small fraction of patient data such as family history, prior breast biopsies, and hormonal and reproductive history. Only one feature from the screening mammogram itself, breast density, is incorporated into traditional models.
In late January, scientists at DeepMind, Google's London-based AI unit, gathered to discuss whether there was anything they could do to help fight the brewing coronavirus pandemic. At the time, the spread of Covid-19 was still largely confined to the city of Wuhan, but as case numbers continued to grow exponentially, machine learning experts from London to San Francisco were gearing up to try and harness the power of AI to fight the SARS-CoV-2 virus. "Our first reaction was to think how we might be able to help," says Demis Hassabis, CEO and co-founder of DeepMind. "Front of mind was our system, AlphaFold, which we had shown could predict the 3D structure of proteins with unprecedented accuracy compared to other computational methods." At the start of March, DeepMind released predictions generated by AlphaFold for the structures of various proteins associated with SARS-CoV-2, to try and accelerate the process of understanding how the virus functions.
In this episode, our interviewer Lauren Klein speaks with Kim Baraka about his PhD research to enable robots to engage in social interactions, including interactions with children with Autism Spectrum Disorder. Baraka discusses how robots can plan their actions across multiple modalities when interacting with humans, and how models from psychology can inform this process. He also tells us about his passion for dance, and how dance may serve as a testbed for embodied intelligence within Human-Robot Interaction. Kim Baraka is a postdoctoral researcher in the Socially Intelligent Machines Lab at the University of Texas at Austin, and an incoming Assistant Professor in the Department of Computer Science at Vrije Universiteit Amsterdam, where he will be part of the Social Artificial Intelligence Group. Baraka recently graduated with a dual PhD in Robotics from Carnegie Mellon University (CMU) in Pittsburgh, USA, and the Instituto Superior Técnico (IST) in Lisbon, Portugal.
Medical researchers since March have been pivoting projects to focus on COVID-19, driving the critical need for machine learning and imaging analysis tools to support big data initiatives, according to a Nov. 13 Wall Street Journal report. At the Center for Clinical Data Science, which is part of Boston-based Massachusetts General Hospital and Brigham and Women's Hospital, multidisciplinary teams with artificial intelligence skills have been vital for organizing and sifting through COVID-19 data sets. "Many of us dropped all other research and tried to focus entirely on doing COVID modeling," Jayashree Kalpathy-Cramer, PhD, scientific director of the Center for Clinical Data Science, told the publication. The work required large amounts of data storage, easy access to data and enough computer power to build complex AI models. Over the past several months, researchers from various MGH task forces have collaborated on AI algorithms in numerous ways, including using the models to predict which COVID-19 patients will require more advanced treatments and to identify how many intensive care unit beds could be needed at a particular time.
In the past year, lockdowns and other COVID-19 safety measures have made online shopping more popular than ever, but the skyrocketing demand is leaving many retailers struggling to fulfill orders while ensuring the safety of their warehouse employees. Researchers at the University of California, Berkeley, have created new artificial intelligence software that gives robots the speed and skill to grasp and smoothly move objects, making it feasible for them to soon assist humans in warehouse environments. The technology is described in a paper published online today (Wednesday, Nov. 18) in the journal Science Robotics. Automating warehouse tasks can be challenging because many actions that come naturally to humans--like deciding where and how to pick up different types of objects and then coordinating the shoulder, arm and wrist movements needed to move each object from one location to another--are actually quite difficult for robots. Robotic motion also tends to be jerky, which can increase the risk of damaging both the products and the robots. "Warehouses are still operated primarily by humans, because it's still very hard for robots to reliably grasp many different objects," said Ken Goldberg, William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley and senior author of the study.
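The jerkiness problem the researchers describe is well known in robotics, and one classic remedy (shown here as an illustrative sketch, not the Berkeley team's actual method) is to drive each joint along a minimum-jerk profile, a polynomial that starts and ends at rest with zero acceleration so the arm accelerates and decelerates smoothly.

```python
# Illustrative sketch of a minimum-jerk motion profile for a single joint.
# This is a textbook smoothing technique, not the optimizer described in
# the Science Robotics paper.

def min_jerk(q0, q1, t, T):
    """Joint position at time t along a minimum-jerk move from q0 to q1 over T seconds.

    The quintic blend 10s^3 - 15s^4 + 6s^5 rises smoothly from 0 to 1
    with zero velocity and zero acceleration at both endpoints."""
    s = t / T                                   # normalised time in [0, 1]
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5
    return q0 + (q1 - q0) * blend

# sample a 2-second shoulder move from 0.0 rad to 1.2 rad at 10 Hz
path = [min_jerk(0.0, 1.2, step * 0.1, 2.0) for step in range(21)]
```

Because velocity and acceleration vanish at both ends of the move, consecutive moves chain together without the abrupt starts and stops that risk damaging products and robots; the Berkeley software goes further by jointly optimizing grasp choice and motion time.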
Today marks the start of RSNA 2020, the annual meeting of the Radiological Society of North America. I participated in my first RSNA 35 years ago and I am super excited--as I am every year--to reconnect with my radiology colleagues and friends and learn about the latest medical and scientific advances in our field. Of course, RSNA will be very different this year. Instead of traveling to Chicago to attend sessions and presentations, and wander the exhibits, I'll experience it all online. While I will miss the fun, excitement, and opportunities to connect that come with being there in person, I am amazed by what a rich and comprehensive conference the organizers of RSNA 2020 have put together using the advanced digital tools that we have at hand now.
While some of the applications for artificial intelligence involve, say, winning games of Texas hold'em or recreating pretty paintings, there are areas where the technology could have truly profound consequences. Among those is medical care, and a major breakthrough from Alphabet's DeepMind AI could be a game-changer in this regard, with the system demonstrating an ability to predict the 3D structures of unique proteins, overcoming a problem that has plagued biologists for half a century. By understanding the 3D shapes of different proteins, scientists can better understand what they do and how they cause diseases, which in turn paves the way for better drug discovery. Beyond that, because proteins are central components of the chemical processes of all living things, more expedient mapping of 3D protein structures would benefit many fields of biological research, but this process has proven painstaking. This is because while modern scientific tools such as X-ray crystallography and cryo-electron microscopy allow researchers to study these structures in amazing new detail, they all still hinge on a process of trial and error.