McCarthy et al. organized the Dartmouth workshop in 1956 to launch artificial intelligence (AI) as a research field, with the lofty goal of simulating, enhancing, or even surpassing human intelligence. Given the field's tremendous potential and challenges, the excitement and frustration have been equally remarkable; their interplay has produced alternating AI springs and winters, through which the field has developed step by step to today's level, and we believe it has an even brighter future ahead. AI is currently in a new spring, especially its subfield machine learning (ML), which enjoys rapid development and constant innovation driven by deep neural networks, also known as deep learning. On August 30, 2019, the White House issued a memorandum on the Fiscal Year 2021 Administration Research and Development Budget Priorities, underlining that 'departments and agencies should prioritize basic and applied research investments that are consistent with the 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence and the eight strategies detailed in the 2019 update of the National Artificial Intelligence Research and Development Strategic Plan.'
Governor Northam announced that Virginians can now use COVIDCheck, a new online risk-assessment tool to check their symptoms and connect with the appropriate health care resource, including COVID-19 testing. "If you are feeling sick or think you may have been exposed to someone with COVID-19, it is important that you take action right away," said Governor Northam. "This online symptom-checking tool can help Virginians understand their personal risk for COVID-19 and get recommendations about what to do next from the safety of their homes. As we work to flatten the curve in our Commonwealth, telehealth services like this will be vital to relieving some of the strains on providers and health systems and making health care more convenient and accessible." COVIDCheck is a free, web-based, artificial intelligence-powered telehealth tool that can help individuals displaying symptoms associated with COVID-19 self-assess their risk and determine the best next steps, such as self-isolation, seeing a doctor, or seeking emergency care.
"I would love to see a future where looking inside the body becomes as routine as a blood pressure cuff measurement," says Charles Cadieu '04, MEng '05. As president of the medical technology startup Caption Health, he sees that future in reach--with the help of artificial intelligence. Cadieu still remembers the "lightbulb moment" during his postdoctoral research at MIT when he realized that the field of AI would never be the same. He was working in the lab of James DiCarlo (now the Peter de Florez Professor of Neuroscience) on neural networks--AI systems made up of deep-learning algorithms that emulate the dense networks of neurons in the brain. Until then, neural networks had been unable to perform even simple visual tasks that the brain handles with ease.
The multi-limbed da Vinci can be used in a variety of procedures, including cardiovascular, colorectal, gynecological, head and neck, thoracic, and urologic surgery, though only when they are minimally invasive. How large the market could become is still unclear, but analysts agree its potential has yet to be fully tapped. So more players are moving in, and quickly. As the dawn of robotic surgery opens the way to more precise control and better patient outcomes, early pioneers like Intuitive Surgical Inc. are seeing increased pressure from large organizations like Johnson & Johnson and Medtronic PLC, which have recently made major M&A investments to break into the market. Intuitive's da Vinci system was first approved by the U.S. Food and Drug Administration in 2000 for urology.
Automated medical image classification with convolutional neural networks (CNNs) has great potential to impact healthcare, particularly in resource-constrained healthcare systems where fewer trained radiologists are available. However, little is known about how well a trained CNN can perform on images with the increased noise levels, different acquisition protocols, or additional artifacts that may arise when using low-cost scanners, which can be underrepresented in datasets collected from well-funded hospitals. In this work, we investigate how a model trained to triage head computed tomography (CT) scans performs on images acquired with reduced x-ray tube current, fewer projections per gantry rotation, and limited angle scans. These changes can reduce the cost of the scanner and demands on electrical power but come at the expense of increased image noise and artifacts. We first develop a model to triage head CTs and report an area under the receiver operating characteristic curve (AUROC) of 0.77. We then show that the trained model is robust to reduced tube current and fewer projections, with the AUROC dropping only 0.65% for images acquired with a 16x reduction in tube current and 0.22% for images acquired with 8x fewer projections. Finally, for significantly degraded images acquired by a limited angle scan, we show that a model trained specifically to classify such images can overcome the technological limitations to reconstruction and maintain an AUROC within 0.09% of the original model.
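The robustness evaluation described above can be illustrated with a small sketch: score a trained classifier on a clean test set, then on copies of that set degraded by additive noise (a crude stand-in for the quantum noise of a reduced tube current scan), and report the relative AUROC drop. The abstract does not describe its model or data pipeline, so everything here -- the synthetic features, the logistic-regression classifier, and the Gaussian noise model -- is a hypothetical stand-in for the paper's CNN and CT images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for image-derived features: positives are shifted
# from negatives so the classifier has a real signal to learn.
n = 2000
X = rng.normal(size=(n, 16))
y = (rng.random(n) < 0.5).astype(int)
X[y == 1] += 0.8

# Train on one half, hold out the other half for evaluation.
clf = LogisticRegression().fit(X[:1000], y[:1000])
X_test, y_test = X[1000:], y[1000:]

def auroc_under_noise(scale):
    """AUROC on a test set corrupted by additive Gaussian noise,
    a crude proxy for increased scanner noise at lower dose."""
    X_noisy = X_test + rng.normal(scale=scale, size=X_test.shape)
    return roc_auc_score(y_test, clf.predict_proba(X_noisy)[:, 1])

base = auroc_under_noise(0.0)
for s in (0.5, 1.0, 2.0):
    drop = 100 * (base - auroc_under_noise(s)) / base
    print(f"noise sigma={s}: relative AUROC drop {drop:.2f}%")
```

The paper's headline numbers (drops of 0.65% and 0.22%) are relative AUROC changes of exactly this kind; the interesting finding is how small they stay under substantial degradation.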
Police in China are using Robocop-style helmets embedded with AI to spot someone with a fever from 16 feet away. A restaurant in L.A. has been checking people's temperatures at the door with an infrared noncontact thermometer. And a hotel near Texas Medical Center in Houston just deployed germ-zapping robots to sanitize guest rooms and common areas. In the war against the spread of the coronavirus, tech gadgets and telemedicine services are getting fast-tracked to the front lines. It's been a whirlwind few months for Dr. Samir Qamar.
Alphabet is using its dominance in the search and advertising spaces -- and its massive size -- to find its next billion-dollar business. From healthcare to smart cities to banking, here are 10 industries the tech giant is targeting. With growing threats from its big tech peers Microsoft, Apple, and Amazon, Alphabet's drive to disrupt has become more urgent than ever before. The conglomerate is leveraging the power of its first moats -- search and advertising -- and its massive scale to find its next billion-dollar businesses. To protect its current profits and grow more broadly, Alphabet is edging its way into industries adjacent to the ones where it has already found success and entering new spaces entirely to find opportunities for disruption. Evidence of Alphabet's efforts is showing up in several major industries. For example, the company is using artificial intelligence to understand the causes of diseases like diabetes and cancer and how to treat them. Those learnings feed into community health projects that serve the public, and also help Alphabet's effort to build smart cities. Elsewhere, Alphabet is using its scale to build a better virtual assistant and own the consumer electronics software layer. It's also leveraging that scale to build a new kind of Google Pay-operated checking account. In this report, we examine how Alphabet and its subsidiaries are currently working to disrupt 10 major industries -- from electronics to healthcare to transportation to banking -- and what else might be on the horizon. Within the world of consumer electronics, Alphabet has already found dominance with one product: Android. Globally, mobile operating system market share is dominated by the Linux-based OS that Google acquired in 2005 to fend off Microsoft and Windows Mobile. Today, however, Alphabet's consumer electronics strategy is being driven by its work in artificial intelligence.
Google is building some of its own hardware under the Made by Google line -- including the Pixel smartphone, the Chromebook, and the Google Home -- but the company is doing more important work on hardware-agnostic software products like Google Assistant (which is even available on iOS).
In the 1960s, the Star Trek television series brought the vision of artificial intelligence into the living rooms of millions of people. AI was everywhere in the show, in the form of machines that had all the intelligence of humans -- and a lot more. Take, for example, the universal translator on the USS Enterprise. It could translate alien languages into English or any other language instantaneously. That, of course, was all science fiction back in the days when Lyndon B. Johnson was the U.S. president, as were a lot of the other AI applications in use on the starship.
The rapid entry of artificial intelligence is stretching the boundaries of medicine. It will also test the limits of the law. Artificial intelligence (AI) is being used in health care to flag abnormalities in head CT scans, cull actionable information from electronic health records, and help patients understand their symptoms. At some point, AI is bound to make a mistake that harms a patient. When that happens, who -- or what -- is liable?
Artificial intelligence shows promise for solving many practical societal problems in areas such as healthcare and transportation. However, the current mechanisms for AI model diffusion, such as GitHub code repositories, academic project webpages, and commercial AI marketplaces, have some limitations; for example, a lack of monetization methods, model traceability, and model auditability. In this work, we sketch guidelines for a new AI diffusion method based on a decentralized online marketplace. We consider the technical, economic, and regulatory aspects of such a marketplace, including a discussion of solutions for problems in these areas. Finally, we include a comparative analysis of several current AI marketplaces that are already available or in development. We find that most of these marketplaces are centralized commercial marketplaces with relatively few models.