Aidoc announced today that the US Food and Drug Administration (FDA) has granted regulatory clearance for the commercial use of its triaging and notification algorithms for flagging and communicating incidental pulmonary embolism. Flagging incidental, critical findings is a significant technical challenge due to the varied imaging protocols used and the lower incidence of such cases. The ability to accurately prioritize incidental critical conditions is a breakthrough in the value AI can bring to the radiologist workflow. "The most common use case we experienced is for critical unsuspected findings in oncology surveillance patients," said Dr. Cindy Kallman, Chief, Section of CT at Cedars-Sinai Medical Center. "The ability to call the referring physician while the patient is still in the house is huge. We are essentially offering a point-of-care diagnosis of PE for our outpatients. Our referring physicians have been completely wowed by this."
Eight technologies developed by MIT Lincoln Laboratory researchers, either wholly or in collaboration with researchers from other organizations, were among the winners of the 2020 R&D 100 Awards. Presented annually since 1963, these international R&D awards recognize 100 technologies that a panel of expert judges selects as the most revolutionary of the past year. Six of the laboratory's winning technologies are software systems, a number of which take advantage of artificial intelligence techniques. The software technologies are solutions to difficulties inherent in analyzing large volumes of data and to problems in maintaining cybersecurity. Another technology is a process designed to assure secure fabrication of integrated circuits, and the eighth winner is an optical communications technology that may enable future space missions to transmit error-free data to Earth at significantly higher rates than currently possible.
And yet rapid tests like the Abbott test have led to reports of false negatives among the general population (results indicating you don't have the virus when you really do). That means some people may have been unknowingly spreading the virus to others. The White House outbreak is a very good illustration of the limitations of rapid testing. But it should not deter us from the strategy entirely--we just need to use the technology properly. No test is 100% accurate; the gold standard for diagnosing COVID-19 remains a PCR test.
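To make the limitation concrete, a quick back-of-the-envelope Bayes calculation shows how a negative rapid-test result can still leave a residual chance of infection. The sensitivity, specificity, and prevalence figures below are illustrative assumptions, not values for any specific test:

```python
# Illustrative Bayes calculation: probability of infection despite a
# negative rapid-test result. All numbers are assumed for illustration.
def prob_infected_given_negative(sensitivity, specificity, prevalence):
    """P(infected | negative test), via Bayes' theorem."""
    p_neg_given_inf = 1 - sensitivity       # false-negative rate
    p_neg_given_healthy = specificity       # true-negative rate
    p_neg = (p_neg_given_inf * prevalence
             + p_neg_given_healthy * (1 - prevalence))
    return p_neg_given_inf * prevalence / p_neg

# Assumed rapid test: 85% sensitivity, 98% specificity, tested in a
# population where 10% are actually infected (e.g. high-exposure contacts).
risk = prob_infected_given_negative(0.85, 0.98, 0.10)
print(f"{risk:.1%}")  # about 1.7% chance of infection despite a negative result
```

The residual risk scales with prevalence, which is why a negative rapid test is more reassuring in a low-prevalence population than in the middle of an active outbreak.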
A University of Central Florida researcher is part of a new study showing that artificial intelligence can be nearly as accurate as a physician in diagnosing COVID-19 in the lungs. The study, recently published in Nature Communications, shows the new technique can also overcome some of the challenges of current testing. Researchers demonstrated that an AI algorithm could be trained to classify COVID-19 pneumonia in computed tomography (CT) scans with up to 90 percent accuracy, as well as correctly identify positive cases 84 percent of the time and negative cases 93 percent of the time. CT scans offer a deeper insight into COVID-19 diagnosis and progression as compared to the often-used reverse transcription-polymerase chain reaction, or RT-PCR, tests. These tests have high false negative rates, delays in processing and other challenges.
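The reported figures correspond to the standard metrics of sensitivity (correctly identifying positives, here 84 percent) and specificity (correctly identifying negatives, here 93 percent), both computed from a confusion matrix. A minimal sketch with made-up counts (these are not the study's data, only numbers chosen to reproduce the reported rates):

```python
# Sensitivity and specificity from confusion-matrix counts.
# The counts below are invented for illustration, not the study's data.
def sensitivity(tp, fn):
    """Fraction of actual positives the model flags correctly."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of actual negatives the model clears correctly."""
    return tn / (tn + fp)

# Hypothetical evaluation: 100 COVID-positive and 100 COVID-negative scans.
tp, fn = 84, 16   # positives: 84 flagged correctly, 16 missed
tn, fp = 93, 7    # negatives: 93 cleared correctly, 7 false alarms

print(f"sensitivity = {sensitivity(tp, fn):.0%}")  # sensitivity = 84%
print(f"specificity = {specificity(tn, fp):.0%}")  # specificity = 93%
```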
New Orleans Saints fullback Michael Burton will be active for Sunday's game against the Detroit Lions just one day after receiving a false positive COVID-19 test result. Burton tested positive on Saturday night, signaling trouble for a league already dealing with an outbreak and several other isolated cases among teams, but a re-test on Sunday morning returned a negative result, The Athletic reported. Burton and other Saints players also underwent rapid testing, which all came back negative, giving them a green light to carry on with the Lions game as scheduled. The NFL has been forced to postpone two games and adjust team schedules after the Tennessee Titans had around 20 people (10 players and 10 personnel) test positive this past week. The Titans-Pittsburgh Steelers game, originally scheduled for Sunday, was postponed until Oct. 25, during Tennessee's bye.
ORLANDO, Sept. 30, 2020 - The new UCF co-developed algorithm can accurately identify COVID-19 cases, as well as distinguish them from influenza.
Financially strapped airlines are pushing an idea intended to breathe new life into the travel industry: coronavirus tests that passengers can take before boarding a flight. Several airlines, including United, American, Hawaiian, JetBlue and Alaska, have announced plans to begin offering testing -- either kits mailed to a passenger's home or rapid tests taken at or near airports -- that would allow travelers to enter specific states and countries without having to quarantine. The tests will cost fliers $90 to $250, depending on the airline and the type of test. At Los Angeles International Airport, a design company has announced plans to convert cargo containers into a coronavirus testing facility with an on-site lab that can produce results in about two hours. On Thursday, Tampa International Airport began offering testing to all arriving and departing passengers on a walk-in basis. It's an idea that has gone global, with a trade group for the world's airlines calling on governments to create a testing standard for airline passengers as a way to fight the COVID-19 pandemic instead of using travel restrictions and mandatory quarantines.
"Being good is easy, what is difficult is being just." "We need to defend the interests of those whom we've never met and never will." Note: This article is intended for a general audience, to try to elucidate the complicated nature of unfairness in machine learning algorithms. As such, I have tried to explain concepts in an accessible way with minimal use of mathematics, in the hope that everyone can get something out of reading this. Supervised machine learning algorithms are inherently discriminatory. They are discriminatory in the sense that they use information embedded in the features of data to separate instances into distinct categories -- indeed, this is their designated purpose in life. This is reflected in the name for these algorithms, which are often referred to as discriminative algorithms (splitting data into categories), in contrast to generative algorithms (generating data from a given category). When we use supervised machine learning, this "discrimination" is used as an aid to help us categorize our data into distinct categories within the data distribution. Whilst this occurs when we apply discriminative algorithms -- such as support vector machines, forms of parametric regression (e.g.
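To make the "splitting data into categories" idea concrete, here is a minimal sketch of a discriminative classifier: a hand-rolled logistic regression trained by gradient descent on made-up one-dimensional data. The data and parameters are invented for illustration; nothing here comes from the article:

```python
# A toy discriminative classifier: logistic regression on made-up 1-D data.
# It learns a boundary that splits instances into two categories, rather
# than modeling how each category's data is generated.
import math

xs = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]   # feature values
ys = [0, 0, 0, 0, 1, 1, 1, 1]                        # category labels

w, b = 0.0, 0.0
for _ in range(2000):                                # gradient-descent epochs
    for x, y in zip(xs, ys):
        p = 1 / (1 + math.exp(-(w * x + b)))         # predicted P(category 1)
        w -= 0.1 * (p - y) * x                       # logistic-loss gradient step
        b -= 0.1 * (p - y)

def predict(x):
    """Assign a category by thresholding the learned decision function."""
    return 1 if 1 / (1 + math.exp(-(w * x + b))) > 0.5 else 0

print([predict(x) for x in xs])  # [0, 0, 0, 0, 1, 1, 1, 1]
```

A generative counterpart would instead fit a distribution per category (e.g. a Gaussian for each label) and classify by comparing likelihoods; the discriminative model above only ever learns the boundary between categories.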
Editor's Note: The use of face recognition technology in policing has been a long-standing subject of concern, even more so now after the murder of George Floyd and the demonstrations that have followed. In this article, Mike Loukides, VP of Content Strategy at O'Reilly Media, reviews how companies and cities have addressed these concerns, as well as ways in which individuals can mitigate face recognition technology or even use it to increase accountability. We'd love to hear from you about what you think about this piece. Largely on the impetus of the Black Lives Matter movement, the public's response to the murder of George Floyd, and the subsequent demonstrations, we've seen increased concern about the use of facial identification in policing. First, in a highly publicized wave of announcements, IBM, Microsoft, and Amazon said that they will not sell face recognition technology to police forces.
The use of machine learning (ML) in health care raises numerous ethical concerns, especially as models can amplify existing health inequities. Here, we outline ethical considerations for equitable ML in the advancement of health care. Specifically, we frame ethics of ML in health care through the lens of social justice. We describe ongoing efforts and outline challenges in a proposed pipeline of ethical ML in health, ranging from problem selection to post-deployment considerations. We close by summarizing recommendations to address these challenges.