Massachusetts suspends Boston-based coronavirus testing lab Orig3n after nearly 400 false positives

Boston Herald

The state has suspended Boston-based COVID-19 testing lab Orig3n Laboratory after it produced nearly 400 false positive results. Public health officials became aware in early August of an "unusually high positivity rate" among the lab's test results and requested that Orig3n stop testing for the virus as of Aug. 8. Specimens were sent to an independent lab to be retested as part of a state Department of Public Health investigation, and the results showed at least 383 false positives. On Aug. 27, the state Department of Public Health notified Orig3n of "three significant certification deficiencies that put patients at immediate risk of harm," according to a DPH spokeswoman. They included the failure of the lab's director to provide overall management, issues with the extraction phase of testing, and a failure to meet analytic requirements such as documenting the daily sanitizing of equipment used for coronavirus testing. A statement of deficiency was issued on Sept. 4. The lab must now respond with a written plan of correction by Sept. 14, "and if action is not taken it can face sanctions," DPH said.


US Air Force grapples with vexing problem of AI spoofing

#artificialintelligence

The US Department of Defense (DoD) is worried that artificial intelligence programs might have serious and unknown vulnerabilities that adversaries could exploit. In particular, the Pentagon is concerned that the technology could not only be hacked, but could be "spoofed". That is, it could be intentionally deceived into thinking that it sees objects – or military targets – that do not exist. The reverse is also a risk: real military targets could be erroneously ignored. That is one reason the US Air Force (USAF) and the Massachusetts Institute of Technology founded the "MIT-Air Force AI Accelerator" in 2019.
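In the research literature, this kind of spoofing is usually demonstrated with adversarial examples: tiny, deliberately chosen pixel changes that flip a classifier's output while looking unchanged to a human. The sketch below is a minimal illustration of one standard technique, the fast gradient sign method (FGSM), in PyTorch; the model, image, and label are generic placeholders, not anything from the Air Force program.

    import torch
    import torch.nn.functional as F

    def fgsm_spoof(model, image, true_label, epsilon=0.03):
        # Illustrative sketch only. `image` is a batched tensor with
        # pixel values in [0, 1]; `true_label` is a LongTensor of class
        # indices. Nudge each pixel in the direction that most increases
        # the loss, so the prediction moves away from the true label
        # while the image itself barely changes.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()

A perturbation budget (epsilon) of a few percent of the pixel range is often enough to change the predicted class while leaving the image visually indistinguishable to a human observer, which is what makes this failure mode hard to detect in the field.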


Coronavirus US: Boston Dynamics' robot dog detects symptoms

Daily Mail - Science & tech

A hospital in Massachusetts has found another job for Spot, Boston Dynamics' dog-like robot: doctor. The yellow-and-black quadruped has proven able to take patients' vital signs from a distance of over six feet. That could allow healthcare workers to keep a safe distance from patients who may be infected with the coronavirus or another contagious disease. So far, Spot has only been tested on healthy patients at Harvard Medical School's Brigham and Women's Hospital - the next step would be to try it out in an emergency room setting. Researchers at MIT say they've developed cameras that allow Spot to take vital signs from more than six feet away.


US awards more than $1B to establish 12 new AI and quantum science research institutes

#artificialintelligence

The White House Office of Science and Technology Policy, the National Science Foundation (NSF), and the US Department of Energy (DOE) announced more than $1 billion in awards for the establishment of 12 new artificial intelligence (AI) and quantum information science (QIS) research institutes nationwide. The $1 billion will go toward NSF-led AI Research Institutes and DOE QIS Research Centers over five years, establishing 12 multi-disciplinary and multi-institutional national hubs for research and workforce development in these critical emerging technologies. Together, the institutes will spur cutting-edge innovation, support regional economic growth, and advance American leadership in these critical industries of the future. The National Science Foundation and additional federal partners, including the US Department of Agriculture, are awarding $140 million for seven NSF-led AI Research Institutes over five years to accelerate a number of AI R&D areas, such as machine learning, synthetic manufacturing, precision agriculture, and forecasting prediction. The NSF-led AI Research Institutes will be hosted by universities across the country, including the University of Oklahoma at Norman, the University of Texas at Austin, the University of Colorado at Boulder, the University of Illinois at Urbana-Champaign, the University of California at Davis, and the Massachusetts Institute of Technology.


How disabled Americans are harmed by a system meant to help them

Al Jazeera

Boston, United States - In 2015, I fell 25 feet (7.6 metres) from a redwood tree and was in a coma for 10 days. I spent the rest of that year using an arm crutch and went through four months of outpatient rehabilitation. Nine months later, I had eye muscle surgery to correct double vision that resulted from damage to my occipital lobe. Five years later, I still suffer from fine motor deficits and balance issues, and have trouble with my memory and speech. My first application for Social Security Disability Insurance (SSDI) - a government benefit that provides health insurance and a monthly allotment of money for people with disabilities to live on - was filled out on my behalf by my parents. I have no recollection of it, and my short-term memory is still impaired.


Deepfake video shows President Richard Nixon announcing the failure of the 1969 moon landing

Daily Mail - Science & tech

A scarily realistic deepfake video shows how it might have looked had the Apollo 11 mission ended in disaster and President Richard Nixon been forced to deliver a sombre address to the world. It is well known that the American president had two speeches prepared: one in case of a safe landing and one in the event that tragedy struck. Fortunately, the landing on July 20, 1969 by Neil Armstrong and Buzz Aldrin was a resounding success, rendering the latter redundant. However, experts at the Massachusetts Institute of Technology (MIT) have created an entirely artificial video showing what it may have looked and sounded like. It is part of a project called 'Moon Disaster' and is designed to draw attention to the risk deepfakes pose and how they can manipulate people and spread fake news.


Could this software help users trust machine learning decisions?

#artificialintelligence

WASHINGTON - New software developed by BAE Systems could help the Department of Defense build confidence in decisions and intelligence produced by machine learning algorithms, the company claims. BAE Systems said in a July 14 announcement that it recently delivered its new MindfuL software program to the Defense Advanced Research Projects Agency. Developed in collaboration with the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory, the software is designed to increase transparency in machine learning systems (artificial intelligence algorithms that learn and change over time as they are fed ever more data) by auditing them to provide insights into how they reached their decisions. "The technology that underpins machine learning and artificial intelligence applications is rapidly advancing, and now it's time to ensure these systems can be integrated, utilized, and ultimately trusted in the field," said Chris Eisenbies, product line director of the company's Autonomy, Control, and Estimation group. "The MindfuL system stores relevant data in order to compare the current environment to past experiences and deliver findings that are easy to understand."
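The announcement does not detail how that comparison works. As a purely hypothetical illustration of the general idea (recording a model's past experience and flagging inputs that fall far outside it), a wrapper like the one below could sit around any trained model; the class name, threshold, and z-score test are all assumptions for illustration, not MindfuL's actual design.

    import numpy as np

    class AuditedModel:
        # Hypothetical sketch, not BAE's MindfuL: keep summary statistics
        # of the training data, then warn when a new input lies far
        # outside that past experience.
        def __init__(self, model, train_features, threshold=3.0):
            self.model = model
            self.mean = train_features.mean(axis=0)
            self.std = train_features.std(axis=0) + 1e-8
            self.threshold = threshold  # z-score beyond which we warn

        def predict(self, x):
            # Compare the current input to past experience feature by
            # feature, then report the prediction with a plain-language note.
            z = np.abs((x - self.mean) / self.std).max()
            prediction = self.model.predict(x.reshape(1, -1))[0]
            if z < self.threshold:
                note = "input resembles past training data"
            else:
                note = f"input is {z:.1f} std devs outside past experience"
            return prediction, note

The design choice here mirrors the quoted goal: rather than explaining the model's internals, the audit layer tells an operator whether the current input is the kind of data the system has seen before, which is one common way to make machine learning output easier to trust.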


DARPA honors artificial intelligence expert

#artificialintelligence

Best listening experience is on Chrome, Firefox or Safari. The irony of artificial intelligence is how much human brainpower is required to build it. For three years, our next guest was on loan from the University of Massachusetts to the Defense Advanced Research Projects Agency, where she headed up several DARPA artificial intelligence projects. Now she's been awarded a high honor, the Meritorious Public Service Medal.


Ed Markey, Ayanna Pressley push for federal ban on facial recognition technology

Boston Herald

Massachusetts Sen. Ed Markey and Rep. Ayanna Pressley are pushing to ban the federal government's use of facial recognition technology, as Boston last week nixed the city's use of the technology and tech giants pause their sales of facial surveillance tools to police. The momentum to stop government use of facial recognition technology comes in the wake of the Minneapolis police killing of George Floyd, a black man killed by a white police officer. Floyd's death has sparked nationwide protests for racial justice and triggered calls for police reform, including changes to the ways police track people. Facial recognition technology contributes to the "systemic racism that has defined our society," Markey said on Sunday. "We cannot ignore that facial recognition technology is yet another tool in the hands of law enforcement to profile and oppress people of color in our country," Markey said during an online press briefing.