2020-11
Need a Hypothesis? This A.I. Has One
Machine-learning algorithms seem to have insinuated their way into every human activity short of toenail clipping and dog washing, although the tech giants may have solutions in the works for both. If Alexa knows anything about such projects, she's not saying. But one thing that algorithms presumably cannot do, besides feel heartbreak, is formulate theories to explain human behavior or account for the varying blend of motives behind it. They are computer systems; they can't play Sigmund Freud or Carl Jung, at least not convincingly. Social scientists have used the algorithms as tools, to number-crunch and test-drive ideas, and potentially predict behaviors -- like how people will vote or who is likely to engage in self-harm -- secure in the knowledge that ultimately humans are the ones who sit in the big-thinking chair.
tl;dr: this AI sums up research papers in a sentence
TLDR generates one-sentence summaries of computer-science papers on the scientific search engine Semantic Scholar. Credit: Agnese Abrusci/Nature
The creators of a scientific search engine have unveiled software that automatically generates one-sentence summaries of research papers, which they say could help scientists to skim-read papers faster. The free tool, which creates what the team call TLDRs (the common Internet acronym for 'Too long, didn't read'), was activated this week for search results at Semantic Scholar, a search engine created by the non-profit Allen Institute for Artificial Intelligence (AI2) in Seattle, Washington. For the moment, the software generates sentences only for the ten million computer-science papers covered by Semantic Scholar, but papers from other disciplines should be getting summaries in the next month or so, once the software has been fine-tuned, says Dan Weld, who manages the Semantic Scholar group at AI2 and led the work. Preliminary testing suggests that the tool helps readers to sort through search results faster than viewing titles and abstracts, especially on mobile phones, he says. "People seem to really like it."
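AI2's TLDR system is a learned, abstractive model, but the basic idea of picking one representative sentence can be illustrated with a crude frequency-based extractive stand-in. A minimal sketch, purely illustrative — the function name and scoring scheme are invented here, not AI2's method:

```python
import re
from collections import Counter

def tldr(text: str) -> str:
    """Return the sentence whose words best represent the whole text:
    a crude extractive stand-in for a learned summarizer."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        # Average document-wide frequency of the sentence's words.
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    return max(sentences, key=score)
```

A real abstractive model rewrites rather than selects, so its output can compress several sentences into one; this sketch only shows why a single well-chosen sentence is often enough to triage search results.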
Can We Make Our Robots Less Biased Than Us?
On a summer night in Dallas in 2016, a bomb-handling robot made technological history. Police officers had attached roughly a pound of C-4 explosive to it, steered the device up to a wall near an active shooter and detonated the charge. In the explosion, the assailant, Micah Xavier Johnson, became the first person in the United States to be killed by a police robot. Afterward, then-Dallas Police Chief David Brown called the decision sound. Before the robot attacked, Mr. Johnson had shot five officers dead, wounded nine others and hit two civilians, and negotiations had stalled.
Very Little Stands Between the U.S. and a Technological Panopticon
This article is part of the Policing and Technology Project, a collaboration between Future Tense and the Tech, Law, & Security Program at American University Washington College of Law that examines the relationship between law enforcement, police reform, and technology. On Friday, Nov. 20, at 1 p.m. Eastern, Future Tense will co-host "Technology, Policing, and Earning the Public Trust," an online event about the role of technology in law enforcement reform. This summer, when officials in a few cities started using facial recognition software to identify protesters, many cried foul. Those objections turned ironic when protesters used facial recognition to identify police officers who had covered their badges or nameplates during protests. Powerful technology beloved by police had become a tool for accountability: David defeats Goliath.
- North America > United States > Maryland (0.16)
- Asia > China (0.06)
- North America > United States > Florida (0.05)
- North America > United States > Arizona (0.05)
Softening Up Robots
MIT CSAIL's flexible sensors can be applied as skin to the bodies of soft robots. When you picture a robot, you likely envision one large and rigid, with limited movement and an outer shell that is hard to the touch. Several projects currently underway seek to change that, with the use of soft, more human-like artificial skin. Artificial skins include any surface-based device or distributed network of sensors that enable an agent to perceive mechanical deformations, touch, temperature, vibration, and/or pain, according to Ryan Truby, a post-doctoral fellow in the Massachusetts Institute of Technology (MIT) Computer Science & Artificial Intelligence Lab (CSAIL). Engineers are working to create skins that include as many of these sensations as possible, while also possessing high sensitivity and spatial resolution in sensing, he adds.
- North America > United States > Massachusetts (0.24)
- North America > United States > Texas (0.04)
- North America > United States > Maryland (0.04)
- Europe > Germany > Baden-Württemberg > Stuttgart Region > Stuttgart (0.04)
- Health & Medicine (0.94)
- Energy > Energy Storage (0.70)
- Electrical Industrial Apparatus (0.70)
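The distributed sensor networks Truby describes ultimately reduce, in software, to reading an array of "taxels" and localizing contact. A hedged sketch under invented assumptions — the grid layout and function name are hypothetical, and a real skin would also fuse temperature, vibration, and deformation channels:

```python
def contact_point(taxels):
    """Given a 2D grid of pressure readings from a sensor skin,
    return the (row, col) of the strongest contact."""
    best = (0, 0)
    for r, row in enumerate(taxels):
        for c, value in enumerate(row):
            if value > taxels[best[0]][best[1]]:
                best = (r, c)
    return best
```

Spatial resolution in this picture is simply taxel density: the finer the grid, the more precisely a robot can tell where it is being touched.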
How to Make AI Less Biased
The artificial intelligence world is making a strong push to root out bias in AI systems, but it faces some significant obstacles. Academic researchers and the technology industry are working to eliminate bias from artificial intelligence (AI) systems. IBM's AI Fairness 360 and Google's What-if tool are among the open source packages that can be used to audit models for biased results. Most researchers say bigger and more representative training sets are the best way to address the issue of bias. For instance, a spokeswoman for Apple said the company used a dataset of more than 2 billion faces to develop a more accurate facial recognition system to unlock its iPhones.
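One concrete check that auditing packages like AI Fairness 360 report is the gap in positive-prediction rates across demographic groups. A minimal sketch of that metric in plain Python — this mirrors the idea, not either toolkit's actual API:

```python
def demographic_parity_difference(preds, groups):
    """Largest gap in positive-prediction rate between groups.
    A value of 0 means every group receives positive predictions
    at the same rate; larger values flag potential bias."""
    by_group = {}
    for pred, group in zip(preds, groups):
        by_group.setdefault(group, []).append(pred)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values())
```

Metrics like this only surface disparities; deciding whether a disparity is unfair, and how to fix it, still requires the bigger, more representative data the article describes.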
Magnetic spray turns objects into mini robots that can deliver drugs
A glue-like magnetic spray can turn objects, such as pills, into mini robots that can be controlled by magnets and navigated through the body. The sprayed objects can be made to roll, flip and crawl using a magnetic field. Shen Yajing at City University of Hong Kong and his colleagues even used the spray to animate the wings of an origami crane. "Our spray can convert various tiny objects to mini robots directly," says Shen. The object can be flat or three-dimensional, he says, and only a thin coating of spray is required.
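The rolling and flipping comes from the torque an external field exerts on the magnetised coating, the textbook relation τ = m × B. A minimal sketch of that cross product — the vectors below are illustrative, not the researchers' measurements:

```python
def magnetic_torque(m, b):
    """Torque (m x B) on a magnetic moment m in a field b.
    Steering the field direction steers this torque, which is
    what lets an operator roll or flip a magnetised object."""
    return (m[1] * b[2] - m[2] * b[1],
            m[2] * b[0] - m[0] * b[2],
            m[0] * b[1] - m[1] * b[0])
```

When the moment is aligned with the field the torque vanishes, which is why rotating the applied field continuously can keep a coated object tumbling forward.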
The way we train AI is fundamentally flawed
Underspecification is a known issue in statistics, where observed effects can have many possible causes. D'Amour, who has a background in causal reasoning, wanted to know why his own machine-learning models often failed in practice. He wondered if underspecification might be the problem here too. D'Amour soon realized that many of his colleagues were noticing the same problem in their own models. "It's actually a phenomenon that happens all over the place," he says.
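Underspecification can be shown in a few lines: two predictors that are indistinguishable on the training data, because two features happen to be perfectly correlated there, diverge as soon as that correlation breaks. A toy sketch — the data and models are invented for illustration:

```python
# Training data where the two features are perfectly correlated
# (x2 == x1) and the label equals x1.
train = [((1.0, 1.0), 1.0), ((2.0, 2.0), 2.0)]

def model_a(x):  # relies on the first feature
    return x[0]

def model_b(x):  # relies on the second feature
    return x[1]

# Both models fit the training data exactly...
assert all(model_a(x) == y and model_b(x) == y for x, y in train)

# ...but disagree the moment the spurious correlation breaks.
shifted = (2.0, 0.0)
print(model_a(shifted), model_b(shifted))  # 2.0 vs 0.0
```

Training cannot distinguish the two models, so which one a pipeline ships is effectively arbitrary, and only one of them survives the distribution shift. That is the "all over the place" failure D'Amour describes, scaled down to two data points.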
US Navy's huge uncrewed robot ship has journeyed through Panama Canal
A robotic cargo vessel has passed through the Panama Canal for the first time. The uncrewed ship, an Overlord Unmanned Surface Vessel (USV) of the US Navy, made a 4700 nautical mile (8700 kilometre) journey including passage from the Atlantic to the Pacific almost entirely without human assistance. Pentagon spokesman Josh Frey says the vessel was in autonomous mode for over 97 per cent of the trip's length. A remote crew assisted when needed.
- North America > United States (1.00)
- North America > Panama (1.00)
- Transportation > Marine (1.00)
- Government > Regional Government > North America Government > US Government (1.00)
- Government > Military > Navy (0.82)
The ethical questions that haunt facial-recognition research
In September 2019, four researchers wrote to the publisher Wiley to "respectfully ask" that it immediately retract a scientific paper. The study, published in 2018, had trained algorithms to distinguish faces of Uyghur people, a predominantly Muslim minority ethnic group in China, from those of Korean and Tibetan ethnicity. China had already been internationally condemned for its heavy surveillance and mass detentions of Uyghurs in camps in the northwestern province of Xinjiang -- which the government says are re-education centres aimed at quelling a terrorist movement. According to media reports, authorities in Xinjiang have used surveillance cameras equipped with software attuned to Uyghur faces. As a result, many researchers found it disturbing that academics had tried to build such algorithms -- and that a US journal had published a research paper on the topic. And the 2018 study wasn't the only one: journals from publishers including Springer Nature, Elsevier and the Institute of Electrical and Electronics Engineers (IEEE) had also published peer-reviewed papers that describe using facial recognition to identify Uyghurs and members of other Chinese minority groups. The complaint, which launched an ongoing investigation, was one foray in a growing push by some scientists and human-rights activists to get the scientific community to take a firmer stance against unethical facial-recognition research.
- North America > United States (1.00)
- Asia > China > Xinjiang (0.34)
- Media (1.00)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology (1.00)