AI-Alerts


Robots have already mastered games like chess and Go. Now they're coming for Jenga.

Washington Post

For several decades, various types of artificial intelligence have been facing off with people in highly competitive games and then quickly destroying their human competition. AI long ago mastered chess, the Chinese board game Go and even the Rubik's cube, which it managed to solve in just 0.38 seconds. Now machines have a new game that will allow them to humiliate humans: Jenga, the popular game -- and source of melodramatic 1980s commercials -- in which players strategically remove pieces from an increasingly unstable tower of 54 blocks, placing each one on top until the entire structure collapses. A newly released video from MIT shows a robot developed by the school's engineers playing the game with surprising precision. The machine is equipped with a soft-pronged gripper, a force-sensing wrist cuff and an external camera, allowing the robot to perceive the tower's vulnerabilities the way a human might, according to Alberto Rodriguez, the Walter Henry Gale Career Development Assistant Professor in the Department of Mechanical Engineering at MIT. "Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces," Rodriguez said in a statement released by the school.


Feared or celebrated, Amazon's Alexa is star of Super Bowl ads

The Japan Times

An android child struggles to control his emotions. Robots threaten to take away human jobs. These dark themes were explored in this year's Super Bowl commercials, with brands such as TurboTax, Olay and Sprint capitalizing on fears that technology is encroaching on our lives. At the center of it all was Inc.'s Alexa, an increasingly ubiquitous digital assistant sold by one of the world's most powerful companies. Amazon had its own commercial, in which it poked fun at Alexa.


AI Can Easily Break Text CAPTCHA

#artificialintelligence

A new study suggests that text-based CAPTCHAs are no longer safe. Researchers from Northwest University and Peking University in China, and Lancaster University in the U.K., say they developed a machine learning algorithm that can crack most text-based CAPTCHAs within 0.05 seconds. Northwest University's Fang Dingyi said the algorithm achieved a success rate of more than 50% in decoding the text-based CAPTCHA schemes used by 50 popular websites within that timeframe. The tool uses a generative adversarial network that teaches a CAPTCHA generator program to produce large numbers of CAPTCHAs, which are then used to train a solver. Said Fang, "This research suggests one can easily launch an attack on a new CAPTCHA scheme using [artificial intelligence]. It means that this first defense of many websites is no longer reliable."
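
The core idea above, training a solver entirely on a generator's synthetic output, can be sketched in miniature. Everything here is an illustrative stand-in, not the study's method: the bit-pattern "glyphs" substitute for rendered CAPTCHA images, the noise model for distortion, and template matching for the learned solver (the real work uses a generative adversarial network over images).

```python
import random

random.seed(0)
ALPHABET = "0123456789"
W = 16          # bits per toy "glyph" (stands in for a rendered character)
NOISE = 0.15    # bit-flip probability, standing in for CAPTCHA distortion

# Fixed "font": one random bit pattern per character (illustrative only).
FONT = {c: [random.randint(0, 1) for _ in range(W)] for c in ALPHABET}

def generate(text):
    """Generator: render text as bits, then add noise (distortion)."""
    bits = []
    for c in text:
        bits += [b ^ (random.random() < NOISE) for b in FONT[c]]
    return bits

def train_solver(samples_per_char=200):
    """Learn a per-character template purely from generated samples."""
    model = {}
    for c in ALPHABET:
        sums = [0] * W
        for _ in range(samples_per_char):
            for i, b in enumerate(generate(c)):
                sums[i] += b
        model[c] = [round(s / samples_per_char) for s in sums]
    return model

def solve(model, bits):
    """Decode each character slot by nearest template (Hamming distance)."""
    out = ""
    for k in range(0, len(bits), W):
        slot = bits[k:k + W]
        out += min(ALPHABET,
                   key=lambda c: sum(a != b for a, b in zip(model[c], slot)))
    return out

model = train_solver()
hits = sum(solve(model, generate("4217")) == "4217" for _ in range(100))
print(f"solved {hits}/100 challenges")
```

The point of the sketch is the pipeline shape: no human-labeled CAPTCHAs are needed, because the generator supplies unlimited labeled training data for the solver.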


Mind-controlled robot lets you weld metal without using your hands

New Scientist

The person controlling the robot wears an electroencephalography (EEG) cap, which measures the brain's electrical activity via the scalp. They then look at a screen that has several pre-selected metal seams for the robot to weld. When their chosen option flickers, it generates a specific electrical response in the brain detectable by the EEG.
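
The selection mechanism described here, where each on-screen option flickers at its own rate and the attended flicker produces a matching frequency in the EEG, is commonly known as a steady-state visually evoked potential (SSVEP) interface. A minimal sketch on simulated data follows; the sample rate, flicker frequencies, seam labels, and signal model are all assumptions for illustration, not details from the article.

```python
import math, random

random.seed(1)
FS = 250                                                    # sample rate (Hz), assumed
OPTIONS = {8.0: "seam A", 10.0: "seam B", 12.0: "seam C"}   # flicker rates, assumed

def simulate_eeg(attended_hz, seconds=3):
    """Toy EEG: a weak sinusoid at the attended flicker rate plus noise."""
    n = int(FS * seconds)
    return [0.5 * math.sin(2 * math.pi * attended_hz * t / FS)
            + random.gauss(0, 1.0) for t in range(n)]

def band_power(signal, hz):
    """Power at one frequency via correlation with sin/cos references."""
    s = sum(x * math.sin(2 * math.pi * hz * t / FS) for t, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * hz * t / FS) for t, x in enumerate(signal))
    return (s * s + c * c) / len(signal)

def classify(signal):
    """Pick the option whose flicker frequency shows the strongest response."""
    best = max(OPTIONS, key=lambda hz: band_power(signal, hz))
    return OPTIONS[best]

eeg = simulate_eeg(10.0)       # user attends to the 10 Hz option
print(classify(eeg))           # identifies "seam B" for this recording
```

With a 3-second window each candidate frequency completes a whole number of cycles, so the sine/cosine references are orthogonal across options, which is why a simple per-frequency power comparison suffices in the sketch.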


Putting neural networks under the microscope

MIT News

Researchers from MIT and the Qatar Computing Research Institute (QCRI) are putting the machine-learning systems known as neural networks under the microscope. In a study that sheds light on how these systems manage to translate text from one language to another, the researchers developed a method that pinpoints individual nodes, or "neurons," in the networks that capture specific linguistic features. Neural networks learn to perform computational tasks by processing huge sets of training data. In machine translation, a network crunches language data annotated by humans, and presumably "learns" linguistic features, such as word morphology, sentence structure, and word meaning. Given new text, these networks match these learned features from one language to another, and produce a translation.
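
The idea of pinpointing individual neurons that capture a specific linguistic feature can be sketched on synthetic data: score each neuron by how well its activation alone predicts the feature, then rank. The planted neuron, the binary "past tense" feature, and the threshold-based scoring below are illustrative stand-ins, not the paper's actual procedure.

```python
import random

random.seed(2)
N_NEURONS, N_WORDS = 20, 400

# Toy activations: neuron 7 (arbitrarily chosen) is shifted by a binary
# linguistic feature, e.g. "is this word past tense"; the rest are noise.
labels = [random.randint(0, 1) for _ in range(N_WORDS)]
acts = [[random.gauss(0, 1) + (2.0 * labels[w] if n == 7 else 0.0)
         for w in range(N_WORDS)] for n in range(N_NEURONS)]

def neuron_accuracy(values, labels):
    """Score one neuron: accuracy of the best single-threshold classifier."""
    best = 0.0
    for thr in values:
        acc = sum((v > thr) == bool(y)
                  for v, y in zip(values, labels)) / len(labels)
        best = max(best, acc, 1 - acc)
    return best

ranked = sorted(range(N_NEURONS),
                key=lambda n: -neuron_accuracy(acts[n], labels))
print("neuron most predictive of the feature:", ranked[0])  # recovers neuron 7
```

Because every other neuron is uncorrelated with the labels, the planted neuron stands out sharply; in real networks the same ranking idea surfaces neurons that track features like morphology or word order.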


San Francisco Could Be First to Ban Facial Recognition Tech

WIRED

If a local tech industry critic has his way, San Francisco could become the first US city to ban its agencies from using facial recognition technology. Aaron Peskin, a member of the city's Board of Supervisors, proposed the ban Tuesday as part of a suite of rules to enhance surveillance oversight. In addition to the ban on facial recognition technology, the ordinance would require city agencies to gain the board's approval before buying new surveillance technology, putting the burden on city agencies to publicly explain why they want the tools as well as the potential harms. It would also require an audit of any existing surveillance tech--things like gunshot-detection systems, surveillance cameras, or automatic license plate readers--in use by the city; officials would have to report annually on how the technology was used, community complaints, and with whom they share the data. Those rules would follow similar ordinances passed in nearby Oakland and Santa Clara County.


Intel Bets Big On Kubernetes For Nauta Deep Learning Platform

#artificialintelligence

Intel announced Nauta, an open source deep learning project based on Kubernetes. The project comes with select open source components and Intel-developed custom applications, tools, and scripts for building deep learning models. According to Intel, Nauta provides a multi-user, distributed computing environment for running deep learning model training experiments on systems based on Intel Xeon processors. The software foundation for the distributed platform is built on Kubernetes, the industry's leading container orchestration engine. Mainstream deep learning tools and frameworks such as TensorFlow, TensorBoard and JupyterHub are tightly integrated with the platform.


Parkland Is Embracing Student Surveillance

The Atlantic

In the 11 months since 17 teachers and students were killed at Marjory Stoneman Douglas High School in Parkland, Florida, campuses across the country have started spending big on surveillance technology. The Lockport, New York, school district spent $1.4 million in state funds on a facial-recognition system. Schools in Michigan, Massachusetts, and Los Angeles have adopted artificial-intelligence software--prone to false positives--that scans students' Facebook and Twitter accounts for signs that they might become a shooter. In New Mexico, students as young as 6 are under acoustic surveillance, thanks to a gunshot-detection program originally developed for use by the military to track enemy snipers. Earlier this month, the Marjory Stoneman Douglas High School Public Safety Commission released its report on the safety and security failures that contributed to fatalities during last year's shooting.


Drones Help Rid Galapagos Island of Invasive Rats

IEEE Spectrum Robotics Channel

The Galapagos Islands are famous for their exotic wildlife, which in most cases is not nearly as afraid of humans as it should be. Humans have done some seriously horrible things to the animals living there, like packing thousands of giant tortoises upside down on ships because they would stay alive without food or water for months and could then be eaten. People traveling to and living in the Galapagos have caused other serious problems to the fragile ecosystem: In addition to devastating oil spills, humans have introduced numerous invasive species to the islands. In particular, goats, which were brought on purpose, and rats, which were brought accidentally, have been catastrophic for endemic animal populations. For decades, the Galapagos National Park Directorate (DPNG) has been working to remove invasive species island by island, including tens of thousands of feral goats, pigs, and donkeys.


SONYC

Communications of the ACM

Over an 11-month period--May 2016 to April 2017--51% of all noise complaints in the focus area were related to after-hours construction activity (6 P.M.–7 A.M.), three times the amount in the next category. Note that combining all construction-related complaints adds up to 70% of this sample, highlighting how disruptive to the lives of ordinary citizens this particular category of noise can be. Figure 4c includes sound pressure level (SPL) values (blue line) at a five-minute resolution for the after-hours period during or immediately preceding a subset of the complaints. Dotted green lines correspond to background levels, computed as the moving average of SPL measurements within a two-hour window. Dotted black lines correspond to SPL values 10 dB above the background, the threshold defined by the city's noise code to indicate potential violations.
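
The thresholding rule described above, flagging readings that exceed a moving-average background by 10 dB, is straightforward to sketch. The trailing-window form of the average and the sample readings are assumptions for illustration; the article only specifies a two-hour window at five-minute resolution.

```python
def flag_potential_violations(spl, window=24, margin_db=10.0):
    """Flag indices where SPL exceeds the moving-average background + margin.

    spl: SPL readings at 5-minute resolution, so a 2-hour background
    window corresponds to 24 samples (trailing window assumed here).
    """
    flagged = []
    for i in range(len(spl)):
        lo = max(0, i - window)
        background = sum(spl[lo:i + 1]) / (i + 1 - lo)   # moving average
        if spl[i] > background + margin_db:
            flagged.append(i)
    return flagged

# Hypothetical quiet night around 55 dB with one loud construction event.
readings = [55.0] * 30 + [78.0] + [56.0] * 10
print(flag_potential_violations(readings))  # → [30]
```

A single 78 dB spike sits well above the roughly 56 dB background and is flagged, while the ordinary fluctuations around it are not, which mirrors how the dotted threshold line in Figure 4c separates potential violations from background noise.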