Goto

Collaborating Authors

2018-02


Transforming Robotic Steering Wheel Is a Reminder That Your Car Needs You

IEEE Spectrum

Most of the autonomous vehicles that you're likely to encounter in the near future are either Level 2 or Level 4 autonomous. Level 2, which you'll find in a Tesla on the highway, means that the car drives itself in specific situations but expects you to be paying attention the entire time. Level 4 y...


A Programmable Programming Language

Communications of the ACM

Matthias Felleisen (matthias@ccs.neu.edu) is a Trustee Professor in the College of Computer Science at Northeastern University, Boston, MA, USA. Robert Bruce Findler (robby@eecs.northwestern.edu) is a professor of computer science at Northwestern University, Evanston, IL, USA. Matthew Flatt (mflatt@cs.utah.edu) is a professor of computer science at the University of Utah, Salt Lake City, UT, USA. Shriram Krishnamurthi (sk@cs.brown.edu) is a professor of computer science at Brown University, Providence, RI, USA. Eli Barzilay (eli@barzilay.org) is a research scientist at Microsoft Research, Cambridge, MA, USA. Jay McCarthy (jay.mccarthy@gmail.com) is an associate professor of computer science at the University of Massachusetts, Lowell, MA, USA. Sam Tobin-Hochstadt (samth@cs.indiana.edu) is an assistant professor of computer science at Indiana University, Bloomington, IN, USA.


The State of Fakery

Communications of the ACM

An image of a dog created by a deep convolutional generative adversarial network (GAN) algorithm. Back in 1999, Hany Farid was finishing his postdoctoral work at the Massachusetts Institute of Technology (MIT) and was in a library when he stumbled on a book called The Federal Rules of Evidence. The book caught his eye, and Farid opened to a random page, on which was a section entitled "Introducing Photos into a Court of Law as Evidence." Since he was interested in photography, Farid wondered what those rules were. While Farid was not surprised to learn that a 35mm negative is considered admissible as evidence, he was surprised when he read that then-new digital media would be treated the same way.


People freaked out after robot dogs opened a door. Now they're resisting humans.

General News Tweet Watch

In one of the scariest moments in the movie "Jurassic Park," a pair of intelligent Velociraptors, brought back to life by man's hubris, defy an assumption about their limitations: They open a kitchen door. Now imagine that the raptors are real, transformed into headless robot dogs that can negoti...


Why Self-Taught Artificial Intelligence Has Trouble With the Real World (Quanta Magazine)

#artificialintelligence

Until very recently, the machines that could trounce champions were at least respectful enough to start by learning from human experience. To beat Garry Kasparov at chess in 1997, IBM engineers distilled centuries of chess wisdom into a formula that was hard-wired into their Deep Blue computer. In 2016, Google DeepMind's AlphaGo thrashed champion Lee Sedol at the ancient board game Go after poring over millions of positions from tens of thousands of human games. But now artificial intelligence researchers are rethinking the way their bots incorporate the totality of human knowledge. The current trend is: Don't bother. Last October, the DeepMind team published details of a new Go-playing system, AlphaGo Zero, that studied no human games at all. Instead, it started with the game's rules and played against itself.
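The "don't bother with human data" recipe the snippet describes, learning from the rules alone through self-play, can be illustrated at toy scale. The sketch below is an assumption of this digest rather than anything from DeepMind: it applies tabular Monte-Carlo self-play to single-pile Nim, and the function names (`train_self_play`, `best_move`) and all hyperparameters are invented for the demo.

```python
import random

# Single-pile Nim: players alternate removing 1-3 stones; whoever takes
# the last stone wins. The agent starts with only these rules and learns
# move values entirely from games played against itself.

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def train_self_play(episodes=30000, max_stones=10, eps=0.2, seed=0):
    """Learn a value table purely from self-play, no human games."""
    rng = random.Random(seed)
    Q = {}  # Q[(stones, move)] = average outcome for the player to move
    N = {}  # visit counts for incremental averaging
    for _ in range(episodes):
        stones = rng.randint(1, max_stones)
        history = []  # (state, move) per ply; both "players" share Q
        while stones > 0:
            moves = legal_moves(stones)
            if rng.random() < eps:  # explore occasionally
                move = rng.choice(moves)
            else:                   # otherwise exploit current estimates
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        # The player who made the final move wins; walking the game
        # backwards, the outcome alternates sign between the two players.
        outcome = 1.0
        for key in reversed(history):
            N[key] = N.get(key, 0) + 1
            Q[key] = Q.get(key, 0.0) + (outcome - Q.get(key, 0.0)) / N[key]
            outcome = -outcome
    return Q

def best_move(Q, stones):
    return max(legal_moves(stones), key=lambda m: Q.get((stones, m), 0.0))
```

With these settings the learned table recovers the classic Nim strategy of leaving the opponent a multiple of four stones, despite never seeing an expert game.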


Autonomous Cars Are About To Transform The Suburbs

Forbes Europe

Technicians analyze data following the trial of an autonomous self-driving vehicle in a pedestrianised zone, during a media event in Milton Keynes, north of London, on October 11, 2016. Suburbs have largely been dismissed by environmentalists and urban planners as bad for the planet, a form that ne...


The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

#artificialintelligence

In the coming decades, artificial intelligence (AI) and machine learning technologies are going to transform many aspects of our world. Much of this change will be positive; the potential for benefits in areas as diverse as health, transportation and urban planning, art, science, and cross-cultural understanding is enormous. We've already seen things go horribly wrong with simple machine learning systems; but increasingly sophisticated AI will usher in a world that is strange and different from the one we're used to, and there are serious risks if this technology is used for the wrong ends. Today EFF is co-releasing a report with a number of academic and civil society organizations on the risks from malicious uses of AI and the steps that should be taken to mitigate them in advance. At EFF, one area of particular concern has been the potential interactions between computer insecurity and AI.


Deep learning for biology

#artificialintelligence

The brain's neural network has long inspired artificial-intelligence researchers. Credit: Alfred Pasieka/SPL/Getty

Four years ago, scientists from Google showed up on neuroscientist Steve Finkbeiner's doorstep. The researchers were based at Google Accelerated Science, a research division in Mountain View, California, that aims to use Google technologies to speed scientific discovery. They were interested in applying 'deep-learning' approaches to the mountains of imaging data generated by Finkbeiner's team at the Gladstone Institute of Neurological Disease in San Francisco, also in California. Deep-learning algorithms take raw features from an extremely large, annotated data set, such as a collection of images or genomes, and use them to create a predictive tool based on patterns buried inside. Once trained, the algorithms can apply that training to analyse other data, sometimes from wildly different sources.
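The train-then-apply workflow the snippet describes (annotated examples in, predictive tool out) can be sketched in miniature. This is a hypothetical illustration in plain NumPy, not Finkbeiner's or Google's pipeline; the two-layer network, the circle-labelling task, and every function name and hyperparameter are assumptions chosen for the demo.

```python
import numpy as np

def make_data(n, seed):
    # A stand-in "annotated data set": 2-D points labelled 1.0 if they
    # fall inside the unit circle, 0.0 otherwise.
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.5, 1.5, size=(n, 2))
    y = (np.linalg.norm(X, axis=1) < 1.0).astype(float)
    return X, y

def train(X, y, hidden=16, lr=0.5, steps=3000, seed=0):
    """Fit a tiny two-layer network by full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)                   # hidden features
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # predicted P(label = 1)
        g2 = (p - y[:, None]) / len(y)             # cross-entropy gradient
        dW2 = h.T @ g2; db2 = g2.sum(axis=0)
        dz1 = (g2 @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
        dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2))) > 0.5).ravel()
```

Trained on a few hundred labelled points, the network then classifies freshly drawn points it has never seen, which is the "apply that training to analyse other data" step in miniature.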


Why Artificial Intelligence Researchers Should Be More Paranoid

#artificialintelligence

Life has gotten more convenient since 2012, when breakthroughs in machine learning triggered the ongoing frenzy of investment in artificial intelligence. Speech recognition works most of the time, for example, and you can unlock the new iPhone with your face. People with the skills to build such systems have reaped great benefits: they've become the most prized of tech workers. But a new report on the downsides of progress in AI warns they need to pay more attention to the heavy moral burdens created by their work. It calls for urgent and active discussion of how AI technology could be misused.


At The Winter Olympics, Robots Are Here To Help. But Don't Assume They Work All Hours

NPR Technology

A robot sweeps the floor at the main press center at the Pyeongchang Winter Olympics. Directions, weather reports, water bottles – those are some of the things we've seen robots offering at the Pyeongchang Winter Olympics, helping to host thousands of visitors and media. Most of the robots we've seen in Pyeongchang and Gangneung – the two areas where the Winter Games are being held – weren't made to look human. Instead, they present a wide range of looks and levels of autonomy.