"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
In the early part of the 20th century, a group of German experimental psychologists began to question how the brain acquires meaningful perceptions of a world that is otherwise chaotic and unpredictable. To answer this question, they developed the notion of the "gestalt effect"--the idea that, in perception, the whole is something other than the sum of its parts. Since then, psychologists have discovered that the human brain is remarkably good at perceiving complete pictures on the basis of fragmentary information. A good example is the figure shown here: the brain perceives two-dimensional shapes such as a triangle and a square, and even a three-dimensional sphere.
Legal experts warn people's online photos are being used without permission to power facial-recognition technology that could eventually be used for surveillance. Said New York University School of Law's Jason Schultz, "This is the dirty little secret of [artificial intelligence] training sets. Researchers often just grab whatever images are available in the wild." IBM recently issued a set of nearly 1 million photos culled from the image-hosting site Flickr, and programmed to describe subjects' appearance, allegedly to help reduce bias in facial recognition; although IBM said Flickr users can opt out of the database, deleting photos is almost impossible.
Quantum computing and artificial intelligence are both ridiculously hyped. But a combination of the two may indeed open up new possibilities. In a research paper published today in the journal Nature, researchers from IBM and MIT show how an IBM quantum computer can accelerate a specific type of machine-learning task called feature matching. The team says that future quantum computers should allow machine learning to hit new levels of complexity. When they were first imagined decades ago, quantum computers were seen as a fundamentally different way to process information.
Machine learning and quantum computing have their staggering levels of technology hype in common. But certain aspects of their mathematical foundations are also strikingly similar. In a paper in Nature, Havlíček et al.¹ exploit this link to show how today's quantum computers can, in principle, be used to learn from data -- by mapping the data into a space in which only quantum states exist. One of the first things one learns about quantum computers is that these machines are extremely difficult to simulate on a classical computer such as a desktop PC. In other words, classical computers cannot efficiently reproduce the results of a general quantum computation.
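The idea of "mapping data into a space in which only quantum states exist" can be illustrated with a toy, classically simulated feature map. This is a sketch of the general kernel idea, not the circuit from the paper: each feature is encoded as a qubit rotation, and the similarity of two data points is the squared overlap of their encoded states.

```python
import numpy as np

def encode(x):
    # Map each feature x_i to a one-qubit state via a rotation,
    # then tensor the qubits together into one register state.
    qubits = [np.array([np.cos(xi / 2), np.sin(xi / 2)]) for xi in x]
    state = qubits[0]
    for q in qubits[1:]:
        state = np.kron(state, q)
    return state

def kernel(x, y):
    # Similarity of two data points = squared overlap of their
    # encoded states; such a kernel can be fed to a classical SVM.
    return float(abs(np.dot(encode(x), encode(y))) ** 2)

a = np.array([0.3, 1.2])
b = np.array([2.0, 0.1])
print(kernel(a, a), kernel(a, b))  # identical points overlap ~1.0
```

Note that this version encodes each point as an unentangled product state, which a classical computer simulates easily. The argument in the Nature paper is that entangling feature maps produce states a classical machine cannot efficiently track, and that is where a quantum advantage could arise.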
The department disclosed its use of the technology only this month, when Levine and Cholas-Wood detailed their work in the INFORMS Journal on Applied Analytics, in an article showing other departments how they could create similar software. Speaking about it with the news media for the first time, they told The Associated Press recently that theirs is the first police department in the country to use a pattern-recognition tool like this.
A startup called CogitAI has developed a platform that lets companies use reinforcement learning, the technique that gave AlphaGo mastery of the board game Go. Gaining experience: AlphaGo, an AI program developed by DeepMind, taught itself to play Go by practicing. It's practically impossible for a programmer to manually code in the best strategies for winning. Instead, reinforcement learning let the program figure out how to defeat the world's best human players on its own. Drug delivery: Reinforcement learning is still an experimental technology, but it is gaining a foothold in industry.
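The trial-and-error loop at the heart of reinforcement learning can be sketched with tabular Q-learning on a toy corridor task. This is an illustration of the general technique, not AlphaGo's actual method (which combines deep networks with tree search); the corridor environment and all constants here are invented for the example.

```python
import random

# Toy corridor: states 0..4, actions 0 = left, 1 = right,
# reward +1 only for reaching the goal state 4.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def choose(s):
    # Epsilon-greedy: explore occasionally, and break ties randomly.
    if random.random() < EPS or Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

random.seed(0)
for _ in range(500):            # episodes of pure trial and error
    s, done = 0, False
    while not done:
        a = choose(s)
        s2, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward the reward
        # plus the discounted value of the best next action.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [Q[s].index(max(Q[s])) for s in range(N_STATES)]
print(policy[:GOAL])  # learned policy: always move right, toward the goal
```

No strategy is ever coded in by hand: the agent starts knowing nothing, receives reward only at the goal, and the value updates propagate that signal backward until the greedy policy is optimal, which is the same principle AlphaGo applied at vastly larger scale.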
The images are huge and square and harrowing: a form, reminiscent of a face, engulfed in fiery red-and-yellow currents; a head emerging from a cape collared with glitchy feathers, from which a shape suggestive of a hand protrudes; a heap of gold and scarlet mottles, convincing as fabric, propping up a face with grievous, angular features. These are part of "Faceless Portraits Transcending Time," an exhibition of prints recently shown at the HG Contemporary gallery in Chelsea, the epicenter of New York's contemporary-art world. All of them were created by a computer. The catalog calls the show a "collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal," a move meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work. According to HG Contemporary, it's the first solo gallery exhibit devoted to an AI artist.
Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis the US Transportation Security Administration (TSA) began testing in 2003 with a new surveillance program called Screening of Passengers by Observation Techniques, or Spot for short. While developing the program, the agency consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them onto corresponding emotions. This method was used to train "behavior detection officers" to scan faces for signs of deception.
Human-robot interaction is easy to do badly, and very difficult to do well. One approach that has worked well for robots from R2-D2 to Kuri is to avoid the problem of language--rather than use real words to communicate with humans, you can do pretty well (on an emotional level, at least) with a variety of bleeps and bloops. But as anyone who's watched Star Wars knows, R2-D2 really has a lot going on with the noises that it makes, and those noises were carefully designed to be both expressive and responsive. Most actual robots don't have the luxury of a professional sound team (and as much post-production editing as you need), so the question becomes how to teach a robot to make the right noises at the right times. At Georgia Tech's Center for Music Technology (GTCMT), Gil Weinberg and his students have a lot of experience with robots that make noise of various sorts. They've used a new deep-learning-based technique to teach their musical robot Shimi a basic understanding of human emotions, and how to communicate back to humans in just the right way, using music.