"The field of Machine Learning seeks to answer these questions: How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?"
– from The Discipline of Machine Learning by Tom Mitchell. CMU-ML-06-108, 2006.
Machine vision coupled with artificial intelligence (AI) has made great strides toward letting computers understand images. Thanks to deep learning, which processes information in a way analogous to the human brain, machine vision is doing everything from keeping self-driving cars on the right track to improving cancer diagnosis by examining biopsy slides or x-ray images. Now some researchers are going beyond what the human eye or a camera lens can see, using machine learning to watch what people are doing on the other side of a wall. The technique relies on low-power radio frequency (RF) signals, which reflect off living tissue and metal but pass easily through wooden or plaster interior walls. AI can decipher those signals, not only to detect the presence of people, but also to see how they are moving, and even to predict the activity they are engaged in, from talking on a phone to brushing their teeth.
Despite the rapid advances it has made over the past decade, deep learning presents many industrial users with problems when they try to implement the technology, issues that the Internet giants have worked around through brute force. "The challenge that today's systems face is the amount of data they need for training," says Tim Ensor, head of artificial intelligence (AI) at U.K.-based technology company Cambridge Consultants. "On top of that, it needs to be structured data." Most of the commercial applications and algorithm benchmarks used to test deep neural networks (DNNs) consume copious quantities of labeled data; for example, images or pieces of text that have already been tagged in some way by a human to indicate what the sample represents. The Internet giants, who have collected the most data for use in training deep learning systems, have often resorted to crowdsourcing measures such as asking people to prove they are human during logins by identifying objects in a collection of images, or simply buying manual labor through services such as Amazon's Mechanical Turk.
In the summer of 2009, the Israeli neuroscientist Henry Markram strode onto the TED stage in Oxford, England, and made an immodest proposal: Within a decade, he said, he and his colleagues would build a complete simulation of the human brain inside a supercomputer. They'd already spent years mapping the cells in the neocortex, the supposed seat of thought and perception. "It's a bit like going and cataloging a piece of the rain forest," Markram explained. "How many trees does it have? What shapes are the trees?"
Early in 2018, the volcano Anak Krakatau in Indonesia started falling apart. It was a subtle transformation -- one that nobody noticed at the time. The southern and southwestern flanks of the volcano were slipping towards the ocean at a rate of about 4 millimetres per month, a shift so small that researchers only saw it after the fact as they combed through satellite radar data. By June, though, the mountain began showing obvious signs of unrest. It spewed fiery ash and rocks into the sky in a series of small eruptions. And it was heating up.
Anyone looking for an illustration of how rapidly shopping habits changed when covid-19 hit needed only to glance at the top 10 search terms on Amazon in the week of April 12 to 18. In place of former mainstays like phone cases, phone chargers, and Lego sets were "toilet paper," "face mask," "hand sanitizer," "paper towels," "Lysol spray," "Clorox wipes," "mask," "Lysol," "masks for germ protection," and "N95 mask." People weren't just searching; they were buying, too--and in bulk. The majority of people looking for masks ended up buying the new Amazon #1 best seller, "Face Mask, Pack of 50." Nozzle, a London-based consultancy specializing in algorithmic advertising for Amazon sellers, captured the rapid change back in February in this simple graph.
Artist, designer and programmer Cyril Diagne recently created a bit of tech that looks more like science fiction than science fact. Using a combination of augmented reality and machine learning tech, he's figured out a way to "copy and paste" objects from the real world into Photoshop, using just a smartphone. What might, at first blush, seem like special effects trickery or an April Fools prank is actually a piece of bona-fide tech that he's published to GitHub. As the description on GitHub explains, this is "an AR ML prototype that allows cutting elements from your surroundings and pasting them in an image editing software." According to Diagne, the "secret sauce" is a piece of machine learning (AKA "Artificial Intelligence") tech called BASNet, which automatically recognizes and cuts out real-world objects when taking a photograph.
NEW YORK (Reuters) - After a week or so sick in bed in their New York City apartment in March, members of the Johnson-Baruch family were convinced they had been stricken by the novel coronavirus. Subsequent test results left them with more questions than answers. Tests both for the virus itself and for the antibodies the immune system produces to fight the infection are becoming more widely available, but they are not perfect. For Maree Johnson-Baruch, her husband, Jason Baruch, and their two teenage daughters, their experience ran the gamut. They all became sick around the same time with the same symptoms.
MathWorks today introduced Release 2020a with expanded AI capabilities for deep learning. Engineers can now train neural networks in the updated Deep Network Designer app, manage multiple deep learning experiments in a new Experiment Manager app, and choose from more network options to generate deep learning code. R2020a introduces new capabilities specifically for automotive and wireless engineers in addition to hundreds of new and updated features for all users of MATLAB and Simulink. More details are available in the Release 2020a video. "MathWorks provides a comprehensive platform for building AI-driven systems," said David Rich, MATLAB marketing director.
Researchers have proposed a technique for shrinking deep learning models that they say is simpler and produces more accurate results than state-of-the-art methods. Massachusetts Institute of Technology (MIT) researchers have proposed a technique for compressing deep learning models: prune a model's weakest connections, then retrain the smaller model at its faster, initial rate of learning. The technique's groundwork was partly laid by the AutoML for model compression (AMC) algorithm from MIT's Song Han, which automatically removes redundant neurons and connections, then retrains the model to restore its initial accuracy. MIT's Jonathan Frankle and Michael Carbin determined that the model could simply be rewound to its early training rate without tinkering with any parameters. Although greater shrinkage is accompanied by reduced model accuracy, Frankle and Carbin found that their method outperformed both AMC and Frankle's earlier work on weight-rewinding techniques, regardless of the amount of compression.
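The prune-then-rewind idea can be sketched in a few lines. This is a minimal toy illustration using NumPy, not the MIT authors' actual code; the layer shape, the 50% pruning fraction, and the `early_weights` snapshot are all assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights of a toy layer after full training.
trained_weights = rng.normal(size=(8, 8))

# Snapshot of the same layer saved early in training (the "rewind point").
early_weights = rng.normal(size=(8, 8))

def prune_mask(weights, fraction):
    """Return a boolean mask that drops the smallest-magnitude `fraction` of weights."""
    threshold = np.quantile(np.abs(weights), fraction)
    return np.abs(weights) >= threshold

# Prune the weakest 50% of connections, judged from the trained model...
mask = prune_mask(trained_weights, 0.5)

# ...then rewind the surviving weights to their early-training values,
# instead of fine-tuning them from their fully trained values.
rewound = np.where(mask, early_weights, 0.0)

print(f"kept {mask.mean():.0%} of connections")
# The pruned, rewound network would then be retrained as usual.
```

The key distinction from ordinary prune-and-fine-tune pipelines is that the kept weights restart from their early-training values, letting the smaller network relearn at its initial, faster learning rate.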
It was not long ago that the world watched World Chess Champion Garry Kasparov lose a decisive match against a supercomputer. IBM's Deep Blue embodied the state of the art in the late 1990s, when a machine defeating a world (human) champion at a complex game such as chess was still unheard of. Fast-forward to today, and not only have supercomputers greatly surpassed Deep Blue in chess, they have managed to achieve superhuman performance in a string of other games, often much more complex than chess, ranging from Go to Dota to classic Atari titles. Many of these games have been mastered just in the last five years, pointing to a pace of innovation much quicker than the two decades prior. Recently, Google released work on Agent57, which for the first time showcased superior performance over existing benchmarks across all 57 Atari 2600 games.