Oregon-based programmer Dan Hon trained an AI to generate new British place names for fun and posted some of his favourites to Twitter. Researchers from Maluuba, a Canadian startup owned by Microsoft, used a type of AI called reinforcement learning to play the Atari 2600 version of Ms. Pac-Man perfectly.
This incident highlights two significant challenges for applying artificial intelligence in the world of the Internet of Things (IoT), edge analytics and "smart" devices: (1) handling unexpected situations that require real-time decisions with only limited historical data, and (2) doing so within the computational and storage limitations of the device. The challenge for any autonomous device (car, truck, drone, washer, wind turbine, pacemaker) is how to manage challenge #1 within the constraints of #2. And I'm not even sure where to begin with how an autonomous car might handle something like this (I hope your autonomous car hasn't been watching any Transformer movies…). We don't need autonomous devices as much as we need "smart" devices: devices smart enough to do what my daughter did when faced with an unexpected situation requiring a real-time decision with only a limited amount of historical data and experience. Historically, rapidly declining storage costs and rapidly increasing CPU processing capabilities have allowed technologists to wait for the technology to advance and address the problem for them. Unfortunately, the growth in sensor data and the complexity of "smart" decisions at the edge is increasing faster than Moore's Law can cover (see Figure 2).
More cyberattacks will leverage machine learning to make more autonomous malware, more efficient fuzzing for vulnerabilities, etc. Semi-supervised learning and one-shot learning will reduce the amount of data needed to train several kinds of models and make AI use more widespread.
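One way semi-supervised learning cuts the amount of labelled data needed is self-training: fit a model on the few labelled examples you have, pseudo-label the unlabelled examples the model is confident about, and refit. A minimal sketch, using a hypothetical 1-D threshold classifier (the data, `margin`, and function names are illustrative, not from any specific library):

```python
def fit_threshold(points):
    """Fit a 1-D threshold classifier: the midpoint of the two class means."""
    zeros = [x for x, y in points if y == 0]
    ones = [x for x, y in points if y == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

def self_train(labeled, unlabeled, margin=0.15, rounds=3):
    """Self-training: pseudo-label confident unlabeled points, then refit."""
    labeled = list(labeled)
    for _ in range(rounds):
        t = fit_threshold(labeled)
        still_unlabeled = []
        for x in unlabeled:
            if abs(x - t) >= margin:             # far from the boundary: confident
                labeled.append((x, int(x > t)))  # adopt the model's own prediction
            else:
                still_unlabeled.append(x)        # too close to call; keep waiting
        unlabeled = still_unlabeled
    return fit_threshold(labeled), labeled

labeled = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
unlabeled = [0.05, 0.3, 0.7, 0.95]
threshold, grown = self_train(labeled, unlabeled)
```

Here four labelled points grow into eight training points without any extra human labelling; the risk, of course, is that a confident wrong pseudo-label gets baked in.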
"A lot of companies working on AI use games to build intelligent algorithms because there's a lot of human-like intelligence capabilities that you need to beat the games," Maluuba program manager Rahul Mehrotra explains in the story, noting that the variety of situations you can encounter while playing the games makes them a good testing ground. That divide between the top agent's egalitarian programming and each individual agent's desire to achieve its specific result or collect its specific pellet, regardless of the obstacles or ghosts in the way, proved to be the algorithm's secret sauce. "There's this nice interplay between how they have to, on the one hand, cooperate based on the preferences of all the agents, but at the same time each agent cares only about one particular problem," Maluuba research manager Harm Van Seijen says in the story. "It really enables us to make further progress in solving these really complex problems," Van Seijen says.
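The interplay Van Seijen describes can be sketched as a top agent that aggregates the action preferences of many narrow sub-agents, each caring about a single objective. This is only a toy illustration of the idea, not Maluuba's actual implementation; the action names, Q-values, and `aggregate_choice` function are all invented for the example:

```python
# Each sub-agent scores every action for its own objective (e.g. one pellet);
# the top agent sums the scores and picks the action with the highest total.
ACTIONS = ["up", "down", "left", "right"]

def aggregate_choice(sub_agent_qs, weights=None):
    """Pick the action maximizing the (weighted) sum of sub-agent Q-values."""
    weights = weights or [1.0] * len(sub_agent_qs)
    totals = {a: sum(w * q[a] for w, q in zip(weights, sub_agent_qs))
              for a in ACTIONS}
    return max(totals, key=totals.get)

# Two pellet agents each pull toward their own pellet; a ghost agent
# strongly penalizes moving "left", where a ghost lurks.
pellet_a = {"up": 0.9, "down": 0.1, "left": 0.2, "right": 0.3}
pellet_b = {"up": 0.2, "down": 0.1, "left": 0.8, "right": 0.3}
ghost    = {"up": 0.0, "down": 0.0, "left": -2.0, "right": 0.0}

best = aggregate_choice([pellet_a, pellet_b, ghost])
```

No single sub-agent needs a global plan: pellet_b alone would walk left into the ghost, but the aggregated preference steers the joint policy up instead.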
Yann LeCun, arguably the father of modern machine learning, has described Generative Adversarial Networks (GANs) as the most interesting idea in deep learning in the last 10 years (and there have been a lot of interesting ideas in machine learning over that period). A GAN pits two networks against each other: a generator produces candidate data, while a discriminator judges it. You train the discriminator on real data to classify, say, an image as either a real photo or a non-photographic image, while the generator learns to produce images that fool it. Given that the central problem of using deep learning models in business applications is a lack of training data, this is a really big deal. This technology could, and probably should, form a pillar of next-generation (big data and machine learning) risk management.
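The adversarial setup reduces to two opposing loss functions. A minimal sketch of the standard GAN objectives, computed directly from discriminator output probabilities (the variable names and example values are illustrative; real training would backpropagate these losses through the two networks):

```python
import math

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy the discriminator minimizes: real -> 1, fake -> 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: push the discriminator toward fake -> 1."""
    return -math.log(d_fake)

# A sharp discriminator (0.9 on real, 0.1 on fake) pays little loss itself
# but leaves the generator with a large loss; at the theoretical equilibrium
# the discriminator outputs 0.5 everywhere and can no longer tell them apart.
sharp_d = discriminator_loss(0.9, 0.1)
fooled_g = generator_loss(0.1)
equilibrium = discriminator_loss(0.5, 0.5)
```

Each side's gradient step makes the other's job harder, which is exactly what drives the generator toward producing realistic samples.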
Google has taught its DeepMind AI to navigate a parkour course by using reinforcement learning. Reinforcement learning is the practice of rewarding desirable behaviour. The faster the AI could navigate the virtual parkour course, the greater the reward. It's fascinating (and humorous) to observe all the leaps, crouches, and limbos the AI decided were the best method of navigating the course.
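The "faster is better" incentive can be encoded simply by charging the agent a small cost for every step it takes. A toy sketch of the same principle on a 1-D corridor, using tabular Q-learning (this is a minimal stand-in for DeepMind's far more sophisticated setup; all names and hyperparameters here are invented for illustration):

```python
import random

def train_corridor(length=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a 1-D corridor: reach the far end in as few
    steps as possible. Every step costs -1, so faster routes earn more."""
    q = {(s, a): 0.0 for s in range(length) for a in (-1, +1)}
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        while s < length - 1:
            # epsilon-greedy: mostly exploit the best-known action, sometimes explore
            a = rng.choice((-1, +1)) if rng.random() < eps else \
                max((-1, +1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), length - 1)
            reward = -1  # every step costs time
            best_next = 0.0 if s2 == length - 1 else max(q[(s2, -1)], q[(s2, +1)])
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    # greedy policy after training, for each non-terminal state
    return [max((-1, +1), key=lambda act: q[(s, act)]) for s in range(length - 1)]

policy = train_corridor()
```

Nothing in the reward says "walk forward"; the agent discovers that policy because backtracking accumulates more step penalties, just as the parkour agent discovered its own odd-looking but fast gaits.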
It's a common tool used in machine learning, and now the Alphabet team has used it to teach the DeepMind AI to successfully navigate a parkour course. Cool paper from colleagues at DeepMind https://t.co/X0PwKXrQ2M You can see the full results in this video; all of the stick figure's navigation was taught via reinforcement learning. It presents interesting possibilities for future AI because robots don't actually have to restrict themselves to human-like movements in order to accomplish set goals. It will be interesting to see if this has an effect on future AI and robot development.
With Big Data the hottest trend in the tech industry at the moment, machine learning is incredibly powerful for making predictions or calculated suggestions from large amounts of data. Some of the most common examples of machine learning are Netflix's algorithms that make movie suggestions based on movies you have watched in the past, or Amazon's algorithms that recommend books based on books you have bought before. The textbook that we used is one of the AI classics, Stuart Russell and Peter Norvig's Artificial Intelligence: A Modern Approach, in which we covered major topics including intelligent agents, problem-solving by searching, adversarial search, probability theory, multi-agent systems, social AI, and the philosophy, ethics and future of AI. In the model, the data variables are assumed to be linear mixtures of some unknown latent variables, and the mixing system is also unknown.
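The Amazon-style "customers who bought this also bought…" recommendation can be sketched with nothing more than set overlap between purchase histories. This is a deliberately tiny illustration of the idea, not Amazon's actual algorithm; the user names, book titles, and `recommend` function are all made up:

```python
def jaccard(a, b):
    """Similarity of two purchase sets: size of overlap over size of union."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(purchases, user):
    """Suggest items owned by similar users but not yet owned by `user`,
    ranked by the summed similarity of the users who own them."""
    mine = purchases[user]
    scores = {}
    for other, items in purchases.items():
        if other == user:
            continue
        sim = jaccard(mine, items)
        for item in items - mine:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

purchases = {
    "alice": {"dune", "neuromancer", "foundation"},
    "bob":   {"dune", "neuromancer", "snow crash"},
    "carol": {"cookbook"},
}
recs = recommend(purchases, "alice")
```

Because alice's history overlaps heavily with bob's and not at all with carol's, bob's remaining book outranks carol's; production systems replace Jaccard with learned models but keep the same "similar users, unseen items" structure.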
It's there you'll find the professors who solved the game of checkers, beat a top human player in the game of Go and used cutting-edge artificial intelligence to outsmart a handful of professional poker players for the very first time. U of A computing science professors and artificial intelligence researchers (left to right) Richard Sutton, Michael Bowling and Patrick Pilarski are working with Google's DeepMind to open the AI company's first research lab outside the U.K., in Edmonton. Last week, Google's AI subsidiary DeepMind announced it was opening its first international office in Edmonton, where Sutton, alongside professors Michael Bowling and Patrick Pilarski, will work part-time. Sutton is a pioneer in a branch of artificial intelligence research known as reinforcement learning, the computer science equivalent of treat-training a dog, except in this case the dog is an algorithm that's been incentivized to behave in a certain way.
On the mimicking side, AI has focused a lot on image recognition, speech recognition, and natural language processing. A major obstacle in classical machine learning is the feature engineering step, which requires domain experts to identify important signals before feeding the data into the training process. Although such models are very powerful and provide non-linear fits to training data, data scientists still need to carefully craft features in order to achieve good performance. The push to learn features automatically from raw data gave new life to the DNN (deep neural network) and produced significant breakthroughs in image classification and speech recognition tasks.
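To make the feature engineering step concrete, here is a sketch of the kind of hand-crafted summary a domain expert might compute from a raw signal before training a classical model. The specific features (mean, variance, zero-crossing rate) are standard examples from speech processing, but the function itself is illustrative; a deep network would instead consume the raw samples and learn its own representation:

```python
def hand_crafted_features(signal):
    """Expert-designed features summarizing a raw 1-D signal: the manual
    step that deep networks replace with learned representations."""
    n = len(signal)
    mean = sum(signal) / n
    variance = sum((x - mean) ** 2 for x in signal) / n
    # zero-crossing rate: a classic hand-built feature in speech processing
    zero_crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return {"mean": mean, "variance": variance, "zero_crossings": zero_crossings}

# A rapidly alternating signal: zero mean, but high variance and many crossings
feats = hand_crafted_features([1.0, -1.0, 1.0, -1.0])
```

Every feature here encodes a human guess about what matters; when the guess is wrong or incomplete, model performance suffers, which is exactly the bottleneck that learned representations remove.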