After six months of competition (and a few last-minute submissions), we are happy to announce the conclusion and winners of the Obstacle Tower Challenge. We want to thank all of the participants in both rounds and congratulate the winners, Alex Nichol and the Compscience.org team. We are also excited to share that we have open-sourced Obstacle Tower for the research community to extend for their own needs. We started this challenge in February as a way to help foster research in the AI community by providing a challenging new benchmark of agent performance built in Unity, which we called Obstacle Tower. Obstacle Tower was designed to be difficult for current machine learning algorithms to solve, and to push the boundaries of what is possible in the field by focusing on procedural generation.
Industries and governments alike have invested significantly in the AI field, with many AI-related startups established in the last 5 years. If another AI winter were to come about, many people could lose their jobs and many startups might have to shut down, as has happened before. Moreover, the economic difference between an approaching winter and ongoing success is estimated to be at least tens of billions of dollars by 2025, according to McKinsey & Company. This paper does not aim to discuss whether progress in AI is to be desired or not. Instead, the purpose of the discussions and results presented herein is to inform the reader of how likely progress in AI research is. For a detailed overview of both AI winters, see my first and second Medium articles on the topic. In this section, the central causes of the AI winters are extracted from the above discussion of previous winters.
"Our AI takes about 20 moves, most of the time solving it in the minimum number of steps," Baldi says. "Right there, you can see the strategy is different, so my best guess is that the AI's form of reasoning is completely different from a human's." The ultimate goal of projects such as this one is to build the next generation of AI systems, Baldi says. Whether they know it or not, artificial intelligence touches people every day through apps such as Siri and Alexa and recommendation engines working behind the scenes of their favorite online services. "But these systems are not really intelligent; they're brittle, and you can easily break or fool them," Baldi says.
Like other human champions facing a machine opponent, Grzegorz "MaNa" Komincz rated his chances. "A realistic goal would be 4-1 in my favour," he told an interviewer before the match. One of the world's best players of the video game StarCraft II, Komincz was at the height of a successful esports career. Artificial intelligence company DeepMind invited him to face its latest AI, a StarCraft II-playing bot called AlphaStar, on 19 December 2018. Komincz was expected to be a tough opponent.
Microsoft's listening program continues to grow in scope after a new report revealed that contractors harvested audio unintentionally captured from Xbox users through Cortana and the Kinect. Motherboard reports that Xbox users were recorded by Microsoft as part of a program to analyze users' voice commands for accuracy, and that those recordings were assessed by human contractors. While the program was designed to scrape only audio uttered after a wake-word, contractors hired by Microsoft report that some recordings were captured accidentally, without any prompt. The practice, reports Motherboard, has been ongoing for several years, since the early days of the Xbox One, and predates Xbox's integration with its voice assistant, Cortana.
In this article, I'm going to tell you about automating corporate strategy using artificial intelligence, and look at some recent progress in automatically generating strategies in the face of uncertainty. Every day, progress in artificial intelligence takes on tasks currently performed only by humans, and it's worthwhile to take a short-term view of what this all means for your company. Games like chess have been tackled by artificial intelligence with amazing results, but there has long been a big gap between those games, where everything about the game state and the consequences of each move is known before making a decision, and the reality of life, which, like poker, offers decision-makers only partial information whose quality and quantity vary wildly. We humans face this kind of high uncertainty every time we cross the street or eat a hamburger, and it doesn't seem to bother us. Until recently, however, computers have had a lot of trouble dealing with games that give the decision-maker incomplete information about the state of the game.
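The gap described above, between perfect-information games like chess and imperfect-information games like poker, is commonly bridged with regret-minimization methods. As a rough illustration (not drawn from the article, and far simpler than the techniques used for real poker), here is a minimal regret-matching sketch for rock-paper-scissors self-play; all names and parameters are my own:

```python
import random

# Regret matching: each player picks actions in proportion to accumulated
# positive regret. In self-play on a zero-sum game like rock-paper-scissors,
# the AVERAGE strategy approaches the equilibrium (roughly 1/3 each).
N = 3  # actions: rock, paper, scissors
# PAYOFF[a][b] = payoff to the player choosing a against an opponent choosing b
PAYOFF = [[ 0, -1,  1],
          [ 1,  0, -1],
          [-1,  1,  0]]

def strategy_from_regrets(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    # fall back to uniform play when no action has positive regret
    return [p / total for p in positive] if total > 0 else [1.0 / N] * N

def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * N, [0.0] * N]
    strategy_sum = [[0.0] * N, [0.0] * N]
    for _ in range(iterations):
        strategies = [strategy_from_regrets(regrets[p]) for p in (0, 1)]
        actions = [rng.choices(range(N), weights=strategies[p])[0] for p in (0, 1)]
        for p in (0, 1):
            for a in range(N):
                strategy_sum[p][a] += strategies[p][a]
        # regret = payoff an alternative action would have earned,
        # minus the payoff actually received
        for a in range(N):
            regrets[0][a] += PAYOFF[a][actions[1]] - PAYOFF[actions[0]][actions[1]]
            regrets[1][a] += PAYOFF[a][actions[0]] - PAYOFF[actions[1]][actions[0]]
    total = sum(strategy_sum[0])
    return [s / total for s in strategy_sum[0]]  # player 0's average strategy

avg = train()
```

Each player only ever sees the outcome of the joint play, never the opponent's plan, which is what makes this an imperfect-information setting in miniature; scaling the same regret-minimization idea up to full poker is what required the recent breakthroughs the article alludes to.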
Science fiction can sometimes be a good guide to the future. In the film Upgrade (2018), Grey Trace, the main character, is shot in the neck. His wife is shot dead. Trace wakes up to discover that not only has he lost his wife, but he now faces a future as a wheelchair-bound quadriplegic. He is implanted with a computer chip called Stem, designed by famous tech innovator Eron Keen – any similarity with Elon Musk must be coincidental – which will let him walk again.
Researchers from the University of Maryland have figured out how to reliably create such questions through a human-computer collaboration, developing a dataset of more than 1,200 questions that, while easy for people to answer, stump the best computer answering systems today. A system that learns to master these questions will have a better understanding of language than any system currently in existence. The work is described in an article published in 2019 in the journal Transactions of the Association for Computational Linguistics. "Most question-answering computer systems don't explain why they answer the way they do, but our work helps us see what computers actually understand," said Jordan Boyd-Graber, associate professor of computer science at UMD and senior author of the paper. "In addition, we have produced a dataset to test on computers that will reveal if a computer language system is actually reading and doing the same sorts of processing that humans are able to do."