A Decade Of Advancements As We Enter A New Age Of AI

#artificialintelligence

As we embark on the next decade of innovations in AI, Daniel Pitchford looks back at the five biggest industry milestones of the 2010s, how they impacted investment in the sector, and how they have shaped the advance of the technology. The 2010s will be known for the advent of one of the most powerful technologies on the planet – Artificial Intelligence. As more funding is made available for its development and it becomes more widely accepted by companies and consumers alike, it is worth reviewing the major milestones of the last decade that made this advancement possible. The game is on, Watson: IBM's Jeopardy! triumph. The first major milestone of AI hitting the mainstream came when IBM's "super-computer" Watson beat long-standing Jeopardy! champions Ken Jennings and Brad Rutter in 2011. Watson won the $1m TV game show with $77,147, leaving Jennings and Rutter far behind at $24,000 and $21,600 respectively.



Language understanding remains one of AI's grand challenges

#artificialintelligence

David Ferrucci will deliver a keynote at the O'Reilly Artificial Intelligence Conference in NYC, June 26-29, 2017. His colleague Jennifer Chu-Carroll will also give a talk, "Beyond the state of the art in reading comprehension," at the same conference. Subscribe to the O'Reilly Data Show Podcast to explore the opportunities and techniques driving big data, data science, and AI. Find us on Stitcher, TuneIn, iTunes, SoundCloud, RSS. In this episode of the Data Show, I spoke with David Ferrucci, founder of Elemental Cognition and senior technologist at Bridgewater Associates.


AI Supercomputers: Microsoft Oxford, IBM Watson, Google DeepMind, Baidu Minwa

@machinelearnbot

The Artificial Intelligence revolution is here. We are moving further into an age where the imagination stirred by childhoods spent watching movies is becoming reality. Leading us into this age are the typical (and untypical) tech giants, who are fiercely competing for the next breakthrough. Project Oxford is Microsoft's venture into the world of artificial intelligence and deep learning. It spans several key areas, including image, facial, text, and speech recognition, and Microsoft hopes to implement the technology in its computer operating systems and smartphone software. A rough sketch of how such a service was consumed appears below.
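
As an illustration only: the snippet below posts an image URL to a Project Oxford-style face-detection REST endpoint. The URL, request shape, and header reflect the general form of the 2015-era API as best recalled; treat them as stand-ins rather than a verified current interface.

# Minimal sketch: calling a Project Oxford-style face-detection REST API.
# The endpoint, header, and payload below are illustrative of the era's API
# shape, not an authoritative reference for the current service.
import requests

ENDPOINT = "https://api.projectoxford.ai/face/v1.0/detect"  # historical base URL
SUBSCRIPTION_KEY = "your-key-here"  # placeholder; issued per account

def detect_faces(image_url: str) -> list:
    """Send an image URL to the face-detection endpoint and return raw results."""
    response = requests.post(
        ENDPOINT,
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()  # one entry per detected face, with a bounding box

if __name__ == "__main__":
    for face in detect_faces("https://example.com/photo.jpg"):
        print(face.get("faceRectangle"))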


Artificial intelligence

#artificialintelligence

Major AI researchers and textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "The science and engineering of making intelligent machines". AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues.
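
The textbook definition above maps directly onto a small amount of code: an agent is anything with a perceive-decide-act loop. The skeleton below is a generic rendering of that idea; the class and method names are illustrative, not taken from any particular textbook's code.

# A minimal intelligent-agent skeleton matching the textbook definition:
# a system that perceives its environment and acts to maximize its
# chances of success. Names here are illustrative.
from abc import ABC, abstractmethod

class Agent(ABC):
    @abstractmethod
    def perceive(self, environment) -> object:
        """Observe the environment and return a percept."""

    @abstractmethod
    def act(self, percept) -> object:
        """Choose the action expected to maximize the chance of success."""

def run(agent: Agent, environment, steps: int) -> None:
    """Standard perceive-act loop coupling an agent to its environment."""
    for _ in range(steps):
        percept = agent.perceive(environment)
        action = agent.act(percept)
        environment.apply(action)  # assumes the environment exposes apply()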


The AI Behind Watson -- The Technical Article

#artificialintelligence

The Jeopardy! Challenge helped us address requirements that led to the design of the DeepQA architecture and the implementation of Watson. After three years of intense research and development by a core team of about 20 researchers, Watson performs at human expert levels in terms of precision, confidence, and speed on the Jeopardy! quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that may be used as a foundation for combining, deploying, evaluating, and advancing a wide range of algorithmic techniques to rapidly advance the field of QA. The architecture and methodology developed as part of this project have highlighted the need to take a systems-level approach to research in QA, and we believe this applies to research in the broader field of AI. We have developed many different algorithms for addressing different kinds of problems in QA and plan to publish many of them in more detail in the future.
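
The systems-level point is easier to see in code. The sketch below wires interchangeable candidate generators and evidence scorers into a single confidence-ranking pipeline, in the spirit of the DeepQA architecture described above; the component names and the simple averaged merger are hypothetical, not the actual Watson interfaces.

# Sketch of a DeepQA-style QA pipeline: generate candidate answers, score
# each against evidence, and merge scores into a confidence-ranked list.
# Component names are hypothetical; only the structure mirrors the idea.
from typing import Callable, List, Tuple

CandidateGenerator = Callable[[str], List[str]]  # question -> candidate answers
EvidenceScorer = Callable[[str, str], float]     # (question, candidate) -> score

def answer(question: str,
           generators: List[CandidateGenerator],
           scorers: List[EvidenceScorer]) -> List[Tuple[str, float]]:
    """Return candidate answers ranked by a simple averaged confidence."""
    candidates = {c for g in generators for c in g(question)}
    ranked = []
    for candidate in candidates:
        scores = [s(question, candidate) for s in scorers]
        confidence = sum(scores) / len(scores)  # Watson used a learned merger
        ranked.append((candidate, confidence))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)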


Analysis of Watson's Strategies for Playing Jeopardy!

Journal of Artificial Intelligence Research

Major advances in Question Answering technology were needed for IBM Watson to play Jeopardy! at championship level -- the show requires rapid-fire answers to challenging natural language questions, broad general knowledge, high precision, and accurate confidence estimates. In addition, Jeopardy! features four types of decision making of great strategic importance: (1) Daily Double wagering; (2) Final Jeopardy wagering; (3) selecting the next square when in control of the board; (4) deciding whether to attempt to answer, i.e., "buzz in." Using sophisticated strategies for these decisions that properly account for the game state and future event probabilities can significantly boost a player's overall chances to win, compared with simple "rule of thumb" strategies. This article presents our approach to developing Watson's game-playing strategies, comprising development of a faithful simulation model and then the use of learning and Monte Carlo methods within the simulator to optimize Watson's strategic decision-making. After giving a detailed description of each of our game-strategy algorithms, we focus in particular on validating the accuracy of the simulator's predictions and on documenting the performance improvements achieved with our methods. Quantitative performance benefits are shown with respect to both simple heuristic strategies and actual human contestant performance in historical episodes. We further extend our analysis of human play to derive a number of valuable and counterintuitive examples illustrating how human contestants may improve their performance on the show.
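
As a toy illustration of the Monte Carlo idea described in the abstract, the sketch below selects a Final Jeopardy!-style wager by simulating many endgame outcomes under assumed answer accuracies. The model is deliberately simplistic (a single opponent who always wagers everything, fixed independent accuracies) and is not the paper's simulator.

# Toy Monte Carlo wager selection for a Final Jeopardy!-style endgame.
# Deliberately simplified: one opponent who always wagers everything, and
# fixed independent answer accuracies. Illustrative only, not Watson's model.
import random

def win_probability(our_score: int, opp_score: int, wager: int,
                    our_acc: float = 0.85, opp_acc: float = 0.65,
                    trials: int = 5000) -> float:
    """Estimate P(win) for a given wager by simulating endgame outcomes."""
    wins = 0
    for _ in range(trials):
        us = our_score + (wager if random.random() < our_acc else -wager)
        them = opp_score + (opp_score if random.random() < opp_acc else -opp_score)
        wins += us > them
    return wins / trials

def best_wager(our_score: int, opp_score: int) -> int:
    """Scan candidate wagers and return the one maximizing estimated P(win)."""
    return max(range(0, our_score + 1, 100),
               key=lambda w: win_probability(our_score, opp_score, w))

if __name__ == "__main__":
    print(best_wager(our_score=20000, opp_score=16000))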

