Poker is considered a good challenge for AI, as it is seen as a combination of mathematical/strategic play and human intuition, especially intuition about the strategies of others. I would place the game between the two extremes of technical versus human skill: chess and rock-paper-scissors. In chess, the technically superior player will almost always win; an amateur would lose literally 100% of their games to the top chess-playing AI. In rock-paper-scissors, if the top AI plays the perfect strategy (each option one third of the time), it will be unbeatable, but by definition it will also be incapable of beating anyone. To see why, consider how it fares against the Bart Simpson strategy: if your opponent always plays rock, you will play rock one third of the time, paper one third, and scissors one third, meaning you will tie one third of the time, win one third, and lose one third.
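The argument above can be checked with a quick simulation. This is a minimal sketch (the move names and trial count are arbitrary choices, not from the original): a uniform-random player faces an opponent who always throws rock, and the win/tie/loss frequencies each come out near one third.

```python
import random

random.seed(0)

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play_round(mine, theirs):
    """Return 'win', 'lose', or 'tie' from the first player's perspective."""
    if mine == theirs:
        return "tie"
    return "win" if BEATS[mine] == theirs else "lose"

# Uniform-random strategy vs. the "Bart Simpson" strategy (always rock).
trials = 30_000
results = {"win": 0, "lose": 0, "tie": 0}
for _ in range(trials):
    results[play_round(random.choice(MOVES), "rock")] += 1

for outcome in ("win", "tie", "lose"):
    print(f"{outcome}: {results[outcome] / trials:.3f}")  # each close to 1/3
```

The same holds against any fixed opponent strategy: because the uniform mixture is independent of the opponent's move, the expected score is always zero, which is exactly why the "unbeatable" strategy can never win on average.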
When IBM's Deep Blue chess machine defeated world chess champion Garry Kasparov in 1997, the world responded with surprise and angst at how far computers had come: "Be Afraid," read a Weekly Standard headline reacting to the news. Artificial intelligence has since made advancements that were unthinkable just 20 years ago -- in the past decade alone, robots have achieved dominance over humans in games far more complex than chess. While most of those advances can't be quantified with milestones like chess victories, programmers have continued the tradition of building machines designed to outsmart humans at our own games. Here's a comprehensive list of the competitions, games, and challenges that robots beat humans at in the past decade.
This is a clip from a conversation with Garry Kasparov from Oct 2019. You can watch the full conversation here: https://www.youtube.com/watch?v=8RVa0... (more links below) Podcast full episodes playlist: https://www.youtube.com/playlist?list... Podcast clips playlist: https://www.youtube.com/playlist?list... Podcast website: https://lexfridman.com/ai Note: I select clips with insights from these much longer conversations with the hope of helping make these ideas more accessible and discoverable. Ultimately, this podcast is a small side hobby for me with the goal of sharing and discussing ideas. I did a poll and 92% of people either liked or loved the posting of daily clips, 2% were indifferent, and 6% hated it, some suggesting that I post them on a separate YouTube channel.
Garry Kasparov, the chess legend, defeated Deep Blue, the powerful computer built by IBM, in a historic match in 1996. A year later, in a rematch, much to the surprise of many, a vastly improved computer beat the International Grandmaster. What happened was that Deep Blue 2 used the same heuristics as Deep Blue 1, but it was empowered with more CPU power. Today, two decades later, we see significant advances in Artificial Intelligence (AI). IBM's Watson won Jeopardy!, combining speech recognition, search, and speech generation.
Thank you so much for joining me so early today. I am very excited to be here. And thank you to the organizers at my con, and especially Paul, for having me. We are not easing into this morning; we are crashing straight into a 30-minute crash course on the current state of the field of artificial intelligence. So thank you for being on this journey with me.
Artificial Intelligence (AI) is probably one of the most misinterpreted technologies within the Industry 4.0 umbrella. On the one hand, as a primary subject of science fiction, many surreal, dramatic, and romanticized forms of AI have emerged through the years, blurring our understanding of what computer science is actually capable of today. On the other hand, doomsday reports of economic and job losses caused by AI fill the chronicles, forecasts, and editorials of traditional and digital media. Unfortunately, the scope of this post prevents the author from adequately addressing those misunderstandings by providing a clear and thorough landscape for AI. And because of this polarizing focus, only a few people realize that AI is inducing a paradigm shift in computing.
Artificial Intelligence (AI) and esports might seem like a strange combination at first, but they actually go hand in hand. Games have long been used to test and advance AI research. In 1997, Deep Blue, an AI developed by IBM, beat the world chess champion Kasparov at chess. This was a huge achievement for the field of artificial intelligence, and it did not stop there. Recently, DeepMind developed an AI that can play StarCraft II, a complex real-time strategy game, at human level, which is also a great achievement.
Since 2010, the use of artificial intelligence (AI) by active managers has been increasing at a striking pace. To borrow from the Roman historian Suetonius, "AI investing is not going away." At a 2017 conference organised by J.P. Morgan, the bank asked 237 investors about big data and machine learning and found that "70 per cent thought that the importance of these tools (of AI) will gradually grow for all investors. And a further 23 per cent said they expected a revolution, with rapid changes to the investment landscape". But this investor interest in AI also signals a certain frustration with current active, and specifically quant, managers and the nascent promise shown by AI hedge funds.
James Dean, who has been dead for 64 years, is set to star in an upcoming movie about the Vietnam War. According to the Hollywood Reporter, the movie, an adaptation of Gareth Crocker's novel Finding Jack, will feature a computer-generated image (CGI) version of Dean. "We searched high and low for the perfect character to portray the role of Rogan, which has some extreme complex character arcs, and after months of research, we decided on James Dean," says Anton Ernst, one of the film's co-directors. Dean will be reconstructed through "full-body" CGI using archival footage from his films; he will be physically captured through the movements of one actor and voiced by another. That the directors couldn't find a living actor capable of the role, in an age when so much untapped talent is available, is questionable.
We have to go back to the 19th century to find some of the mathematical advances that set the stage for this technology. For example, Bayes' theorem (1812) defined the probability of an event occurring based on knowledge of prior conditions that could be related to that event. Years later, in the 1940s, another group of scientists laid the foundation for computer programming, capable of translating a series of instructions into actions that a computer could execute. These precedents made it possible for the mathematician Alan Turing, in 1950, to ask himself whether it is possible for machines to think. This planted the seed for the creation of computers with artificial intelligence that are capable of autonomously replicating tasks typically performed by humans, such as writing or image recognition.
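Bayes' theorem as described above can be made concrete with a small worked example. This is a minimal sketch; the sensitivity, false-positive rate, and prevalence figures are hypothetical numbers chosen for illustration, not from the original text.

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    """P(H|E) = P(E|H) * P(H) / P(E), where P(E) is computed
    by the law of total probability over H and not-H."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical example: a test that detects a condition 99% of the time,
# with a 5% false-positive rate, applied where prevalence is 1%.
posterior = bayes_posterior(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(f"{posterior:.3f}")  # roughly 0.167
```

Even with a seemingly accurate test, the posterior probability is only about 17%, because true positives from the rare condition are outnumbered by false positives from the common non-condition, which is exactly the kind of conditional reasoning Bayes' theorem formalizes.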