The explosion of breakthroughs, investment, and entrepreneurial activity around artificial intelligence over the last decade has been driven exclusively by deep learning, a sophisticated statistical technique for finding hidden patterns in large quantities of data. The term "artificial intelligence," coined in 1955, was applied (or misapplied) to deep learning, a more advanced version of machine learning--a term coined in 1959--an approach to training computers to perform certain tasks. Deep learning's recent success is the result of the increased availability of lots of data (big data) and the advent of Graphics Processing Units (GPUs), which together significantly increased the breadth and depth of the data used for training computers and reduced the time required to train deep learning algorithms. The technology that animated movies like "Toy Story" and enabled a variety of special effects is the ... [ ] focus of this year's Turing Award, the technology industry's version of the Nobel Prize. The term "big data" first appeared in the computer science literature in October 1997, in "Application-Controlled Demand Paging for Out-of-Core Visualization" by Michael Cox and David Ellsworth, published in the Proceedings of the IEEE 8th Conference on Visualization.
Ken Burns has spent the last 40 years chronicling the most poignant and influential events in American history. The 66-year-old Oscar-nominated filmmaker has crafted definitive and multifaceted histories of the Civil War, baseball, the Roosevelts, cancer, country music and jazz. In an age of short Tweets and shorter attention spans, Burns's films are sprawling, deep-dive studies on topics that simultaneously reveal the best and worst of America. We are living through one of those moments right now as the coronavirus shakes every aspect of American life. With most of the country stuck at home and weathering a torrent of fear and breaking news, Burns is offering an alternative.
As someone educated in science and engineering, I've always considered the pursuit of new technologies a higher calling. As someone raised Roman Catholic, I also tend to pay attention when a higher call comes in -- like one from the Vatican. Last year, the Vatican reached out to our company, IBM. Pope Francis was worried about technology's effects on society and families around the world, and about its potential to widen the gap between rich and poor. How could the world harness AI for the greater good while reducing its potential to be a force for evil?
People who worked with me on this episode: Doki Tops of Utomik.com and Cris Reed of The Level Up Experience. A quick word: the games industry is doing really well during this lockdown. All good, but let's make sure that we, as an industry, also look after people who are being hurt financially, physically, and emotionally. It is our audiences that are under pressure from COVID-19, the coronavirus. Let's make sure we do our part to make the world healthy and secure again!
Not only does this provide useful information to users in the moment, but it has also helped raise awareness and increase the adoption of Lexikon. Since launching the Lexikon Slack Bot, we've seen a sustained 25% increase in the number of Lexikon links shared on Slack per week. You just listened to a track by a new artist on your Discover Weekly and you're hooked. You want to hear more and learn about the artist. So, you go to the artist page on Spotify where you can check out the most popular tracks across different albums, read an artist bio, check out playlists where people tend to discover the artist, and explore similar artists.
Good gamers can tune out distractions and unimportant on-screen information and focus their attention on avoiding obstacles and overtaking others in virtual racing games like Mario Kart. But can machines behave similarly in such vision-based tasks? A possible solution is designing agents that encode and process abstract concepts, and research in this area has focused on learning all abstract information from visual inputs. This, however, is compute-intensive and can even degrade model performance. Now, researchers from Google Brain Tokyo and Google Japan have proposed a novel approach that helps guide reinforcement learning (RL) agents toward what's important in vision-based tasks.
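The core idea of attention-guided agents can be illustrated with a toy sketch: split each frame into patches, score how much each patch "votes" for the others with a self-attention layer, and let the agent act only on the top-scoring patches. This is a minimal illustration, not the researchers' actual method -- the patch size, projection dimension, and random (untrained) attention weights here are all assumptions for demonstration:

```python
import numpy as np

def top_k_patches(frame, patch=16, k=5, seed=0):
    """Illustrative sketch: rank image patches by self-attention votes.
    Weights are random stand-ins; a real agent would learn or evolve them."""
    h, w, c = frame.shape
    # Split the frame into non-overlapping patches, flattened to vectors.
    grid = frame.reshape(h // patch, patch, w // patch, patch, c)
    patches = grid.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

    rng = np.random.default_rng(seed)
    d = 32                                   # projection dimension (assumed)
    wq = rng.normal(size=(patches.shape[1], d))   # query projection
    wk = rng.normal(size=(patches.shape[1], d))   # key projection

    q, kmat = patches @ wq, patches @ wk
    scores = q @ kmat.T / np.sqrt(d)         # patch-to-patch attention logits
    att = np.exp(scores - scores.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)    # softmax over each row

    importance = att.sum(axis=0)             # total votes each patch receives
    return np.argsort(importance)[-k:][::-1] # indices of the k top patches

# A 64x64 frame with one bright region standing in for an "obstacle".
frame = np.zeros((64, 64, 3), dtype=np.float32)
frame[16:32, 32:48] = 1.0
print(top_k_patches(frame))                  # indices of the attended patches
```

The payoff hinted at in the article is that downstream processing only sees a handful of patch indices instead of the whole frame, which is far cheaper than learning abstractions from every pixel.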
With more board configurations than there are atoms in the universe, the ancient Chinese game of Go has long been considered a grand challenge for artificial intelligence. On March 9, 2016, the worlds of Go and artificial intelligence collided in South Korea for an extraordinary best-of-five-game competition, dubbed the DeepMind Challenge Match. Hundreds of millions of people around the world watched as a legendary Go master took on an unproven AI challenger for the first time in history. Directed by Greg Kohs with an original score by Academy Award nominee Hauschka, AlphaGo chronicles a journey from the halls of Oxford, through the backstreets of Bordeaux, past the coding terminals of DeepMind in London, and ultimately to the seven-day tournament in Seoul. As the drama unfolds, more questions emerge: What can artificial intelligence reveal about a 3,000-year-old game?
Video games are being prescribed as a recommended treatment for our ongoing homebound existence, brought on by the coronavirus pandemic. Game makers on Saturday began kicking off a new World Health Organization (WHO) initiative, #PlayApartTogether, to encourage people to entertain themselves while practicing physical distancing. The initiative is particularly noteworthy because WHO previously designated video game addiction an official mental health disorder. But the group hopes that the industry can "reach millions with important messages to help prevent the spread of COVID-19," said Ray Chambers, the U.S. ambassador to WHO, in a statement. Game companies will encourage players to stay distanced and observe other safety measures, including hand hygiene, he said.
Today, machines with artificial intelligence (AI) are becoming more prevalent in society. Across many fields, AI has taken over numerous tasks that humans used to do. Because the reference point is human intelligence, artificial intelligence is continually measured against what humans can do. However, the technology has not yet matched the depth of wisdom possessed by humans, and it seems unlikely to reach that milestone anytime soon. To replace human beings at most jobs, machines need to exhibit what we intuitively call "common sense."
Note: the original article has been split into two, since I think the two points were only vaguely related. I will leave it as is here, since I'd rather not re-post stuff, and I think the audience on LW might see the "link" between the two separate ideas presented here. Let's begin with a gentle introduction into the field of AI risk -- possibly unrelated to the broader topic, but it's what motivated me to write about the matter, and it's also a worthwhile perspective to start the discussion from. I hope for this article to be part musing on what we should assume machine learning can do and why we'd make those assumptions, and part reference guide for "when not to be amazed that a neural network can do something." I've often had a bone to pick with "AI risk" or, as I've referred to it, "AI alarmism." When evaluating AI risk, there are multiple views on the location of the threat and the perceived warning signs. I would call one of these viewpoints the "Bostromian position," which seems to be promoted mainly by MIRI, by philosophers like Nick Bostrom, and on forums such as the AI Alignment Forum.