Games


Emergent behavior by minimizing chaos

Robohub

All living organisms carve out environmental niches within which they can maintain relative predictability amidst the ever-increasing entropy around them (1), (2). Humans, for example, go to great lengths to shield themselves from surprise -- we band together in millions to build cities with homes, supplying water, food, gas, and electricity to control the deterioration of our bodies and living spaces amidst heat and cold, wind and storm. The need to discover and maintain such surprise-free equilibria has driven great resourcefulness and skill in organisms across very diverse natural habitats. Motivated by this, we ask: could the motive of preserving order amidst chaos guide the automatic acquisition of useful behaviors in artificial agents? This central problem in artificial intelligence has evoked several candidate solutions, largely focusing on novelty-seeking behaviors (3), (4), (5).
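One concrete way to operationalize "preserving order amidst chaos" is to give the agent an intrinsic reward for visiting states that its own learned model of past states finds likely, the mirror image of novelty-seeking. The sketch below is a minimal illustration under assumed simplifications (states as fixed-size vectors, a running diagonal-Gaussian density model, invented class and method names), not the specific method from the article.

```python
# Hedged sketch of a surprise-minimizing intrinsic reward.
# Assumption: states are fixed-size float vectors, and a running
# diagonal-Gaussian density model is expressive enough to illustrate the idea.
import numpy as np

class SurpriseMinimizingReward:
    """Rewards an agent for visiting states its own model finds predictable."""

    def __init__(self, state_dim):
        self.mean = np.zeros(state_dim)
        self.var = np.ones(state_dim)
        self.count = 0

    def update(self, state):
        # Online (Welford-style) update of the per-dimension mean and variance.
        self.count += 1
        delta = state - self.mean
        self.mean += delta / self.count
        self.var += (delta * (state - self.mean) - self.var) / self.count

    def reward(self, state):
        # Intrinsic reward = log-likelihood of the state under the model:
        # familiar, low-surprise states score high; novel states score low.
        var = np.maximum(self.var, 1e-6)
        return -0.5 * np.sum(np.log(2 * np.pi * var)
                             + (state - self.mean) ** 2 / var)
```

In a reinforcement learning loop, this log-likelihood would be computed each step and used in place of (or alongside) the task reward, so that maximizing return means steering the environment toward familiar, predictable states.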


Regulation will 'stifle' AI and hand the lead to Russia and China, warns Garry Kasparov

#artificialintelligence

Garry Kasparov has warned that any attempts by the Government to regulate artificial intelligence (AI) could "stifle" its development and give Russia and China an advantage. The former world chess champion has become an advocate for AI development following his retirement from professional chess in 2005. He told The Telegraph that "the government should be involved" in helping researchers and private firms to develop AI in order to "pave the road" for the technology. However, he cautioned against governments attempting to regulate the technology too closely. "It's too early for the government to interfere," he said.


The AI delusion: why humans trump machines

#artificialintelligence

As well as playing a key role in cracking the Enigma code at Bletchley Park during the Second World War, and conceiving of the modern computer, the British mathematician Alan Turing owes his public reputation to the test he devised in 1950. Crudely speaking, it asks whether a human judge can distinguish between a human and an artificial intelligence based only on their responses to conversation or questions. This test, which he called the "imitation game," was popularised 18 years later in Philip K. Dick's science-fiction novel Do Androids Dream of Electric Sheep? But Turing is also widely remembered as having committed suicide in 1954, quite probably driven to it by the hormone treatment he was instructed to take as an alternative to imprisonment for homosexuality (deemed to make him a security risk), and it is only comparatively recently that his genius has been afforded its full due. In 2009, Gordon Brown apologised on behalf of the British government for his treatment; in 2014, his posthumous star rose further when Benedict Cumberbatch played him in The Imitation Game; and in 2021, he will be the face on the new £50 note.


Sizing the U.S. Student Cohort for Computer Science

Communications of the ACM

Alan Kay, Cathie Norris, Elliot Soloway, and I had an article in the September 2019 issue of Communications called "Computational Thinking Should Just Be Good Thinking" (access the article at http://bit.ly/2P7RYEV). Our argument is that "computational thinking" is already here--students use computing every day, and that computing is undoubtedly influencing their thinking. What we really care about is effective, critical, "expanded" thinking, where computing helps us think. To do that, we need better computing. Ken Kahn engaged with our article in the comments section (thank you, Ken!), and he made a provocative comment: There have been many successful attempts to add programming to games: Rocky's Boots (1982), Robot Odyssey (1984), RoboSport (1991), Minecraft (multiple extensions), and probably many more.


As Esports Take Off, High School Leagues Get In The Game

NPR Technology

Assistant Principal Miles Carey oversees a Rocket League practice at Washington-Liberty High School in Arlington, Va. Nowadays, if you're a teenager who's good at video games, there's a lot more to be had than just a pot of virtual gold. Today, more than 170 colleges and universities participate in organized esports. Naturally, high schools have followed suit.


The Machine Learning research revolution

#artificialintelligence

In recent times, we have seen an increasing number of instances of Artificial Intelligence (AI) donning the proverbial lab coat. In early 2019, thousands of people were screened every day in a hospital in Madurai by an AI system developed by Google that helps diagnose diabetic retinopathy, a condition that can lead to blindness. Startups like Niramai, based in Bengaluru, are developing AI technology for early diagnosis of conditions like breast cancer and river blindness. The sudden, accelerated growth of Machine Learning, not just in research but in all walks of life, can bring to mind Black Mirror-esque visions of dystopia in which machines rule over humanity. But let us leave worrying about the consequences of the far future to science fiction and look at the immediate impact this technology has had in science.


Turn Your Customers into Your Community

#artificialintelligence

In the early 2000s, facing growing competition from video games and the internet, LEGO found itself on the brink of bankruptcy. The company continued to struggle before staging a remarkable turnaround and surpassing Mattel to become the world's largest toy maker. Central to that transformation was a fundamental shift in how LEGO approached its customers. For more than 75 years, LEGO made toys exclusively for customers in a closed innovation process. But over the last decade, LEGO has learned how to build with its fan community.


Machine Learning Today and Tomorrow - Carrier Management

#artificialintelligence

It is difficult to open an insurance industry newsletter these days without seeing some reference to machine learning or its cousin, artificial intelligence, and how they will revolutionize the industry. Yet according to Willis Towers Watson's recently released 2019/2020 P&C Insurance Advanced Analytics Survey results, fewer companies have adopted machine learning and artificial intelligence than had planned to do so just two years ago (see the accompanying graphic). In the context of insurance, we're not talking about self-driving cars (though these may have important implications for insurance) or chess-playing computers. We're talking about predicting the outcomes of comparatively simple future events: who will buy what product, which clients are more likely to have what kind of claim, and which claims will become complex according to some definition. The better insurers can estimate the outcomes of these future events, the better they can plan for them and achieve more positive results.
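Sketched below, purely as a hypothetical illustration of the kind of prediction task the article describes, is a tiny claims-complexity classifier: the feature names, data points, and definition of "complex" are all invented for the example, and a real insurer's model would be far richer.

```python
# Hedged sketch: predicting which claims may become "complex".
# All feature names and data points here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical claims: [claimant_age, prior_claims, initial_reserve_usd]
X = np.array([
    [34, 0, 1200.0],
    [51, 2, 9800.0],
    [28, 1, 3400.0],
    [63, 3, 15000.0],
])
y = np.array([0, 1, 0, 1])  # 1 = the claim later became "complex"

model = LogisticRegression().fit(X, y)

# Estimated probability that a new, unseen claim becomes complex:
new_claim = np.array([[45, 1, 7500.0]])
print(model.predict_proba(new_claim)[0, 1])
```

With such a probability in hand, a carrier could, for instance, route likely-complex claims to senior adjusters earlier.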


Will the success of The Witcher herald a golden age of game-to-TV adaptations?

The Guardian

It is a truth, universally accepted, that video games do not translate well to the big screen. From Assassin's Creed to the Super Mario Bros movie, the result is usually a compromised monstrosity, ignorant of the source material and quickly disowned by the studios, directors and actors responsible for it. There have been exceptions – Detective Pikachu was weird but fine and the Resident Evil films have their fans. But films based on games are usually a mess. Have licensing managers been looking at the wrong screen the whole time?


Understanding AlphaGo: how AI thinks and learns (Advanced)

#artificialintelligence

"It was the worst possible time, Everyone else was doing something different." In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts created computational models based on math algorithms called Threshold Logic Unit (TLU) to describe how neurons might work. Simulations of neural networks were possible until computers became more advanced in the 1950s. Before the 2000s it was considered one of the worst areas of research. LeCun and Hinton variously mentioned how in this period their papers were routinely rejected from being published due to their subject being neural networks.