Alan Kay, Cathie Norris, Elliot Soloway, and I had an article in the September 2019 issue of Communications called "Computational Thinking Should Just Be Good Thinking" (access the article at http://bit.ly/2P7RYEV). Our argument is that "computational thinking" is already here--students use computing every day, and that computing is undoubtedly influencing their thinking. What we really care about is effective, critical, "expanded" thinking, where computing helps us think. To do that, we need better computing. Ken Kahn engaged with our article in the comments section (thank you, Ken!), and he made a provocative comment: There have been many successful attempts to add programming to games: Rocky's Boots (1982), Robot Odyssey (1984), RoboSport (1991), Minecraft (multiple extensions), and probably many more.
Assistant Principal Miles Carey oversees a Rocket League practice at Washington-Liberty High School in Arlington, Va. Nowadays, if you're a teenager who's good at video games, there's a lot more to be had than just a pot of virtual gold. Today, more than 170 colleges and universities participate in varsity esports. Naturally, high schools have followed suit.
In recent times, we have seen an increasing number of instances of Artificial Intelligence (AI) donning the proverbial lab coat. In early 2019, thousands of people were screened every day in a hospital in Madurai by an AI system developed by Google that helps diagnose diabetic retinopathy, a condition that can lead to blindness. Startups like Niramai, based in Bengaluru, are developing AI technology for early diagnosis of conditions like breast cancer and river blindness. The sudden, accelerated growth of machine learning, not just in research but in all walks of life, can bring to mind Black Mirror-esque visions of dystopia in which machines rule over humanity. But let us leave worrying about the consequences of the far future to science fiction and look at the immediate impact this technology has had in science.
In the early 2000s, facing growing competition from video games and the internet, LEGO found itself on the brink of bankruptcy. The company continued to struggle before staging a remarkable turnaround and surpassing Mattel to become the world's largest toy maker. Central to that transformation was a fundamental shift in how LEGO approached its customers. For more than 75 years, LEGO made toys exclusively for customers in a closed innovation process. But over the last decade, LEGO learned how to build with its fan community.
It is difficult to open an insurance industry newsletter these days without seeing some reference to machine learning or its cousin artificial intelligence and how they will revolutionize the industry. Yet according to Willis Towers Watson's recently released 2019/2020 P&C Insurance Advanced Analytics Survey results, fewer companies have adopted machine learning and artificial intelligence than had planned to do so just two years ago (see the accompanying graphic). In the context of insurance, we're not talking about self-driving cars (though these may have important implications for insurance) or chess-playing computers. We're talking about predicting the outcome of comparatively simple future events: who will buy what product; which clients are more likely to have what kind of claim; which claims will become complex according to some definition. The better insurers can estimate the outcomes of these future events, the better they can plan for them and achieve more positive results.
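The kind of prediction described above is often just a probability score for a binary outcome. As a minimal sketch, the following logistic model scores the chance that a claim becomes "complex" from two illustrative features; the feature names and weights here are invented for demonstration, not drawn from the survey or any real insurer's model.

```python
import math

# Hypothetical example: estimate the probability that a claim becomes
# "complex" from two made-up features. The weights and bias below are
# illustrative only, not fitted to real insurance data.
def complex_claim_probability(claimant_age, prior_claims,
                              w_age=0.03, w_prior=0.8, bias=-3.0):
    """Logistic model: sigmoid of a weighted sum of the features."""
    z = bias + w_age * claimant_age + w_prior * prior_claims
    return 1.0 / (1.0 + math.exp(-z))

# A 45-year-old claimant with two prior claims
p = complex_claim_probability(claimant_age=45, prior_claims=2)
```

In practice the weights would be fitted to historical claims data, but the structure (features in, probability out) is the same, which is why even "simple" event prediction benefits from better estimation.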
It is a truth, universally accepted, that video games do not translate well to the big screen. From Assassin's Creed to the Super Mario Bros movie, the result is usually a compromised monstrosity, ignorant of the source material and quickly disowned by the studios, directors and actors responsible for it. There have been exceptions – Detective Pikachu was weird but fine and the Resident Evil films have their fans. But films based on games are usually a mess. Have licensing managers been looking at the wrong screen the whole time?
"It was the worst possible time. Everyone else was doing something different." In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts created a computational model called the Threshold Logic Unit (TLU) to describe how neurons might work. Simulations of neural networks were not practical until computers became more advanced in the 1950s. Before the 2000s, neural networks were widely considered an unpromising area of research; LeCun and Hinton have both recalled that during this period their papers were routinely rejected from publication simply because their subject was neural networks.
Artificial intelligence must be regulated to protect humanity from its dangers, Google's boss has said. The potential damage the technology could do means it is "too important" not to be constrained, according to Sundar Pichai. While it has the potential to save and improve lives, it could also cause damage through misleading videos and the "nefarious uses of facial recognition", he wrote in the New York Times, calling on the world to work together to define what the future of AI should look like. Regulation would be required to prevent AI being influenced by bias, as well as to protect public safety and privacy, he said. "Growing up in India, I was fascinated by technology. Each new invention changed my family's life in meaningful ways. The telephone saved us long trips to the hospital for test results. The refrigerator meant we could spend less time preparing meals, and television allowed us to see the world news and cricket matches we had only imagined while listening to the short-wave radio," he said.
Today's AI systems are superhuman. Computer models based loosely on the neural networks in our brains are trained on vast amounts of data using huge clusters of processors. They can now classify objects in images better than we can. And as IBM and Google's DeepMind have demonstrated, they can beat us at games such as chess and Go, and even achieve the highest rank in the computer game StarCraft II. But at the same time, AI systems are inhuman.
New technologies are poised to challenge assumptions that AI and robotics will be used to perform only low-level and highly repetitive tasks. Over the past decade, U.S. tech firms have made significant advancements in artificial intelligence and robotics, making it far easier and more efficient to automate tasks and functions across industries. Artificial intelligence (AI) affects all types of risks and lines of insurance, and the workers' compensation market has a particularly large stake in the developments. Although the U.S. has experienced technological change and disruption during prior periods of industrial revolution, the pace and scope of the Fourth Industrial Revolution position it to have a far greater impact on the U.S. and global economies. The recent advancements in AI and robotics are some of the most significant computer science advancements of our generation.