Chess


Regulation will 'stifle' AI and hand the lead to Russia and China, warns Garry Kasparov

#artificialintelligence

Garry Kasparov has warned that any attempts by the Government to regulate artificial intelligence (AI) could "stifle" its development and give Russia and China an advantage. The former world chess champion has become an advocate for AI development following his retirement from professional chess in 2005. He told The Telegraph that "the government should be involved" in helping researchers and private firms to develop AI in order to "pave the road" for the technology. However, he cautioned against governments attempting to regulate the technology too closely. "It's too early for the government to interfere," he said.


The Machine Learning research revolution

#artificialintelligence

In recent times, we have seen an increasing number of instances of Artificial Intelligence (AI) donning the proverbial lab coat. In early 2019, thousands of people were screened every day in a hospital in Madurai by an AI system developed by Google that helps diagnose diabetic retinopathy, a condition that can lead to blindness. Startups like Niramai, based in Bengaluru, are developing AI technology for early diagnosis of conditions like breast cancer and river blindness. The sudden, accelerated growth of Machine Learning not just in research but in all walks of life can bring to mind Black Mirror-esque visions of dystopia in which machines rule over humanity. But let us leave worrying about the consequences of the far future to science fiction and look at the immediate impact this technology has had in science.


Machine Learning Today and Tomorrow - Carrier Management

#artificialintelligence

It is difficult to open an insurance industry newsletter these days without seeing some reference to machine learning or its cousin artificial intelligence and how they will revolutionize the industry. Yet according to Willis Towers Watson's recently released 2019/2020 P&C Insurance Advanced Analytics Survey results, fewer companies have adopted machine learning and artificial intelligence than had planned to do so just two years ago (see the accompanying graphic). In the context of insurance, we're not talking about self-driving cars (though these may have important implications for insurance) or chess-playing computers. We're talking about predicting the outcomes of comparatively simple future events: who will buy which product, which clients are more likely to have a particular kind of claim, and which claims will become complex by some definition. The better insurers can estimate the outcomes of these future events, the better they can plan for them and achieve more positive results.
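To make the idea concrete, here is a minimal sketch, not taken from the survey or the article, of how one of these predictions might be framed as a supervised learning problem: a classifier trained to estimate which claims will become complex. The feature names, the synthetic data, the toy definition of "complex," and the choice of gradient boosting are all illustrative assumptions.

```python
# A minimal sketch of predicting a "comparatively simple future event":
# will a given claim become complex? All data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical claim features: claimant age, initial reserve, reporting lag, prior claims.
X = np.column_stack([
    rng.integers(18, 85, n),      # claimant_age (years)
    rng.lognormal(8.0, 1.0, n),   # initial_reserve (currency units)
    rng.integers(0, 60, n),       # report_lag_days
    rng.poisson(0.5, n),          # prior_claims
])

# Toy label: higher reserves, slower reporting, and more prior claims raise the
# probability of a "complex" claim. Real labels would come from claims history.
logit = 0.00005 * X[:, 1] + 0.05 * X[:, 2] + 0.4 * X[:, 3] - 2.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

In practice the features, the working definition of "complex," and the evaluation would come from an insurer's own book of business; the point is only that the prediction target is a narrow, well-defined future event rather than general intelligence.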


AlphaZero: Shedding new light on the grand games of chess, shogi and Go

#artificialintelligence

As with Go, we are excited about AlphaZero's creative response to chess, which has been a grand challenge for artificial intelligence since the dawn of the computing age, with early pioneers including Babbage, Turing, Shannon, and von Neumann all trying their hand at designing chess programs. But AlphaZero is about more than chess, shogi or Go. To create intelligent systems capable of solving a wide range of real-world problems, we need them to be flexible and generalise to new situations. While there has been some progress towards this goal, it remains a major challenge in AI research, with systems capable of mastering specific skills to a very high standard but often failing when presented with even slightly modified tasks. AlphaZero's ability to master three different complex games – and potentially any perfect information game – is an important step towards overcoming this problem.


AI Year in Review: Highlights of Papers from IBM Research in 2019

#artificialintelligence

IBM Research has a long history as a leader in the field of Artificial Intelligence (AI). IBM's pioneering work in AI dates back to the field's inception in the 1950s, when IBM developed one of the first instances of machine learning, which was applied to the game of checkers. Since then, IBM has been responsible for achieving major milestones in AI, ranging from Deep Blue – the first chess-playing computer to defeat a reigning world champion, to Watson – the first natural language question-answering system able to win at Jeopardy!, to last year's Project Debater – the first AI system that can build persuasive arguments on its own and effectively engage in debates on complex topics. IBM's leadership in AI continued in earnest in 2019, a year notable for a growing focus on critical topics such as making trustworthy AI work in practice, creating new AI engineering paradigms to scale AI for broader use, and continuing to advance core AI capabilities in language, speech, vision, knowledge and reasoning, human-centered AI, and more. While recent years have seen incredible progress in "narrow AI" built on technologies like deep learning, in 2019 IBM Research pushed towards a new foundational underpinning of AI for enterprise applications. That work addressed important problems such as learning more from less; enabling trusted AI by ensuring the fairness, explainability, adversarial robustness, and transparency of AI systems; and integrating learning and reasoning as a way to understand more in order to do more.


Living in Alan Turing's Future

#artificialintelligence

More than a decade has passed since the British government issued an apology to the mathematician Alan Turing. The tone of pained contrition was appropriate, given Britain's grotesquely ungracious treatment of Turing, who played a decisive role in cracking the German Enigma cipher, allowing Allied intelligence to predict where U-boats would strike and thus saving tens of thousands of lives. Unapologetic about his homosexuality, Turing had made a careless admission of an affair with a man, in the course of reporting a robbery at his home in 1952, and was arrested for an "act of gross indecency" (the same charge that had led to a jail sentence for Oscar Wilde in 1895). Turing was subsequently given a choice to serve prison time or undergo a hormone treatment meant to reverse the testosterone levels that made him desire men (so the thinking went at the time). Turing opted for the latter and, two years later, ended his life by taking a bite from an apple laced with cyanide.


Did HAL Commit Murder?

#artificialintelligence

Last month at the San Francisco Museum of Modern Art I saw "2001: A Space Odyssey" on the big screen for my 47th time. The fact that this masterpiece remains on nearly every relevant list of "top ten films" and is shown and discussed over a half-century after its 1968 release is a testament to the cultural achievement of its director Stanley Kubrick, writer Arthur C. Clarke, and their team of expert filmmakers. As with each viewing, I discovered or appreciated new details. But three iconic scenes -- HAL's silent murder of astronaut Frank Poole in the vacuum of outer space, HAL's silent medical murder of the three hibernating crewmen, and the poignant, sorrowful "death" of HAL -- prompted deeper reflection, this time about the ethical conundrums of murder by a machine and of a machine. In the past few years, experimental autonomous cars have led to the deaths of pedestrians and passengers alike. AI-powered bots, meanwhile, are infecting networks and influencing national elections. Elon Musk, Stephen Hawking, Sam Harris, and many other leading AI researchers have sounded the alarm: unchecked, they say, AI may progress beyond our control and pose significant dangers to society. When astronauts Frank and Dave retreat to a pod to discuss HAL's apparent malfunctions and whether they should disconnect him, Dave imagines HAL's views and says: "Well, I don't know what he'd think about it."


Machine Learning vs. AI, Important Differences Between Them

#artificialintelligence

Recently, a report was released regarding misuse by companies claiming to use artificial intelligence [29] [30] in their products and services. According to The Verge [29], 40% of European startups that claimed to use AI don't actually use the technology. Last year, TechTalks also stumbled upon such misuse by companies claiming to use machine learning and advanced artificial intelligence to gather and examine thousands of users' data to enhance the user experience of their products and services [2] [33]. Unfortunately, there is still a lot of confusion among the public and the media regarding what artificial intelligence truly is [44] and what machine learning truly is [18]. Often the terms are used as synonyms; in other cases they are treated as discrete, parallel advancements; and still others take advantage of the trend to create hype and excitement so as to increase sales and revenue [2] [31] [32] [45].


Why We Must Unshackle AI From the Boundaries of Human Knowledge

#artificialintelligence

Artificial intelligence (AI) has made astonishing progress in the last decade. AI can now drive cars, diagnose diseases from medical images, recommend movies (and even whom you should date), make investment decisions, and create art that people have sold at auction. A lot of research today, however, focuses on teaching AI to do things the way we do them. For example, computer vision and natural language processing – two of the hottest research areas in the field – deal with building AI models that can see like humans and use language like humans. But instead of teaching computers to imitate human thought, the time has now come to let them evolve on their own, so that instead of becoming like us, they have a chance to become better than us.


Artificial intelligence: How to measure the 'I' in AI

#artificialintelligence

This means that the test favors "program synthesis," the subfield of AI that involves generating programs that satisfy high-level specifications. This approach is in contrast with current trends in AI, which are inclined toward creating programs that are optimized for a limited set of tasks (e.g., playing a single game). In his experiments with ARC, Chollet has found that humans can fully solve ARC tests.
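As a rough illustration of what "generating programs that satisfy high-level specifications" can look like, here is a toy enumerative synthesis sketch. The DSL, primitives, and input-output examples are invented for illustration and are far simpler than ARC's grid tasks; a real ARC solver would need a much richer DSL and a smarter search strategy.

```python
# A minimal sketch of enumerative program synthesis: search for a composition of
# DSL primitives that is consistent with a set of input-output examples.
from itertools import product

# Toy DSL of unary integer primitives. ARC programs would manipulate grids instead.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "negate": lambda x: -x,
    "square": lambda x: x * x,
}

def run(names, x):
    """Apply a chain of primitives, left to right, to the input x."""
    for name in names:
        x = PRIMITIVES[name](x)
    return x

def synthesize(examples, max_depth=3):
    """Return the shortest chain of primitives consistent with all examples, or None."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            if all(run(names, i) == o for i, o in examples):
                return names
    return None

# The specification is given purely as input-output pairs, here f(x) = 2x + 1.
print(synthesize([(1, 3), (2, 5), (4, 9)]))   # -> ('double', 'inc')
```

The search is brute force, but it makes the contrast with task-specific optimization visible: nothing in the synthesizer is tuned to one task, and the same loop can be pointed at any specification the DSL can express.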