If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts that results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on different proposals aimed at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. The availability of such information would be of great value, particularly to computer scientists, mathematicians, and others with an interest in AI safety who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, one that could negatively affect human activities and, in the worst case, cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).
The field of artificial intelligence has spawned a vast range of subset fields and terms: machine learning, neural networks, deep learning and cognitive computing, to name but a few. However, here we will turn our attention to the specific term 'artificial general intelligence', thanks to the Portland-based AI company Kimera Systems' (momentous) claim to have launched the world's first ever example, called Nigel. The AGI Society defines artificial general intelligence as "an emerging field aiming at the building of 'thinking machines'; that is, general-purpose systems with intelligence comparable to that of the human mind (and perhaps ultimately well beyond human general intelligence)". AGI would, in theory, be able to perform any intellectual feat a human can. You can now perhaps see why a claim to have launched the world's first ever AGI might be a tad ambitious, to say the least.
Kimera Systems announced the birth of Nigel – the world's first commercial human-like intelligence technology for connected devices. Nigel was delivered at a birthday party held last Friday in downtown Portland by its creator, Kimera co-founder and CEO Mounir Shita. The Nigel artificial general intelligence (AGI) technology began learning immediately in the same way humans do: by observing the behavior of people with Nigel-enabled devices. Shita began working on his single-algorithm, federated approach to artificial intelligence in 2005, and Kimera Systems was formally incorporated in 2012. The technology was dubbed "Nigel" to honor one of its principal architects, Nigel Deighton, a noted international expert on wireless technologies and a former Gartner research vice president, who passed away in 2013.
Summary: Which of these terms means the same thing: AI, Deep Learning, Machine Learning? While there's overlap, none of these is a complete subset of the others and none completely explains the others. For as precise a profession as we data scientists purport to be, we are sometimes way too casual with our language. Read several articles about AI, Deep Learning, and Machine Learning and you will come away confused as to whether these are all the same or all different.
There is a need for a new global platform to monitor, consider, and make recommendations about the implications of emerging technologies in general, and AI more specifically, for international security. The doomsday scenarios spun around this theme are so outlandish – like The Matrix, in which human-created artificial intelligence plugs humans into a simulated reality to harvest energy from their bodies – it's difficult to visualize them as serious threats. Meanwhile, artificially intelligent systems continue to develop apace. Self-driving cars are beginning to share our roads; pocket-sized devices respond to our queries and manage our schedules in real-time; algorithms beat us at Go; robots become better at getting up when they fall over. It's obvious how developing these technologies will benefit humanity. But, then – don't all the dystopian sci-fi stories start out this way?
Continuing the mission of the past AGI conferences, AGI-16 gathers an international group of leading academic and industry researchers involved in scientific and engineering work aimed directly toward the goal of Artificial General Intelligence (AGI). AGI-16 @ New York will be held from July 16-19 of 2016, on the campus of the New School in Lower Manhattan. As a special event for 2016, the AGI-16 conference will be co-located with three other related conferences -- BICA-16, the Neural-Symbolic Workshop 2016 and the AI & Cognition Workshop 2016 -- as part of the overall Human-Level Intelligence 2016 (HLAI-16) event. AGI conferences are organized by the Artificial General Intelligence Society, in cooperation with the Association for the Advancement of Artificial Intelligence (AAAI). The proceedings of AGI-16 will be published as a book in Springer's Lecture Notes in AI series, and all the accepted papers will be available online.
From answering queries to predicting the future of your relationship, a lot is already being said and written about Artificial Intelligence (AI). We've seen movies depicting the technology, like The Matrix, and even Bollywood doesn't fall short of explaining what AI is, of course with a fair share of melodrama. But what seems fascinating and equally scary is a new report talking about an AI arms race. An army of machines may be decades away, but Anja Kaspersen, Head of International Security at the World Economic Forum, pointing to a survey of AI researchers by TechEmergence (via Medium), notes that AI poses an array of security concerns which could be curbed by timely implementation of norms and protocols. Many questions are being raised about how AI could be both a life-changing and a threatening factor, and what happens if it goes into the hands of some malicious minds.
Are you tired of hearing about artificial intelligence yet? Well, I have some bad news. It's only going to become the most important thing in our lives. Every once in a while you read an article that completely overturns the way you think about something. Today, for me, it was The Artificial Intelligence Revolution: Part 1 and Part 2. These are very long articles, so I have taken the liberty of extracting the best parts for a quick skim.