If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Can machines design? Can they come up with creative solutions to problems and build tools and artifacts across a wide range of domains? Recent advances in the field of computational creativity and formal Artificial General Intelligence (AGI) provide frameworks for machines with the general ability to design. In this paper we propose to integrate a formal computational creativity framework into the Gödel machine framework. We call the resulting framework the design Gödel machine. Such a machine could solve a variety of design problems by generating novel concepts. In addition, it could change the way these concepts are generated by modifying itself. The design Gödel machine is able to improve its initial design program once it has proven that a modification would increase its return on the utility function. Finally, we sketch a specific version of the design Gödel machine that addresses the design of complex software and hardware systems. Future work aims at the development of a more formal version of the design Gödel machine and a proof-of-concept implementation.
Since Alan Turing first posed the question "Can machines think?" in his seminal 1950 paper, "Computing Machinery and Intelligence", Artificial Intelligence (AI) has failed to deliver on its original promise: Artificial General Intelligence. There have, however, been incredible advances in the field, including Deep Blue beating the world's best chess player, the birth of autonomous vehicles, and DeepMind's AlphaGo beating the world's best Go player. These achievements represent the culmination of more than 65 years of research and development. Importantly, during this period there were two well-documented AI Winters that almost completely discredited the promise of AI.
Machine Learning (ML), along with the Internet of Things (IoT), seems to be the next big revolution in science and technology. AI experts are debating why machine learning is today's most remarkable development, and they are trying to predict how ML will evolve and affect the future. The ability to feed a machine large amounts of data, so that it can learn concepts and rules for specific categories of problems and solutions, is a critical part of AI development. The term 'machine learning' was coined in 1959 by Arthur Samuel, an AI pioneer.
Amongst all this hype and bandwagon-jumping on Artificial Intelligence (AI), Machine Learning (ML), and cognitive technologies there is also a sense of unease. How is it that a technology with roots going back to the beginnings of computing is suddenly the hot "must have" technology, powering ever-more dramatic sums of money pumped into a few skyrocketing startups? The industry has gone through two major waves of AI development and promotion, each with its own period of sky-high hype, only to sink back to earth once people realized the limitations of what was being hyped as on the cusp of sentience. And so here we are again, in the "summer" of this wave's AI adoption, wondering whether it will last, or whether billion-dollar unicorns are being funded in an environment of overinflated expectations that is sure to be reined in. As discussed in previous newsletters, podcasts, and research on this subject, an AI Winter is a period of declined interest, funding, research, and support for artificial intelligence and related areas: in essence, a "chill" on the growth of the industry.
CEO of AI.io and Moonshot, Terence Mills is an AI pioneer and digital technology specialist. As our society's technological progress marches forward, we've become ever more fascinated with the concept of artificial general intelligence (AGI). From IBM's Jeopardy-playing computer, Watson, to television programs like Westworld, we've collectively begun exploring and philosophizing about the potential of AGI. Of course, most discussions about AGI in our popular culture focus on the future rather than the present realities of artificial general intelligence. Below, we'll discuss the current state of AGI and the breakthroughs we're on the cusp of in 2018.
How Close Are We To True Artificial General Intelligence?
Let's start at the beginning. Why do we even need this term? After several decades of trying and failing (badly), the field largely abandoned the original vision. Nowadays almost all AI work relates to narrow, domain-specific, human-designed capabilities. Powerful as these current applications may be, they are limited to their specific target domains and have very narrow (if any) adaptation or interactive learning ability.
We are living in an age of disruption, and one of the drivers of that change is a development many people call Artificial Intelligence. It is a very broad heading under which many people seem to pin their hopes and fears. AI has been portrayed as an existential threat to humanity, or as just an Excel sheet on steroids. Some, like Ray Kurzweil, point towards the Singularity, the moment when we reach real Artificial Intelligence. Just as gravitational tidal forces become infinite at a gravitational singularity, this Singularity will be an event horizon: a point of no return, a place and time beyond which we cannot see.
When people think of the greatest artists who've ever lived, they probably think of names like Beethoven or Picasso. No one would ever think of a computer as a great artist. But what if one day that were indeed the case? Could computers learn to create incredible drawings like the Mona Lisa? Perhaps one day a robot will be capable of composing the next great symphony. Some experts believe so. In fact, some of the greatest minds in artificial intelligence are diligently working to develop programs that can create drawings and music independently of humans. The use of artificial intelligence in art has even been picked up by tech giants like Google. The projects included in this paper could have drastic implications for our everyday lives. They may also change the way we view art.
"It was his unique teaching style that got me and a bunch of my friends hooked on this topic and field – his enthusiasm towards the material, the intuitive examples that he gives…," says Abhishek Naik, a student pursuing a dual degree from IIT Madras, who recently did a bulk of the work on MADRaS, an open-source multi-agent driving simulator. "Working with him is highly rewarding in the sense that after every meeting, you'll walk out of his office brimming with new ideas and directions to explore."