If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Although the concept of artificial intelligence has been around for centuries, it was not until the 1950s that its true possibility was explored. A generation of scientists, mathematicians, and philosophers had entertained the idea, but it was the British polymath Alan Turing who asked the decisive question: if humans use available information, as well as reason, to solve problems and make decisions, then why can't machines do the same thing? Turing described such machines, and how to test their intelligence, in his 1950 paper "Computing Machinery and Intelligence", yet his ideas did not advance for some time. The main obstacle was the computers themselves: before any more growth could happen, they needed to change fundamentally, because they could execute commands but could not store them.
Artificial intelligence is a branch of computer science pursued with the goal of making computers, programs, and machines intelligent, that is, able to think like humans. AI practices aim to design software systems that mimic or display some form of human intelligence, and these techniques have been applied to assist or automate many different actions and activities in software engineering. The attempt, in short, is to create a computer, a piece of software, or a robot that thinks and acts like a human being: just as the brain observes, perceives, thinks, and decides, the software is created to do the same.
Every few decades, a technological development leads us to believe that artificial general intelligence (aka strong AI), the brand of AI that can think and decide like humans, is just around the corner. The excitement that follows is accompanied by fears of a dystopian near future and an arms race between companies and states to be the first to create general AI. However, every time we thought we were closing in on strong AI, we have been disappointed. Every time, we have spent vast amounts of time, resources, money, and the energy of our most brilliant scientists on accomplishing something that turned out to be a pipe dream. And every time, what ensued was a period of disappointment and disinterest in the field that lasted decades.
HONG KONG (Reuters Breakingviews) - Artificial intelligence doesn't hate you, prominent researcher Eliezer Yudkowsky wrote, "nor does it love you, but you are made of atoms which it can use for something else". This sets the scene for Tom Chivers' fascinating new book, which borrows its title from the quote, on why so-called superintelligence should be viewed as an existential threat potentially greater than nuclear weapons or climate change. The "strange, irascible and brilliant" Yudkowsky is a central figure throughout the book. His early musings on the potential and dangers of artificial intelligence during the mid- to late-2000s gave birth to the Rationalist movement, a loose community dedicated to AI safety. Chivers, a former science journalist with Buzzfeed and the Telegraph, offers a meticulously researched investigation into who the Rationalists are, and more importantly why they believe humanity is fast approaching an inflection point between "extinction and godhood".
Machine learning and artificial intelligence have taken data centers by storm. As racks fill with ASICs, FPGAs, GPUs, and supercomputers, the face of the hyperscale server farm is changing. These technologies provide the exceptional computing power needed to train machine learning systems. Machine learning involves tremendous amounts of data crunching, a herculean task in itself, and the ultimate goal of this demanding process is to create applications that are smart and to improve services already in everyday use.
After decades of a heavy slog with no promise of success, quantum computing is suddenly buzzing! Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource it now calls the IBM Q Experience. It was more like a toy for researchers than a way of getting any serious number crunching done, but 70,000 users worldwide have registered for it, and the qubit count in this resource has since quadrupled.
Imagine a world where humans co-existed with beings who, like us, had minds, thoughts, feelings, self-conscious awareness and the capacity to perform purposeful actions--but, unlike us, these beings had artificial mechanical bodies that could be switched on and off. That brave new world would throw up many issues as we came to terms with our robot counterparts as part and parcel of everyday life. How should we behave towards them? What moral duties would we have? What moral rights would such non-human persons have?
Chatbots today pop up on websites and in smartphone apps; the same technology helps robots, smart speakers, and other machines operate in a more human-like way. The idea of conversing with a computer is nothing new. As far back as the 1960s, a natural language processing program named Eliza matched typed remarks with scripted responses. The software identified key words and responded with phrases that made it seem as though the computer was responding conversationally. Since then, such conversational interfaces--also known as virtual agents--have advanced remarkably due to greater processing power, cloud computing, and ongoing improvements in artificial intelligence (AI) and machine learning.
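That keyword-and-script pattern is simple enough to sketch in a few lines. The following Python snippet is a hypothetical reconstruction, not Weizenbaum's actual Eliza script: each rule pairs a keyword pattern with a canned response template, and a generic fallback keeps the conversation going when nothing matches.

```python
import re

# Illustrative, made-up rules in the spirit of Eliza (not the original script):
# each pairs a keyword pattern with a scripted response template.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE), "Tell me more about your {0}."),
]

FALLBACK = "Please go on."

def respond(remark: str) -> str:
    """Return the scripted response for the first matching keyword rule."""
    for pattern, template in RULES:
        match = pattern.search(remark)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel anxious about work"))  # -> Why do you feel anxious about work?
print(respond("Nice weather today"))         # -> Please go on.
```

No understanding is involved; the illusion of conversation comes entirely from matching surface patterns, which is why modern virtual agents needed the later gains in processing power and machine learning described above.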
The hype machine is cranked up to an 11 on the topic of machine learning (sometimes called artificial intelligence, though I don't call it that because AI is not really intelligence and there's nothing artificial about it). Machine learning will either empower the world or take it over, depending on what you read. But before you get swept away by the gust of hot air coming from the technology industry, it's important to pause in order to put things into perspective. Maybe just explaining it in reasonable terms will help. Shortly after the first caveman figured out how to make fire, the second caveman wanted to learn how to make fire, too.
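In concrete terms, the "learning" in machine learning is just generalizing from examples, much like the second caveman watching the first. The sketch below uses scikit-learn and toy, invented data (both are assumptions; the passage names no library or dataset) to show the whole loop: fit a model on a few labeled examples, then ask it about a case it has never seen.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy, invented data: each example is [dry_sticks_gathered, minutes_spent_striking],
# and the label records whether a fire was successfully started (1) or not (0).
examples = [[0, 1], [1, 2], [2, 3], [4, 8], [5, 10], [6, 12]]
fire_started = [0, 0, 0, 1, 1, 1]

# "Learning" here is nothing more mysterious than fitting a rule to these observations.
model = DecisionTreeClassifier(random_state=0)
model.fit(examples, fire_started)

# Generalize to a situation the model has not seen before.
print(model.predict([[5, 9]]))  # expected: [1], i.e. predicted to start a fire
```

Whether that counts as intelligence is exactly the question being poked at here; mechanically, it is statistics fit to data.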
As 5G networks continue to expand in cities and countries across the globe, key researchers have already started to lay the foundation for 6G deployments roughly a decade from now. This time, they say, the key selling point won't be faster phones or wireless home internet service, but rather a range of advanced industrial and scientific applications -- including wireless, real-time remote access to human brain-level AI computing. That's one of the more interesting takeaways from a new IEEE paper published by NYU Wireless's pioneering researcher Dr. Ted Rappaport and colleagues, focused on applications for 100 gigahertz (GHz) to 3 terahertz (THz) wireless spectrum. As prior cellular generations have continually expanded the use of radio spectrum from microwave frequencies up to millimeter wave frequencies, that "submillimeter wave" range is the last collection of seemingly safe, non-ionizing frequencies that can be used for communications before hitting optical, x-ray, gamma ray, and cosmic ray wavelengths. Dr. Rappaport's team says that while 5G networks should eventually be able to deliver 100Gbps speeds, the signal densification technology needed to eclipse that rate doesn't yet exist -- even on today's millimeter wave bands, one of which offers access to bandwidth akin to a 500-lane highway.