The "Ultimate" AI Textbook

#artificialintelligence

In this section, we will talk about Artificial Intelligence: its history, applications, the different types of AI, and the programming languages used for AI. Note that I will not be talking about how to code AI; I will mainly focus on the various languages that support AI. No, don't close this tab!!! Ok fine, I'll start doing my job of explaining properly. AI is commonly defined as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages." In simple words, AI is the science of making machines that can think: the practice of getting machines to work and behave like humans.


AI thinks like a corporation--and that's worrying

#artificialintelligence

Artificial intelligence is everywhere, but it is considered in a wholly ahistorical way. To understand the impact AI will have on our lives, it is vital to appreciate the context in which the field was established; after all, statistics and state control have evolved hand in hand for hundreds of years. The origins of AI have been traced not only to analytic philosophy, pure mathematics, and Alan Turing but, perhaps surprisingly, also to the history of public administration. In "The Government Machine: A Revolutionary History of the Computer" (2003), Jon Agar of University College London charts the development of the British civil service as it ballooned from 16,000 employees in 1797 to 460,000 by 1999.


Survival of the Weakest...A.I.

#artificialintelligence

Recently I read a great book that had been on my "to read" list for quite some time: "Superintelligence: Paths, Dangers, Strategies" by Professor Nick Bostrom. In the book, a tough but great read, Prof. Bostrom shares his views on the opportunities and risks to humankind associated with the ongoing development of Artificial Intelligence (A.I.). Drawing on his broad knowledge of mathematics, engineering, medicine, social science, and philosophy, Prof. Bostrom explains the possible dangers associated with A.I. reaching the level of superintelligence. Superintelligence is a term used to describe a level of artificial intelligence that far surpasses the intelligence of the brightest human minds, whether alive today or yet to come. So after reading the book and doing some further post-reading "research", a question kept popping up in my mind. Now, before offering my view on a possible answer to this question, and eventually relating it back to the title of this article, I think it is good to first go over some commonly used terms and concepts of A.I., without going into too much detail. Artificial Intelligence, or A.I., as a term and even as a discipline, was introduced by one of the "founding fathers of A.I.", John McCarthy (Lisp programming, anyone?), at the famous Dartmouth conference in the mid-fifties of the previous century.


12 Breakthroughs That Shaped today's Artificial Intelligence

#artificialintelligence

Artificial intelligence is suddenly in people's homes, driving their cars, and running their security systems. Users interact with chatbots, sometimes unaware they're not talking to live people. Designers and marketing agencies trust computer-generated insights and machine learning over human input when making business decisions. Artificial intelligence may seem to have developed overnight, but it is the product of a series of developments stretching back hundreds of years. It's hard to imagine that, 381 years ago, anyone could have conceived of artificial intelligence.


In the Beginning …

AI Magazine

John McCarthy, then an assistant mathematics professor at Dartmouth, organized the conference and coined the name "artificial intelligence" in his conference proposal. This summer AAAI celebrates the first 50 years of AI and continues to foster the fertile fields of AI at the National AI conference (AAAI-06) and the Innovative Applications of AI conference (IAAI-06) in Boston. The computer age was just dawning in 1956. MIT researchers that year built the TX-0, the first general-purpose, programmable computer built with transistors. That same era, IBM shipped the first magnetic disk storage unit, the 305 RAMAC, composed of 50 magnetically coated metal platters holding 5 million bytes of data.


Deep Learning – Past, Present, and Future

@machinelearnbot

According to Gartner, the number of open positions for deep learning experts grew from almost zero in 2014 to 41,000 today. Much of this growth is being driven by high-tech giants such as Facebook, Apple, Netflix, Microsoft, Google, and Baidu. These big players and others have invested heavily in deep learning projects. Besides hiring experts, they have funded deep learning projects and experiments and acquired deep-learning-related companies. And these investments are only the beginning.


Cognitive collaboration

#artificialintelligence

Although artificial intelligence (AI) has experienced a number of "springs" and "winters" in its roughly 60-year history, it is safe to expect the current AI spring to be both lasting and fertile. Applications that seemed like science fiction a decade ago are becoming science fact at a pace that has surprised even many experts. The stage for the current AI revival was set in 2011 with the televised triumph of the IBM Watson computer system over former Jeopardy! champions. This watershed moment has been followed rapid-fire by a sequence of striking breakthroughs, many involving the machine learning technique known as deep learning. Computer algorithms now beat humans at games of skill, master video games with no prior instruction, 3D-print original paintings in the style of Rembrandt, grade student papers, cook meals, vacuum floors, and drive cars.[1] All of this has created considerable uncertainty about our future relationship with machines, the prospect of technological unemployment, and even the very fate of humanity. On the latter topic, Elon Musk has described AI as "our biggest existential threat," and Stephen Hawking warned that "the development of full artificial intelligence could spell the end of the human race." In his widely discussed book Superintelligence, the philosopher Nick Bostrom discusses the possibility of a kind of technological "singularity," the point at which the general cognitive abilities of computers exceed those of humans.[2] Discussions of these issues are often muddied by the tacit assumption that, because computers outperform humans at various circumscribed tasks, they will soon be able to "outthink" us more generally. Continual rapid growth in computing power and AI breakthroughs notwithstanding, this premise is far from obvious.

