Before IBM's Deep Blue computer program defeated world champion Garry Kasparov in chess in 1997, many AI pundits believed that machines would never possess the creativity required to rival humans at the game. Years ago, Marvin Minsky coined the phrase "suitcase words" to refer to terms that have a multitude of different meanings packed into them. He gave as examples words like consciousness, morality and creativity. "Artificial intelligence" is a suitcase word. Commentators today use the phrase to mean many different things in many different contexts. As AI becomes more important technologically, economically and geopolitically, the phrase's use--and misuse--will only grow.
Cold War concerns: U.S. government agencies like the Defense Advanced Research Projects Agency (DARPA) fund AI research at universities such as MIT, hoping for machines that will translate Russian instantly. AI winter: The winter lasts two decades, with just a few heat waves of progress. Common-sense AI: Douglas Lenat sets out to construct an AI that can do common-sense reasoning. He develops it for 30 years before it is used commercially.
According to an unofficial consensus, the birth of artificial intelligence as an independent field of research can be dated to the summer of 1956, when John McCarthy, then a member of the mathematics department at Dartmouth College, persuaded the Rockefeller Foundation to finance a study that was "to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it". In addition to McCarthy (who coined the term "artificial intelligence" and was a professor at Stanford University until 2000), several other participants took part in the historic Dartmouth workshop: Marvin Minsky (later a professor at MIT), Claude Shannon (founder of information theory), Herbert Simon (later a Nobel laureate in economics), Arthur Samuel (developer of a pioneering self-learning checkers program), and half a dozen further experts from academia and industry, all of whom dreamed that it might be possible to build machines capable of tasks that, by the prevailing view of the day, required intelligence. The Dartmouth proposal, written at the dawn of the AI age, is both puzzling and vague. It is not clear whether the conference participants believed that machines would one day actually think, or merely behave as if they could think: the word "simulate" allows both interpretations.
To differentiate themselves from researchers solving narrow AI problems, a few research teams have claimed an almost proprietary interest in producing human-level intelligence (or more) under the name "artificial general intelligence." Some have adopted the term "super-intelligence" to describe AGI systems that by themselves could rapidly design even more capable systems, with those systems further evolving to develop capabilities that far exceed any possessed by humans.
From helping in the global fight against Covid-19 to driving cars and writing classical symphonies, artificial intelligence is rapidly reshaping the world we live in. But not everyone is comfortable with this new reality. The billionaire tech entrepreneur Elon Musk has referred to AI as the "biggest existential threat" of our time. With recent scientific studies testing the technology's ability to evolve on its own, every step in its development throws up new concerns as to who is in control and how it will affect the lives of ordinary people. Here are 9 important milestones in the history of AI and the ethical concerns that have long loomed over the field.
"It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable. The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with." The first words uttered on a controversial subject can rarely be taken as the last, but this comment by the British mathematician Lady Lovelace, who died in 1852, is just that--the basis of our understanding of what computers are and can be, including the notion that they might come to acquire artificial intelligence, which here means "strong AI," or the ability to think in the fullest sense of the word. Her words demand and repay close reading: the computer "can do whatever we know how to order it to perform." This means both that it can do only what we know how to instruct it to do, and that it can do all that we know how to instruct it to do.
As well as playing a key role in cracking the Enigma code at Bletchley Park during the Second World War, and conceiving of the modern computer, the British mathematician Alan Turing owes his public reputation to the test he devised in 1950. Crudely speaking, it asks whether a human judge can distinguish between a human and an artificial intelligence based only on their responses to conversation or questions. This test, which he called the "imitation game," was popularised nearly two decades later in Philip K Dick's 1968 science-fiction novel Do Androids Dream of Electric Sheep? But Turing is also widely remembered as having committed suicide in 1954, quite probably driven to it by the hormone treatment he was instructed to take as an alternative to imprisonment for homosexuality (deemed to make him a security risk), and it is only comparatively recently that his genius has been afforded its full due. In 2009, Gordon Brown apologised on behalf of the British government for his treatment; in 2014, his posthumous star rose further again when Benedict Cumberbatch played him in The Imitation Game; and in 2021, he will be the face on the new £50 note.
More than a decade has passed since the British government issued an apology to the mathematician Alan Turing. The tone of pained contrition was appropriate, given Britain's grotesquely ungracious treatment of Turing, who played a decisive role in cracking the German Enigma cipher, allowing Allied intelligence to predict where U-boats would strike and thus saving tens of thousands of lives. Unapologetic about his homosexuality, Turing made a careless admission of an affair with a man while reporting a burglary at his home in 1952, and was arrested for an "act of gross indecency" (the same charge that had led to a jail sentence for Oscar Wilde in 1895). Turing was subsequently given a choice: serve prison time, or undergo a hormone treatment meant to suppress the testosterone that (so the thinking went at the time) made him desire men. He opted for the latter and, two years later, ended his life by taking a bite from an apple laced with cyanide.