Modern Computing: A Short History, 1945-2022


Inspired by A New History of Modern Computing by Thomas Haigh and Paul E. Ceruzzi, though the selection of key events in the journey from ENIAC to Tesla, from Data Processing to Big Data, is mine. April 1945: John von Neumann's "First Draft of a Report on the EDVAC," often called the founding document of modern computing, defines the stored-program concept. July 1945: Vannevar Bush publishes "As We May Think," in which he envisions the "Memex," a memory-extension device serving as a large personal repository of information that could be instantly retrieved through associative links. Most home computer users in the 1970s were hobbyists who designed and assembled their own machines. The Apple I, devised in a bedroom by Steve Wozniak, Steven Jobs, and Ron Wayne, was a basic circuit board to which enthusiasts would add display units and keyboards. It was the first computer made by Apple Computer Inc., which became one of the fastest-growing companies in history, launching a number of innovative and influential computer hardware and software products.

Council Post: Artificial Intelligence: A Key Technology That's Shaping Our Tomorrow


Manan Shah is the co-founder and CEO of Avalance Global Solutions, a California-based cybersecurity and breach-and-attack-simulation company. Ever since Alan Turing helped decode the Enigma messages used by the Germans during World War II, the concept of artificial intelligence has gained traction. But it was only in 1956 that the term was officially coined, by none other than John McCarthy. That was the era when the debate over artificial intelligence began and became a heated topic. The concept fascinated many free thinkers and frightened others.

What is the State-of-the-Art & Future of Artificial Intelligence?


In 1958, the New York Times reported on a demonstration by the US Navy of Frank Rosenblatt's "perceptron" (a rudimentary precursor to today's deep neural networks): "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence." This optimistic take was quickly followed by similar proclamations from AI pioneers, this time about the promise of logic-based "symbolic" AI. In 1960 Herbert Simon declared that "machines will be capable, within twenty years, of doing any work that a man can do." The following year, Claude Shannon echoed this prediction: "I confidently expect that within a matter of 10 or 15 years, something will emerge from the laboratory which is not too far from the robot of science fiction fame." And a few years later Marvin Minsky predicted that "within a generation ... the problems of creating 'artificial intelligence' will be substantially solved." John McCarthy promoted the term artificial intelligence with the wishful thinking that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve the kinds of problems now reserved for humans, and improve themselves." AI was assumed to simulate human reasoning, giving a computer program the ability to learn and think.
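Rosenblatt's perceptron, for all the Navy's grand claims, is a remarkably simple learning rule: nudge a set of weights toward each example the model misclassifies. The sketch below is an illustrative reconstruction, not code from any of the articles; the function names and the AND-gate example are my own choices.

```python
# Minimal perceptron sketch (illustrative; names and data are assumptions).
# The model computes a weighted sum plus bias and thresholds it at zero;
# on each mistake, weights move toward (or away from) the example.

def train_perceptron(samples, labels, epochs=20, lr=1.0):
    """samples: list of feature tuples; labels: +1 or -1."""
    n = len(samples[0])
    w = [0.0] * n          # weights, one per feature
    b = 0.0                # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # update only on a misclassification
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# A linearly separable toy task (logical AND), which a perceptron can learn:
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
```

Because a single perceptron can only draw a linear boundary, it famously fails on non-separable problems such as XOR, a limitation Minsky and Papert later made central to the symbolic-AI debate.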

Why AI is Harder Than We Think Artificial Intelligence

Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI spring") and periods of disappointment, loss of confidence, and reduced funding ("AI winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.

Seventy years of highs and lows in the history of machine learning


Cold War concerns: U.S. government agencies like the Defense Advanced Research Projects Agency (DARPA) fund AI research at universities such as MIT, hoping for machines that will translate Russian instantly. When those hopes go unmet, an "AI winter" sets in; it lasts two decades, with just a few heat waves of progress. Common-sense AI: Douglas Lenat sets out to construct an AI that can do common-sense reasoning. He develops it for 30 years before it is used commercially.

No Ghost in the Machine - The American Scholar


It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable. The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. The first words uttered on a controversial subject can rarely be taken as the last, but this comment by the British mathematician Lady Lovelace, who died in 1852, is just that: the basis of our understanding of what computers are and can be, including the notion that they might come to acquire artificial intelligence, which here means "strong AI," or the ability to think in the fullest sense of the word. Her words demand and repay close reading: the computer "can do whatever we know how to order it to perform." This means both that it can do only what we know how to instruct it to do, and that it can do all that we know how to instruct it to do.

WWII code breaker buried in Nebraska with UK military honors

FOX News

This undated Watters family photo via the Omaha World-Herald shows Col. John Watters and his wife, Jean Watters, on their wedding day. Jean Watters was buried Monday, Sept. 24, 2018, in Nebraska with British military honors for a secret that she held for decades: her World War II service as a codebreaker of German intelligence communications. The tribute honored Watters for her role decoding for a top-secret military program led by British mathematician Alan Turing, who was the subject of the 2014 Oscar-winning film "The Imitation Game." She was 18 when she enlisted in the Women's Royal Naval Service. She and her husband retired to the U.S. in 1969.

Broken Promises & Empty Threats: The Evolution of AI in the USA, 1956-1996 – Technology's Stories


Artificial Intelligence (AI) is once again a promising technology. The last time this happened was in the 1980s, and before that, the late 1950s through the early 1960s. In between, commentators often described AI as having fallen into "Winter," a period of decline, pessimism, and low funding. Understanding the field's more than six decades of history is difficult because most of our narratives about it have been written by AI insiders and developers themselves, most often from a narrowly American perspective.[1] In addition, the trials and errors of the early years are scarcely discussed in light of the current hype around AI, heightening the risk that past mistakes will be repeated. How can we make better sense of AI's history and what might it tell us about the present moment?

AI technology: Is the genie (or genius) out of the bottle?


It is with great enthusiasm and a healthy dose of angst that I am writing this post. My enthusiasm comes from the...