
The First Computer Program

Communications of the ACM

This article describes Charles Babbage's first computer program, which he sketched out almost 200 years ago, in 1837. The Analytical Engine (AE), the computer for which the program was intended, never actually existed; sadly, it was to remain unfinished. Only some portions of Babbage's calculating machine were built during the lifetime of the English mathematician and inventor. Had it been completed, it would have been the world's first computer [1,3]. Of course, many algorithms had already been described before Babbage, the computation of the greatest common divisor (GCD), for example, but Babbage's code is the first attempt to specify how to mechanize complex algorithms with a computer.
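The GCD computation mentioned above is Euclid's algorithm, which compresses to a few lines of modern Python. This is an illustrative sketch of the classical algorithm, not a transcription of Babbage's notation:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return abs(a)

# Example run: (48, 18) -> (18, 12) -> (12, 6) -> (6, 0)
print(gcd(48, 18))  # prints 6
```

What makes Babbage's sketch remarkable is not the algorithm itself, which was ancient even in 1837, but the attempt to express such a loop as operations for a machine.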


Generative AI And The Future Of Creative Jobs

#artificialintelligence

The sudden popularity of generative AI has re-generated a popular pre-pandemic preoccupation: how many jobs will AI destroy? A decade ago, some prediction experts forecast that almost half of U.S. jobs could be replaced by AI by 2023 (!) or, at the latest, by 2033, mainly impacting low-skill jobs (e.g., no more truck drivers because we will have self-driving trucks). Other crystal-ball observers argued that, in contrast to previous waves of automation, we are entering a new era in which the most affected will be highly skilled knowledge workers. The tight labor market of recent years has somewhat muted these dire predictions. The widespread excitement about generative AI, however, is bringing back the anxiety about jobs, especially creative jobs.


Analytical Engines With Context-Rich Processing: Towards Efficient Next-Generation Analytics

Sanca, Viktor, Ailamaki, Anastasia

arXiv.org Artificial Intelligence

As modern data pipelines continue to collect, produce, and store a variety of data formats, extracting and combining value from traditional and context-rich sources such as strings, text, video, audio, and logs becomes a manual process, since such formats are unsuitable for an RDBMS. To tap into this dark data, domain experts analyze and extract insights and integrate them into the data repositories. This process can involve out-of-DBMS, ad-hoc analysis and processing, resulting in ETL overhead, engineering effort, and suboptimal performance. While AI systems based on ML models can automate the analysis process, they often generate further context-rich answers. Using multiple sources of truth, either for training the models or in the form of knowledge bases, further exacerbates the problem of consolidating the data of interest. We envision an analytical engine co-optimized with components that enable context-rich analysis. Firstly, as data from different sources, or data resulting from model answering, cannot be cleaned ahead of time, we propose online data integration via model-assisted similarity operations. Secondly, we aim for holistic pipeline cost- and rule-based optimization across relational and model-based operators. Thirdly, with increasingly heterogeneous hardware and equally heterogeneous workloads ranging from traditional relational analytics to generative model inference, we envision a system that adapts just in time to complex analytical query requirements. To solve increasingly complex analytical problems, ML offers attractive solutions that must be combined with traditional analytical processing, and that can benefit from decades of database community research, to achieve scalability and performance that are effortless for the end user.
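The "model-assisted similarity operations" the authors envision would rely on learned embeddings; as a stand-in, the shape of such an online similarity join can be sketched with a toy character-trigram vectorizer and cosine similarity. All names and the threshold below are illustrative assumptions, not part of the paper:

```python
from collections import Counter
import math

def trigrams(s: str) -> Counter:
    # Pad so short strings still yield trigrams; a real system would
    # substitute a learned embedding model for this step.
    s = f"  {s.lower()}  "
    return Counter(s[i:i + 3] for i in range(len(s) - 2))

def cosine(a: str, b: str) -> float:
    """Cosine similarity between the trigram-count vectors of two strings."""
    va, vb = trigrams(a), trigrams(b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def similarity_join(left, right, threshold=0.5):
    # Pair records whose similarity clears the threshold; there is no
    # upfront cleaning pass, so dirty data is integrated "online".
    return [(l, r) for l in left for r in right
            if cosine(l, r) >= threshold]
```

An embedding-backed version would replace `trigrams` with model inference and push the join into an index rather than a nested loop, which is the kind of co-optimization across relational and model-based operators the abstract describes.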


5 Greatest and Most Mysterious Mechanical Computers Ever Made -- and One that Wasn't

#artificialintelligence

Usually when we think of computers, we probably imagine glowing displays, interconnected networks sharing digital information, and more software applications than any one person could ever come close to using, but that's only part of computing's story. Analog computers, and later mechanical computers, were an integral part of humanity's pursuit of scientific discovery, fueled by our desire to anticipate future events and outcomes. For a species that conquered the entire world thanks to our larger brains and toolmaking prowess, it's no surprise that we've been using artificial tools to augment and enhance our intelligence as far back as our history goes, and probably even longer than that. From the careful positioning of stones in England, to the soaring water clocks of China's Song Dynasty, to the precise arrangement of mechanical gears in the visionary inventions of Blaise Pascal and Charles Babbage, analog and mechanical computers have served our forebears well and helped them not just survive but thrive by transcending the bounds of our biology. On Salisbury Plain in the south of England, a collection of about 100 massive and roughly even-cut stones forms a pair of standing rings whose purpose is lost to history, but whose construction began before the invention of the wheel and took at least 1,500 years to complete, possibly even longer.


Can computers think? -- The north star in the quest for general intelligence

#artificialintelligence

Augusta Ada King, Countess of Lovelace, widely regarded as the world's first computer programmer, said of the Analytical Engine: "The Analytical Engine has no pretensions whatever to originate anything" [1]. Hence, it is safe to say that the question "Can computers think?", in some form, not only predates the concept of Artificial Intelligence (AI) but is almost as old as the Analytical Engine. This question has stimulated the minds of pioneers and researchers from different domains, including computer science, mathematics, psychology, and philosophy. This essay delves into some of the important facets of the question. It is primarily driven by the thoughts and arguments of Alan M. Turing and John R. Searle, two pioneers who explored it extensively.


Truly creative A.I. is just around the corner. Here's why that's a big deal

#artificialintelligence

Joe Kennedy, father of the late President John F. Kennedy, once said that when shoeshine boys start giving you stock tips, the financial bubble is getting too big for its own good. By that same logic, when Hollywood actors start tweeting about a once-obscure part of artificial intelligence (A.I.), you know that something big is happening, too. That's exactly what occurred recently when Zach Braff, the actor-director still best known for his performance as J.D. on the medical comedy series Scrubs, recorded himself reading a Scrubs-style monolog written by an A.I. "What is a hospital?" Braff reads, adopting the thoughtful tone J.D. used to wrap up each episode in the series. "A hospital is a lot like a high school: the most amazing man is dying, and you're the only one who wants to steal stuff from his dad. Being in a hospital is a lot like being in a sorority. You have greasers and surgeons. And even though it sucks about Doctor Tapioca, not even that's sad."


Computing Machinery and Intelligence

#artificialintelligence

The question "Can machines think?" begs one to define the words "machine" and "think". Instead of defining them, which is only seemingly easy, let's replace the question with one that is closely related. Before that, we introduce the imitation game. The game is played by three people. The interrogator is isolated from the other two and can ask each of them questions, with the goal of identifying which is the man and which is the woman.


Untold History of AI: Charles Babbage and the Turk

IEEE Spectrum Robotics

The history of AI is often told as the story of machines getting smarter over time. What's lost is the human element in the narrative: how intelligent machines are designed, trained, and powered by human minds and bodies. In this six-part series, we explore that human history of AI, how innovators, thinkers, workers, and sometimes hucksters have created algorithms that can replicate human thought and behavior (or at least appear to). While it can be exciting to be swept up by the idea of super-intelligent computers that have no need for human input, the true history of smart machines shows that our AI is only as good as we are. In the year 1770, at the court of the Austrian Empress Maria Theresa, an inventor named Wolfgang von Kempelen presented a chess-playing machine.


World's first ever computer manual which was written 175 years ago is sold for nearly £100,000

Daily Mail - Science & tech

The first ever computer manual, written by a Victorian woman 175 years ago, has been sold at auction for nearly £100,000, almost 20 times its expected price. The edition, 'Sketch of the Analytical Engine by L.F. Menabrea, with notes by Ada Lovelace', was snapped up by an anonymous buyer after being sold by Moore Allen & Innocent in Cirencester, Gloucestershire. Ada, the only legitimate child of the poet Lord Byron, was a maths prodigy who died aged 36 and is renowned as the world's first computer programmer. During her short life, she played a key role in the early development of computer programming, becoming friends with the mathematician Charles Babbage over his automatic mechanical calculator, the Difference Engine. Lovelace played a key role in the 1843 book 'Sketch of the Analytical Engine', of which just seven copies are thought to exist, one of which sold at auction for £95,000.

