 digital computer


Analogue computers could train AI 1000 times faster and cut energy use

New Scientist

Computers built with analogue circuits promise huge speed and efficiency gains over ordinary computers, but normally at the cost of accuracy. Analogue computers that rapidly solve a key type of equation used in training artificial intelligence models could help address the growing energy consumption of data centres driven by the AI boom. Laptops, smartphones and other familiar devices are known as digital computers because they store and process data as a series of binary digits, either 0 or 1, and can be programmed to solve a range of problems. In contrast, analogue computers are normally designed to solve just one specific problem. They store and process data using quantities that can vary continuously, such as electrical resistance, rather than discrete 0s and 1s.
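As an illustrative sketch (not from the article): the workhorse operation such analogue accelerators target is the matrix-vector product, which a resistor crossbar computes physically in one step via Ohm's and Kirchhoff's laws, rather than through many digital multiply-adds. The values below are made up for illustration.

```python
import numpy as np

# A crossbar stores a matrix as a grid of conductances G (in siemens).
# Applying voltages v to the rows produces column currents i = G^T v
# instantaneously, by Ohm's law (current = conductance * voltage) and
# Kirchhoff's current law (currents sum at each column wire).
G = np.array([[1.0, 0.5],   # illustrative conductance values
              [0.2, 2.0]])
v = np.array([0.3, 0.7])    # applied row voltages (volts)

i = G.T @ v                 # the currents the analogue crossbar would output
print(i)                    # -> [0.44 1.55]
```

On digital hardware this costs O(n^2) arithmetic per product; the analogue circuit performs it in constant time, at the price of limited precision, which is the accuracy trade-off the article mentions.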


Passed the Turing Test: Living in Turing Futures

Gonçalves, Bernardo

arXiv.org Artificial Intelligence

The world has seen the emergence of machines based on pretrained models, transformers, also known as generative artificial intelligences for their ability to produce various types of content, including text, images, audio, and synthetic data. Without resorting to preprogramming or special tricks, their intelligence grows as they learn from experience, and to ordinary people, they can appear human-like in conversation. This means that they can pass the Turing test, and that we are now living in one of many possible Turing futures where machines can pass for what they are not. However, the learning machines that Turing imagined would pass his imitation tests were machines inspired by the natural development of the low-energy human cortex. They would be raised like human children and naturally learn the ability to deceive an observer. These "child machines," Turing hoped, would be powerful enough to have an impact on society and nature.


The Original Turing Test Was a Drag Show

Slate

ChatGPT can now easily pass any Turing test, a measure of successful A.I. proposed by a founder of computer science, Alan Turing. But contemporary Turing tests leave out the most interesting part of Turing's original test: the gender-bending. I can usually spot A.I. writing in my students' work by the overuse of words like "delve," but the accuracy of artificial intelligence is impossible to deny. A.I. is being integrated into every aspect of our written culture, from news sources to classrooms to medicine. But in 1950, Turing's ideas about A.I. were prescient, creative, and, when I read them, surprisingly queer.


Turing's Test, a Beautiful Thought Experiment

Gonçalves, Bernardo

arXiv.org Artificial Intelligence

In the wake of large language models, there has been a resurgence of claims and questions about the Turing test and its value for AI, which are reminiscent of decades of practical "Turing" tests. If AI were quantum physics, by now several "Schrödinger's" cats could have been killed. Better late than never, it is time for a historical reconstruction of Turing's beautiful thought experiment. In this paper I present a wealth of evidence, including new archival sources, give original answers to several open questions about Turing's 1950 paper, and address the core question of the value of Turing's test.


Reliable AI: Does the Next Generation Require Quantum Computing?

Bacho, Aras, Boche, Holger, Kutyniok, Gitta

arXiv.org Artificial Intelligence

In this survey, we aim to explore the fundamental question of whether the next generation of artificial intelligence requires quantum computing. Artificial intelligence is increasingly playing a crucial role in many aspects of our daily lives and is central to the fourth industrial revolution. It is therefore imperative that artificial intelligence be reliable and trustworthy. However, there are still many issues with the reliability of artificial intelligence, such as privacy, responsibility, safety, and security, in areas such as autonomous driving, healthcare, robotics, and others. These problems can have various causes, including insufficient data, biases, and robustness problems, as well as fundamental issues such as computability problems on digital hardware. The cause of these computability problems is rooted in the fact that digital hardware is based on the computing model of the Turing machine, which is inherently discrete. Notably, our findings demonstrate that digital hardware is inherently constrained in solving problems in optimization, deep learning, and differential equations. Therefore, these limitations carry substantial implications for the field of artificial intelligence, in particular for machine learning. Furthermore, although it is well known that quantum computers show a quantum advantage for certain classes of problems, our findings establish that some of these limitations persist when employing quantum computing models based on the quantum circuit or the quantum Turing machine paradigm. In contrast, analog computing models, such as the Blum-Shub-Smale machine, exhibit the potential to surmount these limitations.
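The survey's discreteness argument can be made concrete with a familiar floating-point example (mine, not the paper's): digital hardware represents real numbers by finite bit patterns, so even elementary real arithmetic is only approximate, whereas a Blum-Shub-Smale-style model assumes exact operations on reals.

```python
# Digital hardware encodes reals in finitely many bits, so numbers like
# 0.1 and 0.2 have no exact binary representation and their sum is not
# exactly 0.3. A BSS machine, by contrast, computes on exact reals.
a = 0.1 + 0.2
print(a == 0.3)       # False on any IEEE-754 digital machine
print(abs(a - 0.3))   # tiny but nonzero rounding error (~5.6e-17)
```

This toy rounding error is of course benign; the paper's point is that such discretization becomes a fundamental computability barrier for certain problems in optimization, deep learning, and differential equations.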


Trends in Analog and Neural Computation – MetaDevo

#artificialintelligence

Cognitive Science and AI typically subscribe to computationalism: the mind is a form of computation in the brain (or the overall nervous system, including the brain). In the 1940s, explaining cognition as the brain computing was new, and it started catching on in what would become computer science and AI, and eventually, to some degree, neuroscience. But many were modeling the brain using what you could call analog math (Piccinini). And there were actual analog computers, many of which were used by the U.S. military starting in World War 2. Nowadays, most people use digital computers for research and AI work, and pretty much everything else. But what happened to the non-digital theories, and why aren't there analog computers around any more on which to experiment with them?


Use of Analog Computers in Artificial Intelligence (AI) - MarkTechPost

#artificialintelligence

Analog computers are a class of devices in which physical quantities like electrical voltage, mechanical motion, or fluid pressure are represented so that they are analogous to the corresponding quantities in the problem to be solved. A simple example is a gear adder: if we turn two input wheels (black and white) by certain amounts, a third (gray) wheel shows the sum of the two rotations. One of the earliest analog computers was the Antikythera Mechanism, constructed around 100-200 B.C. It involved a series of interlocking bronze gears arranged in such a way that the motion of certain dials was analogous to the motion of the sun and the moon.
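The gear adder above can be sketched in a few lines (my toy model, not from the article). A real differential gear typically outputs the average of its two inputs, so a 2:1 gear ratio on the output wheel recovers the sum.

```python
# Toy model of a mechanical gear adder: two input wheels drive a
# differential whose carrier turns by the average of the inputs; a 2:1
# output gearing makes the gray wheel display the sum of the rotations.
def gear_adder(black_turns: float, white_turns: float) -> float:
    """Return the rotation shown on the gray output wheel."""
    differential_output = (black_turns + white_turns) / 2  # averaged by the differential
    return 2 * differential_output                         # 2:1 ratio restores the sum

print(gear_adder(1.5, 2.0))  # -> 3.5
```

The point of the analogy is that the machine never manipulates digits at all: the answer is read off as a continuously varying physical quantity, here a wheel's rotation.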


A photonic chip-based machine learning approach for the prediction of molecular properties

Zhang, Hui, Lau, Jonathan Wei Zhong, Wan, Lingxiao, Shi, Liang, Cai, Hong, Luo, Xianshu, Lo, Patrick, Lee, Chee-Kong, Kwek, Leong-Chuan, Liu, Ai Qun

arXiv.org Artificial Intelligence

Machine learning methods have revolutionized the discovery process of new molecules and materials. However, the intensive training process of neural networks for molecules with ever-increasing complexity has resulted in exponential growth in computation cost, leading to long simulation times and high energy consumption. Photonic chip technology offers an alternative platform for implementing neural networks with faster data processing and lower energy usage compared to digital computers. Photonics technology is naturally capable of implementing complex-valued neural networks at no additional hardware cost. Here, we demonstrate the capability of photonic neural networks for predicting the quantum mechanical properties of molecules. To the best of our knowledge, this work is the first to harness photonic technology for machine learning applications in computational chemistry and molecular sciences, such as drug discovery and materials design. We further show that multiple properties can be learned simultaneously in a photonic chip via a multi-task regression learning algorithm, which is also a first of its kind, as most previous works focus on implementing networks for classification tasks.
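The claim that photonics natively supports complex-valued networks can be illustrated with a minimal complex-valued layer (my sketch; the weight values and modulus-squared readout are illustrative assumptions, not the paper's architecture). Digitally, each complex multiply costs roughly four real multiplies, while an interferometer mesh applies the complex matrix to optical amplitudes directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex weight matrix: on a photonic chip this corresponds to a mesh of
# interferometers transforming optical amplitudes; on a digital computer
# each complex entry costs extra real arithmetic.
W = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # complex input amplitudes

z = W @ x            # complex-valued linear transform (one pass of light)
y = np.abs(z) ** 2   # intensity readout, loosely analogous to photodetection

print(y)             # three non-negative real outputs
```

The readout step is where photonics and digital simulation meet: photodetectors measure intensity (the squared modulus), so the network's final activations are real even though all intermediate computation is complex.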


The Turing Deception

Noever, David, Ciolino, Matt

arXiv.org Artificial Intelligence

The outlier, however, for ChatGPT is Appendix F, based on the prompt to generate variants on poetry dedicated to Turing. In this instance, the generated content bypassed OpenAI's detector with high confidence as real (99.98%). In their original report [24], the authors found "detection rates of ~95% for detecting 1.5B GPT-2-generated text" and noted that "We believe this is not high enough accuracy for standalone detection and needs to be paired with metadata-based approaches, human judgment, and public education to be more effective." Alongside the evolution toward ever larger language models (>100 billion parameters), refinements have also added built-in heuristics or guardrails for model execution. The Instruct series of GPT-3 demonstrated the ability to answer questions directly without conversational meanderings. ChatGPT includes longer-term conversational memory, such that the API can track the dialog even across leaps of narration that single API calls could not span. One can test dialogs in which impersonal pronouns like "it" carry forward through the conversation with context from previous API calls in a single session, an easily grasped example of ChatGPT's API memory being both powerful and expensive to encode for more extended conversations. As Turing himself posed the human capacity to list memories [1]: "Actual human computers really remember what they have to do ... Constructing instruction tables is usually described as 'programming.'"


We will see a completely new type of computer, says AI pioneer Geoff Hinton

#artificialintelligence

Machine-learning forms of artificial intelligence are going to produce a revolution in computer systems, a new kind of hardware-software union that can put AI in your toaster, according to AI pioneer Geoffrey Hinton. Hinton, offering the closing keynote Thursday at this year's Neural Information Processing Systems conference, NeurIPS, in New Orleans, said that the machine learning research community "has been slow to realize the implications of deep learning for how computers are built." He continued, "What I think is that we're going to see a completely different type of computer, not for a few years, but there's every reason for investigating this completely different type of computer." All digital computers to date have been built to be "immortal," where the hardware is engineered to be reliable so that the same software runs anywhere.