Council Post: Individual artificial intelligence is the future of AI

#artificialintelligence

Elon Musk-backed Neuralink is gearing up for human clinical trials, aiming to deliver its first human brain implant by the end of the year. However, this is just the beginning. Remember the movie Transcendence, in which Johnny Depp's character (Dr Will Caster) becomes a superintelligent AI? The timeline to superintelligence is shortening. Last December, Philip O'Keefe, a 63-year-old Australian living with ALS, tweeted his thoughts with the help of a computer chip implanted in his brain.


Systems Challenges for Trustworthy Embodied Systems

arXiv.org Artificial Intelligence

A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed. When deploying these systems into a real-life context, we face various engineering challenges: it is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction. We argue that traditional systems engineering is reaching a turning point in the move from embedded to embodied systems, and that it must now assure the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems in uncertain, complex, and unpredictable real-world contexts. We also identify a number of urgent systems challenges for trustworthy embodied systems, including robust and human-centric AI, cognitive architectures, uncertainty quantification, trustworthy self-integration, and continual analysis and assurance.


Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and our lives. It has been called the new electricity that is revolutionizing the world, and both industry and academia are investing heavily in it. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs driven by over-expectations and the disappointments that followed. The purpose of this book is to give a realistic picture of AI, its history, its potential and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common AI representations, methods, and machine learning approaches are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposition of the results and applications of our own research. Emotions are central to human intelligence, but they have so far seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what should be done in the future. In the appendix, we look at the development of AI education, especially from the perspective of the curriculum at our own university.


Artificial Intelligence -- Application in Life Sciences and Beyond. The Upper Rhine Artificial Intelligence Symposium UR-AI 2021

arXiv.org Artificial Intelligence

The TriRhenaTech alliance presents the accepted papers of the 'Upper-Rhine Artificial Intelligence Symposium' held on October 27th 2021 in Kaiserslautern, Germany. Topics of the conference are applications of Artificial Intelligence in life sciences, intelligent systems, Industry 4.0, mobility and others. The TriRhenaTech alliance is a network of universities in the Upper-Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, Offenburg and Trier, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprised of 14 'grandes écoles' in the fields of engineering, architecture and management) and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.


Neuroscience Weighs in on Physics' Biggest Questions - Issue 107: The Edge

Nautilus

For an empirical science, physics can be remarkably dismissive of some of our most basic observations. We see objects existing in definite locations, but the wave nature of matter washes that away. We perceive time to flow, but how could it, really? We feel ourselves to be free agents, and that's just quaint. Physicists like nothing better than to expose our view of the universe as parochial. But when asked why our impressions are so off, they mumble some excuse and slip out the side door of the party. Physicists, in other words, face the same hard problem of consciousness as neuroscientists do: the problem of bridging objective description and subjective experience. To relate fundamental theory to what we actually observe in the world, they must explain what it means "to observe"--to become conscious of. And they tend to be slapdash about it. They divide the world into "system" and "observer," study the former intensely, and take the latter for granted--or, worse, for a fool.


A brief history of AI: how to prevent another winter (a critical review)

arXiv.org Artificial Intelligence

The field of artificial intelligence (AI), regarded as one of the most enigmatic areas of science, has witnessed exponential growth in the past decade, with a remarkably wide array of applications that have already impacted our everyday lives. Advances in computing power and the design of sophisticated AI algorithms have enabled computers to outperform humans in a variety of tasks, especially in the areas of computer vision and speech recognition. Yet AI's path has never been smooth: the field has essentially fallen apart twice in its lifetime (the 'winters' of AI), both times after periods of popular success (the 'summers' of AI). We provide a brief rundown of AI's evolution over the decades, highlighting its crucial moments and major turning points from inception to the present. In doing so, we attempt to learn from the past, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.


On the Opportunities and Risks of Foundation Models

arXiv.org Artificial Intelligence

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
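
As a toy illustration of the adaptation pattern the report describes, the following sketch uses plain NumPy with made-up names and a random stand-in for a pretrained component; it is not any real foundation model or the report's own method. It only shows why, when many downstream tasks share one frozen component, that component's defects are inherited by every adapted model.

    import numpy as np

    rng = np.random.default_rng(0)
    W_frozen = rng.normal(size=(16, 4))   # shared "foundation" weights, never updated

    def features(x):
        # Shared representation produced by the frozen component.
        return np.tanh(x @ W_frozen)

    def fit_head(X, y):
        # Per-task adaptation: a least-squares head on top of frozen features.
        head, *_ = np.linalg.lstsq(features(X), y, rcond=None)
        return head

    X = rng.normal(size=(100, 16))
    task_a = fit_head(X, (X[:, 0] > 0).astype(float))  # toy downstream task A
    task_b = fit_head(X, (X[:, 1] > 0).astype(float))  # toy downstream task B
    print(task_a.shape, task_b.shape)  # each adapted model is just a small head: (4,)

In this sketch, changing or repairing W_frozen silently changes every downstream head at once, which is the leverage-and-risk trade-off the abstract calls homogenization.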


A Theory of Consciousness from a Theoretical Computer Science Perspective: Insights from the Conscious Turing Machine

arXiv.org Artificial Intelligence

The quest to understand consciousness, once the purview of philosophers and theologians, is now actively pursued by scientists of many stripes. We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. In the spirit of Alan Turing's simple yet powerful definition of a computer, the Turing Machine (TM), and the perspective of computational complexity theory, we formalize a modified version of the Global Workspace Theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux and others. We are not looking for a complex model of the brain nor of cognition, but for a simple computational model of (the admittedly complex concept of) consciousness. We do this by defining the Conscious Turing Machine (CTM), also called a conscious AI, and then we define consciousness and related notions in the CTM. While these are only mathematical (TCS) definitions, we suggest why the CTM has the feeling of consciousness. The TCS perspective provides a simple formal framework to employ tools from computational complexity theory and machine learning to help us understand consciousness and related concepts. Previously we explored high level explanations for the feelings of pain and pleasure in the CTM. Here we consider three examples related to vision (blindsight, inattentional blindness, and change blindness), followed by discussions of dreams, free will, and altered states of consciousness.
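
The following minimal Python sketch, with hypothetical class names and random weights standing in for learned importance, illustrates the Global Workspace-style cycle that the CTM formalizes: many processors compete for a single short-term-memory slot, and the winner is broadcast back to all of them. It is only a caricature of that cycle, not the authors' formal CTM definition.

    import random
    from dataclasses import dataclass

    @dataclass
    class Chunk:
        # A candidate message: its originator, content, and a weight (importance).
        origin: str
        content: str
        weight: float

    class Processor:
        # A long-term-memory processor that proposes chunks and hears broadcasts.
        def __init__(self, name):
            self.name = name
            self.heard = []  # broadcasts received so far

        def propose(self, t):
            # Toy proposal: a random weight stands in for learned importance.
            return Chunk(self.name, f"{self.name} chunk at t={t}", random.random())

        def receive(self, chunk):
            self.heard.append(chunk)

    def conscious_cycle(processors, steps):
        # Repeated competition for the single short-term-memory slot, then broadcast.
        for t in range(steps):
            candidates = [p.propose(t) for p in processors]
            winner = max(candidates, key=lambda c: c.weight)  # competition
            for p in processors:
                p.receive(winner)                             # global broadcast
            print(f"t={t}: conscious content = {winner.content!r} (w={winner.weight:.2f})")

    conscious_cycle([Processor(n) for n in ("vision", "hearing", "planning")], steps=3)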


The brain is a computer is a brain: neuroscience's internal debate and the social significance of the Computational Metaphor

arXiv.org Artificial Intelligence

The Computational Metaphor, comparing the brain to the computer and vice versa, is the most prominent metaphor in neuroscience and artificial intelligence (AI). Its appropriateness is highly debated in both fields, particularly with regard to whether it is useful for the advancement of science and technology. Considerably less attention, however, has been devoted to how the Computational Metaphor is used outside of the lab, and particularly how it may shape society's interactions with AI. As such, recently publicized concerns over AI's role in perpetuating racism, genderism, and ableism suggest that the term "artificial intelligence" is misplaced, and that a new lexicon is needed to describe these computational systems. Thus, there is an essential question about the Computational Metaphor that is rarely asked by neuroscientists: whom does it help and whom does it harm? This essay invites the neuroscience community to consider the social implications of the field's most controversial metaphor.


Measuring the Occupational Impact of AI: Tasks, Cognitive Abilities and AI Benchmarks

Journal of Artificial Intelligence Research

In this paper we develop a framework for analysing the impact of Artificial Intelligence (AI) on occupations. This framework maps 59 generic tasks from worker surveys and an occupational database to 14 cognitive abilities (that we extract from the cognitive science literature) and these to a comprehensive list of 328 AI benchmarks used to evaluate research intensity across a broad range of different AI areas. The use of cognitive abilities as an intermediate layer, instead of mapping work tasks to AI benchmarks directly, allows for an identification of potential AI exposure for tasks for which AI applications have not been explicitly created. An application of our framework to occupational databases gives insights into the abilities through which AI is most likely to affect jobs and allows for a ranking of occupations with respect to AI exposure. Moreover, we show that some jobs that were not known to be affected by previous waves of automation may now be subject to higher AI exposure. Finally, we find that some of the abilities where AI research is currently very intense are linked to tasks with comparatively limited labour input in the labour markets of advanced economies (e.g., visual and auditory processing using deep learning, and sensorimotor interaction through (deep) reinforcement learning). This article appears in the special track on AI and Society.
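
The sketch below is a hedged illustration of the two-layer mapping described above: work tasks are weighted over cognitive abilities, each ability carries a benchmark-derived research-intensity score, and an occupation's exposure aggregates over its task mix. All task names, ability names, and numbers here are invented for illustration; they are not the paper's 59 tasks, 14 abilities, or 328 benchmarks.

    # Toy two-layer mapping: tasks -> cognitive abilities -> benchmark intensity.
    task_to_abilities = {
        "read documents":    {"comprehension": 1.0},
        "identify defects":  {"visual processing": 0.7, "attention": 0.3},
        "operate machinery": {"sensorimotor": 1.0},
    }

    # Aggregate AI research intensity per ability (e.g., share of active benchmarks).
    ability_intensity = {
        "comprehension": 0.6,
        "visual processing": 0.9,
        "attention": 0.4,
        "sensorimotor": 0.5,
    }

    def task_exposure(task):
        # AI exposure of a task: ability weights times benchmark-derived intensity.
        return sum(w * ability_intensity[a]
                   for a, w in task_to_abilities[task].items())

    def occupation_exposure(tasks):
        # Exposure of an occupation: average over its task mix.
        return sum(task_exposure(t) for t in tasks) / len(tasks)

    print(occupation_exposure(["read documents", "identify defects"]))

Because tasks are scored through the ability layer rather than matched to benchmarks directly, a task can receive a nonzero exposure even if no AI application has been built for it explicitly, which is the point of the intermediate layer in the framework.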