

Normality and the Turing Test

Kabbach, Alexandre

arXiv.org Artificial Intelligence

This paper proposes to revisit the Turing test through the concept of normality. Its core argument is that the Turing test is a test of normal intelligence as assessed by a normal judge. First, in the sense that the Turing test targets normal/average rather than exceptional human intelligence, so that successfully passing the test requires machines to "make mistakes" and display imperfect behavior just like normal/average humans. Second, in the sense that the Turing test is a statistical test where judgments of intelligence are never carried out by a single "average" judge (understood as non-expert) but always by a full jury. As such, the notion of "average human interrogator" that Turing talks about in his original paper should be understood primarily as referring to a mathematical abstraction made of the normalized aggregate of individual judgments of multiple judges. Its conclusions are twofold. First, it argues that large language models such as ChatGPT are unlikely to pass the Turing test as those models precisely target exceptional rather than normal/average human intelligence. As such, they constitute models of what it proposes to call artificial smartness rather than artificial intelligence, insofar as they deviate from the original goal of Turing for the modeling of artificial minds. Second, it argues that the objectivization of normal human behavior in the Turing test fails due to the game configuration of the test which ends up objectivizing normative ideals of normal behavior rather than normal behavior per se.


Which symbol grounding problem should we try to solve?

Müller, Vincent C.

arXiv.org Artificial Intelligence

Müller, Vincent C. (2015), 'Which symbol grounding problem should we try to solve?', Journal of Experimental and Theoretical Artificial Intelligence, 27 (1), October 2013. Floridi and Taddeo propose a condition of "zero semantic commitment" for solutions to the grounding problem, and a solution to it. I argue briefly that their condition cannot be fulfilled, not even by their own solution. After a look at Luc Steels' very different competing suggestion, I suggest that we need to rethink what the problem is and what role the 'goals' in a system play in formulating the problem.


The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems

Mokander, Jakob, Sheth, Margi, Watson, David, Floridi, Luciano

arXiv.org Artificial Intelligence

Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI ethics is the lack of a well-defined material scope. Put differently, the question of which systems and processes AI ethics principles ought to apply to remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems pose different ethical challenges. Nevertheless, pragmatic problem-solving demands that things should be sorted so that their grouping will promote successful actions for some specific end. In this article, we review and compare previous attempts to classify AI systems for the purpose of implementing AI governance in practice. We find that attempts to classify AI systems found in previous literature use one of three mental models. The Switch, i.e., a binary approach according to which systems either are or are not considered AI systems depending on their characteristics. The Ladder, i.e., a risk-based approach that classifies systems according to the ethical risks they pose. And the Matrix, i.e., a multi-dimensional classification of systems that takes various aspects into account, such as context, data input, and decision-model. Each of these models for classifying AI systems comes with its own set of strengths and weaknesses. By conceptualising different ways of classifying AI systems into simple mental models, we hope to provide organisations that design, deploy, or regulate AI systems with the conceptual tools needed to operationalise AI governance in practice.


New insights into training dynamics of deep classifiers

#artificialintelligence

A new study from researchers at MIT and Brown University characterizes several properties that emerge during the training of deep classifiers, a type of artificial neural network commonly used for classification tasks such as image classification, speech recognition, and natural language processing. The paper, "Dynamics in Deep Classifiers trained with the Square Loss: Normalization, Low Rank, Neural Collapse and Generalization Bounds," published today in the journal Research, is the first of its kind to theoretically explore the dynamics of training deep classifiers with the square loss and how properties such as rank minimization, neural collapse, and dualities between the activation of neurons and the weights of the layers are intertwined. In the study, the authors focused on two types of deep classifiers: fully connected deep networks and convolutional neural networks (CNNs). A previous study examined the structural properties that develop in large neural networks at the final stages of training. That study focused on the last layer of the network and found that deep networks trained to fit a training dataset will eventually reach a state known as "neural collapse." When neural collapse occurs, the network maps multiple examples of a particular class (such as images of cats) to a single template of that class.
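The "neural collapse" phenomenon described above can be made concrete with a small numerical sketch. The code below is an illustration of the idea only, not the paper's actual code: it simulates last-layer features that have collapsed onto per-class templates, then measures the ratio of within-class to between-class scatter, which approaches zero under collapse. All names and magnitudes here are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch of neural collapse (the "NC1" property): at the end
# of training, last-layer features of each class concentrate around a single
# class mean ("template"). We simulate collapsed features and check that
# within-class scatter is tiny relative to between-class scatter.
rng = np.random.default_rng(0)
n_classes, n_per_class, dim = 3, 100, 8

# One fixed template per class (hypothetical feature directions).
templates = rng.normal(size=(n_classes, dim))

# Collapsed features: every example sits very close to its class template.
features = np.concatenate(
    [t + 1e-3 * rng.normal(size=(n_per_class, dim)) for t in templates]
)
labels = np.repeat(np.arange(n_classes), n_per_class)

# Within-class scatter: mean squared distance of features to their class mean.
class_means = np.stack(
    [features[labels == c].mean(axis=0) for c in range(n_classes)]
)
within = np.mean(
    [((features[labels == c] - class_means[c]) ** 2).sum(axis=1).mean()
     for c in range(n_classes)]
)
# Between-class scatter: mean squared distance of class means to global mean.
global_mean = features.mean(axis=0)
between = ((class_means - global_mean) ** 2).sum(axis=1).mean()

collapse_ratio = within / between
print(f"within/between scatter ratio: {collapse_ratio:.2e}")
```

Under collapse the ratio is many orders of magnitude below one; for unconverged networks, within-class scatter of the same order as between-class scatter would be expected instead.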


Machine Mind

#artificialintelligence

"We shall not cease from exploration And the end of all our exploring Will be to arrive where we started And know the place for the first time." In 1956, the Dartmouth Summer Research Project on Artificial Intelligence gave A.I. its standing as a legitimate field of study. "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." A.I. development has fallen in line with this sentiment ever since. Our scientists, analysts, and engineers have taken a multidisciplinary approach that focuses on describing and reproducing as many observable faculties of the mind as possible. This reproduction has proven to be quite successful.


Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation

Mokander, Jakob, Axente, Maria, Casolari, Federico, Floridi, Luciano

arXiv.org Artificial Intelligence

The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. We argue that the AIA can be interpreted as a proposal to establish a Europe-wide ecosystem for conducting AI auditing, albeit not in those terms. Our analysis offers two main contributions. First, by describing the enforcement mechanisms included in the AIA in terminology borrowed from existing literature on AI auditing, we help providers of AI systems understand how they can prove adherence to the requirements set out in the AIA in practice. Second, by examining the AIA from an auditing perspective, we seek to provide transferable lessons from previous research about how to further refine the regulatory approach outlined in the AIA. We conclude by highlighting seven aspects of the AIA where amendments (or simply clarifications) would be helpful. These include, above all, the need to translate vague concepts into verifiable criteria and to strengthen the institutional safeguards concerning conformity assessments based on internal checks.


Ethics as a Service: A Pragmatic Operationalisation of AI Ethics - Minds and Machines

#artificialintelligence

As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the 'what' and the 'how' of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice.


What We Should Learn from the Tension Between Mind and Machine

#artificialintelligence

Every human bliss and kindness, every suspicion, cruelty, and torment ultimately comes from the whirring 3-pound "enchanted loom" that is our brain, and from its other side, the cloud of knowing that is our mind. It's an odd coincidence that serious study of the mind and the brain bloomed in the late 20th century, just when we also started to make machines that had some mind-like qualities. Now, with information technology, we have applied an untested amplifier to our minds and cranked it up to eleven, running it around the clock, year after year. Because we have become a culture of crisis, we are good at asking what has gone wrong. But is the conjunction of natural and artificial mind only ill-favored, or might we not learn from both by comparison?


Mind and Machine : The Dawn of a New Era

#artificialintelligence

Ancient Greek philosophers spent much of their time pondering what truly makes one intelligent, but this question was embraced by science and research only about half a century ago. Ever since its inception, neuroscience has strived to understand how the brain processes information, makes decisions, and interacts with the environment. In the mid-20th century, a new school of thought arose: how can we emulate intelligence in an artificial system? This does sound daunting, and it can leave a few eccentric minds wondering about its dystopian implications.


Minds and machines

#artificialintelligence

Artificial intelligence (AI) is increasingly ubiquitous and already transforming many aspects of our lives from how we manage our health to how we access news. Yet for all the hype and investment, most AI has focused on a relatively narrow set of applications with little attention given to the relationship between artificial intelligence and our collective human intelligence (CI) - the enhanced capacity that is created when groups think and work together to solve problems. We are at a critical turning point to set the trajectory of AI and we need to continue to challenge our thinking about what we want from AI in society and what role we want it to play. Unless we do, both AI and CI will continue to fall short of our expectations. So how do we start to think differently about AI's potential?