Collaborating Authors

Schwitzgebel


Elon Musk's Neuralink raises questions about how moral future humans might be

FOX News

Elon Musk just announced that Neuralink -- a "brain-computer interface" -- had been implanted into a human brain for the first time. Patient Zero is recovering well. The technology has already undergone animal trials and has been branded as a Fitbit for the brain. Paired with your iPhone, a Neuralink could help control prosthetics, monitor brain activity in real time and boost overall cognitive capacity. It will eventually pair seamlessly with a Tesla, I'm sure.


Artificial intelligence and the limits of the humanities

Duch, Włodzisław

arXiv.org Artificial Intelligence

The complexity of cultures in the modern world is now beyond human comprehension. Cognitive sciences cast doubts on the traditional explanations based on mental models. The core subjects in humanities may lose their importance. Humanities have to adapt to the digital age. New, interdisciplinary branches of humanities emerge. Instant access to information will be replaced by instant access to knowledge. Understanding the cognitive limitations of humans and the opportunities opened by the development of artificial intelligence and interdisciplinary research necessary to address global challenges is the key to the revitalization of humanities. Artificial intelligence will radically change humanities, from art to political sciences and philosophy, making these disciplines attractive to students and enabling them to go beyond current limitations.


Creating a Large Language Model of a Philosopher

Schwitzgebel, Eric, Schwitzgebel, David, Strasser, Anna

arXiv.org Artificial Intelligence

Can large language models be trained to produce philosophical texts that are difficult to distinguish from texts produced by human philosophers? To address this question, we fine-tuned OpenAI's GPT-3 with the works of philosopher Daniel C. Dennett as additional training data. To explore the Dennett model, we asked the real Dennett ten philosophical questions and then posed the same questions to the language model, collecting four responses for each question without cherry-picking. We recruited 425 participants to distinguish Dennett's answer from the four machine-generated answers. Experts on Dennett's work (N = 25) succeeded 51% of the time, above the chance rate of 20% but short of our hypothesized rate of 80% correct. For two of the ten questions, the language model produced at least one answer that experts selected more frequently than Dennett's own answer. Philosophy blog readers (N = 302) performed similarly to the experts, while ordinary research participants (N = 98) were near chance distinguishing GPT-3's responses from those of an "actual human philosopher".


The Full Rights Dilemma for A.I. Systems of Debatable Personhood

Schwitzgebel, Eric

arXiv.org Artificial Intelligence

An Artificially Intelligent system (an AI) has debatable personhood if it's epistemically possible either that the AI is a person or that it falls far short of personhood. Debatable personhood is a likely outcome of AI development and might arise soon. Debatable AI personhood throws us into a catastrophic moral dilemma: either treat the systems as moral persons and risk sacrificing real human interests for the sake of entities without interests worth the sacrifice, or don't treat the systems as moral persons and risk perpetrating grievous moral wrongs against them. The moral issues become even more perplexing if we consider cases of possibly conscious AI that are subhuman, superhuman, or highly divergent from us in their morally relevant properties. Our systems and habits of ethical thinking are currently as unprepared for this decision as medieval physics was for space flight.


A large language model that answers philosophical questions

#artificialintelligence

In recent years, computer scientists have been trying to create increasingly advanced dialogue and information systems. The release of ChatGPT and other high-performing language models demonstrates just how far artificial intelligence can go in answering user questions, writing texts, and conversing with humans. One such model, presented in a paper published on the preprint server arXiv, can autonomously generate answers that closely resemble those produced by a human philosopher. "Anna Strasser, Matthew Crosby and I had noticed that people were creating GPT-3 outputs in the style of various writers or other philosophers," Eric Schwitzgebel, one of the researchers who carried out the study, told Tech Xplore. "We thought it would be interesting to see if we could fine-tune GPT-3 (Generative Pre-trained Transformer 3) on the body of work of a philosopher, then ask it questions and see if it said things that the real philosopher might have said."


The Psychology of Technology Institute

#artificialintelligence

DeepMind recently announced an historic advance toward solving the so-called "protein folding problem," a longstanding and consequential challenge in computational biology. Their AlphaFold program, an AI system made up of multiple deep neural networks, achieved unprecedented predictive accuracy in the annual CASP competition, vastly outstripping the methods deployed by other teams. There are many questions one might ask about AlphaFold. Does it understand the problem it's solving? Does accuracy in prediction really constitute a "solution" to the protein folding problem?


OpenAI's GPT-3 is a convincing philosopher

#artificialintelligence

A study has found that OpenAI's GPT-3 can produce philosophical writing that is difficult to distinguish from that of a human philosopher. The now-famous GPT-3 is a powerful autoregressive language model that uses deep learning to produce human-like text. Eric Schwitzgebel, Anna Strasser, and Matthew Crosby set out to find out whether GPT-3 can replicate a human philosopher. The team fine-tuned GPT-3 on philosopher Daniel Dennett's corpus. Ten philosophical questions were then posed to both the real Dennett and GPT-3 to see whether the AI could match its renowned human counterpart.


Consciousness Began When the Gods Stopped Speaking - Issue 54: The Unspoken

Nautilus

Julian Jaynes was living out of a couple of suitcases in a Princeton dorm in the early 1970s. He must have been an odd sight there among the undergraduates, some of whom knew him as a lecturer who taught psychology, holding forth in a deep baritone voice. He was in his early 50s, a fairly heavy drinker, untenured, and apparently uninterested in tenure. "I don't think the university was paying him on a regular basis," recalls Roy Baumeister, then a student at Princeton and today a professor of psychology at Florida State University. But among the youthful inhabitants of the dorm, Jaynes was working on his masterpiece, and had been for years. From the age of 6, Jaynes had been transfixed by the singularity of conscious experience. Gazing at a yellow forsythia flower, he'd wondered how he could be sure that others saw the same yellow as he did. As a young man, serving three years in a Pennsylvania prison for declining to support the war effort, he watched a worm in the grass of the prison yard one spring, wondering what separated the unthinking earth from the worm and the worm from himself. It was the kind of question that dogged him for the rest of his life, and the book he was working on would grip a generation beginning to ask themselves similar questions.


Potential and Peril

Communications of the ACM

The history of battle knows no bounds, with weapons of destruction evolving from prehistoric clubs, axes, and spears to bombs, drones, missiles, landmines, and systems used in biological and nuclear warfare. More recently, lethal autonomous weapon systems (LAWS) powered by artificial intelligence (AI) have begun to surface, raising ethical issues about the use of AI and causing disagreement on whether such weapons should be banned in line with international humanitarian laws under the Geneva Convention. Much of the disagreement around LAWS concerns where the line should be drawn between weapons with limited human control and fully autonomous weapons, and differences of opinion on whether more or fewer people will lose their lives as a result of the deployment of LAWS. There are also contrary views on whether autonomous weapons are already in play on the battlefield. Ronald Arkin, Regents' Professor and Director of the Mobile Robot Laboratory in the College of Computing at Georgia Institute of Technology, says limited autonomy is already present in weapon systems such as the U.S. Navy's Phalanx Close-In Weapons System, which is designed to identify and fire at incoming missiles or threatening aircraft, and Israel's Harpy system, a fire-and-forget weapon designed to detect, attack, and destroy radar emitters.


Virtual Reality Poses the Same Riddles as the Cosmic Multiverse - Issue 46: Balance

Nautilus

On most days, we do not wake up anticipating that we may be suddenly thrust into the sky while popcorn shrimp rains down like confetti, as some guy roars from above: "Hey, there, I'm Jack. And you are in a computer simulation." Instead, we wake up thinking that an atom is an atom, that our physics is inherent to this universe and not prone to arbitrary change by coders, and that our reality is, well, real. Yet there may be another possibility. Game developers have opened up massive, explorable universes and populated them with computer-generated characters based on advanced A.I.