
Collaborating Authors: torre


A Reduction of Input/Output Logics to SAT

Steen, Alexander

arXiv.org Artificial Intelligence

Deontic logic studies reasoning patterns and logical properties that are not suitably captured by classical propositional or first-order logic. Various logic formalisms have been proposed to handle deontic and normative reasoning, including systems based on modal logics (von Wright, 1951), dyadic deontic logic (Gabbay et al., 2013), and norm-based systems (Hansen, 2014). These systems differ in the properties of the obligation operator, and in their ability to consistently handle deontic paradoxes and/or norm conflicts (Gabbay et al., 2013). Input/Output (I/O) logics (Makinson & van der Torre, 2000) are a particular norm-based family of systems in which conditional norms are represented by pairs of formulas. The pairs do not carry truth-values themselves. I/O logics use an operational semantics based on detachment and come with a variety of different systems, formalized by different so-called output operators. Given a set N of conditional norms and a set A of formulas describing the situational context, output operators produce a set of formulas that represent the obligations that are in force for that context. In order to check whether some state of affairs φ is obligatory, it suffices to check whether φ ∈ out(N, A), where out is some output operator. Unconstrained I/O logics are monotone and cannot consistently handle norm conflicts (i.e., situations in which norms with conflicting obligations are in force) without …
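This operational semantics is easy to illustrate. Below is a minimal Python sketch, not the paper's SAT reduction, of the simple-minded output operator out1: detach the heads of all norms whose bodies follow from the context, then test whether φ follows from the detached heads. Formulas are plain Python boolean expressions over atoms, entailment is checked by brute-force truth tables, and the helper names (atoms, entails, out1) are illustrative.

```python
from itertools import product
import re

def atoms(formulas):
    """Collect propositional atoms (bare identifiers) from formula strings."""
    keywords = {"and", "or", "not", "True", "False"}
    return {tok for f in formulas for tok in re.findall(r"[A-Za-z_]\w*", f)} - keywords

def entails(premises, goal):
    """Brute-force truth-table check: do the premises entail the goal?"""
    vs = sorted(atoms(premises | {goal}))
    for vals in product([False, True], repeat=len(vs)):
        env = dict(zip(vs, vals))
        if all(eval(p, {}, env) for p in premises) and not eval(goal, {}, env):
            return False
    return True

def out1(norms, context, phi):
    """Simple-minded output: detach the head of every norm whose body follows
    from the context, then test whether phi follows from the detached heads."""
    heads = {head for body, head in norms if entails(set(context), body)}
    return entails(heads, phi)

# Two norms: 'if it rains, take an umbrella'; 'if you take an umbrella, close the window'.
norms = [("rain", "umbrella"), ("umbrella", "close_window")]
print(out1(norms, {"rain"}, "umbrella"))      # True: the obligation is detached
print(out1(norms, {"rain"}, "close_window"))  # False: out1 does not chain norms
```

The second query returns False because out1, unlike stronger output operators in the family, does not reuse detached obligations as further inputs.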


On the Importance of Behavioral Nuances: Amplifying Non-Obvious Motor Noise Under True Empirical Considerations May Lead to Briefer Assays and Faster Classification Processes

Bermperidis, Theodoros, Vero, Joe, Torres, Elizabeth B

arXiv.org Artificial Intelligence

There is a tradeoff between attaining statistical power with large, difficult-to-gather data sets and producing highly scalable assays that register brief data samples. Because grand-averaging techniques assume a priori normally distributed parameters and linear, stationary processes in biorhythmic time-series data, important information is often lost, averaged out as gross data. We developed an affective computing platform that enables taking brief data samples while maintaining personalized statistical power. This is achieved by combining a new data type, derived from the micropeaks present in time-series data registered from brief (5-second-long) face videos, with recent advances in AI-driven face-grid estimation methods. By adopting geometric and nonlinear dynamical systems approaches to analyze the kinematics, especially the speed data, the new methods capture all facial micropeaks, including the nuances of different affective microexpressions. We offer new ways to differentiate the dynamical and geometric patterns present in autistic individuals from those found more commonly in neurotypical development.
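As a rough illustration of the kind of signal the abstract describes, the sketch below derives a scalar speed series from a placeholder facial-landmark time series and extracts its local maxima as micropeaks with scipy.signal.find_peaks. The frame rate, landmark layout, and prominence threshold are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

fps = 30.0                                  # assumed frame rate of the 5 s clip
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(150, 68, 2))   # placeholder: 150 frames of 68 (x, y) points

# Frame-to-frame displacement of each landmark, averaged over the face,
# yields a scalar speed series; its local maxima are the micropeaks.
disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)   # shape (149, 68)
speed = disp.mean(axis=1) * fps                             # pixels per second
peaks, _ = find_peaks(speed, prominence=0.1)                # prominence is a guess

print(f"{len(peaks)} micropeaks in {len(speed) / fps:.1f} s of video")
```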


'It's not me, it's just my face': the models who found their likenesses had been used in AI propaganda

The Guardian

The well-groomed young man dressed in a crisp, blue shirt speaking with a soft American accent seems an unlikely supporter of the junta leader of the west African state of Burkina Faso. "We must support … President Ibrahim Traoré … Homeland or death we shall overcome!" he says in a video that began circulating in early 2023 on Telegram. It was just a few months after the dictator had come to power via a military coup. Other videos fronted by different people, with a similar professional-looking appearance and repeating the exact same script in front of the Burkina Faso flag, cropped up around the same time. On a verified account on X a few days later the same young man, in the same blue shirt, claimed to be Archie, the chief executive of a new cryptocurrency platform. They were generated with artificial intelligence (AI) developed by a startup based in east London.


'Eugenics on steroids': the toxic and contested legacy of Oxford's Future of Humanity Institute

The Guardian

Two weeks ago it was quietly announced that the Future of Humanity Institute, the renowned multidisciplinary research centre in Oxford, no longer had a future. It shut down without warning on 16 April. Initially there was just a brief statement on its website stating it had closed and that its research may continue elsewhere within and outside the university. The institute, which was dedicated to studying existential risks to humanity, was founded in 2005 by the Swedish-born philosopher Nick Bostrom and quickly made a name for itself beyond academic circles – particularly in Silicon Valley, where a number of tech billionaires sang its praises and provided financial support. Bostrom is perhaps best known for his bestselling 2014 book Superintelligence, which warned of the existential dangers of artificial intelligence, but he also gained widespread recognition for his 2003 academic paper "Are You Living in a Computer Simulation?".


There's a 5% chance of AI causing humans to go extinct, say scientists

New Scientist

Many artificial intelligence researchers see the possible future development of superhuman AI as having a non-trivial chance of causing human extinction – but there is also widespread disagreement and uncertainty about such risks. Those findings come from a survey of 2700 AI researchers who have recently published work at six of the top AI conferences – the largest such survey to date. The survey asked participants to share their thoughts on possible timelines for future AI technological milestones, as well as the good or bad societal consequences of those achievements. Almost 58 per cent of researchers said they considered that there is a 5 per cent chance of human extinction or other extremely bad AI-related outcomes. "It's an important signal that most AI researchers don't find it strongly implausible that advanced AI destroys humanity," says Katja Grace at the Machine Intelligence Research Institute in California, an author of the paper.


LLMR: Real-time Prompting of Interactive Worlds using Large Language Models

De La Torre, Fernanda, Fang, Cathy Mengying, Huang, Han, Banburski-Fahey, Andrzej, Fernandez, Judith Amores, Lanier, Jaron

arXiv.org Artificial Intelligence

We present Large Language Model for Mixed Reality (LLMR), a framework for the real-time creation and modification of interactive Mixed Reality experiences using LLMs. LLMR leverages novel strategies to tackle difficult cases where ideal training data is scarce, or where the design goal requires the synthesis of internal dynamics, intuitive analysis, or advanced interactivity. Our framework relies on text interaction and the Unity game engine. By incorporating techniques for scene understanding, task planning, self-debugging, and memory management, LLMR outperforms standard GPT-4, achieving a 4x lower average error rate. We demonstrate LLMR's cross-platform interoperability with several example worlds, and evaluate it on a variety of creation and modification tasks to show that it can produce and edit diverse objects, tools, and scenes. Finally, we conducted a usability study (N=11) with a diverse set of participants, which revealed that participants had positive experiences with the system and would use it again.
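As a rough sketch of the self-debugging technique the abstract mentions, the loop below generates code with a model call, tries to run it, and feeds any error back for a corrected attempt. The llm() and run_in_unity() helpers are hypothetical mocks standing in for a chat model and the Unity toolchain; nothing here is LLMR's actual API.

```python
from typing import Optional

def llm(prompt: str) -> str:
    """Mock model call: pretend the second attempt compiles."""
    return "// fixed" if "failed with" in prompt else "// buggy"

def run_in_unity(code: str) -> Optional[str]:
    """Mock compile/run step: return an error message, or None on success."""
    return None if code == "// fixed" else "CS1002: ; expected"

def generate_scene(task: str, max_retries: int = 3) -> str:
    code = llm(f"Write Unity C# that accomplishes: {task}")
    for _ in range(max_retries):
        error = run_in_unity(code)
        if error is None:
            return code                       # compiles and runs cleanly
        # Self-debugging: feed the error back and ask for a corrected version.
        code = llm(f"The code failed with: {error}\nFix it:\n{code}")
    raise RuntimeError("self-debugging failed after retries")

print(generate_scene("spawn a bouncing ball"))  # prints "// fixed"
```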


Elon Musk's AI chat with Rishi Sunak: Everything you need to know

New Scientist

In an event following the UK's AI Safety Summit, entrepreneur Elon Musk spoke with UK prime minister Rishi Sunak about future AIs most likely being "a force for good" and someday enabling a "future of abundance". That utopian narrative about a future superhuman AI – one that Musk claims would eliminate the need for human work and even provide meaningful companionship – shaped much of the conversation between the pair. But their conversation's focus on an "age of abundance" glossed over the current negative impacts and controversies surrounding the tech industry's race to develop large AI models – and did not get into specifics on how governments should regulate AI and address real-world risks. "I think we are seeing the most disruptive force in history here, where we will have for the first time something that is smarter than the smartest human," said Musk. "There will come a point when no job is needed – you can have a job if you want for personal satisfaction, but the AI will be able to do everything." Musk also restated his longstanding warnings about the existential risks that superhuman AI could pose to humanity.


The pro-extinctionist philosopher who has sparked a battle over humanity's future

The Guardian

Given all the suffering, pain and destruction produced by humanity, Émile Torres, a non-binary philosopher specialising in existential threats, thinks that it would not be a bad thing if humanity ceased to exist. "The pro-extinctionist view," they say, "immediately conjures up for a lot of people the image of a homicidal, ghoulish, sadistic maniac, but actually most pro-extinctionists would say that most ways of going extinct would be absolutely unacceptable. But what if everybody decided not to have children? I don't see anything wrong with that." Torres has just written a book called Human Extinction: A History of the Science and Ethics of Annihilation.


The Jiminy Advisor: Moral Agreements among Stakeholders Based on Norms and Argumentation

Liao, Beishui (Zhejiang University) | Pardo, Pere (University of Luxembourg) | Slavkovik, Marija (University of Bergen) | van der Torre, Leendert (University of Luxembourg)

Journal of Artificial Intelligence Research

An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behavior of the autonomous system. We address the challenge of how the ethical views of such stakeholders can be integrated into the behavior of an autonomous system. We propose an ethical recommendation component called Jiminy which uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. A Jiminy represents the ethical views of each stakeholder by using normative systems, and has three ways of resolving moral dilemmas that involve the opinions of the stakeholders. First, the Jiminy considers how the arguments of the stakeholders relate to one another, which may already resolve the dilemma. Second, the Jiminy combines the normative systems of the stakeholders so that their combined expertise may resolve the dilemma. Third, and only if these two other methods have failed, the Jiminy uses context-sensitive rules to decide which of the stakeholders takes precedence over the others. At the abstract level, these three methods are characterized by adding arguments, adding attacks between arguments, and revising attacks between arguments. We show how a Jiminy can be used not only for ethical reasoning and collaborative decision-making, but also to provide explanations about ethical behavior.
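At that abstract level, the machinery rests on Dung-style argumentation frameworks. The sketch below, which is not the paper's implementation, computes the grounded extension of a toy framework in which one stakeholder's argument attacks another's and an added contextual argument attacks the attacker; all argument names and attacks are invented for illustration.

```python
def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers have all been defeated
    (the grounded extension of a finite Dung framework)."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments - accepted - defeated:
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:        # every attacker already defeated
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x in accepted}
                changed = True
    return accepted

# Stakeholder s2's argument attacks s1's; an added contextual argument c
# attacks s2's (cf. the paper's "adding attacks" mechanism).
args = {"s1", "s2", "c"}
atts = {("s2", "s1"), ("c", "s2")}
print(grounded_extension(args, atts))        # {'s1', 'c'} (order may vary)
```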


House Democrat bill would force labeling of AI use

FOX News

A new bill introduced in the House of Representatives on Monday is aimed at making sure American consumers know the difference between fantasy and reality online by cracking down on generative artificial intelligence technology. Rep. Ritchie Torres, D-N.Y., is leading the effort on the AI Disclosure Act of 2023, which would force AI-generated content to include the disclaimer, "Disclaimer: this output has been generated by artificial intelligence." In a statement announcing the bill, Torres predicted that "regulatory framework for managing the existential risks of AI will be one of the central challenges confronting Congress in the years and decades to come." He noted risks in going too far with policing AI as well as not regulating it enough.