
Sensemaking in Novel Environments: How Human Cognition Can Inform Artificial Agents

Patterson, Robert E., Buccello-Stout, Regina, Frame, Mary E., Maresca, Anna M., Nelson, Justin, Acker-Mills, Barbara, Curtis, Erica, Culbertson, Jared, Schmidt, Kevin, Clouse, Scott, Rogers, Steve

arXiv.org Artificial Intelligence

One of the most vital cognitive skills to possess is the ability to make sense of objects, events, and situations in the world. In the current paper, we offer an approach for creating artificially intelligent agents with the capacity for sensemaking in novel environments. Objectives: we present two key ideas: (1) a novel unified conceptual framework for sensemaking (which includes the existence of sign relations embedded within and across frames); and (2) interaction among various content-addressable, distributed-knowledge structures via shared attributes (whose net response would represent a synthesized object, event, or situation serving as a sign for sensemaking in a novel environment). Findings: we suggest that attributes across memories can be shared and recombined in novel ways to create synthesized signs, which can denote certain outcomes in novel environments (i.e., sensemaking).


AI for Just Work: Constructing Diverse Imaginations of AI beyond "Replacing Humans"

Jin, Weina, Vincent, Nicholas, Hamarneh, Ghassan

arXiv.org Artificial Intelligence

The AI community usually focuses on "how" to develop AI techniques, but lacks thorough open discussions on "why" we develop AI. Lacking critical reflections on the general visions and purposes of AI may make the community vulnerable to manipulation. In this position paper, we explore the "why" question of AI. We denote answers to the "why" question the imaginations of AI, which depict our general visions, frames, and mindsets for the prospects of AI. We identify that the prevailing vision in the AI community is largely a monoculture that emphasizes objectives such as replacing humans and improving productivity. Our critical examination of this mainstream imagination highlights its underpinning and potentially unjust assumptions. We then call to diversify our collective imaginations of AI, embedding ethical assumptions from the outset in the imaginations of AI. To facilitate the community's pursuit of diverse imaginations, we demonstrate one process for constructing a new imagination of "AI for just work," and showcase its application in the medical image synthesis task to make it more ethical. We hope this work will help the AI community to open dialogues with civil society on the visions and purposes of AI, and inspire more technical works and advocacy in pursuit of diverse and ethical imaginations to restore the value of AI for the public good.


Dubito Ergo Sum: Exploring AI Ethics

Dorfler, Viktor, Cuthbert, Giles

arXiv.org Artificial Intelligence

We paraphrase Descartes' famous dictum in the area of AI ethics, where "I doubt and therefore I am" is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, including the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. The foundation of our argument is the discipline of ethics, one of the oldest and largest knowledge projects of human history, and yet we seem only to be beginning to get a grasp of it. After a couple of thousand years of studying the ethics of humans, we (humans) arrived at a point where moral psychology suggests that our moral decisions are intuitive, and all the models from ethics become relevant only when we explain ourselves. This recognition has a major impact on what we can do regarding AI ethics, and how. We do not offer a solution; we explore some ideas and leave the problem open, but, we hope, somewhat better understood than before our study.


AI Safety is Stuck in Technical Terms -- A System Safety Response to the International AI Safety Report

Dobbe, Roel

arXiv.org Artificial Intelligence

Safety has become the central value around which dominant AI governance efforts are being shaped. Recently, this culminated in the publication of the International AI Safety Report, written by 96 experts, 30 of whom were nominated by the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations (UN). The report focuses on the safety risks of general-purpose AI and available technical mitigation approaches. In this response, informed by a system safety perspective, I reflect on the key conclusions of the report, identifying fundamental issues in the currently dominant technical framing of AI safety and how this frustrates meaningful discourse and policy efforts to address safety comprehensively. The system safety discipline has dealt with the safety risks of software-based systems for many decades, and understands safety risks in AI systems as sociotechnical, requiring consideration of technical and non-technical factors and their interactions. The International AI Safety Report does identify the need for system safety approaches. Lessons, concepts and methods from system safety indeed provide an important blueprint for overcoming current shortcomings in technical approaches by integrating rather than adding on non-technical factors and interventions. I conclude with why building a system safety discipline can help us overcome limitations in the European AI Act, as well as how the discipline can help shape sustainable investments into Public Interest AI.


OpenAI rolls out new ChatGPT features including ability to go incognito

FOX News

Fox News correspondent Grady Trimble has the latest on fears the technology will spiral out of control on 'Special Report.' Artificial intelligence leader OpenAI has introduced the ability to turn off chat history in its popular chatbot ChatGPT. In a Tuesday blog post, the company said conversations that are started when chat history is disabled will not be used to train and improve its models and will not appear in the history sidebar. The controls are found in the ChatGPT settings and can be changed at any time. The mode rolled out to all users.


Hot Off the Press: The Chatbot Buyer's Guide for 2023

#artificialintelligence

Chatbots and conversational AI have been gaining acceptance as essential pieces of successful customer service and employee support strategies. If your organisation doesn't have at least one of these solutions already, it's likely you are planning to deploy one soon or are exploring the possibility of adding one to your 2023 strategy. Unfortunately, as adoption of this technology increases, so does the oversaturation of the market with poor-performing chatbot products. Now many live chat, CRM, and contact centre vendors are attempting to jump on the conversational AI bandwagon with their own 'add-on bots'. This is creating both confusion for buyers and a starker divide between vendors selling add-on bots and vendors that are true conversational AI specialists.


The Machine Ethics Podcast: The Politics of AI with Mark Coeckelbergh

AIHub

Hosted by Ben Byford, The Machine Ethics Podcast brings together interviews with academics, authors, business leaders, designers and engineers on the subject of autonomous algorithms, artificial intelligence, machine learning, and technology's impact on society. In this episode we talk with Mark Coeckelbergh about AI as a story about machines and where we are heading in creating human-level intelligence, moral standing and robot-animal interfaces, technological determinism, environmental impacts of robots and AI, energy budgets, politics and AI, self-regulation and global governance for global issues. Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna and author of more than 15 books, including AI Ethics (MIT Press), The Political Philosophy of AI (Polity Press), and Introduction to Philosophy of Technology (Oxford University Press). Previously he was Vice Dean of the Faculty of Philosophy and Education, and President of the Society for Philosophy and Technology (SPT). He is also involved in policy advice; for example, he was a member of the High Level Expert Group on AI of the European Commission.


Multiple Linear Regression in R for Data Science - Detechtor

#artificialintelligence

We are going to learn how to implement a multiple linear regression model in R. This is a bit more complex than simple linear regression, but it's going to be practical and fun. Multiple linear regression is a data science technique that uses several explanatory variables to predict the outcome of a response variable. A multiple linear regression model captures the relationship between two or more explanatory variables (independent variables) and a response variable (dependent variable) by fitting a linear equation to observed data. Every value of an independent variable x is associated with a value of the dependent variable y.
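The article implements this in R; as a minimal sketch of the same idea, the snippet below fits a multiple linear regression with NumPy's least-squares solver. The toy data and variable names are invented for illustration: y is generated exactly as 1 + 2*x1 + 3*x2, so the fitted coefficients recover those values.

```python
import numpy as np

# Toy data: two explanatory variables (x1, x2) and a response y
# generated from y = 1 + 2*x1 + 3*x2 (no noise, so the fit is exact).
x1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x2 = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
y = 1 + 2 * x1 + 3 * x2

# Design matrix with an intercept column, analogous to what
# lm(y ~ x1 + x2) builds automatically in R.
X = np.column_stack([np.ones_like(x1), x1, x2])

# Ordinary least squares: find beta minimizing ||X @ beta - y||^2.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # intercept, coefficient of x1, coefficient of x2
```

With real, noisy data the recovered coefficients would only approximate the true relationship, and R's lm() would additionally report standard errors and fit diagnostics that this sketch omits.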


The five best books to understand AI

#artificialintelligence

This article is part of our Summer reads series. Visit our collection to discover "The Economist reads" guides, guest essays and more seasonal distractions. In recent years artificial intelligence (AI) has undergone a revolution. After decades of modest progress that never quite lived up to its promise, a different approach--relying on big data and stats, not clever algorithms--made huge strides in solving real-world problems like voice- and image-recognition and self-driving cars. Also in the past ten years, a lot of books have been published that aim to explain what AI is, where it's going and why it matters.


Newspaper articles written by robots?

#artificialintelligence

Theoretical physicist Stephen Hawking, arguably one of the smartest people in history, warned, in an interview with the BBC, that, "the development of full artificial intelligence (AI) could spell the end of the human race." Hawking went on to say, at the Web Summit technology conference in Lisbon, Portugal, "AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy." In 2015, dozens of brainiac scientists and technology experts, including celebrity physicists like Hawking and Elon Musk, signed a letter warning that, even though AI could be used for great good, it could also have potentially devastating, dangerous and unintended uses.