Palermo


Learning to Prompt in the Classroom to Understand AI Limits: A pilot study

arXiv.org Artificial Intelligence

Progress in artificial intelligence (AI) holds great promise for tackling pressing societal concerns such as health and climate. Large Language Models (LLMs) and the chatbots derived from them, like ChatGPT, have greatly improved the natural language processing capabilities of AI systems, allowing them to process an unprecedented amount of unstructured data. However, the ensuing excitement has led to negative sentiments, even as AI methods demonstrate remarkable contributions (e.g. in health and genetics). A key factor contributing to this sentiment is the misleading perception that LLMs can effortlessly provide solutions across domains, ignoring their limitations such as hallucinations and reasoning constraints. Acknowledging AI fallibility is crucial to addressing the impact of dogmatic overconfidence in possibly erroneous suggestions generated by LLMs. At the same time, it can reduce fear and other negative attitudes toward AI. This necessitates comprehensive AI literacy interventions that educate the public about LLM constraints and effective usage techniques, i.e., prompting strategies. With this aim, a pilot educational intervention was performed in a high school with 21 students. It involved presenting high-level concepts about intelligence, AI, and LLMs, followed by practical exercises involving ChatGPT in creating natural educational conversations and applying established prompting strategies. Encouraging preliminary results emerged, including high appreciation of the activity, improved interaction quality with the LLM, reduced negative AI sentiments, and a better grasp of limitations, specifically unreliability, limited understanding of commands leading to unsatisfactory responses, and limited presentation flexibility. Our aim is to explore AI acceptance factors and refine this approach for more controlled future studies.


A Brain Model Learns to Drive - Neuroscience News

#artificialintelligence

Summary: A new AI model that mimics the neural architecture and connections of the hippocampus is able to alter its synaptic connections as it moves a car-like virtual robot. HBP researchers at the Institute of Biophysics of the National Research Council (IBF-CNR) in Palermo, Italy, have mimicked the neuronal architecture and connections of the brain's hippocampus to develop a robotic platform capable of learning as humans do while the robot navigates around a space. The simulated hippocampus is able to alter its own synaptic connections as it moves a car-like virtual robot. Crucially, this means it needs to navigate to a specific destination only once before it is able to remember the path. This is a marked improvement over current autonomous navigation methods that rely on deep learning, and which have to calculate thousands of possible paths instead.


Book Discussion - Cognitive Design for Artificial Minds

#artificialintelligence

Cognitive Design for Artificial Minds (Routledge/Taylor & Francis, 2021) explains the crucial role that human cognition research plays in the design and realization of artificial intelligence systems, illustrating the steps necessary for the design of artificial models of cognition. It bridges the gap between the theoretical, experimental, and technological issues addressed in the context of cognitively inspired AI and computational cognitive science. The event is moderated by Antonio Chella (Professor of Robotics at the University of Palermo). The event is free (but registration is mandatory) and will be held on Gather Town (you will receive the link once registered). The book "Cognitive Design for Artificial Minds" (with related editorial reviews) can be found at: Antonio Lieto is a researcher in Artificial Intelligence at the Department of Computer Science of the University of Turin, Italy, and a research associate at the ICAR-CNR in Palermo, Italy. He is the current Vice-President of the Italian Association of Cognitive Science (2017–2022) and an ACM Distinguished Speaker on the topics of cognitively inspired AI and artificial models of cognition.


Robot taught table etiquette can explain why it won't follow the rules

New Scientist

We use what is known as inner speech, where we talk to ourselves, to evaluate situations and make more informed decisions. Now, a robot has been trained to speak its inner decision-making process aloud, giving us a view of how it prioritises competing demands. Arianna Pipitone and Antonio Chella at the University of Palermo, Italy, programmed a humanoid robot named Pepper, made by SoftBank Robotics in Japan, with software that models human cognitive processes, as well as a text-to-speech processor. This allowed Pepper to voice its decision-making process while completing a task. "With inner speech, we can better understand what the robot wants to do and what its plan is," says Chella.


AI and the News

AI Magazine

Please note that: (1) an excerpt may not reflect the overall tenor of the item, nor contain all of the relevant information; and (2) all items are offered "as is," and the fact that an item has been selected does not imply any endorsement whatsoever.

"Fifty years after a group of about 10 young scientists first met to start the nascent field of artificial intelligence, some of them returned … Since then, computers have tackled calculus, chess and even had some success at translating languages. …" (thedartmouth.com, 2006)

"… Italy, and announced their initial findings in March at the European Robotics Symposium in Palermo, Sicily. … is being done to protect us from these mechanical menaces? 'Not enough,' says Blay Whitby, an artificial-intelligence expert at the University of Sussex in England. … Robot safety is likely to surface in …"

"… should be considered a sin, the decree said, to kill an artificially created, sentient being (that is, a robot). Robots have the right to choose their own religion, it continued. … 'The idea that a machine could do things that before we thought only humans … In terms of artificial intelligence, you can't have an intelligent entity without the possibility of free will,' … said. 'It has to have choices and intentions, otherwise it is like a toaster.'"

"Capek's seminal play can be used to explore … Boryana Rossa and her colleagues sent … Ultrafuturo critiques science, specifically the uses of artificial intelligence and the responsibilities …"