AI Hallucination
A Review of Generative AI in Computer Science Education: Challenges and Opportunities in Accuracy, Authenticity, and Assessment
Iman Reihanian, Yunfei Hou, Yu Chen, Yifei Zheng
This paper surveys the use of Generative AI tools, such as ChatGPT and Claude, in computer science education, focusing on key aspects of accuracy, authenticity, and assessment. Through a literature review, we highlight both the challenges and opportunities these AI tools present. While Generative AI improves efficiency and supports creative student work, it raises concerns such as AI hallucinations, error propagation, bias, and blurred lines between AI-assisted and student-authored content. Human oversight is crucial for addressing these concerns. Existing literature recommends adopting hybrid assessment models that combine AI with human evaluation, developing bias detection frameworks, and promoting AI literacy for both students and educators. Our findings suggest that the successful integration of AI requires a balanced approach, considering ethical, pedagogical, and technical factors. Future research may explore enhancing AI accuracy, preserving academic integrity, and developing adaptive models that balance creativity with precision.
AI hallucinations are getting worse – and they're here to stay
AI chatbots from tech companies such as OpenAI and Google have received so-called reasoning upgrades in recent months, ostensibly to make them better at giving us answers we can trust, but recent testing suggests they sometimes do worse than previous models. The errors made by chatbots, known as "hallucinations", have been a problem from the start, and it is becoming clear we may never get rid of them. Hallucination is a blanket term for certain kinds of mistakes made by the large language models (LLMs) that power systems like OpenAI's ChatGPT or Google's Gemini. It is best known as a description of the way they sometimes present false information as true, but it can also refer to an AI-generated answer that is factually accurate yet irrelevant to the question asked, or that fails to follow instructions in some other way.
Beyond Misinformation: A Conceptual Framework for Studying AI Hallucinations in (Science) Communication
This paper proposes a conceptual framework for understanding AI hallucinations as a distinct form of misinformation. While misinformation scholarship has traditionally focused on human intent, generative AI systems now produce false yet plausible outputs absent of such intent. I argue that these AI hallucinations should not be treated merely as technical failures but as communication phenomena with social consequences. Drawing on a supply-and-demand model and the concept of distributed agency, the framework outlines how hallucinations differ from human-generated misinformation in production, perception, and institutional response. I conclude by outlining a research agenda for communication scholars to investigate the emergence, dissemination, and audience reception of hallucinated content, with attention to macro (institutional), meso (group), and micro (individual) levels. This work urges communication researchers to rethink the boundaries of misinformation theory in light of probabilistic, non-human actors increasingly embedded in knowledge production.
You can trick Google's AI Overviews into explaining made-up idioms
As Big Tech pours countless dollars and resources into AI, preaching the gospel of its utopia-creating brilliance, here's a reminder that algorithms can screw up. The latest evidence: You can trick Google's AI Overview (the automated answers at the top of your search queries) into explaining fictional, nonsensical idioms as if they were real. According to Google's AI Overview (via @gregjenner on Bluesky), "You can't lick a badger twice" means you can't trick or deceive someone a second time after they've been tricked once. That sounds like a logical attempt to explain the idiom, if only it weren't poppycock. Google's Gemini-powered failure came in assuming the question referred to an established phrase rather than absurd mumbo jumbo designed to trick it.
Shining a Light on AI Hallucinations
The ability of artificial intelligence (AI) to sift through mountains of information and deliver useful results is rapidly reshaping the way people learn, work, and handle numerous tasks. Yet, for all the convenience and value Generative AI and large language models (LLMs) deliver, they have a problem. Despite delivering text, video, and images that appear accurate and convincing, they sometimes hallucinate. These fabrications, which can range from minor, plausible errors to utterly absurd assertions, are a legitimate cause for concern. At the very least, the resulting misinformation or botched image can be mildly amusing or annoying.
Scientists Develop New Algorithm to Spot AI 'Hallucinations'
An enduring problem with today's generative artificial intelligence (AI) tools, like ChatGPT, is that they often confidently assert false information. Computer scientists call this behavior "hallucination," and it's a key barrier to AI's usefulness. Hallucinations have led to some embarrassing public slip-ups. In February, Air Canada was forced by a tribunal to honor a discount that its customer-support chatbot had mistakenly offered to a passenger. In May, Google was forced to make changes to its new "AI overviews" search feature, after the bot told some users that it was safe to eat rocks. And last June, two lawyers were fined $5,000 by a U.S. judge after one of them admitted he had used ChatGPT to help write a court filing.
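The summary above doesn't describe how the new algorithm works, so the sketch below is only a hedged illustration of one widely reported family of hallucination detectors: sample several answers to the same question, group answers that mean the same thing, and treat high disagreement (high "semantic entropy") as a hallucination signal. The `semantic_entropy` helper, its exact-match clustering, and the 0.7 threshold are illustrative assumptions, not the researchers' actual method.

```python
import math
from collections import Counter

def semantic_entropy(answers: list[str]) -> float:
    """Entropy over clusters of answers that 'mean the same thing'.
    Clustering here is a toy: exact match after normalization. Real
    systems use an entailment model to judge semantic equivalence."""
    clusters = Counter(a.strip().lower().rstrip(".") for a in answers)
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in clusters.values())

def looks_like_hallucination(answers: list[str], threshold: float = 0.7) -> bool:
    """High entropy means the model tells a different story each time,
    a signal the content is confabulated rather than known."""
    return semantic_entropy(answers) > threshold

# A model that "knows" gives consistent samples; a confabulating one doesn't.
consistent = ["Paris", "Paris.", "paris"]
scattered = ["Paris", "Lyon", "Marseille"]
print(looks_like_hallucination(consistent))  # False: entropy is 0
print(looks_like_hallucination(scattered))   # True: entropy is ln(3) ~ 1.1
```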
Reduce AI Hallucinations With This Neat Software Trick
If you've ever used a generative artificial intelligence tool, it's lied to you. These recurring fabrications are often called AI hallucinations, and developers are feverishly working to make generative AI tools more reliable by reining in these unfortunate fibs. One of the most popular approaches to reducing AI hallucinations--and one that is quickly growing more popular in Silicon Valley--is called retrieval augmented generation. The RAG process is quite complicated, but on a basic level it augments your prompts by gathering info from a custom database, and then the large language model generates an answer based on that data. For example, a company could upload all of its HR policies and benefits to a RAG database and have the AI chatbot just focus on answers that can be found in those documents.
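The article names the technique without showing it, so here is a minimal sketch of the retrieve-then-generate loop it describes. Everything in it is an assumption for illustration: the word-overlap retriever, the in-memory `HR_DOCS` list, and the `generate()` stub are hypothetical stand-ins (a real deployment would use vector embeddings and an actual LLM API), not anything from the article.

```python
# Toy retrieval augmented generation (RAG): fetch the most relevant
# passages from a custom document store, then ask the model to answer
# using only that retrieved context.

def score(query: str, passage: str) -> int:
    """Toy relevance score: number of words the query and passage share."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k passages that best match the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

def answer_question(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    prompt = (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)

# Example mirroring the article's scenario: an HR policy knowledge base.
HR_DOCS = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Health benefits begin on the first day of the month after hire.",
    "Remote work requires written manager approval.",
]
print(answer_question("When do health benefits start?", HR_DOCS))
```

Constraining the model to retrieved passages is what reduces hallucinations: instead of free-associating from its training data, the model is told to refuse when the database has no answer.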
AI Hallucinations: A Misnomer Worth Clarifying
Negar Maleki, Balaji Padmanabhan, Kaushik Dutta
As large language models continue to advance in Artificial Intelligence (AI), text generation systems have been shown to suffer from a problematic phenomenon often termed "hallucination." However, with AI's increasing presence across various domains, including medicine, concerns have arisen regarding the use of the term itself. In this study, we conducted a systematic review to identify papers defining "AI hallucination" across fourteen databases. We present and analyze the definitions obtained across all databases, categorize them based on their applications, and extract key points within each category. Our results highlight a lack of consistency in how the term is used, but also help identify several alternative terms in the literature. We discuss the implications of these findings and call for a more unified effort to bring consistency to an important contemporary AI issue that can affect multiple domains significantly.
In Defense of AI Hallucinations
No one knows whether artificial intelligence will be a boon or curse in the far future. But right now, there's almost universal discomfort and contempt for one habit of these chatbots and agents: hallucinations, those made-up facts that appear in the outputs of large language models like ChatGPT. In the middle of what seems like a carefully constructed answer, the LLM will slip in something that seems reasonable but is a total fabrication. Your typical chatbot can make disgraced ex-congressman George Santos look like Abe Lincoln. Since it looks inevitable that chatbots will one day generate the vast majority of all prose ever written, all the AI companies are obsessed with minimizing and eliminating hallucinations, or at least convincing the world the problem is in hand.
Fox News Artificial Intelligence Newsletter: Navy finds perfect wingman for carrier pilots
The USS Gerald R. Ford, the largest warship in the world, is seen from the air anchored in the Gulf of Trieste, Italy. MOVE OVER, MAVERICK: OPINION: Navy finds perfect wingman for carrier pilots. BREAST CANCER BREAKTHROUGH: AI predicts one-third of cases prior to diagnosis in mammography study. AI hallucinations are the category of content that may be inaccurate, nonsensical or even harmful because AI models draw on outdated or incorrect data sets.