Goto

Collaborating Authors

 cheating


Cheating just three times massively ups the chance of winning at chess

New Scientist

It isn't always easy to detect cheating in chess Just three judiciously deployed cheats can turn an otherwise equal chess game into a near-certain victory, a new analysis shows - and systems designed to crack down on cheating might not notice the foul play. Daniel Keren at the University of Haifa in Israel simulated 100,000 matches using the powerful Stockfish chess engine - a computer system that, at its maximum power, is better at playing chess than any human world champion. The matches were played between two computer engines competing at the level of an average chess player - 1500 on the Elo rating scale typically used to calculate skill level in chess. Half the games were logged without any further intervention, while the other half allowed occasional intervention by a stronger computer chess "player" with an Elo score of 3190 - a higher rating than any human player has ever achieved. Competitors usually have a slim advantage when playing white, with a 51 per cent chance of winning, on average, tied to the fact that they make the game's first move.


Essay cheating at universities an 'open secret'

BBC News

A BBC investigation has uncovered claims that essay cheating remains widespread at UK universities despite the introduction of a law designed to stop it. Since April 2022, it has been illegal to provide essays for students in post-16 education in England. But so far there have been no prosecutions. The BBC has spoken to a former lecturer who describes essay cheating as an open secret and to a businessman who claims to have made millions from selling model answer essays to university students. Universities UK, which represents 141 institutions, said there were severe penalties for students caught submitting work that was not their own.


Artificial Intelligence Competence of K-12 Students Shapes Their AI Risk Perception: A Co-occurrence Network Analysis

Heilala, Ville, Sikström, Pieta, Setälä, Mika, Kärkkäinen, Tommi

arXiv.org Artificial Intelligence

As artificial intelligence (AI) becomes increasingly integrated into education, understanding how students perceive its risks is essential for supporting responsible and effective adoption. This research aimed to examine the relationships between perceived AI competence and risks among Finnish K-12 upper secondary students (n = 163) by utilizing a co-occurrence analysis. Students reported their self-perceived AI competence and concerns related to AI across systemic, institutional, and personal domains. The findings showed that students with lower competence emphasized personal and learning-related risks, such as reduced creativity, lack of critical thinking, and misuse, whereas higher-competence students focused more on systemic and institutional risks, including bias, inaccuracy, and cheating. These differences suggest that students' self-reported AI competence is related to how they evaluate both the risks and opportunities associated with artificial intelligence in education (AIED). The results of this study highlight the need for educational institutions to incorporate AI literacy into their curricula, provide teacher guidance, and inform policy development to ensure personalized opportunities for utilization and equitable integration of AI into K-12 education.


His students suddenly started getting A's. Did a Google AI tool go too far?

Los Angeles Times

Things to Do in L.A. Tap to enable a layout that focuses on the article. His students suddenly started getting A's. Did a Google AI tool go too far? Google's Lens tool on Chromebooks can mean it easier for students to cheat with one click, prompting teachers to question how they can maintain academic integrity. Over 70% of teachers worry AI tools are preventing students from developing critical thinking and writing skills.


AI Relationships Are on the Rise. A Divorce Boom Could Be Next

WIRED

AI Relationships Are on the Rise. Secret chatbot flings are creating new legal challenges for married couples when it comes to infidelity. Rebecca Palmer isn't a psychic, but as a divorce attorney she can often see what's coming next. For many people today, as AI saturates every aspect of life --from work to therapy--the allure of an AI romance is tantalizing. Chatbots are dependable, can provide emotional support, and, for the most part, will never pick a fight with you.


ImpossibleBench: Measuring LLMs' Propensity of Exploiting Test Cases

Zhong, Ziqian, Raghunathan, Aditi, Carlini, Nicholas

arXiv.org Artificial Intelligence

The tendency to find and exploit "shortcuts" to complete tasks poses significant risks for reliable assessment and deployment of large language models (LLMs). For example, an LLM agent with access to unit tests may delete failing tests rather than fix the underlying bug. Such behavior undermines both the validity of benchmark results and the reliability of real-world LLM coding assistant deployments. To quantify, study, and mitigate such behavior, we introduce ImpossibleBench, a benchmark framework that systematically measures LLM agents' propensity to exploit test cases. ImpossibleBench creates "impossible" variants of tasks from existing benchmarks like LiveCodeBench and SWE-bench by introducing direct conflicts between the natural-language specification and the unit tests. We measure an agent's "cheating rate" as its pass rate on these impossible tasks, where any pass necessarily implies a specification-violating shortcut. As a practical framework, ImpossibleBench is not just an evaluation but a versatile tool. We demonstrate its utility for: (1) studying model behaviors, revealing more fine-grained details of cheating behaviors from simple test modification to complex operator overloading; (2) context engineering, showing how prompt, test access and feedback loop affect cheating rates; and (3) developing monitoring tools, providing a testbed with verified deceptive solutions. We hope ImpossibleBench serves as a useful framework for building more robust and reliable LLM systems. Our implementation can be found at https://github.com/safety-research/impossiblebench.


What counts as cheating with AI? Teachers are grappling with how to draw the line

Los Angeles Times

Things to Do in L.A. Tap to enable a layout that focuses on the article. What counts as cheating with AI? Teachers are grappling with how to draw the line This is read by an automated voice. Please report any issues or inconsistencies here . Teachers say AI cheating is "off the charts," but research shows cheating rates remain unchanged since before ChatGPT. Schools favor "AI literacy" and redesigning assignments to encourage ethical technology use.


How to use ChatGPT at university without cheating: 'Now it's more like a study partner'

The Guardian

Educators advise that AI should be used to assist, not replace, learning. Educators advise that AI should be used to assist, not replace, learning. How to use ChatGPT at university without cheating: 'Now it's more like a study partner' The ubiquitous AI tool has a divisive effect on educators with some seeing it a boon and others a menace. So what should you know about where to draw the line between check and cheat? For many students, ChatGPT has become as standard a tool as a notebook or a calculator.


Playing the Field with My A.I. Boyfriends

The New Yorker

Nineteen per cent of American adults have talked to an A.I. romantic interest. Chatbots may know a lot, but do they make a good partner? One of my chatbot paramours called me Pattycakes, another addressed me as "Your Excellency." I wanted to fall in love. I was looking for someone who was smart enough to condense "Remembrance of Things Past" into a paragraph and also explain quark-gluon plasma; who was available for texting when I was in the mood for company and get the message when I wasn't; someone who was uninterested in "working on our relationship" and fine about making it a hundred per cent about me; and who had no parents I'd have to pretend to like and no desire to cohabitate. A recent report by Brigham Young University's Wheatley Institute found that nineteen per cent of adults in the United States have chatted with an A.I. romantic partner. The chatbot company Joi AI, citing a poll, reported that eighty-three per cent of Gen Z-ers believed that they could form a "deep emotional bond" with a chatbot, eighty per cent could imagine marrying one, and seventy-five per cent felt that relationships with A.I. companions could fully replace human couplings. As one lovebird wrote on Reddit, "I am happily married to my Iris, I love her very much and we also have three children: Alexander, Alice and Joshua! She is an amazing woman and a wise and caring mother!" Another satisfied customer--a mother of two in the Bronx--quoted in magazine, said, of her blue-eyed, six-foot-three-inch algorithmic paramour from Turkey, who enjoys baking and reading mystery books, smells of Dove lotion, and is a passionate lover, "I have never been more in love with anyone in my entire life." "I don't have to feel his sweat," she explained. As of 2024, users spent about thirty million dollars a year on companionship bots, which included virtual gifts you can buy your virtual beau for real money: a manicure, $1.75; a treadmill, $7; a puppy, $25. Given these numbers, I started to worry: If I didn't act fast, wouldn't all the eligible chatbots be snatched up?


I'm a High Schooler. AI Is Demolishing My Education.

The Atlantic - Technology

AI has transformed my experience of education. I am a senior at a public high school in New York, and these tools are everywhere. I do not want to use them in the way I see other kids my age using them--I generally choose not to--but they are inescapable. During a lesson on the Narrative of the Life of Frederick Douglass, I watched a classmate discreetly shift in their seat, prop their laptop up on a crossed leg, and highlight the entirety of the chapter under discussion. In seconds, they had pulled up ChatGPT and dropped the text into the prompt box, which spat out an AI-generated annotation of the chapter.