GTA 6 and everything else: What to watch in video games in 2026
The video games industry is unpredictable. If you'd told us this time last year that a previously unknown French studio would claim game of the year, that Battlefield 6 would knock Call of Duty off the top of the annual charts, and that Saudi Arabia would buy gaming giant Electronic Arts (EA), we'd have been... sceptical. So you'd have to be very sure of yourself - or very foolish - to try and predict what's going to happen in the year ahead. Luckily, we're not in the crystal ball business here at BBC Newsbeat, but there are a few things we can be confident video game fans should keep an eye on in 2026.
GTA 6: Will it actually arrive in 2026?
- Asia > Middle East > Saudi Arabia (0.26)
- North America > United States (0.15)
- North America > Central America (0.14)
- (15 more...)
Examining Student Interactions with a Pedagogical AI-Assistant for Essay Writing and their Impact on Students' Writing Quality
Febriantoro, Wicaksono, Zhou, Qi, Suraworachet, Wannapon, Bulathwela, Sahan, Gauthier, Andrea, Millan, Eva, Cukurova, Mutlu
The dynamic nature of interactions between students and GenAI, as well as their relationship to writing quality, remains underexplored. While most research has examined how general-purpose GenAI can support writing, fewer studies have investigated how students interact with pedagogically designed systems across different phases of the writing process. To address this gap, we evaluated a GenAI-driven essay-writing assistant (EWA) designed to support higher education students in argumentative writing. Drawing on 1,282 interaction logs from 32 undergraduates during a two-hour writing session, Sequential Pattern Mining and K-Means clustering were used to identify behavioral patterns. Two clusters emerged: Cluster 1 emphasized outline planning and essay structure, while Cluster 2 focused on content development. A Mann-Whitney U test revealed a moderate effect size (r = 0.36) in the essay Organization dimension, with Cluster 1 showing higher scores. Qualitative analysis indicated that students with better performance actively wrote and shared essay sections with EWA for feedback, rather than interacting passively by asking questions. These findings suggest implications for teaching and system design. Teachers can encourage active engagement, while future EWAs may integrate automatic labeling and monitoring to prompt students to move from questioning to writing, enabling fuller benefits from GenAI-supported learning.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Education > Educational Setting (1.00)
- Education > Curriculum > Subject-Specific Education (1.00)
- Education > Educational Technology > Educational Software > Computer Based Training (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.87)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.70)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.70)
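The cluster-then-compare pipeline the abstract describes (K-Means over behavioral features, then a Mann-Whitney U test on a quality dimension) can be sketched as follows. This is a minimal, self-contained illustration on invented toy data, not the authors' features, logs, or scores; the rank-biserial correlation is used here as the effect size.

```python
import numpy as np

def kmeans(X, k=2, iters=50):
    """Plain K-Means with a naive init (first k rows as centers) - fine for a sketch."""
    centers = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each row to its nearest center (squared Euclidean distance)
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers as the mean of their assigned rows
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def mann_whitney(a, b):
    """Mann-Whitney U statistic and rank-biserial effect size for two samples."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    # U counts, over all pairs, how often an a-value beats a b-value (ties count 0.5)
    u = (a[:, None] > b[None, :]).sum() + 0.5 * (a[:, None] == b[None, :]).sum()
    r = 2 * u / (len(a) * len(b)) - 1  # rank-biserial correlation in [-1, 1]
    return u, r

# Toy per-student action counts: [outline edits, questions asked, drafts shared]
X = np.array([[9, 2, 8], [8, 3, 7], [10, 1, 9],   # planning-heavy students
              [1, 9, 2], [2, 8, 1], [1, 10, 2]])  # question-heavy students
labels = kmeans(X, k=2)

# Hypothetical Organization scores, compared across the discovered clusters
scores = np.array([4.5, 4.0, 4.8, 3.1, 3.4, 2.9])
u, r = mann_whitney(scores[labels == labels[0]], scores[labels != labels[0]])
print(labels, u, r)
```

On real interaction logs the feature vectors would come from mined action sequences rather than raw counts, but the cluster-then-test structure is the same.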
Game at centre of AI debate in running for top Bafta award
A video game at the centre of a debate over artificial intelligence (AI) is in the running for the top prize at next year's Bafta Game Awards. Arc Raiders, from Swedish developer Embark Studios, has been a smash-hit since its October launch, selling more than four million copies. But the multiplayer shooter has been criticised for using text-to-speech tools to create additional lines, based on dialogue previously recorded by the game's actors. It is one of 10 titles longlisted for the prestigious best game award, with a shortlist to be announced in the run-up to April's annual ceremony. Other games up for the top prize include blockbusters Ghost of Yōtei and Death Stranding 2, indie games Hollow Knight: Silksong and Hades II, and indie adventure Blue Prince.
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- Europe > United Kingdom > Wales (0.06)
- (14 more...)
Unintentional Consequences: Generative AI Use for Cybercrime
Luu, Truong Jack, Samuel, Binny M.
The democratization of generative AI introduces new forms of human-AI interaction and raises urgent safety, ethical, and cybersecurity concerns. We develop a socio-technical explanation for how generative AI enables and scales cybercrime. Drawing on affordance theory and technological amplification, we argue that generative AI systems create new action possibilities for cybercriminals and magnify pre-existing malicious intent by lowering expertise barriers and increasing attack efficiency. To illustrate this framework, we conduct interrupted time series analyses of two large datasets: (1) 464,190,074 malicious IP address reports from AbuseIPDB, and (2) 281,115 cryptocurrency scam reports from Chainabuse. Using November 30, 2022, as a high-salience public-access shock, we estimate the counterfactual trajectory of reported cyber abuse absent the release, providing an early-warning impact assessment of a general-purpose AI technology. Across both datasets, we observe statistically significant post-intervention increases in reported malicious activity, including an immediate increase of over 1.12 million weekly malicious IP reports and about 722 weekly cryptocurrency scam reports, with sustained growth in the latter. We discuss implications for AI governance, platform-level regulation, and cyber resilience, emphasizing the need for multi-layer socio-technical strategies that help key stakeholders maximize AI's benefits while mitigating its growing cybercrime risks.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.67)
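The interrupted time series design described above is often implemented as a segmented regression: fit a pre-existing trend plus a post-intervention level change and slope change. The sketch below is a minimal illustration on synthetic weekly counts with an injected effect; it is not the authors' model, datasets, or estimates.

```python
import numpy as np

def interrupted_time_series(y, t0):
    """Segmented regression y ~ b0 + b1*t + b2*post + b3*(t - t0)*post.

    Returns (level_change, slope_change) at the intervention index t0.
    """
    t = np.arange(len(y), dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([
        np.ones_like(t),     # intercept
        t,                   # pre-existing trend
        post,                # immediate level change at the intervention
        (t - t0) * post,     # change in slope after the intervention
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
    return beta[2], beta[3]

# Synthetic weekly report counts: flat-ish baseline, then a jump plus upward drift
rng = np.random.default_rng(0)
weeks, t0 = 120, 60
y = 100 + 0.1 * np.arange(weeks) + rng.normal(0, 2, weeks)
y[t0:] += 30 + 0.5 * np.arange(weeks - t0)  # injected post-shock effect

level, slope = interrupted_time_series(y, t0)
print(level, slope)  # recovers roughly +30 level and +0.5 slope, up to noise
```

The counterfactual trajectory the abstract mentions is the fitted pre-intervention line extended past t0; the post-intervention coefficients measure the departure from it.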
Generative AI in Sociological Research: State of the Discipline
Alvero, AJ, Stoltz, Dustin S., Stuhler, Oscar, Taylor, Marshall
Generative artificial intelligence (GenAI) has garnered considerable attention for its potential utility in research and scholarship. A growing body of work in sociology and related fields demonstrates both the potential advantages and risks of GenAI, but these studies are largely proof-of-concept or specific audits of models and products. We know comparatively little about how sociologists actually use GenAI in their research practices and how they view its present and future role in the discipline. In this paper, we describe the current landscape of GenAI use in sociological research based on a survey of authors in 50 sociology journals. Our sample includes both computational sociologists and non-computational sociologists and their collaborators. We find that sociologists primarily use GenAI to assist with writing tasks: revising, summarizing, editing, and translating their own work. Respondents report that GenAI saves time and that they are curious about its capabilities, but they do not currently feel strong institutional or field-level pressure to adopt it. Overall, respondents are wary of GenAI's social and environmental impacts and express low levels of trust in its outputs, but many believe that GenAI tools will improve over the next several years. We do not find large differences between computational and non-computational scholars in terms of GenAI use, attitudes, and concern; nor do we find strong patterns by familiarity or frequency of use. We discuss what these findings suggest about the future of GenAI in sociology and highlight challenges for developing shared norms around its use in research practice.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Overview (1.00)
- Education (0.88)
- Government (0.68)
- Law > Environmental Law (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Generation (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Human Experts' Evaluation of Generative AI for Contextualizing STEAM Education in the Global South
Nyaaba, Matthew, Nabang, Macharious, Kyeremeh, Patrick, Nantomah, Ibrahim, Owusu-Fordjour, Collins, Ako, Martin, Akanzire, Bismark Nyaaba, Nantomah, Kassim Korah, Issaka, Cecilia, Zhai, Xiaoming
STEAM education in many parts of the Global South remains abstract and weakly connected to learners' sociocultural realities. This study examines how human experts evaluate the capacity of Generative AI (GenAI) to contextualize STEAM instruction in these settings. Using a convergent mixed-methods design grounded in human-centered and culturally responsive pedagogy, four STEAM education experts reviewed standardized Ghana NaCCA lesson plans and GenAI-generated lessons created with a customized Culturally Responsive Lesson Planner (CRLP). Quantitative data were collected with a validated 25-item Culturally Responsive Pedagogy Rubric assessing bias awareness, cultural representation, contextual relevance, linguistic responsiveness, and teacher agency. Qualitative reflections provided additional insight into the pedagogical and cultural dynamics of each lesson. Findings show that GenAI, especially through the CRLP, improved connections between abstract standards and learners' lived experiences. Teacher Agency was the strongest domain, while Cultural Representation was the weakest. CRLP-generated lessons were rated as more culturally grounded and pedagogically engaging. However, GenAI struggled to represent Ghana's cultural diversity, often producing surface-level references, especially in Mathematics and Computing. Experts stressed the need for teacher mediation, community input, and culturally informed refinement of AI outputs. Future work should involve classroom trials, broader expert participation, and fine-tuning with Indigenous corpora.
- North America (0.93)
- Africa > Ghana (0.70)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
- Education > Educational Setting (1.00)
- Education > Curriculum > Subject-Specific Education (1.00)
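The rubric-based comparison above boils down to aggregating expert scores by domain and ranking the domain means. A minimal sketch, with the five domains taken from the abstract but the rater scores invented for illustration:

```python
from statistics import mean

# Hypothetical 1-5 scores from four expert raters, keyed by rubric domain.
# Domain names follow the abstract; the numbers are invented.
ratings = {
    "Bias Awareness":            [4, 4, 3, 4],
    "Cultural Representation":   [2, 3, 2, 3],
    "Contextual Relevance":      [4, 3, 4, 4],
    "Linguistic Responsiveness": [3, 4, 3, 3],
    "Teacher Agency":            [5, 4, 5, 5],
}

# Mean score per domain, ranked strongest to weakest
domain_means = {d: mean(s) for d, s in ratings.items()}
ranked = sorted(domain_means, key=domain_means.get, reverse=True)
print(ranked[0], "|", ranked[-1])  # strongest | weakest domain
```

With the invented numbers this reproduces the shape of the reported result (Teacher Agency strongest, Cultural Representation weakest); the actual study averages 25 items across lesson plans per domain.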
Towards Synergistic Teacher-AI Interactions with Generative Artificial Intelligence
Cukurova, Mutlu, Suraworachet, Wannapon, Zhou, Qi, Bulathwela, Sahan
Generative artificial intelligence (GenAI) is increasingly used in education, posing significant challenges for teachers adapting to these changes. GenAI offers unprecedented opportunities for accessibility, scalability and productivity in educational tasks. However, the automation of teaching tasks through GenAI raises concerns about reduced teacher agency, potential cognitive atrophy, and the broader deprofessionalisation of teaching. Drawing findings from prior literature on AI in Education, and refining through a recent systematic literature review, this chapter presents a conceptualisation of five levels of teacher-AI teaming: transactional, situational, operational, praxical and synergistic teaming. The framework aims to capture the nuanced dynamics of teacher-AI interactions, particularly with GenAI, that may lead to the replacement, complementarity, or augmentation of teachers' competences and professional practice. GenAI technological affordances required in supporting teaming, along with empirical studies, are discussed. Drawing on empirical observations, we outline a future vision that moves beyond individual teacher agency toward collaborative decision-making between teachers and AI, in which both agents engage in negotiation, constructive challenge, and co-reasoning that enhance each other's capabilities and enable outcomes neither could realise independently. Further discussion of socio-technical factors beyond teacher-AI teaming is also included to streamline the synergy of teachers and AI in education ethically and practically.
- Education > Educational Setting (1.00)
- Education > Educational Technology > Educational Software (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.71)
Stable diffusion models reveal a persisting human and AI gap in visual creativity
Rondini, Silvia, Alvarez-Martin, Claudia, Angermair-Barkai, Paula, Penacchio, Olivier, Paz, M., Pelowski, Matthew, Dediu, Dan, Rodriguez-Fornells, Antoni, Cerda-Company, Xim
While recent research suggests Large Language Models match human creative performance in divergent thinking tasks, visual creativity remains underexplored. This study compared image generation in human participants (Visual Artists and Non-Artists) and using an image generation AI model (two prompting conditions with varying human input: high for Human-Inspired, low for Self-Guided). Human raters (N=255) and GPT-4o evaluated the creativity of the resulting images. We found a clear creativity gradient, with Visual Artists being the most creative, followed by Non-Artists, then Human-Inspired generative AI, and finally Self-Guided generative AI. Increased human guidance strongly improved GenAI's creative output, bringing its productions close to those of Non-Artists. Notably, human and AI raters also showed vastly different creativity judgment patterns. These results suggest that, in contrast to language-centered tasks, GenAI models may face unique challenges in visual domains, where creativity depends on perceptual nuance and contextual sensitivity, distinctly human capacities that may not be readily transferable from language models.
- North America > United States (1.00)
- Europe > United Kingdom > England (0.28)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
What AI doesn't know: we could be creating a global 'knowledge collapse' Deepak Varuvel Dennison
As GenAI becomes the primary way to find information, local and traditional wisdom is being lost. And we are only beginning to realise what we're missing. This article was originally published as 'Holes in the web' on Aeon.co. A few years back, my dad was diagnosed with a tumour on his tongue - which meant we had some choices to weigh up. My family has an interesting dynamic when it comes to medical decisions. While my older sister is a trained doctor in western allopathic medicine, my parents are big believers in traditional remedies. Having grown up in a small town in India, I am accustomed to rituals. My dad had a ritual, too. Every time we visited his home village in southern Tamil Nadu, he'd get a bottle of thick, pungent, herb-infused oil from a vaithiyar, a traditional doctor practising Siddha medicine. It was his way of maintaining his connection with the kind of medicine he had always known and trusted.
- Leisure & Entertainment > Sports (0.68)
- Education (0.68)
- Government > Regional Government > North America Government > United States Government (0.46)
Examining the Usage of Generative AI Models in Student Learning Activities for Software Programming
Chen, Rufeng, Jiang, Shuaishuai, Shen, Jiyun, Moon, AJung, Wei, Lili
The rise of Generative AI (GenAI) tools like ChatGPT has created new opportunities and challenges for computing education. Existing research has primarily focused on GenAI's ability to complete educational tasks and its impact on student performance, often overlooking its effects on knowledge gains. In this study, we investigate how GenAI assistance compares to conventional online resources in supporting knowledge gains across different proficiency levels. We conducted a controlled user experiment with 24 undergraduate students of two different levels of programming experience (beginner, intermediate) to examine how students interact with ChatGPT while solving programming tasks. We analyzed task performance, conceptual understanding, and interaction behaviors. Our findings reveal that generating complete solutions with GenAI significantly improves task performance, especially for beginners, but does not consistently result in knowledge gains. Importantly, usage strategies differ by experience: beginners tend to rely heavily on GenAI toward task completion, often without knowledge gain in the process, while intermediates adopt more selective approaches. We find that both over-reliance and minimal use result in weaker knowledge gains overall. Based on our results, we call on students and educators to adopt GenAI as a learning rather than a problem-solving tool. Our study highlights the urgent need for guidance when integrating GenAI into programming education to foster deeper understanding.
The rapid development of Generative Artificial Intelligence (GenAI) has led to its widespread adoption across various domains to boost productivity and streamline workflows. Large Language Models (LLMs), such as OpenAI's ChatGPT and Codex, Google Gemini, and GitHub Copilot, have been integrated into domains including software engineering [1], [2], healthcare [3], education [4], creative writing [5], [6], and digital music [7], offering capabilities such as code generation, question answering, and image generation. Some studies evaluated GenAI's performance on programming tasks [8], user interface design education [9], and computer vision coursework [10]. Others focused on assessing the accuracy and usability of GenAI-generated responses [11], [12].
These authors contributed equally to this work.
- North America > United States (0.30)
- North America > Canada > Quebec > Montreal (0.15)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Education > Curriculum > Subject-Specific Education (0.68)
- Education > Educational Setting (0.48)
- Education > Educational Technology (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)