From 'nerdy' Gemini to 'edgy' Grok: how developers are shaping AI behaviours
Which chatbot we choose could become an extension and reflection of our personalities, like the clothes we wear or the car we drive. Do you want an AI assistant that gushes about how it "loves humanity" or one that spews sarcasm? How about a political propagandist ready to lie? If so, ChatGPT, Grok and Qwen are at your disposal. Companies that create AI assistants, from the US to China, are increasingly wrestling with how to mould their characters, and it is no abstract debate.
- Asia > China (0.35)
- Europe > United Kingdom (0.15)
- Europe > Ukraine (0.05)
- (4 more...)
- Government > Regional Government (0.71)
- Leisure & Entertainment > Sports (0.69)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.48)
- Health & Medicine > Consumer Health (0.47)
AI Models Are Starting to Learn by Asking Themselves Questions
An AI model that learns without human input--by posing interesting queries for itself--might point the way to superintelligence. Even the smartest artificial intelligence models are essentially copycats. They learn either by consuming examples of human work or by trying to solve problems that have been set for them by human instructors. But perhaps AI can, in fact, learn in a more human way--by figuring out interesting questions to ask itself and attempting to find the right answer. A project from Tsinghua University, the Beijing Institute for General Artificial Intelligence (BIGAI), and Pennsylvania State University shows that AI can learn to reason in this way by playing with computer code.
- North America > United States > Pennsylvania (0.25)
- Asia > China > Beijing > Beijing (0.25)
- North America > United States > North Carolina (0.05)
- (5 more...)
- Information Technology (1.00)
- Education (0.90)
Tips for Keeping a Digital Diary and Why You Should
After 10 years of journaling, my only regret is not starting sooner. Keeping a daily diary doesn't come easily to most people, but it takes less effort than you might imagine. It could also become a meaningful way to reflect and grow as a person. For more than 10 years, I've written a few words every morning, and what I've learned from this practice has changed my life. My only regret is not starting sooner.
- North America > United States > California (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Information Technology > Communications > Mobile (0.70)
- Information Technology > Artificial Intelligence (0.50)
3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade
Google's AI is now even smarter and more versatile. Gemini Live is the more conversational, natural-language way of interacting with the Google Gemini AI bot using your voice. The idea is you chat with it like you would chat with a friend, interruptions and all, even if the actual answers are the same as you'd get from typing your queries into Gemini as normal. Now, about a year and a half after its debut, Gemini Live has been given what Google is describing as its "biggest update ever." The update makes the Gemini Live mode even more natural and even more conversational than before, with a better understanding of tone, nuance, pronunciation, and rhythm.
- North America > United States > California (0.15)
- Europe > United Kingdom (0.05)
- Europe > Slovakia (0.05)
- (3 more...)
- Information Technology (0.48)
- Media (0.48)
So Long, GPT-5. Hello, Qwen
In the AI boom, chatbots and GPTs come and go quickly. On a drizzly and windswept afternoon this summer, I visited the headquarters of Rokid, a startup developing smart glasses in Hangzhou, China. As I chatted with engineers, their words were swiftly translated from Mandarin to English, and then transcribed onto a tiny translucent screen just above my right eye using one of the company's new prototype devices. Rokid's high-tech spectacles use Qwen, an open-weight large language model developed by the Chinese ecommerce giant Alibaba. OpenAI's GPT-5, Google's Gemini 3, and Anthropic's Claude often score higher on benchmarks designed to gauge different dimensions of machine cleverness.
- Asia > China > Zhejiang Province > Hangzhou (0.25)
- North America > United States > Michigan (0.05)
- North America > United States > California (0.05)
- (2 more...)
Going All-In on LLM Accuracy: Fake Prediction Markets, Real Confidence Signals
Large language models are increasingly used to evaluate other models, yet these judgments typically lack any representation of confidence. This pilot study tests whether framing an evaluation task as a betting game (a fictional prediction market with its own LLM currency) improves forecasting accuracy and surfaces calibrated confidence signals. We generated 100 math and logic questions with verifiable answers. Six Baseline models (three current-generation, three prior-generation) answered all items. Three Predictor models then forecast, for each question-baseline pair, whether the baseline would answer correctly. Each predictor completed matched runs in two conditions: Control (simple correct/incorrect predictions) and Incentive (predictions plus wagers of 1-100,000 LLMCoin under even odds, starting from a 1,000,000 LLMCoin bankroll). Across 5,400 predictions per condition, Incentive runs showed modestly higher accuracy (81.5% vs. 79.1%, p = .089, d = 0.86) and significantly faster learning across rounds (12.0 vs. 2.9 percentage-point improvement from Round 1 to Round 4, p = .011). Most notably, stake size tracked confidence. "Whale" bets of 40,000+ coins were correct ~99% of the time, while small bets (<1,000 coins) showed only ~74% accuracy. The key finding is not that fictional money makes models smarter; accuracy gains were modest and did not reach statistical significance (p = .089) in this pilot. Rather, the betting mechanic created a legible confidence signal absent from binary yes/no outputs. This suggests that simple financial framing may help transform LLMs into risk-aware forecasters, making their internal beliefs visible and usable. The protocol offers a foundation for future work on meta-evaluation systems and on what may become LLM-to-LLM prediction markets.
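The stake-as-confidence finding above can be sketched as a simple binning computation: group each wager-backed prediction by stake size and report accuracy per band. This is a minimal illustration only; the record fields, band thresholds, and demo data are assumptions modeled on the abstract's "whale" and small-bet bands, not the authors' actual protocol code.

```python
# Sketch: bin wager-backed predictions by stake size to surface a
# confidence signal, as in the study's Incentive condition.
# Field names ('stake', 'correct') and thresholds are illustrative.

def stake_binned_accuracy(predictions):
    """Group predictions into stake bands and report accuracy per band.

    Each prediction is a dict with:
      'stake'   - coins wagered (1 to 100,000 LLMCoin)
      'correct' - whether the forecast matched the baseline's outcome
    """
    bands = {
        "whale (>=40k)": lambda s: s >= 40_000,
        "mid (1k-40k)": lambda s: 1_000 <= s < 40_000,
        "small (<1k)": lambda s: s < 1_000,
    }
    report = {}
    for name, in_band in bands.items():
        hits = [p["correct"] for p in predictions if in_band(p["stake"])]
        # None marks a band with no bets, rather than reporting 0% accuracy
        report[name] = sum(hits) / len(hits) if hits else None
    return report

# Invented demo records: large stakes are reliable, small stakes noisy.
demo = [
    {"stake": 50_000, "correct": True},
    {"stake": 50_000, "correct": True},
    {"stake": 500, "correct": True},
    {"stake": 500, "correct": False},
]
```

A calibrated forecaster shows exactly the gradient the study reports: accuracy rising with stake, which is the "legible confidence signal" that binary correct/incorrect outputs cannot carry.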
BEDI: A Comprehensive Benchmark for Evaluating Embodied Agents on UAVs
Mingning Guo, Mengwei Wu, Jiarun He, Shaoxian Li, Haifeng Li, Chao Tao
With the rapid advancement of low-altitude remote sensing and Vision-Language Models (VLMs), Embodied Agents based on Unmanned Aerial Vehicles (UAVs) have shown significant potential in autonomous tasks. However, current evaluation methods for UAV-Embodied Agents (UAV-EAs) remain constrained by the lack of standardized benchmarks, diverse testing scenarios, and open system interfaces. To address these challenges, we propose BEDI (Benchmark for Embodied Drone Intelligence), a systematic and standardized benchmark designed for evaluating UAV-EAs. Specifically, we introduce a novel Dynamic Chain-of-Embodied-Task paradigm based on the perception-decision-action loop, which decomposes complex UAV tasks into standardized, measurable subtasks. Building on this paradigm, we design a unified evaluation framework encompassing six core sub-skills: semantic perception, spatial perception, motion control, tool utilization, task planning, and action generation. Furthermore, we develop a hybrid testing platform that incorporates a wide range of both virtual and real-world scenarios, enabling a comprehensive evaluation of UAV-EAs across diverse contexts. The platform also offers open and standardized interfaces, allowing researchers to customize tasks and extend scenarios, thereby enhancing flexibility and scalability in the evaluation process. Finally, through empirical evaluations of several state-of-the-art (SOTA) VLMs, we reveal their limitations in embodied UAV tasks, underscoring the critical role of the BEDI benchmark in advancing embodied intelligence research and model optimization. By filling the gap in systematic and standardized evaluation within this field, BEDI facilitates objective model comparison and lays a robust foundation for future development. Our benchmark is now publicly available at https://github.com/lostwolves/BEDI.
- Asia > China > Beijing > Beijing (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- North America > Canada > Ontario > National Capital Region > Ottawa (0.04)
- (8 more...)
- Workflow (1.00)
- Research Report (1.00)
- Government > Military (0.67)
- Information Technology > Robotics & Automation (0.48)
- Transportation > Marine (0.46)
- Health & Medicine > Therapeutic Area (0.46)
Large language models replicate and predict human cooperation across experiments in game theory
Andrea Cera Palatsi, Samuel Martin-Gutierrez, Ana S. Cardenal, Max Pellert
Large language models (LLMs) are increasingly used both to make decisions in domains such as health, education and law, and to simulate human behavior. Yet how closely LLMs mirror actual human decision-making remains poorly understood. This gap is critical: misalignment could produce harmful outcomes in practical applications, while failure to replicate human behavior renders LLMs ineffective for social simulations. Here, we address this gap by developing a digital twin of game-theoretic experiments and introducing a systematic prompting and probing framework for machine-behavioral evaluation. Testing three open-source models (Llama, Mistral and Qwen), we find that Llama reproduces human cooperation patterns with high fidelity, capturing human deviations from rational choice theory, while Qwen aligns closely with Nash equilibrium predictions. Notably, we achieved population-level behavioral replication without persona-based prompting, simplifying the simulation process. Extending beyond the original human-tested games, we generate and preregister testable hypotheses for novel game configurations outside the original parameter grid. Our findings demonstrate that appropriately calibrated LLMs can replicate aggregate human behavioral patterns and enable systematic exploration of unexplored experimental spaces, offering a complementary approach to traditional research in the social and behavioral sciences that generates new empirical predictions about human social decision-making.
- North America > United States > Michigan (0.04)
- North America > Canada (0.04)
- Europe > Spain > Galicia > Madrid (0.04)
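The study's central comparison, whether a model's aggregate choices land nearer the Nash-equilibrium prediction or the human cooperation pattern, can be sketched in a few lines. The response lists and the human baseline rate below are mock values for illustration; the paper probes actual LLM outputs under a systematic prompting framework, which is not reproduced here.

```python
# Sketch: compare a model's aggregate cooperation rate in a one-shot
# Prisoner's Dilemma against the Nash prediction (always defect) and a
# stylized human baseline. All data here is invented demo input.

NASH_COOPERATION = 0.0   # rational-choice prediction: never cooperate
HUMAN_BASELINE = 0.5     # illustrative human cooperation rate, not the paper's

def cooperation_rate(choices):
    """Fraction of simulated rounds in which the model cooperated."""
    return sum(c == "cooperate" for c in choices) / len(choices)

def closest_to(rate):
    """Label whether a rate sits nearer the Nash or the human pattern."""
    if abs(rate - NASH_COOPERATION) < abs(rate - HUMAN_BASELINE):
        return "nash"
    return "human-like"

# Mock response lists standing in for probed model outputs.
mock_model_a = ["cooperate", "cooperate", "defect", "cooperate"]
mock_model_b = ["defect", "defect", "defect", "defect", "cooperate"]
```

Under this toy comparison, a model that mostly cooperates classifies as human-like and a mostly defecting model as Nash-aligned, mirroring the Llama-vs-Qwen contrast the abstract reports at a much larger scale.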
Build AI Assistants using Large Language Models and Agents to Enhance the Engineering Education of Biomechanics
Hanzhi Yan, Qin Lu, Xianqiao Wang, Xiaoming Zhai, Tianming Liu, He Li
While large language models (LLMs) have demonstrated remarkable versatility across a wide range of general tasks, their effectiveness often diminishes in domain-specific applications due to inherent knowledge gaps. Moreover, their performance typically declines when addressing complex problems that require multi-step reasoning and analysis. In response to these challenges, we propose leveraging both LLMs and AI agents to develop education assistants aimed at enhancing undergraduate learning in biomechanics courses that focus on analyzing forces and moments in the musculoskeletal system of the human body. To achieve our goal, we construct a dual-module framework to enhance LLM performance in biomechanics educational tasks: 1) we apply Retrieval-Augmented Generation (RAG) to improve the specificity and logical consistency of the LLM's responses to conceptual true/false questions; 2) we build a Multi-Agent System (MAS) to solve calculation-oriented problems involving multi-step reasoning and code execution. Specifically, we evaluate the performance of several LLMs, i.e., Qwen-1.0-32B, Qwen-2.5-32B, and Llama-70B, on a biomechanics dataset comprising 100 true/false conceptual questions and problems requiring equation derivation and calculation. Our results demonstrate that RAG significantly enhances the performance and stability of LLMs in answering conceptual questions, surpassing those of vanilla models. On the other hand, the MAS constructed using multiple LLMs demonstrates its ability to perform multi-step reasoning, derive equations, execute code, and generate explainable solutions for tasks that require calculation. These findings demonstrate the potential of applying RAG and MAS to enhance LLM performance for specialized courses in engineering curricula, providing a promising direction for developing intelligent tutoring in engineering education.
- North America > United States > Georgia > Clarke County > Athens (0.15)
- South America > Uruguay > Maldonado > Maldonado (0.04)
- Europe > Italy > Calabria > Catanzaro Province > Catanzaro (0.04)
- (2 more...)
- Health & Medicine > Health Care Technology (1.00)
- Education > Curriculum > Subject-Specific Education (1.00)
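The RAG module described above follows a standard pattern: retrieve the course material most relevant to a question, then prepend it to the prompt so the model answers against domain text rather than its general knowledge. A minimal sketch of that retrieval step follows, using naive word overlap as the relevance score; the course notes and scoring here are invented, and the paper's actual retriever and models (Qwen, Llama) are not reproduced.

```python
# Sketch of the retrieval step in a RAG pipeline: rank course notes by
# word overlap with the question and build a context-grounded prompt.
# Documents and scoring are illustrative, not the paper's implementation.

def retrieve(question, documents, k=1):
    """Return the k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question, documents):
    """Prepend retrieved context so the model answers against course text."""
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nTrue or false: {question}"

# Invented biomechanics course notes serving as the retrieval corpus.
notes = [
    "The moment of a force about a joint equals force times lever arm.",
    "Muscles generate tension only when pulling, never pushing.",
]
prompt = build_prompt("A muscle can only pull, never push.", notes)
```

Production systems replace word overlap with embedding similarity, but the shape of the pipeline, retrieve then ground the prompt, is the same mechanism the paper credits for more specific and logically consistent answers on conceptual questions.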
A benchmark multimodal oro-dental dataset for large vision-language models
Haoxin Lv, Ijazul Haq, Jin Du, Jiaxin Ma, Binnian Zhu, Xiaobing Dang, Chaoan Liang, Ruxu Du, Yingjie Zhang, Muhammad Saqib
The advancement of artificial intelligence in oral healthcare relies on the availability of large-scale multimodal datasets that capture the complexity of clinical practice. In this paper, we present a comprehensive multimodal dataset, comprising 8775 dental checkups from 4800 patients collected over eight years (2018-2025), with patients ranging from 10 to 90 years of age. The dataset includes 50000 intraoral images, 8056 radiographs, and detailed textual records, including diagnoses, treatment plans, and follow-up notes. The data were collected under standard ethical guidelines and annotated for benchmarking. To demonstrate its utility, we fine-tuned state-of-the-art large vision-language models, Qwen-VL 3B and 7B, and evaluated them on two tasks: classification of six oro-dental anomalies and generation of complete diagnostic reports from multimodal inputs. We compared the fine-tuned models with their base counterparts and GPT-4o. The fine-tuned models achieved substantial gains over these baselines, validating the dataset and underscoring its effectiveness in advancing AI-driven oro-dental healthcare solutions. The dataset is publicly available, providing an essential resource for future research in AI dentistry.
- Asia > China > Guangdong Province > Guangzhou (0.05)
- Asia > Pakistan (0.04)
- Asia > Middle East > Iran (0.04)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.04)
- Health & Medicine > Therapeutic Area > Dental and Oral Health (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (0.71)
- Health & Medicine > Health Care Technology > Medical Record (0.70)