consultation
Starmer 'appeasing' big tech firms, says online safety campaigner
A leading campaigner has accused the prime minister of appeasing big tech companies and being late to the party in regulating social media and artificial intelligence. Crossbench peer Baroness Kidron told the BBC Sir Keir Starmer needed to get on with it rather than launching more consultations. She also criticised the PM for citing his own experience as a father of two teenage children on social media, arguing that this did not make him an expert on the subject and that his family were sheltered compared to others. The government rejected the claims, with a spokesperson saying it had already introduced some of the strongest online safety protections in the world. Sir Keir has launched a consultation on banning under-16s from social media and promised to crack down on the addictive elements of the apps.
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- Europe > United Kingdom > Wales (0.05)
- (12 more...)
- Leisure & Entertainment (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.72)
- Media > Film (0.71)
UK's Starmer announces crackdown on AI chatbots in child safety push
United Kingdom Prime Minister Keir Starmer has announced a crackdown on artificial intelligence chatbots that endanger children and pledged to seek broader powers to regulate internet access for minors. Starmer's office said on Monday that the government would target "vile and illegal content created by AI" and push for legal powers to act quickly on the findings of a public consultation that will consider a social media ban for children below 16 years of age. "Technology is moving really fast, and the law has got to keep up," Starmer said in a statement. "We are acting to protect children's wellbeing and help parents to navigate the minefield of social media," he said. The measures will require all AI chatbot providers to abide by digital safety laws, including a ban on creating sexualised images without a subject's consent.
- North America > United States (0.67)
- Europe > United Kingdom (0.51)
- South America (0.41)
- (15 more...)
- Law (1.00)
- Government > Regional Government > Europe Government (0.31)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
No free pass for internet platforms on child safety, Starmer says
No online platform will get a free pass on children's safety on the internet in new plans, Prime Minister Sir Keir Starmer has said. The government is pledging to close loopholes in existing laws designed to protect children online and will consult on a social media ban for under-16s as part of plans for online safety. There are also plans to introduce powers to speedily change the law in response to developing online behaviours, and to update legislation to preserve children's social media and online data - as campaigned for by the group Jools' Law. Opponents accused the government of inaction, and have called for Parliament to be given a vote on the social media ban for children. The government had already said it would launch the public consultation in March, seeking opinions about restricting children's access to AI chatbots and limiting infinite scrolling features for children - also known as doomscrolling.
- North America > United States (0.30)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (12 more...)
- Information Technology > Communications > Social Media (0.80)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.57)
Starmer to extend online safety rules to AI chatbots after Grok scandal
The government said it would close a legal loophole in the Online Safety Act. Starmer to announce 'crackdown on vile illegal content created by AI' after scandal involving Elon Musk's Grok tool. Makers of AI chatbots that put children at risk will face massive fines or even see their services blocked in the UK under law changes to be announced by Keir Starmer on Monday. Emboldened by Elon Musk's X stopping its Grok AI tool from creating sexualised images of real people in the UK after public outrage last month, ministers are planning a "crackdown on vile illegal content created by AI". With more and more children using chatbots for everything from help with their homework to mental health support, the government said it would "move fast to shut a legal loophole and force all AI chatbot providers to abide by illegal content duties in the Online Safety Act or face the consequences of breaking the law".
- Europe > United Kingdom (0.91)
- Europe > Ukraine (0.06)
- South America > Venezuela (0.05)
- (2 more...)
- Law (1.00)
- Health & Medicine (1.00)
- Leisure & Entertainment > Sports (0.71)
- Government > Regional Government > North America Government > United States Government (0.50)
UK to consider Australia-style ban on social media for children
The UK government has launched a consultation on implementing an Australian-style social media ban for children in the UK, as well as other measures to better protect minors online. The government said on Monday it would examine evidence from around the world on a wide range of suggested proposals, including looking at whether a social media ban for minors would be effective, and if one was introduced, how best to make it work. "The consultation will look at options including raising the digital age of consent, implementing phone curfews to avoid excessive use, and restricting potentially addictive design features such as 'streaks' and 'infinite scrolling'," the government said. The UK's announcement comes as governments and regulators worldwide grapple with the rapid explosion of AI-generated content, which was highlighted this month by an international outcry over reports of Elon Musk's Grok AI chatbot generating non-consensual sexual images, including of children. The UK has already set out plans for an outright ban on artificial intelligence nudification tools, while working to stop children being able to take, share or view nude images on their devices, it said in Monday's statement.
- Europe > United Kingdom (1.00)
- North America > United States (0.52)
- Oceania > Australia (0.43)
- (13 more...)
- Information Technology > Communications > Social Media (0.92)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.56)
AGILE: A Novel Reinforcement Learning Framework of LLM Agents
We introduce a novel reinforcement learning framework of LLM agents named AGILE (AGent that Interacts and Learns from Environments) designed to perform complex conversational tasks with users, leveraging LLMs, memory, tools, and interactions with experts. The agent possesses capabilities beyond conversation, including reflection, tool usage, and expert consultation. We formulate the construction of such an LLM agent as a reinforcement learning (RL) problem, in which the LLM serves as the policy model. We fine-tune the LLM using labeled data of actions and the PPO algorithm. We focus on question answering and release a dataset for agents called ProductQA, comprising challenging questions in online shopping. Our extensive experiments on ProductQA, MedMCQA and HotPotQA show that AGILE agents based on 7B and 13B LLMs trained with PPO can outperform GPT-4 agents. Our ablation study highlights the indispensability of memory, tools, consultation, reflection, and reinforcement learning in achieving the agent's strong performance.
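The abstract frames the agent as an RL problem in which the LLM is the policy choosing among conversation, tool use, expert consultation, and reflection. A minimal toy sketch of that framing is below; the action names, state fields, and reward values are hypothetical stand-ins, not AGILE's actual design, and the real system fine-tunes the policy with PPO rather than using hand-written rules.

```python
# Toy sketch of an agent-as-policy episode loop. The LLM would normally
# implement toy_policy; here a rule-based stand-in keeps the sketch runnable.
ACTIONS = ["answer", "use_tool", "consult_expert", "reflect"]

def toy_policy(state):
    # Stand-in for the LLM policy: map the observed state to an action.
    if state.get("needs_lookup"):
        return "use_tool"
    if state.get("uncertain"):
        return "consult_expert"
    return "answer"

def run_episode(turns):
    # One episode = one conversation; the trajectory and scalar reward
    # are what an RL algorithm such as PPO would optimise over.
    trajectory, reward = [], 0.0
    for state in turns:
        action = toy_policy(state)
        trajectory.append(action)
        # Hypothetical reward shaping: good answers score 1,
        # asking the expert carries a small cost.
        if action == "answer" and state.get("correct", True):
            reward += 1.0
        elif action == "consult_expert":
            reward -= 0.2
    return trajectory, reward
```

In the paper's formulation the trajectory/reward pairs collected this way become the training signal for PPO fine-tuning of the LLM policy.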
Facial recognition could be used more widely by police
Facial recognition technology could be used more often by UK police forces, according to new plans announced by the Home Office. Policing and crime minister Sarah Jones said a widespread rollout of the equipment could mark the biggest breakthrough in catching criminals since DNA matching. People are being asked for their views on its use, as part of a 10-week consultation launched on Thursday, possibly paving the way for new laws. Jones credited the technology for helping to arrest thousands of criminals, but campaign group Big Brother Watch said increased use would make George Orwell roll in his grave. Facial recognition is used to locate wanted suspects and find vulnerable people.
- North America > United States (0.16)
- Europe > United Kingdom > Northern Ireland (0.16)
- North America > Central America (0.15)
- (15 more...)
3MDBench: Medical Multimodal Multi-agent Dialogue Benchmark
Sviridov, Ivan, Miftakhova, Amina, Tereshchenko, Artemiy, Zubkova, Galina, Blinov, Pavel, Savchenko, Andrey
Though Large Vision-Language Models (LVLMs) are being actively explored in medicine, their ability to conduct complex real-world telemedicine consultations combining accurate diagnosis with professional dialogue remains underexplored. This paper presents 3MDBench (Medical Multimodal Multi-agent Dialogue Benchmark), an open-source framework for simulating and evaluating LVLM-driven telemedical consultations. 3MDBench simulates patient variability through a temperament-based Patient Agent and evaluates diagnostic accuracy and dialogue quality via an Assessor Agent. It includes 2996 cases across 34 diagnoses from real-world telemedicine interactions, combining textual and image-based data. The experimental study compares diagnostic strategies for widely used open and closed-source LVLMs. We demonstrate that multimodal dialogue with internal reasoning improves F1 score by 6.5% over non-dialogue settings, highlighting the importance of context-aware, information-seeking questioning. Moreover, injecting predictions from a diagnostic convolutional neural network into the LVLM's context boosts F1 by up to 20%. Source code is available at https://github.com/univanxx/3mdbench.
- Asia > Russia (0.14)
- Europe > Russia > Central Federal District > Moscow Oblast > Moscow (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Health Care Technology > Telehealth (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
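The benchmark's headline numbers are F1 deltas over a set of diagnoses. As a reference point, a minimal macro-averaged F1 over diagnosis labels can be computed as below; this is a generic sketch, and whether 3MDBench uses macro, micro, or weighted averaging is not stated in the abstract.

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1 over all labels seen in either list."""
    labels = set(y_true) | set(y_pred)
    scores = []
    for lab in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        # Per-label F1 is the harmonic mean of precision and recall.
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)
```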
Patient-Centered Summarization Framework for AI Clinical Summarization: A Mixed-Methods Design
Jimenez, Maria Lizarazo, Claros, Ana Gabriela, Green, Kieran, Toro-Tobon, David, Larios, Felipe, Asthana, Sheena, Wenczenovicz, Camila, Maldonado, Kerly Guevara, Vilatuna-Andrango, Luis, Proano-Velez, Cristina, Bandi, Satya Sai Sri, Bagewadi, Shubhangi, Branda, Megan E., Zahidy, Misk Al, Luz, Saturnino, Lapata, Mirella, Brito, Juan P., Ponce-Ponte, Oscar J.
Large Language Models (LLMs) are increasingly demonstrating the potential to reach human-level performance in generating clinical summaries from patient-clinician conversations. However, these summaries often focus on patients' biology rather than their preferences, values, wishes, and concerns. To achieve patient-centered care, we propose a new standard for Artificial Intelligence (AI) clinical summarization tasks: Patient-Centered Summaries (PCS). Our objective was to develop a framework to generate PCS that capture patient values and ensure clinical utility and to assess whether current open-source LLMs can achieve human-level performance in this task. We used a mixed-methods process. Two Patient and Public Involvement groups (10 patients and 8 clinicians) in the United Kingdom participated in semi-structured interviews exploring what personal and contextual information should be included in clinical summaries and how it should be structured for clinical use. Findings informed annotation guidelines used by eight clinicians to create gold-standard PCS from 88 atrial fibrillation consultations. Sixteen consultations were used to refine a prompt aligned with the guidelines. Five open-source LLMs (Llama-3.2-3B, Llama-3.1-8B, Mistral-8B, Gemma-3-4B, and Qwen3-8B) generated summaries for 72 consultations using zero-shot and few-shot prompting, evaluated with ROUGE-L, BERTScore, and qualitative metrics. Patients emphasized lifestyle routines, social support, recent stressors, and care values. Clinicians sought concise functional, psychosocial, and emotional context. The best zero-shot performance was achieved by Mistral-8B (ROUGE-L 0.189) and Llama-3.1-8B (BERTScore 0.673); the best few-shot by Llama-3.1-8B (ROUGE-L 0.206, BERTScore 0.683). Completeness and fluency were similar between experts and models, while correctness and patient-centeredness favored human PCS.
- North America > United States > Minnesota > Olmsted County > Rochester (0.14)
- Europe > United Kingdom > England > Devon > Plymouth (0.04)
- South America > Uruguay > Maldonado > Maldonado (0.04)
- (4 more...)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.68)
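The reported scores above include ROUGE-L, which measures the longest common subsequence (LCS) between a generated summary and the gold reference. A minimal self-contained sketch of the standard LCS-based ROUGE-L F-measure on whitespace tokens is shown below; the paper likely uses a library implementation with stemming and other normalisation, so treat this as illustrative only.

```python
def lcs_len(a, b):
    # Classic dynamic-programming longest-common-subsequence length.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]

def rouge_l(candidate, reference):
    """ROUGE-L F-measure on whitespace tokens (no stemming)."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(cand), lcs / len(ref)
    return 2 * prec * rec / (prec + rec)
```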
PatientSim: A Persona-Driven Simulator for Realistic Doctor-Patient Interactions
Kyung, Daeun, Chung, Hyunseung, Bae, Seongsu, Kim, Jiho, Sohn, Jae Ho, Kim, Taerim, Kim, Soo Kyung, Choi, Edward
Doctor-patient consultations require multi-turn, context-aware communication tailored to diverse patient personas. Training or evaluating doctor LLMs in such settings requires realistic patient interaction systems. However, existing simulators often fail to reflect the full range of personas seen in clinical practice. To address this, we introduce PatientSim, a patient simulator that generates realistic and diverse patient personas for clinical scenarios, grounded in medical expertise. PatientSim operates using: 1) clinical profiles, including symptoms and medical history, derived from real-world data in the MIMIC-ED and MIMIC-IV datasets, and 2) personas defined by four axes: personality, language proficiency, medical history recall level, and cognitive confusion level, resulting in 37 unique combinations. We evaluate eight LLMs for factual accuracy and persona consistency. The top-performing open-source model, Llama 3.3 70B, is validated by four clinicians to confirm the robustness of our framework. As an open-source, customizable platform, PatientSim provides a reproducible and scalable solution that can be customized for specific training needs. Offering a privacy-compliant environment, it serves as a robust testbed for evaluating medical dialogue systems across diverse patient presentations and shows promise as an educational tool for healthcare. The code is available at https://github.com/dek924/PatientSim.
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Asia > Middle East > Israel (0.04)
- North America > United States > Texas > Coleman County (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
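PatientSim builds personas from four axes (personality, language proficiency, medical-history recall, cognitive confusion). A sketch of enumerating such axis combinations is below; the axis values here are invented for illustration, and the paper keeps 37 valid combinations rather than the full Cartesian product, implying some filtering that this sketch does not reproduce.

```python
import itertools

# Hypothetical axis values -- the paper's actual value sets differ.
AXES = {
    "personality": ["calm", "anxious", "irritable"],
    "language_proficiency": ["fluent", "limited"],
    "recall": ["high", "low"],
    "confusion": ["none", "mild", "severe"],
}

def all_personas(axes):
    """Enumerate every axis-value combination as a persona dict."""
    keys = list(axes)
    return [dict(zip(keys, combo))
            for combo in itertools.product(*(axes[k] for k in keys))]
```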