Global Warming


The great climate paradox: Drop in air pollution has INCREASED global warming by making clouds less reflective, scientists warn

Daily Mail - Science & tech

Scientists have been faced with a huge dilemma, as research reveals that reducing air pollution has increased global warming. While smog kills millions of people every year, it also whitens clouds - making them more reflective. So by slashing air pollution, we're inadvertently diminishing the brightness of clouds, which are key regulators of global temperature.


Pope Leo condemns climate change critics

BBC News

Pope Leo XIV has hit out at those who minimise the increasingly evident impact of rising temperatures in his first major statement on climate change. Reiterating the words of his predecessor Pope Francis, the new pontiff lambasted critics who ridicule those who speak of global warming. The Pope's remarks, in a speech at Castel Gandolfo near Rome, will be seen as an implied criticism of US President Donald Trump, who last month called climate change a con. Pope Leo also called for greater action from citizens the world over on climate change, saying there was no room for indifference or resignation. The Pope was speaking at a conference to mark 10 years since the publication of Laudato Si'.


Coming soon: Our 2025 list of Climate Tech Companies to Watch

MIT Technology Review

The new edition of our annual list features startups and established businesses working to decarbonize transportation, heavy industry, energy, and more. The need to cut emissions and adapt to our warming world is growing more urgent. This year, we've seen temperatures reach record highs, as they have nearly every year for the last decade. Climate-fueled natural disasters are affecting communities around the world, costing billions of dollars. That's why, for the past two years, MIT Technology Review has curated a list of companies with the potential to make a meaningful difference in addressing climate change (you can revisit the 2024 list here). We're excited to share that we'll publish our third edition of Climate Tech Companies to Watch on October 6.


Bad news for nervous fliers! Severe turbulence is set to get even WORSE thanks to climate change, scientists say - as they discover a link between 'freak wind gusts' and global warming

Daily Mail - Science & tech

But severe turbulence is set to get even worse - with climate change to blame. That's according to Professor Lance M Leslie and Milton Speer from the University of Technology Sydney, who have discovered a link between 'freak wind gusts' and global warming. Using machine learning techniques, the pair found that heat and moisture are 'key ingredients' for dangerous wind gusts known as 'downbursts'. Downbursts can wreak havoc during takeoff and landing, causing planes to dangerously gain or lose altitude. Based on their findings, the scientists are calling for air safety authorities and airlines to be 'more vigilant during takeoff and landing in a warming world.' 'Our research is among the first to detail the heightened climate risk to airlines from thunderstorm microbursts, especially during takeoff and landing,' they explained in an article for The Conversation.


Robust Reward Modeling via Causal Rubrics

Srivastava, Pragya, Singh, Harman, Madhavan, Rahul, Patil, Gandharv, Addepalli, Sravanti, Suggala, Arun, Aravamudhan, Rengarajan, Sharma, Soumya, Laha, Anirban, Raghuveer, Aravindan, Shanmugam, Karthikeyan, Precup, Doina

arXiv.org Artificial Intelligence

Reward models (RMs) are fundamental to aligning Large Language Models (LLMs) via human feedback, yet they often suffer from reward hacking. They tend to latch on to superficial or spurious attributes, such as response length or formatting, mistaking these cues learned from correlations in training data for the true causal drivers of quality (e.g., factuality, relevance). This occurs because standard training objectives struggle to disentangle these factors, leading to brittle RMs and misaligned policies. We introduce Crome (Causally Robust Reward Modeling), a novel framework grounded in an explicit causal model designed to mitigate reward hacking. Crome employs the following synthetic targeted augmentations during training: (1) Causal Augmentations, which are pairs that differ along specific causal attributes, to enforce sensitivity along each causal attribute individually, and (2) Neutral Augmentations, which are tie-label pairs varying primarily in spurious attributes, to enforce invariance along spurious attributes. Notably, our augmentations are produced without any knowledge of spurious factors, via answer interventions only along causal rubrics that are identified by querying an oracle LLM. Empirically, Crome significantly outperforms standard baselines on RewardBench, improving average accuracy by up to 5.4% and achieving gains of up to 13.2% and 7.2% in specific categories. The robustness of Crome is further demonstrated by the consistent gains obtained in a Best-of-N inference setting across increasing N, across various benchmarks, including the popular RewardBench (covering chat, chat-hard, safety, and reasoning tasks), the safety-focused WildGuardTest, and the reasoning-specific GSM8k.
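The two augmentation types described in the abstract can be sketched as data-construction helpers. This is a minimal illustration of the pair schema only, not the paper's released code; the class and function names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    response_a: str
    response_b: str
    label: float  # 1.0 = A preferred, 0.0 = B preferred, 0.5 = tie

def causal_pair(prompt: str, good: str, degraded: str) -> PreferencePair:
    """Causal Augmentation: `degraded` flips one causal attribute
    (e.g. factuality), so the RM must prefer `good`."""
    return PreferencePair(prompt, good, degraded, label=1.0)

def neutral_pair(prompt: str, answer: str, restyled: str) -> PreferencePair:
    """Neutral Augmentation: the two responses differ only in spurious
    attributes (length, formatting), so the pair is labeled a tie."""
    return PreferencePair(prompt, answer, restyled, label=0.5)

training_pairs = [
    causal_pair("What is the capital of France?", "Paris.", "Lyon."),
    neutral_pair("What is the capital of France?", "Paris.",
                 "The capital of France is, of course, Paris."),
]
```

Training on the tie-labeled pairs pushes the RM toward invariance to style, while the causal pairs enforce sensitivity to the attribute that was actually flipped.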


Large Language Models and Synthetic Data for Monitoring Dataset Mentions in Research Papers

Solatorio, Aivin V., Macalaba, Rafael, Liounis, James

arXiv.org Artificial Intelligence

Tracking how data is mentioned and used in research papers provides critical insights for improving data discoverability, quality, and production. However, manually identifying and classifying dataset mentions across vast academic literature is resource-intensive and not scalable. This paper presents a machine learning framework that automates dataset mention detection across research domains by leveraging large language models (LLMs), synthetic data, and a two-stage fine-tuning process. We employ zero-shot extraction from research papers, an LLM-as-a-Judge for quality assessment, and a reasoning agent for refinement to generate a weakly supervised synthetic dataset. The Phi-3.5-mini instruct model is pre-fine-tuned on this dataset, followed by fine-tuning on a manually annotated subset. At inference, a ModernBERT-based classifier efficiently filters dataset mentions, reducing computational overhead while maintaining high recall. Evaluated on a held-out manually annotated sample, our fine-tuned model outperforms NuExtract-v1.5 and GLiNER-large-v2.1 in dataset extraction accuracy. Our results highlight how LLM-generated synthetic data can effectively address training data scarcity, improving generalization in low-resource settings. This framework offers a pathway toward scalable monitoring of dataset usage, enhancing transparency, and supporting researchers, funders, and policymakers in identifying data gaps and strengthening data accessibility for informed decision-making.
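The two-stage inference flow described above (a cheap classifier filters paragraphs, and only the survivors go to the expensive extraction model) can be sketched as follows. Both model calls are replaced here by toy heuristics standing in for ModernBERT and the fine-tuned Phi-3.5 model; the function names are illustrative assumptions.

```python
def mentions_dataset(paragraph: str) -> bool:
    """Stand-in for the ModernBERT filter: a crude keyword check,
    tuned for high recall rather than precision."""
    keywords = ("dataset", "survey", "census", "corpus")
    return any(k in paragraph.lower() for k in keywords)

def extract_mentions(paragraph: str) -> list[str]:
    """Stand-in for the fine-tuned extractor: collects runs of
    capitalized words ending in a dataset-like head word."""
    heads = {"Survey", "Census", "Dataset", "Corpus"}
    words = paragraph.replace(",", " ").replace(".", " ").split()
    spans, current = [], []
    for w in words:
        if w[:1].isupper():
            current.append(w)
            if w in heads:
                spans.append(" ".join(current))
                current = []
        else:
            current = []
    return spans

def run_pipeline(paragraphs: list[str]) -> dict[str, list[str]]:
    """Only paragraphs passing the cheap filter reach the extractor,
    which is the source of the computational savings."""
    return {p: extract_mentions(p) for p in paragraphs if mentions_dataset(p)}
```

In the real system the filter's threshold would be chosen to keep recall high, since any paragraph it drops can never be recovered downstream.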


Shocking images reveal the cities that 'will be flooded by global warming by 2100 as sea levels rise by up to 6.2 FEET' - so, can you tell where they are?

Daily Mail - Science & tech

As greenhouse gas emissions continue to rise, scientists reveal many of the world's cities will be plunged underwater in just 75 years. In 2100, global sea levels will rise by a staggering 6.2ft (1.9 metres) if carbon dioxide (CO2) emissions continue to increase, say experts in Singapore. Now, artificial intelligence (AI) reveals exactly what this might look like. MailOnline turned to Google's AI image generator ImageFX to depict nine of the global cities that are particularly vulnerable to rising sea levels. For each city, we gave the command: 'Show me what it would look like in the year 2100 where sea levels have risen 6.2 feet.'


Ev2R: Evaluating Evidence Retrieval in Automated Fact-Checking

Akhtar, Mubashara, Schlichtkrull, Michael, Vlachos, Andreas

arXiv.org Artificial Intelligence

Current automated fact-checking (AFC) approaches commonly evaluate evidence either implicitly via the predicted verdicts or by comparing retrieved evidence with a predefined closed knowledge source, such as Wikipedia. However, these methods suffer from limitations, resulting from their reliance on evaluation metrics developed for different purposes and constraints imposed by closed knowledge sources. Recent advances in natural language generation (NLG) evaluation offer new possibilities for evidence assessment. In this work, we introduce Ev2R, an evaluation framework for AFC that comprises three types of approaches for evidence evaluation: reference-based, proxy-reference, and reference-less. We evaluate their effectiveness through agreement with human ratings and adversarial tests, and demonstrate that prompt-based scorers, particularly those leveraging LLMs and reference evidence, outperform traditional evaluation approaches.
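Ev2R's reference-based setting scores retrieved evidence against gold reference evidence. The paper's best scorers are LLM-prompt-based; the token-overlap F1 below is only a stand-in to show the interface such a scorer exposes, and the function name is an assumption.

```python
def evidence_f1(retrieved: str, reference: str) -> float:
    """Toy reference-based evidence score: token-overlap F1 between
    the retrieved evidence and the gold reference evidence."""
    r = set(retrieved.lower().split())
    g = set(reference.lower().split())
    if not r or not g:
        return 0.0
    overlap = len(r & g)
    precision = overlap / len(r)
    recall = overlap / len(g)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A prompt-based scorer would replace the set arithmetic with an LLM judgment of whether the retrieved evidence entails the same facts as the reference, which is what the paper finds agrees best with human ratings.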


Generative Debunking of Climate Misinformation

Zanartu, Francisco, Otmakhova, Yulia, Cook, John, Frermann, Lea

arXiv.org Artificial Intelligence

Misinformation about climate change causes numerous negative impacts, necessitating corrective responses. Psychological research has offered various strategies for reducing the influence of climate misinformation, such as the fact-myth-fallacy-fact structure. However, practically implementing corrective interventions at scale represents a challenge. Automatic detection and correction of misinformation offers a solution to the misinformation problem. This study documents the development of large language models that accept as input a climate myth and produce a debunking that adheres to the fact-myth-fallacy-fact (``truth sandwich'') structure, by incorporating contrarian claim classification and fallacy detection into an LLM prompting framework. We combine open (Mixtral, Palm2) and proprietary (GPT-4) LLMs with prompting strategies of varying complexity. Experiments reveal promising performance of GPT-4 and Mixtral if combined with structured prompts. We identify specific challenges of debunking generation and human evaluation, and map out avenues for future work. We release a dataset of high-quality truth-sandwich debunkings, source code and a demo of the debunking system.
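The fact-myth-fallacy-fact structure lends itself to a simple prompt template, where the detected fallacy type is injected alongside the myth. The wording and field names below are illustrative assumptions, not the paper's released prompts.

```python
# Template enforcing the four-part "truth sandwich" structure.
TRUTH_SANDWICH_PROMPT = """\
Write a debunking of the climate myth below in four parts:
1. FACT: lead with the relevant scientific fact.
2. MYTH: state the myth once, with an explicit warning that it is false.
3. FALLACY: explain the reasoning error it commits ({fallacy}).
4. FACT: close by restating the fact.

Myth: {myth}
"""

def build_debunking_prompt(myth: str, fallacy: str) -> str:
    """Fill the template with a classified myth and its detected fallacy,
    ready to be sent to an LLM such as GPT-4 or Mixtral."""
    return TRUTH_SANDWICH_PROMPT.format(myth=myth, fallacy=fallacy)
```

In the paper's framework, the `fallacy` argument would come from the fallacy-detection component and the myth from the contrarian claim classifier, so the structured prompt is assembled automatically rather than written by hand.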


Unlearning Climate Misinformation in Large Language Models

Fore, Michael, Singh, Simranjit, Lee, Chaehong, Pandey, Amritanshu, Anastasopoulos, Antonios, Stamoulis, Dimitrios

arXiv.org Artificial Intelligence

Misinformation regarding climate change is a key roadblock in addressing one of the most serious threats to humanity. This paper investigates factual accuracy in large language models (LLMs) regarding climate information. Using true/false labeled Q&A data for fine-tuning and evaluating LLMs on climate-related claims, we compare open-source models, assessing their ability to generate truthful responses to climate change questions. We investigate the detectability of models intentionally poisoned with false climate information, finding that such poisoning may not affect the accuracy of a model's responses in other domains. Furthermore, we compare the effectiveness of unlearning algorithms, fine-tuning, and Retrieval-Augmented Generation (RAG) for factually grounding LLMs on climate change topics. Our evaluation reveals that unlearning algorithms can be effective for nuanced conceptual claims, despite previous findings suggesting their inefficacy in privacy contexts. These insights aim to guide the development of more factually reliable LLMs and highlight the need for additional work to secure LLMs against misinformation attacks.