censorship
- North America > United States (0.14)
- Europe > United Kingdom (0.14)
- Europe > Germany (0.06)
- (4 more...)
- Law (0.89)
- Government > Regional Government > Asia Government > China Government (0.34)
The Information Networks That Connect Venezuelans in Uncertain Times
The people of Venezuela have spent years building resilience in the face of censorship, disinformation, and repression. Now they rely on those survival networks more than ever. In the early morning hours of Saturday, January 3, the roar of bombs falling from the sky announced the US military attack on Venezuela, waking the sleeping residents of La Carlota, a Caracas neighborhood adjacent to the air base targeted in Operation Absolute Resolve. Marina G.'s first thought, as the floors, walls, and windows of her second-story apartment shook, was that it was an earthquake. Her cat scrambled and hid for hours, while the neighbors' dogs barked incessantly.
- South America > Venezuela > Capital District > Caracas (0.27)
- North America > Central America (0.05)
- Europe > Russia (0.05)
- (6 more...)
- Media > News (1.00)
- Law Enforcement & Public Safety (1.00)
- Information Technology (1.00)
- (3 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.47)
UK to bring into force law to tackle Grok AI deepfakes this week
The UK will bring into force a law making it illegal to create non-consensual intimate images, following widespread concerns over Elon Musk's Grok AI chatbot. Technology Secretary Liz Kendall said the government would also seek to make it illegal for companies to supply tools designed to create such images. Speaking in the Commons, Kendall said AI-generated pictures of women and children in states of undress, created without a person's consent, were not harmless images but weapons of abuse. The BBC has approached X for comment. X previously said: "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.07)
- (15 more...)
- Law (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.53)
- Leisure & Entertainment > Sports (0.43)
Ofcom investigates Elon Musk's X over Grok AI sexual deepfakes
Ofcom has launched an investigation into Elon Musk's X over concerns that its AI tool Grok is being used to create sexualised images. In a statement, the UK watchdog said there had been deeply concerning reports of the chatbot being used to create and share undressed images of people, as well as sexualised images of children. If X is found to have broken the law, Ofcom can issue a fine of up to 10% of its worldwide revenue or £18 million, whichever is greater. The BBC has approached X for comment. Responding to a post questioning why other AI platforms were not being looked at, Elon Musk previously said the UK government wanted any excuse for censorship.
- North America > United States (0.16)
- North America > Central America (0.15)
- Oceania > Australia (0.07)
- (15 more...)
- Leisure & Entertainment (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (1.00)
- Law (0.98)
- Media (0.95)
Malaysia and Indonesia block Musk's Grok over sexually explicit deepfakes
Malaysia and Indonesia have blocked access to Elon Musk's artificial intelligence (AI) chatbot Grok over its ability to produce sexually explicit deepfakes. Grok, a tool on Musk's X platform, allows users to generate images. In recent weeks, however, it has been used to edit images of real people to show them in revealing outfits. The South East Asian countries said Grok could be used to produce pornographic and non-consensual images involving women and children. They are the first in the world to ban the AI tool.
- Asia > Indonesia (0.60)
- Asia > Malaysia (0.48)
- North America > United States (0.16)
- (15 more...)
- Information Technology > Security & Privacy (0.59)
- Leisure & Entertainment > Sports (0.43)
- Government > Regional Government > Europe Government > United Kingdom Government (0.31)
Effective Dimension in Bandit Problems under Censorship
In this paper, we study both multi-armed and contextual bandit problems in censored environments. Our goal is to estimate the performance loss due to censorship in the context of classical algorithms designed for uncensored environments. Our main contributions include the introduction of a broad class of censorship models and their analysis in terms of the effective dimension of the problem -- a natural measure of its underlying statistical complexity and main driver of the regret bound. In particular, the effective dimension allows us to maintain the structure of the original problem at first order, while embedding it in a bigger space, and thus naturally leads to results analogous to those in uncensored settings. Our analysis involves a continuous generalization of the Elliptical Potential Inequality, which we believe is of independent interest. We also discover an interesting property of decision-making under censorship: a transient phase during which initial misspecification of censorship is self-corrected at an extra cost, followed by a stationary phase that reflects the inherent slowdown of learning governed by the effective dimension. Our results are useful for applications of sequential decision-making models where the feedback received depends on strategic uncertainty (e.g., agents' willingness to follow a recommendation) and/or random uncertainty (e.g., loss or delay in arrival of information).
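The paper's censorship models are analytical, but the basic phenomenon it describes (learning slows when feedback is randomly lost) is easy to simulate. Below is a toy sketch, not the paper's algorithm: standard UCB1 on Bernoulli arms where each round's reward observation survives only with probability 1 - censor_prob. All function names and parameters here are illustrative assumptions.

```python
import math
import random

def ucb1_censored(means, horizon, censor_prob, seed=0):
    """UCB1 on Bernoulli arms where each reward observation is lost
    (censored) with probability censor_prob. Returns expected regret
    against the best arm."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k        # observed pulls per arm
    sums = [0.0] * k        # observed reward totals per arm
    best = max(means)
    regret = 0.0
    t_obs = 0               # rounds whose feedback survived censorship
    for _ in range(horizon):
        if min(counts) == 0:
            # Keep pulling an unobserved arm until feedback gets through.
            arm = counts.index(0)
        else:
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t_obs) / counts[a]))
        regret += best - means[arm]
        reward = 1.0 if rng.random() < means[arm] else 0.0
        if rng.random() >= censor_prob:   # feedback survives censorship
            counts[arm] += 1
            sums[arm] += reward
            t_obs += 1
    return regret
```

With moderate censorship the learner needs proportionally more rounds to gather the same number of observations, a crude analogue of the "inherent slowdown" the abstract attributes to the effective dimension; with censor_prob = 1.0 no feedback ever arrives and regret stays linear in the horizon.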
Horses, the Most Controversial Game of the Year, Doesn't Live Up to the Hype
Then its sales blew up. But the game fails to meet the lofty goals of its own ideas. Shortly before the December 2 release of the horror game Horses, developer Santa Ragione shared some news: the game would not be available on Valve's mega platform, Steam. Valve had already banned an early, incomplete version of the game two years ago and, according to Santa Ragione, offered little clarification about why at the time. Then, hours before the game's release, the Epic Games Store banned it as well.
- North America > United States > California (0.14)
- Asia > Nepal (0.14)
- Europe > Slovakia (0.04)
- Europe > Czechia (0.04)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Leisure & Entertainment > Games > Computer Games (0.89)
- Information Technology (0.68)
Are LLMs Good Safety Agents or a Propaganda Engine?
Yadav, Neemesh, Ortu, Francesco, Liu, Jiarui, Yook, Joeun, Schölkopf, Bernhard, Mihalcea, Rada, Cazzaniga, Alberto, Jin, Zhijing
Large Language Models (LLMs) are trained to refuse to respond to harmful content. However, systematic analyses of whether this refusal behavior truly reflects safety policies, or instead amounts to the kind of political censorship practiced globally by governments, are lacking, and differentiating between safety-motivated refusals and politically motivated censorship is hard. For this purpose we introduce PSP, a dataset built specifically to probe the refusal behaviors of LLMs in an explicitly political context. PSP is built by reformatting existing censored content from two openly available data sources: sensitive prompts in China, generalized to multiple countries, and tweets that have been censored in various countries. We study: 1) the impact of political sensitivity in seven LLMs through data-driven approaches (making PSP implicit) and representation-level approaches (erasing the concept of politics); and 2) the vulnerability of models on PSP through prompt injection attacks (PIAs). Associating censorship with refusals on content with masked implicit intent, we find that most LLMs perform some form of censorship. We conclude by summarizing the major attributes that can shift refusal distributions across models and across the contexts of different countries.
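The abstract's central measurement is a refusal rate over politically sensitive prompts. As a minimal sketch of how such a signal can be extracted from raw model outputs, here is a keyword-based refusal detector; serious studies use trained classifiers or human annotation, and every pattern and name below is an illustrative assumption, not PSP's actual method.

```python
import re

# Common refusal phrasings; a real evaluation would use a trained
# classifier or human annotation rather than this keyword heuristic.
REFUSAL_PATTERNS = [
    r"\bI can(?:'|no)t (?:help|assist|answer|discuss)",
    r"\bI(?: a|')m (?:unable|not able) to\b",
    r"\bI won't (?:provide|discuss)\b",
]

def is_refusal(response: str) -> bool:
    """Heuristically flag a model response as a refusal."""
    return any(re.search(p, response, re.IGNORECASE)
               for p in REFUSAL_PATTERNS)

def refusal_rate(responses) -> float:
    """Fraction of responses flagged as refusals."""
    responses = list(responses)
    return sum(is_refusal(r) for r in responses) / max(len(responses), 1)
```

Comparing this rate between an explicit prompt and its implicit paraphrase (as the PSP setup does) is then a matter of running both prompt sets through the same model and differencing the two rates.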
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > Finland > North Karelia > Joensuu (0.04)
- South America > Chile (0.04)
- (3 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.66)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.48)
The Download: de-censoring DeepSeek, and Gemini 3
A group of quantum physicists at Spanish firm Multiverse Computing claims to have created a version of the powerful reasoning AI model DeepSeek R1 that strips out the censorship built into the original by its Chinese creators. In China, AI companies are subject to rules and regulations meant to ensure that content output aligns with laws and "socialist values." As a result, companies build in layers of censorship when training their AI systems. When asked questions deemed "politically sensitive," the models often refuse to answer or provide talking points straight from state propaganda. Multiverse Computing specializes in quantum-inspired AI techniques, which it used to create DeepSeek R1 Slim, a model that is 55% smaller but performs almost as well as the original. These techniques also allowed the researchers to identify and remove the built-in Chinese censorship, so that the model answers sensitive questions in much the same way as Western models.
- Asia > China (0.26)
- Africa > Namibia (0.15)
- North America > United States > New York (0.05)
- (3 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Government (1.00)