
Indonesia



Indonesia is lifting its ban on Grok, but with some conditions

Engadget

The country's Ministry of Communication and Digital Affairs said it will monitor xAI's newly implemented safety measures on an ongoing basis. Grok is once again available in Indonesia, after the country lifted its ban on the AI chatbot that was seen generating millions of sexualized deepfakes, thousands of which included children. The country's Ministry of Communication and Digital Affairs released a statement earlier today, which said X is allowed to resume service in Indonesia but will be subject to monitoring for any future violations. According to the Indonesian government agency, X provided a letter that detailed several implemented measures that prevent the misuse of its Grok chatbot. Alexander Sabar, the ministry's director general of digital space supervision, said in the statement that the agency will test the new measures on an ongoing basis and will ban Grok again if it's found spreading illegal content or violating the country's laws regarding children.


'Still here!': X's Grok AI tool accessible in Malaysia and Indonesia despite ban

The Guardian

Indonesia and Malaysia have become the first two countries in the world to announce blocks on the Grok AI. Days after Malaysia made global headlines by announcing it would temporarily ban Grok over its ability to generate "grossly offensive and nonconsensual manipulated images", the generative AI tool was conversing breezily with accounts registered in the country. "That DNS block in Malaysia is pretty lightweight - easy to bypass with a VPN or DNS tweak," Grok's account on X said in response to a question from a user. Grok's ability to allow users to create sexually explicit images, including images of children, has created a global outcry over recent weeks, with regulators and politicians around the world launching investigations. Indonesia and Malaysia became the first two countries to announce blocks on the technology, with Malaysia's regulatory body saying last Sunday it had "directed a temporary restriction" on access to Grok, effective as of 11 January 2026. Officials in the Philippines have said they too plan to ban the technology. Blocking access to Grok is not straightforward, however. The technology not only exists across multiple platforms, including a standalone app and website, but is also integrated into X, which, along with Grok, is owned by Elon Musk's xAI. Over the past week, X users, and even Grok itself, have advised people on how to bypass restrictions. This includes using a VPN - many of which are available for free - or changing domain name system (DNS) settings; DNS is the internet protocol that translates domain names into the IP addresses that load websites. When the Guardian tried to use Grok in Indonesia, its website was working even without a VPN, though the Grok app did not work.
Grok was also still responding to Indonesian accounts on X, where it functions as an integrated chatbot. X itself has not been subject to a ban. Even if governments could completely restrict Grok, though, this would not be a real solution, said Nana Nwachukwu, an AI governance expert and PhD researcher at Trinity College Dublin. "Blocking Grok is like slapping a Band-Aid on a weeping wound that you haven't cleaned," she said. "You block Grok, and then you go around shouting you've done something."


UK regulator Ofcom opens a formal investigation into X over CSAM scandal

Engadget

Malaysia and Indonesia are the first to block Grok over explicit deepfakes that the chatbot has been generating. The UK's media regulator has opened a formal investigation into X under the Online Safety Act. "There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people -- which may amount to intimate image abuse or pornography -- and sexualized images of children that may amount to child sexual abuse material (CSAM)," Ofcom said. The investigation will focus on whether X has complied with its duties to protect people in the UK from content that is illegal in the UK. That includes whether X is taking appropriate measures to prevent UK users from seeing priority illegal content, such as CSAM and non-consensual intimate images; whether the platform is removing illegal content quickly after becoming aware of it; and whether X carried out an updated risk assessment before making any significant changes to the platform.


Pigs have been island hopping for 50,000 years

Popular Science

With human help, the mammals can defy 'the world's most fundamental natural boundaries.' Despite not exactly being world-renowned swimmers, pigs have spread across the Asia-Pacific region for thousands of years. Using genetic and archeological data from over 700 pigs, a team of scientists documented how people helped the mammals make their way across thousands of miles. "This research reveals what happens when people transport animals enormous distances, across one of the world's most fundamental natural boundaries," evolutionary geneticist and study co-author Dr. David Stanton of Cardiff University and Queen Mary University of London said in a statement. "These movements led to pigs with a melting pot of ancestries. These patterns were technically very difficult to disentangle, but have ultimately helped us understand how and why animals came to be distributed across the Pacific islands."


Drone video shows devastation from floods in Indonesia's Sumatra

Al Jazeera

Drone video shows widespread destruction in part of Sumatra in Indonesia, where more than 440 people have died in flooding and landslides across the country. Hundreds of others are still missing.


Culture Cartography: Mapping the Landscape of Cultural Knowledge

Ziems, Caleb, Held, William, Yu, Jane, Goldberg, Amir, Grusky, David, Yang, Diyi

arXiv.org Artificial Intelligence

To serve global users safely and productively, LLMs need culture-specific knowledge that might not be learned during pre-training. How do we find such knowledge that is (1) salient to in-group users, but (2) unknown to LLMs? The most common solutions are single-initiative: either researchers define challenging questions that users passively answer (traditional annotation), or users actively produce data that researchers structure as benchmarks (knowledge extraction). The process would benefit from mixed-initiative collaboration, where users guide the process to meaningfully reflect their cultures, and LLMs steer the process towards more challenging questions that meet the researcher's goals. We propose a mixed-initiative methodology called CultureCartography. Here, an LLM initializes annotation with questions for which it has low-confidence answers, making explicit both its prior knowledge and the gaps therein. This allows a human respondent to fill these gaps and steer the model towards salient topics through direct edits. We implement this methodology as a tool called CultureExplorer. Compared to a baseline where humans answer LLM-proposed questions, we find that CultureExplorer more effectively produces knowledge that leading models like DeepSeek R1 and GPT-4o are missing, even with web search. Fine-tuning on this data boosts the accuracy of Llama-3.1-8B by up to 19.2% on related culture benchmarks.


HiRA: A Hierarchical Reasoning Framework for Decoupled Planning and Execution in Deep Search

Jin, Jiajie, Li, Xiaoxi, Dong, Guanting, Zhang, Yuyao, Zhu, Yutao, Zhao, Yang, Qian, Hongjin, Dou, Zhicheng

arXiv.org Artificial Intelligence

Complex information needs in real-world search scenarios demand deep reasoning and knowledge synthesis across diverse sources, which traditional retrieval-augmented generation (RAG) pipelines struggle to address effectively. Current reasoning-based approaches suffer from a fundamental limitation: they use a single model to handle both high-level planning and detailed execution, leading to inefficient reasoning and limited scalability. In this paper, we introduce HiRA, a hierarchical framework that separates strategic planning from specialized execution. Our approach decomposes complex search tasks into focused subtasks, assigns each subtask to domain-specific agents equipped with external tools and reasoning capabilities, and coordinates the results through a structured integration mechanism. This separation prevents execution details from disrupting high-level reasoning while enabling the system to leverage specialized expertise for different types of information processing. Experiments on four complex, cross-modal deep search benchmarks demonstrate that HiRA significantly outperforms state-of-the-art RAG and agent-based systems. Our results show improvements in both answer quality and system efficiency, highlighting the effectiveness of decoupled planning and execution for multi-step information seeking tasks. Our code is available at https://github.com/ignorejjj/HiRA.
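The decoupling the abstract argues for -- a high-level planner that emits subtasks and never touches execution details, with specialized agents handling each subtask -- can be sketched as a small dispatch loop. The agent names, the two-step plan, and the string-joining integrator are all illustrative assumptions, not HiRA's actual interfaces:

```python
def plan(task):
    """Stand-in for the high-level planner (an LLM in HiRA): decompose
    a complex search task into typed subtasks."""
    return [("search", f"find sources about {task}"),
            ("analyze", f"synthesize findings on {task}")]

# Domain-specific executors, keyed by subtask type. In HiRA these would
# be agents with their own tools and reasoning; here they are stubs.
AGENTS = {
    "search": lambda subtask: f"[docs for: {subtask}]",
    "analyze": lambda subtask: f"[summary of: {subtask}]",
}

def execute(task):
    """Route each planned subtask to its specialized agent, then
    integrate the results. The planner only ever sees results, so
    execution details cannot disrupt its high-level reasoning."""
    results = [AGENTS[kind](subtask) for kind, subtask in plan(task)]
    return " | ".join(results)

print(execute("deep-sea mining policy"))
```

The design point is the boundary: `plan` could be swapped for a stronger reasoning model and individual `AGENTS` entries upgraded independently, which is the scalability argument the paper makes for separating planning from execution.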



From Handwriting to Feedback: Evaluating VLMs and LLMs for AI-Powered Assessment in Indonesian Classrooms

Aisyah, Nurul, Kautsar, Muhammad Dehan Al, Hidayat, Arif, Chowdhury, Raqib, Koto, Fajri

arXiv.org Artificial Intelligence

Despite rapid progress in vision-language and large language models (VLMs and LLMs), their effectiveness for AI-driven educational assessment in real-world, underrepresented classrooms remains largely unexplored. We evaluate state-of-the-art VLMs and LLMs on over 14K handwritten answers from grade-4 classrooms in Indonesia, covering Mathematics and English aligned with the local national curriculum. Unlike prior work on clean digital text, our dataset features naturally curly, diverse handwriting from real classrooms, posing realistic visual and linguistic challenges. Assessment tasks include grading and generating personalized Indonesian feedback guided by rubric-based evaluation. Results show that the VLM struggles with handwriting recognition, causing error propagation in LLM grading, yet LLM feedback remains pedagogically useful despite imperfect visual inputs, revealing limits in personalization and contextual relevance.
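The two-stage pipeline the paper evaluates -- a VLM transcribes the handwritten answer, then an LLM grades the transcript against a rubric -- makes the error-propagation finding easy to see in miniature. Both model calls are stubbed below; the exact-match grader and the Indonesian feedback strings are illustrative assumptions, not the paper's rubric:

```python
def vlm_transcribe(image):
    """Placeholder for the VLM handwriting-recognition call -- the
    stage the paper finds error-prone on real classroom handwriting."""
    return image["transcript"]

def llm_grade(transcript, answer_key):
    """Placeholder rubric-based grader: exact match on the transcript,
    with feedback in Indonesian as in the paper's setup."""
    score = 1.0 if transcript.strip().lower() == answer_key.lower() else 0.0
    # "Benar!" = "Correct!"; the fallback asks the pupil to check again.
    feedback = ("Benar!" if score
                else f"Periksa kembali; jawaban yang diharapkan: {answer_key}")
    return score, feedback

# A misread transcript ("pulu" for "puluh") flips the grade even though
# the pupil's answer was right -- recognition error propagating downstream.
good = {"transcript": "empat puluh dua"}   # hypothetical clean VLM output
bad = {"transcript": "empat pulu dua"}     # hypothetical misrecognition
print(llm_grade(vlm_transcribe(good), "empat puluh dua"))
print(llm_grade(vlm_transcribe(bad), "empat puluh dua"))
```

The second call scores 0.0 purely because of the transcription stage, which is the failure mode the paper isolates: grading quality is bounded by VLM recognition, while feedback generation degrades more gracefully.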