
Collaborating Authors

Uganda


A software security review on Uganda's Mobile Money Services: Dr. Jim Spire's tweets sentiment analysis

Wilberforce, Nsengiyumva

arXiv.org Artificial Intelligence

The proliferation of mobile money in Uganda has been a cornerstone of financial inclusion, yet its security mechanisms remain a critical concern. This study investigates a significant public response to perceived security failures: the #StopAirtelThefty Twitter campaign of August 2025. Sparked by an incident publicized by Dr. Jim Spire Ssentongo, in which a phone thief accessed a victim's account, withdrew funds, and procured a loan, the campaign revealed deep-seated public anxiety over the safety of mobile money. This research employs qualitative analysis to systematically examine the complaints raised during the campaign, extracting key themes related to security vulnerabilities and user dissatisfaction. By synthesizing these public sentiments, the paper provides crucial insights into the specific security gaps experienced by users and situates these findings within the larger framework of Uganda's mobile money regulatory and operational environment. The study concludes with implications for providers, policymakers, and the future of secure digital finance in Uganda.
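Theme extraction in qualitative coding of this kind can be sketched as simple keyword tagging over the complaint corpus. The theme labels and keyword lists below are purely illustrative assumptions, not the study's actual codebook:

```python
from collections import Counter

# Hypothetical theme keywords -- illustrative only, not the study's codebook.
THEMES = {
    "unauthorized_withdrawal": ["withdrew", "stolen", "theft"],
    "loan_fraud": ["loan", "borrowed"],
    "weak_authentication": ["pin", "sim swap", "password"],
}

def tag_themes(tweet):
    """Return the set of themes whose keywords appear in a tweet."""
    text = tweet.lower()
    return {theme for theme, words in THEMES.items()
            if any(w in text for w in words)}

def theme_counts(tweets):
    """Aggregate theme frequencies across a corpus of complaints."""
    counts = Counter()
    for t in tweets:
        counts.update(tag_themes(t))
    return counts
```

In practice such a keyword pass would only be a first step before manual review, since qualitative analysis relies on human interpretation of context.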


Amplify Initiative: Building A Localized Data Platform for Globalized AI

Rashid, Qazi Mamunur, van Liemt, Erin, Shih, Tiffany, Ebinama, Amber, Ramos, Karla Barrios, Maji, Madhurima, Verma, Aishwarya, Kalia, Charu, Smith-Loud, Jamila, Nakatumba-Nabende, Joyce, Baguma, Rehema, Katumba, Andrew, Mutebi, Chodrine, Marvin, Jagen, Wairagala, Eric Peter, Bruce, Mugizi, Oketta, Peter, Nderu, Lawrence, Obiajunwa, Obichi, Oppong, Abigail, Zimba, Michael, Authors, Data

arXiv.org Artificial Intelligence

Current AI models often fail to account for local context and language, given the predominance of English and Western internet content in their training data. This hinders the global relevance, usefulness, and safety of these models as they gain more users around the globe. Amplify Initiative, a data platform and methodology, leverages expert communities to collect diverse, high-quality data to address the limitations of these models. The platform is designed to enable co-creation of datasets, provide access to high-quality multilingual datasets, and offer recognition to data authors. This paper presents the approach to co-creating datasets with domain experts (e.g., health workers, teachers) through a pilot conducted in Sub-Saharan Africa (Ghana, Kenya, Malawi, Nigeria, and Uganda). In partnership with local researchers situated in these countries, the pilot demonstrated an end-to-end approach to co-creating data with 155 experts in sensitive domains (e.g., physicians, bankers, anthropologists, human and civil rights advocates). This approach, implemented with an Android app, resulted in an annotated dataset of 8,091 adversarial queries in seven languages (e.g., Luganda, Swahili, Chichewa), capturing nuanced and contextual information related to key themes such as misinformation and public interest topics. This dataset in turn can be used to evaluate models for their safety and cultural relevance within the context of these languages.
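One way to picture the resulting dataset is as annotated records grouped by language for per-language safety evaluation. The field names here are assumptions for illustration, not the Amplify Initiative's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical record layout for one annotated adversarial query;
# field names are illustrative, not the platform's actual schema.
@dataclass
class AnnotatedQuery:
    text: str                  # the adversarial query written by the expert
    language: str              # e.g. "Luganda", "Swahili", "Chichewa"
    domain: str                # expert domain, e.g. "health", "finance"
    themes: list = field(default_factory=list)  # e.g. ["misinformation"]

def by_language(records):
    """Group records by language for per-language model evaluation."""
    groups = {}
    for r in records:
        groups.setdefault(r.language, []).append(r)
    return groups
```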


A Comparative Analysis of Wealth Index Predictions in Africa between three Multi-Source Inference Models

Karsai, Márton, Kertész, János, Espín-Noboa, Lisette

arXiv.org Artificial Intelligence

Poverty map inference is a critical area of research, with growing interest in both traditional and modern techniques, ranging from regression models to convolutional neural networks applied to tabular data, images, and networks. Despite extensive focus on the validation of training phases, the scrutiny of final predictions remains limited. Here, we compare the Relative Wealth Index (RWI) inferred by Chi et al. (2022) with the International Wealth Index (IWI) inferred by Lee and Braithwaite (2022) and Espín-Noboa et al. (2023) across six Sub-Saharan African countries. Our analysis focuses on identifying trends and discrepancies in wealth predictions over time. Our results show that the predictions by Chi et al. and Espín-Noboa et al. align with general GDP trends, with differences expected due to the distinct time-frames of the training sets. However, predictions by Lee and Braithwaite diverge significantly, indicating potential issues with the validity of the model. These discrepancies highlight the need for policymakers and stakeholders in Africa to rigorously audit models that predict wealth, especially those used for decision-making on the ground. These and other techniques require continuous verification and refinement to enhance their reliability and ensure that poverty alleviation strategies are well-founded.
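The kind of trend check described here can be sketched as correlating a model's predicted wealth series against the national GDP trend; a strongly negative correlation is a red flag worth auditing. The numbers below are invented for illustration, not the paper's data:

```python
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative numbers only -- not data from any of the cited models.
gdp_trend   = [1.0, 1.1, 1.3, 1.4]      # GDP per capita, indexed over time
model_a_iwi = [0.40, 0.44, 0.50, 0.53]  # rises with GDP -> consistent
model_b_iwi = [0.60, 0.55, 0.48, 0.45]  # falls while GDP rises -> audit flag

assert pearson(gdp_trend, model_a_iwi) > 0.9
assert pearson(gdp_trend, model_b_iwi) < 0
```

A full audit would of course go beyond correlation with GDP, e.g. comparing against held-out survey data, but this captures the paper's basic sanity check.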


LM-PUB-QUIZ: A Comprehensive Framework for Zero-Shot Evaluation of Relational Knowledge in Language Models

Ploner, Max, Wiland, Jacek, Pohl, Sebastian, Akbik, Alan

arXiv.org Artificial Intelligence

Knowledge probing evaluates the extent to which a language model (LM) has acquired relational knowledge during its pre-training phase. It provides a cost-effective means of comparing LMs of different sizes and training setups and is useful for monitoring knowledge gained or lost during continual learning (CL). In prior work, we presented an improved knowledge probe called BEAR (Wiland et al., 2024), which enables the comparison of LMs trained with different pre-training objectives (causal and masked LMs) and addresses issues of skewed distributions in previous probes to deliver a more unbiased reading of LM knowledge. With this paper, we present LM-PUB-QUIZ, a Python framework and leaderboard built around the BEAR probing mechanism that enables researchers and practitioners to apply it in their work. It provides options for standalone evaluation and direct integration into the widely-used training pipeline of the Hugging Face TRANSFORMERS library. Further, it provides a fine-grained analysis of different knowledge types to assist users in better understanding the knowledge in each evaluated LM. We publicly release LM-PUB-QUIZ as an open-source project.
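The core idea of BEAR-style probing can be sketched schematically: each relation instance has a fixed set of answer options, the LM scores every option instantiated as a full statement, and the highest-scoring option wins. This is not the LM-PUB-QUIZ API; `score_statement` is a stand-in for a real LM log-likelihood function:

```python
# Schematic of BEAR-style probing, not the LM-PUB-QUIZ API.
# `score_statement` stands in for an LM's (length-normalized)
# log-likelihood of a complete statement.

def pick_answer(template, subject, options, score_statement):
    """Return the option whose instantiated statement scores highest."""
    scored = {
        opt: score_statement(template.format(subject=subject, answer=opt))
        for opt in options
    }
    return max(scored, key=scored.get)

def probe_accuracy(instances, score_statement):
    """Fraction of instances where the top-scored option is the gold answer."""
    hits = sum(
        pick_answer(inst["template"], inst["subject"], inst["options"],
                    score_statement) == inst["answer"]
        for inst in instances
    )
    return hits / len(instances)
```

Because every option is scored as a full statement, the same probe works for both causal and masked LMs, which is the property the abstract highlights.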


New Curriculum, New Chance -- Retrieval Augmented Generation for Lesson Planning in Ugandan Secondary Schools. Prototype Quality Evaluation

Kloker, Simon, Bukoli, Herbertson, Kateete, Twaha

arXiv.org Artificial Intelligence

Introduction: Poor educational quality in secondary schools is still regarded as one of the major struggles in 21st-century Uganda - especially in rural areas. Research identifies several problems, including low-quality or absent teacher lesson planning. As the government pushes towards the implementation of a new curriculum, existing lesson plans become obsolete and the problem worsens. Using a Retrieval Augmented Generation approach, we developed a prototype that generates customized lesson plans based on the government-accredited textbooks. This helps teachers create lesson plans more efficiently and with better quality, ensuring they are fully aligned with the new curriculum and the competence-based learning approach. Methods: The prototype was created using the Cohere LLM, Sentence Embeddings, and the LangChain framework - and thereafter made available on a public website. Vector stores were trained for three new-curriculum textbooks (ICT, Mathematics, History), all at Secondary 1 level. Twenty-four lesson plans were generated following a pseudo-random generation protocol, based on the suggested periods in the textbooks. The lesson plans were analyzed for technical quality by three independent raters following the Lesson Plan Analysis Protocol (LPAP) by Ndihokubwayo et al. (2022), which is specifically designed for East Africa and competence-based curricula. Results: Evaluation of the 24 lesson plans using the LPAP yielded an average quality of between 75% and 80%, corresponding to a "very good lesson plan". None of the lesson plans scored below 65%, although one lesson plan could be argued to have missed the topic. In conclusion, the quality of the generated lesson plans is at least comparable to, if not better than, that of human-created lesson plans, as demonstrated in a study in Rwanda in which no lesson plan even reached the benchmark of 50%.
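The retrieval step behind such a RAG lesson planner can be sketched in miniature. The prototype uses Cohere embeddings and LangChain; here a toy bag-of-words vectorizer stands in so the mechanics are self-contained, and the textbook chunks are invented examples:

```python
import math

# Toy retrieval sketch: a bag-of-words vectorizer stands in for the
# prototype's Cohere sentence embeddings; chunks are invented examples.

def embed(text, vocab):
    t = text.lower().split()
    return [t.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, vocab, k=2):
    """Return the k textbook chunks most similar to the query."""
    q = embed(query, vocab)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c, vocab)),
                    reverse=True)
    return ranked[:k]

chunks = [
    "fractions addition and subtraction of fractions",
    "introduction to spreadsheets and formulas",
    "the history of the buganda kingdom",
]
vocab = sorted({w for c in chunks for w in c.split()})
top = retrieve("lesson on adding fractions", chunks, vocab, k=1)
# The retrieved chunk is then placed into the LLM prompt alongside a
# lesson-plan template, which is the "augmented generation" step.
```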


Lions' record-breaking swim across channel captured by drone camera

New Scientist

A pair of lion brothers have made the longest swim ever recorded for their species – about 1.5 kilometres across hippo- and crocodile-infested waters. The massive swim – equivalent to the aquatic leg of an Olympic triathlon – was the pair's fourth attempt to cross the Kazinga Channel in Queen Elizabeth National Park, Uganda, and was recorded by a drone-mounted thermal camera at night. The lions had to abort earlier attempts after encountering large animals, most likely hippos or Nile crocodiles, which are also visible in the footage. Making the effort even more extraordinary, one of the lions, named Jacob, has only three legs. Jacob has had an extremely challenging life, says Alexander Braczkowski at Griffith University in Australia: he has been gored by a buffalo, his family was poisoned for the lion body-part trade, he was caught in a poacher's snare and he eventually lost his leg after it was stuck in a poacher's steel trap.


James Muldoon, Mark Graham and Callum Cant: 'AI feeds off the work of human beings'

The Guardian

James Muldoon is a reader in management at the University of Essex, Mark Graham a professor at the Oxford Internet Institute and Callum Cant a senior lecturer at the University of Essex business school. They work together at Fairwork, a project that appraises the working conditions in digital workplaces, and they are co-authors of Feeding the Machine: The Hidden Human Labour Powering AI. Why did you write the book? James Muldoon: The idea for the book emerged out of field work we did in Kenya and Uganda on the data annotation industry. We spoke to a number of data annotators, and the working conditions were just horrendous.


Decoding moral judgement from text: a pilot study

Gherman, Diana E., Zander, Thorsten O.

arXiv.org Artificial Intelligence

Moral judgement is a complex human reaction that engages cognitive and emotional dimensions. While some of the neural correlates of morality are known, it is currently unclear whether moral violations can be detected at a single-trial level. In a pilot study, we explore the feasibility of decoding moral judgement from text stimuli with passive brain-computer interfaces. For effective moral judgement elicitation, we use video-audio affective priming prior to text stimuli presentation and attribute the text to moral agents. Our results show that further efforts are necessary to achieve reliable classification between moral-congruency and incongruency states. We obtain good accuracy results for neutral vs. morally charged trials. With this research, we aim to pave the way towards neuroadaptive human-computer interaction and more human-compatible large language models (LLMs).


BooookScore: A systematic exploration of book-length summarization in the era of LLMs

Chang, Yapei, Lo, Kyle, Goyal, Tanya, Iyyer, Mohit

arXiv.org Artificial Intelligence

Summarizing book-length documents (>100K tokens) that exceed the context window size of large language models (LLMs) requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it has yet to be meaningfully studied due to the challenges of evaluation: existing book-length summarization datasets (e.g., BookSum) are in the pretraining data of most public LLMs, and existing evaluation methods struggle to capture errors made by modern LLM summarizers. In this paper, we present the first study of the coherence of LLM-based book-length summarizers implemented via two prompting workflows: (1) hierarchically merging chunk-level summaries, and (2) incrementally updating a running summary. We obtain 1193 fine-grained human annotations on GPT-4 generated summaries of 100 recently-published books and identify eight common types of coherence errors made by LLMs. Because human evaluation is expensive and time-consuming, we develop an automatic metric, BooookScore, that measures the proportion of sentences in a summary that do not contain any of the identified error types. BooookScore has high agreement with human annotations and allows us to systematically evaluate the impact of many other critical parameters (e.g., chunk size, base LLM) while saving $15K and 500 hours in human evaluation costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than the oft-repetitive ones generated by LLaMA 2. Incremental updating yields lower BooookScore but a higher level of detail than hierarchical merging, a trade-off sometimes preferred by human annotators. We release code and annotations after blind review to spur more principled research on book-length summarization.
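The metric itself, as defined in the abstract, is the fraction of summary sentences free of all identified coherence-error types. A direct sketch, assuming annotations arrive as a mapping from sentence index to the error types found in that sentence (in the paper the annotations come from an LLM judge, not a dict):

```python
# BooookScore as described: proportion of summary sentences containing
# none of the identified coherence-error types.

def booookscore(num_sentences, errors):
    """errors: dict mapping sentence index -> list of error-type labels."""
    clean = sum(1 for i in range(num_sentences) if not errors.get(i))
    return clean / num_sentences

# A 5-sentence summary with errors found in sentences 1 and 3
assert booookscore(5, {1: ["entity omission"], 3: ["causal inconsistency"]}) == 0.6
```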


Artificial intelligence could reduce barriers to TB care

#artificialintelligence

A new study led by faculty at the University of Georgia demonstrates the potential of using artificial intelligence to transform tuberculosis treatment in low-resource communities. And while the study focused on TB patients, it has applications across the health care sector, freeing up health care workers to perform other necessary tasks. Growing evidence has demonstrated the potential for AI to increase productivity, reduce health care worker burnout, and improve quality of care in clinical settings. The study, which was published last month in the Journal of Medical Internet Research AI, pilots the use of AI to watch thousands of submitted videos of TB patients taking their medication. This application could automate the job of a health care worker watching a patient take their pill at a clinic, known as directly observed therapy (DOT).