Mbuvha, Rendani
Leveraging AI for Climate Resilience in Africa: Challenges, Opportunities, and the Need for Collaboration
Mbuvha, Rendani, Yaakoubi, Yassine, Bagiliko, John, Potes, Santiago Hincapie, Nammouchi, Amal, Amrouche, Sabrina
As the impacts of climate change intensify, Africa requires urgent, innovative solutions tailored to the continent's unique challenges. While Artificial Intelligence (AI) is emerging as a critical and valuable tool for climate change adaptation and mitigation, its effectiveness and potential are contingent upon overcoming significant challenges such as data scarcity, infrastructure gaps, and limited local AI development. This position paper explores the role of AI in climate change adaptation and mitigation in Africa. It advocates for a collaborative approach to building capacity, developing open-source data repositories, and creating robust, AI-driven climate solutions that are culturally and contextually relevant.
Open problems in causal structure learning: A case study of COVID-19 in the UK
Constantinou, Anthony, Kitson, Neville K., Liu, Yang, Chobtham, Kiattikun, Hashemzadeh, Arian, Nanavati, Praharsh A., Mbuvha, Rendani, Petrungaro, Bruno
Causal machine learning (ML) algorithms recover graphical structures that describe cause-and-effect relationships. The causal representation provided by these algorithms enables transparency and explainability, which is necessary for decision making in critical real-world problems. Yet, causal ML has had limited impact in practice compared to associational ML. This paper investigates the challenges of causal ML with application to COVID-19 UK pandemic data. We collate data from various public sources and investigate what the different structure learning algorithms learn from these data. We explore the impact of different data formats on algorithms spanning different classes of learning, and assess the results produced by each algorithm, and groups of algorithms, in terms of graphical structure, model dimensionality, sensitivity analysis, confounding variables, and predictive and interventional inference. We use these results to highlight open problems in causal structure learning and directions for future research. To facilitate future work, we make all graphs, models, data sets, and source code publicly available online.
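As an illustration of the constraint-based class of structure learning algorithms the paper evaluates, the following sketch recovers an undirected skeleton from synthetic data using partial-correlation independence tests. The variables, thresholds, and data are illustrative and unrelated to the COVID-19 dataset; the paper's full pipeline and algorithm suite are not reproduced here.

```python
# Minimal sketch of constraint-based causal skeleton discovery using
# partial-correlation independence tests (a simplified PC-style step).
import numpy as np
from itertools import combinations
from scipy import stats

def partial_corr(data, i, j, cond):
    """Partial correlation of columns i, j given the columns in `cond`."""
    if not cond:
        return np.corrcoef(data[:, i], data[:, j])[0, 1]
    # Residualise i and j on the conditioning set (with intercept) via least squares.
    Z = np.column_stack([np.ones(len(data)), data[:, list(cond)]])
    beta_i, *_ = np.linalg.lstsq(Z, data[:, i], rcond=None)
    beta_j, *_ = np.linalg.lstsq(Z, data[:, j], rcond=None)
    return np.corrcoef(data[:, i] - Z @ beta_i, data[:, j] - Z @ beta_j)[0, 1]

def independent(data, i, j, cond, alpha=0.05):
    """Fisher z-test for conditional independence of i and j given `cond`."""
    n, r = data.shape[0], partial_corr(data, i, j, cond)
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    return 2 * (1 - stats.norm.cdf(abs(z))) > alpha

def pc_skeleton(data, alpha=0.05, max_cond=2):
    """Remove edges whose endpoints are conditionally independent."""
    d = data.shape[1]
    adj = {(i, j) for i, j in combinations(range(d), 2)}
    for size in range(max_cond + 1):
        for i, j in list(adj):
            others = [k for k in range(d) if k not in (i, j)]
            for cond in combinations(others, size):
                if independent(data, i, j, cond, alpha):
                    adj.discard((i, j))
                    break
    return adj

# Toy example: X -> Y -> Z, so X and Z should be separated once we condition on Y.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = 2 * x + rng.normal(size=2000)
z = -y + rng.normal(size=2000)
print(pc_skeleton(np.column_stack([x, y, z])))  # expect {(0, 1), (1, 2)}
```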
MphayaNER: Named Entity Recognition for Tshivenda
Mbuvha, Rendani, Adelani, David I., Mutavhatsindi, Tendani, Rakhuhu, Tshimangadzo, Mauda, Aluwani, Maumela, Tshifhiwa Joshua, Masindi, Andisani, Rananga, Seani, Marivate, Vukosi, Marwala, Tshilidzi
Named Entity Recognition (NER) plays a vital role in various Natural Language Processing tasks such as information retrieval, text classification, and question answering. However, NER can be challenging, especially in low-resource languages with limited annotated datasets and tools. This paper contributes to addressing these challenges by introducing MphayaNER, the first Tshivenda NER corpus in the news domain. We establish NER baselines by fine-tuning state-of-the-art models on MphayaNER. The study also explores zero-shot transfer between Tshivenda and other related Bantu languages, with chiShona and Kiswahili showing the best results. Augmenting MphayaNER with chiShona data was also found to improve model performance significantly. Both MphayaNER and the baseline models are made publicly available.
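For readers unfamiliar with this kind of baseline, the sketch below outlines token-classification fine-tuning with the Hugging Face transformers library. The checkpoint name, tag set, file paths, and hyperparameters are placeholders and may differ from the released MphayaNER resources and the paper's exact setup.

```python
# Hedged sketch of NER baseline fine-tuning with Hugging Face transformers.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          DataCollatorForTokenClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]  # assumed tag set
checkpoint = "Davlan/afro-xlmr-base"  # illustrative multilingual checkpoint

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=len(labels))

def tokenize_and_align(batch):
    """Tokenise pre-split words and align NER tags to sub-word pieces."""
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    enc["labels"] = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids = enc.word_ids(batch_index=i)
        # Label only the first sub-word of each word; mask the rest with -100.
        enc["labels"].append([tags[w] if w is not None and
                              (j == 0 or word_ids[j - 1] != w) else -100
                              for j, w in enumerate(word_ids)])
    return enc

# Placeholder file paths for a JSON-lines version of the corpus.
raw = load_dataset("json", data_files={"train": "mphayaner_train.json",
                                       "validation": "mphayaner_dev.json"})
tokenised = raw.map(tokenize_and_align, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ner-tshivenda", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenised["train"],
    eval_dataset=tokenised["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```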
Imputation of Missing Streamflow Data at Multiple Gauging Stations in Benin Republic
Mbuvha, Rendani, Adounkpe, Julien Yise Peniel, Mongwe, Wilson Tsakane, Houngnibo, Mandela, Newlands, Nathaniel, Marwala, Tshilidzi
Streamflow observation data are vital for flood monitoring and for agricultural and settlement planning. However, such streamflow data are commonly plagued with missing observations due to various causes such as harsh environmental conditions and constrained operational resources. This problem is often more pervasive in under-resourced areas such as Sub-Saharan Africa. In this work, we reconstruct streamflow time series data through bias correction of the GEOGloWS ECMWF Streamflow Service (GESS) forecasts at ten river gauging stations in Benin Republic. We perform bias correction by fitting Quantile Mapping, Gaussian Process, and Elastic Net regression over a constrained training period. By simulating missingness in a testing period, we show that GESS forecasts have a significant bias that results in low predictive skill over the ten Beninese stations. Our findings suggest that overall bias correction by Elastic Net and Gaussian Process regression achieves superior skill relative to traditional imputation by Random Forest, k-Nearest Neighbour, and GESS lookup. The findings of this work provide a basis for integrating global GESS streamflow data into operational early-warning decision-making systems (e.g., flood alerts) in countries vulnerable to drought and flooding due to extreme weather events.
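As a rough illustration of the quantile-mapping component of such a bias correction, the following sketch learns an empirical mapping from modelled to observed quantiles on a training period and applies it to new forecasts. The synthetic flows and the GESS-like bias are invented for the demo; the Gaussian Process and Elastic Net corrections are not shown.

```python
# Minimal sketch of empirical quantile-mapping bias correction of modelled
# streamflow against gauge observations (synthetic data, illustrative only).
import numpy as np

def fit_quantile_mapping(modelled, observed, n_quantiles=100):
    """Learn a mapping from modelled to observed quantiles on a training period."""
    q = np.linspace(0, 1, n_quantiles)
    return np.quantile(modelled, q), np.quantile(observed, q)

def apply_quantile_mapping(modelled, mod_q, obs_q):
    """Correct new modelled values by matching their quantile in the observed CDF."""
    return np.interp(modelled, mod_q, obs_q)

# Synthetic example: modelled flows biased high relative to the gauge.
rng = np.random.default_rng(42)
observed_train = rng.gamma(shape=2.0, scale=30.0, size=1000)    # "gauge" flows (m^3/s)
modelled_train = observed_train * 1.4 + rng.normal(0, 5, 1000)  # biased "GESS-like" forecast

mod_q, obs_q = fit_quantile_mapping(modelled_train, observed_train)
modelled_test = rng.gamma(shape=2.0, scale=30.0, size=200) * 1.4
corrected = apply_quantile_mapping(modelled_test, mod_q, obs_q)
print(f"mean before: {modelled_test.mean():.1f}, after: {corrected.mean():.1f}")
```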
Forecasting The JSE Top 40 Using Long Short-Term Memory Networks
Balusik, Adam, de Magalhaes, Jared, Mbuvha, Rendani
As a result of the greater availability of big data, as well as the decreasing costs and increasing power of modern computing, the use of artificial neural networks for financial time series forecasting is once again a major topic of discussion and research in the financial world. Despite this academic focus, there are still contrasting opinions and bodies of literature on which artificial neural networks perform best and whether or not they outperform the forecasting capabilities of conventional time series models. This paper uses a long short-term memory (LSTM) network to perform financial time series forecasting on the return data of the JSE Top 40 index. Furthermore, the forecasting performance of the LSTM network is compared to that of a seasonal autoregressive integrated moving average (SARIMA) model. The paper evaluates the varying approaches presented in the existing literature and ultimately compares its results to those reported there. It concludes that the LSTM network outperforms the SARIMA model when forecasting intraday directional movements as well as when forecasting the index close price.
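A minimal sketch of the kind of one-step-ahead LSTM forecaster described here is shown below, written in PyTorch. The window length, network size, training loop, and synthetic return series are assumptions for illustration and do not reflect the paper's JSE Top 40 configuration.

```python
# Hedged sketch of one-step-ahead return forecasting with an LSTM in PyTorch.
import torch
import torch.nn as nn

class ReturnLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next return

def make_windows(series, window=20):
    """Slice a 1-D return series into (window, next-value) training pairs."""
    xs = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    ys = series[window:]
    return xs.unsqueeze(-1), ys.unsqueeze(-1)

returns = torch.randn(1000) * 0.01        # synthetic daily returns
X, y = make_windows(returns)

model, loss_fn = ReturnLSTM(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                    # short demo loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: mse {loss.item():.6f}")
```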
Healing Products of Gaussian Processes
Cohen, Samuel, Mbuvha, Rendani, Marwala, Tshilidzi, Deisenroth, Marc Peter
Gaussian processes (GPs) are nonparametric Bayesian models that have been applied to regression and classification problems. One of the approaches to alleviate their cubic training cost is the use of local GP experts trained on subsets of the data. In particular, product-of-expert models combine the predictive distributions of local experts through a tractable product operation. While these expert models allow for massively distributed computation, their predictions typically suffer from erratic behaviour of the mean or uncalibrated uncertainty quantification. By calibrating predictions via a tempered softmax weighting, we provide a solution to these problems for multiple product-of-expert models, including the generalised product of experts and the robust Bayesian committee machine. Furthermore, we leverage the optimal transport literature and propose a new product-of-expert model that combines predictions of local experts by computing their Wasserstein barycenter, which can be applied to both regression and classification.
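The following sketch illustrates how Gaussian predictions from local experts can be fused with a generalised product-of-experts rule whose weights come from a tempered softmax. Scoring each expert by its negative predictive variance is an assumed choice for the demo and not necessarily the paper's exact weighting; the Wasserstein-barycenter variant is not shown.

```python
# Illustrative sketch of combining local Gaussian-expert predictions with a
# generalised product-of-experts rule using tempered-softmax weights.
import numpy as np

def tempered_softmax(scores, temperature=1.0):
    z = scores / temperature
    z -= z.max()                          # numerical stability
    w = np.exp(z)
    return w / w.sum()

def gpoe_combine(means, variances, temperature=1.0):
    """Combine per-expert Gaussian predictions for a single test input."""
    means, variances = np.asarray(means), np.asarray(variances)
    # Assumed scoring: more confident (lower-variance) experts weigh more.
    beta = tempered_softmax(-variances, temperature)
    precision = np.sum(beta / variances)                # combined precision
    mean = np.sum(beta * means / variances) / precision
    return mean, 1.0 / precision

# Three local experts: two agree and are confident, one is off and uncertain.
means = [1.02, 0.98, 3.50]
variances = [0.05, 0.04, 1.00]
m, v = gpoe_combine(means, variances, temperature=0.1)
print(f"combined mean {m:.3f}, variance {v:.3f}")
```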
Automatic Relevance Determination Bayesian Neural Networks for Credit Card Default Modelling
Mbuvha, Rendani, Boulkaibet, Illyes, Marwala, Tshilidzi
Credit risk modelling is an integral part of the global financial system. While great attention has been paid to neural network models for credit default prediction, such models often lack the required interpretation mechanisms and measures of the uncertainty around their predictions. This work develops and compares Bayesian Neural Networks (BNNs) for credit card default modelling. These include BNNs trained by Gaussian approximation and the first implementation of BNNs trained by Hybrid Monte Carlo (HMC) in credit risk modelling. The results on the Taiwan Credit Dataset show that BNNs with Automatic Relevance Determination (ARD) outperform BNNs without ARD. The results also show that BNNs trained by Gaussian approximation display predictive performance similar to those trained by HMC. Furthermore, BNNs with ARD can be used to draw inferences about the relative importance of different features, thus critically aiding decision makers in explaining model output to consumers. The robustness of this result is reinforced by the high level of congruence between the features identified as important by the two different approaches to training BNNs.
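To make the ARD idea concrete, the sketch below writes down an ARD Gaussian prior with one precision per input feature and shows how relative feature importance can be read off from the learned precisions. The network, likelihood, and numbers are illustrative; the paper's Gaussian-approximation and HMC training procedures are not reproduced.

```python
# Hedged sketch of an ARD prior for the input-to-hidden weights of a small
# Bayesian neural network: each input feature d gets its own prior precision
# alpha_d, and features with large learned alpha_d are effectively switched off.
import numpy as np

def log_ard_prior(W1, alpha):
    """Log of the ARD Gaussian prior on first-layer weights.

    W1:    (n_features, n_hidden) input-to-hidden weights
    alpha: (n_features,) per-feature prior precisions
    """
    n_hidden = W1.shape[1]
    quad = -0.5 * np.sum(alpha[:, None] * W1 ** 2)
    log_norm = 0.5 * n_hidden * np.sum(np.log(alpha / (2 * np.pi)))
    return quad + log_norm

def log_likelihood(W1, w2, X, y):
    """Bernoulli log-likelihood of a one-hidden-layer tanh network."""
    logits = np.tanh(X @ W1) @ w2
    return np.sum(y * logits - np.log1p(np.exp(logits)))

def log_posterior(W1, w2, alpha, X, y):
    # Unnormalised posterior over weights given data and ARD hyperparameters;
    # this is the quantity a sampler or Gaussian approximation would target.
    return log_likelihood(W1, w2, X, y) + log_ard_prior(W1, alpha)

# After training, feature relevance can be read off as the prior standard
# deviation 1/sqrt(alpha_d): small values indicate a pruned feature.
alpha_learned = np.array([0.5, 40.0, 1.2])        # illustrative values only
relevance = 1.0 / np.sqrt(alpha_learned)
print({f"feature_{d}": round(r, 3) for d, r in enumerate(relevance)})
```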
On the Performance of Network Parallel Training in Artificial Neural Networks
Ericson, Ludvig, Mbuvha, Rendani
Artificial Neural Networks (ANNs) have received increasing attention in recent years, with applications that span a wide range of disciplines including vital domains such as medicine, network security, and autonomous transportation. However, neural network architectures are becoming increasingly complex, and with a growing need to obtain real-time results from such models, it has become pivotal to use parallelization as a mechanism for speeding up network training and deployment. In this work we propose an implementation of Network Parallel Training through Cannon's Algorithm for matrix multiplication. We show that increasing the number of processes speeds up training until the point where process communication costs become prohibitive; this point varies by network complexity. We also show through empirical efficiency calculations that the speedup obtained is superlinear.
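For concreteness, the following single-process numpy simulation reproduces the block-shift communication pattern of Cannon's algorithm (without MPI, the "shifts" are list rotations); an actual parallel implementation would distribute these blocks across processes.

```python
# Single-process numpy simulation of Cannon's algorithm for block matrix
# multiplication on a grid x grid "process" mesh.
import numpy as np

def cannon_matmul(A, B, grid=2):
    """Multiply A @ B by simulating a grid x grid mesh of block owners."""
    n = A.shape[0]
    assert A.shape == B.shape == (n, n) and n % grid == 0
    b = n // grid                                   # block size per "process"

    def blk(M, i, j):
        return M[i * b:(i + 1) * b, j * b:(j + 1) * b]

    # Initial skew: row i of A-blocks shifts left by i, column j of B-blocks up by j.
    Ab = [[blk(A, i, (j + i) % grid) for j in range(grid)] for i in range(grid)]
    Bb = [[blk(B, (i + j) % grid, j) for j in range(grid)] for i in range(grid)]
    Cb = [[np.zeros((b, b)) for _ in range(grid)] for _ in range(grid)]

    for _ in range(grid):
        # Each "process" multiplies its local blocks and accumulates.
        for i in range(grid):
            for j in range(grid):
                Cb[i][j] += Ab[i][j] @ Bb[i][j]
        # Shift A-blocks one step left along rows, B-blocks one step up along columns.
        Ab = [[Ab[i][(j + 1) % grid] for j in range(grid)] for i in range(grid)]
        Bb = [[Bb[(i + 1) % grid][j] for j in range(grid)] for i in range(grid)]

    return np.block(Cb)

A, B = np.random.rand(8, 8), np.random.rand(8, 8)
print(np.allclose(cannon_matmul(A, B, grid=2), A @ B))   # True
```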