NAIROBI (Thomson Reuters Foundation) - Countries are rapidly developing "killer robots" - machines with artificial intelligence (AI) that kill independently - but are moving at a snail's pace on agreeing global rules for their use in future wars, warn technology and human rights experts. From drones and missiles to tanks and submarines, semi-autonomous weapons systems have been used for decades to eliminate targets in modern-day warfare - but all under human supervision. Nations such as the United States, Russia and Israel are now investing in developing lethal autonomous weapons systems (LAWS), which can identify, target and kill a person entirely on their own - but to date there are no international laws governing their use. "Some kind of human control is necessary ... Only humans can make context-specific judgements of distinction, proportionality and precautions in combat," said Peter Maurer, President of the International Committee of the Red Cross (ICRC).
For emerging and developing nations, lower rates of internet access further widen the digital skills divide. For example, a 2013 Pew Research Center study demonstrated that while 84% of the adult population uses the internet in the United States, only 8% of adults do so in Pakistan and 26% in Ghana. This geographic divide affects developed countries as well, where refugees and migrants from developing countries are especially vulnerable. In Germany, only 45% of Syrian refugees have a school-leaving certificate, and only 23% hold a college degree. These refugees lag behind Germans in skills and educational background, factors that make upward mobility difficult, as only 8% are hired as skilled workers.
Artificial intelligence is our shiny new toy. We're dazzled by its potential to make our lives more efficient, more productive…just…better. Already, AI systems seem to be edging closer to making better decisions than a human can - a relatively new development. Until now, even the most advanced machine learning systems, while very good at sorting massive amounts of data and contextualising it to make sense of it, haven't been better than us at deciding what to do with their findings. AI changes all this through one distinguishing quality: its ability to adapt its own behaviour, often in milliseconds - instead of the days, weeks or years the average human needs.
Now that we understand what feature engineering is, let's go straight into the practical aspect of this article. We'll work with two datasets: the first is the Loan Default Prediction dataset hosted on Zindi by Data Science Nigeria, and the second - also hosted on Zindi - is the Sendy Logistics dataset by Sendy. You can find the descriptions of the datasets and the corresponding machine learning tasks in the links above. If you have cloned the repo, you'll have a folder with the datasets and the notebook used for this article and can follow along easily. First, let's import some libraries and load the datasets. We can see that the loan dataset has three tables.
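To make the multi-table setup concrete, here is a minimal sketch of joining two of the loan tables and engineering a first feature with pandas. The column names (`customerid`, `birthdate`, `loanamount`, `good_bad_flag`) and the toy rows are illustrative stand-ins, not the exact Zindi schema - swap in the real files from the cloned repo.

```python
import pandas as pd

# Toy stand-ins for two of the three loan tables (columns are illustrative).
demographics = pd.DataFrame({
    "customerid": [1, 2, 3],
    "birthdate": pd.to_datetime(["1985-04-02", "1990-07-19", "1978-11-30"]),
})
performance = pd.DataFrame({
    "customerid": [1, 2, 3],
    "loanamount": [10000, 20000, 15000],
    "good_bad_flag": ["Good", "Bad", "Good"],
})

# Join the tables on the shared key, then engineer a simple age feature.
df = performance.merge(demographics, on="customerid", how="left")
df["age"] = (pd.Timestamp("2019-01-01") - df["birthdate"]).dt.days // 365

print(df[["customerid", "loanamount", "age", "good_bad_flag"]])
```

The same `merge`-then-derive pattern extends to the third table (e.g., aggregating a customer's previous loans before joining).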
The contact centre is changing. In the past, call centre agents had to process a large volume of standard calls as quickly as possible. But with the deployment of new technologies like artificial intelligence (AI) and robotic process automation (RPA), these agents no longer have to carry out incredibly repetitive tasks and can instead focus their attention on tackling more complex customer concerns. For call agents, RPA makes it possible to complete simple tasks across back-end systems, which reduces the amount of time spent on admin, says Adriaan van Staden, senior sales manager at call centre tech vendor Genesys South Africa. RPA, in its broadest sense, is an application governed by business logic and structured inputs, aimed at automating business processes.
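The "business logic plus structured inputs" idea can be sketched in a few lines. The field names, ticket types and routing rules below are entirely hypothetical - real RPA platforms encode rules like these against actual back-end systems.

```python
def route_ticket(ticket: dict) -> str:
    """Apply fixed business rules to a structured customer record."""
    if ticket["type"] == "address_change":
        return "auto-update CRM record"      # simple, repetitive -> bot
    if ticket["type"] == "balance_query":
        return "auto-reply with statement"   # simple, repetitive -> bot
    return "escalate to human agent"         # complex -> call agent

print(route_ticket({"type": "address_change"}))  # handled by the bot
print(route_ticket({"type": "complaint"}))       # left to the agent
```

The repetitive cases are resolved without an agent touching them, which is the time saving van Staden describes.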
The future is intelligent: By 2030, artificial intelligence (AI) will add $15.7 trillion to the global GDP, with $6.6 trillion projected to come from increased productivity and $9.1 trillion from consumption effects. Furthermore, augmentation, which allows people and AI to work together to enhance performance, "will create $2.9 trillion of business value and 6.2 billion hours of worker productivity globally." In a world that is increasingly characterized by enhanced connectivity and where data is as pervasive as it is valuable, Africa has a unique opportunity to leverage new digital technologies to drive large-scale transformation and competitiveness. Africa cannot and should not be left behind. Ten key enabling technologies will drive Africa's digital economy: cybersecurity, cloud computing, big data analytics, blockchain, the Internet of Things, 3D printing, biotechnology, robotics, energy storage, and AI.
Electroencephalography (EEG) recordings of rhythm perception might contain enough information to distinguish different rhythm types/genres or even identify the rhythms themselves. We apply convolutional neural networks (CNNs) to analyze and classify EEG data recorded within a rhythm perception study in Kigali, Rwanda, which comprises 12 East African and 12 Western rhythmic stimuli - each presented in a loop for 32 seconds to 13 participants. We investigate the impact of the data representation and the pre-processing steps for this classification task and compare different network structures. Using CNNs, we are able to recognize individual rhythms from the EEG with a mean classification accuracy of 24.4% (chance level 4.17%) over all subjects by looking at less than three seconds of data from a single channel. Aggregating predictions for multiple channels, a mean accuracy of up to 50% can be achieved for individual subjects.
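To illustrate the shape of this setup - a short single-channel EEG window classified into one of 24 rhythm classes - here is a minimal NumPy forward pass through a toy, untrained 1-D CNN. The sampling rate, filter sizes and strides are hypothetical, not the paper's actual architecture or pre-processing.

```python
import numpy as np

rng = np.random.default_rng(0)

SAMPLE_RATE = 512          # hypothetical sampling rate (Hz)
WINDOW = 3 * SAMPLE_RATE   # ~3 s single-channel window
N_CLASSES = 24             # 12 East African + 12 Western rhythms

def conv1d(x, kernels, stride=4):
    """Valid 1-D convolution of a single-channel signal with a filter bank, plus ReLU."""
    k = kernels.shape[1]
    n_out = (x.shape[0] - k) // stride + 1
    out = np.empty((kernels.shape[0], n_out))
    for i in range(n_out):
        out[:, i] = kernels @ x[i * stride : i * stride + k]
    return np.maximum(out, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy network: one conv layer, global average pooling, linear classifier head.
x = rng.standard_normal(WINDOW)               # one EEG window (single channel)
kernels = rng.standard_normal((8, 64)) * 0.1  # 8 filters of length 64
W = rng.standard_normal((N_CLASSES, 8)) * 0.1

features = conv1d(x, kernels).mean(axis=1)    # global average pooling -> (8,)
probs = softmax(W @ features)                 # class distribution -> (24,)
pred = int(np.argmax(probs))
```

A trained version of this pipeline would learn `kernels` and `W` from labeled EEG windows; the point here is only the data flow from a raw time series to a 24-way prediction.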
- Cyber-attack Automated Unconventional Sensor Environment (CAUSE) applies AI/ML-based models to develop novel, automated methods for event-based detection and prediction of cyber-attacks significantly earlier than existing approaches. Forecasting cyber-attack events with actionable details advances the state of the art by enabling threat-specific cyber incident response and defense measures.
- Creation of Operationally Realistic 3D Environment (CORE3D) uses machine learning and deep learning techniques to develop methods for the fully automated construction of a high-fidelity 3D model of the world from remote sensing data.
- Deep Intermodal Video Analytics (DIVA) leverages machine learning techniques to develop robust automatic activity detection in streaming video across multiple cameras.
- Finding Engineering-Linked Indicators (FELIX) uses AI to detect engineering signatures across multiple biological organisms. The goal is to distinguish natural organisms from those that have been engineered.
- The Functional Map of the World Challenge developed algorithms to quickly and accurately classify 63 classes of buildings and regions in satellite imagery. All the top participants used various forms of deep learning.
- Functional Genomic and Computational Assessment of Threats (Fun GCAT) develops AI/ML-based approaches to learn and classify genetic (e.g., DNA) sequence data by genetic taxonomy, sequence function, and threat potential.
- The Mercury Challenge asked participants to use AI/ML approaches to forecast a variety of political events in the Middle East and North Africa region, such as non-violent civil unrest and military activity.
- Machine Intelligence from Cortical Networks (MICrONS) aims to revolutionize machine learning by reverse-engineering the algorithms of the brain. The program is expressly designed as a dialogue between data science and neuroscience.
- Machine Translation for English Retrieval of Information in Any Language (MATERIAL) develops machine learning methods to identify foreign-language information from speech and text relevant to English queries, and to provide evidence of the relevance of the retrieved information in English in a meaningful way.