Collaborating Authors


BBC Radio 4 - The Reith Lectures - Reith Lectures 2021 - Living With Artificial Intelligence


The lectures will examine what Russell argues is the most profound change in human history, as the world becomes increasingly reliant on super-powerful AI. Examining the impact of AI on jobs, military conflict and human behaviour, Russell will argue that our current approach to AI is wrong and that, if we continue down this path, we will have less and less control over AI even as it has an ever-greater impact on our lives. How can we ensure machines do the right thing? The lectures will suggest a way forward based on a new model for AI: machines that learn about and defer to human preferences. The series will be held in four locations across the UK: Newcastle, Edinburgh, Manchester and London. It will be broadcast on Radio 4 and the World Service, as well as being available on BBC Sounds.

Is my digital life being tracked or am I just paranoid?

USATODAY - Tech Top Stories

Every week, I help people like you on my national radio show with their technology or digital life issues. Sometimes, the answer is simple. I recommend a great way to get something done online, give a shopping recommendation, or share my tech wisdom. Other times, the issue is harder to pinpoint. Here's a common question I get: "A friend called and said they got a strange email from me that I don't remember sending."

Using DeepProbLog to perform Complex Event Processing on an Audio Stream Artificial Intelligence

In this paper, we present an approach to Complex Event Processing (CEP) based on DeepProbLog. The approach has four objectives: (i) accepting subsymbolic data as input, (ii) retaining flexibility and modularity in the definition of complex event rules, (iii) allowing the system to be trained end-to-end, and (iv) being robust to noisily labelled data. Our approach uses DeepProbLog to create a neuro-symbolic architecture that combines a neural network, which processes the subsymbolic data, with a probabilistic logic layer in which the user defines the rules for the complex events. We demonstrate that our approach can detect complex events in an audio stream, and that it can be trained even on a dataset with a moderate proportion of noisy labels.
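The idea of layering user-defined probabilistic rules over a neural classifier can be illustrated in plain Python. This is only a sketch of the underlying probability calculation, not the DeepProbLog API; the event labels (`glass_break`, `siren`) and the per-frame probabilities are hypothetical, and frame-level independence is assumed for simplicity.

```python
# Sketch (not DeepProbLog): a neural classifier yields per-frame
# probabilities for primitive sound events; a hand-written rule combines
# them into a complex-event probability, assuming frame independence.

def prob_complex_event(frames, first, then):
    """P(some frame labelled `first` is later followed by a frame
    labelled `then`), taking the best-scoring ordered pair of frames."""
    best = 0.0
    for i, fi in enumerate(frames):
        for fj in frames[i + 1:]:
            # probability that both primitive events occur in this order
            best = max(best, fi.get(first, 0.0) * fj.get(then, 0.0))
    return best

# Hypothetical classifier outputs for three consecutive audio frames.
frames = [
    {"glass_break": 0.9, "siren": 0.05},
    {"glass_break": 0.1, "siren": 0.2},
    {"glass_break": 0.05, "siren": 0.8},
]
p = prob_complex_event(frames, "glass_break", "siren")  # p ≈ 0.72
```

In DeepProbLog itself, the rule would instead be written declaratively in the logic layer, and gradients would flow from the rule's probability back into the neural classifier during end-to-end training.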

Breaking up or getting divorced? How to remove your ex from your digital life

FOX News

You get married or move in together, and your lives are tied in countless ways: a mortgage, the power bill, and your relationship status on social media sites. Then it ends, and you're left with a lot of heartache and a lot of work. It's bad enough thinking about everything strangers know about you.

AI & Law: Informing Clients About AI


For a free podcast of this article, visit this link or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, plus on other audio services.

Amazing Business Radio: Customer Insight


They discuss Khandelwal's artificial intelligence platform, which collects valuable information and insights from the consumers who reach out. The goal of the platform is to give you real-time insight into why consumers are contacting you in the first place. That information should then be shared with everyone involved, since the support team is accountable for ensuring customers come back. Companies should value their support workers as much as their data.

5 insider tech travel hacks you'll use every single trip

FOX News

Every summer, I get the travel itch. Before you head out, make sure your home is locked down. The bad news is security cameras, from video doorbells to a full-fledged security system, aren't always hack-proof out of the box.

Māori are trying to save their language from Big Tech


In March 2018, Peter-Lucas Jones and the ten other staff at Te Hiku Media, a small non-profit radio station nestled just below New Zealand's northernmost tip, were in disbelief. In ten days, thanks to a competition it had started, Māori speakers across New Zealand had recorded over 300 hours of annotated audio in their mother tongue. It was enough data to build language tech for te reo Māori, the Māori language – including automatic speech recognition and speech-to-text. The small staff of Māori language broadcasters and one engineer were about to become pioneers in indigenous speech recognition technology. But building the tools was only half the battle. Te Hiku soon found itself fending off corporate entities trying to develop their own indigenous data sets and resisting detrimental Western approaches to data sharing.

Using Radio Archives for Low-Resource Speech Recognition: Towards an Intelligent Virtual Assistant for Illiterate Users Artificial Intelligence

For many of the 700 million illiterate people around the world, speech recognition technology could provide a bridge to valuable information and services. Yet, those most in need of this technology are often the most underserved by it. In many countries, illiterate people tend to speak only low-resource languages, for which the datasets necessary for speech technology development are scarce. In this paper, we investigate the effectiveness of unsupervised speech representation learning on noisy radio broadcasting archives, which are abundant even in low-resource languages. We make three core contributions. First, we release two datasets to the research community. The first, West African Radio Corpus, contains 142 hours of audio in more than 10 languages with a labeled validation subset. The second, West African Virtual Assistant Speech Recognition Corpus, consists of 10K labeled audio clips in four languages. Next, we share West African wav2vec, a speech encoder trained on the noisy radio corpus, and compare it with the baseline Facebook speech encoder trained on six times more data of higher quality. We show that West African wav2vec performs similarly to the baseline on a multilingual speech recognition task, and significantly outperforms the baseline on a West African language identification task. Finally, we share the first-ever speech recognition models for Maninka, Pular and Susu, languages spoken by a combined 10 million people in over seven countries, including six where the majority of the adult population is illiterate. Our contributions offer a path forward for ethical AI research to serve the needs of those most disadvantaged by the digital divide.
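The language-identification step described above can be sketched in miniature. The paper's actual wav2vec encoder and radio corpora are not reproduced here; this is a minimal illustration, under stated assumptions, of how pooled per-frame embeddings from any speech encoder could feed a simple language-ID classifier. The embeddings, language labels (`maninka`, `susu`), and their separability are all synthetic.

```python
import numpy as np

# Sketch of a downstream language-ID step (the speech encoder itself is
# not reproduced): mean-pool per-frame embeddings into one utterance
# vector, then classify by nearest class centroid. Data is synthetic.

def pool(frame_embeddings):
    """Collapse a (frames, dims) array into a single utterance vector."""
    return np.asarray(frame_embeddings).mean(axis=0)

def fit_centroids(utterances, labels):
    """One mean vector per language label."""
    cents = {}
    for lab in set(labels):
        vecs = [pool(u) for u, l in zip(utterances, labels) if l == lab]
        cents[lab] = np.mean(vecs, axis=0)
    return cents

def predict(utterance, centroids):
    """Label of the nearest centroid to the pooled utterance vector."""
    v = pool(utterance)
    return min(centroids, key=lambda lab: np.linalg.norm(v - centroids[lab]))

rng = np.random.default_rng(0)
# Two hypothetical languages with well-separated embedding means.
train = ([rng.normal(0.0, 0.1, size=(20, 8)) for _ in range(5)]
         + [rng.normal(1.0, 0.1, size=(20, 8)) for _ in range(5)])
labels = ["maninka"] * 5 + ["susu"] * 5
cents = fit_centroids(train, labels)
pred = predict(rng.normal(1.0, 0.1, size=(20, 8)), cents)
```

In practice the paper reports that the quality of the encoder's representations, not the simplicity of the downstream classifier, drives language-ID performance, which is why pretraining on in-domain radio audio helps.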

The insider pro trick to find any photo on your phone in seconds

FOX News

Our phones are jam-packed with photos. Pick 25 at random, and I bet only a handful are decent photos you want to keep around. Duplicates and the shot right before the good one make up a lot of that junk.