Communications: AI-Alerts


Aza Raskin Tried To Fix Social Media. Now He Wants to Use AI to Talk to Animals

TIME - Tech

During the early years of the Cold War, an array of underwater microphones monitoring for sounds of Russian submarines captured something otherworldly in the depths of the North Atlantic. The haunting sounds came not from enemy craft, nor from aliens, but from humpback whales, a species that, at the time, humans had hunted almost to the brink of extinction. Years later, when environmentalist Roger Payne obtained the recordings from U.S. Navy storage and listened to them, he was deeply moved. The whale songs seemed to reveal majestic creatures that could communicate with one another in complex ways. If only the world could hear these sounds, Payne reasoned, the humpback whale might just be saved from extinction. When Payne released the recordings in 1970 as the album Songs of the Humpback Whale, he was proved right. It was played at the U.N. General Assembly, and it inspired Congress to pass the 1973 Endangered Species Act. By 1986, commercial whaling was banned under international law.


How natural language processing helps promote inclusivity in online communities

#artificialintelligence

To create healthy online communities, companies need better strategies to weed out harmful posts. In this VB On-Demand event, AI/ML experts from Cohere and Google Cloud share insights into the new tools changing how moderation is done. Game players experience a staggering amount of online abuse: a recent study found that five out of six adults aged 18 to 45, more than 80 million gamers, have experienced harassment in online multiplayer games.


This Chatbot Aims to Steer People Away From Child Abuse Material

WIRED

There are huge volumes of child sexual abuse photos and videos online; millions of pieces are removed from the web every year. These illegal images are often found on social media websites, image hosting services, dark web forums, and legal pornography websites. Now a new tool on one of the biggest pornography websites is trying to interrupt people as they search for child sexual abuse material and redirect them to a service where they can get help. Since March this year, each time someone has searched for a word or phrase that could be related to child sexual abuse material (also known as CSAM) on Pornhub's UK website, a chatbot has appeared and interrupted their attempted search, asking them whether they want to get help with the behavior they're showing. During the first 30 days of the system's trial, users triggered the chatbot 173,904 times.
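The excerpt describes the mechanism only at a high level: a maintained list of words and phrases, and an interruption that replaces the search results with an offer of help. As a rough sketch of that flow (the term list, matching rule, and messages below are placeholders, not details of the deployed system the article describes), the trigger check can be as simple as:

```python
# Minimal illustrative sketch of a search-term trigger; all terms and messages
# are placeholders, not details of the system described in the article.
FLAGGED_TERMS = {"placeholder_term_1", "placeholder_term_2"}  # maintained blocklist

def should_interrupt(search_query: str) -> bool:
    """Return True if the query contains any flagged word or phrase."""
    query = search_query.lower()
    return any(term in query for term in FLAGGED_TERMS)

def handle_search(search_query: str) -> str:
    if should_interrupt(search_query):
        # Instead of returning results, surface the chatbot that asks whether
        # the user wants help and points them to a support service.
        return "chatbot: interrupt search and offer help"
    return "show search results"
```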


Artificial intelligence suffers from some very human flaws. Gender bias is one

#artificialintelligence

Last month, Facebook parent Meta unveiled an artificial intelligence chatbot said to be its most advanced yet. BlenderBot 3, as the AI is known, is able to search the internet to talk to people about almost anything, and it has abilities related to personality, empathy, knowledge and long-term memory. BlenderBot 3 is also good at peddling anti-Semitic conspiracy theories, claiming that former US President Donald Trump won the 2020 election, and calling Meta Chairman and Facebook co-founder Mark Zuckerberg "creepy". It's not the first time an AI has gone rogue. In 2016, Microsoft's Tay AI took less than 24 hours to morph into a right-wing bigot on Twitter, posting racist and misogynistic tweets and praising Adolf Hitler.


Technical Perspective: Physical Layer Resilience through Deep Learning in Software Radios

Communications of the ACM

Resilience is the new holy grail in wireless communication systems. Complex radio environments and malicious attacks using intelligent jamming make communication systems unreliable. Early approaches to dealing with such problems were based on frequency hopping, scrambling, chirping, and cognitive radio-based concepts, among others. Physical-layer security was strengthened using known codes and pseudorandom number sequences. However, these approaches are no longer up to modern standards: they do little to improve resilience and are relatively easy to defeat by means of intelligent jamming.
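As a rough illustration of the pseudorandom-sequence idea mentioned above (not taken from the article; the channel grid, seed, and hop count are arbitrary assumptions), a transmitter and receiver can derive the same hop schedule from a shared seed, so a jammer without the seed cannot compute the schedule in advance, although an intelligent follower jammer can still track the hops, which is exactly the weakness the excerpt points to:

```python
# Minimal sketch of pseudorandom frequency hopping; the channel grid, seed,
# and hop count are illustrative assumptions, not values from the article.
import random

CHANNELS_MHZ = [2402 + 2 * k for k in range(40)]  # example 2.4 GHz channel grid

def hop_sequence(shared_seed: int, n_hops: int) -> list[int]:
    """Derive the next n_hops channels from a seed known to both endpoints."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS_MHZ) for _ in range(n_hops)]

# Transmitter and receiver agree on the seed out of band and stay in sync;
# without the seed, a jammer cannot predict the next channel ahead of time.
tx_hops = hop_sequence(shared_seed=0xC0FFEE, n_hops=8)
rx_hops = hop_sequence(shared_seed=0xC0FFEE, n_hops=8)
assert tx_hops == rx_hops
```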


It didn't take long for Meta's new chatbot to say something offensive

CNN Top Stories

Meta's new chatbot can convincingly mimic how humans speak on the internet, for better and for worse. In conversations with CNN Business this week, the chatbot, which was released publicly Friday and has been dubbed BlenderBot 3, said it identifies as "alive" and "human," watches anime and has an Asian wife. It also falsely claimed that Donald Trump is still president and there is "definitely a lot of evidence" that the election was stolen. If some of those responses weren't concerning enough for Facebook's parent company, users were quick to point out that the artificial intelligence-powered bot openly blasted Facebook. In one case, the chatbot reportedly said it had "deleted my account" over frustration with how Facebook handles user data.



Do Computers Have Feelings? Don't Let Google Alone Decide

Bloomberg View

News that Alphabet Inc.'s Google sidelined an engineer who claimed its artificial intelligence system had become sentient after he'd had several months of conversations with it prompted plenty of skepticism from AI scientists. Many have said, via postings on Twitter, that senior software engineer Blake Lemoine projected his own humanity onto Google's chatbot generator LaMDA. Whether they're right, or Lemoine is right, is a matter for debate, and one that should be allowed to continue without Alphabet stepping in to decide it.


The Download: DeepMind's AI shortcomings, and China's social media translation problem

MIT Technology Review

Earlier this month, DeepMind presented a new "generalist" AI model called Gato. The model can play Atari video games, caption images, chat, and stack blocks with a real robot arm, the Alphabet-owned AI lab announced. All in all, Gato can do hundreds of different tasks. But while Gato is undeniably fascinating, in the week since its release some researchers have got a bit carried away. One of DeepMind's top researchers and a coauthor of the Gato paper, Nando de Freitas, couldn't contain his excitement.


UK watchdog fines facial recognition firm £7.5m over image collection

The Guardian

The UK's data watchdog has fined a facial recognition company £7.5m for collecting images of people from social media platforms and the web to add to a global database. The Information Commissioner's Office (ICO) also ordered US-based Clearview AI to delete the data of UK residents from its systems. Clearview AI has collected more than 20bn images of people's faces from Facebook and other social media companies, and from scouring the web. John Edwards, the UK information commissioner, said Clearview's business model was unacceptable. "Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20bn images," he said. "The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service."