Improving Deep Attractor Network by BGRU and GMM for Speech Separation

Melhem, Rawad, Jafar, Assef, Hamadeh, Riad

arXiv.org Artificial Intelligence

Deep Attractor Network (DANet) is a state-of-the-art technique in the field of speech separation that uses Bidirectional Long Short-Term Memory (BLSTM), but the complexity of the DANet model is very high. In this paper, a simplified and powerful DANet model is proposed using a Bidirectional Gated Recurrent Unit (BGRU) network instead of BLSTM. A Gaussian Mixture Model (GMM), rather than k-means, was applied in DANet as the clustering algorithm to reduce complexity and increase learning speed and accuracy. The metrics used in this paper are Signal to Distortion Ratio (SDR), Signal to Interference Ratio (SIR), Signal to Artifact Ratio (SAR), and the Perceptual Evaluation of Speech Quality (PESQ) score. Two-speaker mixture datasets from the TIMIT corpus were prepared to evaluate the proposed model; the system achieved an SDR of 12.3 dB and a PESQ score of 2.94, both better than the original DANet model. Further improvements of 20.7% and 17.9% were obtained in the number of parameters and the training time, respectively. The model was also applied to mixed Arabic speech signals, and the results were better than those for English.
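The paper's two modifications can be illustrated with a minimal sketch: a BGRU encoder maps mixture spectrogram frames to time-frequency embeddings, and a GMM (in place of k-means) clusters those embeddings into per-speaker soft masks at inference time. The layer sizes, embedding dimension, and shapes below are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal DANet-style sketch: BGRU embedding network + GMM clustering.
# Sizes and shapes are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

class BGRUDANet(nn.Module):
    def __init__(self, n_freq=129, emb_dim=20, hidden=300, layers=4):
        super().__init__()
        self.bgru = nn.GRU(n_freq, hidden, num_layers=layers,
                           batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_freq * emb_dim)
        self.emb_dim = emb_dim

    def forward(self, mix_mag):                      # (batch, time, freq)
        h, _ = self.bgru(mix_mag)                    # (batch, time, 2*hidden)
        v = torch.tanh(self.proj(h))                 # (batch, time, freq*emb)
        return v.view(v.size(0), -1, self.emb_dim)   # (batch, time*freq, emb)

def separate(model, mix_mag, n_speakers=2):
    """Cluster T-F embeddings with a GMM (instead of k-means) to get soft masks."""
    with torch.no_grad():
        emb = model(mix_mag)[0].cpu().numpy()        # (time*freq, emb)
    gmm = GaussianMixture(n_components=n_speakers).fit(emb)
    masks = gmm.predict_proba(emb)                   # soft assignment per speaker
    return masks.reshape(mix_mag.shape[1], mix_mag.shape[2], n_speakers)

model = BGRUDANet()
mix = torch.rand(1, 100, 129)                        # dummy mixture magnitude
masks = separate(model, mix)
print(masks.shape)                                   # (100, 129, 2)
```

The GMM's soft posteriors act as ratio-like masks directly, which is one way the soft clustering can help compared with the hard assignments produced by k-means.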


New Series: Creating Media with Machine Learning

#artificialintelligence

Welcome to the first post in our multi-part series on how Netflix is developing and using machine learning (ML) to help creators make better media -- from TV shows to trailers to movies to promotional art and so much more. Media is at the heart of Netflix; through each engagement, it is how we bring our members continued joy. This blog series will take you behind the scenes, showing how we use the power of machine learning to create stunning media at a global scale. At Netflix, we launch thousands of new TV shows and movies every year for our members across the globe.


#TalkDataToMe – a new series from The Alan Turing Institute

AIHub

This month, The Alan Turing Institute launched #TalkDataToMe, a new video series and social media campaign that aims to explain topics related to artificial intelligence (AI) and data science. The video series has been created to provide accessible and factual information. The first episode sees host Tabitha Goldstaub ask Andrea Baronchelli about non-fungible tokens (NFTs). He explains what NFTs are, provides some examples, and discusses what the development of NFTs means for online "ownership". The Institute has invited the audience to get in touch and suggest topics they'd like to see covered.


AIhub monthly digest: April 2022 – images of AI, data justice, and winning at bridge

AIHub

Welcome to our April 2022 monthly digest, where you can catch up with any AIhub stories you may have missed, get the low-down on recent events, and much more. This month, we hear from our latest new voice in AI, talk about AI images, investigate data justice, and watch an AI system play bridge. In our latest episode of New voices in AI, we caught up with Maria De-Arteaga, who told us about her work and journey into algorithmic fairness and human-algorithm collaboration. You can find all episodes in the series here. In this article, Thom Badings and Nils Jansen write about their work on controllers for autonomous systems, which won them, along with co-authors Alessandro Abate, David Parker, Hasan Poonawala, and Marielle Stoelinga, a distinguished paper award at AAAI 2022.


Microsoft announces ND A100 v4 VM series--a new series of AI virtual machines

#artificialintelligence

Microsoft has announced the development of its new ND A100 v4 VM -- an AI virtual machine series designed to give its customers a massive AI processing boost. Microsoft made the announcement on its Azure blog, saying that the new series will be available soon. Projects based on artificial intelligence tend to require a lot of computing power -- more than most organizations have on hand. This situation has led big companies like Microsoft to develop AI systems that can be accessed via the internet. Training such systems typically requires more resources than most customers have access to, which has led to the development of partially trained systems -- the bulk of the training is done by a team at Microsoft, leaving customers to add a small amount of additional training for their unique application.
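The "partially trained systems" described here correspond to the common pretrain-then-fine-tune pattern: the provider trains a large model, and the customer adapts only a small task-specific head on their own data. A minimal, hypothetical PyTorch sketch of that division of labor (the layer sizes and data are made up, and nothing here is Azure-specific):

```python
# Pretrain-then-fine-tune sketch: freeze a provider-trained backbone and
# train only a small task head. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

backbone = nn.Sequential(                  # stands in for a large pretrained model
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
)
for p in backbone.parameters():            # customer does not retrain the bulk
    p.requires_grad = False

head = nn.Linear(128, 10)                  # small task-specific layer to fine-tune
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 512)                   # dummy customer data
y = torch.randint(0, 10, (32,))
for _ in range(5):                         # brief fine-tuning loop
    opt.zero_grad()
    loss = loss_fn(head(backbone(x)), y)
    loss.backward()                        # gradients flow only to the head
    opt.step()
print(loss.item())
```

Because only the head's parameters are updated, the customer's compute bill scales with the small adaptation step rather than with the full training run.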


Deep learning with point clouds

#artificialintelligence

If you've ever seen a self-driving car in the wild, you might wonder about that spinning cylinder on top of it. It's a "lidar sensor," and it's what allows the car to navigate the world. By sending out pulses of infrared light and measuring the time it takes for them to bounce off objects, the sensor creates a "point cloud," a 3D snapshot of the car's surroundings. Making sense of raw point-cloud data is difficult, and before the age of machine learning it required highly trained engineers to tediously hand-specify which qualities they wanted to capture. But in a new series of papers out of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), researchers show that they can use deep learning to automatically process point clouds for a wide range of 3D-imaging applications.
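The time-of-flight principle the sensor relies on is simple arithmetic: a pulse travels to the object and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the timing value is a made-up example):

```python
# Time-of-flight ranging: distance = speed_of_light * round_trip_time / 2.
C = 299_792_458.0                    # speed of light in m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to an object from a lidar pulse's round-trip time (seconds)."""
    return C * round_trip_s / 2.0

# A pulse that returns after ~66.7 ns corresponds to an object ~10 m away.
print(tof_distance(66.7e-9))         # ~10.0
```

Repeating this measurement for millions of pulses per second, each at a known angle, is what yields the 3D point cloud the MIT papers then feed to deep networks.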


Rebooting 'Battlestar Galactica' isn't worth the fraking risk. Or is it?

#artificialintelligence

In the 2003 reboot of Battlestar Galactica we learned that "this has all happened before and will all happen again." And now, that prophecy is coming true, again. On Tuesday, news broke that NBC's new streaming service, Peacock, will debut a new version of Battlestar Galactica written and produced by Sam Esmail, famous for his work on Mr. Robot. But, because Battlestar has already been rebooted, and kind of recently, the newly announced series is either a losing sci-fi gamble or the best reboot idea in years. Back in 2003, the late Richard Hatch actively tried to sabotage the Sci-Fi Channel's "reimagining" of Battlestar Galactica.


Robert Downey Jr. To Make AI Series For YouTube

Forbes - Tech

Movie star Robert Downey Jr. and his wife, producer Susan Downey, are making a new series about artificial intelligence (AI) for YouTube. The untitled documentary series, announced by YouTube on Tuesday, will be broadcast on YouTube Red -- a subscription service that competes with Netflix. It will have eight hour-long episodes when it airs in 2019. The show will feature experts in science, philosophy, technology, engineering, medicine, futurism, and entertainment, YouTube said. It will be hosted and narrated by Robert Downey Jr. AI has been the topic of several well-known science fiction movies, but few serious documentaries have been made about the technology, which could have a more profound impact than the industrial revolution.


Machine Learning: A New Series From Adafruit Make:

#artificialintelligence

Adafruit has started a new series titled "MACHINE LEARNING:". They'll be exploring the intersections of hackers, makers, engineers, artists, and artificial intelligence. In this first part of the series, Phil Torrone looks at the 1970 movie Colossus: The Forbin Project. Movie history is filled with computers that make us miserable. Unlike today's computers that make our lives dreadful, like the little ones in our pockets eager to sell out our privacy for a nickel, or to crash when we need them most, yesterday's computers were characters in their own right in each film.


Episode two Blue Planet II gives glimpse into the deep

Daily Mail - Science & tech

Episode two of Blue Planet II could be one of Sir David Attenborough's scariest shows yet - giving us a glimpse of life in total darkness that we are only just starting to explore. The episode also looks at peculiar gardens that are thriving in the pitch black, as well as species of coral that have never been seen in shallower waters. The fangtooth (pictured) has the largest teeth relative to body size of any fish in the entire ocean. The filming of Blue Planet involved around 1,000 people, from producers and deep-sea divers to researchers, scientists, camera crews, helicopter pilots and drone operators. Some 125 expeditions were undertaken across every ocean, with 1,500 days spent at sea and 6,000 hours underwater.