In a project called "Relentless Doppelganger," a neural network is grinding out the blast beats, super-distorted guitars, and bellowing vocals of death metal. The best part of all: it's streaming its brutal creations 24 hours a day on YouTube -- an intriguing and public example of AI that's now able to generate convincing imitations of human art. The neural network is the work of Dadabots, a research duo that experiments with creating music using artificial intelligence tools. The death metal project, which they trained on tracks by the death metal band Archspire, is the first that they've livestreamed instead of releasing as an album, and the change in format had everything to do with the quality of the neural network's output. In Dadabots' previous experiments, which dabbled in black metal and Beatles-inspired tracks, only about 5 percent of the AI-generated tracks were usable, co-creator CJ Carr told Futurism, and the programmers had to curate them by hand.
The recent AI boom has had all sorts of weird and wonderful side effects as amateur tinkerers find ways to repurpose research from universities and tech companies. But one of the more unexpected applications has been in the world of video game mods. Fans have discovered that machine learning is the perfect tool to improve the graphics of classic games. The technique being used is known as "AI upscaling." In essence, you feed an algorithm a low-resolution image, and, based on training data it's seen, it spits out a version that looks the same but has more pixels in it.
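To make the idea concrete, here is a minimal sketch of the naive, non-AI baseline that upscalers improve on: nearest-neighbor interpolation, which adds pixels by duplication but invents no new detail. An AI upscaler replaces this fixed rule with a model that has learned, from training images, to predict plausible high-resolution detail. (This is an illustrative example, not code from any modding project.)

```python
import numpy as np

def nearest_neighbor_upscale(image, factor):
    """Naive upscaling: each pixel becomes a factor x factor block.
    Classic interpolation like this yields more pixels but no new
    detail; AI upscaling swaps this fixed rule for a learned mapping
    that hallucinates plausible texture from its training data."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

# A 2x2 grayscale checker pattern upscaled 4x becomes 8x8.
low_res = np.array([[0, 255],
                    [255, 0]], dtype=np.uint8)
high_res = nearest_neighbor_upscale(low_res, 4)
print(high_res.shape)  # (8, 8)
```

In practice modders feed game textures through trained super-resolution networks rather than interpolation, but the interface is the same: low-resolution image in, larger image out.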
Uber's self-driving car unit has been valued at $7.3bn (£5.6bn), after receiving $1bn of investment from a consortium including Toyota and Saudi Arabia's sovereign wealth fund. With weeks to go until the loss-making San Francisco firm's stock market float, expected to value the company at up to $100bn, Uber said it had secured new financial backing for its plans to develop autonomous vehicles. Japanese carmaker Toyota and its compatriot Denso, a car parts supplier, will invest a combined $667m in Uber's Advanced Technologies Group (ATG). The remainder will come from Japanese conglomerate SoftBank's $100bn Vision Fund, whose largest investor is Saudi Arabia. Toyota and SoftBank are already major investors in Uber, with the latter owning 16%.
The ACLU and other groups urged Amazon to stop selling facial recognition technology to law enforcement agencies, one of many recent flashpoints over algorithmic bias. Lending tools charge higher interest rates to Hispanic and African American borrowers; job-hunting tools favor men; negative emotions are more likely to be attributed to Black men's faces than to white men's; and computer vision systems for self-driving cars have a harder time spotting pedestrians with darker skin tones.
Every year trash companies sift through an estimated 68 million tons of recycling, which is the weight equivalent of more than 30 million cars. A key step in the process happens on fast-moving conveyor belts, where workers have to sort items into categories like paper, plastic and glass. Such jobs are dull, dirty, and often unsafe, especially in facilities where workers also have to remove normal trash from the mix. With that in mind, a team led by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a robotic system that can detect if an object is paper, metal, or plastic. The team's "RoCycle" system includes a soft Teflon hand that uses tactile sensors on its fingertips to detect an object's size and stiffness.
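The idea of sorting by touch can be sketched as a simple decision rule over the tactile readings: metal cans are rigid, plastic containers deform somewhat, and paper crumples easily. The thresholds below are illustrative assumptions for the sake of the sketch, not values from the CSAIL paper, and the real RoCycle system fuses richer sensor data than a single stiffness number.

```python
def classify_material(stiffness):
    """Toy material classifier, loosely inspired by RoCycle's use of
    tactile stiffness sensing. `stiffness` is a normalized reading
    from 0.0 (very soft) to 1.0 (rigid). The cutoff values are
    hypothetical, chosen only to illustrate the decision logic."""
    if stiffness > 0.8:
        return "metal"    # rigid objects, e.g. cans
    elif stiffness > 0.4:
        return "plastic"  # semi-rigid, deformable containers
    else:
        return "paper"    # crumples under light pressure

print(classify_material(0.95))  # metal
print(classify_material(0.60))  # plastic
print(classify_material(0.10))  # paper
```

A real system would learn these boundaries from labeled grasps rather than hard-coding them, but the pipeline — squeeze, measure, classify, sort — is the same.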
A prized attribute among law enforcement specialists, the expert ability to visually identify human faces can inform forensic investigations and help maintain safe border crossings, airports, and public spaces around the world. The field of forensic facial recognition depends on highly refined traits such as visual acuity, cognitive discrimination, memory recall, and elimination of bias. Humans, as well as computers running machine learning (ML) algorithms, possess these abilities. And it is the combination of the two -- a human facial recognition expert teamed with a computer running ML analyses of facial image data -- that provides the most accurate facial identification, according to a 2018 study in which Rama Chellappa, Distinguished University Professor and Minta Martin Professor of Engineering, and his team collaborated with researchers at the National Institute of Standards and Technology and the University of Texas at Dallas. Chellappa, who holds appointments in UMD's Departments of Electrical and Computer Engineering and Computer Science and its Institute for Advanced Computer Studies, is not surprised by the study's results.
Sales of smart speakers are soaring despite some people's concerns over privacy, with Amazon's Alexa leading the charge into homes via various Echo devices and Google's Home and Assistant snapping at its heels. They come in various shapes, sizes and prices, but if you just want to dip your toe into the burgeoning voice-powered world, what's the cheapest way to get Alexa or Google Assistant into your home? Voice assistants don't actually need a dedicated speaker to work. If you have a modern smartphone, chances are you either already have Google Assistant (on an Android phone) or can install the app on an iPhone. The same goes for Amazon's Alexa, which can even be set as the default voice assistant on an Android phone.