
Listen to brutal death metal made by a neural network


In a project called "Relentless Doppelganger," a neural network grinds out the blast beats, super-distorted guitars, and bellowing vocals of death metal. Best of all, it streams its brutal creations 24 hours a day on YouTube -- an intriguing and public example of AI that can now generate convincing imitations of human art. The neural network is the work of Dadabots, a research duo that experiments with making music using artificial intelligence tools. The death metal project, trained on tracks by the death metal band Archspire, is the first that they've livestreamed instead of releasing as an album, and the change in format had everything to do with the quality of the neural network's output. In Dadabots' previous experiments, which dabbled in black metal and Beatles-inspired tracks, only about 5 percent of the AI-generated tracks were usable, co-creator CJ Carr told Futurism, and the programmers had to curate them by hand.

Artificial intelligence is helping old video games look like new


The recent AI boom has had all sorts of weird and wonderful side effects as amateur tinkerers find ways to repurpose research from universities and tech companies. But one of the more unexpected applications has been in the world of video game mods. Fans have discovered that machine learning is the perfect tool to improve the graphics of classic games. The technique being used is known as "AI upscaling." In essence, you feed an algorithm a low-resolution image, and, based on training data it's seen, it spits out a version that looks the same but has more pixels in it.
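To make the idea concrete, here is a minimal sketch of the non-AI baseline: a nearest-neighbor upscale that adds pixels without adding detail. An "AI upscaler" replaces this fixed copying rule with a learned model that fills in plausible detail based on its training data. The function name and the tiny sample image below are illustrative, not from any particular mod or library.

```javascript
// Naive nearest-neighbor upscaling: every output pixel simply copies the
// closest source pixel, so the result is bigger but no sharper. An AI
// upscaler would instead predict the new pixels from learned examples.
// Images here are plain 2-D arrays of pixel values (hypothetical data).
function nearestNeighborUpscale(image, factor) {
  const out = [];
  for (let y = 0; y < image.length * factor; y++) {
    const row = [];
    for (let x = 0; x < image[0].length * factor; x++) {
      // Map each output coordinate back to its nearest source pixel.
      row.push(image[Math.floor(y / factor)][Math.floor(x / factor)]);
    }
    out.push(row);
  }
  return out;
}

const lowRes = [
  [10, 20],
  [30, 40],
];
const highRes = nearestNeighborUpscale(lowRes, 2);
console.log(highRes.length, highRes[0].length); // 4 4
```

The gap between this baseline and a learned super-resolution model is exactly what makes the modded textures look "like new" rather than merely enlarged.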

The Buddy System: Human-Computer Teams

IEEE Spectrum Robotics

A prized attribute among law enforcement specialists, the expert ability to visually identify human faces can inform forensic investigations and help maintain safe border crossings, airports, and public spaces around the world. The field of forensic facial recognition depends on highly refined traits such as visual acuity, cognitive discrimination, memory recall, and elimination of bias. Humans, as well as computers running machine learning (ML) algorithms, possess these abilities. And it is the combination of the two -- a human facial recognition expert teamed with a computer running ML analyses of facial image data -- that provides the most accurate facial identification, according to a 2018 study in which Rama Chellappa, Distinguished University Professor and Minta Martin Professor of Engineering, and his team collaborated with researchers at the National Institute of Standards and Technology and the University of Texas at Dallas. Chellappa, who holds appointments in UMD's Departments of Electrical and Computer Engineering and Computer Science and Institute for Advanced Computer Studies, is not surprised by the study results.

TensorFlow.js puts machine learning in the browser


Google's TensorFlow open source machine learning library has been extended to JavaScript with TensorFlow.js, a JavaScript library for deploying machine learning models in the browser. A WebGL-accelerated library, TensorFlow.js also works with the Node.js runtime. With machine learning running directly in the browser, there is no need for drivers; developers can just run code. The project, which features an ecosystem of JavaScript tools, evolved from the Deeplearn.js library. Its APIs can be used to build models with either the low-level JavaScript linear algebra library or the higher-level layers API.