Journal of Artificial Intelligence Research, Vol 65 (2019)

JAIR is published by AI Access Foundation, a nonprofit public charity whose purpose is to facilitate the dissemination of scientific results in artificial intelligence. JAIR, established in 1993, was one of the first open-access scientific journals on the Web, and has been a leading publication venue since its inception. We invite you to check out our other initiatives.

Robot dishwasher could replace human workers

Daily Mail - Science & tech

The worst part of working in any restaurant could soon be eliminated after a robot capable of washing the pots was invented by a US-based start-up. Although there are more than half a million people employed as dishwashers in the US alone, the job is poorly paid, gruelling work with a high turnover rate. Now Dishcraft, based in Silicon Valley, is hoping to tackle these issues with its automated dishwasher, reports CNBC. The system currently works by using bowls and plates that have metal pieces attached to them, but the founders, Linda Pouliot and Paul Birkmeyer, hope to move on to other items in the future. Dishcraft's robot currently works only with plates and bowls the company develops itself, which have metal pieces attached to the bottom and are much stronger than other dishware.

Amazon Alexa could pick up on a patient in cardiac arrest

Daily Mail - Science & tech

The research was led by Justin Chan, a PhD student in the department of computer science and engineering. Almost 500,000 Americans die each year from a cardiac arrest, the researchers wrote in the journal npj Digital Medicine. And the condition kills 100,000 Britons annually, according to Arrhythmia Alliance. Study author Dr Jacob Sunshine, assistant professor of anesthesiology and pain medicine, said: 'Cardiac arrests are a very common way for people to die and right now many of them can go unwitnessed. 'Part of what makes this technology so compelling is that it could help us catch more patients in time for them to be treated.'

Neuromorphic Computing


I saw a video article on Neuromorphic Computing the other day - something I had not really heard much about, though it ties in heavily to Artificial Intelligence, which I, of course, do know about. Wow... the possibilities are now endless. This is what Techopedia says about Neuromorphic Computing... Neuromorphic computing utilizes an engineering approach or method based on the activity of the biological brain. This type of approach can make technologies more versatile and adaptable, and promote more vibrant results than other types of traditional architectures, for instance, the von Neumann architecture that is so useful in traditional hardware design. Neuromorphic computing is also known as neuromorphic engineering.
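A concrete way to see how this differs from von Neumann-style computation is the spiking neuron model that neuromorphic chips typically implement: state lives in the "neuron" itself, and computation happens as discrete spike events. The leaky integrate-and-fire sketch below is a standard textbook model, not taken from the article, and all names and constants are illustrative:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    The membrane potential decays by `leak` each step, accumulates the
    input current, and emits a spike (1) when it crosses `threshold`,
    after which it resets to zero.
    """
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Three weak inputs accumulate into one spike; a single strong input fires at once.
print(lif_neuron([0.5, 0.5, 0.5, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

Because the neuron only produces output when it spikes, hardware built this way can stay idle most of the time, which is part of the efficiency appeal the video presumably alludes to.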

NVIDIA Researchers Present Pixel Adaptive Convolutional Neural Networks at CVPR 2019 - NVIDIA Developer News Center


Despite the widespread use of convolutional neural networks (CNNs), the convolution operations used in standard CNNs have some limitations. To overcome them, researchers from NVIDIA and the University of Massachusetts Amherst developed a new type of convolution operation that can dynamically adapt to input images, generating filters specific to the content. The researchers will present their work at the annual Computer Vision and Pattern Recognition (CVPR) conference in Long Beach, California this week. "Convolutions are the fundamental building blocks of CNNs," the researchers wrote in the research paper. "The fact that their weights are spatially shared is one of the main reasons for their widespread use, but it is also a major limitation, as it makes convolutions content-agnostic." To help improve the efficiency of CNNs, the team proposed a generalization of the convolution operation, Pixel-Adaptive Convolution (PAC), to mitigate the limitation.
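The core idea can be sketched in a few lines of NumPy. This is a toy single-channel version, not the authors' implementation: the spatially shared 3x3 filter is reweighted at each pixel by a Gaussian kernel computed from a guidance image, so the effective filter varies with content. The function name and the Gaussian form are illustrative assumptions.

```python
import numpy as np

def pixel_adaptive_conv(image, guide, weights, sigma=1.0):
    """Toy 3x3 pixel-adaptive convolution on a single-channel image.

    The spatially shared filter `weights` is reweighted at every pixel by a
    Gaussian kernel on the guidance features, so the effective filter
    depends on image content (the key idea behind PAC).
    """
    h, w = image.shape
    pad = np.pad(image, 1)   # zero-pad so output keeps the input size
    gpad = np.pad(guide, 1)
    out = np.zeros_like(image, dtype=float)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gpatch = gpad[i:i + 3, j:j + 3]
            # Content-adaptive kernel: compares each neighbour's guidance
            # feature with the centre pixel's feature.
            k = np.exp(-0.5 * ((gpatch - guide[i, j]) / sigma) ** 2)
            out[i, j] = np.sum(k * weights * patch)
    return out
```

With a constant guidance image the adaptive kernel is 1 everywhere and this reduces to an ordinary convolution, which matches the paper's framing of PAC as a generalization of the standard operation.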

Machine Learning and Data Science with Python - BoTree Technologies


In this post, we will see one use of machine learning with Python. But first, we need to understand what machine learning is. Machine learning is getting computers to program themselves: if programming is automation, then machine learning is automating the process of automation.
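The "programming themselves" idea can be made concrete in a few lines of NumPy (an illustrative sketch, not the code from the post): instead of hand-writing the rule y = 2x + 1, we give the machine example input/output pairs and let it recover the rule by least squares.

```python
import numpy as np

# Example pairs produced by a hidden rule: y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1  # training targets

# Fit a line y = a*x + b by least squares: the machine infers the rule
# from data instead of being told it.
A = np.vstack([x, np.ones_like(x)]).T
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(round(a, 3), round(b, 3))  # recovers a = 2.0, b = 1.0
```

The same pattern, fit parameters from examples rather than encode the logic by hand, is what scales up into the neural networks discussed elsewhere in this issue.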

Summit Achieves 445 Petaflops on New 'HPL-AI' Benchmark


Traditionally, supercomputer performance is measured using the High-Performance Linpack (HPL) benchmark, which is the basis for the Top500 list that biannually ranks the world's fastest supercomputers. The Linpack benchmark tests a supercomputer's ability to conduct high-performance tasks (like simulations) that use double-precision math. On June's Top500 list, announced Monday, Summit's 148 Linpack petaflops land it in first place by a comfortable margin. Using that same machine configuration, Oak Ridge National Laboratory (ORNL) and Nvidia have tested Summit on HPL-AI and gotten a result of 445 petaflops. While the HPL benchmark tests supercomputers' performance in double-precision math, AI is a rapidly growing use case for supercomputers -- and most AI models use mixed-precision math.
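The double-precision versus mixed-precision distinction is easy to demonstrate. This minimal NumPy sketch is unrelated to the HPL-AI code itself; it just shows why half-precision arithmetic, common in AI workloads, is so much cheaper: it discards small increments that double precision retains.

```python
import numpy as np

# Double precision (what HPL measures) keeps ~15-16 significant digits;
# half precision (widely used in AI training) keeps only ~3.
x64 = np.float64(1.0) + np.float64(1e-4)
x16 = np.float16(1.0) + np.float16(1e-4)

print(x64)  # 1.0001 -- the increment survives
print(x16)  # 1.0    -- the increment is smaller than half an fp16 ulp and is lost
```

Because fp16 values are a quarter the width of fp64, hardware can move and multiply far more of them per second, which is why the same Summit configuration scores 445 mixed-precision petaflops against 148 double-precision Linpack petaflops.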

Festival of Work: Why job automation is an opportunity, not a threat - Personnel Today


Job automation was hotly debated by speakers at the CIPD's Festival of Work conference this week, with many agreeing that it will replace a large share of middle-skilled occupations. But this shouldn't be seen as a threat to the human workforce, as Ashleigh Webber reports. Until very recently, job automation and the idea that artificial intelligence (AI) would put swathes of the workforce out of work seemed distant concerns. However, with Amazon recently rolling out 200,000 robots across 50 warehouses and two NHS hospitals in London introducing software to automate certain back-office functions, the concept of a "robot workforce" appears to be rapidly becoming reality. What challenges will HR face in the next 10 years?

New deepfake algorithm allows you to text-edit the words of a speaker in a video


On the non-fingerprinting side of things, many, if not most, deep learning researchers are already working on the problem of how to spot fakes. Indeed, with the Generative Adversarial Network approach, two networks compete against each other: one generates fake after fake, while the other tries to pick the fakes out from real inputs. Over millions of generations, the discerning network gets better at picking fakes, and the better it gets, the better the fake-generating network has to become to fool it.
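The two-network game described above can be reduced to a tiny runnable sketch. This is a deliberately minimal toy, not any production GAN: the "generator" is a single shift parameter applied to noise, the "discriminator" is a one-variable logistic regression, and both take hand-derived gradient steps on the standard adversarial objective. All names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Real data: samples around 4.0. The generator must learn to mimic them.
def real_batch(n=64):
    return rng.normal(4.0, 0.5, n)

mu = 0.0          # generator parameter: shifts noise toward the data
w, b = 0.0, 0.0   # discriminator: logistic regression D(x) = sigmoid(w*x + b)
lr = 0.05

for step in range(2000):
    real = real_batch()
    fake = mu + rng.normal(0.0, 0.5, 64)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: move mu so the discriminator scores fakes as real.
    d_fake = sigmoid(w * fake + b)
    mu += lr * np.mean((1 - d_fake) * w)

print(round(mu, 1))  # drifts toward 4.0, the mean of the real data
```

Once the generator's samples become statistically indistinguishable from the real data, the discriminator's best response collapses back toward D(x) = 0.5, which is exactly the arms race the passage describes.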

MIT's neural network aims to create the perfect pizza - ZDNet


Cooking well takes patience, time, practice, and skill, so is it possible for a machine to do what professional human chefs take years to perfect? A new study in deep neural networks, titled "How to make a pizza: Learning a compositional layer-based GAN model" and recently published on arxiv.org, describes the PizzaGAN project as an experiment in teaching a machine to make a pizza by recognizing aspects of cooking, such as adding and subtracting ingredients or cooking the dish. The Generative Adversarial Network (GAN) deep learning model is trained to recognize these different steps and objects, and by doing so is able to view a single image of a pizza, dissect and peel apart each object or 'layer,' and recreate a step-by-step guide to cook it. "Given only weak image-level supervision, the operators are trained to generate a visual layer that needs to be added to or removed from the existing image," the research paper explains.