Man arrested after using AI to beat Japan's smut censorship


In brief A man was detained in Japan for selling uncensored pornographic content that he had, in a way, depixelated using machine-learning tools. Masayuki Nakamoto, 43, was said to have made about 11 million yen ($96,000) from peddling over 10,000 processed porn clips, and was formally accused of selling ten hardcore photos for 2,300 yen ($20). Explicit images of genitalia are forbidden in Japan, and as such its porn is partially pixelated. Don't pretend you don't know what we're talking about. Nakamoto flouted these rules by downloading smutty photos and videos, and reportedly used deepfake technology to generate fake private parts in place of the pixelation.

Lyceum: An efficient and scalable ecosystem for robot learning Artificial Intelligence

We introduce Lyceum, a high-performance computational ecosystem for robot learning. Lyceum is built on top of the Julia programming language and the MuJoCo physics simulator, combining the ease-of-use of a high-level programming language with the performance of native C. In addition, Lyceum has a straightforward API to support parallel computation across multiple cores and machines. Overall, depending on the complexity of the environment, Lyceum is 5-30x faster compared to other popular abstractions like OpenAI's Gym and DeepMind's dm-control. This substantially reduces training time for various reinforcement learning algorithms; and is also fast enough to support real-time model predictive control through MuJoCo. The code, tutorials, and demonstration videos can be found at:
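Lyceum itself is written in Julia; as a rough, language-agnostic illustration of the batched-rollout pattern that this kind of parallel API enables, here is a minimal NumPy sketch that steps many simple point-mass "environments" in one vectorized call (the dynamics and all names are invented for illustration, not Lyceum's actual API):

```python
import numpy as np

def step_batch(pos, vel, actions, dt=0.01):
    """Advance N point-mass 'environments' one step in a single vectorized call.

    pos, vel, actions: arrays of shape (N, dim). Semi-implicit Euler integration.
    """
    vel = vel + actions * dt
    pos = pos + vel * dt
    return pos, vel

# Roll out 1024 environments for 100 steps with zero actions.
n_envs, dim = 1024, 2
pos = np.zeros((n_envs, dim))
vel = np.ones((n_envs, dim))          # constant unit velocity
actions = np.zeros((n_envs, dim))
for _ in range(100):
    pos, vel = step_batch(pos, vel, actions)
```

Batching all environments into one array operation amortizes per-step interpreter overhead across rollouts, which is one reason such designs outpace stepping each Gym environment in a Python loop.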

Comparing Popular Simulation Environments in the Scope of Robotics and Reinforcement Learning Artificial Intelligence

This letter compares the performance of four different, popular simulation environments for robotics and reinforcement learning (RL) through a series of benchmarks. The benchmarked scenarios are designed carefully with current industrial applications in mind. Given the need to run simulations as fast as possible to reduce the real-world training time of the RL agents, the comparison includes not only different simulation environments but also different hardware configurations, ranging from an entry-level notebook up to a dual-CPU high-performance server. We show that the chosen simulation environments benefit the most from single-core performance. Yet, on a multi-core system, multiple simulations can be run in parallel to increase overall throughput.
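The parallelism the letter describes, running several independent simulations at once on a multi-core machine, can be sketched as follows. The pendulum dynamics and pool size are illustrative stand-ins for a real simulator, and threads are used here only for brevity; CPU-bound physics engines are typically parallelized across processes instead:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def simulate_pendulum(theta0, steps=1000, dt=0.001, g=9.81, length=1.0):
    """Integrate a frictionless pendulum from angle theta0; return final angle."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        omega -= (g / length) * math.sin(theta) * dt
        theta += omega * dt
    return theta

initial_angles = [0.1 * k for k in range(8)]

# Run the eight rollouts concurrently, one per worker.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(simulate_pendulum, initial_angles))

# The rollouts are independent, so the parallel results match a sequential run.
sequential = [simulate_pendulum(a) for a in initial_angles]
```

Because each rollout is independent, wall-clock time scales with the number of workers rather than with single-core speed alone, which is the trade-off the benchmarks quantify.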

An End-to-End Differentiable but Explainable Physics Engine for Tensegrity Robots: Modeling and Control Artificial Intelligence

This work proposes an end-to-end differentiable physics engine for tensegrity robots, which introduces a data-efficient linear contact model for accurately predicting collision responses that arise due to contacting surfaces, and a linear actuator model that can drive these robots by expanding and contracting their flexible cables. To the best of the authors' knowledge, this is the \emph{first} differentiable physics engine for tensegrity robots that supports cable modeling, contact, and actuation. This engine can be used inside an off-the-shelf, RL-based locomotion controller in order to provide training examples. This paper proposes a progressive training pipeline for the differentiable physics engine that helps avoid local optima during the training phase and reduces data requirements. It demonstrates the data-efficiency benefits of using the differentiable engine for learning locomotion policies for NASA's icosahedron SUPERballBot. In particular, after the engine has been trained with a few trajectories to match a ground-truth simulated model, a policy learned on the differentiable engine is shown to be transferable back to the ground-truth model. Notably, training the controller requires orders of magnitude more data than training the differentiable engine itself, which is why pre-training the engine pays off.
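The engine's actual formulation is in the paper; as a toy illustration of why a linear contact model is data-efficient and easy to fit by gradient descent, the sketch below recovers a linear response y = Wx from a handful of samples, with the gradient of the mean-squared error written out by hand (the data, dimensions, and "ground-truth" matrix are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)
W_true = np.array([[2.0, -1.0], [0.5, 3.0]])   # invented "ground-truth" linear contact response

# A handful of (impulse, response) training pairs -- linear models need few samples.
X = rng.normal(size=(8, 2))
Y = X @ W_true.T

W = np.zeros((2, 2))
lr = 0.1
for _ in range(2000):
    residual = X @ W.T - Y                     # (8, 2) prediction error
    grad = residual.T @ X / len(X)             # gradient of MSE in W, up to a constant factor
    W -= lr * grad
```

With only eight samples the fit converges to the true matrix, which mirrors the abstract's point: the differentiable engine can be matched to ground truth from few trajectories, while the downstream controller is the data-hungry part.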

IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks Artificial Intelligence

The IKEA Furniture Assembly Environment is one of the first benchmarks for testing and accelerating the automation of complex manipulation tasks. The environment is designed to advance reinforcement learning from simple toy tasks to complex tasks requiring both long-term planning and sophisticated low-level control. Our environment supports over 80 different furniture models, Sawyer and Baxter robot simulation, and domain randomization. The IKEA Furniture Assembly Environment is a testbed for methods aiming to solve complex manipulation tasks. The environment is publicly available at
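Domain randomization, one of the environment's listed features, is commonly implemented by resampling physical and visual parameters at every episode reset; a minimal sketch of that pattern follows, with all parameter names and ranges invented rather than taken from the environment's API:

```python
import random

def randomize_domain(rng):
    """Sample a fresh set of simulation parameters for one episode.

    Randomizing physics and appearance per episode encourages policies
    that transfer across the gap between simulation and reality.
    """
    return {
        "friction": rng.uniform(0.5, 1.5),       # surface friction coefficient
        "part_mass": rng.uniform(0.1, 2.0),      # kg, per furniture part
        "light_level": rng.uniform(0.2, 1.0),    # rendering brightness
        "camera_jitter": [rng.gauss(0.0, 0.01) for _ in range(3)],  # meters
    }

rng = random.Random(42)
params = randomize_domain(rng)   # call once per reset
```

A policy trained across many such draws cannot overfit to any single physics configuration, which is the rationale for shipping randomization with the benchmark.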