A method for Selecting Scenes and Emotion-based Descriptions for a Robot's Diary

Ichikura, Aiko, Kawaharazuka, Kento, Obinata, Yoshiki, Okada, Kei, Inaba, Masayuki

arXiv.org Artificial Intelligence

Furthermore, we found that the robot's emotions generally improve the preference for the robot's diary regardless of the scene it describes. However, presenting negative or mixed emotions at once may decrease the preference for the diary or reduce the robot's robot-likeness, so the method of presenting emotions still needs further investigation.

I. INTRODUCTION

In human-robot communication, various studies have attempted to enhance the relationship between humans and robots. Among them, we focused on the effect on the relationship between a robot and a person when the robot shares its daily experiences with the person through a diary. Diaries are also used as an application for commercial robots sold in Japan, and have become one of the interaction tools between robots and people. SHARP's RoBoHoN [1], for example, can remember the events of the day like a diary when you talk to it. GROOVE X, Inc.'s LOVOT [2] cannot speak, but its mobile app provides a diary that displays a timeline of human


Serenity: Library Based Python Code Analysis for Code Completion and Automated Machine Learning

Zhao, Wenting, Abdelaziz, Ibrahim, Dolby, Julian, Srinivas, Kavitha, Helali, Mossad, Mansour, Essam

arXiv.org Artificial Intelligence

Dynamically typed languages such as Python have become very popular. Among other strengths, Python's dynamic nature and its straightforward linking to native code have made it the de facto language for many research areas such as Artificial Intelligence. This flexibility, however, makes static analysis very hard. While creating a sound, or a soundy, analysis for Python remains an open problem, we present in this work Serenity, a framework for static analysis of Python that turns out to be sufficient for some tasks. The Serenity framework exploits two basic mechanisms: (a) reliance on dynamic dispatch at the core of language translation, and (b) extreme abstraction of libraries, to generate an abstraction of the code. We demonstrate the efficiency and usefulness of Serenity's analysis in two applications: code completion and automated machine learning. In these two applications, we demonstrate that such analysis has a strong signal, and can be leveraged to establish state-of-the-art performance, comparable to neural models and dynamic analysis, respectively.
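The "extreme abstraction of libraries" idea can be illustrated with a toy sketch: instead of modeling library code, every library value is replaced by an opaque object that merely records the chain of attribute accesses and calls, i.e. a dataflow access path. This is a hypothetical illustration under invented names (`Abstract`, `path`), not Serenity's actual representation:

```python
class Abstract:
    """Opaque stand-in for a library value: it records the access path
    (attribute chains and calls) instead of running library code.
    Hypothetical sketch only -- not Serenity's actual implementation."""

    def __init__(self, path):
        self.path = path

    def __getattr__(self, name):
        # Any attribute access extends the path instead of failing.
        return Abstract(self.path + (name,))

    def __call__(self, *args, **kwargs):
        # Any call extends the path; arguments could be tracked similarly.
        return Abstract(self.path + ("()",))

# An "analysis" of pandas.read_csv(...).dropna().mean() never imports
# pandas; it only tracks where the value flowed from:
pd = Abstract(("pandas",))
result = pd.read_csv("data.csv").dropna().mean()
print(result.path)  # ('pandas', 'read_csv', '()', 'dropna', '()', 'mean', '()')
```

An analysis built on such paths can compare the recorded access chains against known library usage patterns (e.g. for code completion) without ever executing or soundly modeling the library itself.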


My AI-moderated video chat with strangers gave me hope

Engadget

In 2017, artists and filmmakers Lauren Lee McCarthy, Grace Lee and Tony Patrick were tasked with dreaming up the "future of work" for a residency at the University of Southern California. As part of a 3-month process of exploring ideas for the betterment of Los Angeles, the trio had to imagine what 2020 would look like. They predicted that the election year would bring about, among other things, "massive civil unrest," "a second civil war" and "a massive data dump," Lee said during a panel at Sundance 2021. "We called it the Breakdown 2020." The residency brought the trio together as they "tried to figure out what the hell world-building is," Patrick said.


Ordering Chaos: Memory-Aware Scheduling of Irregularly Wired Neural Networks for Edge Devices

Ahn, Byung Hoon, Lee, Jinwon, Lin, Jamie Menjay, Cheng, Hsin-Pai, Hou, Jilei, Esmaeilzadeh, Hadi

arXiv.org Machine Learning

Recent advances demonstrate that irregularly wired neural networks from Neural Architecture Search (NAS) and Random Wiring can not only automate the design of deep neural networks but also yield models that outperform previous manual designs. These designs are especially effective under hard resource constraints (memory, MACs, ...), which highlights the importance of this class of neural networks. However, such a move complicates the previously streamlined pattern of execution. In fact, one of the main challenges is that the execution order of the nodes in the neural network significantly affects the memory footprint of the intermediate activations. Current compilers do not schedule with regard to activation memory footprint, which significantly increases the peak footprint compared to the optimum and renders the models inapplicable to edge devices. To address this standing issue, we present a memory-aware compiler, dubbed SERENITY, that uses dynamic programming to find a schedule with optimal peak memory footprint. Our solution also comprises a graph rewriting technique that allows further reduction beyond this optimum. As such, SERENITY achieves optimal peak memory, and graph rewriting improves on it further, resulting in a 1.68x improvement with the dynamic programming-based scheduler and 1.86x with graph rewriting, against TensorFlow Lite, with less than one minute of overhead.
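The scheduling problem the abstract describes can be sketched concretely: given a DAG of operators with activation sizes, different topological orders yield different peak footprints, because an operator's output must stay live until its last consumer runs. The sketch below uses exhaustive search over topological orders rather than the paper's dynamic programming, and the function names and the two-chain example graph are illustrative assumptions:

```python
from itertools import permutations

def peak_memory(order, size, preds):
    """Peak activation footprint of one schedule: a node's output stays
    live until every consumer of it has executed."""
    consumers = {n: [] for n in size}
    for n, ps in preds.items():
        for p in ps:
            consumers[p].append(n)
    done, live, peak = set(), {}, 0
    for n in order:
        live[n] = size[n]                  # allocate this node's output
        peak = max(peak, sum(live.values()))
        done.add(n)
        for p in list(live):               # free outputs with no pending consumers
            if all(c in done for c in consumers[p]):
                del live[p]
    return peak

def best_schedule(size, preds):
    """Exhaustive search over topological orders for the minimal peak.
    (SERENITY uses dynamic programming; brute force only scales to toy graphs.)"""
    best_order, best_peak = None, float("inf")
    for order in permutations(size):
        pos = {n: i for i, n in enumerate(order)}
        if any(pos[p] > pos[n] for n in size for p in preds[n]):
            continue                       # not a valid topological order
        m = peak_memory(order, size, preds)
        if m < best_peak:
            best_order, best_peak = order, m
    return best_order, best_peak

# Two independent chains a->b->c and x->y->z, unit-size activations:
size = {n: 1 for n in "abcxyz"}
preds = {"a": [], "b": ["a"], "c": ["b"], "x": [], "y": ["x"], "z": ["y"]}
print(peak_memory("axbycz", size, preds))  # interleaving the chains: peak 3
print(best_schedule(size, preds)[1])       # finishing one chain first: peak 2
```

Even on this toy graph the order changes the peak by 50%; on irregularly wired networks with many cross-connections the gap the abstract reports (up to 1.86x against TensorFlow Lite) comes from exactly this effect.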


Google speakers don't stand out, and that's a good thing

Engadget

Two years ago, Google unveiled the Home, its first-ever smart speaker. Unlike the Echo, with its tall, cylindrical shape that seemed like an oversized router, the Home was short, stout and decidedly more friendly in appearance. Google followed that same design philosophy with last year's Home Mini, a fabric-wrapped shell that looked more like a piece of home decor than a smart speaker. Of course, the Home Max does look more speaker-like, as that's its primary purpose, but it still has the fabric-clad aesthetic. The underlying design philosophy behind all of it: to look as unobtrusive as possible.