AI Can Run Your Work Meetings Now

WIRED

Julian Green was explaining the big problem with meetings when our meeting started to glitch. A sentence came out as hiccups. Then he sputtered, froze, and ghosted. Green and I had been chatting on Headroom, a new video conferencing platform he and cofounder Andrew Rabinovich launched this fall. The glitch, they assured me, was not caused by their software, but by Green's Wi-Fi connection.


Semantic Drift in Multilingual Representations

Beinborn, Lisa, Choenni, Rochelle

arXiv.org Artificial Intelligence

Multilingual representations have mostly been evaluated based on their performance on specific tasks. In this article, we look beyond engineering goals and analyze the relations between languages in computational representations. We introduce a methodology for comparing languages based on their organization of semantic concepts. We propose to conduct an adapted version of representational similarity analysis on a selected set of concepts in computational multilingual representations. Using this analysis method, we can reconstruct a phylogenetic tree that closely resembles those assumed by linguistic experts. These results indicate that multilingual distributional representations that are trained only on monolingual text and bilingual dictionaries preserve relations between languages without the need for any etymological information. In addition, we propose a measure to identify semantic drift between language families. We perform experiments on word-based and sentence-based multilingual models and provide both quantitative results and qualitative examples. Analyses of semantic drift in multilingual representations can serve two purposes: they can indicate unwanted characteristics of the computational models, and they provide a quantitative means to study linguistic phenomena across languages. The code is available at https://github.com/beinborn/SemanticDrift.
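The core idea of representational similarity analysis is that two embedding spaces can be compared without aligning them: each language's pairwise concept-similarity structure is computed separately, and the two structures are then correlated. A minimal sketch of that idea (toy data and all names are illustrative, not from the paper's released code):

```python
import numpy as np

def spearman(a, b):
    """Spearman correlation via rank-transformed Pearson (assumes no ties)."""
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

def rsa_score(emb_a, emb_b):
    """Representational similarity between two languages: correlate the
    pairwise concept-similarity structure of each embedding space.
    emb_a, emb_b: (n_concepts, dim) embeddings for the SAME concept list;
    the two spaces need not be aligned or even share a dimensionality."""
    def pair_sims(emb):
        normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        sims = normed @ normed.T                       # cosine similarities
        return sims[np.triu_indices(len(emb), k=1)]    # unique concept pairs
    return spearman(pair_sims(emb_a), pair_sims(emb_b))

# Toy usage: "B" is a perturbed copy of "A", "C" is unrelated, so the
# RSA distance (1 - rho) recovers that A and B are the closest pair.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 32))
B = A + 0.1 * rng.normal(size=A.shape)
C = rng.normal(size=(50, 32))
d_ab, d_ac = 1 - rsa_score(A, B), 1 - rsa_score(A, C)
```

Feeding such pairwise distances into hierarchical clustering is what yields the phylogenetic-style language tree the abstract describes.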


A Grammar-Based Structural CNN Decoder for Code Generation

Sun, Zeyu, Zhu, Qihao, Mou, Lili, Xiong, Yingfei, Li, Ge, Zhang, Lu

arXiv.org Machine Learning

Code generation maps a program description to executable source code in a programming language. Existing approaches mainly rely on a recurrent neural network (RNN) as the decoder. However, we find that a program contains significantly more tokens than a natural language sentence, and thus it may be inappropriate for an RNN to capture such a long sequence. In this paper, we propose a grammar-based structural convolutional neural network (CNN) for code generation. Our model generates a program by predicting the grammar rules of the programming language; we design several CNN modules, including the tree-based convolution and pre-order convolution, whose information is further aggregated by dedicated attentive pooling layers. Experimental results on the HearthStone benchmark dataset show that our CNN code generator significantly outperforms the previous state-of-the-art method by 5 percentage points; additional experiments on several semantic parsing tasks demonstrate the robustness of our model. We also conduct in-depth ablation tests to better understand each component of our model.
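The key reframing here is the prediction target: instead of emitting source tokens one by one, the decoder emits one grammar-rule choice per nonterminal in a pre-order walk of the AST, so each step is drawn from a small rule inventory rather than a large token vocabulary. A toy sketch of that decoding target (the grammar and rule sequence are illustrative, not from the paper):

```python
# A tiny context-free grammar: each nonterminal maps to a list of rules,
# and a rule is a list of child symbols ('quoted' symbols are terminals).
GRAMMAR = {
    "stmt": [["expr"], ["'if'", "expr", "stmt"]],
    "expr": [["'x'"], ["expr", "'+'", "expr"]],
}

def expand(symbol, rule_ids, pos=0):
    """Pre-order expansion: consume one rule id per nonterminal visited.
    In the neural model, rule_ids[pos] would be the decoder's prediction."""
    if symbol.startswith("'"):               # terminal: emit its token
        return [symbol.strip("'")], pos
    rule = GRAMMAR[symbol][rule_ids[pos]]    # choose a production
    pos += 1
    tokens = []
    for child in rule:
        child_tokens, pos = expand(child, rule_ids, pos)
        tokens.extend(child_tokens)
    return tokens, pos

# The rule sequence [1, 0, 0, 1, 0, 0] derives the program "if x x + x":
# six rule choices instead of a longer free-form token sequence.
tokens, _ = expand("stmt", [1, 0, 0, 1, 0, 0])
```

Predicting rules also guarantees the output is syntactically valid by construction, which token-level RNN decoders cannot promise.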


Magic Leap's Mica is a human-like AI in augmented reality

#artificialintelligence

Magic Leap showed off a demo of Mica, a humanlike artificial intelligence that can be viewed in the company's augmented reality glasses, the Magic Leap One Creator Edition. I saw a demo of Mica, a short-haired woman who doesn't speak but still communicates in warm ways with the viewer. I put the AR glasses on my head and looked through prescription inserts to see the virtual overlays on the real world. I thought it was the best thing Magic Leap showed off. I walked into a physical room and sat in a chair.


Multi-Path Feedback Recurrent Neural Networks for Scene Parsing

Jin, Xiaojie (National University of Singapore) | Chen, Yunpeng (National University of Singapore) | Jie, Zequn (National University of Singapore) | Feng, Jiashi (National University of Singapore) | Yan, Shuicheng (National University of Singapore)

AAAI Conferences

In this paper, we consider the scene parsing problem and propose a novel Multi-Path Feedback recurrent neural network (MPF-RNN) for parsing scene images. MPF-RNN can enhance the capability of RNNs in modeling long-range context information at multiple levels and better distinguish pixels that are easy to confuse. Different from feedforward CNNs and RNNs with only a single feedback connection, MPF-RNN propagates the contextual features learned at the top layer through multiple weighted recurrent connections to learn bottom features. To train MPF-RNN more effectively, we propose a new strategy that accumulates the loss over multiple recurrent steps, which improves the performance of MPF-RNN on parsing small objects. With these two novel components, MPF-RNN achieves significant improvements over strong baselines (VGG16 and Res101) on five challenging scene parsing benchmarks: the traditional SiftFlow, Barcelona, CamVid, and Stanford Background, as well as the recently released large-scale ADE20K.
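The two components the abstract names, top-to-bottom feedback over recurrent steps and a loss accumulated across those steps, can be sketched in a few lines. This is a minimal numpy toy with made-up shapes and a single feedback path, not the paper's multi-path architecture:

```python
import numpy as np

def forward(x, w_bottom, w_top, w_feedback, steps=3):
    """Toy feedback unrolling: top-layer features from step t are fed
    back into the bottom layer at step t+1, and every step emits a
    prediction so that each step can be supervised."""
    feedback = np.zeros(w_top.shape[1])
    outputs = []
    for _ in range(steps):
        bottom = np.tanh(x @ w_bottom + feedback @ w_feedback)
        top = np.tanh(bottom @ w_top)
        feedback = top                     # recurrent top -> bottom path
        outputs.append(top)
    return outputs

rng = np.random.default_rng(0)
x = rng.normal(size=8)                     # a toy input feature vector
w_bottom = 0.1 * rng.normal(size=(8, 16))
w_top = 0.1 * rng.normal(size=(16, 16))
w_feedback = 0.1 * rng.normal(size=(16, 16))
target = rng.normal(size=16)

# Accumulative loss: sum the per-step losses instead of supervising only
# the final recurrent step (mean-squared error here is illustrative).
preds = forward(x, w_bottom, w_top, w_feedback)
loss = sum(np.mean((p - target) ** 2) for p in preds)
```

Supervising every step gives earlier unrollings a direct training signal, which is the intuition behind the paper's claim that accumulated losses help on small objects.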