RAMEN: Real-time Asynchronous Multi-agent Neural Implicit Mapping

Zhao, Hongrui, Ivanovic, Boris, Mehr, Negar

arXiv.org Artificial Intelligence

Figure 1: In a challenging real-world experiment with limited communication (agents can only exchange information every 30 seconds), our method RAMEN enables each TurtleBot to successfully map the full scene while physically visiting only half of it (explored areas and trajectories are colored accordingly). Our method achieves accuracy comparable to the ground truth, while the baseline method (DiNNO) fails to converge.

Abstract -- Multi-agent neural implicit mapping allows robots to collaboratively capture and reconstruct complex environments with high fidelity. However, existing approaches often rely on synchronous communication, which is impractical in real-world scenarios with limited bandwidth and potential communication interruptions. This paper introduces RAMEN: Real-time Asynchronous Multi-agEnt Neural implicit mapping, a novel approach designed to address this challenge. RAMEN employs an uncertainty-weighted multi-agent consensus optimization algorithm that accounts for communication disruptions. When communication is lost between a pair of agents, each agent retains only an outdated copy of its neighbor's map, and the uncertainty of this copy increases over time since the last communication. Using gradient update information, we quantify the uncertainty associated with each parameter of the neural network map. Neural network maps from different agents are brought to consensus on the basis of their levels of uncertainty, with consensus biased towards network parameters with lower uncertainty. To achieve this, we derive a weighted variant of the decentralized consensus alternating direction method of multipliers (C-ADMM) algorithm, facilitating robust collaboration among agents with varying communication and update frequencies.
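The abstract names a weighted variant of decentralized C-ADMM but does not state the updates. For orientation, the standard (unweighted) C-ADMM primal and dual updates that such a variant would modify are, for agent $i$ with local mapping loss $f_i$, parameters $\theta_i$, dual variable $y_i$, penalty $\rho$, and neighbors $\mathcal{N}_i$:

\[
\theta_i^{k+1} = \arg\min_{\theta_i} \; f_i(\theta_i) + {\theta_i}^{\!\top} y_i^{k} + \rho \sum_{j \in \mathcal{N}_i} \left\| \theta_i - \frac{\theta_i^{k} + \theta_j^{k}}{2} \right\|_2^2,
\qquad
y_i^{k+1} = y_i^{k} + \rho \sum_{j \in \mathcal{N}_i} \left( \theta_i^{k+1} - \theta_j^{k+1} \right).
\]

An uncertainty-weighted variant would plausibly replace the isotropic penalty with a per-parameter weighted norm $\| \cdot \|_{W_{ij}}^2$, where $W_{ij}$ shrinks for a neighbor's parameters whose copy has grown stale; this weighting is an illustrative assumption here, not RAMEN's exact formulation.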


Graph Regularized Encoder Training for Extreme Classification

Mittal, Anshul, Mohan, Shikhar, Saini, Deepak, Prabhu, Suchith C., Jiao, Jian, Agarwal, Sumeet, Chakrabarti, Soumen, Kar, Purushottam, Varma, Manik

arXiv.org Artificial Intelligence

Deep extreme classification (XC) aims to train an encoder architecture and an accompanying classifier architecture to tag a data point with the most relevant subset of labels from a very large universe of labels. XC applications in ranking, recommendation and tagging routinely encounter tail labels for which the amount of training data is exceedingly small. Graph convolutional networks (GCN) present a convenient but computationally expensive way to leverage task metadata and enhance model accuracies in these settings. This paper formally establishes that in several use cases, the steep computational cost of GCNs is entirely avoidable by replacing GCNs with non-GCN architectures. The paper observes that in these settings, it is much more effective to use graph data to regularize encoder training than to implement a GCN. Based on these insights, an alternative paradigm RAMEN is presented to utilize graph metadata in XC settings that offers significant performance boosts with zero increase in inference computational costs. RAMEN scales to datasets with up to 1M labels and offers prediction accuracy up to 15% higher on benchmark datasets than state-of-the-art methods, including those that use graph metadata to train GCNs. RAMEN also offers 10% higher accuracy over the best baseline on a proprietary recommendation dataset sourced from click logs of a popular search engine. Code for RAMEN will be released publicly.
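The abstract contrasts running a GCN at inference time with using the graph only as a training-time regularizer. A common way to do the latter (shown here as an illustrative sketch, not RAMEN's actual loss) is a graph Laplacian penalty that pulls the embeddings of metadata-linked points together; it adds nothing to inference cost because the graph is discarded after training:

```python
import numpy as np

def graph_regularizer(Z, edges, weights):
    """Sum of w_ij * ||z_i - z_j||^2 over the edges of the metadata graph."""
    reg = 0.0
    for (i, j), w in zip(edges, weights):
        diff = Z[i] - Z[j]
        reg += w * float(diff @ diff)
    return reg

def graph_regularizer_laplacian(Z, edges, weights, n):
    """The same quantity computed via the graph Laplacian: tr(Z^T L Z)."""
    W = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        W[i, j] = W[j, i] = w
    L = np.diag(W.sum(axis=1)) - W  # unnormalized Laplacian
    return float(np.trace(Z.T @ L @ Z))

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 3))          # 4 toy embeddings of dimension 3
edges = [(0, 1), (1, 2), (2, 3)]     # hypothetical metadata links
weights = [1.0, 0.5, 2.0]

a = graph_regularizer(Z, edges, weights)
b = graph_regularizer_laplacian(Z, edges, weights, n=4)
assert np.isclose(a, b)  # edge-sum and Laplacian forms agree
```

In training, this term would be added to the usual classification loss with a tradeoff coefficient; all names here (`graph_regularizer`, the toy graph) are illustrative assumptions.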


I spent a day eating food cooked by robots in America's tech capital

The Guardian

Around the world, an industry has emerged around automating food service through robotics, raising questions about job security and mass unemployment while also prompting praise for streamlining and innovation. In the epicenter of Silicon Valley, where innovation is exalted beyond all else, this industry has played out in various forms, from cafes, burger shops and pizza delivery to odd vending machines. Man cannot survive on bread alone, the saying goes, but in the Bay Area, a woman could conceivably sustain herself on a varied menu of foodstuffs that had not passed the hand of man in preparation at all that day. And that woman is me. I began my day with a coffee at CafeX, where I met Francisco, the dancing and spinning robotic arm.


Noodle on this: Machine learning that can identify ramen by shop

@machinelearnbot

With 41 locations around Tokyo, Ramen Jiro is one of the most popular restaurant franchises in Japan, because of its generous portions of toppings, noodles and soup served at low prices. They serve the same basic menu at each shop, and as you can see above, it's almost impossible for a human (especially if you're new to Ramen Jiro) to tell what shop each bowl is made at. But Kenji thought deep learning could discern the minute details that make one shop's bowl of ramen different from the next. He had already built a machine learning model to classify ramen, but wanted to see if AutoML Vision could do it more efficiently. AutoML Vision creates customized ML models automatically--to identify animals in the wild, or recognize types of products to improve an online store, or in this case classify ramen.
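AutoML Vision is a managed service, but the underlying task it automates here is ordinary multi-class image classification: given features extracted from a photo of a bowl, predict which shop produced it. A minimal from-scratch sketch of that setup (synthetic feature vectors stand in for real images, and three shops stand in for 41; everything here is illustrative, not how AutoML Vision works internally):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for per-image features: each "shop" gets its own
# characteristic feature distribution (toppings, noodle thickness, broth...).
n_shops, n_feat, n_per = 3, 8, 50
centers = rng.normal(scale=2.0, size=(n_shops, n_feat))
X = np.vstack([centers[s] + rng.normal(scale=0.5, size=(n_per, n_feat))
               for s in range(n_shops)])
y = np.repeat(np.arange(n_shops), n_per)

# Softmax (multinomial logistic) classifier trained by gradient descent.
W = np.zeros((n_feat, n_shops))
b = np.zeros(n_shops)
onehot = np.eye(n_shops)[y]
for _ in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)      # predicted shop probabilities
    grad = p - onehot                      # softmax cross-entropy gradient
    W -= 0.1 * X.T @ grad / len(X)
    b -= 0.1 * grad.mean(axis=0)

pred = (X @ W + b).argmax(axis=1)
accuracy = (pred == y).mean()
```

On this well-separated toy data a linear classifier suffices; the point of the blog post is that with real photos, where the distinguishing cues are subtle, a learned deep model (hand-built or via AutoML) takes over the feature-extraction step.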


The worst gadgets of 2017

Engadget

And it wasn't just the weekly political dramas, sexual harassment scandals or a massive security breach that affected nearly half the population that had us down. There was also a slew of terrible consumer devices that sullied our mood this year. Before we say goodbye to them, though, let's relive the horror one last time. Here's hoping that 2018 brings us better gadgets than this sorry lot. Even though Juicero technically debuted in 2016, it wasn't until 2017 that it met its epic end, and it's for that reason we're naming it one of the worst gadgets of the year.