The Robot in Your Kitchen
A dozen or so young men and women, eyes obscured by VR headsets, shuffle around a faux kitchen inside a tech company's Silicon Valley headquarters. Their arms are bent at the elbows, palms facing down. One pilot stops to pick up a bottle of hot sauce from a counter, hinging at the waist and keeping her hands in view of the camera on her headset at all times.

Meters away, two humanoid robots, with bulbous joints and expressionless plastic domes for faces, stand at a desk. In front of each is a crumpled towel; to its right, a basket. More often than not, the towel catches on the edge of the basket and the robot freezes. Then an engineer steps in, returns the towel to a crumpled heap, and the sequence begins again.

This was the scene at the headquarters of Figure AI on an August morning this year. The three-year-old startup was in a sprint ahead of the October announcement of its next robot, the Figure 03, which was undergoing top-secret training when TIME visited.
Asynchronous Gossip Algorithms for Rank-Based Statistical Methods
Anna Van Elst, Igor Colin, Stephan Clémençon
Abstract--As decentralized AI and edge intelligence become increasingly prevalent, ensuring robustness and trustworthiness in such distributed settings has become a critical issue--especially in the presence of corrupted or adversarial data. Traditional decentralized algorithms are vulnerable to data contamination because they typically rely on simple statistics (e.g., means or sums), motivating the need for more robust alternatives. In line with recent work on decentralized estimation of trimmed means and ranks, we develop gossip algorithms for computing a broad class of rank-based statistics, including L-statistics and rank statistics--both known for their robustness to outliers. We apply our method to robust distributed two-sample hypothesis testing, introducing the first gossip algorithm for Wilcoxon rank-sum tests. We provide rigorous convergence guarantees, including the first convergence rate bound for asynchronous gossip-based rank estimation.
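The gossip primitive that such methods build on can be sketched as repeated asynchronous pairwise averaging. This is a minimal illustration of the classical mean-based baseline the abstract contrasts with, not the paper's rank-based algorithm; the graph, step count, and function names are assumptions for the sketch:

```python
import random

def gossip_average(values, neighbors, num_steps=10000, seed=0):
    """Asynchronous pairwise gossip: at each tick one random edge
    (i, j) wakes up and both endpoints replace their local estimate
    with the pair's average. On a connected graph, every estimate
    converges to the network-wide mean."""
    rng = random.Random(seed)
    x = list(values)  # one local estimate per node
    edges = [(i, j) for i in range(len(x)) for j in neighbors[i] if i < j]
    for _ in range(num_steps):
        i, j = rng.choice(edges)
        avg = (x[i] + x[j]) / 2.0
        x[i] = x[j] = avg
    return x
```

On a 4-node ring with initial values 0, 4, 8, 12, all local estimates settle near the global mean 6; the paper's contribution is to replace this mean computation with robust rank-based statistics while retaining convergence guarantees.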
Export Reviews, Discussions, Author Feedback and Meta-Reviews
This paper presents a hybrid approach that uses both crowdsourced labels and an incrementally (online) trained model to address prediction problems; the core idea is to lean heavily on the crowd as the system ramps up, learn from the labels thus acquired, and then consult the crowd less and less often as the model becomes more confident. This is done via a sophisticated framing of the problem as a stochastic game based on a CRF prediction model in which the system and the crowd are both players. The system can issue one or more queries q for tokens x (with true label y) which elicit responses r, where there is a utility U(q, r) for each outcome; the system thus attempts to pick the actions that maximize the expected utility. Furthermore, the queries are not issued all at once, but at times s (with response times t); utility is maximized with respect to a deadline t_deadline by which an answer must be computed (this determines how many queries are sent out, at what rate, etc.). Computing this expected utility requires the simulation dynamics model P(y, r, t | x, q, s) in order to compute the utilities as in (4). Given the utility values, the optimal action could be chosen; however, the introduction of continuous time makes this intractable to optimize exactly, so an approximation based on Monte Carlo Tree Search and TD learning is used instead (Algorithm 1).
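The expected-utility step described above can be sketched with plain Monte Carlo sampling. Here `simulate` and `utility` are hypothetical stand-ins for the paper's dynamics model P(y, r, t | x, q, s) and utility U(q, r), and the greedy one-step maximization is a deliberate simplification of the MCTS/TD scheme the paper actually uses:

```python
import random

def expected_utility(action, simulate, utility, num_samples=1000, seed=0):
    """Monte Carlo estimate of E[U] for one candidate query action:
    sample (response, response_time) outcomes from the simulation
    model and average their utilities."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        response, t = simulate(action, rng)
        total += utility(action, response, t)
    return total / num_samples

def best_action(actions, simulate, utility):
    # Greedy one-step choice: the action with the highest estimated utility.
    return max(actions, key=lambda a: expected_utility(a, simulate, utility))
```

A full solution would roll these estimates forward through a search tree over query schedules rather than picking a single action myopically.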
Reviews: Stochastic Submodular Maximization: The Case of Coverage Functions
The paper deals with the problem of submodular maximization; specifically, it proposes a stochastic optimization algorithm for maximizing a specific family of submodular functions, namely weighted coverage, under matroid constraints. The algorithm operates on the multilinear extension of the weighted coverage function. In doing so, the authors sacrifice some accuracy by optimizing a concave function that is a bounded approximation of the target function. In exchange, they gain theoretically supported bounds on the convergence rate and the running time, which they also showcase in the experiments. In general, the paper is well written, and the set of ideas it uses is well put together. The experimental section, although brief, drives home the point the authors want to make.
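For concreteness, the weighted coverage objective can be illustrated with the textbook greedy heuristic under a cardinality constraint. This is the classic (1 - 1/e) baseline, not the paper's method, which instead runs stochastic gradient ascent on a concave relaxation of the multilinear extension:

```python
def greedy_weighted_coverage(sets, weights, k):
    """Pick k sets greedily by marginal covered weight.

    sets: list of frozensets/sets of universe elements.
    weights: dict mapping each element to its weight.
    Returns the chosen set indices and the total weight covered."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(
            (s for s in range(len(sets)) if s not in chosen),
            key=lambda s: sum(weights[u] for u in sets[s] - covered),
        )
        chosen.append(best)
        covered |= set(sets[best])
    return chosen, sum(weights[u] for u in covered)
```

The continuous approach trades this discrete greedy loop for gradient steps in [0, 1]^n followed by rounding, which is what enables the stochastic (sample-based) guarantees discussed in the review.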
Reviews: Visual Reinforcement Learning with Imagined Goals
This paper proposes an algorithm for learning a goal-conditioned RL policy in which a goal is defined as a single image. The authors propose to encode a state (an image) into a vector in latent space using a variational autoencoder, and to define reward functions inside that latent space. The paper shows that such a reward function outperforms baselines such as pixel-based rewards. The authors then propose latent goal relabeling, which generates new goals and rewards from an existing tuple (s, a, s'). Finally, they propose goal imagination, which samples goals from the latent space during training, essentially allowing training without specifying a particular goal.
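The latent-space reward described in the review can be sketched as the negative Euclidean distance between encodings. This is a minimal sketch that assumes a pretrained VAE encoder (not shown) has already mapped both the current observation and the goal image to latent vectors:

```python
import math

def latent_reward(z_state, z_goal):
    """Negative Euclidean distance between the latent encoding of the
    current observation and that of the goal image; the reward is 0
    when the state reaches the goal and grows more negative with
    distance in latent space."""
    return -math.sqrt(sum((a - b) ** 2 for a, b in zip(z_state, z_goal)))
```

Latent goal relabeling then amounts to recomputing this reward for the same transition against a different (relabeled or sampled) z_goal, which is cheap precisely because the reward lives in latent space rather than pixel space.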