Collaborating Authors: ensembler
EveryDayVLA: A Vision-Language-Action Model for Affordable Robotic Manipulation

Chopra, Samarth, McMoil, Alex, Carnovale, Ben, Sokolson, Evan, Kubendran, Rajkumar, Dickerson, Samuel

arXiv.org Artificial Intelligence

Abstract-- While Vision-Language-Action (VLA) models map visual inputs and language instructions directly to robot actions, they often rely on costly hardware and struggle in novel or cluttered scenes. We introduce EverydayVLA, a 6-DOF manipulator that can be assembled for $300, capable of modest payloads and workspaces. A single unified model jointly outputs discrete and continuous actions, and our adaptive-horizon ensembler monitors motion uncertainty to trigger on-the-fly replanning for safe, reliable operation. On LIBERO, EverydayVLA matches state-of-the-art success rates, and in real-world tests it outperforms prior methods by 49% in-distribution and 34.9% out-of-distribution. By combining a state-of-the-art VLA with cost-effective hardware, EverydayVLA democratizes access to a robotic foundation model, and paves the way for economical use in homes and research labs alike.
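The adaptive-horizon idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the disagreement measure (per-step standard deviation across ensemble members) and the threshold value are assumptions chosen for clarity.

```python
import numpy as np

def adaptive_horizon_ensemble(action_chunks, sigma_max=0.05):
    """Execute predicted actions only while the ensemble agrees.

    action_chunks: (n_models, horizon, dof) array of predicted action chunks.
    Returns (actions_to_execute, replan_needed).
    """
    chunks = np.asarray(action_chunks, dtype=float)
    mean = chunks.mean(axis=0)                  # (horizon, dof) consensus plan
    sigma = chunks.std(axis=0).max(axis=1)      # per-step worst-case disagreement
    # Cut the horizon at the first step whose uncertainty exceeds the threshold.
    bad = np.flatnonzero(sigma > sigma_max)
    cut = int(bad[0]) if bad.size else len(mean)
    return mean[:cut], cut < len(mean)
```

When members agree, the full chunk is executed; when uncertainty spikes mid-chunk, the controller commits only the confident prefix and replans, which is the "on-the-fly replanning" behavior the abstract describes.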


Ensembler: Combating model inversion attacks using model ensemble during collaborative inference

Liu, Dancheng, Xiong, Jinjun

arXiv.org Artificial Intelligence

Deep learning models have exhibited remarkable performance across various domains. Nevertheless, the burgeoning model sizes compel edge devices to offload a significant portion of the inference process to the cloud. While this practice offers numerous advantages, it also raises critical concerns regarding user data privacy. In scenarios where the cloud server's trustworthiness is in question, the need for a practical and adaptable method to safeguard data privacy becomes imperative. In this paper, we introduce Ensembler, an extensible framework designed to substantially increase the difficulty of conducting model inversion attacks for adversarial parties. Ensembler leverages model ensembling on the adversarial server, running in parallel with existing approaches that introduce perturbations to sensitive data during collaborative inference. Our experiments demonstrate that when combined with even basic Gaussian noise, Ensembler can effectively shield images from reconstruction attacks, achieving recognition levels that fall below human performance in some strict settings, significantly outperforming baseline methods lacking the Ensembler framework. In numerous critical domains, deep learning (DL) models have demonstrated exceptional performance when compared to traditional methods, including image classification Deng et al. (2009); Dosovitskiy et al. (2021), natural language processing Brown et al. (2020), protein predictions Jumper et al. (2021), and more.
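The collaborative-inference setup above can be sketched as a client that perturbs intermediate features with Gaussian noise before offloading, and a server that averages an ensemble of heads. Everything here is illustrative: the split point, the `tanh` stand-in for on-device layers, and the noise level are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_encode(x, noise_sigma=0.1):
    """Client side: run the on-device layers, then add Gaussian noise
    to the intermediate features before sending them to the cloud."""
    feats = np.tanh(x)  # stand-in for the local part of the network
    return feats + rng.normal(0.0, noise_sigma, size=feats.shape)

def cloud_ensemble(feats, heads):
    """Server side: average logits over an ensemble of heads, so no single
    model's inverse cleanly maps the features back to the raw input."""
    logits = np.stack([h(feats) for h in heads])
    return logits.mean(axis=0)
```

An attacker inverting any one head sees noisy features produced for a mixture of models, which is the intuition behind combining perturbation with ensembling.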


On the Perils of Cascading Robust Classifiers

Mangal, Ravi, Wang, Zifan, Zhang, Chi, Leino, Klas, Pasareanu, Corina, Fredrikson, Matt

arXiv.org Artificial Intelligence

Ensembling certifiably robust neural networks is a promising approach for improving the \emph{certified robust accuracy} of neural models. Black-box ensembles that assume only query-access to the constituent models (and their robustness certifiers) during prediction are particularly attractive due to their modular structure. Cascading ensembles are a popular instance of black-box ensembles that appear to improve certified robust accuracies in practice. However, we show that the robustness certifier used by a cascading ensemble is unsound. That is, when a cascading ensemble is certified as locally robust at an input $x$ (with respect to $\epsilon$), there can be inputs $x'$ in the $\epsilon$-ball centered at $x$ such that the cascade's prediction at $x'$ differs from its prediction at $x$, and thus the ensemble is not locally robust. Our theoretical findings are accompanied by empirical results that further demonstrate this unsoundness. We present \emph{cascade attack} (CasA), an adversarial attack against cascading ensembles, and show that: (1) there exists an adversarial input for up to 88\% of the samples where the ensemble claims to be certifiably robust and accurate; and (2) the accuracy of a cascading ensemble under our attack is as low as 11\% when it claims to be certifiably robust and accurate on 97\% of the test set. Our work reveals a critical pitfall of cascading certifiably robust models by showing that the seemingly beneficial strategy of cascading can actually hurt the robustness of the resulting ensemble. Our code is available at \url{https://github.com/TristaChi/ensembleKW}.
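A toy one-dimensional example makes the unsoundness concrete. The cascade returns the first constituent whose certifier fires; the interface and the two hand-built models below are illustrative, not the paper's CasA attack, but they reproduce the failure mode: the cascade claims a certificate at $x$, yet an input in the $\epsilon$-ball falls through to a different model.

```python
def cascade_predict(models, x):
    """Cascading ensemble: return the prediction of the first constituent
    whose certifier claims local robustness at x (the cascade then also
    claims a certificate); otherwise fall back to the last model."""
    for predict, certifies in models:
        if certifies(x):
            return predict(x), True
    predict, _ = models[-1]
    return predict(x), False

# Model A certifies only on x <= 0; model B never certifies and disagrees.
A = (lambda x: "cat", lambda x: x <= 0.0)
B = (lambda x: "dog", lambda x: False)
```

At `x = 0.0` the cascade answers `"cat"` with a certificate, but at `x' = 0.01` (inside any $\epsilon$-ball with $\epsilon \ge 0.01$) model A's certifier no longer fires and the cascade answers `"dog"`: each constituent certifier was locally sound, yet the cascade's certificate was not.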


Building a Rock Paper Scissors AI

#artificialintelligence

In this article, I'll walk you through my process of building a full-stack Python Flask artificial intelligence project capable of beating the human user over 60% of the time. It uses a custom scoring system to ensemble six models (naïve logic-based, decision tree, neural network) trained on both game-level data and historical data stored in an AWS RDS Cloud SQL database. Rock Paper Scissors caught my attention for an AI project because, on the surface, it seems impossible to get an edge in the game. These days, it is easy to assume that a computer can beat you in chess, because it can harness all of its computing power to see all possible outcomes and choose the ones that benefit it. Rock Paper Scissors, on the other hand, is commonly used in place of a coin toss to solve disputes because the winner seems random. My theory, though, was that humans can't actually make random decisions, and that if an AI could learn to understand the ways in which humans make their choices over the course of a series of matches, even if the human was trying to behave randomly, then the AI would be able to significantly exceed 33% accuracy in guessing the player's decisions.
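The scoring-system idea can be sketched in a few lines: each predictor guesses the human's next throw, predictors earn score when they are right (with older results decaying), and the AI plays the counter to the current top scorer's guess. The class, predictor interface, and decay constant below are illustrative assumptions, not the article's actual implementation.

```python
from collections import defaultdict

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class ScoredEnsemble:
    """Pick the counter-throw suggested by the best-scoring predictor."""

    def __init__(self, predictors, decay=0.9):
        self.predictors = predictors           # name -> fn(history) -> throw
        self.scores = defaultdict(float)       # rolling score per predictor
        self.last_guess = {}
        self.decay = decay

    def move(self, history):
        """Ask every predictor for the human's next throw, then counter
        the guess of the current top scorer."""
        for name, fn in self.predictors.items():
            self.last_guess[name] = fn(history)
        best = max(self.predictors, key=lambda n: self.scores[n])
        return BEATS[self.last_guess[best]]

    def update(self, actual_throw):
        """Decay all scores, then reward predictors that guessed right."""
        for name, guess in self.last_guess.items():
            self.scores[name] *= self.decay
            if guess == actual_throw:
                self.scores[name] += 1.0
```

The decay keeps the ensemble adaptive: a predictor that exploited the human's early pattern loses its lead once the human switches strategies.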


Early Detection of Sepsis using Ensemblers

Nirgudkar, Shailesh, Ding, Tianyu

arXiv.org Machine Learning

This paper describes a methodology to detect sepsis ahead of time by analyzing hourly patient records. The Physionet 2019 challenge consists of medical records of over 40,000 patients. Using imputation and a weak-ensembler technique to analyze these medical records, together with 3-fold validation, a model is created and validated internally. The model achieved an accuracy of 93.45% and a utility score of 0.271. The utility score, as defined by the organizers, takes into account true positives, true negatives, and false alarms.
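The ingredients named in the abstract (imputation, an ensemble of weak learners, k-fold validation) can be sketched as below. The mean-imputation choice, the one-feature threshold "stump" rules, and the majority vote are illustrative assumptions; the paper's actual models and utility metric are not reproduced here.

```python
import numpy as np

def impute_mean(X):
    """Replace NaNs (common in hourly vitals) with per-column means."""
    X = np.array(X, dtype=float)
    means = np.nanmean(X, axis=0)
    nan_r, nan_c = np.where(np.isnan(X))
    X[nan_r, nan_c] = means[nan_c]
    return X

def weak_ensemble_predict(X, stumps):
    """Majority vote over weak one-feature threshold rules.

    stumps: list of (feature_index, threshold) pairs, each voting 1
    (sepsis risk) when the feature exceeds its threshold.
    """
    votes = np.stack([(X[:, j] > t).astype(int) for j, t in stumps])
    return (votes.mean(axis=0) >= 0.5).astype(int)

def kfold_splits(n, k=3, seed=0):
    """Shuffled (train_idx, test_idx) splits for k-fold validation."""
    folds = np.array_split(np.random.default_rng(seed).permutation(n), k)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i])
            for i in range(k)]
```

Each fold trains the weak rules on the imputed training portion and scores them on the held-out portion, which is the internal-validation loop the abstract refers to.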


Diverse Instances-Weighting Ensemble based on Region Drift Disagreement for Concept Drift Adaptation

Liu, Anjin, Lu, Jie, Zhang, Guangquan

arXiv.org Machine Learning

Concept drift refers to changes in the distribution of underlying data and is an inherent property of evolving data streams. Ensemble learning, with dynamic classifiers, has proved to be an efficient method of handling concept drift. However, the best way to create and maintain ensemble diversity with evolving streams is still a challenging problem. In contrast to estimating diversity via inputs, outputs, or classifier parameters, we propose a diversity measurement based on whether the ensemble members agree on the probability of a regional distribution change. In our method, estimations over regional distribution changes are used as instance weights. Constructing different region sets through different schemes will lead to different drift estimation results, thereby creating diversity. The classifiers that disagree the most are selected to maximize diversity. Accordingly, an instance-based ensemble learning algorithm, called the diverse instance weighting ensemble (DiwE), is developed to address concept drift for data stream classification problems. Evaluations on various synthetic and real-world data stream benchmarks show the effectiveness and advantages of the proposed algorithm.
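The core mechanism (per-region drift estimates turned into instance weights, with diversity measured as disagreement between region schemes) can be sketched for a 1-D stream. Histogram bins as "regions" and the L1 gap as the disagreement measure are simplifying assumptions, not DiwE's exact estimators.

```python
import numpy as np

def region_change_scores(ref_window, cur_window, edges):
    """Estimate per-region distribution change between a reference and a
    current window of the stream, using histogram bins as regions."""
    p, _ = np.histogram(ref_window, bins=edges, density=True)
    q, _ = np.histogram(cur_window, bins=edges, density=True)
    return np.abs(p - q)  # one drift estimate per region

def instance_weights(x, edges, scores):
    """Weight each incoming instance by the estimated change of its region."""
    idx = np.clip(np.digitize(x, edges) - 1, 0, len(scores) - 1)
    return scores[idx]

def scheme_disagreement(x, scheme_a, scheme_b):
    """Diversity proxy: mean gap between the instance weights that two
    different region schemes (edges, scores) assign to the same instances."""
    wa = instance_weights(x, *scheme_a)
    wb = instance_weights(x, *scheme_b)
    return np.abs(wa - wb).mean()
```

Different region schemes (for example, coarse versus fine bins) yield different drift estimates and therefore different instance weights; selecting the members that disagree most is how the method maximizes diversity.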