
A Additional experimental details

Neural Information Processing Systems

We use an RBF kernel to increase pretraining data diversity.

Architectural details. In all experiments, we use the same ExPT architecture. This section details how we constructed new objectives from the original D'Kitty and Ant tasks. In Ant-Energy, the reward at each time step is

R = Survival reward − Control cost − Contact cost, (6)

where the survival reward is a constant 1, which means we incentivize the robot to conserve energy instead of running fast.

D'Kitty tasks. In D'Kitty, the goal is to design a morphology that allows the D'Kitty robot to reach a target location. We found the approximate oracle provided by Design-Bench not accurate enough to provide a reliable comparison of optimization methods on this task.

C.1 Effects of GP hyperparameters. We empirically examine the impact of two GP hyperparameters, the variance and the length scale ℓ. Specifically, we evaluate the performance of ExPT on D'Kitty, averaging performance across 3 seeds.
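The GP pretraining setup above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function name `sample_gp_functions` and the specific grid, jitter, and hyperparameter values are assumptions for the example. It draws random synthetic functions from a GP prior with an RBF kernel, where `variance` scales the output range and `length_scale` controls smoothness.

```python
import numpy as np

def sample_gp_functions(x, n_funcs, variance=1.0, length_scale=0.1, seed=0):
    """Draw n_funcs random functions from a GP prior with an RBF kernel,
    evaluated on the 1-D grid x. Hypothetical helper for illustration."""
    rng = np.random.default_rng(seed)
    d = x[:, None] - x[None, :]
    K = variance * np.exp(-0.5 * (d / length_scale) ** 2)  # RBF Gram matrix
    K += 1e-6 * np.eye(x.size)                             # jitter for stability
    L = np.linalg.cholesky(K)
    # Each column of L @ z is one function draw; return as rows.
    return (L @ rng.normal(size=(x.size, n_funcs))).T

x = np.linspace(0.0, 1.0, 100)
# Smaller length scales give wigglier functions, i.e. more diverse pretraining data.
fs = sample_gp_functions(x, n_funcs=5, variance=1.0, length_scale=0.05)
```

Varying `variance` and `length_scale` across draws is one way to increase the diversity of the synthetic pretraining functions, which is the effect the hyperparameter study above examines.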



cf708fc1decf0337aded484f8f4519ae-AuthorFeedback.pdf

Neural Information Processing Systems

Lee et al. 2017 (and the AAAI-19 version) deal only with constrained inference, not learning. Yes, all our constraints are linguistically important. The goal of our experiments was different from Mehta's. We obtained 69.11 using CL (supervised), which is 1.09 points higher than their reported 68.02 for CL.


ExPT: Synthetic Pretraining for Few-Shot Experimental Design

Neural Information Processing Systems

Experimental design is a fundamental problem in many science and engineering fields. In this problem, sample efficiency is crucial due to the time, money, and safety costs of real-world design evaluations. Existing approaches either rely on active data collection or access to large, labeled datasets of past experiments, making them impractical in many real-world scenarios. In this work, we address the more challenging yet realistic setting of few-shot experimental design, where only a few labeled data points of input designs and their corresponding values are available. We approach this problem as a conditional generation task, where a model conditions on a few labeled examples and the desired output to generate an optimal input design. To this end, we introduce Experiment Pretrained Transformers (ExPT), a foundation model for few-shot experimental design that employs a novel combination of synthetic pretraining with in-context learning. In ExPT, we only assume knowledge of a finite collection of unlabelled data points from the input domain and pretrain a transformer neural network to optimize diverse synthetic functions defined over this domain. Unsupervised pretraining allows ExPT to adapt to any design task at test time in an in-context fashion by conditioning on a few labeled data points from the target task and generating the candidate optima. We evaluate ExPT on few-shot experimental design in challenging domains and demonstrate its superior generality and performance compared to existing methods.
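The conditional-generation interface described in this abstract can be illustrated with a toy stand-in. This is not ExPT itself (which uses a pretrained transformer); `propose_design` and its `temp` parameter are hypothetical, and the softmax-weighted average is only a minimal sketch of "condition on a few labeled examples plus a desired value, output a candidate design".

```python
import numpy as np

def propose_design(context_x, context_y, y_target, temp=0.1):
    """Toy stand-in for few-shot conditional generation: weight the
    context designs by how close their value is to the desired target
    and return a softmax-weighted average as the candidate design."""
    context_x = np.asarray(context_x, dtype=float)
    context_y = np.asarray(context_y, dtype=float)
    w = np.exp(-np.abs(context_y - y_target) / temp)  # closeness to target
    w /= w.sum()
    return (context_x * w[:, None]).sum(axis=0)

# Three labeled designs; ask for a design whose value should be near 1.0.
context_x = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
context_y = [0.1, 0.9, 0.5]
candidate = propose_design(context_x, context_y, y_target=1.0)
```

The candidate lands near the context design with the highest value, mirroring how conditioning on a high desired output steers generation toward promising regions.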



We have violations after CI since we do early stopping; satisfying them till the end can sometimes hurt overall performance

Neural Information Processing Systems

Thank you for your detailed comments. We will make all the clarifications below in the next version. We note that our formulation in Sec 3.2 can handle any such constraints. Learning the constraints automatically is a direction for future work. There are important differences compared to Diligenti et al. We said that Diligenti and Mehta are task-specific since they only experiment on a single task. Jin et al. assume an arbitrary non-convex, non-concave form for a twice-differentiable function. We will add this important theorem to the paper.


Physics-Guided Dual Implicit Neural Representations for Source Separation

Ni, Yuan, Chen, Zhantao, Petsch, Alexander N., Xu, Edmund, Peng, Cheng, Kolesnikov, Alexander I., Chowdhury, Sugata, Bansil, Arun, Thayer, Jana B., Turner, Joshua J.

arXiv.org Artificial Intelligence

Significant challenges exist in efficient data analysis of most advanced experimental and observational techniques because the collected signals often include unwanted contributions--such as background and signal distortions--that can obscure the physically relevant information of interest. To address this, we have developed a self-supervised machine-learning approach for source separation using a dual implicit neural representation framework that jointly trains two neural networks: one for approximating distortions of the physical signal of interest and the other for learning the effective background contribution. Our method learns directly from the raw data by minimizing a reconstruction-based loss function without requiring labeled data or pre-defined dictionaries. We demonstrate the effectiveness of our framework by considering a challenging case study involving large-scale simulated as well as experimental momentum-energy-dependent inelastic neutron scattering data in a four-dimensional parameter space, characterized by heterogeneous background contributions and unknown distortions to the target signal. The method is found to successfully separate physically meaningful signals from a complex or structured background even when the signal characteristics vary across all four dimensions of the parameter space. An analytical approach that informs the choice of the regularization parameter is presented. Our method offers a versatile framework for addressing source separation problems across diverse domains, ranging from superimposed signals in astronomical measurements to structural features in biomedical image reconstructions.


A-SEE2.0: Active-Sensing End-Effector for Robotic Ultrasound Systems with Dense Contact Surface Perception Enabled Probe Orientation Adjustment

Zhetpissov, Yernar, Ma, Xihan, Yang, Kehan, Zhang, Haichong K.

arXiv.org Artificial Intelligence

Conventional freehand ultrasound (US) imaging is highly dependent on the skill of the operator, often leading to inconsistent results and increased physical demand on sonographers. Robotic Ultrasound Systems (RUSS) aim to address these limitations by providing standardized and automated imaging solutions, especially in environments with limited access to skilled operators. This paper presents the development of a novel RUSS system that employs dual RGB-D depth cameras to maintain the US probe normal to the skin surface, a critical factor for optimal image quality. Our RUSS integrates RGB-D camera data with robotic control algorithms to maintain orthogonal probe alignment on uneven surfaces without preoperative data. Validation tests using a phantom model demonstrate that the system achieves robust normal positioning accuracy while delivering ultrasound images comparable to those obtained through manual scanning. A-SEE2.0 demonstrates 2.47 ± 1.25 degrees error for flat-surface normal positioning and 12.19 ± 5.81 degrees normal estimation error on a mannequin surface. This work highlights the potential of A-SEE2.0 to be used in clinical practice by testing its performance during in-vivo forearm ultrasound examinations.
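The normal-estimation step underlying the degree errors quoted above can be illustrated with a standard technique: fit a plane to a local patch of depth-camera points via PCA/SVD and take the direction of least variance as the surface normal. This is a generic sketch, not the A-SEE2.0 algorithm; the helper names and the synthetic patch are assumptions for the example.

```python
import numpy as np

def estimate_surface_normal(points):
    """Estimate a local surface normal from an (N, 3) point patch:
    the right-singular vector with the smallest singular value is
    normal to the best-fit plane through the centered points."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    return n / np.linalg.norm(n)

def angular_error_deg(n1, n2):
    """Sign-invariant angle between two unit normals, in degrees."""
    c = abs(float(np.dot(n1, n2)))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

# Synthetic near-flat patch with true normal +z and small depth noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
pts = np.column_stack([xy, 0.01 * rng.normal(size=200)])
err = angular_error_deg(estimate_surface_normal(pts), np.array([0.0, 0.0, 1.0]))
```

An angular-error metric of this form is the natural way to report figures like the 2.47 ± 1.25 degree flat-surface result, comparing the estimated normal against ground truth.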

