Collaborating Authors: Singh, Mohit


DeepVL: Dynamics and Inertial Measurements-based Deep Velocity Learning for Underwater Odometry

arXiv.org Artificial Intelligence

This paper presents a learned model that predicts the robot-centric velocity of an underwater robot through dynamics-aware proprioception. The method exploits a recurrent neural network that takes as inputs inertial cues, motor commands, and battery voltage readings, alongside the hidden state of the previous time-step, and outputs robust velocity estimates and their associated uncertainty. An ensemble of networks is utilized to enhance the velocity and uncertainty predictions. By fusing the network's outputs into an Extended Kalman Filter, alongside inertial predictions and barometer updates, the method enables long-term underwater odometry without further exteroception. Furthermore, when integrated into visual-inertial odometry, the method enhances estimation resilience with an order of magnitude fewer total tracked features (as few as 1) than conventional visual-inertial systems require. Tested onboard an underwater robot deployed both in a laboratory pool and in the Trondheim Fjord, the method takes less than 5 ms for inference on either the CPU or the GPU of an NVIDIA Orin AGX, demonstrates less than 4% relative position error on novel trajectories during complete visual blackout, and achieves approximately 2% relative error when at most 2 visual features from a monocular camera are available.
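As a rough, hypothetical sketch of the learning component described above: a recurrent network over proprioceptive inputs with a Gaussian output head, whose predictions are fused across an ensemble by moment matching. The class and function names, layer sizes, thruster count, and PyTorch framing are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a DeepVL-style velocity estimator (hypothetical
# architecture; hidden size, thruster count, etc. are assumptions).
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    def __init__(self, n_motors=8, hidden=64):
        super().__init__()
        # Per-step inputs: 6 inertial channels + motor commands + battery voltage.
        self.rnn = nn.LSTM(6 + n_motors + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)  # 3 velocity means + 3 log-variances

    def forward(self, x, state=None):
        h, state = self.rnn(x, state)     # hidden state carries across time-steps
        out = self.head(h)
        mean, log_var = out[..., :3], out[..., 3:]
        return mean, log_var.exp(), state

def ensemble_predict(nets, x):
    """Fuse an ensemble as a uniform Gaussian mixture via moment matching."""
    preds = [net(x) for net in nets]
    means = torch.stack([m for m, _, _ in preds])      # (N, B, T, 3)
    variances = torch.stack([v for _, v, _ in preds])
    mu = means.mean(0)
    # Total variance = mean member variance + spread of the member means.
    var = variances.mean(0) + means.var(0, unbiased=False)
    return mu, var

nets = [VelocityNet() for _ in range(3)]
x = torch.randn(1, 10, 6 + 8 + 1)    # batch of one 10-step input window
mu, var = ensemble_predict(nets, x)
print(mu.shape, var.shape)           # torch.Size([1, 10, 3]) twice
```

The fused mean and variance would then serve as the velocity measurement and its covariance for an Extended Kalman Filter update, as the abstract describes.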


An Online Self-calibrating Refractive Camera Model with Application to Underwater Odometry

arXiv.org Artificial Intelligence

This work presents a camera model for refractive media such as water and its application in underwater visual-inertial odometry. The model is self-calibrating in real-time and requires neither known correspondences nor calibration targets. It is separable into a distortion model (dependent on the refractive index $n$ and the radial pixel coordinate) and a virtual pinhole model (a function of $n$). We derive the self-calibration formulation, leveraging epipolar constraints to estimate the refractive index and subsequently correct for distortion. Through experimental studies using an underwater robot integrating cameras and inertial sensing, the model is validated with respect to accurate estimation of the refractive index and its benefits for robust odometry estimation in an extended envelope of conditions. Lastly, we demonstrate the transition between media, with the varying refractive index estimated online, thus enabling computer vision tasks across refractive media.
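The paper's exact distortion function is not reproduced here; the following minimal sketch only illustrates the physical effect the separable model captures: under flat-port refraction (Snell's law), a camera behaves near the optical axis like a virtual pinhole whose focal length is scaled by roughly the refractive index $n$. The interface geometry and angles below are illustrative assumptions.

```python
# Sketch of flat-port refraction via the vector form of Snell's law.
import numpy as np

def refract(d, normal, n1, n2):
    """Refract unit direction d at an interface with unit normal,
    passing from medium n1 into medium n2."""
    d = d / np.linalg.norm(d)
    cos_i = -np.dot(normal, d)
    ratio = n1 / n2
    sin2_t = ratio**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return ratio * d + (ratio * cos_i - cos_t) * normal

# A ray leaving the camera at a small angle bends at the water interface;
# the apparent (virtual-pinhole) focal length grows by roughly n_water.
n_air, n_water = 1.0, 1.33
theta = np.deg2rad(2.0)
d_air = np.array([np.sin(theta), 0.0, np.cos(theta)])
d_wat = refract(d_air, np.array([0.0, 0.0, -1.0]), n_air, n_water)
print(np.tan(theta) / (d_wat[0] / d_wat[2]))  # ~1.33, i.e. ~n_water
```

Farther from the axis the small-angle approximation degrades, which is why a radial, $n$-dependent distortion term is needed on top of the scaled pinhole.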


The Price of Fair PCA: One Extra Dimension

Neural Information Processing Systems

We investigate whether the standard dimensionality reduction technique of PCA inadvertently produces data representations with different fidelity for two different populations. We show that on several real-world data sets, PCA has higher reconstruction error on population A than on population B (for example, women versus men, or lower- versus higher-educated individuals). This can happen even when the data set has a similar number of samples from A and B. This motivates our study of dimensionality reduction techniques which maintain similar fidelity for A and B. We define the notion of Fair PCA and give a polynomial-time algorithm for finding a low-dimensional representation of the data which is nearly optimal with respect to this measure. Finally, we show on real-world data sets that our algorithm can be used to efficiently generate a fair low-dimensional representation of the data.
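To make the fairness gap concrete, the sketch below fits one shared PCA projection and reports per-group mean squared reconstruction error on synthetic data. Note this uses plain per-group error; the paper's loss is defined relative to each group's own optimal rank-d approximation, and the Fair PCA algorithm itself is not reproduced here.

```python
# Per-group reconstruction error of a single shared PCA projection.
import numpy as np

def pca_projection(X, d):
    """Top-d principal directions of centered X, as a d x m matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d]

def group_error(Xc, V):
    """Mean squared reconstruction error of (pre-centered) rows under V."""
    R = Xc - (Xc @ V.T) @ V
    return float(np.mean(np.sum(R**2, axis=1)))

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 10))                                       # population A
B = rng.normal(size=(500, 10)) @ np.diag(np.linspace(0.2, 2.0, 10))  # population B
X = np.vstack([A, B])
mu, V = X.mean(axis=0), pca_projection(X, d=3)
print(group_error(A - mu, V), group_error(B - mu, V))  # errors differ markedly
```

Because the shared projection chases B's high-variance directions, the two groups end up with different residual errors even though they contribute equally many samples, which is exactly the phenomenon the paper documents on real data.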


Proportional Volume Sampling and Approximation Algorithms for A-Optimal Design

arXiv.org Machine Learning

We study the $A$-optimal design problem where we are given vectors $v_1,\ldots,v_n\in\mathbb{R}^d$ and an integer $k\geq d$, and the goal is to select a set $S$ of $k$ vectors that minimizes the trace of $\left(\sum_{i\in S}v_iv_i^\top\right)^{-1}$. Traditionally, the problem is an instance of optimal design of experiments in statistics, where each vector corresponds to a linear measurement of an unknown vector and the goal is to pick $k$ of them that minimize the average variance of the error in the maximum likelihood estimate of the vector being measured. The problem also finds applications in sensor placement in wireless networks, sparse least squares regression, feature selection for $k$-means clustering, and matrix approximation. In this paper, we introduce proportional volume sampling to obtain improved approximation algorithms for $A$-optimal design. Given a matrix, proportional volume sampling picks a set of columns $S$ of size $k$ with probability proportional to $\mu(S)$ times $\det(\sum_{i\in S}v_iv_i^\top)$ for some measure $\mu$. Our main result shows that the approximability of the $A$-optimal design problem can be reduced to approximate independence properties of the measure $\mu$. We appeal to hard-core distributions as candidate distributions $\mu$ that allow us to obtain improved approximation algorithms for $A$-optimal design. Our results include a $d$-approximation when $k=d$, a $(1+\epsilon)$-approximation when $k=\Omega\left(\frac{d}{\epsilon}+\frac{1}{\epsilon^2}\log\frac{1}{\epsilon}\right)$, and a $\frac{k}{k-d+1}$-approximation when repetitions of vectors are allowed in the solution. We also consider a generalization of the problem for $k\leq d$ and obtain a $k$-approximation. The last result implies a restricted invertibility principle for the harmonic mean of singular values. We also show that the problem is $\mathsf{NP}$-hard to approximate within a fixed constant when $k=d$.
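The sampling distribution defined above is concrete enough to implement directly on small instances. The sketch below enumerates all $k$-subsets and samples $S$ with probability proportional to $\mu(S)\det(\sum_{i\in S}v_iv_i^\top)$, using a hard-core measure $\mu(S)=\prod_{i\in S}\lambda_i$. The paper's algorithms sample efficiently; this brute-force version is exponential in $n$ and serves only to pin down the definition.

```python
# Brute-force proportional volume sampling for small n.
import itertools
import numpy as np

def proportional_volume_sample(V, k, lam, rng):
    """V: n x d array with rows v_i; returns a sampled index set S of size k."""
    subsets, weights = [], []
    for S in itertools.combinations(range(V.shape[0]), k):
        M = V[list(S)].T @ V[list(S)]        # sum of v_i v_i^T over S
        w = np.prod(lam[list(S)]) * np.linalg.det(M)
        subsets.append(S)
        weights.append(max(w, 0.0))          # guard tiny negative round-off
    p = np.array(weights) / np.sum(weights)
    return subsets[rng.choice(len(subsets), p=p)]

rng = np.random.default_rng(1)
V = rng.normal(size=(8, 3))                  # n = 8 vectors in R^3, k >= d
S = proportional_volume_sample(V, k=4, lam=np.ones(8), rng=rng)
# A-optimality value of the sampled design:
print(S, np.trace(np.linalg.inv(V[list(S)].T @ V[list(S)])))
```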


Approximate Positively Correlated Distributions and Approximation Algorithms for D-optimal Design

arXiv.org Machine Learning

Experimental design is a classical problem in statistics that has also found new applications in machine learning. In the experimental design problem, the aim is to estimate an unknown m-dimensional vector x from linear measurements, where Gaussian noise is introduced in each measurement. The goal is to pick k out of the given n experiments so as to make the most accurate estimate of the unknown parameter x. Given a set S of chosen experiments, the maximum likelihood estimate x' can be obtained by a least squares computation. One robust measure of estimation error is the D-optimality criterion, which aims to minimize the generalized variance of the estimator. This corresponds to minimizing the volume of the standard confidence ellipsoid for the estimation error x-x'. The problem gives rise to two natural variants depending on whether repetitions are allowed or not. The latter variant, while being more general, has also found applications in the geographical location of sensors. In this work, we first give a 1/e-approximation for the D-optimal design problem both with and without repetitions, yielding the first constant-factor approximation for the problem. We also consider the case when the number of experiments chosen is much larger than the dimension of the measurements, and provide an asymptotically optimal approximation algorithm.
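To make the D-optimality objective concrete, the sketch below greedily adds the experiment that most increases det(sum of v_i v_i^T), i.e. most shrinks the confidence ellipsoid's volume. Greedy selection is a common baseline heuristic, not the paper's 1/e-approximation algorithm; the function name and the jitter constant are assumptions for illustration.

```python
# Greedy baseline for the D-optimal design objective (without repetition).
import numpy as np

def greedy_d_optimal(V, k, eps=1e-9):
    """V: n x d rows of measurement vectors; selects k indices greedily."""
    n, d = V.shape
    S, M = [], eps * np.eye(d)               # jitter keeps det nonzero early on
    for _ in range(k):
        gains = [np.linalg.slogdet(M + np.outer(v, v))[1] if i not in S
                 else -np.inf for i, v in enumerate(V)]
        best = int(np.argmax(gains))
        S.append(best)
        M = M + np.outer(V[best], V[best])
    return S, np.linalg.slogdet(M)[1]        # log-det objective (log-volume)

rng = np.random.default_rng(2)
V = rng.normal(size=(20, 4))                 # n = 20 experiments in R^4
S, logdet = greedy_d_optimal(V, k=6)
print(sorted(S), logdet)
```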