SOCP


A Computationally Efficient Learning-Based Model Predictive Control for Multirotors under Aerodynamic Disturbances

Akbari, Babak, Greeff, Melissa

arXiv.org Artificial Intelligence

Neglecting complex aerodynamic effects hinders high-speed yet high-precision multirotor autonomy. In this paper, we present a computationally efficient learning-based model predictive controller that simultaneously optimizes a trajectory that can be tracked within the physical limits (on thrust and orientation) of the multirotor system despite unknown aerodynamic forces, and adapts the control input accordingly. To do this, we leverage the well-known differential flatness property of multirotors, which allows us to transform their nonlinear dynamics into a linear model. The main limitation of current flatness-based planning and control approaches is that they often neglect dynamic feasibility, because the corresponding constraints are nonlinear as a result of the mapping between the input, i.e., the multirotor thrust, and the flat state. In our approach, we learn a novel representation of the drag forces by learning the mapping from the flat state to the multirotor thrust vector (in a world frame) as a Gaussian Process (GP). Our proposed approach leverages the properties of GPs to develop a convex optimal controller that can be iteratively solved as a second-order cone program (SOCP). In simulation experiments, our proposed approach outperforms related model predictive controllers that do not account for aerodynamic effects on trajectory feasibility, leading to a reduction of up to 55% in absolute tracking error.
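To give a concrete (and heavily simplified) picture of the learned thrust mapping, the sketch below fits a one-dimensional Gaussian-process regressor in pure Python and evaluates the standard posterior mean $k_*^\top (K+\sigma^2 I)^{-1} y$. The scalar "flat state" input and the toy thrust values are hypothetical placeholders; the paper's GP acts on the full flat state and the world-frame thrust vector.

```python
import math

def rbf(x, z, ls=1.0):
    """Squared-exponential kernel on scalar inputs."""
    return math.exp(-((x - z) ** 2) / (2.0 * ls ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_mean(xs, ys, x_star, noise=1e-3):
    """GP posterior mean k_*^T (K + sigma^2 I)^{-1} y at a query point."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    return sum(rbf(x_star, xs[i]) * alpha[i] for i in range(n))

# Toy data: scalar "flat state" (e.g., speed) -> thrust magnitude, drag-like growth
xs = [0.0, 1.0, 2.0, 3.0]
ys = [9.81, 10.1, 11.0, 12.5]
predicted = gp_posterior_mean(xs, ys, 1.5)
```

With near-zero noise the posterior mean interpolates the training data, which is the property that makes the GP usable as a drop-in model of the unknown thrust map inside a predictive controller.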


C*: A New Bounding Approach for the Moving-Target Traveling Salesman Problem

Philip, Allen George, Ren, Zhongqiang, Rathinam, Sivakumar, Choset, Howie

arXiv.org Artificial Intelligence

We introduce a new bounding approach called Continuity* (C*) that provides optimality guarantees for the Moving-Target Traveling Salesman Problem (MT-TSP). Our approach relies on relaxing the continuity constraints on the agent's tour. This is done by partitioning the targets' trajectories into small sub-segments and allowing the agent, when visiting each target, to arrive at any point in one of the sub-segments and depart from any point in the same sub-segment. This lets us pose the bounding problem as a Generalized Traveling Salesman Problem (GTSP) in a graph where computing the cost of an edge requires solving a new problem called the Shortest Feasible Travel (SFT). We also introduce C*-lite, which follows the same approach as C* but uses simple, easy-to-compute lower bounds on the SFT. We first prove that the proposed algorithms provide lower bounds to the MT-TSP. We also provide computational results to corroborate the performance of C* and C*-lite on instances with up to 15 targets. For the special case where targets travel along lines, we compare our C* variants with the SOCP-based method, the current state-of-the-art solver for the MT-TSP. While the SOCP-based method performs well on instances with 5 and 10 targets, C* outperforms it on instances with 15 targets. For the general case, our approaches find, on average, feasible solutions within ~4% of the lower bounds for the tested instances.
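As a rough illustration of the continuity relaxation (not the paper's exact SFT computation), the sketch below samples points along two targets' constant-velocity sub-segments and takes the optimistic closest-pair travel time as an edge lower bound. All names, trajectories, and speeds are illustrative.

```python
import math

def traj_point(p0, v, t):
    """Position of a target moving with constant velocity v from p0 at time t."""
    return (p0[0] + v[0] * t, p0[1] + v[1] * t)

def segment_points(p0, v, t0, t1, k):
    """Sample k+1 points along the target's trajectory over the window [t0, t1]."""
    return [traj_point(p0, v, t0 + (t1 - t0) * i / k) for i in range(k + 1)]

def edge_lower_bound(seg_a, seg_b, v_agent):
    """Optimistic travel time between two trajectory sub-segments: the agent
    is allowed to depart/arrive at the closest pair of sampled points, so the
    true shortest feasible travel can never be cheaper."""
    d = min(math.dist(a, b) for a in seg_a for b in seg_b)
    return d / v_agent
```

Because the agent is freed from arriving and departing at one consistent point, any tour cost built from these edge bounds underestimates the true MT-TSP cost, which is exactly what a bounding approach needs.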


Safe Control Synthesis with Uncertain Dynamics and Constraints

Long, Kehan, Dhiman, Vikas, Leok, Melvin, Cortés, Jorge, Atanasov, Nikolay

arXiv.org Artificial Intelligence

This paper considers safe control synthesis for dynamical systems with either probabilistic or worst-case uncertainty in both the dynamics model and the safety constraints. We formulate novel probabilistic and robust (worst-case) control Lyapunov function (CLF) and control barrier function (CBF) constraints that take into account the effect of uncertainty in either case. We show that either the probabilistic or the robust (worst-case) formulation leads to a second-order cone program (SOCP), which enables efficient safe and stable control synthesis. We evaluate our approach in PyBullet simulations of an autonomous robot navigating in unknown environments and compare the performance with a baseline CLF-CBF quadratic programming approach.
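A minimal sketch of the worst-case reasoning behind such robust CBF constraints, under assumed dynamics x_dot = f(x) + g(x)u + d with a bounded disturbance ||d|| <= d_max (the function and the numbers in the test are illustrative, not the paper's formulation):

```python
import math

def robust_cbf_margin(h, grad_h, f, g, u, d_max, alpha):
    """Worst-case CBF condition for x_dot = f(x) + g(x) u + d, ||d|| <= d_max:
        grad_h . (f + g u) - d_max * ||grad_h|| + alpha * h >= 0.
    The disturbance term is bounded by Cauchy-Schwarz, which is what turns the
    constraint into a second-order cone in u. Returns the left-hand side;
    a nonnegative value certifies the control as safe under any admissible d."""
    gu = [sum(g[i][j] * u[j] for j in range(len(u))) for i in range(len(f))]
    drift = sum(grad_h[i] * (f[i] + gu[i]) for i in range(len(f)))
    worst = d_max * math.sqrt(sum(c * c for c in grad_h))
    return drift - worst + alpha * h
```

In a full controller this margin would appear as a cone constraint in an SOCP that picks the u closest to a reference input; here it only checks one candidate control.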


Quantum algorithms for Second-Order Cone Programming and Support Vector Machines

Kerenidis, Iordanis, Prakash, Anupam, Szilágyi, Dániel

arXiv.org Machine Learning

Convex optimization is one of the central areas of study in computer science and mathematical optimization. The reason for its great importance is twofold. Firstly, starting with the seminal works of Khachiyan [25] and Karmarkar [18], efficient algorithms have been developed for a large family of convex optimization problems over the last few decades. Secondly, convex optimization has many real-world applications, and many optimization problems that arise in practice can be reduced to convex optimization [8]. There are three main classes of structured convex optimization problems: linear programs (LP), semidefinite programs (SDP), and second-order cone programs (SOCP). The fastest (classical) algorithms for these problems belong to the family of interior-point methods (IPM). Interior-point methods are iterative algorithms whose main computation in each step is the solution of a system of linear equations whose size depends on the dimension of the optimization problem. The size of structured optimization problems that can be solved in practice is therefore limited by the efficiency of linear system solvers: on a single computer, most open-source and commercial solvers can handle dense problems with up to tens of thousands of constraints and variables, or sparse problems with the same number of nonzero entries [30, 31]. In recent years, there has been tremendous interest in quantum linear algebra algorithms following the breakthrough algorithm of Harrow, Hassidim and Lloyd [16].


Worst-Case Linear Discriminant Analysis as Scalable Semidefinite Feasibility Problems

Li, Hui, Shen, Chunhua, Hengel, Anton van den, Shi, Qinfeng

arXiv.org Artificial Intelligence

In this paper, we propose an efficient semidefinite programming (SDP) approach to worst-case linear discriminant analysis (WLDA). Compared with traditional LDA, WLDA considers the dimensionality reduction problem from the worst-case viewpoint, which is in general more robust for classification. However, the original WLDA problem is non-convex and difficult to optimize. In this paper, we reformulate the optimization problem of WLDA into a sequence of semidefinite feasibility problems. To solve these feasibility problems efficiently, we design a new scalable optimization method with quasi-Newton methods and eigen-decomposition as its core components. The proposed method is orders of magnitude faster than standard interior-point SDP solvers. Experiments on a variety of classification problems demonstrate that our approach achieves better performance than standard LDA, and is also much faster and more scalable than WLDA based on standard interior-point SDP solvers. The computational complexity for an SDP with $m$ constraints and matrices of size $d$ by $d$ is roughly reduced from $\mathcal{O}(m^3+md^3+m^2d^2)$ to $\mathcal{O}(d^3)$ ($m>d$ in our case).
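The "sequence of semidefinite feasibility problems" idea can be illustrated on a toy scale: a quasi-convex objective is maximized by bisecting on its value, with each step reduced to a positive-semidefiniteness check. The 2x2 closed-form eigenvalue and the parametrized matrix below are stand-ins chosen for readability; the paper's feasibility problems involve large matrices and quasi-Newton updates.

```python
import math

def min_eig_2x2(M):
    """Smallest eigenvalue of a symmetric 2x2 matrix, in closed form."""
    a, b, c = M[0][0], M[0][1], M[1][1]
    tr, det = a + c, a * c - b * b
    return tr / 2.0 - math.sqrt((tr / 2.0) ** 2 - det)

def bisect_feasibility(mat_of, lo, hi, tol=1e-6):
    """Largest alpha in [lo, hi] for which mat_of(alpha) is PSD, found by
    bisection: each iteration is one semidefinite *feasibility* check rather
    than a full SDP solve."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if min_eig_2x2(mat_of(mid)) >= 0.0:
            lo = mid
        else:
            hi = mid
    return lo
```

For example, [[1, a], [a, 1]] is PSD exactly when |a| <= 1, so bisection over [0, 2] converges to 1.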


A problem dependent analysis of SOCP algorithms in noisy compressed sensing

Stojnic, Mihailo

arXiv.org Machine Learning

Under-determined systems of linear equations with sparse solutions have been the subject of extensive research in the last several years, above all due to the results of \cite{CRT,CanRomTao06,DonohoPol}. In this paper we consider \emph{noisy} under-determined linear systems. In a breakthrough, \cite{CanRomTao06} established that in \emph{noisy} systems, for any linear level of under-determinedness there is a linear sparsity that can be \emph{approximately} recovered through an SOCP (second-order cone programming) optimization algorithm, so that the approximate solution vector is (in an $\ell_2$-norm sense) guaranteed to be no further from the sparse unknown vector than a constant times the noise. In our recent work \cite{StojnicGenSocp10} we established an alternative framework that can be used for statistical performance analysis of SOCP algorithms. To demonstrate how the framework works, we then showed in \cite{StojnicGenSocp10} how one can use it to precisely characterize the \emph{generic} (worst-case) performance of the SOCP. In this paper we present a different set of results that can be obtained through the framework of \cite{StojnicGenSocp10}, relating to \emph{problem-dependent} performance analysis of SOCPs. We consider specific types of unknown sparse vectors and characterize the SOCP performance when it is used for recovery of such vectors. We also show that our theoretical predictions are in solid agreement with the results one can obtain through numerical simulations.
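The SOCP analyzed here has the constrained form min ||x||_1 s.t. ||Ax - y||_2 <= eps. As a self-contained stand-in (not the paper's algorithm), the sketch below solves the closely related Lagrangian form min 0.5||Ax - y||^2 + lam||x||_1 by iterative soft-thresholding (ISTA) in pure Python; the matrix and data in the test are toy values.

```python
import math

def soft(v, t):
    """Entry-wise soft-thresholding, the proximal map of t * ||.||_1."""
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def ista(A, y, lam, iters=500):
    """ISTA for min 0.5 * ||A x - y||^2 + lam * ||x||_1, a Lagrangian
    relative of the constrained SOCP recovery formulation."""
    m, n = len(A), len(A[0])
    # Step size 1/L, with L = ||A||_F^2 >= ||A^T A||_2 a safe Lipschitz bound.
    L = sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft([x[j] - g[j] / L for j in range(n)], lam / L)
    return x
```

The soft-threshold shrinks small coefficients exactly to zero, which is why the l1 penalty (and its SOCP counterpart) produces sparse recoveries.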


Efficient Recovery of Jointly Sparse Vectors

Sun, Liang, Liu, Jun, Chen, Jianhui, Ye, Jieping

Neural Information Processing Systems

We consider the reconstruction of sparse signals in the multiple measurement vector (MMV) model, in which the signal, represented as a matrix, consists of a set of jointly sparse vectors. MMV is an extension of the single measurement vector (SMV) model employed in standard compressive sensing (CS). Recent theoretical studies focus on the convex relaxation of the MMV problem based on $(2,1)$-norm minimization, which is an extension of the well-known $1$-norm minimization employed in SMV. However, the resulting convex optimization problem in MMV is significantly more difficult to solve than the one in SMV. Existing algorithms reformulate it as a second-order cone program (SOCP) or a semidefinite program (SDP), which is computationally expensive to solve for problems of moderate size. In this paper, we propose a new (dual) reformulation of the convex optimization problem in MMV and develop an efficient algorithm based on the prox-method. Interestingly, our theoretical analysis reveals a close connection between the proposed reformulation and multiple kernel learning. Our simulation studies demonstrate the scalability of the proposed algorithm.
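The $(2,1)$-norm and its proximal operator, the building blocks of first-order MMV solvers like the prox-method above, are simple to state. The sketch below implements both in pure Python; it is a generic illustration of group shrinkage, not the paper's dual reformulation.

```python
import math

def norm21(X):
    """(2,1)-norm: sum of the l2 norms of the rows of X.
    Penalizing it drives entire rows to zero, i.e., joint sparsity."""
    return sum(math.sqrt(sum(v * v for v in row)) for row in X)

def prox21(X, t):
    """Proximal operator of t * ||.||_{2,1}: each row is shrunk toward the
    origin by t in l2 norm, and zeroed out entirely if its norm is below t."""
    out = []
    for row in X:
        nrm = math.sqrt(sum(v * v for v in row))
        scale = max(1.0 - t / nrm, 0.0) if nrm > 0 else 0.0
        out.append([scale * v for v in row])
    return out
```

Rows are the "groups" here: a row survives shrinkage only if its whole l2 norm exceeds the threshold, which is exactly how the MMV relaxation couples the measurement vectors.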



Multi-Task Learning via Conic Programming

Kato, Tsuyoshi, Kashima, Hisashi, Sugiyama, Masashi, Asai, Kiyoshi

Neural Information Processing Systems

When we have several related tasks, solving them simultaneously has been shown to be more effective than solving them individually. This approach is called multi-task learning (MTL) and has been studied extensively. Existing approaches to MTL often treat all the tasks as \emph{uniformly} related to each other, and the relatedness of the tasks is controlled globally. For this reason, the existing methods can lead to undesired solutions when some tasks are not highly related to each other, and some pairs of related tasks can have significantly different solutions. In this paper, we propose a novel MTL algorithm that can overcome these problems. Our method makes use of a task network, which describes the relation structure among tasks. This allows us to deal with intricate relation structures in a systematic way. Furthermore, we control the relatedness of the tasks locally, so all pairs of related tasks are guaranteed to have similar solutions. We apply the above idea to support vector machines (SVMs) and show that the optimization problem can be cast as a second-order cone program, which is convex and can be solved efficiently. The usefulness of our approach is demonstrated through simulations with protein super-family classification and ordinal regression problems.
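The local-relatedness idea can be hinted at with a small sketch: a task-network penalty that couples only the task pairs joined by an edge, so unrelated tasks are free to differ. This quadratic stand-in is much simpler than the paper's conic SVM formulation; the task weights and edges in the test are illustrative.

```python
def network_penalty(W, edges):
    """Sum of squared l2 differences between the parameter vectors of tasks
    that are adjacent in the task network. Only edges contribute, so the
    relatedness of tasks is controlled locally rather than globally."""
    return sum(sum((W[i][k] - W[j][k]) ** 2 for k in range(len(W[i])))
               for i, j in edges)
```

Minimizing a training loss plus this penalty pulls each pair of networked tasks toward similar solutions while leaving non-adjacent tasks uncoupled; in the paper, an analogous coupling constraint is handled inside a second-order cone program.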