

Computing for Ocean Environments: Bio-Inspired Underwater Devices & Swarming Algorithms for Robotic Vehicles

#artificialintelligence

Assistant Professor Wim van Rees and his team have developed simulations of self-propelled undulatory swimmers to better understand how fish-like deformable fins could improve propulsion in underwater devices, seen here in a top-down view. MIT ocean and mechanical engineers are using advances in scientific computing to address the ocean's many challenges and seize its opportunities. There are few environments as unforgiving as the ocean. Its unpredictable weather patterns and communication constraints have left large swaths of the ocean unexplored and shrouded in mystery. "The ocean is a fascinating environment with a number of current challenges like microplastics, algae blooms, coral bleaching, and rising temperatures," says Wim van Rees, the ABS Career Development Professor at MIT. "At the same time, the ocean holds countless opportunities -- from aquaculture to energy harvesting and exploring the many ocean creatures we haven't discovered yet."


Optimal Path Planning of Autonomous Marine Vehicles in Stochastic Dynamic Ocean Flows using a GPU-Accelerated Algorithm

arXiv.org Artificial Intelligence

Autonomous marine vehicles play an essential role in many ocean science and engineering applications. Planning time and energy optimal paths for these vehicles to navigate in stochastic dynamic ocean environments is essential to reduce operational costs. In some missions, they must also harvest solar, wind, or wave energy (modeled as a stochastic scalar field) and move in optimal paths that minimize net energy consumption. Markov Decision Processes (MDPs) provide a natural framework for sequential decision-making for robotic agents in such environments. However, building a realistic model and solving the modeled MDP becomes computationally expensive in large-scale real-time applications, warranting the need for parallel algorithms and efficient implementation. In the present work, we introduce an efficient end-to-end GPU-accelerated algorithm that (i) builds the MDP model (computing transition probabilities and expected one-step rewards); and (ii) solves the MDP to compute an optimal policy. We develop methodical and algorithmic solutions to overcome the limited global memory of GPUs by (i) using a dynamic reduced-order representation of the ocean flows, (ii) leveraging the sparse nature of the state transition probability matrix, (iii) introducing a neighbouring sub-grid concept and (iv) proving that it is sufficient to use only the stochastic scalar field's mean to compute the expected one-step rewards for missions involving energy harvesting from the environment; thereby saving memory and reducing the computational effort. We demonstrate the algorithm on a simulated stochastic dynamic environment and highlight that it builds the MDP model and computes the optimal policy 600-1000x faster than conventional CPU implementations, making it suitable for real-time use.
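As a rough illustration of the second stage described above, solving the MDP for an optimal policy over a sparse transition model, here is a minimal CPU-side value-iteration sketch in Python. The state and action counts, the randomly generated transitions and rewards, and the discount factor are all toy assumptions; this is not the authors' GPU-accelerated implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Minimal value-iteration sketch over sparse transition matrices.
# State/action counts, rewards, and discount factor are illustrative assumptions.
n_states, n_actions, n_succ = 1000, 8, 5
gamma = 0.95
rng = np.random.default_rng(0)

P = []  # P[a]: sparse (n_states x n_states) transition matrix for action a
for a in range(n_actions):
    cols = rng.integers(0, n_states, size=(n_states, n_succ))
    probs = rng.random((n_states, n_succ))
    probs /= probs.sum(axis=1, keepdims=True)      # each row sums to 1
    rows = np.repeat(np.arange(n_states), n_succ)
    P.append(csr_matrix((probs.ravel(), (rows, cols.ravel())),
                        shape=(n_states, n_states)))

R = rng.standard_normal((n_states, n_actions))     # expected one-step rewards (placeholders)

V = np.zeros(n_states)
for _ in range(500):
    # Bellman backup: Q(s, a) = R(s, a) + gamma * sum_s' P(s' | s, a) V(s')
    Q = np.column_stack([R[:, a] + gamma * (P[a] @ V) for a in range(n_actions)])
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)                          # greedy optimal policy
```

Keeping each P[a] in compressed sparse row form mirrors the paper's observation that the transition matrix is sparse; only the nonzero successor probabilities are stored and multiplied.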


Bayesian Optimization with Exponential Convergence

Neural Information Processing Systems

This paper presents a Bayesian optimization method with exponential convergence that requires neither auxiliary optimization nor delta-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence requires access to delta-cover sampling, which is considered impractical. Our approach eliminates both requirements and achieves an exponential convergence rate.
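To make concrete the "auxiliary optimization" step that the paper's method avoids, the sketch below runs a conventional GP-based Bayesian optimization loop in which the acquisition function (UCB) is maximized by a dense random search at every iteration. The toy objective, kernel, and all settings are assumptions for illustration only; this shows the standard approach, not the paper's algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):
    return -np.sin(3 * x) - x**2 + 0.7 * x        # 1-D toy function to maximize

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 2.0, size=(3, 1))           # initial design points
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
for _ in range(20):
    gp.fit(X, y)
    # Auxiliary optimization: maximize the UCB acquisition over a dense
    # candidate set -- the extra non-convex search the paper eliminates.
    cand = rng.uniform(-1.0, 2.0, size=(2000, 1))
    mu, sigma = gp.predict(cand, return_std=True)
    ucb = mu + 2.0 * sigma
    x_next = cand[np.argmax(ucb)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmax(y)], "best value:", y.max())
```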


Qian

AAAI Conferences

Pareto optimization solves a constrained optimization task by reformulating it as a bi-objective problem. Pareto optimization has been shown to be quite effective in applications; however, it has little theoretical support. This work theoretically compares Pareto optimization with a penalty approach, a common method that transforms a constrained optimization into an unconstrained one. We prove that on two large classes of constrained Boolean optimization problems, minimum matroid optimization (P-solvable) and minimum cost coverage (NP-hard), Pareto optimization is more efficient than the penalty function method for obtaining optimal and approximate solutions, respectively. Furthermore, on a minimum cost coverage instance, we also show the advantage of Pareto optimization over a greedy algorithm.
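The following Python sketch illustrates the bi-objective reformulation on a toy minimum cost coverage (weighted set cover) instance, using a simple GSEMO-style Pareto optimization loop: the constrained task "minimize cost subject to full coverage" is recast as "maximize coverage and minimize cost". The instance, mutation rate, and iteration budget are invented for illustration and do not reproduce the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sets, n_items = 12, 30
sets = rng.random((n_sets, n_items)) < 0.2        # sets[i, j]: set i covers item j
costs = rng.uniform(1.0, 5.0, n_sets)

def evaluate(x):
    """Return (coverage, cost) for a 0/1 selection vector x."""
    mask = x.astype(bool)
    coverage = int(np.count_nonzero(sets[mask].any(axis=0))) if mask.any() else 0
    return coverage, float(costs[mask].sum())

def dominates(a, b):
    # Bi-objective comparison: higher coverage and lower cost are better.
    return a[0] >= b[0] and a[1] <= b[1] and a != b

empty = np.zeros(n_sets, dtype=int)
archive = {tuple(empty): evaluate(empty)}          # non-dominated solutions found so far
for _ in range(5000):
    keys = list(archive.keys())
    parent = np.array(keys[rng.integers(len(keys))], dtype=int)
    child = parent.copy()
    flips = rng.random(n_sets) < 1.0 / n_sets      # standard bit-flip mutation
    child[flips] = 1 - child[flips]
    f_child = evaluate(child)
    # Keep the child unless an archived solution dominates it, then prune.
    if not any(dominates(f, f_child) for f in archive.values()):
        archive = {k: f for k, f in archive.items() if not dominates(f_child, f)}
        archive[tuple(child)] = f_child

# Among fully covering solutions in the archive, report the cheapest one found.
feasible = [(f[1], k) for k, f in archive.items() if f[0] == n_items]
if feasible:
    best_cost, best = min(feasible)
    print("cost:", round(best_cost, 2), "sets:", [i for i, b in enumerate(best) if b])
```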


Data Analysis Method: Mathematics Optimization to Build Decision Making

@machinelearnbot

Optimization is the problem of finding the best decision, one that is both effective and efficient, whether the objective is to be maximized or minimized, by determining a satisfactory solution.
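As a minimal, concrete example of optimization as decision making, the sketch below solves a small, invented product-mix problem with scipy.optimize.linprog: choose production quantities that maximize profit subject to resource limits. All numbers are assumptions for illustration.

```python
from scipy.optimize import linprog

# Toy decision problem (all numbers invented): choose quantities of two
# products to maximize profit subject to labor and material limits.
# linprog minimizes, so the profit coefficients are negated.
profit = [-40.0, -30.0]                  # maximize 40*x1 + 30*x2
A_ub = [[1.0, 2.0],                      # labor hours:    x1 + 2*x2 <= 40
        [3.0, 1.0]]                      # material units: 3*x1 + x2 <= 60
b_ub = [40.0, 60.0]

res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal quantities:", res.x, "maximum profit:", -res.fun)
```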