
Computing for Ocean Environments: Bio-Inspired Underwater Devices & Swarming Algorithms for Robotic Vehicles

Assistant Professor Wim van Rees and his team have developed simulations of self-propelled undulatory swimmers to better understand how fish-like deformable fins could improve propulsion in underwater devices, seen here in a top-down view. MIT ocean and mechanical engineers are using advances in scientific computing to address the ocean's many challenges and seize its opportunities.

Computing for ocean environments

There are few environments as unforgiving as the ocean. Its unpredictable weather patterns and limited communications have left large swaths of the ocean unexplored and shrouded in mystery. "The ocean is a fascinating environment with a number of current challenges like microplastics, algae blooms, coral bleaching, and rising temperatures," says Wim van Rees, the ABS Career Development Professor at MIT. "At the same time, the ocean holds countless opportunities -- from aquaculture to energy harvesting and exploring the many ocean creatures we haven't discovered yet." Ocean engineers and mechanical engineers like van Rees are using advances in scientific computing to address the ocean's many challenges and seize its opportunities. These researchers are developing technologies to better understand our oceans, and how both organisms and human-made vehicles can move within them, from the micro scale to the macro scale.

MIT's New Tool for Tackling Hard Computational Problems

Some difficult computational problems, depicted as finding the highest peak in a "landscape" of countless mountain peaks separated by valleys, can take advantage of the Overlap Gap Property: at a high enough "altitude," any two points will be either close or far apart -- but nothing in between. David Gamarnik has developed a new tool, the Overlap Gap Property, for understanding computational problems that appear intractable. The notion that some computational problems in math and computer science can be hard should come as no surprise. There is, in fact, an entire class of problems deemed impossible to solve algorithmically. Just below this class lie slightly "easier" problems that are less well understood -- and may be impossible, too.

Artificial Intelligence Upskills Software via Mathematics - ASME

Fusing artificial intelligence with mathematical optimization will dramatically increase the "brainpower" for the task at hand, whether it's optimizing flight patterns or bringing energy and food to underserved areas. That's the word from the academic researchers who are part of a new interdisciplinary institute that aims to integrate the two fields. The National AI Institute for Advances in Optimization (AI4OPT) is led by a multidisciplinary team from six U.S. universities, including computer science and civil, environmental, electrical, and computer engineering professors. The combined methods will foster no less than a "paradigm shift" in optimization, said Pascal Van Hentenryck, professor of industrial and systems engineering at Georgia Tech and institute lead. According to Van Hentenryck, tackling problems at the scale and complexity faced by society today requires a fusion of optimization and machine learning, with the two technologies working hand in hand.

Optimizing molecules using efficient queries from property evaluations - Nature Machine Intelligence

Machine learning-based methods have shown potential for optimizing existing molecules with more desirable properties, a critical step towards accelerating new chemical discovery. Here we propose QMO, a generic query-based molecule optimization framework that exploits latent embeddings from a molecule autoencoder. QMO improves the desired properties of an input molecule based on efficient queries, guided by a set of molecular property predictions and evaluation metrics. We show that QMO outperforms existing methods in the benchmark tasks of optimizing small organic molecules for drug-likeness and solubility under similarity constraints. We also demonstrate substantial property improvement using QMO on two new and challenging tasks that are also important in real-world discovery problems: (1) optimizing existing potential SARS-CoV-2 main protease inhibitors towards higher binding affinity and (2) improving known antimicrobial peptides towards lower toxicity. Results from QMO show high consistency with external validations, suggesting an effective means to facilitate material optimization problems with design constraints. Zeroth-order optimization is used on problems where no explicit gradient function is accessible, but single points can be queried. Hoffman et al. present here a molecular design method that uses zeroth-order optimization to deal with the discreteness of molecule sequences and to incorporate external guidance from property evaluations and design constraints.
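The zeroth-order idea described above can be illustrated with a simple finite-difference gradient estimator that uses only function queries. This is a minimal sketch on a hypothetical toy objective, not the QMO implementation; all names below are illustrative.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4):
    """Estimate the gradient of a black-box function f at x using
    central finite differences (two queries per coordinate)."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = mu
        grad[i] = (f(x + e) - f(x - e)) / (2 * mu)
    return grad

def zo_minimize(f, x0, lr=0.1, steps=200):
    """Plain gradient descent driven only by point queries of f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x -= lr * zo_gradient(f, x)
    return x

# Toy black-box objective with its minimum at (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_best = zo_minimize(f, [0.0, 0.0])
```

In a real molecular-design setting, `f` would be a property predictor evaluated on a decoded latent embedding rather than a smooth analytic function.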

#003A Logistic Regression – Cost Function Optimization - Master Data Science

First, to train the parameters $$w$$ and $$b$$ of a logistic regression model we need to define a cost function. Given a training set of $$m$$ training examples, we want to find parameters $$w$$ and $$b$$ so that the prediction $$\hat{y}$$ is as close as possible to the ground truth $$y$$. Here, we will use the $$(i)$$ superscript to index different training examples. We will use a loss (error) function $$\mathcal{L}$$ to measure how well our algorithm is doing on a single example. In logistic regression, the squared error loss is not an optimal choice, because combined with the sigmoid it yields a non-convex optimization problem; the cross-entropy loss is used instead.
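As a sketch of the cross-entropy loss and the cost function it averages into (variable names here are illustrative, not from the original article):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(y_hat, y):
    """Cross-entropy loss for a single example:
    L(y_hat, y) = -(y*log(y_hat) + (1 - y)*log(1 - y_hat))."""
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def cost(w, b, X, y):
    """Cost J(w, b): the loss averaged over all m training examples."""
    y_hat = sigmoid(X @ w + b)
    return np.mean(loss(y_hat, y))
```

A confident correct prediction is penalized lightly (loss(0.9, 1) ≈ 0.105), while an uncertain one costs more (loss(0.5, 1) ≈ 0.693), which is exactly the behavior gradient descent exploits during training.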

Bayesian Optimization with Python

If you are in the fields of data science or machine learning, chances are you are already doing optimization! For example, training a neural network is an optimization problem, as we want to find the set of model weights that minimizes the loss function. Finding the set of hyperparameters that results in the best-performing model is another optimization problem. Optimization algorithms come in many forms, each created to solve a particular type of problem. In particular, one type of problem commonly faced by scientists in both academia and industry is the optimization of expensive-to-evaluate black-box functions.
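One way to attack an expensive black-box function is Bayesian optimization: fit a Gaussian-process surrogate to the points evaluated so far, then query the point that maximizes an acquisition function such as expected improvement. The sketch below is a self-contained, assumption-laden toy (1-D domain, hand-rolled GP with a fixed RBF kernel, grid search over the acquisition), not a production implementation.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel between two column vectors of inputs."""
    return np.exp(-0.5 * ((A - B.T) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and std at query points Xs (zero prior mean)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    var = np.clip(1.0 - np.sum(Ks * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """EI for minimization: expected amount by which we beat `best`."""
    z = (best - mu) / sigma
    cdf = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mu) * cdf + sigma * pdf

def bayes_opt_1d(f, n_init=4, n_iter=10, seed=0):
    """Minimize f on [0, 1] with a GP surrogate and EI acquisition."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, size=(n_init, 1))
    y = np.array([f(x[0]) for x in X])
    grid = np.linspace(0, 1, 200).reshape(-1, 1)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y - y.mean(), grid)
        ei = expected_improvement(mu + y.mean(), sigma, y.min())
        x_next = grid[np.argmax(ei)].reshape(1, 1)
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0, 0]))
    i = int(np.argmin(y))
    return X[i, 0], y[i]

best_x, best_f = bayes_opt_1d(lambda x: (x - 0.6) ** 2)
```

The key trade-off is that every surrogate fit and acquisition maximization is cheap relative to one evaluation of the true objective, which is exactly the regime where Bayesian optimization pays off.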

Manager, use this to plan the smart working days for your team!

Originally published on Towards AI, the World's Leading AI and Technology News and Media Company. During COVID-19, several teams were constrained to work from home entirely.

Daily Digest

Understanding the interactions formed between a ligand and its molecular target is key to guiding the optimization of molecules. Different experimental and computational methods have been applied to better understand these intermolecular interactions. Here researchers report a method based on geometric deep learning that is capable of predicting the binding conformations of ligands to protein targets. The model learns a statistical potential based on the distance likelihood, which is tailor-made for each ligand–target pair. This potential can be coupled with global optimization algorithms to reproduce the experimental binding conformations of ligands.

Deep Learning Optimization Theory -- Introduction

Optimization of convex functions is considered a mature field in mathematics. Accordingly, one can use well-established tools and theories to answer the questions described in the last paragraph. However, optimization of complicated non-convex functions is hard to analyze. Since the optimization of deep neural networks (yes, even linear ones) is non-convex, how can we attempt to answer those questions? One might seek wide empirical evidence that SGD converges to global minima on real-world problems.
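The parenthetical about linear networks is worth making concrete: even a two-layer *linear* model gives a non-convex training problem, because the loss is convex in the end-to-end map but not jointly in the layer weights. A minimal illustrative sketch (the values and variable names are mine, not from the article):

```python
# Fit y = w2 * w1 * x to the target map y = 2x with a two-layer scalar
# linear "network". The loss 0.5 * (w2*w1 - 2)^2 is non-convex jointly
# in (w1, w2) -- e.g. (t, 2/t) is a whole curve of global minima --
# yet plain gradient descent still reaches one of them.
w1, w2, lr = 0.5, 0.5, 0.1
for _ in range(500):
    r = w2 * w1 - 2.0           # residual of the end-to-end map
    g1, g2 = r * w2, r * w1     # partial derivatives of the loss
    w1, w2 = w1 - lr * g1, w2 - lr * g2

product = w1 * w2  # converges to the target coefficient 2
```

This mirrors the empirical observation in the article: non-convexity alone does not prevent gradient methods from finding global minima, which is part of what deep learning optimization theory tries to explain.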