Optimization



Artificial Intelligence: Optimization Algorithms in Python

#artificialintelligence

What would an "optimal world" look like to you? Would people get along better? Would we take better care of our environment? Many data scientists choose to optimize by using pre-built machine learning libraries. But we think that this kind of 'plug-and-play' study hinders your learning. That's why this course gets you to build an optimization algorithm from the ground up.
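
As a taste of what building an optimizer from the ground up can look like, here is a minimal gradient-descent sketch in plain Python; the toy function, step size, and iteration count are illustrative choices, not material from the course itself.

```python
# Illustrative only: a from-scratch gradient descent on a toy function,
# not material from the course itself.

def gradient_descent(grad, x0, learning_rate=0.1, n_steps=100):
    """Minimize a 1-D function given its gradient, starting from x0."""
    x = x0
    for _ in range(n_steps):
        x -= learning_rate * grad(x)  # step opposite the gradient
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(minimum)  # approaches 3.0
```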


What Lies Ahead for Artificial Intelligence?

#artificialintelligence

The past few years have marked a breakthrough in the advancement of technology, with the evolution of artificial intelligence rapidly gaining the attention of researchers around the globe. Fremont, CA: Designing a model that mimics the human brain and functions in a similar way has been one of the scientific community's biggest puzzles. The constant and rigorous efforts of researchers over the years have driven the evolution of artificial intelligence. Over the past seven decades, AI and its applications have been considered both a boon and a curse, and at many points in that period the technology failed to meet expectations.


Ant colony optimization algorithms - Wikipedia

#artificialintelligence

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. Artificial Ants stand for multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used.[2] Combinations of Artificial Ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing. The burgeoning activity in this field has led to conferences dedicated solely to Artificial Ants, and to numerous commercial applications by specialized companies such as AntOptima. As an example, Ant colony optimization[3] is a class of optimization algorithms modeled on the actions of an ant colony. Real ants lay down pheromones directing each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions.[4]
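
To make the mechanics concrete, the following is a compact, illustrative ACO sketch for a tiny symmetric traveling-salesman instance; the distance matrix, pheromone parameters, and colony size are arbitrary assumptions for demonstration, not values from the article.

```python
# Minimal ant colony optimization sketch for a tiny symmetric TSP.
# Distances, parameter values, and colony size are illustrative assumptions.
import random

# Distance matrix for 4 cities (symmetric, arbitrary example values).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)
pheromone = [[1.0] * n for _ in range(n)]
alpha, beta, evaporation, n_ants, n_iters = 1.0, 2.0, 0.5, 10, 50

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def build_tour():
    """Each ant builds a tour city by city, biased by pheromone and distance."""
    tour = [random.randrange(n)]
    while len(tour) < n:
        current = tour[-1]
        candidates = [c for c in range(n) if c not in tour]
        weights = [
            (pheromone[current][c] ** alpha) * ((1.0 / dist[current][c]) ** beta)
            for c in candidates
        ]
        tour.append(random.choices(candidates, weights=weights)[0])
    return tour

best = None
for _ in range(n_iters):
    tours = [build_tour() for _ in range(n_ants)]
    # Evaporate, then deposit pheromone proportional to tour quality.
    for i in range(n):
        for j in range(n):
            pheromone[i][j] *= (1 - evaporation)
    for tour in tours:
        deposit = 1.0 / tour_length(tour)
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            pheromone[a][b] += deposit
            pheromone[b][a] += deposit
    best = min(tours + ([best] if best else []), key=tour_length)

print(best, tour_length(best))
```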



Scientists use reinforcement learning to train quantum algorithm

#artificialintelligence

Recent advancements in quantum computing have driven the scientific community's quest to solve a certain class of complex problems for which quantum computers would be better suited than traditional supercomputers. To improve the efficiency with which quantum computers can solve these problems, scientists are investigating the use of artificial intelligence approaches. In a new study, scientists at the U.S. Department of Energy's (DOE) Argonne National Laboratory have developed a new algorithm based on reinforcement learning to find the optimal parameters for the Quantum Approximate Optimization Algorithm (QAOA), which allows a quantum computer to solve certain combinatorial problems such as those that arise in materials design, chemistry and wireless communications. "Combinatorial optimization problems are those for which the solution space gets exponentially larger as you expand the number of decision variables," said Argonne computer scientist Prasanna Balaprakash. "In one traditional example, you can find the shortest route for a salesman who needs to visit a few cities once by enumerating all possible routes, but given a couple thousand cities, the number of possible routes far exceeds the number of stars in the universe; even the fastest supercomputers cannot find the shortest route in a reasonable time."
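
A small sketch of the combinatorial explosion Balaprakash describes (this is not QAOA itself): brute-force enumeration of salesman routes is trivial for a handful of cities, but the count of distinct tours grows factorially. The city coordinates below are made up for illustration.

```python
# Why combinatorial optimization explodes: the number of possible routes for a
# traveling salesman grows factorially with the city count.
from itertools import permutations
from math import factorial, dist

# Toy instance with made-up city coordinates (not from the Argonne study).
cities = [(0, 0), (1, 5), (4, 2), (6, 6), (3, 1)]

def route_length(order):
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exhaustive enumeration is fine for 5 cities...
best = min(permutations(range(len(cities))), key=route_length)
print("best route:", best, "length:", round(route_length(best), 2))

# ...but the search space soon dwarfs any computing budget.
for n in (5, 20, 60):
    print(f"{n} cities -> {factorial(n - 1) / 2:.3e} distinct tours")
```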


Operations Research Learning

#artificialintelligence

There is a huge synergy between Operations Research (OR) and Machine Learning (ML). While some ML researchers are using OR to further improve their learning methods, some OR researchers are using ML to incorporate learning into the optimization process, with the expectation of significant gains in time, optimality gap, and other metrics. In this article, I will go through some stories in which machine learning is leveraged to tackle optimization problems. I like calling it operations research learning (ORL). These stories provide insight into how the synergy is built, how it transfers among problems, and where improvement opportunities lie.
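
As one hypothetical flavor of ORL, a learned model can warm-start a classical optimizer so that it typically converges in fewer iterations; the toy problem family, the linear model, and all parameter choices below are illustrative assumptions, not an example from the article.

```python
# Hypothetical sketch: use a learned model to warm-start a numerical optimizer.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Family of problems: minimize (x - a)^4 + b*x^2, parameterized by (a, b).
def objective(x, a, b):
    x = np.asarray(x).ravel()[0]  # scipy passes a length-1 array
    return (x - a) ** 4 + b * x ** 2

# Training data: for sampled (a, b), record the minimizer found from a cold start.
params, minimizers = [], []
for _ in range(200):
    a, b = rng.uniform(-5, 5), rng.uniform(0.1, 2.0)
    res = minimize(objective, x0=0.0, args=(a, b))
    params.append([a, b])
    minimizers.append(res.x[0])

# Learn a mapping from problem parameters to a good starting point.
warm_start_model = LinearRegression().fit(params, minimizers)

# On a new problem, the predicted start point typically saves iterations.
a_new, b_new = 3.2, 0.5
x0_pred = warm_start_model.predict([[a_new, b_new]])[0]
cold = minimize(objective, x0=0.0, args=(a_new, b_new))
warm = minimize(objective, x0=x0_pred, args=(a_new, b_new))
print(f"cold start: {cold.nit} iterations, warm start: {warm.nit} iterations")
```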


Optimal Sepsis Patient Treatment using Human-in-the-loop Artificial Intelligence

#artificialintelligence

This study proposes a clinical prescriptive model with human-in-the-loop functionality that recommends optimal, individual-specific amounts of IV fluids for the treatment of septic patients in ICUs. The proposed methodology combines constrained optimization and machine learning techniques to arrive at optimal solutions. A key novelty of the proposed clinical model is the use of a physician's input to derive optimal solutions. The efficacy of the method is demonstrated using a real-world medical dataset. We further validated the robustness of the proposed approach to show that our method benefits from the human-in-the-loop component but is also robust to poor input, which is a crucial consideration for new physicians.
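
The study's actual model is not reproduced here; the sketch below only illustrates, under made-up assumptions, the general shape of such a setup: a learned outcome model minimized subject to a physician-specified dose range.

```python
# Hypothetical sketch only: combine a stand-in "learned" outcome model with a
# clinician-specified bound on IV fluid volume; this is not the study's model.
from scipy.optimize import minimize_scalar

def predicted_risk(fluid_ml):
    """Stand-in for a machine-learned model mapping fluid volume to patient risk."""
    return 0.000002 * (fluid_ml - 1800) ** 2 + 0.1  # toy quadratic, minimum at 1800 ml

# Human-in-the-loop component: the physician caps the allowable dose range.
physician_bounds = (500, 1500)  # ml, illustrative

# The unconstrained optimum (1800 ml) lies outside the physician's range,
# so the recommendation is pushed to the boundary the clinician allows.
result = minimize_scalar(predicted_risk, bounds=physician_bounds, method="bounded")
print(f"recommended fluid volume: {result.x:.0f} ml, predicted risk: {result.fun:.3f}")
```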


Exploring different optimization algorithms

#artificialintelligence

Machine learning is a field of study in the broad spectrum of artificial intelligence (AI) that makes predictions from data without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as recommendation engines, computer vision, spam filtering, and many more. They perform extraordinarily well where it is difficult or infeasible to develop conventional algorithms for the needed tasks. While many machine learning algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data, over and over, faster and faster, is a recent development. One of the most widely used machine learning techniques is the neural network.
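
For a concrete sense of what "different optimization algorithms" can mean here, the following is an illustrative comparison (not taken from the article) of plain gradient descent and gradient descent with momentum on a one-dimensional stand-in for a neural network loss.

```python
# Illustrative comparison: plain gradient descent versus gradient descent with
# momentum on f(x) = x^2, a stand-in for a neural network loss surface.

def grad(x):
    return 2 * x  # derivative of x^2

def plain_gd(x, lr=0.1, steps=200):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def momentum_gd(x, lr=0.1, beta=0.9, steps=200):
    velocity = 0.0
    for _ in range(steps):
        velocity = beta * velocity + grad(x)  # accumulate a running direction
        x -= lr * velocity
    return x

print(plain_gd(10.0), momentum_gd(10.0))  # both approach the minimum at 0
```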


Why Can a Machine Beat Mario but not Pokemon?

#artificialintelligence

By now, you've probably heard of bots playing video games at superhuman levels. These bots can be programmed explicitly, reacting to set inputs with set outputs, or learn and evolve, reacting in different ways to the same inputs in hopes of finding the optimal responses. These games are complex, and training these machines takes clever combinations of complicated algorithms, repeated simulations, and time. I want to focus on MarI/O and why we can't use a similar approach to beat a game of Pokemon (watch the video in the link above if you are unfamiliar with how it works). Let's compare the games using each of these factors. The way a machine learns is by optimizing some kind of objective function.
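
To see why the objective function matters, consider a hypothetical sketch: for a Mario-style game, progress reduces naturally to a single number, while for Pokemon any single scalar (the weights below are made up) captures only a sliver of what playing well means.

```python
# Hypothetical illustration: a Mario-style fitness function is easy to write down,
# because progress is a single number (how far right the agent got, how fast).
def mario_fitness(distance_traveled, frames_elapsed):
    return distance_traveled - 0.1 * frames_elapsed  # reward progress, penalize time

# For Pokemon, it is much less obvious which scalar to maximize: badges earned?
# Pokedex entries? Battles won? Any single choice (the weights here are arbitrary)
# leaves out most of what "playing well" actually means.
def pokemon_fitness(badges, pokedex_entries, battles_won):
    return 100 * badges + pokedex_entries + 0.5 * battles_won

print(mario_fitness(distance_traveled=3200, frames_elapsed=1800))
print(pokemon_fitness(badges=2, pokedex_entries=40, battles_won=15))
```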