Single Objective Problems

#artificialintelligence

Before moving on, let's take some time to have a closer look at a single-objective problem; this will give us some perspective. In single-objective problems, the goal is to find a single solution that represents the global optimum over the entire search space. Determining which solution outperforms the others is a simple task when only a single objective is considered, because the best solution is simply the one with the highest (for maximisation problems) or lowest (for minimisation problems) objective value. Let's take the Sphere function as an example.
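
As a minimal illustration (the candidate points below are my own choices, not taken from the article), the Sphere function simply sums the squared decision variables, so its single global minimum is the zero vector, and comparing candidates reduces to comparing one number:

    import numpy as np

    def sphere(x):
        """Sphere benchmark: f(x) = sum(x_i ** 2); global minimum 0 at x = 0."""
        x = np.asarray(x, dtype=float)
        return float(np.sum(x ** 2))

    # With a single objective, ranking solutions is trivial:
    # for a minimisation problem, the lowest objective value wins.
    candidates = [np.array([1.0, -2.0]), np.array([0.5, 0.5]), np.array([0.0, 0.1])]
    best = min(candidates, key=sphere)
    print(best, sphere(best))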


Data Science Process: Defining a Problem Statement

#artificialintelligence

Traditional BI projects were typically set on long-term objectives, so the client often did not see results until total completion; in many cases this produced deviations in both cost and scope. Machine Learning projects, by contrast, must set short-term objectives and be managed with an agile approach: the loop between business questions, hypotheses and data evidence must be continuous, new findings must be used to drive and improve subsequent project waves, and results, even when partial, need to be shared with business people to keep their commitment high.



Efficient Methods for Multi-Objective Decision-Theoretic Planning

AAAI Conferences

In decision-theoretic planning problems, such as (partially observable) Markov decision problems or coordination graphs, agents typically aim to optimize a scalar value function. However, in many real-world problems agents are faced with multiple, possibly conflicting objectives. In such multi-objective problems, the value is a vector rather than a scalar, and we need methods that compute a coverage set, i.e., a set of solutions optimal for all possible trade-offs between the objectives. In this project we propose new multi-objective planning methods that compute the so-called convex coverage set (CCS): the coverage set for when policies can be stochastic, or the preferences are linear. We show that the CCS has favorable mathematical properties, and is typically much easier to compute than the Pareto front, which is often axiomatically assumed as the solution set for multi-objective decision problems.
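
As a rough sketch of the idea (this is my own illustration, not code from the paper; the weight sweep and example vectors are assumptions), the CCS keeps exactly those value vectors that maximise the linearly scalarised value w . V for at least one weight vector w, which can exclude Pareto-optimal vectors that no linear weighting ever prefers:

    import numpy as np

    def convex_coverage_set(value_vectors, n_weights=101):
        """Approximate the CCS for two objectives by sweeping linear weights.

        A value vector belongs to the CCS if it is optimal for some weight w,
        i.e. it maximises w[0] * V[0] + w[1] * V[1]."""
        vectors = np.asarray(value_vectors, dtype=float)
        ccs_indices = set()
        for w0 in np.linspace(0.0, 1.0, n_weights):
            w = np.array([w0, 1.0 - w0])
            ccs_indices.add(int(np.argmax(vectors @ w)))
        return vectors[sorted(ccs_indices)]

    # The middle vector is Pareto-optimal but not in the CCS, because no
    # linear weighting of the two objectives ever makes it the best choice.
    print(convex_coverage_set([[1.0, 0.0], [0.45, 0.45], [0.0, 1.0]]))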


Terra-Neves

AAAI Conferences

Minimal Correction Subsets (MCSs) have been successfully applied to find approximate solutions to several real-world single-objective optimization problems. However, only recently have MCSs been used to solve Multi-Objective Combinatorial Optimization (MOCO) problems. In particular, it has been shown that all optimal solutions of MOCO problems with linear objective functions can be found by an MCS enumeration procedure. In this paper, we show that the approach of MCS enumeration can also be applied to MOCO problems where objective functions are divisions of linear expressions. Hence, it is not necessary to use a linear approximation of these objective functions. Additionally, we also propose the integration of diversification techniques into the MCS enumeration process in order to find better approximations of the Pareto front of MOCO problems. Finally, experimental results on the Virtual Machine Consolidation (VMC) problem show the effectiveness of the proposed techniques.
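
As background for the Pareto-front approximation mentioned above (a sketch of my own, not code from the paper, and the example objective values are made up), a solution is on the Pareto front if no other solution is at least as good on every objective and strictly better on at least one:

    import numpy as np

    def pareto_front(objective_vectors):
        """Return the non-dominated rows, assuming all objectives are minimised."""
        points = np.asarray(objective_vectors, dtype=float)
        front = []
        for i, p in enumerate(points):
            dominated = any(
                np.all(q <= p) and np.any(q < p)
                for j, q in enumerate(points) if j != i
            )
            if not dominated:
                front.append(p)
        return np.array(front)

    # Two hypothetical minimisation objectives, e.g. energy cost vs. number of
    # migrations in a VMC-style setting; the last point is dominated by the first.
    print(pareto_front([[3, 5], [2, 7], [4, 4], [5, 5]]))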