### Efficient Methods for Multi-Objective Decision-Theoretic Planning

In decision-theoretic planning problems, such as (partially observable) Markov decision problems or coordination graphs, agents typically aim to optimize a scalar value function. However, in many real-world problems agents face multiple, possibly conflicting objectives. In such multi-objective problems, the value is a vector rather than a scalar, and we need methods that compute a coverage set, i.e., a set of solutions optimal for all possible trade-offs between the objectives. In this project we propose new multi-objective planning methods that compute the so-called convex coverage set (CCS): the coverage set for settings where policies can be stochastic or preferences are linear. We show that the CCS has favorable mathematical properties, and is typically much easier to compute than the Pareto front, which is often axiomatically assumed as the solution set for multi-objective decision problems.
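For intuition (this is a generic sketch, not one of the methods proposed in the project), the CCS of a finite set of candidate value vectors can be approximated by sweeping linear scalarization weights and recording, for each weight, the vector that maximizes the weighted sum. A minimal two-objective example with made-up value vectors:

```python
def approx_ccs(vectors, num_weights=101):
    """Approximate the convex coverage set (CCS) for two objectives
    by sweeping linear scalarization weights w = (w1, 1 - w1)."""
    ccs = set()
    for i in range(num_weights):
        w1 = i / (num_weights - 1)
        # keep the value vector maximizing the scalarized value w . v
        best = max(vectors, key=lambda v: w1 * v[0] + (1 - w1) * v[1])
        ccs.add(best)
    return ccs

# Hypothetical value vectors; (4, 4) is Pareto-dominated and never optimal.
candidates = [(0, 10), (4, 9), (7, 6), (9, 3), (10, 0), (4, 4)]
print(sorted(approx_ccs(candidates)))  # → [(0, 10), (4, 9), (7, 6), (9, 3), (10, 0)]
```

Here every undominated vector happens to lie on the upper convex hull, so the CCS equals the Pareto front; in general the CCS is a (often much smaller) subset of it.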

### Search Space Reduction and Russian Doll Search

In a constraint optimization problem (COP), many feasible valuations lead to the same objective value. This often means a huge search space and weak propagation between the objective and the problem variables. In this paper, we propose a different modeling and search strategy that focuses on the cost function. We show that by constructing a dual model on the objective variables, we obtain strong propagation between the objective variables and the problem variables, which allows search on the objective variables. We explain why and when searching on the objective variables can lead to large gains. We present a new Russian Doll Search algorithm, ORDS, which works on objective variables with dynamic variable ordering. Finally, we use the hard Still-Life optimization problem to demonstrate the benefits of switching to the objective-function model and of ORDS.
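Since ORDS builds on Russian Doll Search, a minimal sketch of the classic scheme (Verfaillie, Lemaître, and Schiex, 1996) may help: solve nested subproblems over suffixes of the variables, and use each subproblem's optimum as a lower bound when branching on the next-larger one. The toy weighted CSP, static variable order, and cost function below are illustrative assumptions, not the ORDS algorithm described in the paper.

```python
# Classic Russian Doll Search on a toy weighted CSP:
# minimize the sum of pairwise costs cost(j, i, a, b).

def russian_doll_search(n, domains, cost):
    INF = float("inf")
    rds = [0] * (n + 1)  # rds[i] = optimum of the subproblem on variables i..n-1

    def branch_and_bound(i, assignment, acc, best, first):
        # acc: accumulated cost of constraints inside the current subproblem
        if i == n:
            return min(best, acc)
        for a in domains[i]:
            # cost of the new value against the already-assigned variables
            step = sum(cost(j, i, assignment[j], a) for j in range(first, i))
            lb = acc + step + rds[i + 1]  # nested optimum as a lower bound
            if lb < best:
                assignment[i] = a
                best = branch_and_bound(i + 1, assignment, acc + step, best, first)
        return best

    for first in range(n - 1, -1, -1):  # solve suffixes {n-1}, {n-2, n-1}, ...
        rds[first] = branch_and_bound(first, {}, 0, INF, first)
    return rds[0]

# Toy instance: 3 binary variables, cost 1 per pair of equal values.
min_cost = russian_doll_search(3, [[0, 1]] * 3,
                               lambda j, i, a, b: 1 if a == b else 0)
print(min_cost)  # → 1 (three binary variables always share one equal pair)
```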

### Optimization Using R

Optimization is a technique for finding the best possible solution to a given problem from among all feasible solutions. It uses a rigorous mathematical model to identify the most efficient solution. To formulate an optimization problem, it is important to first identify an objective: a quantitative measure of performance, for example maximizing profit, minimizing time, minimizing cost, or maximizing sales.
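As a toy illustration of the modeling step (pick an objective, then optimize it), consider choosing a price to maximize profit. The demand curve and numbers below are hypothetical, and the sketch is in Python rather than the R tooling the section discusses; the formulation is the same in either language.

```python
# Objective: profit as a function of the decision variable (price).
# All numbers are made up, for illustration only.

def profit(price, unit_cost=10):
    demand = 100 - 2 * price          # assumed linear demand curve
    return (price - unit_cost) * demand

# Exhaustive search over candidate integer prices.
best_price = max(range(0, 51), key=profit)
print(best_price, profit(best_price))  # → 30 800
```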

### A global optimization algorithm for sparse mixed membership matrix factorization

Mixed membership factorization is a popular approach for analyzing data sets with within-sample heterogeneity. In recent years, several algorithms have been developed for mixed membership matrix factorization, but they only guarantee estimates from a local optimum. Here, we derive a global optimization (GOP) algorithm that provides a guaranteed $\epsilon$-global optimum for a sparse mixed membership matrix factorization problem. We test the algorithm on simulated data and find that it consistently bounds the global optimum across random initializations and explores multiple modes efficiently.

### Reinforcement Learning on Multiple Correlated Signals

As conflicts may exist between objectives, there is in general a need to identify (a set of) tradeoff solutions. The set of optimal, i.e. non-dominated, incomparable solutions is called the Pareto-front. We identify multi-objective problems with correlated objectives (CMOP) as a specific subclass of multi-objective problems, defined to contain those MOPs whose Pareto-front is so limited that one can barely speak of tradeoffs (Brys et al. 2014b). We prove that this modification preserves the total order, and thus also optimality, of policies, mainly relying on the results by Ng, Harada, and Russell (1999). This insight - that any MDP can be framed as a CMOMDP - significantly increases the importance of this problem class, as well as of techniques developed for it, as these could be used to solve regular single-objective MDPs faster and better, provided several meaningful shapings can be devised.
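For intuition, the Pareto-front of a finite set of value vectors can be computed by pairwise dominance checks; in a CMOP with strongly correlated objectives, this front collapses to very few points. A small sketch with made-up, maximization-oriented value vectors:

```python
def dominates(u, v):
    """u dominates v if u is >= v in every objective and > v in at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(vectors):
    """Keep the non-dominated (incomparable) value vectors."""
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u != v)]

# Hypothetical 2-objective values; (2, 7) and (4, 4) are dominated.
values = [(1, 9), (3, 7), (2, 7), (5, 5), (9, 1), (4, 4)]
print(pareto_front(values))  # → [(1, 9), (3, 7), (5, 5), (9, 1)]
```

This quadratic-time check is only illustrative; it treats policies abstractly as their value vectors and says nothing about how those values are learned.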