Collaborating Authors

Planning & Scheduling

Goal Setting For Students: Plan And Achieve Your Goals


Are you a student struggling to set and achieve your goals? This goal setting course will quickly allow you to start setting personal and professional goals to boost your success. As a student, you can start seeing a path to the success you hope for: you will develop a goal setting mindset, find out what you want in life, and create a concrete action plan to reach your targets. In this course I will teach you everything you need to know about setting clear goals. You'll learn how to use goal setting to create an action plan focused on your desired results and to align your daily actions with your goals.

Finnish artificial intelligence company Plain Complex raises €150k from angel investors


Plain Complex is a Finnish startup company founded in 2021, although initial product development started three years earlier. As an anesthesiologist, Sasu Liuhanen repeatedly witnessed the challenges of planning nurses' shifts in his daily work: the process was slow, required a lot of manual work, and the quality of the rosters all too often left a lot to be desired. To solve the problem, Sasu Liuhanen, an experienced anesthesiologist and software developer, Tuomo Peltola, a seasoned professional in health-tech sales and marketing, and Stefano Campadello, a professional in business development and information technology, founded a company and are now commercializing its artificial intelligence-based roster planning software.

€150k from Finnish angel investors

The company's first angel investment round was completed with four renowned angel investors. Ali Omar (FiBAN business angel of the year 2019), Reima Linnanvirta (chair of the board, FiBAN), Henry Nilert (founder of IoBox, FiBAN angel investor), and Pekka Ylitalo (Dimerent) made a €150,000 seed investment in the company.

- Roster planning affects a great number of nurses and their families. Hence, the quality of the rosters has an immense effect on employees' work-life balance and their well-being. Artificial intelligence is a true game-changer and enables finding optimal shifts for each employee, says Ali Omar.

- Employees' well-being is the focus of Plain Complex, but at the same time, an organization can achieve significant cost savings thanks to the uniform quality and fairness of the rosters. Using artificial intelligence is a true win-win, says FiBAN's chair of the board, Reima Linnanvirta.
Artificial intelligence improves well-being at work and brings cost savings to healthcare

An ongoing pilot in a large Finnish hospital has already shown that artificial intelligence can plan rosters that let employees combine shift work and personal life in a way that was not possible before. Cutting the previously long and tedious planning process from days or even weeks down to a few minutes opens up new and unseen opportunities.

- A good roster needs to comply with all applicable laws, collective agreements, organizational requirements, criteria for ergonomic design, and the employees' personal wishes and preferences. Such a puzzle is often extremely difficult to solve, and frequently it is the employees' wishes that have to give way. Artificial intelligence can change all this and solve the puzzle in a way that everyone wins. A plan that takes all the aforementioned aspects into account is ready in minutes, says Sasu Liuhanen, CEO and co-founder of the company.

More information:
Sasu Liuhanen, CEO, Co-Founder, Plain Complex, 040-516
Antti Viitanen, Deal Flow Manager, FiBAN, +358 45 2565

NICE: Robust Scheduling through Reinforcement Learning-Guided Integer Programming Artificial Intelligence

Integer programs provide a powerful abstraction for representing a wide range of real-world scheduling problems. Despite their ability to model general scheduling problems, solving large-scale integer programs (IP) remains a computational challenge in practice. The incorporation of more complex objectives such as robustness to disruptions further exacerbates the computational challenge. We present NICE (Neural network IP Coefficient Extraction), a novel technique that combines reinforcement learning and integer programming to tackle the problem of robust scheduling. More specifically, NICE uses reinforcement learning to approximately represent complex objectives in an integer programming formulation. We use NICE to determine assignments of pilots to a flight crew schedule so as to reduce the impact of disruptions. We compare NICE with (1) a baseline integer programming formulation that produces a feasible crew schedule, and (2) a robust integer programming formulation that explicitly tries to minimize the impact of disruptions. Our experiments show that, across a variety of scenarios, NICE produces schedules resulting in 33% to 48% fewer disruptions than the baseline formulation. Moreover, in more severely constrained scheduling scenarios in which the robust integer program fails to produce a schedule within 90 minutes, NICE is able to build robust schedules in less than 2 seconds on average.
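The core idea of folding learned coefficients into an IP objective can be illustrated with a toy sketch. All pilot names, flight names, and risk values below are hypothetical, and a brute-force search stands in for a real IP solver (NICE itself uses a trained network and a proper IP formulation):

```python
from itertools import product

# Toy stand-in for the learned network: maps a (pilot, flight) pair to a
# disruption-risk coefficient that the objective will minimize.
# (Hypothetical values; in NICE these come from a trained RL policy.)
RISK = {
    ("p1", "f1"): 0.2, ("p1", "f2"): 0.9,
    ("p2", "f1"): 0.6, ("p2", "f2"): 0.1,
    ("p3", "f1"): 0.5, ("p3", "f2"): 0.5,
}

def best_assignment(pilots, flights, max_flights_per_pilot=1):
    """Brute-force the tiny 'integer program': assign one pilot per
    flight, minimizing the sum of the learned coefficients."""
    best, best_cost = None, float("inf")
    for choice in product(pilots, repeat=len(flights)):
        # Feasibility: no pilot may exceed the per-pilot flight cap.
        if any(choice.count(p) > max_flights_per_pilot for p in pilots):
            continue
        cost = sum(RISK[(p, f)] for p, f in zip(choice, flights))
        if cost < best_cost:
            best, best_cost = dict(zip(flights, choice)), cost
    return best, best_cost

assignment, cost = best_assignment(["p1", "p2", "p3"], ["f1", "f2"])
```

At real scale the same objective would be handed to an IP solver rather than enumerated, which is exactly where the computational challenge described above arises.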

MCTS Based Agents for Multistage Single-Player Card Game Artificial Intelligence

The article presents the use of Monte Carlo Tree Search algorithms for the card game Lord of the Rings. The main challenge was the complexity of the game mechanics, in which each round consists of 5 decision stages and 2 random stages. To test various decision-making algorithms, a game simulator has been implemented. The research covered an agent based on expert rules, one using flat Monte Carlo search, as well as complete MCTS-UCB. Moreover, different playout strategies have been compared. As a result of the experiments, an optimal (assuming limited time) combination of algorithms was formulated. The developed MCTS-based method demonstrated an advantage over the agent with expert knowledge.
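The selection step of MCTS-UCB follows the standard UCB1 rule, which can be sketched minimally as follows (toy node representation, not the article's implementation):

```python
import math

def ucb1(child_value_sum, child_visits, parent_visits, c=1.41):
    """UCB1 score used by MCTS selection: mean value (exploitation)
    plus a visit-count bonus (exploration)."""
    if child_visits == 0:
        return float("inf")  # unvisited children are tried first
    mean = child_value_sum / child_visits
    return mean + c * math.sqrt(math.log(parent_visits) / child_visits)

def select_child(children, parent_visits):
    """Pick the index of the child with the highest UCB1 score.
    `children` is a list of (value_sum, visits) pairs."""
    scores = [ucb1(v, n, parent_visits) for v, n in children]
    return scores.index(max(scores))
```

Flat Monte Carlo search, by contrast, skips the tree and applies this kind of bandit rule only at the root.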

A dynamic programming algorithm for informative measurements and near-optimal path-planning Artificial Intelligence

Observing the outcomes of a sequence of measurements usually increases our knowledge about the state of a particular system we might be interested in. An informative measurement is the most efficient way of gaining this information, having the largest possible statistical dependence between the state being measured and the observed measurement outcome. Lindley first introduced the notion of the amount of information in an experiment, and suggested the following greedy rule for experimentation: perform that experiment for which the expected gain in information is the greatest, and continue experimentation until a preassigned amount of information has been attained [Lindley, 1955]. Greedy methods are still the most common approaches for finding informative measurements, being both simple to implement and efficient to compute. For example, in a weighing problem where an experimenter has a two-pan balance and is given a set of balls of equal weight except for a single odd ball that is heavier or lighter than the others (see Figure 1), the experimenter would like to find the odd ball in the fewest weighings. MacKay suggested that for useful information to be gained as quickly as possible, each stage of an optimal measurement sequence should have measurement outcomes as close as possible to equiprobable [MacKay, 2003].
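MacKay's equiprobable-outcome heuristic can be made concrete on a toy three-ball version of the odd-ball puzzle, scoring each candidate weighing by the Shannon entropy of its outcome distribution (a simplified illustration, not the paper's algorithm):

```python
import math
from collections import Counter

def outcome_entropy(hypotheses, outcome_of):
    """Shannon entropy (bits) of a measurement's outcome distribution,
    assuming all hypotheses are equally likely a priori."""
    counts = Counter(outcome_of(h) for h in hypotheses)
    n = len(hypotheses)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy 3-ball puzzle: hypothesis (i, s) means ball i is the odd one and is
# heavier (s = +1) or lighter (s = -1) than the rest.
hypotheses = [(i, s) for i in range(3) for s in (+1, -1)]

def weigh(left, right):
    """Measurement: ball `left` on one pan, ball `right` on the other.
    Outcome is +1 / 0 / -1 for left-heavier / balanced / left-lighter."""
    def outcome(h):
        ball, sign = h
        if ball == left:
            return sign
        if ball == right:
            return -sign
        return 0
    return outcome

# Greedy rule: choose the weighing whose outcomes are closest to
# equiprobable, i.e. with maximum entropy (expected information gain).
best = max([(0, 1), (0, 2), (1, 2)],
           key=lambda w: outcome_entropy(hypotheses, weigh(*w)))
```

With three balls every pairwise weighing splits the six hypotheses into three equiprobable outcomes, so each achieves the maximum entropy of log2(3) bits; with more balls the entropies differ and the greedy rule becomes selective.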

Safe-Planner: A Single-Outcome Replanner for Computing Strong Cyclic Policies in Fully Observable Non-Deterministic Domains Artificial Intelligence

Replanners are efficient methods for solving non-deterministic planning problems. Despite showing good scalability, existing replanners often fail to solve problems involving a large number of misleading plans, i.e., weak plans that do not lead to strong solutions but, due to their minimal lengths, are likely to be found at every replanning iteration. The poor performance of replanners on such problems is due to their all-outcome determinization: when compiling from non-deterministic to classical, they include all compiled classical operators in a single deterministic domain, which leads replanners to continually generate misleading plans. We introduce an offline replanner, called Safe-Planner (SP), that relies on a single-outcome determinization to compile a non-deterministic domain into a set of classical domains, and on ordering heuristics for ranking the obtained classical domains. The proposed single-outcome determinization and the heuristics allow for alternating between different classical domains. We show experimentally that this approach allows SP to avoid generating misleading plans and instead generate weak plans that directly lead to strong solutions. The experiments show that SP outperforms state-of-the-art non-deterministic solvers by solving a broader range of problems. We also validate the practical utility of SP in real-world non-deterministic robotic tasks.
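The difference between all-outcome and single-outcome determinization can be sketched on a toy non-deterministic operator (hypothetical STRIPS-like dictionaries for illustration, not SP's actual compilation code):

```python
# Toy non-deterministic operator: name, preconditions, and a list of
# possible effect sets (the environment picks one at run time).
ND_OPERATOR = {
    "name": "move",
    "pre": {"at_a"},
    "effects": [{"at_b"}, {"at_a"}],  # may succeed or stay put
}

def all_outcome_determinization(op):
    """Classical replanning style: every outcome becomes its own
    classical operator, all mixed into ONE deterministic domain."""
    return [{"name": f"{op['name']}_o{i}", "pre": op["pre"], "eff": eff}
            for i, eff in enumerate(op["effects"])]

def single_outcome_determinizations(op):
    """Safe-Planner style: ONE classical domain per outcome, so the
    replanner can alternate between domains instead of mixing outcomes."""
    return [[{"name": op["name"], "pre": op["pre"], "eff": eff}]
            for eff in op["effects"]]
```

In the all-outcome domain a planner can freely pick whichever compiled operator yields the shortest (possibly misleading) plan; keeping the outcomes in separate domains removes that temptation.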

Distributed Mission Planning of Complex Tasks for Heterogeneous Multi-Robot Teams Artificial Intelligence

In this paper, we propose a distributed multi-stage optimization method for planning complex missions for heterogeneous multi-robot teams. This class of problems involves tasks that can be executed in different ways and are associated with cross-schedule dependencies that constrain the schedules of the different robots in the system. The proposed approach involves a multi-objective heuristic search of the mission, represented as a hierarchical tree that defines the mission goal. This procedure outputs several favorable ways to fulfill the mission, which directly feed into the next stage of the method. We propose a distributed metaheuristic based on evolutionary computation to allocate tasks and generate schedules for the set of chosen decompositions. The method is evaluated in a simulation setup of an automated greenhouse use case, where we demonstrate the method's ability to adapt the planning strategy depending on the available robots and the given optimization criteria.

Optimal Path Planning of Autonomous Marine Vehicles in Stochastic Dynamic Ocean Flows using a GPU-Accelerated Algorithm Artificial Intelligence

Autonomous marine vehicles play an essential role in many ocean science and engineering applications. Planning time and energy optimal paths for these vehicles to navigate in stochastic dynamic ocean environments is essential to reduce operational costs. In some missions, they must also harvest solar, wind, or wave energy (modeled as a stochastic scalar field) and move in optimal paths that minimize net energy consumption. Markov Decision Processes (MDPs) provide a natural framework for sequential decision-making for robotic agents in such environments. However, building a realistic model and solving the modeled MDP becomes computationally expensive in large-scale real-time applications, warranting the need for parallel algorithms and efficient implementation. In the present work, we introduce an efficient end-to-end GPU-accelerated algorithm that (i) builds the MDP model (computing transition probabilities and expected one-step rewards); and (ii) solves the MDP to compute an optimal policy. We develop methodical and algorithmic solutions to overcome the limited global memory of GPUs by (i) using a dynamic reduced-order representation of the ocean flows, (ii) leveraging the sparse nature of the state transition probability matrix, (iii) introducing a neighbouring sub-grid concept and (iv) proving that it is sufficient to use only the stochastic scalar field's mean to compute the expected one-step rewards for missions involving energy harvesting from the environment; thereby saving memory and reducing the computational effort. We demonstrate the algorithm on a simulated stochastic dynamic environment and highlight that it builds the MDP model and computes the optimal policy 600-1000x faster than conventional CPU implementations, making it suitable for real-time use.
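The MDP-solving step (ii) is, at its core, value iteration over a sparse transition model. A minimal CPU sketch on a toy two-state "ocean" MDP (hypothetical states and rewards, not the paper's GPU implementation):

```python
def value_iteration(transitions, rewards, gamma=0.95, tol=1e-8):
    """Tabular value iteration over a sparse MDP.
    transitions[s][a] is a list of (next_state, probability) pairs;
    rewards[s][a] is the expected one-step reward."""
    states = list(transitions)
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                rewards[s][a] + gamma * sum(p * V[s2] for s2, p in succ)
                for a, succ in transitions[s].items()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy environment: from "sea", "go" reaches the "goal" with probability
# 0.8 (currents may push the vehicle back); "wait" burns a little energy.
transitions = {
    "sea": {"go": [("goal", 0.8), ("sea", 0.2)], "wait": [("sea", 1.0)]},
    "goal": {"stay": [("goal", 1.0)]},
}
rewards = {"sea": {"go": -1.0, "wait": -0.1}, "goal": {"stay": 0.0}}
V = value_iteration(transitions, rewards)
```

Storing only the nonzero `(next_state, probability)` pairs is the same sparsity the paper exploits in point (ii) of its memory-saving strategy, just at a trivially small scale.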

Generating Active Explicable Plans in Human-Robot Teaming Artificial Intelligence

Intelligent robots are redefining a multitude of critical domains but are still far from being fully capable of assisting human peers in day-to-day tasks. An important requirement of collaboration is for each teammate to maintain and respect an understanding of the others' expectations of itself; a lack of such understanding may lead to serious issues such as loose coordination between teammates, reduced situation awareness, and ultimately teaming failures. Hence, it is important for robots to behave explicably by meeting the human's expectations. One of the challenges here is that the human's expectations are often hidden and can change dynamically as the human interacts with the robot. However, existing approaches to generating explicable plans often assume that the human's expectations are known and static. In this paper, we propose the idea of active explicable planning to relax this assumption. We apply a Bayesian approach to model and predict dynamic human beliefs and expectations to make explicable planning more anticipatory. We hypothesize that active explicable plans can be more efficient and more explicable at the same time, compared to explicable plans generated by existing methods. In our experimental evaluation, we verify that our approach generates more efficient explicable plans while successfully capturing the dynamic belief change of the human teammate.
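The Bayesian modeling of a hidden, changing expectation can be illustrated with a single belief update step (the hypotheses, observations, and likelihood values below are hypothetical and are not the paper's model):

```python
def bayes_update(prior, likelihood, observation):
    """One Bayesian update of a belief over the human's hidden expectation.
    prior: {hypothesis: probability};
    likelihood(obs, hypothesis) -> P(obs | hypothesis)."""
    unnorm = {h: p * likelihood(observation, h) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Toy model: the human either expects the robot to take the "short" route
# or the "safe" route; a nod after a cautious move is evidence for "safe".
prior = {"short": 0.5, "safe": 0.5}

def likelihood(obs, h):
    # Hypothetical observation model for illustration only.
    table = {("nod", "safe"): 0.9, ("nod", "short"): 0.3,
             ("frown", "safe"): 0.1, ("frown", "short"): 0.7}
    return table[(obs, h)]

posterior = bayes_update(prior, likelihood, "nod")
```

Repeating such updates as interaction unfolds is what lets a planner track a dynamic expectation instead of assuming a known, static one.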

Hierarchical Policy for Non-prehensile Multi-object Rearrangement with Deep Reinforcement Learning and Monte Carlo Tree Search Artificial Intelligence

Non-prehensile multi-object rearrangement is a robotic task of planning feasible paths and transferring multiple objects to their predefined target poses without grasping. It needs to consider how each object reaches the target and the order of object movement, which significantly deepens the complexity of the problem. To address these challenges, we propose a hierarchical policy to divide and conquer for non-prehensile multi-object rearrangement. In the high-level policy, guided by a designed policy network, the Monte Carlo Tree Search efficiently searches for the optimal rearrangement sequence among multiple objects, which benefits from imitation and reinforcement. In the low-level policy, the robot plans the paths according to the order of path primitives and manipulates the objects to approach the goal poses one by one. We verify through experiments that the proposed method can achieve a higher success rate, fewer steps, and shorter path length compared with the state-of-the-art.