The last few years have seen a surge of interest in using techniques from Bayesian decision theory to address problems in AI. Decision theory provides a normative framework for representing and reasoning about decision problems under uncertainty. Within this framework, researchers in the uncertainty-in-AI community have been developing computational techniques for building rational agents, along with representations suited to engineering their knowledge bases. The articles cover the topics of inference in Bayesian networks, decision-theoretic planning, and qualitative decision theory. Here, I provide a brief introduction to Bayesian networks and then cover applications of Bayesian problem-solving techniques, knowledge-based model construction and structured representations, and the learning of graphical probability models.

Abbasnejad, Ehsan (Australian National University and NICTA)

Decision theory focuses on the problem of making decisions under uncertainty. This uncertainty arises from unknown aspects of the state of the world the decision maker is in, or from an unknown utility function over actions. The uncertainty can be modeled as a probability distribution capturing our belief about the world the decision maker is in. Upon making new observations, the decision maker becomes more confident in this model. In addition, if there is a prior belief about this uncertainty, perhaps obtained from similar experiments, Bayesian methods may be employed. The loss incurred by the decision maker can also be utilized for optimal action selection. Most machine learning algorithms, however, focus on only one of these aspects for learning and prediction: either learning the probabilistic model or minimizing the loss. In probabilistic models, approximate inference (the process of obtaining the desired model from the observations when exact inference is not tractable) does not consider the task loss. At the other end of the spectrum, the common practice in learning is to minimize the task loss without considering the uncertainty of the prediction model. We therefore investigate the intersection of decision theory and machine learning, considering both the uncertainty in the prediction model and the task loss.
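The combination the abstract describes, a posterior belief over the model together with a task loss, can be sketched in a minimal conjugate example. The Beta-Bernoulli model, the observation counts, and the loss values below are illustrative assumptions, not taken from the thesis:

```python
# Sketch: Bayes-optimal decision for a Bernoulli outcome with a Beta prior.
# Prior belief about the success probability p: Beta(alpha, beta).
alpha, beta = 2.0, 2.0

# New observations sharpen the belief (Beta is conjugate to the Bernoulli).
successes, failures = 7, 3
alpha_post, beta_post = alpha + successes, beta + failures
p_mean = alpha_post / (alpha_post + beta_post)  # posterior mean of p

# Asymmetric task loss: loss[a][y] is the cost of action a when outcome is y.
loss = {"act": {1: 0.0, 0: 5.0},    # acting fails badly if the outcome is 0
        "skip": {1: 2.0, 0: 0.0}}  # skipping forgoes a smaller gain

# Expected loss of each action under the posterior predictive P(y=1) = p_mean.
expected = {a: p_mean * l[1] + (1 - p_mean) * l[0] for a, l in loss.items()}
best = min(expected, key=expected.get)
```

The decision depends jointly on the updated belief and the loss: with a flatter posterior or a different loss table, the optimal action can flip, which is exactly the interaction between uncertainty and task loss that the thesis studies.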

So how do we determine the "best" decision? This requires that we first define some notion of what we want (what are we trying to do?). The formal object that we use to do this goes by many names depending on the field: I will refer to it as a loss function (\(\mathcal{L}\)), but the same general concept may alternatively be called a cost function, a utility function, an acquisition function, or any number of other things. The crucial idea is that this is a function that allows us to quantify how bad or good a given decision (\(a\)) is given some information (\(\theta\)). What does it mean to quantify?
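One concrete answer: the loss function assigns a number to every (decision, state) pair, and with a belief over states we can rank decisions by their expected loss. The loss matrix and belief below are made-up illustrative values:

```python
import numpy as np

# A hypothetical two-state, two-action problem: theta is the unknown state
# of the world, a is the decision. L[a, theta] quantifies how bad action a
# is when the true state turns out to be theta.
L = np.array([[0.0, 10.0],   # action 0: free if theta=0, costly if theta=1
              [1.0,  1.0]])  # action 1: a constant hedging cost

# Our belief about theta, expressed as a probability distribution.
belief = np.array([0.7, 0.3])  # P(theta=0) = 0.7, P(theta=1) = 0.3

# Expected loss of each action under the belief.
expected_loss = L @ belief   # [0.7*0 + 0.3*10, 0.7*1 + 0.3*1] = [3.0, 1.0]

best_action = int(np.argmin(expected_loss))  # action 1: hedging wins here
```

"Quantifying" is what makes the ranking possible: once every outcome has a number, "best" simply means the decision with the smallest expected loss under the current belief.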

Uncertain Case-Based Reasoning
Hsinyen Wei and Costas Tsatsoulis
Center for Excellence in Computer Aided System Engineering, Department of Electrical and Computer Engineering, The University of Kansas, Lawrence, KS 66045

Uncertainty in Case-Based Reasoning (CBR) can arise for three main reasons. First, information may simply be missing. For example, the problem domain may be so complex that it can only be represented incompletely. Even in simpler domains it is not always appropriate to describe a complex situation in every detail; instead, the functionality and the ease of acquisition of the information represented in a case tend to serve as the criteria for deciding how to represent it. Second, for different problems, different features of the world and the problem description will play different roles in achieving a solution.

The use of formal statistical methods to analyse quantitative data in data science has increased considerably over the last few years. One such approach, Bayesian Decision Theory (BDT), also known as Bayesian hypothesis testing or Bayesian inference, is a fundamental statistical approach that quantifies the tradeoffs between various decisions using the probability distributions and costs that accompany those decisions. In pattern recognition it is used to design classifiers under the assumption that the problem is posed in probabilistic terms and that all of the relevant probability values are known. Generally we do not have such perfect information, but it is a good place to start when studying machine learning, statistical inference, and detection theory in signal processing. BDT also has many applications in science, engineering, and medicine.
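Under that "all probabilities known" assumption, the classifier design reduces to Bayes' rule: pick the class whose prior times class-conditional likelihood is largest. A minimal sketch with two Gaussian class-conditional densities (the priors, means, and variances are assumed for illustration):

```python
from math import exp, pi, sqrt

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std sigma at x."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

# Known quantities, as BDT assumes: class priors and class-conditional
# densities p(x | w_i). Values here are illustrative.
priors = [0.6, 0.4]                # P(w0), P(w1)
params = [(0.0, 1.0), (2.0, 1.0)]  # (mean, std) for each class

def classify(x):
    # The posterior P(w_i | x) is proportional to p(x | w_i) * P(w_i);
    # choosing the larger product minimizes the probability of error.
    scores = [gaussian_pdf(x, mu, s) * p for (mu, s), p in zip(params, priors)]
    return 0 if scores[0] >= scores[1] else 1
```

For example, observations well below the midpoint of the two means go to class 0 and observations well above it to class 1; the decision boundary shifts toward the less probable class as the priors become more unequal.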