Learning From What You Don't Observe

arXiv.org Artificial Intelligence

The process of diagnosis involves learning about the state of a system from various observations of symptoms or findings about the system. Sophisticated Bayesian (and other) algorithms have been developed to revise and maintain beliefs about the system as observations are made. Nonetheless, diagnostic models have tended to ignore some common-sense reasoning exploited by human diagnosticians; in particular, one can learn from which observations have not been made, in the spirit of conversational implicature. We describe two concepts for extracting information from the observations that are not made. First, some symptoms, if present, are more likely to be reported before others. Second, most human diagnosticians and expert systems are economical in their data gathering, searching first where they are more likely to find symptoms present. Thus, there is a desirable bias toward reporting symptoms that are present. We develop a simple model of these concepts that can significantly improve diagnostic inference.
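
The reporting-bias idea can be made concrete with a toy calculation. The sketch below assumes a single binary disease, a single binary finding, and invented reporting probabilities (none of these numbers come from the paper); it shows how silence about a finding that tends to be reported when present pulls the posterior below the prior, whereas ignoring the silence would leave the belief unchanged.

def posterior_given_silence(prior, sens, spec, p_report_if_present, p_report_if_absent):
    """P(disease | finding not reported) under a simple reporting-bias model."""
    p_find_d = sens              # P(finding present | disease)
    p_find_not_d = 1.0 - spec    # P(finding present | no disease)

    # P(no report | hypothesis) marginalizes over whether the finding is present:
    # silence is likelier when the finding is absent, because present findings
    # tend to be reported.
    def p_silence(p_finding):
        return ((1.0 - p_report_if_present) * p_finding
                + (1.0 - p_report_if_absent) * (1.0 - p_finding))

    num = p_silence(p_find_d) * prior
    den = num + p_silence(p_find_not_d) * (1.0 - prior)
    return num / den

# Invented numbers: present findings are reported 80% of the time, absent ones 5%.
print(posterior_given_silence(prior=0.10, sens=0.90, spec=0.95,
                              p_report_if_present=0.80, p_report_if_absent=0.05))
# Prints roughly 0.03; ignoring the silence would leave the belief at the prior, 0.10.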


Evidence Optimization Techniques for Estimating Stimulus-Response Functions

Neural Information Processing Systems

An essential step in understanding the function of sensory nervous systems is to characterize as accurately as possible the stimulus-response function (SRF) of the neurons that relay and process sensory information. One increasingly common experimental approach is to present a rapidly varying complex stimulus to the animal while recording the responses of one or more neurons, and then to directly estimate a functional transformation of the input that accounts for the neuronal firing. The estimation techniques usually employed, such as Wiener filtering or other correlation-based estimation of the Wiener or Volterra kernels, are equivalent to maximum likelihood estimation in a Gaussian-output-noise regression model. We explore the use of Bayesian evidence-optimization techniques to condition these estimates. We show that by learning hyperparameters that control the smoothness and sparsity of the transfer function it is possible to improve dramatically the quality of SRF estimates, as measured by their success in predicting responses to novel input.
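
As a rough illustration of evidence optimization (not the paper's exact smoothness/sparsity machinery), the sketch below fits a linear stimulus-response filter under Gaussian output noise and chooses a single prior-precision hyperparameter by maximizing the log marginal likelihood on synthetic data; the design matrix, filter shape, and noise level are all invented.

import numpy as np

rng = np.random.default_rng(0)
N, D = 500, 40                         # number of stimuli x filter dimensions
X = rng.standard_normal((N, D))        # stimulus design matrix
w_true = np.exp(-np.arange(D) / 8.0)   # a smooth, decaying "true" filter
sigma2 = 1.0
y = X @ w_true + np.sqrt(sigma2) * rng.standard_normal(N)

def log_evidence(alpha):
    # log p(y | alpha, sigma2) for y ~ N(0, sigma2*I + X X^T / alpha)
    C = sigma2 * np.eye(N) + (X @ X.T) / alpha
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

# Grid search over the prior precision; richer methods optimize many such
# hyperparameters (e.g., one per frequency or per coefficient).
alphas = np.logspace(-3, 3, 25)
best_alpha = alphas[np.argmax([log_evidence(a) for a in alphas])]

# Posterior mean filter under the selected hyperparameter.
A = best_alpha * np.eye(D) + (X.T @ X) / sigma2
w_map = np.linalg.solve(A, X.T @ y / sigma2)
print("selected alpha:", best_alpha, "filter error:", np.linalg.norm(w_map - w_true))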


Bayesian Case-Based Reasoning with Neural Networks

AAAI Conferences

A typical example of this approach is expressing knowledge by rules. The underlying assumption is that these representations are compact abstractions of the known individual instances of the knowledge concept to be encoded. These abstractions can be static or dynamic, i.e., they can be compiled directly into the program, or adaptively synthesized during application execution by a learning procedure. This approach works well for representing well-organized knowledge. However, such a summarization approach has faced substantial problems in applications where the concepts required for the knowledge are highly interconnected and have a large number of irregularities (exceptions), e.g., "common sense".


The Bayesian Basis of Common Sense Medical Diagnosis

AAAI Conferences

Eugene Charniak, Department of Computer Science, Brown University, Providence, Rhode Island 02912. While the mathematics of conditional probabilities in general, and Bayesian statistics in particular, would seem to offer a foundation for medical diagnosis (and other cases of decision making under uncertainty), such approaches have been rejected by most "artificial intelligence in medicine" researchers. Typically, Bayesian statistics have been rejected for the following reasons. The only way to escape from (1) is to impose absurd statistical independence assumptions [7,9]. And at any rate, Bayesian statistics only works for the single-disease situation [3,6]. Furthermore, while (3) seems to be valid, even there Bayesian statistics is perfectly compatible with various heuristic solutions to the multiple-disease problem. To reject Bayesian statistics on the basis of (3) would be like rejecting closed-form solutions to differential equations because the toughest ones must be solved numerically.
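
For concreteness, the single-disease, independence-assuming form of Bayesian diagnosis that these objections target is just a naive-Bayes posterior over mutually exclusive diseases; the sketch below uses invented priors and likelihoods purely for illustration.

def naive_bayes_posterior(priors, likelihoods, findings):
    """priors: {disease: P(d)}; likelihoods: {disease: {finding: P(f | d)}};
    findings: {finding: True/False as observed}. Assumes one disease at a time
    and conditionally independent findings."""
    scores = {}
    for d, p in priors.items():
        score = p
        for f, present in findings.items():
            pf = likelihoods[d][f]
            score *= pf if present else (1.0 - pf)
        scores[d] = score
    z = sum(scores.values())
    return {d: s / z for d, s in scores.items()}

# Invented numbers for illustration only.
priors = {"flu": 0.05, "cold": 0.20, "healthy": 0.75}
likelihoods = {
    "flu":     {"fever": 0.90, "cough": 0.80},
    "cold":    {"fever": 0.20, "cough": 0.70},
    "healthy": {"fever": 0.01, "cough": 0.05},
}
print(naive_bayes_posterior(priors, likelihoods, {"fever": True, "cough": True}))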


Constraining Influence Diagram Structure by Generative Planning: An Application to the Optimization of Oil Spill Response

arXiv.org Artificial Intelligence

This paper works through the optimization of a real-world planning problem with a combination of a generative planning tool and an influence diagram solver. The problem is taken from an existing application in the domain of oil spill emergency response. The planning agent manages constraints that order sets of feasible equipment-employment actions. This is mapped, at an intermediate level of abstraction, onto an influence diagram. In addition, the planner can apply a surveillance operator that determines the observability of the state (the unknown trajectory of the oil). The uncertain world state and the properties of the objective function are part of the influence diagram structure, but are not represented in the planning agent's domain. By exploiting this structure under the constraints generated by the planning agent, the complexity of solving the influence diagram is reduced considerably, and an optimal solution to the employment problem with respect to the objective function is found. Finding this optimum is equivalent to evaluating a range of plans simultaneously. This result is an example of bounded optimality, within the limitations of this hybrid generative-planner and influence-diagram architecture.
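
A rough caricature of the hybrid architecture, with invented actions, constraints, scenarios, and utilities (the paper's oil-spill model is not reproduced here): the planner's ordering constraints prune the space of equipment-employment plans, and an influence-diagram-style expected-utility evaluation selects the best surviving plan.

from itertools import permutations

actions = ["boom", "skimmer", "dispersant"]

# Planner-style ordering constraint (assumed): booms must be deployed before skimmers.
def feasible(plan):
    return plan.index("boom") < plan.index("skimmer")

# Chance node: two possible oil trajectories and their probabilities (invented).
trajectories = {"drifts_to_shore": 0.6, "stays_offshore": 0.4}

def utility(plan, trajectory):
    # Invented utilities: early containment matters more when oil drifts to shore.
    base = {"drifts_to_shore": 10.0, "stays_offshore": 40.0}[trajectory]
    bonus = 20.0 if trajectory == "drifts_to_shore" and plan[0] == "boom" else 0.0
    return base + bonus

def expected_utility(plan):
    return sum(p * utility(plan, t) for t, p in trajectories.items())

plans = [p for p in permutations(actions) if feasible(p)]
best = max(plans, key=expected_utility)
print(best, expected_utility(best))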