Professor of Philosophy and Computer Science Henry E. Kyburg Dies
Henry E. Kyburg Jr., a renowned and respected professor of philosophy and computer science at the University of Rochester, died of acute pancreatitis Oct. 30 at the age of 79 at Strong Memorial Hospital. He was well known for his cutting-edge studies of uncertain inference, which is the human process of reaching conclusions, and data mining, the process by which computers search for information in data or draw conclusions from it. Kyburg, Burbank Professor of Intellectual and Moral Philosophy, was honored in 2007 with the University Award for Lifetime Achievement in Graduate Education. He was clearly admired by his students--who can be found working as pioneers themselves across all disciplines at research and educational institutions--for his insightful instruction, generous spirit, and relentless energy. "The last thing he said to me was, 'I would like a logic problem to work on,' because Henry was always scribbling, loved his work, and in general never stayed idle," said his wife Sarah Kyburg, who lived with her husband and eight children on their sustainable farm in Lyons, N.Y.
Knowledge and Uncertainty
One purpose -- quite a few thinkers would say the main purpose -- of seeking knowledge about the world is to enhance our ability to make good decisions. An item of knowledge that can make no conceivable difference with regard to anything we might do would strike many as frivolous. Whether or not we want to be philosophical pragmatists in this strong sense with regard to everything we might want to enquire about, it seems a perfectly appropriate attitude to adopt toward artificial knowledge systems. If it is granted that we are ultimately concerned with decisions, then some constraints are imposed on our measures of uncertainty at the level of decision making. If our measure of uncertainty is real-valued, then it isn't hard to show that it must satisfy the classical probability axioms. For example, if an act has a real-valued utility U(E) if the event E obtains, and the same real-valued utility if the denial of E obtains, so that U(E) = U(-E), then the expected utility of that act must be U(E), and that must be the same as the uncertainty-weighted average of the returns of the act, p·U(E) + q·U(-E), where p and q represent the uncertainty of E and -E respectively. But then we must have p + q = 1.
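The argument above can be checked numerically. The sketch below is not from the text; the payoffs and weights are invented, and chosen as exact binary fractions so the float comparisons are exact:

```python
# Numeric sketch of the constraint derived above: if an act returns the
# same utility u whether E or its denial obtains, its uncertainty-weighted
# average p*u + q*u can equal u only if p + q = 1.

def expected_utility(p, q, u_e, u_not_e):
    """Uncertainty-weighted average return of an act."""
    return p * u_e + q * u_not_e

u = 8.0            # same payoff either way: U(E) = U(-E) = u
p, q = 0.25, 0.75  # uncertainties of E and -E; note p + q = 1
assert expected_utility(p, q, u, u) == u  # equals U(E) because p + q = 1

# If p + q != 1, the identity fails, contradicting the requirement:
p_bad, q_bad = 0.25, 0.5
assert expected_utility(p_bad, q_bad, u, u) != u  # 6.0, not 8.0
```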
Computing Reference Classes
For any system with limited statistical knowledge, the combination of evidence and the interpretation of sampling information require the determination of the right reference class (or of an adequate one). This paper contributes the first frank discussion of how much of Kyburg's system is needed for that task. AI discussions of probability have perennially revolved around two problems: what to do with conflicting evidence, and how to get by without a lot of objective statistical knowledge. Hans Reichenbach left modern philosophers of probability with a single task: in order to determine an event's probability, determine the narrowest reference class to which the event belongs, and about which adequate statistics are known [Rei49]. Suppose I know about the next Mets game, "m", that it is one in which Dwight Gooden will pitch, "Dm", one to be played at home, "Hm", and one in which Keith Hernandez will bat, "Km"; I want to know the probability that the game will be a Mets victory, P("Vm"). I have statistics about (or an expert's degree of belief in) the percentage of Mets home games that are Mets victories. Some are willing to supply the missing numbers, but AI has left the age when inventing such numbers was condoned.
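Reichenbach's rule as described above can be sketched mechanically. Everything below is invented for illustration: the class names, counts, frequencies, and the adequacy threshold are hypothetical, and sample size stands in as a crude proxy for narrowness:

```python
# Hypothetical sketch of Reichenbach's rule: among the reference classes
# the event belongs to, pick the narrowest one about which "adequate"
# statistics are known. All numbers below are invented.

# (class description, sample size n, observed frequency of Mets victories)
candidates = [
    ("Mets games",                           500, 0.48),
    ("Mets home games",                      250, 0.55),
    ("Mets home games, Gooden pitching",      30, 0.70),
    ("... and Hernandez batting",              4, 0.75),  # too few to trust
]

ADEQUATE_N = 20  # invented adequacy threshold on sample size

def narrowest_adequate(classes, min_n=ADEQUATE_N):
    """Smallest adequate class, using sample size as a proxy for narrowness."""
    adequate = [c for c in classes if c[1] >= min_n]
    return min(adequate, key=lambda c: c[1])

name, n, freq = narrowest_adequate(candidates)
# The 30-game Gooden/home class wins; the 4-game class is excluded
# as inadequate rather than having its number invented.
```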
Higher Order Probabilities
A number of writers have supposed that for the full specification of belief, higher order probabilities are required. Some have even supposed that there may be an unending sequence of higher order probabilities of probabilities of probabilities.... In the present paper we show that higher order probabilities can always be replaced by the marginal distributions of joint probability distributions. We consider both the case in which higher order probabilities are of the same sort as lower order probabilities and that in which higher order probabilities are distinct in character, as when lower order probabilities are construed as frequencies and higher order probabilities are construed as subjective degrees of belief. In neither case do higher order probabilities appear to offer any advantages, either conceptually or computationally.
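The replacement described in the abstract can be illustrated with a toy second-order distribution (the numbers are invented): a distribution over first-order chances is re-expressed as a joint distribution over (chance hypothesis, outcome), and both the predictive probability and the higher-order weights are recovered as ordinary marginals:

```python
# Toy sketch: higher order probabilities re-expressed as marginals of a
# joint distribution. Numbers are invented for illustration.

second_order = {0.2: 0.5, 0.8: 0.5}  # P(chance of heads = p)

# Joint P(p, outcome) = P(p) * P(outcome | p)
joint = {}
for p, w in second_order.items():
    joint[(p, "heads")] = w * p
    joint[(p, "tails")] = w * (1 - p)

# Marginal over outcomes reproduces the predictive probability E[p]:
p_heads = sum(v for (p, o), v in joint.items() if o == "heads")
assert abs(p_heads - sum(p * w for p, w in second_order.items())) < 1e-12

# Marginal over chance hypotheses reproduces the second-order weights:
for p0, w in second_order.items():
    mass = sum(v for (p, o), v in joint.items() if p == p0)
    assert abs(mass - w) < 1e-12
```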
Why Do We Need Foundations for Modelling Uncertainties?
Surely we want solid foundations. What kind of castle can we build on sand? What is the point of devoting effort to balconies and minarets, if the foundation may be so weak as to allow the structure to collapse of its own weight? We want our foundations set on bedrock, designed to last for generations. Who would want an architect who cannot certify the soundness of the foundations of his buildings?
A Modification to Evidential Probability
Murtezaoğlu, Bülent, Kyburg, Henry E. Jr.
Selecting the right reference class and the right interval when faced with conflicting candidates and no possibility of establishing subset style dominance has been a problem for Kyburg's Evidential Probability system. Various methods have been proposed by Loui and Kyburg to solve this problem in a way that is both intuitively appealing and justifiable within Kyburg's framework. The scheme proposed in this paper leads to stronger statistical assertions without sacrificing too much of the intuitive appeal of Kyburg's latest proposal.
Some Problems for Convex Bayesians
Kyburg, Henry E. Jr., Pittarelli, Michael
The leading contender is Levi's convex Bayesianism. When the set contains only one function, convex conditionalization and E-admissibility reduce to their strict Bayesian counterparts. Thus, with respect to decision making and to representing and updating uncertainty, convex Bayesianism includes strict Bayesianism as a special case. There are, however, natural constraints on probability judgments that cannot be represented by convex sets of classical probability functions, and working with the convex hull of a nonconvex set of probability functions may result in unnecessary indecisiveness. Judgments of irrelevance (conditional irrelevance), that is, of probabilistic independence (conditional independence), are often made, are natural to make, can be made reliably, and provide well-known computational advantages [Pearl, 1988]; yet the set of probability functions satisfying such a judgment is not a convex set.
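The non-convexity point can be verified directly. The sketch below (distributions invented for illustration) mixes two joints under each of which two binary variables are independent; the convex combination is no longer a product distribution:

```python
# Illustration: the set of joint distributions under which two binary
# variables are independent is not convex. Component distributions are
# invented for illustration.

def product_dist(px, py):
    """Joint over (X, Y) in {0,1}^2 with X and Y independent."""
    return {(x, y): (px if x else 1 - px) * (py if y else 1 - py)
            for x in (0, 1) for y in (0, 1)}

def is_independent(j, tol=1e-12):
    """For binary X, Y, independence holds iff P(1,1) = P(X=1)P(Y=1)."""
    px = j[(1, 0)] + j[(1, 1)]
    py = j[(0, 1)] + j[(1, 1)]
    return abs(j[(1, 1)] - px * py) < tol

p1 = product_dist(0.9, 0.9)   # X, Y independent in each component
p2 = product_dist(0.1, 0.1)
mix = {k: 0.5 * p1[k] + 0.5 * p2[k] for k in p1}  # convex combination

assert is_independent(p1) and is_independent(p2)
assert not is_independent(mix)  # P(1,1) = 0.41, but P(X=1)P(Y=1) = 0.25
```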
Semantics for Probabilistic Inference
A number of writers (Joseph Halpern and Fahiem Bacchus among them) have offered semantics for formal languages in which inferences concerning probabilities can be made. Our concern is different. This paper provides a formalization of nonmonotonic inferences in which the conclusion is supported only to a certain degree. Such inferences are clearly 'invalid' since they must allow the falsity of a conclusion even when the premises are true. Nevertheless, such inferences can be characterized both syntactically and semantically. The 'premises' of probabilistic arguments are sets of statements (as in a database or knowledge base), the conclusions categorical statements in the language. We provide standards both for this form of inference, for which high probability is required, and for an inference in which the conclusion is qualified by an intermediate interval of support.
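The two standards mentioned at the end can be caricatured in a few lines. This is a minimal sketch, not the paper's formalization: the threshold, the interval representation, and the classification labels are all invented:

```python
# Minimal sketch (details invented) of the two standards above: full
# inference requires high probability; otherwise the conclusion is
# qualified by its intermediate interval of support.

def support(interval, threshold=0.95):
    """Classify a conclusion by its interval [lo, hi] of probability."""
    lo, hi = interval
    if lo >= threshold:
        return "accept"  # high probability required for full inference
    return f"qualified by interval [{lo}, {hi}]"

assert support((0.97, 0.99)) == "accept"
assert support((0.6, 0.8)) == "qualified by interval [0.6, 0.8]"
```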
Probabilistic Acceptance
The idea of fully accepting statements when the evidence has rendered them probable enough faces a number of difficulties. We leave the interpretation of probability largely open, but attempt to suggest a contextual approach to full belief. We show that the difficulties of probabilistic acceptance are not as severe as they are sometimes painted, and that though there are oddities associated with probabilistic acceptance they are in some instances less awkward than the difficulties associated with other nonmonotonic formalisms. We show that the structure at which we arrive provides a natural home for statistical inference.