The idea has an obvious appeal to AI workers, who rely heavily on representation languages. Moore and Hendrix (1982) gave the classic statement of the case for a sentential theory of attitudes. Haas (1986), Perlis (1988), and Morgenstern (1987) are among the authors who have developed sentential theories and applied them to problems in AI. Konolige (1986) proposed a resolution theorem proving algorithm for his version of the sentential theory, and he proved that this algorithm is sound and complete. This is the only known technique for reasoning efficiently in a sentential theory of attitudes. Not surprisingly, he had to limit the expressive power of his logic in order to achieve efficiency. I will criticize his treatment of one problem: quantification into the scope of attitudes. I will argue that in this area Konolige's logic is clearly too weak. In the next section I will sketch a new logic that overcomes the limitations of Konolige's system.

Quantification into the scope of attitudes is a difficult problem for any theory of attitudes. The problem arises when a quantifier stands outside the scope of an attitude operator and binds a variable that appears inside the scope of that operator. Suppose John knows who the president of IBM is. We might try to represent this information as follows.
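The formula the passage introduces is cut off here; a standard quantifying-in rendering (predicate names are ours, not the paper's) places the existential quantifier outside the knowledge operator while its variable occurs inside:

```latex
\exists x \; \mathit{Know}\bigl(\mathit{John},\; \mathit{President}(\mathit{IBM}) = x\bigr)
```

The difficulty for a sentential theory is that the second argument of Know is supposed to be a sentence, yet here it contains a free variable bound from outside, so it does not denote any one sentence.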
LaVictoire, Patrick (Quixey) | Fallenstein, Benja (Machine Intelligence Research Institute) | Yudkowsky, Eliezer (Machine Intelligence Research Institute) | Barasz, Mihaly (Nilcons) | Christiano, Paul (University of California at Berkeley) | Herreshoff, Marcello (Google)
Applications of game theory often neglect the fact that real-world agents normally have some amount of out-of-band information about each other. We consider the limiting case of a one-shot Prisoner's Dilemma between algorithms with read-access to one another's source code. Previous work has shown that cooperation is possible at a Nash equilibrium in this setting, but existing constructions require the interacting agents to be identical or near-identical. We show that a natural class of agents is able to achieve mutual cooperation at Nash equilibrium without any prior coordination of this sort.
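The "identical agents" constructions the abstract refers to can be illustrated with a toy sketch. The names below are ours, and comparing compiled bytecode is our stand-in for the paper's read-access to source code:

```python
def clique_bot(opponent):
    """Cooperate iff the opponent is an exact copy of us.

    The setting grants read-access to the opponent's source code; as a
    self-contained stand-in, we compare compiled bytecode, which works
    without source files on disk.  Returns "C" (cooperate) or "D" (defect).
    """
    if opponent.__code__.co_code == clique_bot.__code__.co_code:
        return "C"
    return "D"

def defect_bot(opponent):
    """Always defects, regardless of the opponent."""
    return "D"

# CliqueBot recognizes itself and cooperates, but defects against
# everything else -- mutual cooperation only between identical agents,
# which is exactly the limitation the abstract sets out to remove.
print(clique_bot(clique_bot))  # C
print(clique_bot(defect_bot))  # D
print(defect_bot(clique_bot))  # D
```

Unilateral deviation from (clique_bot, clique_bot) to any other program forfeits the opponent's cooperation, which is why mutual cooperation is a Nash equilibrium here despite the one-shot setting.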
Computational philosophy is the use of mechanized computational techniques to unearth philosophical insights that are either difficult or impossible to find using traditional philosophical methods. Computational metaphysics is computational philosophy with a focus on metaphysics. In this paper, we (a) develop results in modal metaphysics whose discovery was computer assisted, and (b) conclude that these results work not only to the obvious benefit of philosophy but also, less obviously, to the benefit of computer science, since the new computational techniques that led to these results may be more broadly applicable within computer science. The paper includes a description of our background methodology and how it evolved, and a discussion of our new results.
We will discuss the question of whether artificial intelligence can contribute to a better understanding of human cognition. We will introduce two examples in which AI models provide explanations for certain cognitive abilities: the first example examines aspects of analogical reasoning, and the second discusses a possible solution for learning first-order logical theories by neural networks. We will argue that artificial intelligence can in fact contribute to a better understanding of human cognition.