
AI Magazine

Furthermore, the main conceptual foundations of AI--namely, the knowledge representation hypothesis of Brian Smith (1982) and the physical symbol system hypothesis of Allen Newell (1980)--are not discussed at all. These hypotheses have been considered fundamental cornerstones of AI research, but they are now being questioned as posing strong limitations on AI (Dahlbäck 1989; Dreyfus 1972; Winograd and Flores 1986). Given this perspective, the author concludes that AI's essential methodology is a continuous attempt to overcome the formal constraints of computer science and philosophy without sacrificing rigor. Although I liked the author's perspective, and I wholly agree with his main conclusion, both are merely stated in the preface, and no further reference to them is given. Let us get a feel for what this first volume is really about.


Reviews of Books

AI Magazine

To understand how this rule works, let us return to the submarine example and assume that there are two groups of experts E1, .... As is pointed out in Zadeh (1979a), the Dempster rule of combination of evidence may lead to counterintuitive conclusions, such as P*(not A) = 1, suggesting that the object under consideration does not exist. This, in a nutshell, is the basic idea underlying the Dempster-Shafer theory. An important observation is in order at this juncture: P(A) and P*(A) are the degrees of belief and plausibility associated with the proposition that S is in A. To illustrate the combination of evidence, consider the following situation.
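The belief and plausibility measures P(A) and P*(A) can be made concrete in a few lines. Below is a minimal sketch in my own representation, not the review's: a mass function is a dict mapping frozensets of hypotheses to masses, and the example values are illustrative.

```python
def belief(m, a):
    """Bel(A), i.e. P(A): total mass committed to focal elements contained in A."""
    return sum(v for b, v in m.items() if b <= a)

def plausibility(m, a):
    """Pl(A), i.e. P*(A): total mass of focal elements that overlap A."""
    return sum(v for b, v in m.items() if b & a)

# Illustrative mass function over the frame {a, b, c}: 0.6 committed
# to {a}, 0.4 left uncommitted (assigned to the whole frame).
m = {frozenset('a'): 0.6, frozenset('abc'): 0.4}
```

Here belief(m, frozenset('a')) is 0.6 while plausibility(m, frozenset('a')) is 1.0; the gap between the two numbers is exactly the uncommitted mass.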



AI Magazine

Because it assumes so much previous knowledge, the book will not be useful to the casual reader. One would be at a disadvantage without a reasonable familiarity with predicate calculus and modal logic, AI planning formalisms, and the work of Perrault and Allen on interpreting speech acts (for example, Allen and Perrault [1980]; Perrault and Allen [1980]). Accordingly, the reader of this review should be warned that my point of view is that of a researcher (specifically, an academic researcher) rather than a system builder; your mileage might vary. No review of this book would be complete without some mention of the commentaries, critical pieces written by other workshop participants that follow groups of related papers. Each commentator did an excellent job.


Artificial Intelligence and Ethics: An Exercise in the Moral Imagination

AI Magazine

In a book written in 1964, God and Golem, Inc., Norbert Wiener predicted that the quest to construct computer-modeled artificial intelligence (AI) would come to impinge directly upon some of our most widely and deeply held religious and ethical values. It is certainly true that the idea of mind as artifact, the idea of a humanly constructed artificial intelligence, forces us to confront our image of ourselves. In the theistic tradition of Judeo-Christian culture, a tradition that is, to a large extent, our "fate," we were created in the image of God. Such is the scenario envisaged by some of the classic science fiction of the past, Shelley's Frankenstein, or the Modern Prometheus and the Capek brothers' R.U.R. (for Rossum's Universal Robots) being notable examples. Both seminal works share what Pamela McCorduck (1979), in her work Machines Who Think, calls the "Hebraic" attitude toward the AI enterprise. In contrast to what she calls the "Hellenic" fascination with, and openness toward, AI, the Hebraic attitude has been one of fear and warning: "You shall not make for yourself a graven image..." I don't think that the basic outline of Frankenstein needs to be recapitulated here. The possibility of constructing a personal AI raises many ethical questions. Perhaps it is the fear that we might succeed, perhaps it is the fear that we might create a Frankenstein, or perhaps it is the fear that we might become eclipsed, in a strange Oedipal drama, by our own creation.


A (Very) Brief History of Artificial Intelligence

AI Magazine

In this brief history, the beginnings of artificial intelligence are traced to philosophy, fiction, and imagination. Early inventions in electronics, engineering, and many other disciplines have influenced AI. Some early milestones include work in problem solving, which included basic work in learning, knowledge representation, and inference, as well as demonstration programs in language understanding, translation, theorem proving, associative memory, and knowledge-based systems. The article ends with a brief examination of influential organizations and current issues facing the field. Ever since Homer wrote of mechanical "tripods" waiting on the gods at dinner, imagined mechanical assistants have been a part of our culture.


A Simple View of the Dempster-Shafer Theory of Evidence and its Implication for the Rule of Combination

AI Magazine

The emergence of expert systems as one of the major areas of activity within AI has resulted in a rapid growth of interest within the AI community in issues relating to the management of uncertainty and evidential reasoning. During the past two years, in particular, the Dempster-Shafer theory of evidence has attracted considerable attention as a promising method of dealing with some of the basic problems arising in combination of evidence and data fusion. To develop an adequate understanding of this theory requires considerable effort and a good background in probability theory. There is, however, a simple way of approaching the Dempster-Shafer theory that requires only a minimal familiarity with relational models of data. For someone with a background in AI or database management, this approach has the advantage of relating in a natural way to the familiar frameworks of AI and databases.
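The rule of combination named in the title is itself short enough to sketch. The following is an illustrative Python sketch, not the article's own formulation; it assumes mass functions represented as dicts mapping frozensets of hypotheses to masses.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1 and m2 map frozensets of hypotheses (focal elements) to masses.
    Mass falling on an empty intersection is the conflict K; the
    surviving masses are renormalized by 1 - K.
    """
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the two sources are incompatible")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}
```

The division by 1 - K is the normalization step; under heavy conflict between sources, this step is precisely what can produce the counterintuitive results debated in the literature.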


A Review of Rules of Encounter: Designing Conventions for Automated Negotiation

AI Magazine

The main contribution of the book Rules of Encounter: Designing Conventions for Automated Negotiation, by Jeffrey S. Rosenschein and Gilad Zlotkin, is the formulation of a principled framework within which to study interactions among artificial heterogeneous agents. This framework is based on the theory of games, which is aimed at decision problems in which an agent's welfare depends not only on its own actions but also on the actions of other agents. The examples are numerous: The personal digital assistants (PDAs) that might one day keep track of their users' itineraries will have to negotiate with the PDAs of other people to adjust and synchronize meeting schedules. Software agents looking for the right kinds of information on the Internet on behalf of their users might have to negotiate with other such agents over access to resources. Computer agents that control a telecommunications network will have to interact with computers that control other networks and might find it beneficial to come to agreements with them.
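The game-theoretic setting, in which each agent's payoff depends on the joint action, is easy to illustrate in code. This is a generic sketch with made-up payoffs, not an example taken from the book: two agents each choose to concede or demand, and we search the payoff table for pure-strategy equilibria, joint actions from which neither agent can profitably deviate alone.

```python
from itertools import product

# Payoff table for a two-agent encounter (illustrative values):
# payoffs[(a1, a2)] = (utility to agent 1, utility to agent 2).
payoffs = {
    ('concede', 'concede'): (2, 2),
    ('concede', 'demand'):  (0, 3),
    ('demand',  'concede'): (3, 0),
    ('demand',  'demand'):  (1, 1),
}
actions = ['concede', 'demand']

def pure_nash_equilibria(payoffs, actions):
    """Return joint actions where neither agent gains by deviating alone."""
    eqs = []
    for a1, a2 in product(actions, actions):
        u1, u2 = payoffs[(a1, a2)]
        best1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)
        best2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)
        if best1 and best2:
            eqs.append((a1, a2))
    return eqs
```

In this particular table the only such point is ('demand', 'demand'), even though ('concede', 'concede') would make both agents better off; designing conventions that steer self-interested agents toward better joint outcomes is exactly the book's concern.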


AAAI News

AI Magazine

July 11, 1993 Washington, DC Participants: Pat Hayes, Danny Bobrow, Randy Davis, Barbara Grosz, Norm Nielsen, Joe Bates, Paul Cohen, Tom Dean, Johan de Kleer, Bob Engelmore, Ed Feigenbaum, Richard Fikes, Ken Ford, Mark Fox, Peter Friedland, Barbara Hayes-Roth, Jim Hendler, Elaine Kant, Phil Klahr, Benjamin Kuipers, Ramesh Patil, Candy Sidner, Bill Swartout, Katia Sycara, Beverly Woolf, Carol Hamilton Pat Hayes called the meeting to order with the introduction of the newly elected officer and councilors of AAAI. Randy Davis has been elected to a two-year term as President-Elect. Tom Dean, Bob Engelmore, Peter Friedland, and Ramesh Patil have all been elected to three-year terms as AAAI councilors. Hayes gave a special thanks to retiring councilors Tom Dietterich, Richard Fikes, Mark Fox, and Barbara Hayes-Roth for their generous donations of time and energy over the past three years. Hayes also presented Danny Bobrow with a special plaque, noting his many years of service to AAAI.


The Possibility of a Deep Learning Intelligence Explosion

@machinelearnbot

François Chollet argues for the impossibility of an intelligence explosion. It is a strong article, with the exception of the conclusion. Chollet is accurate in describing many of the obstacles that we expect to encounter in creating an advanced artificial general intelligence (AGI). These obstacles are as follows (I use my own categorization, but its mapping to Chollet's should be straightforward): The flaw in Chollet's article is that he believes the pace of progress to be linear. There is little evidence that this is true.


Educators on Artificial Intelligence: Here's the One Thing It Can't Do Well (2017-11-20)

#artificialintelligence

It isn't just the tech entrepreneurs and Hollywood directors who dream about the role that artificial intelligence can play, or will play, in everyday human life--educators have begun to join them. However, those dreams aren't always pleasant and may, in fact, sometimes turn into nightmares. If computer systems are able to perform tasks that humans have performed for thousands of years, will they render teachers and administrators a thing of the past? Or is artificial intelligence the secret to freeing up educators' time for other, non-routine tasks, like mentoring and spending more one-on-one time with students? To find out, I went straight to the source--eight educators, including superintendents, coaches, and teachers--and asked whether AI tickles their fancy or scares them straight.