The 1956 Dartmouth summer research project on artificial intelligence was initiated by this August 31, 1955 proposal, authored by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The original typescript consisted of 17 pages plus a title page. Copies of the typescript are housed in the archives at Dartmouth College and Stanford University. The first 5 pages state the proposal, and the remaining pages give qualifications and interests of the four who proposed the study.
To build a machine that has "common sense" was once a principal goal in the field of artificial intelligence. But most researchers in recent years have retreated from that ambitious aim, each instead developing specialized methods that deal well with only particular classes of problems. We are convinced, however, that no single such method will ever turn out to be "best," and that instead, the powerful AI systems of the future will use a diverse array of resources that, together, will deal with a great range of problems. To build a machine that's resourceful enough to have humanlike common sense, we must develop ways to combine the advantages of multiple methods to represent knowledge, multiple ways to make inferences, and multiple ways to learn.
Minsky, Marvin L., Laske, Otto
The following excerpts are from an interview with Marvin Minsky which took place at his home in Brookline, Massachusetts, on January 23rd, 1991. The interview, which is included in its entirety as a Foreword in the book Understanding Music with AI: Perspectives on Music Cognition (edited by Mira Balaban, Kemal Ebcioglu, and Otto Laske), is a conversation about music, its peculiar features as a human activity, the special problems it poses for the scientist, and the suitability of AI methods for clarifying and/or solving some of these problems. The conversation is open-ended, and should be read accordingly, as a discourse to be continued at another time.
Minsky, Marvin L.
Engineering and scientific education condition us to expect everything, including intelligence, to have a simple, compact explanation. Today, some researchers who seek a simple, compact explanation hope that systems modeled on neural nets or some other connectionist idea will quickly overtake more traditional systems based on symbol manipulation. Others believe that symbol manipulation, with a history that goes back millennia, remains the only viable approach. AI is not like circuit theory and electromagnetism.
Minsky, Marvin L.
These are the voyages of the MIT Artificial Intelligence Laboratory, and these remarks may help to explain the context of this collection, though in many ways the memoranda speak quite clearly for themselves. My comments are not, in any case, to be regarded as history, for I have written them quite hastily, in much the same spirit as the memos themselves: it was our strategy in those early days to be unscholarly. We tended to assume, for better or for worse, that everything we did was so likely to be new that there was little need for caution, for reviewing the literature, or for double-checking anything. As luck would have it, that almost always turned out to be true.
Minsky, Marvin L.
Today, surrounded by so many automatic machines, industrial robots, and the R2-D2s of the Star Wars movies, most people think AI is much more advanced than it is. But still, many "computer experts" don't believe that machines will ever "really think." I think those specialists are too used to explaining that there's nothing inside computers but little electric currents. And there are many other reasons why so many experts still maintain that machines can never be creative, intuitive, or emotional, and will never really think, believe, or understand anything.
The MIT AI Laboratory has a long tradition of research in most aspects of Artificial Intelligence. Currently, the major foci include computer vision, manipulation, learning, English-language understanding, VLSI design, expert engineering problem solving, common-sense reasoning, computer architecture, distributed problem solving, models of human memory, programmer apprentices, and human education.