Minsky, Marvin L.


The St. Thomas Common Sense Symposium: Designing Architectures for Human-Level Intelligence

AI Magazine

To build a machine that has "common sense" was once a principal goal in the field of artificial intelligence. But most researchers in recent years have retreated from that ambitious aim. Instead, each has developed some specialized technique that deals well with its own class of problems but performs poorly at almost everything else. We are convinced, however, that no one such method will ever turn out to be "best," and that instead, the powerful AI systems of the future will use a diverse array of resources that, together, will deal with a great range of problems. To build a machine that's resourceful enough to have humanlike common sense, we must develop ways to combine the advantages of multiple methods to represent knowledge, multiple ways to make inferences, and multiple ways to learn.


A Conversation with Marvin Minsky

AI Magazine

The following excerpts are from an interview with Marvin Minsky that took place at his home in Brookline, Massachusetts, on January 23, 1991. The interview, which is included in its entirety as a Foreword to the book Understanding Music with AI: Perspectives on Music Cognition (edited by Mira Balaban, Kemal Ebcioglu, and Otto Laske), is a conversation about music, its peculiar features as a human activity, the special problems it poses for the scientist, and the suitability of AI methods for clarifying or solving some of these problems. The conversation is open-ended and should be read accordingly, as a discourse to be continued at another time.


Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy

AI Magazine

Engineering and scientific education condition us to expect everything, including intelligence, to have a simple, compact explanation. Today, some researchers who seek such an explanation hope that systems modeled on neural nets or some other connectionist idea will quickly overtake more traditional systems based on symbol manipulation. Others believe that symbol manipulation, with a history that goes back millennia, remains the only viable approach. Minsky subscribes to neither of these extreme views; instead, he argues that AI must use many approaches. AI is not like circuit theory and electromagnetism: it has nothing as wonderfully unifying as Kirchhoff's laws are to circuit theory or Maxwell's equations are to electromagnetism.


Why People Think Computers Can't

AI Magazine

Today, surrounded by so many automatic machines, industrial robots, and the R2-D2s of the Star Wars movies, most people think AI is much more advanced than it is. But still, many "computer experts" don't believe that machines will ever "really think." I think those specialists are too used to explaining that there's nothing inside computers but little electric currents. And there are many other reasons why so many experts still maintain that machines can never be creative, intuitive, or emotional, and will never really think, believe, or understand anything.


Research in Progress at the Massachusetts Institute of Technology Artificial Intelligence Laboratory

AI Magazine

The MIT AI Laboratory has a long tradition of research in most aspects of Artificial Intelligence. Currently, the major foci include computer vision, manipulation, learning, English-language understanding, VLSI design, expert engineering problem solving, common-sense reasoning, computer architecture, distributed problem solving, models of human memory, programmer apprentices, and human education.