Part of NSF's Recovering MIT's AI Film History Project. "Here you will find a rough chronology of some of AI's most influential projects. It is intended for both non-scientists and those ready to continue experimentation and research tomorrow. Included is a taste of who the main players have been, concepts they and their projects have explored and how the goals of AI have evolved and changed over time. Many will be surprised that some of what we now consider obvious tools like search engines, spell check and spam filters are all outcroppings of AI research."
Computer scientist Brian Randell was the man who started uncovering the history of Colossus.
That history had to be prised out of the archives because official efforts to cover up its success worked so well. Thousands of people worked in the huts at Bletchley Park during WWII on code-cracking but only a handful were involved with Colossus and fewer still knew everything about it.
Turing's genius was to compare machines with humans. To any outsider the Proceedings of the London Mathematical Society was an unlikely place for such a beginning, but the modern era of computers began on May 28, 1936, when the journal's editors received a paper with the rather cumbersome title, "On Computable Numbers, With an Application to the Entscheidungsproblem" by Alan M. Turing. Turing's 1936 paper on computable numbers hit that rare bull's eye where philosophy and discovery overlap. But unlike Church, who used the standard abstractions of pure mathematics in his argument, Turing wrote of machines, algorithms, ink, paper tape, and computation. (Before Turing, a "computer" referred not to a machine, but to a human being who calculated with paper and pencil.)
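The kind of machine Turing described — a tape, a read/write head, and a finite table of rules — can be sketched in a few lines. The simulator below is our own minimal illustration, not code from any historical source; the rule table encodes the first example machine in "On Computable Numbers", which prints 0 and 1 on alternate squares of the tape.

```python
# Minimal sketch of a Turing machine: a tape, a head, and a finite rule table.
# This is an illustrative toy, not a reconstruction of any historical program.
def run_turing_machine(rules, state, steps):
    tape, head = {}, 0
    for _ in range(steps):
        symbol = tape.get(head, ' ')              # blank cells read as ' '
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(tape.get(i, ' ') for i in range(min(tape), max(tape) + 1)).rstrip()

# Turing's first example machine: print 0, 1, 0, 1, ... on alternate squares.
rules = {
    ('b', ' '): ('0', 'R', 'c'),
    ('c', ' '): (' ', 'R', 'e'),
    ('e', ' '): ('1', 'R', 'f'),
    ('f', ' '): (' ', 'R', 'b'),
}
print(run_turing_machine(rules, 'b', 8))          # → "0 1 0 1"
```

The dictionary-as-tape trick keeps the tape unbounded in both directions, mirroring Turing's infinite paper tape.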
She is a Regents' Professor of Cognitive Science at the Georgia Institute of Technology with joint appointments in the Ivan Allen College of Liberal Arts School of Public Policy and the College of Computing School of Interactive Computing. Nersessian is one of the pioneers of the interdisciplinary field of cognitive studies of science and technology, which comprises psychologists, philosophers of science, artificial intelligence researchers and cognitive anthropologists. So, I was inspired to study math and physics, but in retrospect this was the beginning of my life as a philosopher and cognitive scientist. I was hooked. I changed to a double major in physics and philosophy, and headed to graduate school to study the philosophy of physics.
An internationally recognized pioneer in the field is Judea Pearl, a professor at UCLA, who on March 29 will add to his string of honors and awards the Harvey Prize in Science and Technology from the Technion-Israel Institute of Technology. In 2008, on receiving the Benjamin Franklin Medal in Computer and Cognitive Science from the Franklin Institute, Pearl was credited with research that changed the face of computer science, and his three books recognized as being among the most influential works in shaping the theory and practice of knowledge-based systems.
Review of The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant From Two Centuries of Controversy By Sharon Bertsch McGrayne Yale University Press 320 pages ISBN: 978-0-300-16969-0
The heart of all the controversy had to do with the way Bayes began his search for an answer to the inverse probability problem. Probability theory was in its infancy in Bayes's time and, McGrayne writes, applied primarily to gambling: the odds of picking up four aces in three consecutive poker hands, for example, which you could describe as reasoning from cause to effect. The inverse problem instead sought to reason from effect to cause: if you had three consecutive poker hands of four aces, what is the underlying chance that the deck is loaded?
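McGrayne's loaded-deck question is exactly the shape of problem Bayes' rule answers: combine a prior belief about the cause with the likelihood of the observed effect under each hypothesis. The sketch below works the example with illustrative numbers of our own choosing (the prior and the loaded-deck probability are assumptions, not figures from the book); only the fair-deck probability of being dealt four aces in a five-card hand is computed exactly.

```python
from math import comb

# Fair deck: chance of four aces in a 5-card hand = C(48,1) / C(52,5)
p_hand_fair = comb(48, 1) / comb(52, 5)        # about 1.85e-5

# Assumed (illustrative) numbers -- not from McGrayne's book:
p_hand_loaded = 0.5                            # a rigged deck deals four aces half the time
p_loaded_prior = 0.01                          # prior belief that the deck is rigged

# Likelihood of three consecutive four-ace hands under each hypothesis,
# assuming independent, reshuffled deals:
like_fair = p_hand_fair ** 3
like_loaded = p_hand_loaded ** 3

# Bayes' rule: reason backward from effect (the hands) to cause (the deck).
posterior = (like_loaded * p_loaded_prior) / (
    like_loaded * p_loaded_prior + like_fair * (1 - p_loaded_prior)
)
print(f"P(deck is loaded | three four-ace hands) = {posterior:.10f}")
```

Even with a skeptical 1% prior, three four-ace hands push the posterior to essentially certainty, which is the "inverse" reasoning — from effect back to cause — that made the rule so contentious.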
Since receiving her doctorate in 1992, Manuela Veloso's research interests in artificial intelligence have focused on duplicating the success with which humans plan, learn and execute tasks. Founding a robot soccer dynasty was purely coincidental. By David Hart. NSF Discovery (March 24, 2004).
See Appendix A (p.69) "Computer Science Department Theses" for Ph.D. dissertations.
See Appendix B (p.77) "Artificial Intelligence Memos" for technical reports from the Stanford AI Lab.
Stuart C. Shapiro, A net structure for semantic information storage, deduction and retrieval. In Proceedings of the Second International Joint Conference on Artificial Intelligence (IJCAI-71), Morgan Kaufmann, Inc., Los Altos, CA, 1971, 512-523.
D. P. McKay and S. C. Shapiro. Using active connection graphs for reasoning with recursive rules. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence (IJCAI-81), pages 368-374, Los Altos, CA, 1981. Morgan Kaufmann.
Anthony S. Maida and Stuart C. Shapiro, Intensional concepts in propositional semantic networks. Cognitive Science, 6(4):291-330, 1982. Reprinted in R. J. Brachman and H. J. Levesque, eds. Readings in Knowledge Representation, Morgan Kaufmann, Los Altos, CA, 1985, 170-189.
Stuart C. Shapiro and William J. Rapaport, SNePS considered as a fully intensional propositional semantic network. In N. Cercone and G. McCalla, editors, The Knowledge Frontier, Springer-Verlag, New York, 1987, 263-315.
João P. Martins and Stuart C. Shapiro. A model for belief revision. Artificial Intelligence, 35(1):25-79, 1988.
"The materials primarily concern his work in artificial intelligence at Stanford University and includes administrative files, correspondence, project files, trip files, proposals, reports, reprints, Artificial Intelligence Lab memos, audio tapes, video tapes, and files on computer programs, mainly DENDRAL, MOLGEN, ARPA, EPAM, and SUMEX."
Tips for searching/browsing the site:
See also: Feigenbaum, E.A.: A Companion Site to the Edward A. Feigenbaum Collection.
Interviews at the AAAI 2006 conference with 28 AAAI Fellows:
Bobrow, Brachman, Brooks, Buchanan, Buchanan-speech, Bundy, Doyle, Feigenbaum, Hendler, Kahn, Kautz, Kuipers, McDermott, Michalski, Minsky, Nilsson, Rich, Rissland, Selman, Sidner, Simmons, Sussman, Swartout, Szolovits, Veloso, Wilkins, Winston, Woolf
Extracting refined rules from knowledge-based neural networks, G. Towell & J. Shavlik, Machine Learning 13 (1), 71-101, 1993.
Knowledge-based artificial neural networks, G. Towell & J. Shavlik, Artificial Intelligence 70 (1-2), 119-165, 1994.
Creating advice-taking reinforcement learners, R. Maclin & J. Shavlik, Machine Learning 22 (1), 251-281, 1995.
Knowledge-based support vector machine classifiers, G. Fung, O. Mangasarian, & J. Shavlik, Advances in Neural Information Processing Systems 15, 521-528, 2002.
D. Heckerman. Probabilistic interpretations for MYCIN's certainty factors. In Proceedings of the Workshop on Uncertainty and Probability in Artificial Intelligence, Los Angeles, CA, pages 9-20. Association for Uncertainty in Artificial Intelligence, Mountain View, CA, August 1985. Also in L. Kanal. and J. Lemmer, editors, Uncertainty in Artificial Intelligence, pages 167-196. North-Holland, New York, 1986.
D. Heckerman and E. Horvitz. The myth of modularity in rule-based systems. In Proceedings of the Second Workshop on Uncertainty in Artificial Intelligence, Philadelphia, PA, pages 115-121. Association for Uncertainty in Artificial Intelligence, Mountain View, CA, August 1986. Also in L. Kanal and J. Lemmer, editors, Uncertainty in Artificial Intelligence 2, pages 23-34. North-Holland, New York, 1988.
Dietterich, T. G., (1986). Learning at the knowledge level. Machine Learning, 1(3), 287-316. Postscript preprint.
Dietterich, T. G., Bakiri, G. (1995) Solving Multiclass Learning Problems via Error-Correcting Output Codes. Journal of Artificial Intelligence Research 2: 263-286. Postscript file.
Dietterich, T. G., Lathrop, R. H., Lozano-Perez, T. (1997) Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2), 31-71. Postscript file
Dietterich, T. G., (1997). Machine Learning Research: Four Current Directions. AI Magazine, 18(4), 97-136. Postscript preprint. PDF preprint.
Dietterich, T. G. (2000). Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13, 227-303. Compressed postscript. [2003 IJCAI-JAIR Best Paper Prize]
Dietterich, T. G., Hao, G., Ashenfelter, A. (2008). Gradient Tree Boosting for Training Conditional Random Fields. Journal of Machine Learning Research. 9, 2113-2139. PDF Preprint.
Historical note: "The idea of robots playing soccer was first mentioned by Professor Alan Mackworth (University of British Columbia, Canada) in a paper entitled 'On Seeing Robots' presented at VI-92, 1992, and later published in the book Computer Vision: System, Theory, and Applications, pages 1-13, World Scientific Press, Singapore, 1993. A series of papers on the Dynamo robot soccer project was published by his group. Independently, a group of Japanese researchers organized a Workshop on Grand Challenges in Artificial Intelligence in October, 1992 in Tokyo, discussing possible grand challenge problems. This workshop led to serious discussions of using the game of soccer for promoting science and technology." ---from A Brief History of RoboCup.
In November 1963 -- a half century before the arrival of Google's robocars and Amazon's Kiva factory minions -- Charles Rosen dreamed up the world's first mobile "automaton." Rosen, a researcher at the Stanford Research Institute in Menlo Park, California, envisioned a roboperson driven by neural networks, algorithms that mimic the human brain.
Every two years, IEEE Intelligent Systems acknowledges and celebrates 10 young stars in the field of AI as "AI's 10 to Watch." These accomplished researchers have all completed their doctoral work in the past five years. Despite being relatively junior in their careers, each one has made impressive research contributions and had an impact in the literature — and in some cases, in real-world applications as well.
During Wilensky’s tenure at UC Berkeley, he served as chair of the Computer Science Division, director of the Berkeley Cognitive Science Program, director of the Berkeley Artificial Intelligence Research Project, and board member of the International Computer Science Institute.
“When he joined our department, he began building up a program in artificial intelligence at UC Berkeley, and he succeeded wonderfully,” said longtime colleague Richard Fateman, UC Berkeley professor emeritus of computer science and co-investigator on the Digital Library project. “He was extraordinarily successful in conceiving and executing ideas that led to infrastructure improvement for all his colleagues and contributed to the advancement of technology in programs that are widely used in document processing and Web access. He really was exceptional.”
Wilensky was also instrumental in establishing UC Berkeley’s Cognitive Science Program, helping organize the diverse campus faculty and leading competitive grants at a time when the research field was in its infancy.
An 18th Century automaton that could beat human chess opponents seemingly marked the arrival of artificial intelligence. But what turned out to be an elaborate hoax had its own sense of genius, says Adam Gopnik.
...So the inventor's real genius was not to build a chess-playing machine. It was to be the first to notice that, in the modern world, there is more mastery available than you might think; that exceptional talent is usually available, and will often work cheap.
Brief summary of Joshua Lederberg's contributions to science. Shown at the presentation of the Morris F. Collin Award to Lederberg by the American College of Medical Informatics, 1999. Includes short interviews with Edward Feigenbaum, Don Lindberg, Tom Rindfleisch, Carl Djerassi, and Ted Shortliffe.
8 min. interview with Tom Mitchell about machine learning, from CMU's 2006 ML Autumn School, recorded in September 2006.
"Tom Mitchell is the first Chair of the first Machine Learning Department in the world, based at Carnegie Mellon. The Videolectures.Net team spoke to him in Pittsburgh at CMU, where we discussed how he started the department, what the response of the broader community was, and its past, present and future. "The university said you can only have a department if you have a discipline that is going to be here in one hundred years; otherwise you can not have a department.""
Report from the Moore School of Electrical Engineering, University of Pennsylvania
Also available in the ACM Digital Library
The studies reported here have been concerned with the programming of a digital computer to behave in a way which, if done by human beings or animals, would be described as involving the process of learning. While this is not the place to dwell on the importance of machine-learning procedures, or to discourse on the philosophical aspects, there is obviously a very large amount of work, now done by people, which is quite trivial in its demands on the intellect but does, nevertheless, involve some learning.
Also in Computers and Thought. Feigenbaum, Edward A. and Julian Feldman (Editors) 1963.
A program is described that accepts natural language input and makes inferences from it and paraphrases of it. The Conceptual Dependency framework is the basis of this system.