I view the World Wide Web as an information food chain. The maze of pages and hyperlinks that makes up the Web lies at the very bottom of the chain. The WEBCRAWLERs and ALTAVISTAs of the world are information herbivores; they graze on Web pages and regurgitate them as searchable indices. Today, most Web users feed near the bottom of the information food chain, but the time is ripe to move up. Since 1991, we have been building information carnivores, which intelligently hunt and feast on herbivores in UNIX, on the Internet, and on the Web. Information carnivores will become increasingly critical as the Web continues to grow and as more naive users are exposed to its chaotic jumble.
The difficulty of finding information on the World Wide Web by browsing hypertext documents has led to the development and deployment of various search engines and indexing techniques. However, many information-gathering tasks are better handled by finding a referral to a human expert than by simply interacting with online information sources. A personal referral allows a user to judge the quality of the information he or she is receiving as well as to potentially obtain information that is deliberately not made public. The process of finding an expert who is both reliable and likely to respond to the user can be viewed as a search through the network of social relationships between individuals, as opposed to a search through the network of hypertext documents. The goal of the REFERRAL WEB Project is to create models of social networks by data mining the Web and to develop tools that use these models to assist in locating experts and in related information search and evaluation tasks.
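Under simplified assumptions, such a referral search can be sketched as breadth-first search over a social graph, returning the shortest chain of acquaintances that reaches an expert. The graph, names, and `find_referral_chain` helper below are invented for illustration and are not part of the REFERRAL WEB system:

```python
from collections import deque

def find_referral_chain(graph, start, is_expert):
    """Breadth-first search over a social network: return the shortest
    chain of acquaintances from `start` to a person satisfying
    `is_expert`, or None if no expert is reachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        chain = queue.popleft()
        person = chain[-1]
        if is_expert(person):
            return chain
        for contact in graph.get(person, []):
            if contact not in visited:
                visited.add(contact)
                queue.append(chain + [contact])
    return None

# Hypothetical acquaintance graph (e.g., mined from co-authorship data).
social_graph = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["erin"],
    "dave": [],
    "erin": ["frank"],
}

experts = {"erin"}  # people known to have the needed expertise
chain = find_referral_chain(social_graph, "alice", lambda p: p in experts)
# chain is the referral path ["alice", "carol", "erin"]
```

A short chain matters in practice: each intermediate person is someone the user can actually ask for an introduction, so shorter chains mean more credible and more answerable referrals.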
A number of approaches have been advanced for taking data about a user's likes and dislikes and generating a general profile of the user. These profiles can be used to retrieve documents matching user interests; recommend music, movies, or other similar products; or carry out other tasks in a specialized fashion. This article presents a fundamentally new method for generating user profiles that takes advantage of a large-scale database of demographic data. These data are used to generalize user-specified data along the patterns common across the population, including areas not represented in the user's original data. I describe the method in detail and present its implementation in the LIFESTYLE FINDER agent, an Internet-based experiment testing our approach on more than 20,006 users worldwide.
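The core idea can be sketched simply: match the attributes a user did supply against population segments, then fill in unspecified attributes from the best-matching segment. The segment data and `generalize_profile` function below are invented for illustration; LIFESTYLE FINDER's actual demographic database and matching procedure are more elaborate:

```python
# Hypothetical demographic segments mapping attributes to typical values.
SEGMENTS = [
    {"age": "18-24", "music": "pop", "reading": "magazines", "travel": "budget"},
    {"age": "35-44", "music": "classical", "reading": "novels", "travel": "luxury"},
]

def generalize_profile(user_data, segments):
    """Pick the segment agreeing with the most user-specified attributes,
    then use that segment to predict attributes the user never gave."""
    def overlap(segment):
        return sum(1 for k, v in user_data.items() if segment.get(k) == v)
    best = max(segments, key=overlap)
    return {**best, **user_data}  # user-specified values take precedence

profile = generalize_profile({"age": "35-44", "music": "classical"}, SEGMENTS)
# profile now also carries predicted "reading" and "travel" preferences.
```

The payoff is exactly what the abstract describes: the profile covers areas (here, reading and travel) that the user's original data never mentioned, generalized along patterns common across the population.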
AI has been well supported by government research and development dollars for decades now, and people are beginning to ask hard questions: What really works? What are the limits? What doesn't work as advertised? What isn't likely to work? What isn't affordable? This article holds a mirror up to the community, both to provide feedback and to stimulate more self-assessment. The significant accomplishments and strengths of the field are highlighted. The research agenda, strategy, and heuristics are reviewed, and a change of course is recommended to improve the field's ability to produce reusable and interoperable components.
Ackerman, Mark, Billsus, Daniel, Gaffney, Scott, Khoo, Gordon, Hettich, Seth, Kim, Dong Joon, Klefstad, Ray, Lowe, Charles, Ludeman, Alexius, Muramatsu, Jack, Omori, Kazuo, Pazzani, Michael J., Semler, Douglas, Starr, Brian, Yap, Paul
Some people believe that the expert system field is dead, yet others believe it is alive and well. To gain better insight into these opposing views, the first three world congresses on expert systems (which typically attract representatives from some 45-50 countries) are used to determine the health of the global expert system field in terms of applied technologies, applications, and management. This article highlights some of these findings.
Search engines are among the most successful applications on the web today. So many search engines have been created that it is difficult for users to know where they are, how to use them, and what topics they best address. Metasearch engines reduce the user burden by dispatching queries to multiple search engines in parallel. The SAVVYSEARCH metasearch engine is designed to efficiently query other search engines by carefully selecting those search engines likely to return useful results and responding to fluctuating load demands on the web. SAVVYSEARCH learns to identify which search engines are most appropriate for particular queries, reasons about resource demands, and represents an iterative parallel search strategy as a simple plan.
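Per-query engine selection of this kind can be illustrated with a toy scoring scheme: associate each query term with a learned effectiveness score for each engine, sum the scores over the query's terms, and dispatch only the top-ranked engines. The engine names, scores, and `select_engines` function below are invented; SAVVYSEARCH's actual metaindex, learning rule, and load reasoning are more sophisticated:

```python
# Hypothetical learned term-engine effectiveness scores, e.g. updated
# from click-through and no-result feedback on past queries.
TERM_SCORES = {
    "python": {"codesearch": 0.9, "newsfind": 0.2, "shopbot": 0.1},
    "tutorial": {"codesearch": 0.6, "newsfind": 0.4, "shopbot": 0.2},
    "camera": {"codesearch": 0.1, "newsfind": 0.3, "shopbot": 0.8},
}

def select_engines(query, k=2):
    """Rank engines by summed per-term scores and return the top k
    to query in parallel; terms with no learned scores contribute nothing."""
    totals = {}
    for term in query.lower().split():
        for engine, score in TERM_SCORES.get(term, {}).items():
            totals[engine] = totals.get(engine, 0.0) + score
    return sorted(totals, key=totals.get, reverse=True)[:k]

engines = select_engines("python tutorial")
# engines == ["codesearch", "newsfind"]
```

Dispatching only the k most promising engines, rather than all of them, is what lets a metasearch service limit its resource demands while still improving result quality, and k itself can be lowered when load on the Web is high.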