We describe Cobot, a mixed-initiative, socio-semantic conversational search and recommendation system for finding health information. With Cobot, users can start a real-time conversation about their health concerns. Cobot then connects relevant users within the conversation while also providing contextual recommendations relevant to the discussion. Conventional search engines and content portals offer a solitary search experience that inundates health information seekers with a deluge of information, often confusing and frustrating them. Cobot delivers relevant healthcare information, directly or through other users, via natural language conversation rather than conventional search.
In this paper we discuss a conceptual framework for representing events and scenarios from the perspective of a novel form of causal analysis. This causal analysis is applied to events and scenarios to determine measures that can be used to manage, in real time, the development of the processes of which they are a part. An overall terminological framework and entity-relationship model are proposed, along with a specification of the functional sets involved in both reasoning and analytics. The model is treated as a specific case of the generic problem of finding sequential series in disparate data. The specific inference and reasoning processes are identified for future implementation.
Collaborative query routing is a new paradigm for Web search that treats both established search engines and other publicly available indices as intelligent peer agents in a search network. The approach makes it straightforward for anyone to build their own (micro) search engine by integrating established Web search services, desktop search, and topical crawling techniques. The challenge in this model is that each of these agents must learn about its environment (the existence, knowledge, diversity, reliability, and trustworthiness of other agents) by analyzing the queries received from, and results exchanged with, those other agents. We present the 6S peer network, which uses machine-learning techniques to learn about the changing query environment. We show that simple reinforcement learning algorithms are sufficient to detect and exploit semantic locality in the network, resulting in efficient routing and high-quality search results. A prototype of 6S is available for public use and is intended to assist in the evaluation of different AI techniques employed by the networked agents.
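The abstract above does not give the 6S algorithm itself, but the idea of a peer that learns which neighbors handle which queries can be sketched with a simple reinforcement rule. The following is a minimal, hypothetical illustration, not the actual 6S implementation: each peer keeps a per-neighbor, per-term score, routes queries to the highest-scoring neighbors, and nudges scores toward the observed result quality. All class names, parameters, and the toy "expertise" data are invented for illustration.

```python
from collections import defaultdict


class PeerAgent:
    """Hypothetical peer in a collaborative query-routing network.

    Routes each query to the neighbors whose learned scores for the
    query's terms are highest, then reinforces those scores based on
    the quality of the results returned.
    """

    def __init__(self, name, neighbors, learning_rate=0.3):
        self.name = name
        self.neighbors = neighbors  # other peer objects exposing .name and .answer()
        self.learning_rate = learning_rate
        # score[neighbor_name][term]: learned estimate (initially neutral 0.5)
        # of that neighbor's usefulness for queries containing this term
        self.score = defaultdict(lambda: defaultdict(lambda: 0.5))

    def route(self, query, top_k=2):
        """Pick the top_k neighbors with the best scores for the query terms."""
        terms = query.lower().split()
        ranked = sorted(
            self.neighbors,
            key=lambda n: sum(self.score[n.name][t] for t in terms),
            reverse=True,
        )
        return ranked[:top_k]

    def reinforce(self, neighbor_name, query, reward):
        """Move the per-term scores toward the observed reward in [0, 1]."""
        for t in query.lower().split():
            old = self.score[neighbor_name][t]
            self.score[neighbor_name][t] = old + self.learning_rate * (reward - old)


class StubPeer:
    """Stand-in neighbor with a fixed (invented) topical expertise."""

    def __init__(self, name, expertise):
        self.name = name
        self.expertise = set(expertise)

    def answer(self, query):
        # Result quality: fraction of query terms this peer "knows about".
        terms = set(query.lower().split())
        return len(terms & self.expertise) / len(terms)


# Demo: after a few reinforced exchanges, health-related queries are
# routed to the peer that actually returns good health results.
health = StubPeer("health", {"diabetes", "flu", "vaccine"})
sports = StubPeer("sports", {"football", "tennis"})
router = PeerAgent("me", [health, sports])

query = "flu vaccine"
for _ in range(20):
    for peer in router.route(query):
        router.reinforce(peer.name, query, peer.answer(query))
```

This captures the "semantic locality" effect described in the abstract in miniature: repeated exchanges make topically similar peers cheap to find, without any central index.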
Is Agent-based Online Search Feasible?
Filippo Menczer, Management Sciences Department, University of Iowa, Iowa City, IA 52245, USA (filippo-menczer@uiowa.edu)
Abstract: The scalability limitations of the current state of the art in Web search technology lead us to explore alternative approaches to centralized indexing, such as agent-based online search. The possible advantages of searching online, or agent-based browsing, are often downplayed in the face of the apparently unquestionable loss of efficiency. This paper is an attempt to reach a more balanced view in which both the obvious and hidden costs of the two approaches are considered. The two approaches can then be compared more fairly, and possible complementarities and synergies are explored. For systems designed to let users locate relevant information in highly distributed and decentralized databases, such as the Web, we argue that scalability is one of the main limitations of the current state of the art. The complexities emerging in networked information environments (decentralization, noise, heterogeneity, and dynamics) are not unlike those faced by ecologies of organisms adapting in natural environments. The capabilities of such natural agents (local adaptation, internalization of environmental signals, distributed control, integration of externally driven and endogenous behaviors, etc.) represent desirable goals for the next generation of artificial agents: autonomous, intelligent, distributed, and adaptive. These considerations, along the lines of the artificial life approach, inspired us to base our model upon the metaphor of an ecology of agents. In this sense, the multi-agent system is composed not of a few agents with distinct and clearly defined functions, but of a (possibly very) large number of agents collectively trying to satisfy the user request.
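The ecology-of-agents metaphor can be made concrete with a toy simulation, offered here only as a sketch of the idea, not the author's actual system: a population of browsing agents moves along links in a small invented link graph, gains "energy" from query-relevant pages, pays a cost per move, and undergoes selection so that successful agents are replicated. The graph, query, and all parameters below are assumptions made up for illustration.

```python
import random

# Toy link graph standing in for the Web: page -> (terms, outgoing links).
# Page names, terms, and links are invented purely for illustration.
WEB = {
    "home":     ({"welcome"}, ["news", "science", "sports"]),
    "news":     ({"politics"}, ["home", "science"]),
    "science":  ({"genome", "biology"}, ["genetics", "home"]),
    "genetics": ({"genome", "dna", "biology"}, ["science"]),
    "sports":   ({"football"}, ["home"]),
}

QUERY = {"genome", "biology"}


def relevance(page):
    """Fraction of query terms present on the page."""
    terms, _ = WEB[page]
    return len(terms & QUERY) / len(QUERY)


class Spider:
    """One agent in the ecology: it browses locally (no global index),
    gains energy from relevant pages, and pays a small cost per move."""

    def __init__(self, page, energy=1.0):
        self.page = page
        self.energy = energy

    def step(self, rng):
        _, links = WEB[self.page]
        self.page = rng.choice(links)               # purely local move
        self.energy += relevance(self.page) - 0.1   # reward minus living cost


def ecological_search(steps=40, population=8, seed=0):
    """Run the agent population and return the best page any agent found."""
    rng = random.Random(seed)
    agents = [Spider("home") for _ in range(population)]
    best_hit = "home"
    for _ in range(steps):
        for a in agents:
            a.step(rng)
            if relevance(a.page) > relevance(best_hit):
                best_hit = a.page
        # Ecological selection: the weakest agent is replaced by an
        # offspring of the strongest, which splits its energy with it.
        agents.sort(key=lambda a: a.energy)
        parent = agents[-1]
        parent.energy /= 2
        agents[0] = Spider(parent.page, parent.energy)
    return best_hit
```

The point of the sketch matches the abstract's argument: control is distributed, each agent uses only local environmental signals, and selection concentrates the population in relevant regions of the graph without any centralized index.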