In this paper, we report our development of a hybrid user model for improving a user's search effectiveness. Specifically, we dynamically capture a user's intent and combine the captured intent with the elements of an information retrieval system in a decision-theoretic framework. Our approach is to identify a set of key attributes describing a user's intent and to determine the interactions among them. We then build our user model, which we call the IPC model, by capturing these attributes. We further extend the IPC model into a hybrid user model by combining the captured user intent with the elements of an information retrieval system in the decision-theoretic framework.
With such judgements, we can construct a better term-weighted query for the TL search, essentially producing true translingual RF. Of course, this RF process can also be used to enhance the SL query and to search other SL databases at no extra cost to, or involvement from, the analyst. The envisioned mechanism is shown in Figure 3 and encompasses the following steps: 1. The analyst types in a source language query Qs. 2. The parallel corpus (source half) is searched by an engine using Qs. 3. One of the following methods is used to search the TL document database: from the retrieved SL/TL document pairs, the TL document contents are used as a new query QT to search the TL document database; or the retrieved SL/TL document pairs are first given back to the analyst, who scans the SL documents for relevance, and the Rocchio formula is then used for both the SL and TL document database searches.
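The Rocchio update mentioned in step 3 can be sketched as follows. This is the standard Rocchio formula, not code from the system described above; the parameter values alpha, beta, and gamma are conventional defaults, and vectors are represented as term-to-weight dictionaries for illustration.

```python
from collections import defaultdict

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Return an updated query vector from judged document vectors.

    query: dict mapping term -> weight
    relevant / nonrelevant: lists of such dicts for judged documents
    """
    new_q = defaultdict(float)
    # Keep a weighted copy of the original query terms.
    for term, w in query.items():
        new_q[term] += alpha * w
    # Move the query toward the centroid of relevant documents.
    for doc in relevant:
        for term, w in doc.items():
            new_q[term] += beta * w / len(relevant)
    # Move the query away from the centroid of non-relevant documents.
    for doc in nonrelevant:
        for term, w in doc.items():
            new_q[term] -= gamma * w / len(nonrelevant)
    # Negative weights are conventionally clipped to zero.
    return {t: w for t, w in new_q.items() if w > 0}
```

The same update applies whether the judged pairs feed the SL search, the TL search, or both, since only the term vectors differ.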
This paper describes a novel approach to knowledge representation, learning, and reasoning in WebDoc, a system that classifies Web documents according to the Library of Congress classification system. We argue that an automatically constructed, domain-independent knowledge base is indispensable. The WebDoc system builds a knowledge base, represented as a semantic network, that contains the Library of Congress subject headings and their relationships. Through training on human-indexed and NLP-parsed Web documents, WebDoc modifies the semantic network and generates rules for future index-generation tasks.
Information is a meaningful collection of data, and information retrieval (IR) is an important tool for turning data into information. Of the three classical IR models (Boolean, Vector Space, and Probabilistic), the Vector Space Model (VSM) is the most widely used. This model, however, does not capture enough of the relevance relationship between a query and documents to produce effective results that reflect knowledge. To augment the IR process with knowledge, several techniques have been proposed, including query expansion using a thesaurus, term-relationship measurement such as Latent Semantic Indexing (LSI), and probabilistic inference using Bayesian networks. Our research aims to create an information retrieval model that incorporates domain-specific knowledge to provide knowledgeable answers to users. We use a knowledge-based model to represent domain-specific knowledge. Unlike other knowledge-based IR models, ours converts domain-specific knowledge into term relationships represented as quantitative values, which improves efficiency.
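For context, the vector space model referred to above ranks documents by the cosine similarity between term-weight vectors. A minimal sketch, assuming the term weights (e.g. TF-IDF) have already been computed and using sparse dictionaries for illustration:

```python
import math

def cosine(query, doc):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * doc.get(t, 0.0) for t, w in query.items())
    norm_q = math.sqrt(sum(w * w for w in query.values()))
    norm_d = math.sqrt(sum(w * w for w in doc.values()))
    if norm_q == 0.0 or norm_d == 0.0:
        return 0.0
    return dot / (norm_q * norm_d)
```

Because the score depends only on shared terms, a query and a relevant document that use different vocabulary score zero, which is the gap that thesaurus expansion, LSI, and knowledge-based models aim to close.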
Traditional information retrieval systems use query words to identify relevant documents. In difficult retrieval tasks, however, one needs access to a wealth of background knowledge. We present a method that uses Wikipedia-based feature generation to improve retrieval performance. Intuitively, we expect that using extensive world knowledge is likely to improve recall but may adversely affect precision. High-quality feature selection is necessary to maintain high precision, but here we lack the labeled training data for evaluating features that we have in supervised learning. We present a new feature selection method inspired by pseudo-relevance feedback: we use the top-ranked and bottom-ranked documents retrieved by the bag-of-words method as representative sets of relevant and non-relevant documents, and the generated features are then evaluated and filtered on the basis of these sets. Experiments on TREC data confirm the superior performance of our method compared to the previous state of the art.
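The feature-filtering idea can be sketched as follows: score each generated feature by how much more frequently it occurs in the top-ranked (pseudo-relevant) documents than in the bottom-ranked (pseudo-non-relevant) ones, and keep only high-scoring features. The frequency-ratio criterion and the smoothing constant here are illustrative choices, not necessarily the paper's exact scoring function.

```python
def filter_features(features, top_docs, bottom_docs, threshold=2.0):
    """Keep features that occur notably more often in top-ranked documents.

    features: iterable of feature identifiers
    top_docs / bottom_docs: lists of per-document feature sets
    """
    kept = []
    for f in features:
        top_rate = sum(f in d for d in top_docs) / max(len(top_docs), 1)
        bottom_rate = sum(f in d for d in bottom_docs) / max(len(bottom_docs), 1)
        # Small additive smoothing avoids division by zero for unseen features.
        if (top_rate + 0.01) / (bottom_rate + 0.01) >= threshold:
            kept.append(f)
    return kept
```

Features surviving the filter are then added to the query representation, so that world-knowledge features improve recall without the precision loss that unfiltered expansion would cause.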