BabelRelate! A Joint Multilingual Approach to Computing Semantic Relatedness
Navigli, Roberto (Sapienza Università di Roma) | Ponzetto, Simone Paolo (Sapienza Università di Roma)
We present a knowledge-rich approach to computing semantic relatedness which exploits the joint contribution of different languages. Our approach is based on the lexicon and semantic knowledge of a wide-coverage multilingual knowledge base, which is used to compute semantic graphs in a variety of languages. Complementary information from these graphs is then combined to produce a 'core' graph where disambiguated translations are connected by means of strong semantic relations. We evaluate our approach on standard monolingual and bilingual datasets, and show that: i) we outperform a graph-based approach which does not use multilinguality in a joint way; ii) we achieve uniformly competitive results for both resource-rich and resource-poor languages.
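As a concrete illustration of the joint combination step, the sketch below (Python) builds a "core" graph by keeping only edges supported by several per-language semantic graphs and scores relatedness by neighborhood overlap. The graph contents, the support threshold, and the Jaccard-style score are illustrative stand-ins, not the paper's actual measure over its multilingual knowledge base.

from collections import Counter

def core_graph(language_graphs, min_support=2):
    """Keep edges supported by at least min_support language graphs."""
    support = Counter()
    for edges in language_graphs:          # each graph: a set of (concept, concept) pairs
        for u, v in edges:
            support[frozenset((u, v))] += 1
    return {edge for edge, n in support.items() if n >= min_support}

def relatedness(core, a, b):
    """Toy score: Jaccard overlap of closed neighborhoods in the core graph."""
    def nbhd(x):
        return {x} | {next(iter(edge - {x})) for edge in core if x in edge}
    return len(nbhd(a) & nbhd(b)) / len(nbhd(a) | nbhd(b))

# edges connect disambiguated concepts, so translations land on the same node
en = {("bank#finance", "money"), ("bank#finance", "loan")}
it = {("bank#finance", "money"), ("bank#geo", "river")}
de = {("bank#finance", "money"), ("bank#finance", "loan")}
print(round(relatedness(core_graph([en, it, de]), "bank#finance", "money"), 2))  # 0.67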
Exploiting Shared Resource Dependencies in Spectrum Based Plan Diagnosis
Gupta, Shekhar (Palo Alto Research Center) | Roos, Nico (Maastricht University) | Witteveen, Cees (Delft University of Technology) | Price, Bob (Palo Alto Research Center) | Kleer, Johan de (Palo Alto Research Center)
When a plan fails, plan repair is a more promising solution than replanning from scratch. The effectiveness of plan repair depends on knowing which plan action failed and why. In this paper, we therefore propose an Extended Spectrum Based Diagnosis approach that efficiently pinpoints failed actions. Unlike Model Based Diagnosis (MBD), it requires neither fault models nor behavioral descriptions of actions. Our approach first computes the likelihood of each action being faulty and subsequently proposes optimal probe locations to refine the diagnosis. We also exploit knowledge of plan steps that are instances of the same plan operator to optimize the selection of the most informative diagnostic probes. We focus only on the diagnostic aspect of the plan-repair process.
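To make the likelihood-ranking step concrete, here is a minimal Python sketch in the style of spectrum-based fault localization: each run records which actions participated and whether the plan failed, and actions are ranked with the Ochiai coefficient. The coefficient choice and all names are illustrative assumptions; probe selection and the shared-operator optimization are omitted.

from math import sqrt

def ochiai_ranking(spectra):
    """spectra: list of (participating_actions, failed) pairs, one per plan run."""
    actions = {a for acts, _ in spectra for a in acts}
    total_failed = sum(1 for _, failed in spectra if failed)
    scores = {}
    for a in actions:
        ef = sum(1 for acts, failed in spectra if a in acts and failed)      # in failing runs
        ep = sum(1 for acts, failed in spectra if a in acts and not failed)  # in passing runs
        denom = sqrt(total_failed * (ef + ep))
        scores[a] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

runs = [({"pick", "move", "place"}, True),
        ({"pick", "move"}, False),
        ({"move", "place"}, True)]
print(ochiai_ranking(runs))  # 'place' ranks highest: it appears only in failing runs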
Leveraging Domain Knowledge in Multitask Bayesian Network Structure Learning
Oyen, Diane (University of New Mexico) | Lane, Terran (University of New Mexico)
Network structure learning algorithms have aided network discovery in fields such as bioinformatics, neuroscience, ecology and social science. However, challenges remain in learning informative networks for related sets of tasks because the search space of Bayesian network structures is characterized by large basins of approximately equivalent solutions. Multitask algorithms select a set of networks that are near each other in the search space, rather than a score-equivalent set of networks chosen from independent regions of the space. This selection preference allows a domain expert to see only differences supported by the data. However, the usefulness of these algorithms for scientific datasets is limited because existing algorithms naively assume that all pairs of tasks are equally related. We introduce a framework that relaxes this assumption by incorporating domain knowledge about task-relatedness into the learning objective, and with it the first multitask Bayesian network algorithm that leverages such knowledge. We use our algorithm to explore the effect of task-relatedness on network discovery, and show that it learns networks closer to ground truth than naive algorithms do and that it discovers interesting patterns.
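A minimal sketch of what a relatedness-weighted multitask objective could look like, assuming networks are represented as edge sets and the expert supplies pairwise relatedness weights: per-task fit is rewarded and structural disagreement between related tasks is penalized. The real score and search procedure are not reproduced here; every name below is a hypothetical stand-in.

def structural_diff(edges_a, edges_b):
    """Number of edges present in one network but not the other."""
    return len(edges_a ^ edges_b)

def multitask_score(networks, fit_scores, relatedness):
    """networks: {task: set of (parent, child) edges};
    fit_scores: {task: data log-likelihood of that task's network};
    relatedness: {(task_i, task_j): weight in [0, 1]} from domain knowledge."""
    fit = sum(fit_scores.values())
    penalty = sum(w * structural_diff(networks[i], networks[j])
                  for (i, j), w in relatedness.items())
    return fit - penalty  # a naive multitask algorithm would set w = 1 for every pair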
Seven Challenges in Parallel SAT Solving
Hamadi, Youssef (Microsoft Research, Cambridge) | Wintersteiger, Christoph M (Microsoft Research, Cambridge)
This paper provides a broad overview of the state of the art in parallel search, with a specific focus on parallel SAT solving. We present a set of challenges which, we believe, must be met to ensure the practical applicability of parallel SAT solvers in the future. All these challenges are described informally but put into perspective with related research results, and a (subjective) difficulty grading is provided for each of them.
Don't Be Strict in Local Search!
Gaspers, Serge (Vienna University of Technology) | Kim, Eun Jung (LAMSADE-CNRS, Université de Paris-Dauphine) | Ordyniak, Sebastian (Vienna University of Technology) | Saurabh, Saket (The Institute of Mathematical Sciences) | Szeider, Stefan (Vienna University of Technology)
Local search is one of the fundamental approaches to combinatorial optimization and is used throughout AI. Several local search algorithms are based on searching the k-exchange neighborhood: the set of solutions that can be obtained from the current solution by exchanging at most k elements. As a rule of thumb, the larger k is, the better the chances of finding an improved solution. However, for inputs of size n, a naive brute-force search of the k-exchange neighborhood requires n^{O(k)} time, which is not practical even for very small values of k. Fellows et al. (IJCAI 2009) studied whether this brute-force search is avoidable and gave positive and negative answers for several combinatorial problems. They used the notion of local search in a strict sense: an improved solution needs to be found in the k-exchange neighborhood even if a global optimum can be found efficiently. In this paper we consider a natural relaxation of local search, called permissive local search (Marx and Schlotter, IWPEC 2009), and investigate whether it enlarges the domain of tractable inputs. We exemplify this approach on a fundamental combinatorial problem, Vertex Cover. More precisely, we show that for a class of inputs, finding an optimum is hard, strict local search is hard, but permissive local search is tractable. We carry out this investigation in the framework of parameterized complexity.
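To make the n^{O(k)} brute-force baseline concrete, the Python sketch below performs one strict k-exchange step for Vertex Cover by enumerating every way of swapping at most k vertices in or out of the current cover; these nested enumerations are exactly what the parameterized algorithms studied in the paper try to avoid.

from itertools import combinations

def is_cover(edges, cover):
    return all(u in cover or v in cover for u, v in edges)

def strict_k_exchange(vertices, edges, cover, k):
    """Return a strictly smaller cover within Hamming distance k, or None."""
    for out in range(k + 1):                               # vertices to drop
        for drop in combinations(sorted(cover), out):
            for add in range(k - out + 1):                 # vertices to add
                for grow in combinations(sorted(vertices - cover), add):
                    cand = (cover - set(drop)) | set(grow)
                    if len(cand) < len(cover) and is_cover(edges, cand):
                        return cand
    return None

V, E = {1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4)]
print(strict_k_exchange(V, E, {1, 2, 3, 4}, 2))  # finds a smaller cover, e.g. {2, 3, 4}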
Transfer Learning in Collaborative Filtering with Uncertain Ratings
Pan, Weike (Hong Kong University of Science and Technology) | Xiang, Evan W. (Hong Kong University of Science and Technology) | Yang, Qiang (Hong Kong University of Science and Technology)
To solve the sparsity problem in collaborative filtering, researchers have introduced transfer learning as a viable approach to making use of auxiliary data. Most previous transfer learning work in collaborative filtering has focused on exploiting point-wise ratings such as numerical ratings, stars, or binary likes/dislikes. However, in many real-world recommender systems, users may be unwilling or unable to rate items with precision. In such cases, practitioners can turn to various non-preference data to estimate a range or distribution for a user's rating of an item. Such a range or rating distribution is called an uncertain rating, since it represents a spectrum of possible ratings rather than an accurate point-wise score. In this paper, we propose an efficient transfer learning solution for collaborative filtering, called transfer by integrative factorization (TIF), that leverages such auxiliary uncertain ratings to improve recommendation performance. In particular, we integrate the auxiliary uncertain ratings as additional constraints in the target matrix factorization problem and automatically learn an expected rating value for each uncertain rating. Our approach retains the efficiency of collaborative filtering while improving its effectiveness, showing that incorporating auxiliary uncertain ratings brings a real benefit. Experimental results on two movie recommendation tasks show that our TIF algorithm performs significantly better than a state-of-the-art non-transfer learning method.
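A minimal sketch of the constraint idea, assuming a toy SGD matrix factorization: target point ratings are fit as usual, while each auxiliary uncertain rating [lo, hi] contributes an error only when the current prediction leaves its interval, with the clamped prediction playing the role of the learned expected rating. The update rule, data, and hyperparameters are illustrative, not the authors' TIF optimization.

import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 5, 4, 3
U = 0.1 * rng.standard_normal((n_users, d))
V = 0.1 * rng.standard_normal((n_items, d))

target = [(0, 1, 4.0), (2, 3, 2.0)]               # (user, item, point rating)
uncertain = [(0, 3, 3.0, 5.0), (4, 0, 1.0, 2.0)]  # (user, item, lo, hi)

lr, lam, alpha = 0.05, 0.01, 0.5                  # alpha weights the auxiliary data
for _ in range(200):
    for u, i, r in target:                        # usual point-wise updates
        err = r - U[u] @ V[i]
        U[u] += lr * (err * V[i] - lam * U[u])
        V[i] += lr * (err * U[u] - lam * V[i])
    for u, i, lo, hi in uncertain:                # interval-constraint updates
        pred = U[u] @ V[i]
        err = np.clip(pred, lo, hi) - pred        # zero while pred stays in [lo, hi]
        U[u] += lr * alpha * (err * V[i] - lam * U[u])
        V[i] += lr * alpha * (err * U[u] - lam * V[i])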
Sparse Probabilistic Relational Projection
Li, Wu-Jun (Shanghai Jiao Tong University) | Yeung, Dit-Yan (Hong Kong University of Science and Technology)
Probabilistic relational PCA (PRPCA) can learn a projection matrix to perform dimensionality reduction for relational data. However, the results learned by PRPCA lack interpretability because each principal component is a linear combination of all the original variables. In this paper, we propose a novel model, called sparse probabilistic relational projection (SPRP), to learn a sparse projection matrix for relational dimensionality reduction. The sparsity in SPRP is achieved by imposing on the projection matrix a sparsity-inducing prior such as the Laplace prior or Jeffreys prior. We propose an expectation-maximization (EM) algorithm to learn the parameters of SPRP. Compared with PRPCA, the sparsity in SPRP not only makes the results more interpretable but also makes the projection operation much more efficient without compromising its accuracy. All these are verified by experiments conducted on several real applications.
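The sparsity mechanism can be illustrated directly: under a Laplace prior, the MAP penalty on the projection matrix is an L1 term, and a proximal (soft-thresholding) update drives small loadings exactly to zero. The snippet below shows only this mechanism; it is not the SPRP model, which additionally accounts for relational links between instances.

import numpy as np

def soft_threshold(W, t):
    """Proximal step for an L1 (Laplace-prior) penalty of strength t."""
    return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)

W = np.array([[0.90, -0.03], [0.02, -0.70], [0.40, 0.01]])
print(soft_threshold(W, 0.05))  # small loadings become exactly 0, easing interpretation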
Filtering Decomposable Global Cost Functions
Allouche, David (Institut National de la Recherche Agronomique) | Bessiere, Christian (University of Montpellier) | Boizumault, Patrice (University of Caen) | Givry, Simon de (Institut National de la Recherche Agronomique) | Gutierrez, Patricia (IIIA-CSIC, Universitat Autònoma de Barcelona) | Loudni, Samir (University of Caen) | Métivier, Jean-Philippe (University of Caen) | Schiex, Thomas (Institut National de la Recherche Agronomique)
As Lee et al. (2012) have shown, weighted constraint satisfaction problems can benefit from the introduction of global cost functions, leading to a new Cost Function Programming paradigm. In this paper, we explore the possibility of decomposing global cost functions in such a way that enforcing soft local consistencies on the decomposition offers guarantees on the level of consistency enforced on the original global cost function. We show that directional arc consistency and virtual arc consistency offer such guarantees. We conclude with experiments on decomposable cost functions showing that decompositions can be very useful for easily integrating efficient global cost functions into solvers.
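As a concrete (if simple) example of a decomposable global cost function, the sketch below writes a pairwise soft AllDifferent as a sum of binary cost functions, one per variable pair, so that a solver can propagate over small scopes; which soft consistencies survive such a rewriting is exactly the paper's question. This is a textbook decomposition, not one taken from the paper.

from collections import Counter
from itertools import combinations

def soft_all_different(assignment):
    """Global view: cost = number of conflicting pairs, via value multiplicities."""
    return sum(n * (n - 1) // 2 for n in Counter(assignment.values()).values())

def binary_decomposition(assignment):
    """Decomposed view: one binary cost function per pair of variables."""
    cost = lambda a, b: 1 if a == b else 0
    return sum(cost(assignment[x], assignment[y])
               for x, y in combinations(sorted(assignment), 2))

A = {"x1": 2, "x2": 2, "x3": 3}
assert soft_all_different(A) == binary_decomposition(A) == 1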
Advances in Lifted Importance Sampling
Gogate, Vibhav (The University of Texas at Dallas) | Jha, Abhay (University of Washington) | Venugopal, Deepak (The University of Texas at Dallas)
We consider lifted importance sampling (LIS), a previously proposed approximate inference algorithm for statistical relational learning (SRL) models. LIS achieves substantial variance reduction over conventional importance sampling by using various lifting rules that take advantage of the symmetry in the relational representation. However, it suffers from two drawbacks. First, it does not take advantage of some important symmetries in the relational representation and may exhibit needlessly high variance on models having these symmetries. Second, it uses an uninformative proposal distribution, which adversely affects its accuracy. We propose two improvements to LIS that address these limitations. First, we identify a new symmetry in SRL models and define a lifting rule that takes advantage of it, reducing the variance of LIS. Second, we propose a new, structured approach for constructing and dynamically updating the proposal distribution via adaptive sampling. We demonstrate experimentally that our improved LIS algorithm is substantially more accurate than the original.
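A minimal ground (non-lifted) sketch of the two ingredients, on a toy one-dimensional target: self-normalized importance weights w = p(x)/q(x), plus a crude adaptive step that refits the Gaussian proposal to the weighted samples so it stops being uninformative. In the lifted setting, one such computation would be shared across interchangeable ground atoms; none of the paper's lifting rules are reproduced here.

import numpy as np

rng = np.random.default_rng(1)
p = lambda x: np.exp(-0.5 * (x - 3.0) ** 2)       # unnormalized target, N(3, 1)
mu, sigma = 0.0, 2.0                              # uninformative initial proposal

for _ in range(5):
    x = rng.normal(mu, sigma, 2000)
    q = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    w = p(x) / q                                  # importance weights
    mu = np.sum(w * x) / np.sum(w)                # self-normalized estimate of E_p[x]
    sigma = np.sqrt(np.sum(w * (x - mu) ** 2) / np.sum(w))
print(mu, sigma)  # approaches (3, 1)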
LRTDP Versus UCT for Online Probabilistic Planning
Kolobov, Andrey (University of Washington, Seattle) | Mausam (University of Washington, Seattle) | Weld, Daniel S. (University of Washington, Seattle)
UCT, the premier method for solving games such as Go, is also becoming the dominant algorithm for probabilistic planning. Of the five solvers at the International Probabilistic Planning Competition (IPPC) 2011, four were based on the UCT algorithm. However, while a UCT-based planner, PROST, won the contest, an LRTDP-based system, Glutton, came in a close second, outperforming other systems derived from UCT. These results raise a question: what are the strengths and weaknesses of LRTDP and UCT in practice? This paper starts answering this question by contrasting the two approaches in the context of finite-horizon MDPs. We demonstrate that in such scenarios, UCT's lack of a sound termination condition is a serious practical disadvantage. To handle an MDP with a large finite horizon under a time constraint, UCT forces an expert to guess a non-myopic lookahead value at which it can converge on the encountered states in the available time; mistakes in setting this parameter can greatly hurt UCT's performance. In contrast, LRTDP's convergence criterion allows for an iterative deepening strategy, with which LRTDP automatically finds the largest lookahead value feasible under the given time constraint. As a result, LRTDP has better performance and stronger theoretical properties. We present an online version of Glutton, named Gourmand, that illustrates this analysis and outperforms PROST on the set of IPPC-2011 problems.
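A minimal sketch of the iterative deepening strategy described above: solve the current state with lookahead 1, then 2, and so on, keeping the deepest answer that converged before the per-decision time budget ran out. The solve_with_lookahead callback stands in for a depth-limited LRTDP call and is a hypothetical placeholder, not Gourmand's implementation.

import time

def choose_action(state, budget_s, max_horizon, solve_with_lookahead):
    """Keep the action from the deepest lookahead that converged in time."""
    deadline = time.monotonic() + budget_s
    best = None
    for depth in range(1, max_horizon + 1):
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        action, converged = solve_with_lookahead(state, depth, remaining)
        if not converged:                         # ran out of time at this depth
            break
        best = action
    return best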