Higher-Order Description Logics for Domain Metamodeling
Giacomo, Giuseppe De (Sapienza Università di Roma) | Lenzerini, Maurizio (Sapienza Università di Roma) | Rosati, Riccardo (Sapienza Università di Roma)
We investigate an extension of Description Logics (DL) with higher-order capabilities, based on Henkin-style semantics. Our study starts from the observation that the various possibilities of adding higher-order constructs to a DL form a spectrum of increasing expressive power, including domain metamodeling, i.e., using concepts and roles as predicate arguments. We argue that higher-order features of this type are sufficiently rich and powerful for the modeling requirements arising in many relevant situations, and therefore we carry out an investigation of the computational complexity of satisfiability and conjunctive query answering in DLs extended with such higher-order features. In particular, we show that adding domain metamodeling capabilities to SHIQ (the core of OWL 2) has no impact on the complexity of the various reasoning tasks. This is also true for DL-LiteR (the core of OWL 2 QL) under suitable restrictions on the queries.
Grammatical Error Detection for Corrective Feedback Provision in Oral Conversations
Lee, Sungjin (Pohang University of Science and Technology (POSTECH)) | Noh, Hyungjong (Pohang University of Science and Technology (POSTECH)) | Lee, Kyusong (Pohang University of Science and Technology (POSTECH)) | Lee, Gary Geunbae (Pohang University of Science and Technology (POSTECH))
The demand for computer-assisted language learning systems that can provide corrective feedback on language learners' speaking has increased. However, it is not a trivial task to detect grammatical errors in oral conversations because of the unavoidable errors of automatic speech recognition systems. To provide corrective feedback, a novel method to detect grammatical errors in speaking performance is proposed. The proposed method consists of two sub-models: the grammaticality-checking model and the error-type classification model. We automatically generate grammatical errors that learners are likely to commit and construct error patterns based on the articulated errors. When a particular speech pattern is recognized, the grammaticality-checking model performs a binary classification based on the similarity between the error patterns and the recognition result using the confidence score. The error-type classification model chooses the error type based on the most similar error pattern and the error frequency extracted from a learner corpus. The grammaticality-checking model largely outperformed the two comparative models by 56.36% and 42.61% in F-score while keeping the false positive rate very low. The error-type classification model exhibited very high performance with a 99.6% accuracy rate. Because high precision and a low false positive rate are important criteria for the language-tutoring setting, the proposed method will be helpful for intelligent computer-assisted language learning systems.
Partially Supervised Text Classification with Multi-Level Examples
Liu, Tao (Renmin University of China) | Du, Xiaoyong (Renmin University of China) | Xu, Yongdong (Harbin Institute of Technology) | Li, Minghui (Microsoft) | Wang, Xiaolong (Harbin Institute of Technology)
Partially supervised text classification has received great research attention since it only uses positive and unlabeled examples as training data. This problem can be solved by automatically labeling some negative (and more positive) examples from unlabeled examples before training a text classifier. But it is difficult to guarantee both high quality and quantity of the new labeled examples. In this paper, a multi-level example based learning method for partially supervised text classification is proposed, which can make full use of all unlabeled examples. A heuristic method is proposed to assign possible labels to unlabeled examples and partition them into multiple levels according to their labeling confidence. A text classifier is trained on these multi-level examples using weighted support vector machines. Experiments show that the multi-level example based learning method is effective for partially supervised text classification, and outperforms the existing popular methods such as Biased-SVM, ROC-SVM, S-EM and WL.
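The level-partitioning step described above can be sketched as follows; this is a minimal illustration, assuming a heuristic labeler that attaches a confidence score in [0, 1] to each pseudo-labeled example (the function name and thresholds are illustrative, not taken from the paper):

```python
def partition_by_confidence(scored_examples, thresholds=(0.9, 0.7, 0.5)):
    """Partition pseudo-labeled examples into levels by labeling confidence.

    `scored_examples` is a list of (example, confidence) pairs, where the
    confidence is produced by some heuristic labeling method. The resulting
    levels can then drive per-example weights in a weighted SVM (e.g. via
    scikit-learn's `sample_weight` argument to `SVC.fit`).
    """
    levels = [[] for _ in range(len(thresholds) + 1)]
    for example, conf in scored_examples:
        for i, t in enumerate(thresholds):
            if conf >= t:          # first threshold the confidence clears
                levels[i].append(example)
                break
        else:                      # below every threshold: lowest level
            levels[-1].append(example)
    return levels

# Four pseudo-labeled examples land in four distinct confidence levels:
print(partition_by_confidence([("a", 0.95), ("b", 0.75), ("c", 0.6), ("d", 0.3)]))
```

Training then assigns larger weights to higher-confidence levels, so low-confidence pseudo-labels contribute less to the decision boundary.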
A Local Monte Carlo Tree Search Approach in Deterministic Planning
Xie, Fan (University of Alberta) | Nakhost, Hootan (University of Alberta) | Müller, Martin (University of Alberta)
Much recent work in satisficing planning has aimed at striking a balance between coverage - solving as many problems as possible - and plan quality. Current planners achieve near-perfect coverage on the latest IPC benchmarks. It is therefore natural to investigate their scaling behavior on more difficult instances. Among state-of-the-art planners, LAMA (Richter, Helmert, and Westphal 2008) is able to generate high-quality plans, but its coverage drops off rapidly with increasing problem complexity. The Arvand planner (Nakhost and Müller 2009) scales to much harder instances but generates lower-quality plans. This paper introduces a new algorithm, Monte Carlo Random Walk-based Local Tree Search (MRW-LTS), which uses random walks to selectively build local search trees. Experiments demonstrate that MRW-LTS combines a scaling behavior that is better than LAMA's with a plan quality that is better than Arvand's.
Optimal Packing of High-Precision Rectangles
Huang, Eric (Palo Alto Research Center) | Korf, Richard E. (University of California, Los Angeles)
The rectangle-packing problem consists of finding an enclosing rectangle of smallest area that can contain a given set of rectangles without overlap. Our new benchmark includes rectangles of successively higher precision, challenging the previous state-of-the-art, which enumerates all locations for placing rectangles, as well as all bounding box widths and heights up to the optimal box. We instead limit the rectangles' coordinates and bounding box dimensions to the set of subset sums of the rectangles' dimensions. We also dynamically prune values by learning from infeasible subtrees and constrain the problem by replacing rectangles and empty space with larger rectangles. Compared to the previous state-of-the-art, we test 4,500 times fewer bounding boxes on the high-precision benchmark and solve N = 9 over two orders of magnitude faster. We also present all optimal solutions up to N = 15, which requires 39 bits of precision to solve. Finally, on the open problem of whether or not one can pack a particular infinite series of rectangles into the unit square, we pack the first 50,000 such rectangles with a greedy heuristic and conjecture that the entire infinite series can fit.
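The subset-sum restriction on candidate coordinates can be illustrated with a short sketch. The incremental construction below is the standard dynamic-programming way to enumerate subset sums, shown here only to clarify the idea; it is not the authors' code:

```python
def subset_sums(dims):
    """All subset sums of a list of rectangle dimensions.

    In the approach above, candidate rectangle coordinates and bounding-box
    widths/heights are drawn from this set, rather than from every integer
    position up to the optimal box.
    """
    sums = {0}
    for d in dims:
        # each dimension either contributes to a sum or it does not
        sums |= {s + d for s in sums}
    return sums

# With widths 1, 2, 4 every value 0..7 is a subset sum:
print(sorted(subset_sums([1, 2, 4])))
```

For high-precision instances this set is typically far smaller than the full coordinate range, which is where the reported reduction in tested bounding boxes comes from.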
Limits of Preprocessing
Szeider, Stefan (Vienna University of Technology)
We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning. We show that, subject to a complexity theoretic assumption, none of the considered problems can be reduced by polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, such as induced width or backdoor size. Our results provide a firm theoretical boundary for the performance of polynomial-time preprocessing algorithms for the considered problems.
An Efficient and Complete Approach for Cooperative Path-Finding
Luna, Ryan (University of Nevada, Reno) | Bekris, Kostas E. (University of Nevada, Reno)
Cooperative path-finding can be abstracted as computing non-colliding paths for multiple agents between their start and goal locations on a graph. This work proposes a fast algorithm that can provide completeness guarantees for a general class of problems without any assumptions about the graph's topology. Specifically, the approach can address any solvable instance where there are at most n-2 agents in a graph of size n. The algorithm employs two primitives: a "push" operation where agents move towards their goals up to the point that no progress can be made, and a "swap" operation that allows two agents to swap positions without altering the configuration of other agents. Simulated experiments are provided on hard instances of cooperative path-finding, including comparisons against alternative methods. The results are favorable for the proposed algorithm and show that the technique scales to problems that require high levels of coordination, involving hundreds of agents.
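The "push" primitive can be sketched roughly as below, assuming the graph is an adjacency dict and agent positions are a dict updated in place. This is a simplified illustration of the idea only: the full algorithm also pushes blocking agents onto empty vertices and falls back to the "swap" primitive, both omitted here.

```python
from collections import deque

def shortest_path(adj, start, goal):
    """BFS shortest path on an undirected graph given as an adjacency dict."""
    queue, parent = deque([start]), {start: None}
    while queue:
        v = queue.popleft()
        if v == goal:
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                queue.append(w)
    return None  # goal unreachable

def push(adj, positions, agent, goal):
    """Move `agent` along a shortest path toward `goal`, stopping as soon as
    the next vertex is occupied by another agent. Returns the number of
    steps taken; `positions` (agent -> vertex) is updated in place."""
    occupied = set(positions.values())
    path = shortest_path(adj, positions[agent], goal)
    if path is None:
        return 0
    steps = 0
    for v in path[1:]:
        if v in occupied:   # progress blocked; full algorithm would now
            break           # push the blocker or attempt a swap
        occupied.discard(positions[agent])
        positions[agent] = v
        occupied.add(v)
        steps += 1
    return steps

# On a path graph 1-2-3-4 with agent 'b' parked on vertex 4,
# pushing 'a' from 1 toward 4 advances it to vertex 3 and stops:
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
positions = {"a": 1, "b": 4}
print(push(adj, positions, "a", 4), positions["a"])
```

The completeness argument in the paper rests on the swap primitive never disturbing the final configuration of other agents, which is why the two operations together suffice for any solvable instance with at most n-2 agents.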
Exploiting Problem Symmetries in State-Based Planners
Pochter, Nir (The Hebrew University of Jerusalem) | Zohar, Aviv (Microsoft Research, Silicon Valley) | Rosenschein, Jeffrey S. (The Hebrew University of Jerusalem)
Previous research in Artificial Intelligence has identified the possibility of simplifying planning problems via the identification and exploitation of symmetries. We advance the state of the art in algorithms that exploit symmetry in planning problems by generalizing previous approaches, and applying symmetry reductions to state-based planners. We suggest several algorithms for symmetry exploitation in state-based search, but also provide a comprehensive view through which additional algorithms can be developed and fine-tuned. We evaluate our approach to symmetry exploitation on instances from previous planning competitions, and demonstrate that our algorithms significantly improve the solution time of instances with symmetries.
Incentive-Compatible Escrow Mechanisms
Witkowski, Jens (Albert-Ludwigs-Universität Freiburg) | Seuken, Sven (Harvard University) | Parkes, David C. (Harvard University)
Reputation mechanisms are the most prominent way to establish trust between buyers and sellers on online auction sites. Two drawbacks of this approach are the reliance on the seller being long-lived and the susceptibility to whitewashing. In this paper, we introduce so-called escrow mechanisms that avoid these problems by installing a trusted intermediary that forwards the payment to the seller only if the buyer acknowledges that the good arrived in the promised condition. We address the incentive issues that arise and design an escrow mechanism that is incentive-compatible, efficient, interim individually rational, and ex ante budget-balanced. In contrast to previous work on trust and reputation, our approach does not rely on knowing the sellers' cost functions or the distribution of buyer valuations.
Large Scale Diagnosis Using Associations between System Outputs and Components
Guo, Ting (Jilin University) | Li, Zhanshan (Jilin University) | Guo, Ruizhi (Jilin University) | Zhu, Xingquan (University of Technology, Sydney)
Model-based diagnosis (MBD) uses an abstraction of a system to diagnose possible faulty functions of the underlying system. To improve the solution efficiency for multi-fault diagnosis problems, especially for large-scale systems, this paper proposes a method to induce reasonable diagnosis solutions, under coarse diagnosis, by using the relationships between system outputs and components. Compared to existing diagnosis methods, the proposed framework only needs to consider associations between outputs and components, using an assumption-based truth maintenance system (ATMS) [de Kleer 1986] to obtain the correlated components for every output node. As a result, our method significantly reduces the number of variables required for model diagnosis, which makes it suitable for large-scale circuit systems.