The Dynamic Controllability of Conditional STNs with Uncertainty
Hunsberger, Luke, Posenato, Roberto, Combi, Carlo
Recent attempts to automate business processes and medical-treatment processes have uncovered the need for a formal framework that can accommodate not only temporal constraints, but also observations and actions with uncontrollable durations. To meet this need, this paper defines a Conditional Simple Temporal Network with Uncertainty (CSTNU) that combines the simple temporal constraints from a Simple Temporal Network (STN) with the conditional nodes from a Conditional Simple Temporal Problem (CSTP) and the contingent links from a Simple Temporal Network with Uncertainty (STNU). A notion of dynamic controllability for a CSTNU is defined that generalizes the dynamic consistency of a CSTP and the dynamic controllability of an STNU. The paper also presents some sound constraint-propagation rules for dynamic controllability that are expected to form the backbone of a dynamic-controllability-checking algorithm for CSTNUs.
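The STN layer underlying a CSTNU reduces to shortest-path reasoning: a constraint Y - X <= w becomes an edge X -> Y of weight w in a distance graph, and the network is consistent iff that graph has no negative cycle. A minimal sketch of this standard check (not the paper's dynamic-controllability rules; function and variable names are illustrative):

```python
def stn_consistent(num_nodes, edges):
    """edges: list of (x, y, w) encoding Y - X <= w.
    Returns True iff the distance graph has no negative cycle,
    using all-pairs shortest paths (Floyd-Warshall)."""
    INF = float("inf")
    d = [[INF] * num_nodes for _ in range(num_nodes)]
    for i in range(num_nodes):
        d[i][i] = 0
    for x, y, w in edges:
        d[x][y] = min(d[x][y], w)
    for k in range(num_nodes):
        for i in range(num_nodes):
            for j in range(num_nodes):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # A negative entry on the diagonal signals a negative cycle.
    return all(d[i][i] >= 0 for i in range(num_nodes))

# Consistent: B - A in [1, 10], C - B in [1, 2], C - A <= 12.
print(stn_consistent(3, [(0, 1, 10), (1, 0, -1),
                         (1, 2, 2), (2, 1, -1),
                         (0, 2, 12)]))   # True
# Inconsistent: B - A <= 3 together with B - A >= 5.
print(stn_consistent(2, [(0, 1, 3), (1, 0, -5)]))   # False
```

The CSTNU machinery adds conditional nodes and contingent links on top of this base; the propagation rules in the paper operate on labeled versions of such edges.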
Soft Constraint Logic Programming for Electric Vehicle Travel Optimization
Monreale, Giacoma Valentina, Montanari, Ugo, Hoch, Nicklas
Soft Constraint Logic Programming is a natural and flexible declarative programming formalism which allows one to model and solve real-life problems involving constraints of different types. In this paper, after providing a slightly more general and elegant presentation of the framework, we show how to apply it to the e-mobility problem of coordinating electric vehicles in order to overcome both energetic and temporal constraints and thus reduce their running cost. In particular, we focus on the journey-optimization sub-problem, considering sequences of trips from one of a user's appointments to the next. Solutions provide the best alternatives in terms of time and energy consumption, including route sequences and possible charging events.
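The soft-constraint idea behind such journey optimization can be illustrated with a weighted (min, +) semiring: alternatives are combined by adding their costs, and competing journeys are compared by taking the minimum. The trip data, weights, and names below are invented for the sketch and are not the paper's actual SCLP encoding:

```python
import itertools

# Each leg of the journey offers alternative routes with
# (name, time in minutes, energy in kWh) costs; data is made up.
legs = [
    [("highway", 30, 8.0), ("city", 45, 5.0)],    # appointment 1 -> 2
    [("direct", 20, 6.0), ("scenic", 35, 4.0)],   # appointment 2 -> 3
]

def journey_cost(choice, w_time=1.0, w_energy=5.0):
    # Semiring "times": combine leg costs by (weighted) addition.
    return sum(w_time * t + w_energy * e for _, t, e in choice)

# Semiring "plus": project onto the best complete journey via min.
best = min(itertools.product(*legs), key=journey_cost)
print([name for name, _, _ in best], journey_cost(best))
```

The c-semiring formulation lets the same program answer for other cost structures (e.g. fuzzy or probabilistic) by swapping the two operations, which is the flexibility the paper exploits.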
Balanced K-SAT and Biased random K-SAT on trees
Sumedha, null, Krishnamurthy, Supriya, Sahoo, Sharmistha
We study and solve some variations of the random K-satisfiability problem - balanced K-SAT and biased random K-SAT - on a regular tree, using techniques we have developed earlier (arXiv:1110.2065). In both of these problems, as well as in the variations of them we have examined, we find that the SAT-UNSAT transition obtained on the Bethe lattice matches the exact threshold for the same model on a random graph for K=2 and is very close to the numerical value obtained for K=3. For higher K it deviates from the numerical estimates of the solvability threshold on random graphs, but is very close to the dynamical 1-RSB threshold as obtained from the first non-trivial fixed point of the survey propagation algorithm.
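For concreteness, one common convention for the biased random K-SAT ensemble is that each clause picks K distinct variables and negates each with probability p (p = 1/2 recovers standard random K-SAT). A small generator with a brute-force satisfiability check, viable only for tiny n; the exact ensemble of the paper may differ in details:

```python
import itertools
import random

def biased_ksat(n, m, k, p, rng):
    """m random k-clauses over n variables; each literal is
    negated independently with probability p."""
    inst = []
    for _ in range(m):
        vars_ = rng.sample(range(n), k)
        inst.append([(v, rng.random() < p) for v in vars_])  # (var, negated)
    return inst

def satisfiable(n, inst):
    # Exhaustive search over all 2^n assignments (tiny n only).
    for assign in itertools.product([False, True], repeat=n):
        if all(any(assign[v] != neg for v, neg in cl) for cl in inst):
            return True
    return False

rng = random.Random(0)
inst = biased_ksat(n=8, m=20, k=3, p=0.5, rng=rng)
print(satisfiable(8, inst))
```

Sweeping the clause density m/n and recording the fraction of satisfiable instances is the standard numerical probe of the SAT-UNSAT transition that the tree-based calculations are compared against.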
An Empirical Comparison of V-fold Penalisation and Cross Validation for Model Selection in Distribution-Free Regression
Dhanjal, Charanpal, Baskiotis, Nicolas, Clémençon, Stéphan, Usunier, Nicolas
Model selection is a crucial issue in machine learning, and a wide variety of penalisation methods (with possibly data-dependent complexity penalties) have recently been introduced for this purpose. However, their empirical performance is generally not well documented in the literature. The goal of this paper is to investigate to what extent such recent techniques can be successfully used for tuning both the regularisation and kernel parameters in support vector regression (SVR) and the complexity measure in regression trees (CART). This task is traditionally solved via V-fold cross-validation (VFCV), which gives efficient results for a reasonable computational cost. A disadvantage of VFCV, however, is that the procedure is known to provide an asymptotically suboptimal risk estimate as the number of examples tends to infinity. Recently, a penalisation procedure called V-fold penalisation has been proposed to improve on VFCV, supported by theoretical arguments. Here we report on an extensive set of experiments comparing V-fold penalisation and VFCV for SVR/CART calibration on several benchmark datasets. We highlight cases in which VFCV and V-fold penalisation respectively provide poor estimates of the risk, and we introduce a modified penalisation technique to reduce the estimation error.
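The VFCV baseline the paper compares against is simple to state: split the data into V folds, train on V-1 of them, measure the loss on the held-out fold, and average over folds. A plain-Python sketch with a deliberately trivial "model" (predict the training mean); data and names are illustrative:

```python
def vfold_risk(xs, ys, V, fit, predict):
    """Average held-out squared error over V interleaved folds."""
    n = len(xs)
    folds = [list(range(v, n, V)) for v in range(V)]
    risk = 0.0
    for held in folds:
        train = [i for i in range(n) if i not in held]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        risk += sum((predict(model, xs[i]) - ys[i]) ** 2
                    for i in held) / len(held)
    return risk / V

# Toy estimator: ignore x, predict the mean of the training responses.
def fit_mean(xs, ys):
    return sum(ys) / len(ys)

def predict_mean(model, x):
    return model

risk = vfold_risk([0, 0, 0, 0], [1.0, 2.0, 3.0, 4.0], 2,
                  fit_mean, predict_mean)
print(risk)   # -> 2.0
```

V-fold penalisation replaces this held-out average by the empirical risk plus a data-driven penalty built from the same V resampled fits, which is what the paper's experiments calibrate against VFCV.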
ANOVA kernels and RKHS of zero mean functions for model-based sensitivity analysis
Durrande, Nicolas, Ginsbourger, David, Roustant, Olivier, Carraro, Laurent
Given a reproducing kernel Hilbert space H of real-valued functions and a suitable measure mu over the source space D (a subset of R), we decompose H as the sum of a subspace of functions centered for mu and its orthogonal complement in H. This decomposition leads to a special case of ANOVA kernels, for which the functional ANOVA representation of the best predictor can be elegantly derived, either in an interpolation or a regularization framework. The proposed kernels appear to be particularly convenient for analyzing the effect of each (group of) variable(s) and computing sensitivity indices without recursivity.
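A numerical sketch of one standard centred-kernel construction of this kind: subtracting from k the rank-one term (int k(x,s) dmu(s))(int k(y,t) dmu(t)) / (int int k dmu dmu) yields a kernel k0 whose RKHS contains only mu-centred functions, i.e. int k0(x, s) dmu(s) = 0 for all x. The kernel, bandwidth, and midpoint quadrature on [0,1] below are illustrative choices, not the paper's exact setting:

```python
import math

N = 200
grid = [(i + 0.5) / N for i in range(N)]   # midpoint rule, uniform mu on [0,1]

def k(x, y, ell=0.3):
    # Squared-exponential (Gaussian) kernel.
    return math.exp(-((x - y) / ell) ** 2 / 2)

def kmean(x):
    # Quadrature approximation of int k(x, s) dmu(s).
    return sum(k(x, s) for s in grid) / N

kk = sum(kmean(s) for s in grid) / N       # int int k dmu dmu

def k0(x, y):
    return k(x, y) - kmean(x) * kmean(y) / kk

# The zero-mean property holds exactly in quadrature, up to rounding:
resid = sum(k0(0.37, s) for s in grid) / N
print(abs(resid) < 1e-10)   # True
```

Building a product of such one-dimensional kernels (1 + k0 per variable) gives the ANOVA kernels whose best predictor decomposes term by term, which is what makes the sensitivity indices directly computable.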
Changepoint detection for high-dimensional time series with missing data
Xie, Yao, Huang, Jiaji, Willett, Rebecca
This paper describes a novel approach to change-point detection when the observed high-dimensional data may have missing elements. The performance of classical methods for change-point detection typically scales poorly with the dimensionality of the data, so that a large number of observations are collected after the true change-point before it can be reliably detected. Furthermore, missing components in the observed data handicap conventional approaches. The proposed method addresses these challenges by modeling the dynamic distribution underlying the data as lying close to a time-varying low-dimensional submanifold embedded within the ambient observation space. Specifically, streaming data is used to track a submanifold approximation, measure deviations from this approximation, and calculate a series of statistics of the deviations for detecting when the underlying manifold has changed in a sharp or unexpected manner. The approach described in this paper leverages several recent results in the field of high-dimensional data analysis, including subspace tracking with missing data, multiscale analysis techniques for point clouds, online optimization, and change-point detection performance analysis. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
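The residual-based detection idea can be illustrated in miniature: project each streaming observation onto a subspace estimate and flag a change when the projection residual exceeds a threshold. The paper tracks the subspace online, handles missing entries, and calibrates its statistics; this sketch fixes the subspace and the threshold, and all data is synthetic:

```python
import math
import random

def residual(x, u):
    """Norm of x minus its projection onto the unit vector u."""
    c = sum(a * b for a, b in zip(x, u))
    return math.sqrt(sum((a - c * b) ** 2 for a, b in zip(x, u)))

rng = random.Random(1)
u = (1.0, 0.0, 0.0)                 # assumed (fixed) subspace estimate

def sample(shifted):
    t = rng.gauss(0, 1)
    off = 2.0 if shifted else 0.0   # after the change, mass moves off the subspace
    return (t, off + rng.gauss(0, 0.1), rng.gauss(0, 0.1))

stream = ([sample(False) for _ in range(100)] +
          [sample(True) for _ in range(100)])
res = [residual(x, u) for x in stream]

thresh = 1.0                        # fixed here; calibrated from data in practice
alarm = next(i for i, r in enumerate(res) if r > thresh)
print(alarm)
```

Before the change the residuals are on the order of the 0.1 noise scale, so the first crossing of the threshold locates the change-point at index 100; the paper's statistics replace this single-sample rule with windowed deviations robust to slow manifold drift.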
A simple method for decision making in robocup soccer simulation 3d environment
Maleki, Khashayar Niki, Valipour, Mohammad Hadi, Mokari, Sadegh, Ashrafi, Roohollah Yeylaghi, Jamali, Mohammad Reza, Lucas, Caro
In this paper, new hierarchical hybrid fuzzy-crisp methods for decision making and action selection of an agent in the soccer simulation 3D environment are presented. First, the skills of an agent are introduced, implemented, and classified in two layers: the basic skills and the high-level skills. In the second layer, a two-phase mechanism for decision making is introduced. In phase one, some useful methods are implemented which check the agent's situation for performing the required skills. In the next phase, the team strategy, team formation, the agent's role, and the agent's positioning system are introduced. A fuzzy-logic approach is employed to recognize the team strategy and, furthermore, to tell the player the best position to move to. Finally, we evaluated our implemented algorithm in the RoboCup Soccer Simulation 3D environment, and the results showed the efficiency of the introduced methodology.
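A toy two-phase selector loosely in the spirit described above: crisp preconditions first filter the feasible skills, then fuzzy memberships over the game state score the survivors. The features, membership shapes, and rules below are invented for the sketch and are not the authors' actual system:

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def select_action(dist_to_ball, dist_to_goal, stamina):
    # Phase 1: crisp feasibility checks.
    feasible = {
        "kick": dist_to_ball < 1.0,
        "dribble": dist_to_ball < 1.0 and stamina > 0.2,
        "move_to_ball": dist_to_ball >= 1.0,
    }
    # Phase 2: fuzzy scoring of the feasible skills (distances in metres).
    scores = {
        "kick": tri(dist_to_goal, 0, 0, 20),        # near the goal -> shoot
        "dribble": tri(dist_to_goal, 10, 30, 60),   # midfield -> dribble
        "move_to_ball": tri(dist_to_ball, 1, 30, 100),
    }
    cand = {a: scores[a] for a, ok in feasible.items() if ok}
    return max(cand, key=cand.get)

print(select_action(0.5, 5.0, 0.9))    # near ball and goal -> kick
print(select_action(12.0, 40.0, 0.9))  # far from ball -> move_to_ball
```

The hierarchical structure keeps the crisp layer cheap to evaluate every cycle, while the fuzzy layer resolves the remaining ambiguity with graded preferences rather than hard thresholds.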
Testing the AgreementMaker System in the Anatomy Task of OAEI 2012
Faria, Daniel, Pesquita, Catia, Santos, Emanuel, Couto, Francisco M., Stroe, Cosmin, Cruz, Isabel F.
The AgreementMaker system was the leading system in the anatomy task of the Ontology Alignment Evaluation Initiative (OAEI) competition in 2011. While AgreementMaker did not compete in OAEI 2012, here we report on its performance in the 2012 anatomy task, using the same configurations of AgreementMaker submitted to OAEI 2011. Additionally, we test AgreementMaker using an updated version of the UBERON ontology as a mediating ontology, with otherwise identical configurations. AgreementMaker achieved an F-measure of 91.8% with the 2011 configurations, and an F-measure of 92.2% with the updated UBERON ontology. Thus, AgreementMaker would have been the second-best system had it competed in the anatomy task of OAEI 2012, only 0.1% below the F-measure of the best system.
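For reference, the F-measure used to score alignment tasks of this kind is the harmonic mean of precision and recall of the produced alignment against the reference alignment. A minimal computation with toy mappings (the identifiers are invented):

```python
def f_measure(found, reference):
    """Harmonic mean of precision and recall of a set of mappings."""
    tp = len(found & reference)               # correctly found mappings
    precision = tp / len(found)
    recall = tp / len(reference)
    return 2 * precision * recall / (precision + recall)

found = {("ma:1", "nci:1"), ("ma:2", "nci:2"), ("ma:3", "nci:9")}
reference = {("ma:1", "nci:1"), ("ma:2", "nci:2"), ("ma:4", "nci:4")}
print(round(f_measure(found, reference), 3))   # -> 0.667
```

Here two of the three proposed mappings are correct and one reference mapping is missed, so precision and recall are both 2/3 and the F-measure is 2/3 as well.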
On the probabilistic continuous complexity conjecture
In this paper we prove the probabilistic continuous complexity conjecture. In continuous complexity theory, this conjecture states that the complexity of solving a continuous problem with probability approaching 1 converges (in this limit) to the complexity of solving the same problem in the worst case. We prove that the conjecture holds if and only if the space of problem elements is uniformly convex. The non-uniformly convex case admits a striking counterexample in the problem of identifying a Brownian path in Wiener space, where the probabilistic complexity is shown to converge to only half of the worst-case complexity in this limit.