Chen

AAAI Conferences

Random feature maps are widely used to scale up kernel methods. However, employing a large number of mapped features to ensure an accurate approximation still makes training time-consuming. In this paper, we aim to improve the training efficiency of shift-invariant kernels by using fewer, more informative features without sacrificing precision. We propose a novel feature map method that extends Random Kitchen Sinks with fast data-dependent subspace embedding to generate the desired features. More specifically, we describe two algorithms with different tradeoffs between running speed and accuracy, and prove that O(l) features induced by them perform as accurately as O(l^2) features produced by other feature map methods. In addition, several experiments conducted on real-world datasets demonstrate the superiority of our proposed algorithms.
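
For context, the baseline this work extends is the Random Kitchen Sinks / random Fourier feature map of Rahimi and Recht; the sketch below shows that baseline for a Gaussian kernel, not the paper's data-dependent subspace embedding. The function name rff_features and the parameter choices are illustrative.

```python
import numpy as np

def rff_features(X, D=256, gamma=1.0, seed=0):
    """Map X (n x d) to D random Fourier features approximating the
    Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density (a Gaussian here),
    # phases drawn uniformly -- the Random Kitchen Sinks recipe.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Inner products of mapped features approximate kernel values, so a linear
# model trained on Z stands in for kernel ridge regression / kernel SVM.
X = np.random.default_rng(1).normal(size=(5, 3))
Z = rff_features(X, D=2000, gamma=0.5)
K_approx = Z @ Z.T
```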


Ding

AAAI Conferences

We propose a deep learning method for event-driven stock market prediction. First, events are extracted from news text and represented as dense vectors, trained using a novel neural tensor network. Second, a deep convolutional neural network is used to model both the short-term and long-term influences of events on stock price movements. Experimental results show that our model achieves nearly 6% improvement on both S&P 500 index prediction and individual stock prediction, compared to state-of-the-art baseline methods.
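
As a rough illustration of the event-embedding component, the sketch below implements a generic neural tensor network layer in the style of Socher et al., which combines two embeddings through a bilinear tensor term plus a linear term. The paper's actual architecture (and its convolutional model over long- and short-term event sequences) is not reproduced here; all names and dimensions are assumptions.

```python
import numpy as np

def ntn_compose(e1, e2, T, W, b):
    """Generic neural tensor network layer: combines two embeddings with a
    bilinear tensor term plus a standard linear term.
        out_k = tanh( e1^T T[k] e2 + W[k] @ [e1; e2] + b[k] )
    """
    bilinear = np.einsum('i,kij,j->k', e1, T, e2)   # one score per tensor slice
    linear = W @ np.concatenate([e1, e2]) + b
    return np.tanh(bilinear + linear)

d, k = 50, 50                                       # embedding / output sizes (illustrative)
rng = np.random.default_rng(0)
actor, action = rng.normal(size=d), rng.normal(size=d)
T = rng.normal(scale=0.1, size=(k, d, d))
W = rng.normal(scale=0.1, size=(k, 2 * d))
b = np.zeros(k)

# Compose (actor, action) first; the (result, object) pair would be composed
# the same way to yield a dense vector for the whole event triple.
r1 = ntn_compose(actor, action, T, W, b)
```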


Chen

AAAI Conferences

Person re-identification concerns the matching of pedestrians across disjoint camera views. Due to changes in viewpoint, lighting conditions and camera characteristics, images of the same person from different views always appear different, and thus feature representations of the same person across disjoint camera views follow different distributions. In this work, we propose an effective, low-cost and easy-to-apply scheme called the Mirror Representation, which embeds the view-specific feature transformation and enables alignment of the feature distributions across disjoint views for the same person. The proposed Mirror Representation is also designed to explicitly model the relation between different view-specific transformations and meanwhile control their discrepancy. With our Mirror Representation, we can significantly enhance existing subspace/metric learning models; in particular, we show through extensive experiments on VIPeR, PRID450S and CUHK01 that kernel Marginal Fisher Analysis significantly outperforms the current state-of-the-art methods.
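
The Mirror Representation itself is not spelled out in the abstract, so the sketch below only illustrates the general idea of embedding a view-specific transformation via feature augmentation: a shared copy of the feature plus per-camera slots, so that a single metric learned on the augmented space can act differently per view. The function view_augment and the two-camera setup are purely illustrative assumptions, not the paper's construction.

```python
import numpy as np

def view_augment(x, view):
    """Illustrative view-specific embedding (not the paper's Mirror
    Representation): a shared copy of the feature plus a slot that is
    populated only for the camera view the sample came from. A linear
    metric learned on this space can apply a view-dependent correction."""
    shared = x
    cam_a = x if view == 'a' else np.zeros_like(x)
    cam_b = x if view == 'b' else np.zeros_like(x)
    return np.concatenate([shared, cam_a, cam_b])

x_probe = np.random.default_rng(0).normal(size=4)
x_gallery = np.random.default_rng(1).normal(size=4)
z_a = view_augment(x_probe, 'a')      # probe image from camera A
z_b = view_augment(x_gallery, 'b')    # gallery image from camera B
```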


Mahmood

AAAI Conferences

Learning representations from data is one of the fundamental problems of artificial intelligence and machine learning. Many different approaches exist for learning representations, but what constitutes a good representation is not yet well understood. In this work, we view the problem of representation learning as one of learning features (e.g., hidden units of neural networks) such that performance of the underlying base system continually improves. We study an important case where learning is done fully online (i.e., on an example-by-example basis) from an unending stream of data, and the computational cost of the learning element should neither grow with time nor be much more than that of the performance element. Few methods can be used effectively in this case.


Holte

AAAI Conferences

In his 1997 paper on solving Rubik's Cube optimally using IDA* and pattern database heuristics (PDBs), Rich Korf conjectured that there was an inverse relationship between the size of a PDB and the amount of time required for IDA* to solve a problem instance on average. In the current paper, I examine the implications of this relationship, in particular how it limits the ability of abstraction-based heuristic methods, such as PDBs, to scale to larger problems. My overall conclusion is that abstraction will play an important, but auxiliary role in heuristic search systems of the future, in contrast to the primary role it played in Korf's Rubik's Cube work and in much work since.


van Seijen

AAAI Conferences

This paper introduces a novel approach for abstraction selection in reinforcement learning problems modelled as factored Markov decision processes (MDPs), in which a state is described by a set of state components. In abstraction selection, an agent must choose an abstraction from a set of candidate abstractions, each built from a different combination of state components.
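
As a minimal illustration of the setting, the sketch below enumerates candidate abstractions as subsets of the state components of a factored state and projects a state onto one of them. The paper's actual selection criterion is not reproduced, and the component names are made up.

```python
from itertools import combinations

# A factored state: each state component (variable) has a value.
state = {'x': 3, 'y': 1, 'fuel': 7}

# Candidate abstractions: non-empty subsets of the state components.
components = sorted(state)
candidates = [frozenset(subset)
              for r in range(1, len(components) + 1)
              for subset in combinations(components, r)]

def abstract_state(state, abstraction):
    """Project a full factored state onto the components kept by the abstraction."""
    return tuple(sorted((k, state[k]) for k in abstraction))

# An agent doing abstraction selection would evaluate a value function over
# abstract_state(s, a) for each candidate abstraction a and keep the best one.
print(abstract_state(state, frozenset({'x', 'fuel'})))   # (('fuel', 7), ('x', 3))
```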


Surynek

AAAI Conferences

We suggest employing propositional satisfiability techniques to solve the problem of cooperative multi-robot path-finding optimally. Several propositional encodings of path-finding problems have been suggested recently. In this paper we evaluate how efficient these encodings are at solving certain cases of cooperative path-finding problems optimally. In particular, we consider the case where robots have multiple optional locations as their targets.
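
The sketch below shows the flavour of such a propositional encoding: a time-expanded Boolean variable at(r, v, t) per robot, vertex and time step, with "exactly one vertex per robot" and vertex-collision clauses emitted as DIMACS-style integer literals. It is a generic illustration, not one of the specific encodings evaluated in the paper.

```python
from itertools import combinations, product

robots, vertices, horizon = range(2), range(3), range(3)

# Boolean variable at(r, v, t): robot r occupies vertex v at time step t.
var_ids = {}
def at(r, v, t):
    return var_ids.setdefault((r, v, t), len(var_ids) + 1)

clauses = []   # CNF clauses as lists of non-zero integers (DIMACS convention)

for r, t in product(robots, horizon):
    # Each robot occupies at least one vertex at every time step ...
    clauses.append([at(r, v, t) for v in vertices])
    # ... and at most one.
    for v1, v2 in combinations(vertices, 2):
        clauses.append([-at(r, v1, t), -at(r, v2, t)])

# No two robots share a vertex at the same time step.
for v, t in product(vertices, horizon):
    for r1, r2 in combinations(robots, 2):
        clauses.append([-at(r1, v, t), -at(r2, v, t)])

# Edge-movement constraints between consecutive steps and goal clauses (e.g. a
# disjunction over a robot's optional target vertices at the final step) are
# added in the same style, and the resulting CNF is handed to a SAT solver.
```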


Sturtevant

AAAI Conferences

Pattern databases (PDBs) have been widely used as heuristics for many types of search spaces, but they have always been computed so as to fit in the main memory of the machine using the PDB. This paper studies how external-memory PDBs can be used. It presents results both of using hard disk drives and solid-state drives directly to access the data, and of loading just a portion of the PDB into RAM. For the time being, all of these approaches are inferior to building the largest PDB that fits into RAM.
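
One simple way to consult a PDB that does not fit in RAM is to keep it as a flat on-disk array of one-byte heuristic values and memory-map it, so each lookup is a single indexed read that the operating system pages in from the HDD or SSD. The sketch below shows that pattern; the file layout and function names are assumptions, not the paper's implementation.

```python
import mmap
import numpy as np

# Assumed layout: 'pdb.bin' is a flat array of uint8 heuristic values indexed
# by the ranking (perfect hash) of the abstract state.
def build_dummy_pdb(path, entries=1_000_000):
    np.random.default_rng(0).integers(0, 20, size=entries, dtype=np.uint8).tofile(path)

def open_pdb(path):
    f = open(path, 'rb')
    return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

def pdb_lookup(pdb, abstract_rank):
    # A single one-byte read; the OS pages the needed block in from HDD/SSD,
    # so only the touched parts of the PDB ever occupy RAM.
    return pdb[abstract_rank]

build_dummy_pdb('pdb.bin')
pdb = open_pdb('pdb.bin')
h = pdb_lookup(pdb, 123_456)   # heuristic value for the abstract state ranked 123456
```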


Sadeqi

AAAI Conferences

A mutex pair in a state space is a pair of assignments of values to state variables that does not occur in any reachable state. Detecting mutex pairs is a problem that has been addressed frequently in the planning literature. In this paper, we present the Coarse Abstraction (CA) method, a new efficient method for detecting mutex pairs in state spaces represented with multi-valued variables. CA detects mutex pairs by exhaustive search in a collection of very small abstract state spaces. While in general CA may miss some mutex pairs, we provide a formal guarantee that CA finds all mutex pairs under a simple and quite natural condition. We prove that this condition holds, and hence that all mutex pairs are found, for a range of common benchmark domains. We also show that CA can find all mutex pairs even when this condition is not satisfied. Finally, we show that CA's effectiveness depends on how the domain is represented, and that it can fail to find mutex pairs in some domains and representations.
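
The sketch below illustrates the core soundness argument: enumerate the reachable states of a small abstract space exhaustively, record which value pairs co-occur, and report the pairs that never do. Because abstract reachability over-approximates concrete reachability, every reported pair is a genuine mutex. This is only the general idea, not the paper's CA algorithm; the toy domain and names are made up.

```python
from itertools import combinations

def reachable_abstract_states(init, successors):
    """Exhaustive search over a small abstract state space."""
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def mutex_pairs(abstract_states, domains):
    """Value pairs (for different variables) that never co-occur in any
    reachable abstract state. Soundness: abstraction over-approximates
    reachability, so a pair absent here is also absent from every
    reachable concrete state."""
    seen_pairs = set()
    for s in abstract_states:                        # s: tuple of (var, value)
        for a, b in combinations(sorted(s), 2):
            seen_pairs.add((a, b))
    all_pairs = {((v1, x), (v2, y))
                 for (v1, vals1), (v2, vals2) in combinations(sorted(domains.items()), 2)
                 for x in vals1 for y in vals2}
    return all_pairs - seen_pairs

# Tiny illustrative domain: two binary variables whose values always flip together.
domains = {'a': [0, 1], 'b': [0, 1]}
init = (('a', 0), ('b', 0))
def successors(s):
    d = dict(s)
    return [(('a', 1 - d['a']), ('b', 1 - d['b']))]

states = reachable_abstract_states(init, successors)
print(mutex_pairs(states, domains))   # {(('a', 0), ('b', 1)), (('a', 1), ('b', 0))}
```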


Kumar

AAAI Conferences

We pose the identified classes of problems within the general framework of Weighted Constraint Satisfaction Problems (WCSPs), reformulated as minimum weighted vertex cover problems. We examine the Constraint Composite Graphs (CCGs) associated with these WCSPs and provide simple arguments for establishing their tractability. We construct simple - almost trivial - bipartite graph representations for submodular cost functions, and reformulate these WCSPs as max-flow problems on bipartite graphs. By doing this, we achieve better time complexities than state-of-the-art algorithms. We also use CCGs to exploit planarity in variable interaction graphs, and provide algorithms with significantly improved time complexities for classes of submodular constraints. Moreover, our framework for exploiting planarity is not limited to submodular constraints. Our work confirms the usefulness of studying CCGs associated with combinatorial problems modeled as WCSPs.
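
The reduction alluded to at the end, minimum weighted vertex cover on a bipartite graph solved as a max-flow / min-cut problem, can be sketched as follows: connect a source to each left vertex with capacity equal to its weight, each right vertex to a sink likewise, and keep the original edges uncapacitated; the minimum s-t cut then selects the cover. The CCG construction itself is not reproduced, and the helper below (using networkx) is illustrative.

```python
import networkx as nx

def bipartite_min_weighted_vertex_cover(left_w, right_w, edges):
    """Minimum weighted vertex cover of a bipartite graph via min s-t cut.
    left_w / right_w: dicts vertex -> weight; edges: (left, right) pairs."""
    G = nx.DiGraph()
    for u, w in left_w.items():
        G.add_edge('s', ('L', u), capacity=w)
    for v, w in right_w.items():
        G.add_edge(('R', v), 't', capacity=w)
    for u, v in edges:
        G.add_edge(('L', u), ('R', v))        # no capacity attribute = infinite
    cut_value, (S, T) = nx.minimum_cut(G, 's', 't')
    # A left vertex is in the cover iff its source edge is cut (it lands in T);
    # a right vertex is in the cover iff its sink edge is cut (it stays in S).
    cover = [u for u in left_w if ('L', u) in T] + [v for v in right_w if ('R', v) in S]
    return cut_value, cover

weight, cover = bipartite_min_weighted_vertex_cover(
    {'a': 2, 'b': 1}, {'x': 3, 'y': 1},
    [('a', 'x'), ('a', 'y'), ('b', 'y')])
print(weight, cover)    # 3 ['a', 'b'] -- covers every edge with minimum total weight
```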