
Collaborating Authors

Qu, Qiang


Towards Understanding Chinese Checkers with Heuristics, Monte Carlo Tree Search, and Deep Reinforcement Learning

arXiv.org Machine Learning

The game of Chinese Checkers is a challenging traditional board game of perfect information that differs from other traditional games in two main respects: first, unlike Chess, all checkers remain in the game indefinitely, so the branching factor of the search tree does not decrease as the game progresses; second, unlike Go, there is no upper bound on the depth of the search tree, since repetitions and backward movements are allowed. Therefore, even in a restricted game instance, the state space of the game can still be unbounded, making it challenging for a computer program to excel. In this work, we present an approach that effectively combines heuristics, Monte Carlo tree search, and deep reinforcement learning to build a Chinese Checkers agent without using any human game-play data. Experimental results show that our agent is competent across different scenarios and reaches the level of experienced human players.
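As a hedged illustration of how such a combination can fit together, the sketch below implements a generic PUCT-style tree search guided by a (policy, value) network, in the spirit of AlphaZero-like agents. The Game interface, the net callable, and the two-player sign-flipping backup are all illustrative assumptions; the paper's actual heuristics, network architecture, and multi-player handling are not specified in the abstract.

```python
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a) supplied by the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)
        self.children = {}        # action -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def puct_score(parent, child, c_puct=1.5):
    # Mean value plus an exploration bonus that decays with child visits.
    u = c_puct * child.prior * math.sqrt(parent.visits + 1) / (1 + child.visits)
    return child.value() + u

def mcts(root_state, game, net, num_simulations=200):
    """Assumed interfaces: game.legal_actions(s), game.next_state(s, a),
    game.is_terminal(s), game.terminal_value(s);
    net(s) -> ({action: prob}, value)."""
    root = Node(prior=1.0)
    for _ in range(num_simulations):
        node, state, path = root, root_state, [root]
        # Selection: descend along maximal PUCT scores until reaching a leaf.
        while node.children:
            parent = node
            action, node = max(parent.children.items(),
                               key=lambda kv: puct_score(parent, kv[1]))
            state = game.next_state(state, action)
            path.append(node)
        # Expansion/evaluation: true outcome at terminals, network estimate otherwise.
        if game.is_terminal(state):
            value = game.terminal_value(state)
        else:
            priors, value = net(state)
            for a in game.legal_actions(state):
                node.children[a] = Node(prior=priors.get(a, 1e-3))
        # Backup: flip the sign each ply (a two-player assumption; Chinese
        # Checkers also has multi-player variants).
        for n in reversed(path):
            n.visits += 1
            n.value_sum += value
            value = -value
    # Play the most-visited root action, the usual choice at evaluation time.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```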


N-fold Superposition: Improving Neural Networks by Reducing the Noise in Feature Maps

arXiv.org Machine Learning

Observing that the Fully Connected (FC) layer limits the performance of Convolutional Neural Networks (CNNs), this paper develops a method that improves the coupling between the convolutional layers and the FC layer by reducing the noise in Feature Maps (FMs). Our approach has three steps. First, we split all the FMs equally into n blocks. Then, a weighted summation of the FMs at the same position in all blocks forms a new block of FMs. Finally, we replicate this new block n times and concatenate the copies as the input to the FC layer. Sharing FMs in this way noticeably reduces their noise and prevents any single FM from dominating particular weights of the hidden layers, which guards against overfitting to some extent. Using Fermat's lemma, we prove that this method widens the range of global minima of the loss function, which makes it easier for neural networks to converge and accelerates the convergence process. The method does not significantly increase the number of network parameters (only a few coefficients are added), and experiments demonstrate that it increases convergence speed and improves the classification performance of neural networks.
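To make the three steps concrete, here is a minimal NumPy sketch of one plausible reading of the operation. The function name, tensor shapes, and default uniform weights are assumptions, since the abstract does not fix an API; in the paper the weights would presumably be learned coefficients.

```python
import numpy as np

def n_fold_superposition(fmaps, n, weights=None):
    """fmaps: (C, H, W) feature maps with C divisible by n (assumed layout)."""
    c, h, w = fmaps.shape
    assert c % n == 0, "channel count must split evenly into n blocks"
    blocks = fmaps.reshape(n, c // n, h, w)        # step 1: split into n blocks
    if weights is None:
        weights = np.full(n, 1.0 / n)              # uniform weights as a default
    fused = np.tensordot(weights, blocks, axes=1)  # step 2: weighted sum over blocks
    return np.tile(fused, (n, 1, 1))               # step 3: replicate n copies

# Example: 8 feature maps of size 4x4 folded into n=2 blocks.
x = np.random.randn(8, 4, 4).astype(np.float32)
y = n_fold_superposition(x, n=2)
print(y.shape)  # (8, 4, 4) -- same shape as the input, ready for the FC layer
```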


Negative-Aware Influence Maximization on Social Networks

AAAI Conferences

How can we minimize the impact of negative users within the maximal set of influenced users? Influence Maximization (IM) is important for various applications, yet few studies consider the negative impact of some of the influenced users. We propose a negative-aware influence maximization problem that accounts for users' negative impact, along with a novel algorithm to solve it. Experiments on real-world datasets show that the proposed algorithm achieves a 70% average improvement in expected influence over rival methods.
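The abstract does not spell out the diffusion model or the algorithm, so the following is only a hedged sketch of one plausible formulation: greedy seed selection that maximizes expected spread minus a penalty on activated negative users, estimated by Monte Carlo simulation of the independent-cascade model. The graph representation, the penalty term, and all parameters are illustrative assumptions.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """graph: {node: [neighbors]}; returns the set of nodes activated in one
    run of the independent-cascade model with uniform edge probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v in graph.get(u, []):
            if v not in active and random.random() < p:
                active.add(v)
                frontier.append(v)
    return active

def negative_aware_greedy(graph, negatives, k, penalty=2.0, runs=200):
    negatives = set(negatives)
    seeds = set()
    candidates = set(graph) - negatives          # never seed a known negative user
    for _ in range(k):
        def score(s):
            # Expected influence minus a penalty for each activated negative user.
            total = 0.0
            for _ in range(runs):
                activated = simulate_ic(graph, seeds | {s})
                total += len(activated) - penalty * len(activated & negatives)
            return total / runs
        seeds.add(max(candidates - seeds, key=score))
    return seeds
```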


Generative Adversarial Network for Abstractive Text Summarization

AAAI Conferences

In this paper, we propose an adversarial process for abstractive text summarization in which we simultaneously train a generative model G and a discriminative model D. In particular, we build the generator G as a reinforcement learning agent that takes the raw text as input and predicts an abstractive summary. We also build a discriminator that attempts to distinguish the generated summary from the ground-truth summary. Extensive experiments demonstrate that our model achieves ROUGE scores competitive with state-of-the-art methods on the CNN/Daily Mail dataset. Qualitatively, we show that our model generates more abstractive, readable, and diverse summaries.
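As a rough, hedged sketch of what one adversarial step could look like under a SeqGAN-style reading of this setup (the discriminator's score used as the REINFORCE reward for the generator), the PyTorch code below assumes that generator.sample(article) returns sampled token ids together with their per-token log-probabilities, and that discriminator(article, summary) returns the probability that the summary is human-written. None of these interfaces or losses are taken from the paper.

```python
import torch

def adversarial_step(generator, discriminator, g_opt, d_opt, article, reference):
    # --- Discriminator update: ground-truth summaries vs. sampled ones ---
    with torch.no_grad():
        fake, _ = generator.sample(article)            # assumed interface
    d_real = discriminator(article, reference)         # assumed: prob of "real"
    d_fake = discriminator(article, fake)
    d_loss = -(torch.log(d_real + 1e-8) + torch.log(1.0 - d_fake + 1e-8)).mean()
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # --- Generator update: REINFORCE with the discriminator score as reward ---
    sample, log_probs = generator.sample(article)      # log-probs per token
    reward = discriminator(article, sample).detach()   # no gradient through D here
    g_loss = -(reward * log_probs.sum(dim=-1)).mean()  # maximize expected reward
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```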