Collaborating Authors

 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences


Deep Reinforcement Learning for Unsupervised Video Summarization With Diversity-Representativeness Reward

AAAI Conferences

Video summarization aims to facilitate large-scale video browsing by producing short, concise summaries that are diverse and representative of original videos. In this paper, we formulate video summarization as a sequential decision-making process and develop a deep summarization network (DSN) to summarize videos. DSN predicts for each video frame a probability, which indicates how likely the frame is to be selected, and then takes actions based on the probability distributions to select frames, forming video summaries. To train our DSN, we propose an end-to-end, reinforcement learning-based framework, where we design a novel reward function that jointly accounts for diversity and representativeness of generated summaries and does not rely on labels or user interactions at all. During training, the reward function judges how diverse and representative the generated summaries are, while DSN strives to earn higher rewards by learning to produce more diverse and more representative summaries. Since labels are not required, our method can be fully unsupervised. Extensive experiments on two benchmark datasets show that our unsupervised method not only outperforms other state-of-the-art unsupervised methods, but also is comparable to, or even superior to, most published supervised approaches.
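The joint reward described above can be sketched as two terms: a diversity term that measures pairwise dissimilarity among the selected frames, and a representativeness term that measures how well the selection covers all frames. The following is a minimal illustrative sketch, assuming L2-normalized frame features; the function names and exact distance choices are assumptions, not the paper's implementation.

```python
import numpy as np

def diversity_reward(feats, selected):
    """Mean pairwise dissimilarity (1 - cosine similarity) among selected frames."""
    sel = feats[selected]
    if len(sel) < 2:
        return 0.0
    sim = sel @ sel.T                       # cosine similarities (unit-norm features assumed)
    n = len(sel)
    off_diag = sim[~np.eye(n, dtype=bool)]  # drop self-similarities
    return float(np.mean(1.0 - off_diag))

def representativeness_reward(feats, selected):
    """exp(-mean distance) from every frame to its nearest selected frame."""
    sel = feats[selected]
    d = np.linalg.norm(feats[:, None, :] - sel[None, :, :], axis=-1)
    return float(np.exp(-d.min(axis=1).mean()))

def dr_reward(feats, selected):
    """Diversity-representativeness reward: the sum of both terms."""
    return diversity_reward(feats, selected) + representativeness_reward(feats, selected)
```

A policy-gradient learner would then use `dr_reward` as the scalar return for the frame-selection actions sampled from DSN's probabilities.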


LSTD: A Low-Shot Transfer Detector for Object Detection

AAAI Conferences

Recent advances in object detection are mainly driven by deep learning with large-scale detection benchmarks. However, the fully-annotated training set is often limited for a target detection task, which may deteriorate the performance of deep detectors. To address this challenge, we propose a novel low-shot transfer detector (LSTD) in this paper, where we leverage rich source-domain knowledge to construct an effective target-domain detector with very few training examples. The main contributions are described as follows. First, we design a flexible deep architecture of LSTD to alleviate transfer difficulties in low-shot detection. This architecture can integrate the advantages of both SSD and Faster RCNN in a unified deep framework. Second, we introduce a novel regularized transfer learning framework for low-shot detection, where the transfer knowledge (TK) and background depression (BD) regularizations are proposed to leverage object knowledge respectively from source and target domains, in order to further enhance fine-tuning with a few target images. Finally, we examine our LSTD on a number of challenging low-shot detection experiments, where LSTD outperforms other state-of-the-art approaches. The results demonstrate that LSTD is a preferable deep detector for low-shot scenarios.
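The regularized fine-tuning objective can be pictured as the detection loss plus two penalty terms. The sketch below is an illustrative assumption of how such terms could look: the background-depression (BD) term penalizes feature activations outside ground-truth object regions, and the transfer-knowledge (TK) term is a soft-label cross-entropy against the source detector's predictions. Names and weightings are ours, not the paper's code.

```python
import numpy as np

def bd_regularizer(feature_map, object_mask):
    """Background depression: squared L2 norm of activations on background locations."""
    background = feature_map * (1.0 - object_mask)
    return float(np.sum(background ** 2))

def tk_regularizer(target_logits, source_probs, temperature=2.0):
    """Transfer knowledge: cross-entropy between softened target predictions
    and the source detector's soft labels."""
    z = target_logits / temperature
    z = z - z.max()                         # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return float(-np.sum(source_probs * np.log(p + 1e-12)))

def lstd_loss(det_loss, feature_map, object_mask, target_logits, source_probs,
              lam_bd=0.5, lam_tk=0.5):
    """Total fine-tuning objective: detection loss + weighted BD and TK penalties."""
    return (det_loss
            + lam_bd * bd_regularizer(feature_map, object_mask)
            + lam_tk * tk_regularizer(target_logits, source_probs))
```

Both regularizers act only during fine-tuning on the few target images, steering the detector toward source knowledge while suppressing background clutter.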


Recurrently Aggregating Deep Features for Salient Object Detection

AAAI Conferences

Salient object detection is a fundamental yet challenging problem in computer vision, aiming to highlight the most visually distinctive objects or regions in an image. Recent works benefit from the development of fully convolutional neural networks (FCNs) and achieve great success by integrating features from multiple layers of FCNs. However, the integrated features tend to include non-salient regions (due to low-level features of the FCN) or lose details of salient objects (due to high-level features of the FCN) when producing the saliency maps. In this paper, we develop a novel deep saliency network equipped with recurrently aggregated deep features (RADF) to more accurately detect salient objects from an image by fully exploiting the complementary saliency information captured in different layers. The RADF utilizes the multi-level features integrated from different layers of an FCN to recurrently refine the features at each layer, suppressing the non-salient noise in the low-level features of the FCN and enriching the salient details of the features at high layers. We perform experiments to evaluate the effectiveness of the proposed network on 5 widely used saliency detection benchmarks and compare it with 15 state-of-the-art methods. Our method ranks first on 4 of the 5 datasets and second on the remaining one.
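The recurrent refinement idea can be sketched in a few lines: fuse features from all layers into one integrated representation, feed it back to refine each layer, and repeat. The fusion below is a simple averaged-integration step, an assumption for illustration rather than the paper's exact RADF module, and all layer features are assumed already resized to a common spatial size.

```python
import numpy as np

def recurrent_aggregate(layer_feats, steps=3):
    """Recurrently refine each layer's features with the multi-level integration."""
    feats = [f.copy() for f in layer_feats]
    for _ in range(steps):
        integrated = np.mean(feats, axis=0)                   # fuse all levels
        feats = [0.5 * f + 0.5 * integrated for f in feats]   # refine each level
    return feats
```

Each iteration pulls every layer's features toward the shared integration, which is the mechanism by which low-level noise is suppressed and high-level features regain detail.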


Generative Adversarial Network for Abstractive Text Summarization

AAAI Conferences

In this paper, we propose an adversarial process for abstractive text summarization, in which we simultaneously train a generative model G and a discriminative model D. In particular, we build the generator G as an agent of reinforcement learning, which takes the raw text as input and generates the abstractive summary. We also build a discriminator which attempts to distinguish the generated summary from the ground-truth summary. Extensive experiments demonstrate that our model achieves ROUGE scores competitive with state-of-the-art methods on the CNN/Daily Mail dataset. Qualitatively, we show that our model is able to generate more abstractive, readable and diverse summaries.
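The core training signal can be illustrated with a toy sketch: the generator is updated by REINFORCE, using the discriminator's probability of "real" as the reward. A tiny bandit-style policy over three candidate summaries stands in for the seq2seq generator here; the fixed discriminator scores are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(summary_id):
    """Stand-in D: pretend candidate summary 1 looks most like a human summary."""
    return {0: 0.2, 1: 0.9, 2: 0.4}[summary_id]

def train_generator(steps=2000, lr=0.1):
    """REINFORCE updates on a softmax policy, rewarded by the discriminator."""
    logits = np.zeros(3)                    # generator "policy" over 3 candidates
    for _ in range(steps):
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        a = rng.choice(3, p=probs)          # sample a summary
        reward = discriminator(a)           # D's score is the RL reward
        grad = -probs                       # gradient of log pi(a) w.r.t. logits
        grad[a] += 1.0
        logits += lr * reward * grad
    return logits

logits = train_generator()
```

After training, the policy concentrates on the candidate the discriminator scores highest, mirroring how G learns to fool D in the full adversarial setup (where D is trained jointly rather than fixed).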


Sparse Deep Transfer Learning for Convolutional Neural Network

AAAI Conferences

Extensive studies have demonstrated that the representations of convolutional neural networks (CNN), which are learned from a large-scale data set in the source domain, can be effectively transferred to a new target domain. However, compared to the source domain, the target domain often has limited data in practice. In this case, overfitting may significantly degrade transferability, due to the model redundancy of dense CNN structures. To deal with this difficulty, we propose a novel sparse deep transfer learning approach for CNN. There are three main contributions in this work. First, we introduce a Sparse-SourceNet to reduce the redundancy in the source domain. Second, we introduce a Hybrid-TransferNet to improve the generalization ability and the prediction accuracy of transfer learning, by taking advantage of both model sparsity and implicit knowledge. Third, we introduce a Sparse-TargetNet, where we prune our Hybrid-TransferNet to obtain a highly compact, source-knowledge-integrated CNN in the target domain. To examine the effectiveness of our methods, we perform our sparse deep transfer learning approach on a number of benchmark transfer learning tasks. The results show that, compared to the standard fine-tuning approach, our proposed approach achieves a significant pruning rate on CNN while improving the accuracy of transfer learning.
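One basic ingredient of such sparsification can be sketched as magnitude-based pruning of a weight tensor: zeroing the smallest-magnitude weights up to a target sparsity. The quantile-threshold rule below is an illustrative assumption, not the paper's exact Sparse-SourceNet/Sparse-TargetNet procedure.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero the fraction `sparsity` of weights with the smallest |w|.

    Returns the pruned tensor and the boolean mask of surviving weights.
    """
    flat = np.abs(weights).ravel()
    threshold = np.quantile(flat, sparsity)  # magnitude cutoff at the target sparsity
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

In a transfer setting, the mask would be applied per layer and the surviving weights fine-tuned on the target data, trading a small accuracy cost for a much more compact model.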