National University of Defense Technology
Inverse Reinforcement Learning Based Human Behavior Modeling for Goal Recognition in Dynamic Local Network Interdiction
Zeng, Yunxiu (National University of Defense Technology) | Xu, Kai (National University of Defense Technology) | Yin, Quanjun (National University of Defense Technology) | Qin, Long (National University of Defense Technology) | Zha, Yabing (National University of Defense Technology) | Yeoh, William (Washington University in St. Louis)
Goal recognition is the task of inferring an agent's goals given some or all of the agent's observed actions. Among the different ways of formulating the problem, goal recognition can be solved as a model-based planning problem using off-the-shelf planners. However, obtaining accurate cost or reward models of an agent and incorporating them into the planning model becomes an issue in real applications. To this end, we propose an Inverse Reinforcement Learning (IRL)-based opponent behavior modeling method and apply it to the goal-recognition-assisted Dynamic Local Network Interdiction (DLNI) problem. We first introduce the overall framework and the DLNI problem domain of our work. After that, an IRL-based human behavior modeling method and Markov Decision Process-based goal recognition are introduced. Experimental results indicate that our learned behavior model has a higher tracking accuracy and yields better interdiction outcomes than other models.
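As a rough, hedged illustration of Markov Decision Process-based goal recognition (a generic Bayesian formulation with hypothetical inputs, not the paper's actual implementation), the sketch below scores each candidate goal by the likelihood of the observed state-action trajectory under a Boltzmann policy derived from that goal's reward model:

import numpy as np

def soft_value_iteration(R, P, gamma=0.95, iters=200):
    """Soft Q-values for one candidate goal; R: (S, A) rewards, P: (S, A, S) transitions."""
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P @ V              # expected soft value of each (state, action)
        V = np.log(np.exp(Q).sum(axis=1))  # soft maximum over actions
    return Q

def goal_posterior(trajectory, rewards_per_goal, P, prior=None):
    """trajectory: list of (state, action); rewards_per_goal: {goal: (S, A) reward array}."""
    goals = list(rewards_per_goal)
    prior = prior or {g: 1.0 / len(goals) for g in goals}
    log_post = {}
    for g in goals:
        Q = soft_value_iteration(rewards_per_goal[g], P)
        policy = np.exp(Q - np.log(np.exp(Q).sum(axis=1, keepdims=True)))  # Boltzmann policy
        log_post[g] = np.log(prior[g]) + sum(np.log(policy[s, a]) for s, a in trajectory)
    m = max(log_post.values())
    Z = sum(np.exp(v - m) for v in log_post.values())
    return {g: np.exp(v - m) / Z for g, v in log_post.items()}

Here rewards_per_goal would come from the IRL step and P is the transition model of the interdiction domain; both names are placeholders for illustration only.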
Transferable Semi-Supervised Semantic Segmentation
Xiao, Huaxin (National University of Defense Technology) | Wei, Yunchao (Beckman Institute, University of Illinois at Urbana-Champaign) | Liu, Yu (National University of Defense Technology) | Zhang, Maojun (National University of Defense Technology) | Feng, Jiashi (National University of Singapore)
The performance of deep-learning-based semantic segmentation models depends heavily on sufficient data with careful annotations. However, even the largest public datasets only provide samples with pixel-level annotations for rather limited semantic categories. Such data scarcity critically limits the scalability and applicability of semantic segmentation models in real applications. In this paper, we propose a novel transferable semi-supervised semantic segmentation model that can transfer the learned segmentation knowledge from a few strong categories with pixel-level annotations to unseen weak categories with only image-level annotations, significantly broadening the applicable scope of deep segmentation models. In particular, the proposed model consists of two complementary and learnable components: a Label transfer Network (L-Net) and a Prediction transfer Network (P-Net). The L-Net learns to transfer the segmentation knowledge from strong categories to the images in the weak categories and produces coarse pixel-level semantic maps by effectively exploiting the similar appearance shared across categories. Meanwhile, the P-Net tailors the transferred knowledge through a carefully designed adversarial learning strategy and produces refined segmentation results with better details. Integrating the L-Net and P-Net achieves 96.5% and 89.4% of the performance of the fully supervised baseline on PASCAL VOC 2012 when 50% and 0% of the categories, respectively, have pixel-level annotations. With such a novel transfer mechanism, our proposed model is easily generalizable to a variety of new categories, requiring only image-level annotations, and offers appealing scalability in real applications.
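To give a flavor of the adversarial refinement idea only (a loose sketch with hypothetical layer sizes and losses; it is not the paper's P-Net architecture or training objective), the snippet below refines a coarse mask with a small convolutional network while a discriminator is trained to distinguish refined masks from pixel-level ground truth of the strong categories:

import torch
import torch.nn as nn

class PNet(nn.Module):
    """Refines a coarse 1-channel mask conditioned on the 3-channel image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, image, coarse_mask):
        return self.net(torch.cat([image, coarse_mask], dim=1))

class Discriminator(nn.Module):
    """Scores whether a mask looks like real pixel-level ground truth."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
    def forward(self, mask):
        return self.net(mask)

p_net, disc = PNet(), Discriminator()
opt_p = torch.optim.Adam(p_net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(image, coarse_mask, real_mask):
    # Discriminator: real pixel-level masks vs. refined masks.
    refined = torch.sigmoid(p_net(image, coarse_mask))
    d_loss = bce(disc(real_mask), torch.ones(real_mask.size(0), 1)) + \
             bce(disc(refined.detach()), torch.zeros(refined.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Refiner: fool the discriminator while staying close to the coarse mask.
    refined = torch.sigmoid(p_net(image, coarse_mask))
    g_loss = bce(disc(refined), torch.ones(refined.size(0), 1)) + \
             nn.functional.l1_loss(refined, coarse_mask)
    opt_p.zero_grad(); g_loss.backward(); opt_p.step()

In the semi-supervised setting sketched here, real_mask would come from strong categories with pixel-level annotations and coarse_mask would be the coarse map produced for a weak-category image; both names are hypothetical.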
Deep Embedding for Determining the Number of Clusters
Wang, Yiqi (National University of Defense Technology) | Shi, Zhan (University of Texas at Austin) | Guo, Xifeng (National University of Defense Technology) | Liu, Xinwang (National University of Defense Technology) | Zhu, En (National University of Defense Technology) | Yin, Jianping (Dongguan University of Technology)
Determining the number of clusters is important but challenging, especially for high-dimensional data. In this paper, we propose Deep Embedding Determination (DED), a method that jointly determines the unknown number of clusters and performs feature extraction. DED first combines the virtues of the convolutional autoencoder and the t-SNE technique to extract low-dimensional embedded features. It then determines the number of clusters using an improved density-based clustering algorithm. Our experimental evaluation on image datasets shows significant improvement over state-of-the-art methods and robustness with respect to hyperparameter settings.
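A minimal sketch of the cluster-number estimation stage, assuming the embedded features from a pretrained convolutional autoencoder are already available (random placeholders below) and substituting scikit-learn's DBSCAN for the paper's improved density-based algorithm:

import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

# Placeholder for embeddings produced by a pretrained convolutional autoencoder.
embeddings = np.random.rand(1000, 10)

# t-SNE maps the autoencoder features to a low-dimensional space.
low_dim = TSNE(n_components=2, perplexity=30).fit_transform(embeddings)

# A density-based clustering pass; the estimated number of clusters is the
# number of distinct labels, with the noise label (-1) excluded.
labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(low_dim)
n_clusters = len(set(labels) - {-1})
print("estimated number of clusters:", n_clusters)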
Latent Semantic Aware Multi-View Multi-Label Classification
Zhang, Changqing (Tianjin University) | Yu, Ziwei (Tianjin University) | Hu, Qinghua (Tianjin University) | Zhu, Pengfei (Tianjin University) | Liu, Xinwang (National University of Defense Technology) | Wang, Xiaobo (Chinese Academy of Sciences)
For real-world applications, data are often associated with multiple labels and represented with multiple views. Most existing multi-label learning methods do not sufficiently consider the complementary information among multiple views, leading to unsatisfying performance. To address this issue, we propose a novel approach for multi-view multi-label learning based on matrix factorization to exploit the complementarity among different views. Specifically, under the assumption that there exists a common representation across different views, the uncovered latent patterns are enforced to be aligned across different views in kernel spaces. In this way, the latent semantic patterns underlying the data can be well uncovered, which enhances the reasonableness of the common representation of multiple views. As a result, a consensus multi-view representation is obtained that encodes the complementarity and consistency of different views in the latent semantic space. We provide a theoretical guarantee of strict convexity for our method under proper parameter settings. Empirical evidence shows the clear advantages of our method over state-of-the-art ones.
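One plausible way to write such a kernel-space alignment objective (an illustrative form only; the paper's exact formulation, constraints, and regularization may differ), with X^(v) the v-th view, U^(v) and V^(v) its factor matrices, and K(.) a kernel Gram matrix computed over latent representations, is

\min_{\{U^{(v)},\,V^{(v)}\}} \; \sum_{v} \left\| X^{(v)} - U^{(v)} V^{(v)} \right\|_F^2 \;+\; \lambda \sum_{v \neq w} \left\| K\!\left(V^{(v)}\right) - K\!\left(V^{(w)}\right) \right\|_F^2 ,

where the second term enforces the latent patterns of different views to agree in kernel space.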
Exploring Temporal Preservation Networks for Precise Temporal Action Localization
Yang, Ke (National University of Defense Technology) | Qiao, Peng (National University of Defense Technology) | Li, Dongsheng (National University of Defense Technology) | Lv, Shaohe (National University of Defense Technology) | Dou, Yong (National University of Defense Technology)
Temporal action localization is an important task in computer vision. Although a variety of methods have been proposed, it remains an open question how to predict the temporal boundaries of action segments precisely. Most works use segment-level classifiers to select video segments pre-determined by action proposals or dense sliding windows. However, in order to achieve more precise action boundaries, a temporal localization system should make dense predictions at a fine granularity. A recently proposed method exploits Convolutional-Deconvolutional-Convolutional (CDC) filters to upsample the predictions of 3D ConvNets, making per-frame action prediction possible and achieving promising temporal action localization performance. However, the CDC network partially loses temporal information due to its temporal downsampling operation. In this paper, we propose an elegant and powerful Temporal Preservation Convolutional (TPC) network that equips 3D ConvNets with TPC filters. The TPC network fully preserves the temporal resolution while downsampling the spatial resolution, enabling frame-level-granularity action localization with minimal loss of temporal information. The TPC network can be trained in an end-to-end manner. Experimental results on public datasets show that the TPC network achieves significant improvement in both per-frame action prediction and segment-level temporal action localization.
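The key operation can be illustrated with a temporal-preservation 3D convolution: stride 1 along time and stride 2 in space, so the temporal resolution is kept while the spatial resolution is halved. A minimal PyTorch sketch with hypothetical channel counts (not the paper's exact architecture):

import torch
import torch.nn as nn

# A 3D convolution that preserves the temporal axis (stride 1 in time)
# while downsampling both spatial axes (stride 2 in height and width).
tpc = nn.Conv3d(in_channels=3, out_channels=64, kernel_size=3,
                stride=(1, 2, 2), padding=1)

clip = torch.randn(1, 3, 32, 112, 112)   # (batch, channels, frames, height, width)
out = tpc(clip)
print(out.shape)                          # torch.Size([1, 64, 32, 56, 56])

Stacking such layers keeps one prediction slot per input frame, which is what enables frame-level localization without a temporal upsampling stage.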
Reliable Multi-View Clustering
Tao, Hong (National University of Defense Technology) | Hou, Chenping (National University of Defense Technology) | Liu, Xinwang (National University of Defense Technology) | Liu, Tongliang (University of Sydney) | Yi, Dongyun (National University of Defense Technology) | Zhu, Jubo (National University of Defense Technology)
With the advent of multi-view data, multi-view learning (MVL) has become an important research direction in machine learning. It is usually expected that multi-view algorithms can obtain better performance than merely using a single view. However, previous research has pointed out that sometimes the utilization of multiple views may even deteriorate the performance. This is a stumbling block for the practical use of MVL in real applications, especially for tasks requiring high dependability. Thus, it is desirable to design reliable multi-view approaches whose performance is never degraded by exploiting multiple views. This issue is vital but rarely studied. In this paper, we focus on clustering and propose the Reliable Multi-View Clustering (RMVC) method. Based on several candidate multi-view clusterings, RMVC maximizes the worst-case performance gain against the best single-view clustering, which is equivalently expressed as the case where no label information is available. Specifically, employing the squared χ² distance for clustering comparison makes the formulation of RMVC easy to solve, and an efficient strategy is proposed for optimization. Theoretically, it can be proved that the performance of RMVC will never be significantly decreased under certain assumptions. Experimental results on a number of data sets demonstrate that the proposed method can effectively improve the reliability of multi-view clustering.
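For reference, the (squared) χ² distance between two discrete distributions p and q, commonly used for comparing clusterings via their assignment distributions, is

d_{\chi^2}(p, q) = \sum_{i} \frac{(p_i - q_i)^2}{p_i + q_i} ,

though which specific distributions RMVC compares with it is not spelled out in the abstract.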
Metric-Based Auto-Instructor for Learning Mixed Data Representation
Jian, Songlei (National University of Defense Technology, University of Technology Sydney) | Hu, Liang (University of Technology Sydney) | Cao, Longbing (University of Technology Sydney) | Lu, Kai (National University of Defense Technology)
Mixed data with both categorical and continuous features are ubiquitous in real-world applications. Learning a good representation of mixed data is critical yet challenging for further learning tasks. Existing methods for representing mixed data often overlook the heterogeneous coupling relationships between categorical and continuous features as well as the discrimination between objects. To address these issues, we propose an auto-instructive representation learning scheme to enable margin-enhanced distance metric learning for a discrimination-enhanced representation. Accordingly, we design a metric-based auto-instructor (MAI) model which consists of two collaborative instructors. Each instructor captures the feature-level couplings in mixed data with fully connected networks and guides the infinite-margin metric learning for the peer instructor with a contrastive order. Feeding the learned representation into both partition-based and density-based clustering methods, our experiments on eight UCI datasets show highly significant learning performance improvements and much more distinguishable visualization outcomes over the baseline methods.
Multiple Kernel k-Means with Incomplete Kernels
Liu, Xinwang (National University of Defense Technology) | Li, Miaomiao (National University of Defense Technology) | Wang, Lei (University of Wollongong) | Dou, Yong (National University of Defense Technology) | Yin, Jianping (National University of Defense Technology) | Zhu, En (National University of Defense Technology)
Multiple kernel clustering (MKC) algorithms optimally combine a group of pre-specified base kernels to improve clustering performance. However, existing MKC algorithms cannot efficiently address the situation where some rows and columns of the base kernels are absent. This paper proposes a simple yet effective algorithm to address this issue. Unlike existing approaches, which first impute the incomplete kernels and then apply a standard MKC algorithm to the imputed kernels, our algorithm integrates imputation and clustering into a unified learning procedure. Specifically, we perform multiple kernel clustering directly in the presence of incomplete kernels, which are treated as auxiliary variables to be jointly optimized. Our algorithm does not require that there be at least one complete base kernel over all the samples. Also, it adaptively imputes incomplete kernels and combines them to best serve clustering. A three-step iterative algorithm with proven convergence is designed to solve the resultant optimization problem. Extensive experiments are conducted on four benchmark data sets to compare the proposed algorithm with existing imputation-based methods. Our algorithm consistently achieves superior performance, and the improvement becomes more significant with increasing missing ratio, verifying the effectiveness and advantages of the proposed joint imputation and clustering.
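A hedged sketch of what such a joint objective can look like, written in standard multiple kernel k-means notation (an illustration consistent with the abstract, not necessarily the paper's exact formulation): with H the relaxed clustering indicator matrix, β the kernel weights, and the observed entries of each base kernel K_p held fixed while its missing rows and columns are treated as variables,

\min_{H,\,\beta,\,\{K_p\}} \; \operatorname{Tr}\!\left( K_\beta \left( \mathbf{I}_n - H H^{\top} \right) \right) \quad \text{s.t.} \quad H^{\top} H = \mathbf{I}_k, \;\; K_\beta = \sum_{p} \beta_p^2 K_p, \;\; \beta \ge 0, \; \beta^{\top}\mathbf{1} = 1, \;\; K_p \succeq 0, \;\; K_p[\Omega_p, \Omega_p] = K_p^{(\mathrm{obs})},

where Ω_p indexes the samples observed for the p-th kernel. Under this form, a three-step iteration naturally alternates between updating H (an eigen-decomposition), imputing the missing kernel entries, and updating β.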
Optimal Neighborhood Kernel Clustering with Multiple Kernels
Liu, Xinwang (National University of Defense Technology) | Zhou, Sihang (National University of Defense Technology) | Wang, Yueqing (National University of Defense Technology) | Li, Miaomiao (National University of Defense Technology) | Dou, Yong (National University of Defense Technology) | Zhu, En (National University of Defense Technology) | Yin, Jianping (National University of Defense Technology)
Multiple kernel k-means (MKKM) aims to improve clustering performance by learning an optimal kernel, which is usually assumed to be a linear combination of a group of pre-specified base kernels. However, we observe that this assumption could: i) limit the kernel representation capability; and ii) insufficiently account for the negotiation between the process of learning the optimal kernel and that of clustering, leading to unsatisfying clustering performance. To address these issues, we propose an optimal neighborhood kernel clustering (ONKC) algorithm to enhance the representability of the optimal kernel and strengthen the negotiation between kernel learning and clustering. We theoretically justify ONKC by revealing its connection with existing MKKM algorithms. Furthermore, this justification shows that existing MKKM algorithms can be viewed as a special case of our approach and indicates the extensibility of the proposed ONKC for designing better clustering algorithms. An efficient algorithm with proven convergence is designed to solve the resultant optimization problem. Extensive experiments have been conducted to evaluate the clustering performance of the proposed algorithm. As demonstrated, our algorithm significantly outperforms the state-of-the-art ones in the literature, verifying the effectiveness and advantages of ONKC.
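To make the "neighborhood" idea concrete, one hedged reading (notation follows the MKKM sketch above; the paper's actual constraints may differ) lets the optimal kernel K range over a ball around the combined kernel K_β instead of being fixed to it:

\min_{H,\,\beta,\,K} \; \operatorname{Tr}\!\left( K \left( \mathbf{I}_n - H H^{\top} \right) \right) \quad \text{s.t.} \quad H^{\top} H = \mathbf{I}_k, \;\; \left\| K - K_\beta \right\|_F \le \theta, \;\; K \succeq 0, \;\; K_\beta = \sum_{p} \beta_p^2 K_p, \;\; \beta \ge 0, \; \beta^{\top}\mathbf{1} = 1,

so that θ = 0 collapses K onto K_β and recovers an MKKM-style formulation, consistent with the abstract's claim that existing MKKM algorithms appear as a special case.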
Discriminative Vanishing Component Analysis
Hou, Chenping (National University of Defense Technology) | Nie, Feiping (Northwestern Polytechnical University) | Tao, Dacheng (University of Technology, Sydney)
Vanishing Component Analysis (VCA) is a recently proposed and prominent work in machine learning. It narrows the gap between machine learning tools and computational algebra: the vanishing ideal and its applications to the classification problem. In this paper, we analyze VCA from the kernel point of view, another important research direction in machine learning. Under a very weak assumption, we provide a different perspective on VCA and make the kernel trick applicable to it. We demonstrate that the projection matrix derived by VCA lies in the same space as that of Kernel Principal Component Analysis (KPCA) with a polynomial kernel. The two groups of projections can be expressed in terms of each other by a linear transformation. Furthermore, we prove that KPCA and VCA have identical discriminative power, provided that the ratio trace criterion is employed as the measure. We also show that the kernel formed by the inner products of VCA's projections can be expressed linearly in terms of the KPCA kernel. Based on this analysis, we propose a novel Discriminative Vanishing Component Analysis (DVCA) approach. Experimental results are provided for demonstration.
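For context, the polynomial kernel referred to here is typically written (the degree d and offset c that would match VCA's construction are not specified in the abstract) as

k(x, y) = \left( x^{\top} y + c \right)^{d},

whose feature map, for c > 0, consists of all monomials of degree at most d; this is the same monomial space in which VCA's polynomial projections live, which is what makes the stated relationship with KPCA natural.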