The University of Melbourne
Planning with Perspectives -- Decomposing Epistemic Planning using Functional STRIPS
Hu, Guang (The University of Melbourne) | Miller, Tim (The University of Melbourne) | Lipovetzky, Nir (The University of Melbourne)
In this paper, we present a novel approach to epistemic planning called planning with perspectives (PWP) that is both more expressive and computationally more efficient than existing state-of-the-art epistemic planning tools. Epistemic planning (planning with knowledge and belief) is essential in many multi-agent and human-agent interaction domains. Most state-of-the-art epistemic planners solve epistemic planning problems either by compiling to propositional classical planning (for example, generating all possible knowledge atoms or compiling epistemic formulae to normal forms) or by explicitly encoding Kripke-based semantics. However, these methods become computationally infeasible as problem sizes grow. In this paper, we decompose epistemic planning by delegating reasoning about epistemic formulae to an external solver. We do this by modelling the problem using Functional STRIPS, which is more expressive than standard STRIPS and supports the use of external, black-box functions within action models. Building on recent work that demonstrates the relationship between what an agent "sees" and what it knows, we define the perspective of each agent using an external function, and build a solver for epistemic logic around this. Modellers can customise the perspective function of agents, allowing new epistemic logics to be defined without changing the planner. We ran evaluations on well-known epistemic planning benchmarks to compare against an existing state-of-the-art planner, and on new scenarios that demonstrate the expressiveness of the PWP approach. The results show that our PWP planner scales significantly better than the state-of-the-art planner that we compared against, and can express problems more succinctly.
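The core idea (delegating epistemic reasoning to an external perspective function) can be sketched in a few lines. The toy domain below (a 1-D corridor with facing directions) is invented for illustration; the paper's perspective functions are external functions inside a Functional STRIPS model, not Python.

```python
# A minimal sketch of a PWP-style perspective function, assuming a toy
# 1-D corridor domain (agent positions and facing directions are invented
# for illustration, not taken from the paper's benchmarks).

from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    name: str
    pos: int        # position along a 1-D corridor
    facing: int     # +1 (facing right) or -1 (facing left)

def perspective(agent, items):
    """Return the subset of (name, pos) items the agent can see:
    everything in front of it along its facing direction."""
    return {(n, p) for (n, p) in items
            if (p - agent.pos) * agent.facing >= 0}

def knows(agent, items, query):
    """An agent knows a fact about an item only if the item lies
    inside its perspective."""
    return query in perspective(agent, items)

# Usage: agent a sees the coin at position 5; agent b, facing away, does not.
items = {("coin", 5)}
a = Agent("a", pos=2, facing=+1)
b = Agent("b", pos=2, facing=-1)
print(knows(a, items, ("coin", 5)))  # True
print(knows(b, items, ("coin", 5)))  # False
```

Nested knowledge (what a knows that b knows) would compose perspectives, which is the kind of customisation the external function is meant to support.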
A General Approach to Multimodal Document Quality Assessment
Shen, Aili (The University of Melbourne) | Salehi, Bahar | Qi, Jianzhong | Baldwin, Timothy
The perceived quality of a document is affected by various factors, including grammaticality, readability, stylistics, and expertise depth, making the task of document quality assessment a complex one. In this paper, we explore this task in the context of assessing the quality of Wikipedia articles and academic papers. Observing that the visual rendering of a document can capture implicit quality indicators that are not present in the document text (such as images, font choices, and visual layout), we propose a joint model that combines the text content with a visual rendering of the document for document quality assessment. Our joint model achieves state-of-the-art results over five datasets in two domains (Wikipedia and academic papers), which demonstrates the complementarity of textual and visual features, and the general applicability of our model. To examine what kinds of features our model has learned, we further train our model in a multi-task learning setting, where document quality assessment is the primary task and feature learning is an auxiliary task. Experimental results show that visual embeddings are better at learning structural features while textual embeddings are better at learning readability scores, which further verifies the complementarity of visual and textual features.
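As a rough illustration of the fusion step, the sketch below concatenates a textual embedding with a visual embedding and trains one classifier on the result. The random vectors are placeholders for the neural text and visual encoders the paper learns; nothing here reproduces the paper's architecture.

```python
# Hedged sketch of the joint-model idea: fuse a textual embedding with a
# visual embedding of the rendered page, then classify the fused vector.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_docs = 200
text_emb = rng.normal(size=(n_docs, 64))    # stand-in for a text encoder
visual_emb = rng.normal(size=(n_docs, 32))  # stand-in for a CNN over renders
labels = rng.integers(0, 2, size=n_docs)    # quality class per document

fused = np.concatenate([text_emb, visual_emb], axis=1)  # joint representation
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("train accuracy:", clf.score(fused, labels))
```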
Detecting Misflagged Duplicate Questions in Community Question-Answering Archives
Hoogeveen, Doris (The University of Melbourne, Data61) | Bennett, Andrew (The University of Melbourne) | Li, Yitong (The University of Melbourne) | Verspoor, Karin M. (The University of Melbourne) | Baldwin, Timothy (The University of Melbourne)
In this paper we introduce the task of misflagged duplicate question detection for question pairs in community question-answering (cQA) archives and compare it to the more standard task of detecting valid duplicate questions. A misflagged duplicate is a question that has been erroneously hand-flagged by the community as a duplicate of an archived one, where the two questions are not actually the same. We find that for misflagged duplicate detection, metadata features that capture user authority, question quality, and relational data between questions outperform pure text-based methods, while for regular duplicate detection a combination of metadata features and semantic features gives the best results. We show that misflagged duplicates are even more challenging to model than regular duplicates, but that good results can still be obtained.
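The metadata-feature approach can be pictured as follows. The specific features here (reputation gap, score gap, time gap, a text-similarity score) are invented stand-ins for the authority, quality, and relational signals described above, not the paper's feature set.

```python
# Illustrative sketch only: a pair-level classifier over hand-crafted
# metadata features, with invented placeholder features.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n_pairs = 500
X = np.column_stack([
    rng.normal(size=n_pairs),   # asker-reputation gap between the two questions
    rng.normal(size=n_pairs),   # question-score gap
    rng.normal(size=n_pairs),   # time gap between posting dates
    rng.normal(size=n_pairs),   # text-similarity score (e.g. TF-IDF cosine)
])
y = rng.integers(0, 2, size=n_pairs)  # 1 = misflagged duplicate, 0 = valid flag

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict_proba(X[:3]))       # per-pair probability of being misflagged
```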
Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks
Han, Yi (The University of Melbourne) | Rubinstein, Benjamin (The University of Melbourne)
Despite the widespread use of machine learning in adversarial settings such as computer security, recent studies have demonstrated vulnerabilities to evasion attacks: carefully crafted adversarial samples that closely resemble legitimate instances, but cause misclassification. In this paper, we examine the adequacy of the leading approach to generating adversarial samples, the gradient-descent approach. In particular (1) we perform extensive experiments on three datasets, MNIST, USPS and Spambase, in order to analyse the effectiveness of the gradient-descent method against non-linear support vector machines, and conclude that carefully reduced kernel smoothness can significantly increase robustness to the attack; (2) we demonstrate that separated inter-class support vectors lead to more secure models, and propose a quantity similar to margin that can efficiently predict potential susceptibility to gradient-descent attacks, before the attack is launched; and (3) we design a new adversarial sample construction algorithm based on optimising the multiplicative ratio of class decision functions.
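A minimal sketch of the gradient-descent attack under discussion: starting from a legitimate sample, step along a (numerically estimated) gradient of the classifier's decision function until the prediction flips. The dataset, step size, and iteration budget are illustrative choices, not the paper's settings.

```python
# Gradient-descent evasion sketch against a non-linear (RBF) SVM.

import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)

def num_grad(f, x, eps=1e-4):
    """Central-difference estimate of the gradient of scalar f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

f = lambda z: clf.decision_function(z.reshape(1, -1))[0]
x = X[y == 1][0].astype(float)           # start from a legitimate class-1 sample
for _ in range(500):                     # descend the decision function
    if clf.predict(x.reshape(1, -1))[0] == 0:
        break                            # prediction flipped: evasion achieved
    x = x - 0.05 * num_grad(f, x)
print("evasion succeeded:", clf.predict(x.reshape(1, -1))[0] == 0)
```

The paper's point (2) is visible in this framing: the flatter the decision function along the descent path, the more iterations such an attack needs, which is what a margin-like quantity can predict in advance.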
Lagrangian Constrained Community Detection
Ganji, Mohadeseh (The University of Melbourne) | Bailey, James (The University of Melbourne) | Stuckey, Peter J. (The University of Melbourne)
Semi-supervised or constrained community detection incorporates side information to find communities of interest in complex networks. The supervision is often represented as constraints such as known labels and pairwise constraints. Existing constrained community detection approaches often fail to fully benefit from the available side information. This results in poor performance in scenarios where the constraints are required to be fully satisfied, where there is high confidence in the correctness of the supervision information, and where the side information is expensive or hard to obtain and is only available in a limited amount. In this paper, we propose a new constrained community detection algorithm based on Lagrangian multipliers to incorporate and fully satisfy instance-level supervision constraints. Our proposed algorithm can more fully utilise available side information and find better quality solutions. Our experiments on real and synthetic data sets show our proposed LagCCD algorithm outperforms existing algorithms in terms of solution quality, ability to satisfy the constraints, and noise resistance.
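The Lagrangian scheme can be sketched schematically: solve a penalised, unconstrained detection subproblem, then raise the multipliers of violated must-link constraints and re-solve until all constraints hold. The toy graph and objective below are invented for illustration; this is not the paper's LagCCD algorithm.

```python
# Schematic Lagrangian loop for must-link constrained community detection.

import itertools

nodes = range(6)
edges = {(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)}
must_link = [(0, 1), (3, 4), (2, 3)]     # instance-level supervision
lam = {c: 0.0 for c in must_link}        # one Lagrangian multiplier per constraint

def score(labels):
    """Toy objective: intra-community edges, minus a density penalty,
    minus Lagrangian penalties for violated must-link constraints."""
    intra = sum(1 for (u, v) in edges if labels[u] == labels[v])
    pairs = sum(1 for u in nodes for v in nodes
                if u < v and labels[u] == labels[v])
    penalty = sum(lam[(u, v)] for (u, v) in must_link if labels[u] != labels[v])
    return intra - 0.5 * pairs - penalty

for it in range(20):
    # Penalised subproblem: brute-force over 2-community labellings (toy size).
    best = max(itertools.product([0, 1], repeat=len(nodes)), key=score)
    violated = [(u, v) for (u, v) in must_link if best[u] != best[v]]
    if not violated:
        break
    for c in violated:                   # subgradient step on the multipliers
        lam[c] += 1.0

print("communities:", best, "| all constraints satisfied:", not violated)
```

Raising a multiplier makes separating that pair progressively more expensive, so the subproblem is eventually forced into a labelling that satisfies every constraint, which is the "fully satisfy" behaviour the abstract describes.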
Learning Datum-Wise Sampling Frequency for Energy-Efficient Human Activity Recognition
Cheng, Weihao (The University of Melbourne) | Erfani, Sarah (The University of Melbourne) | Zhang, Rui (The University of Melbourne) | Kotagiri, Ramamohanarao (The University of Melbourne)
Continuous Human Activity Recognition (HAR) is an important application of smart mobile/wearable systems for providing dynamic assistance to users. However, HAR in real time requires continuous sampling of data using built-in sensors (e.g., the accelerometer), which significantly increases the energy cost and shortens the operating span. Reducing the sampling rate saves energy but lowers recognition accuracy. Therefore, choosing an adaptive sampling frequency that balances accuracy and energy efficiency becomes a critical problem in HAR. In this paper, we formalize the problem as minimizing both classification error and energy cost by dynamically choosing appropriate sampling rates. We propose Datum-Wise Frequency Selection (DWFS) to solve the problem via a continuous-state Markov Decision Process (MDP). A policy function is learned from the MDP, which selects the best frequency for sampling an incoming data entity by exploiting a datum-related state of the system. We propose a method for alternately learning the parameters of an activity classification model and the MDP that improves both accuracy and energy efficiency. We evaluate DWFS on three real-world HAR datasets, and the results show that DWFS statistically outperforms state-of-the-art methods on a combined measure of accuracy and energy efficiency.
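The trade-off being optimised can be shown with a toy reward. The error and energy curves below are assumed shapes, and the greedy table lookup is a simplification: the paper learns a policy over a continuous MDP state rather than picking one frequency globally.

```python
# Toy illustration of the accuracy/energy trade-off behind DWFS.

FREQS = [10, 25, 50, 100]            # candidate sampling rates in Hz

def expected_error(freq):
    return 0.5 / (1.0 + 0.05 * freq) # assumed: error shrinks with rate

def energy_cost(freq):
    return 0.002 * freq              # assumed: cost grows linearly with rate

def reward(freq, trade_off=1.0):
    """Objective combining classification error and energy cost."""
    return -(expected_error(freq) + trade_off * energy_cost(freq))

# Greedy "policy": pick the frequency with the best combined reward.
for f in FREQS:
    print(f"{f:>4} Hz -> reward {reward(f):.3f}")
print("selected:", max(FREQS, key=reward), "Hz")
```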
Improving Efficiency of SVM k-Fold Cross-Validation by Alpha Seeding
Wen, Zeyi (The University of Melbourne) | Li, Bin (South China University of Technology) | Kotagiri, Ramamohanarao (The University of Melbourne) | Chen, Jian (South China University of Technology) | Chen, Yawen (South China University of Technology) | Zhang, Rui (The University of Melbourne)
k-fold cross-validation is commonly used to evaluate the effectiveness of SVMs with selected hyper-parameters. It is known to be expensive, since it requires training k SVMs. However, little work has explored reusing the h-th SVM when training the (h+1)-th SVM to improve the efficiency of k-fold cross-validation. In this paper, we propose three algorithms that reuse the h-th SVM to make training the (h+1)-th SVM more efficient. Our key idea is to efficiently identify the support vectors of the next SVM and to accurately estimate their associated weights (also called alpha values) by using the previous SVM. Our experimental results show that our algorithms are several times faster than k-fold cross-validation that does not make use of the previously trained SVM, while producing the same results (and hence the same accuracy).
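A runnable sketch of the seeding idea, using a tiny dual coordinate-ascent solver for a linear SVM without bias. The paper targets general SVMs and proposes three more refined reuse strategies; this only shows the core warm-start step of initialising fold h+1's alphas from fold h's.

```python
# Alpha seeding across cross-validation folds, with a minimal L1-loss
# linear-SVM dual coordinate-ascent trainer.

import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
y = np.where(y == 0, -1.0, 1.0)
C, k = 1.0, 5

def train(Xt, yt, alpha, sweeps=50):
    """Dual coordinate ascent for a linear SVM, starting from a given alpha."""
    w = (alpha * yt) @ Xt                 # primal vector implied by the alphas
    for _ in range(sweeps):
        for i in range(len(yt)):
            g = yt[i] * (w @ Xt[i]) - 1.0 # dual gradient for coordinate i
            a_new = np.clip(alpha[i] - g / (Xt[i] @ Xt[i]), 0.0, C)
            w += (a_new - alpha[i]) * yt[i] * Xt[i]
            alpha[i] = a_new
    return alpha, w

folds = np.array_split(np.random.default_rng(0).permutation(len(y)), k)
stored = np.zeros(len(y))                 # alphas indexed by global sample id
for h in range(k):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != h])
    # Alpha seeding: reuse the previous folds' alphas for shared samples.
    alpha, w = train(X[train_idx], y[train_idx], stored[train_idx].copy())
    stored[train_idx] = alpha
    test = folds[h]
    acc = np.mean(np.sign(X[test] @ w) == y[test])
    print(f"fold {h}: accuracy {acc:.3f}, support vectors {np.sum(alpha > 0)}")
```

Since consecutive folds share most of their training data, the seeded solver starts much closer to the optimum than a zero initialisation, which is where the speed-up comes from.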
The Bernstein Mechanism: Function Release under Differential Privacy
Aldà, Francesco (Ruhr-Universität Bochum) | Rubinstein, Benjamin I. P. (The University of Melbourne)
We address the problem of general function release under differential privacy, by developing a functional mechanism that applies under the weak assumptions of oracle access to target function evaluation and sensitivity. These conditions permit treatment of functions described explicitly or implicitly as algorithmic black boxes. We achieve this result by leveraging the iterated Bernstein operator for polynomial approximation of the target function, and polynomial coefficient perturbation. Under weak regularity conditions, we establish fast rates on utility measured by high-probability uniform approximation. We provide a lower bound on the utility achievable for any functional mechanism that is epsilon-differentially private. The generality of our mechanism is demonstrated by the analysis of a number of example learners, including naive Bayes, non-parametric estimators and regularized empirical risk minimization. Competitive rates are demonstrated for kernel density estimation; and epsilon-differential privacy is achieved for a broader class of support vector machines than known previously.
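A simplified one-dimensional sketch of the mechanism: evaluate the target function on a grid, perturb the evaluations with Laplace noise, and reconstruct with the Bernstein operator. The noise scale below is an assumed simple-composition calibration, and the non-iterated operator is used; the paper employs the iterated operator with sharper constants.

```python
# Simplified Bernstein-mechanism sketch for private function release on [0, 1].

import numpy as np
from math import comb

def bernstein_release(f, n, sensitivity, epsilon, rng):
    """Evaluate f on the grid k/n, add Laplace noise to each evaluation,
    and return the Bernstein-polynomial reconstruction of the noisy values."""
    grid = np.arange(n + 1) / n
    scale = sensitivity * (n + 1) / epsilon      # assumed: naive composition
    noisy = f(grid) + rng.laplace(0.0, scale, size=n + 1)

    def released(x):
        basis = np.array([comb(n, j) * x**j * (1 - x)**(n - j)
                          for j in range(n + 1)])
        return float(noisy @ basis)
    return released

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)              # stand-in target function
g = bernstein_release(f, n=20, sensitivity=0.01, epsilon=1.0, rng=rng)
for x in (0.0, 0.25, 0.5):
    print(f"f({x}) = {f(x):+.3f}   released ~ {g(x):+.3f}")
```

Because only the grid evaluations touch the data, the mechanism needs nothing beyond oracle access to the target function and its sensitivity, which is the "black box" generality the abstract emphasises.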
Automatic Logic-Based Benders Decomposition with MiniZinc
Davies, Toby O. (The University of Melbourne) | Gange, Graeme (The University of Melbourne) | Stuckey, Peter J. (The University of Melbourne)
Logic-based Benders decomposition (LBBD) is a powerful hybrid optimisation technique that can combine the strong dual bounds of mixed integer programming (MIP) with the combinatorial search strengths of constraint programming (CP). A major drawback of LBBD is that it is a far more involved process to implement an LBBD solution to a problem than the "model-and-run" approach provided by both CP and MIP. We propose an automated approach that accepts an arbitrary MiniZinc model and solves it using LBBD with no additional intervention on the part of the modeller. The design of this approach also reveals an interesting duality between LBBD and large neighborhood search (LNS). We compare our implementation of this approach to CP and MIP solvers on 4 different problem classes where LBBD has been applied before.
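The master/subproblem loop at the heart of LBBD can be sketched with a toy assignment problem. Here both problems are brute-forced by hand and the cut is a plain nogood, whereas the paper derives the decomposition automatically from an arbitrary MiniZinc model.

```python
# Schematic logic-based Benders loop. Master: assign jobs to machines at
# minimum cost. Subproblem: each machine checks whether its jobs fit its
# capacity; if not, it returns a nogood forbidding that job set on it.

import itertools

durations = {"a": 3, "b": 4, "c": 2, "d": 4}
machines = {0: 7, 1: 6}                 # machine -> capacity
cuts = []                               # accumulated Benders cuts (nogoods)

def master():
    """Brute-force the cheapest assignment that respects all cuts so far."""
    best = None
    for assign in itertools.product(machines, repeat=len(durations)):
        plan = dict(zip(durations, assign))
        if any(all(plan[j] == m for j in jobs) for (m, jobs) in cuts):
            continue                    # violates an accumulated nogood
        cost = sum(assign)              # toy objective: prefer machine 0
        if best is None or cost < best[0]:
            best = (cost, plan)
    return best[1]

def subproblem(plan):
    """Return a cut for the first overloaded machine, or None if feasible."""
    for m, cap in machines.items():
        jobs = [j for j in durations if plan[j] == m]
        if sum(durations[j] for j in jobs) > cap:
            return (m, tuple(jobs))
    return None

while True:                             # the LBBD master/subproblem loop
    plan = master()
    cut = subproblem(plan)
    if cut is None:
        break
    cuts.append(cut)                    # tighten the master and re-solve
print("feasible plan:", plan, "after", len(cuts), "cuts")
```

The duality the abstract mentions is visible even here: each cut forbids one region of the master's search space, much as an LNS neighbourhood fixes part of a solution and re-explores the rest.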
From Shared Subspaces to Shared Landmarks: A Robust Multi-Source Classification Approach
Erfani, Sarah M. (The University of Melbourne) | Baktashmotlagh, Mahsa (Queensland University of Technology) | Moshtaghi, Masud (The University of Melbourne) | Nguyen, Vinh (The University of Melbourne) | Leckie, Christopher (The University of Melbourne) | Bailey, James (The University of Melbourne) | Ramamohanarao, Kotagiri (The University of Melbourne)
Training machine learning algorithms on augmented data from different related sources is a challenging task. This problem arises in several applications, such as the Internet of Things (IoT), where data may be collected from devices with different settings. The learned model on such datasets can generalize poorly due to distribution bias. In this paper we consider the problem of classifying unseen datasets, given several labeled training samples drawn from similar distributions. We exploit the intrinsic structure of samples in a latent subspace and identify landmarks, a subset of training instances from different sources that should be similar. Incorporating subspace learning and landmark selection enhances generalization by alleviating the impact of noise and outliers, as well as improving efficiency by reducing the size of the data. However, since addressing the two issues simultaneously results in an intractable problem, we relax the objective function by leveraging the theory of nonlinear projection and solve a tractable convex optimisation. Through comprehensive analysis, we show that our proposed approach outperforms state-of-the-art results on several benchmark datasets, while keeping the computational complexity low.
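The landmark idea alone can be illustrated naively: project all sources into a shared subspace and keep the instances whose nearest cross-source neighbour lands close by. The paper instead couples subspace learning and landmark selection in a single convex relaxation; PCA here is just a stand-in projection.

```python
# Naive landmark-selection sketch over a shared (PCA) subspace.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
src_a = rng.normal(0.0, 1.0, size=(100, 20))         # source 1 samples
src_b = rng.normal(0.3, 1.2, size=(100, 20))         # source 2, shifted distribution

pca = PCA(n_components=5).fit(np.vstack([src_a, src_b]))
za, zb = pca.transform(src_a), pca.transform(src_b)  # shared latent subspace

# Landmarks: source-1 instances whose nearest source-2 neighbour is close.
dists = np.linalg.norm(za[:, None, :] - zb[None, :, :], axis=2)
nearest = dists.min(axis=1)
landmarks = np.argsort(nearest)[:20]                 # 20 most source-agnostic points
print("landmark indices in source 1:", landmarks)
```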