Goto

Collaborating Authors

University of Adelaide


Kill Two Birds With One Stone: Weakly-Supervised Neural Network for Image Annotation and Tag Refinement

AAAI Conferences

The number of social images has exploded with the wide adoption of social networks, and people like to share comments about them. These comments may describe the image as a whole, or the objects, attributes, and scenes within it, and they are commonly used as user-provided tags. However, it is well known that user-provided tags are incomplete and imprecise to some extent. Using them directly can hurt the performance of related applications such as image annotation and retrieval. In this paper, we propose to learn an image annotation model and refine the user-provided tags simultaneously in a weakly-supervised manner. A deep neural network is used for image feature learning and as the backbone annotation model, while visual consistency, semantic dependency, and user-error sparsity are introduced as batch-level constraints to alleviate tag noise. As a result, our model is flexible and stable enough to handle large-scale image sets. Experimental results on two benchmark datasets show that the proposed model outperforms state-of-the-art methods.
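As a rough illustration of how batch-level constraints of this kind can be combined with a backbone annotation network, the sketch below adds three hypothetical regularizers (visual consistency, tag co-occurrence consistency, and an L1-sparse user-error term) to a standard multi-label loss. The network interface, loss weights, and constraint definitions are assumptions made for illustration, not the authors' actual formulation.

```python
import torch
import torch.nn.functional as F

def batch_loss(features, logits, noisy_tags, error_matrix,
               cooccurrence, lambdas=(1.0, 0.1, 0.1, 0.01)):
    """Hypothetical batch-level objective for weakly-supervised annotation.

    features:     (B, D) image features from a CNN backbone
    logits:       (B, T) predicted tag scores
    noisy_tags:   (B, T) user-provided (noisy) 0/1 tags, as floats
    error_matrix: (B, T) learnable correction of the user tags
    cooccurrence: (T, T) row-normalized tag co-occurrence matrix
    """
    l_fit, l_vis, l_sem, l_sparse = lambdas

    # Fit the network to "refined" tags = noisy tags minus the estimated user error.
    refined = (noisy_tags - error_matrix).clamp(0.0, 1.0)
    fit = F.binary_cross_entropy_with_logits(logits, refined)

    # Visual consistency: visually similar images should receive similar predictions.
    probs = torch.sigmoid(logits)
    feat_sim = torch.softmax(features @ features.t(), dim=1)   # (B, B) similarity weights
    vis = ((probs - feat_sim @ probs) ** 2).mean()

    # Semantic dependency: predictions should respect tag co-occurrence structure.
    sem = ((probs - probs @ cooccurrence) ** 2).mean()

    # User-error sparsity: most user-provided tags are assumed to be correct.
    sparse = error_matrix.abs().mean()

    return l_fit * fit + l_vis * vis + l_sem * sem + l_sparse * sparse
```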


Learning Sparse Confidence-Weighted Classifier on Very High Dimensional Data

AAAI Conferences

Confidence-weighted (CW) learning is a successful online learning paradigm that maintains a Gaussian distribution over classifier weights and uses a covariance matrix to represent the uncertainty of the weight vector. However, existing full CW learning paradigms have two deficiencies: sensitivity to irrelevant features, and poor scalability to high-dimensional data due to the maintenance of the full covariance structure. In this paper, we first present an online-batch CW learning scheme, and then propose a novel paradigm for learning sparse CW classifiers. The proposed paradigm essentially identifies feature groups and naturally builds a block-diagonal covariance structure, making it well suited to CW learning over very high-dimensional data. Extensive experimental results demonstrate the superior performance of the proposed methods over state-of-the-art counterparts on classification and feature selection tasks.
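For readers unfamiliar with the CW family, the snippet below sketches a second-order online update in the AROW style with a diagonal covariance, which conveys the idea of per-feature confidence without the cost of a full covariance matrix. It is a generic illustration only; it does not reproduce the paper's online-batch scheme or its block-diagonal group structure.

```python
import numpy as np

class DiagonalAROW:
    """AROW-style confidence-weighted linear classifier with a diagonal covariance.

    Each weight w[j] carries its own variance sigma[j]; frequently seen,
    informative features become "confident" (small variance) and are
    updated less aggressively on later examples.
    """

    def __init__(self, dim, r=1.0):
        self.w = np.zeros(dim)        # mean of the weight distribution
        self.sigma = np.ones(dim)     # diagonal of the covariance matrix
        self.r = r                    # regularization parameter

    def update(self, x, y):
        """One online step on example x (dense vector) with label y in {-1, +1}."""
        margin = y * self.w.dot(x)
        if margin >= 1.0:                              # confident correct prediction: no update
            return
        confidence = (self.sigma * x * x).sum()        # x^T Sigma x for diagonal Sigma
        beta = 1.0 / (confidence + self.r)
        alpha = (1.0 - margin) * beta
        self.w += alpha * y * self.sigma * x           # mean update: alpha * y * Sigma x
        self.sigma -= beta * (self.sigma * x) ** 2     # shrink variance along x

    def predict(self, x):
        return 1 if self.w.dot(x) >= 0 else -1
```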


Increasing VCG Revenue by Decreasing the Quality of Items

AAAI Conferences

The VCG mechanism is the standard method to incentivize bidders in combinatorial auctions to bid truthfully. Under the VCG mechanism, the auctioneer can sometimes increase revenue by “burning” items. We study this phenomenon in a setting where items are described by a number of attributes. The value of an attribute corresponds to a quality level, and bidders’ valuations are non-decreasing in the quality levels. In addition to burning items, we allow the auctioneer to present some of the attributes as lower quality than they actually are. We consider the following two revenue maximization problems under VCG: finding an optimal way to mark down items by reducing their quality levels, and finding an optimal set of items to burn. We study the effect of the following parameters on the computational complexity of these two problems: the number of attributes, the number of quality levels per attribute, and the complexity of the bidders’ valuation functions. Bidders have unit demand, so VCG’s outcome can be computed in polynomial time, and the valuation functions we consider are step functions that are non-decreasing in the quality levels. We prove that both problems are NP-hard even in the following three simple settings: a) four attributes, arbitrarily many quality levels per attribute, and single-step valuation functions, b) arbitrarily many attributes, two quality levels per attribute, and single-step valuation functions, and c) one attribute, arbitrarily many quality levels, and multi-step valuation functions. For the case where items have only one attribute and every bidder has a single-step valuation (zero below some quality threshold), we show that both problems can be solved in polynomial time using a dynamic programming approach. For this case, we also quantify how much better marking down is than item burning, and we compare the revenue of both approaches with computational experiments.
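To make the revenue effect of burning concrete, the sketch below computes the VCG outcome for unit-demand bidders via a maximum-weight assignment and compares revenue before and after removing an item. The two-bidder, two-identical-item instance is a standard textbook illustration of the phenomenon, not an example taken from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def vcg_unit_demand(values):
    """VCG allocation and payments for unit-demand bidders.

    values: (num_bidders, num_items) nonnegative valuation matrix.
    Returns (allocation dict bidder -> item, payments array).
    """
    values = np.asarray(values, dtype=float)
    n = values.shape[0]

    def max_welfare(v):
        if v.shape[0] == 0:
            return 0.0, {}
        # Pad with zero-value dummy items so every bidder can be "assigned" something.
        pad = max(0, v.shape[0] - v.shape[1])
        vp = np.hstack([v, np.zeros((v.shape[0], pad))])
        rows, cols = linear_sum_assignment(vp, maximize=True)
        alloc = {r: c for r, c in zip(rows.tolist(), cols.tolist()) if c < v.shape[1]}
        welfare = sum(v[r, c] for r, c in alloc.items())
        return welfare, alloc

    total, alloc = max_welfare(values)
    payments = np.zeros(n)
    for i in range(n):
        others = np.delete(values, i, axis=0)
        welfare_without_i, _ = max_welfare(others)
        value_i = values[i, alloc[i]] if i in alloc else 0.0
        # VCG payment: externality that bidder i imposes on the other bidders.
        payments[i] = welfare_without_i - (total - value_i)
    return alloc, payments

# Two bidders, two identical items, each bidder wants one item at value 10.
values = [[10, 10],
          [10, 10]]
_, pay_full = vcg_unit_demand(values)            # both win, both pay nothing
_, pay_burn = vcg_unit_demand([[10], [10]])      # one item burned: the winner pays 10
print(pay_full.sum(), pay_burn.sum())            # 0.0  10.0
```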


A Parameterized Runtime Analysis of Evolutionary Algorithms for the Euclidean Traveling Salesperson Problem

AAAI Conferences

We contribute to the theoretical understanding of evolutionary algorithms and carry out a parameterized analysis of evolutionary algorithms for the Euclidean traveling salesperson problem (Euclidean TSP). We exploit structural properties related to the optimization process of evolutionary algorithms for this problem and use them to bound the runtime of evolutionary algorithms. Our analysis studies the runtime as a function of the number of inner points $k$ and shows that simple evolutionary algorithms solve the Euclidean TSP in expected time $O(n^{k}(2k-1)!)$. Moreover, we show that, under reasonable geometric constraints, a locally optimal 2-opt tour can be found by randomized local search in expected time $O(n^{2k}k!)$.
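The snippet below illustrates randomized local search with 2-opt (segment-reversal) moves on a small Euclidean instance, the kind of simple algorithm such runtime analyses address. The instance, stopping rule, and parameters are arbitrary choices for demonstration, not the algorithm formulation analyzed in the paper.

```python
import math
import random

def tour_length(points, tour):
    """Total Euclidean length of a closed tour (list of point indices)."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def rls_2opt(points, max_failures=20000, seed=0):
    """Randomized local search: apply a random 2-opt move, keep it if it improves the tour."""
    rng = random.Random(seed)
    n = len(points)
    tour = list(range(n))
    rng.shuffle(tour)
    best = tour_length(points, tour)
    failures = 0
    while failures < max_failures:
        i, j = sorted(rng.sample(range(n), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse one segment
        length = tour_length(points, candidate)
        if length < best:
            tour, best, failures = candidate, length, 0
        else:
            failures += 1
    return tour, best

# Small example: four points on the convex hull plus two "inner points".
points = [(0, 0), (4, 0), (4, 4), (0, 4), (1, 2), (3, 2)]
print(rls_2opt(points))
```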


The Role of Non-Factorizability in Determining "Pseudo-Classical" Non-separability

AAAI Conferences

This article introduces a "pseudo-classical" notion of modelling non-separability. This form of non-separability can be viewed as lying between separability and quantum-like non-separability. Non-separability is formalized in terms of the non-factorizability of the underlying joint probability distribution. A decision criterion for determining the non-factorizability of the joint distribution is related to the rank of an associated matrix, and an alternative approach is based on the chi-square goodness-of-fit test. This pseudo-classical notion of non-separability is discussed in terms of quantum games and concept combinations in human cognition.
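As a concrete illustration of the two checks mentioned above, the snippet below tests whether an empirical joint distribution over two binary variables factorizes: a discrete joint distribution factorizes into the product of its marginals exactly when its probability matrix has rank 1, and a chi-square test of independence provides a statistical version of the same question. The example table is invented for demonstration and is not data from the article.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented 2x2 table of co-occurrence counts for two binary variables A and B.
counts = np.array([[40, 10],
                   [12, 38]])

# Rank criterion: the joint probability matrix factorizes as p(a)p(b)
# (i.e., A and B are independent) exactly when its rank is 1.
joint = counts / counts.sum()
rank = np.linalg.matrix_rank(joint, tol=1e-10)
print("rank of joint matrix:", rank,
      "-> factorizable" if rank == 1 else "-> non-factorizable")

# Chi-square test of independence against the product of the marginals.
chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
print("reject factorizability at the 5% level" if p_value < 0.05
      else "no evidence against factorizability")
```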