Wang, Di
Differentially Private Empirical Risk Minimization Revisited: Faster and More General
Wang, Di, Ye, Minwei, Xu, Jinhui
In this paper, we study the differentially private Empirical Risk Minimization (ERM) problem in different settings. For smooth (strongly) convex loss functions with or without (non-)smooth regularization, we give algorithms that achieve either optimal or near-optimal utility bounds with lower gradient complexity than previous work. For ERM with a smooth convex loss function in the high-dimensional ($p\gg n$) setting, we give an algorithm that achieves the upper bound with lower gradient complexity than previous ones. Finally, we generalize the expected excess empirical risk from convex loss functions to non-convex ones satisfying the Polyak-Lojasiewicz condition, and give a tighter upper bound on the utility than the one in \cite{ijcai2017-548}.
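To make the setting concrete, the following is a minimal sketch of gradient-perturbation DP-ERM on a logistic loss, with Gaussian noise added to each gradient step. The noise calibration shown is illustrative only; the paper's algorithms obtain their improved gradient complexity with more refined (e.g., variance-reduced) schemes.

```python
import numpy as np

def dp_gd_logistic(X, y, epsilon, delta, T=100, eta=0.1, L=1.0):
    """Gradient perturbation for DP-ERM on the logistic loss: add Gaussian
    noise, scaled to the per-step gradient sensitivity L/n, to every
    gradient update.  The calibration below is illustrative only."""
    n, p = X.shape
    w = np.zeros(p)
    # Noise level from a simplified advanced-composition bound over T steps.
    sigma = (L / n) * np.sqrt(2 * T * np.log(1.0 / delta)) / epsilon
    for _ in range(T):
        sig = 1.0 / (1.0 + np.exp(-y * (X @ w)))        # sigmoid(y * <x, w>)
        grad = -(X * (y * (1.0 - sig))[:, None]).mean(axis=0)
        w -= eta * (grad + np.random.normal(0.0, sigma, size=p))
    return w
```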
Efficient Empirical Risk Minimization with Smooth Loss Functions in Non-interactive Local Differential Privacy
Wang, Di, Gaboardi, Marco, Xu, Jinhui
In this paper, we study the Empirical Risk Minimization (ERM) problem in the non-interactive local model of differential privacy. We first show that if the loss function is $(\infty, T)$-smooth, the sample complexity needed to achieve error $\alpha$ can avoid an exponential dependence on the dimensionality $p$ with base $1/\alpha$ ({\em i.e.,} $\alpha^{-p}$), which answers a question in \cite{smith2017interaction}. Our approach is based on Bernstein polynomial approximation. We then propose player-efficient algorithms with $1$-bit communication complexity and $O(1)$ computation cost per player, whose error bound is asymptotically the same as the original one. Under additional assumptions, we also give a server-efficient algorithm with polynomial running time. Finally, we propose (efficient) non-interactive locally differentially private algorithms, based on different types of polynomial approximations, for learning the set of $k$-way marginal queries and the set of smooth queries.
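As a rough illustration of the core tool, the sketch below evaluates a degree-$k$ Bernstein polynomial approximation of a univariate function from its values on the grid $\{i/k\}$. In the non-interactive LDP setting, those grid values would be estimated from players' randomized reports, which is omitted here.

```python
import numpy as np
from scipy.special import comb

def bernstein_approx(f, k, xs):
    """Evaluate the degree-k Bernstein approximation of f on [0, 1]:
    B_k(f; x) = sum_i f(i/k) * C(k, i) * x^i * (1 - x)^(k - i)."""
    i = np.arange(k + 1)
    coeffs = f(i / k)                       # values of f on the grid i/k
    basis = comb(k, i) * xs[:, None] ** i * (1.0 - xs[:, None]) ** (k - i)
    return basis @ coeffs

# Example: approximate a smooth logistic-type loss curve on [0, 1].
xs = np.linspace(0.0, 1.0, 200)
approx = bernstein_approx(lambda t: np.log(1.0 + np.exp(-t)), 20, xs)
```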
Large Scale Constrained Linear Regression Revisited: Faster Algorithms via Preconditioning
Wang, Di, Xu, Jinhui
In this paper, we revisit the large-scale constrained linear regression problem and propose faster methods based on some recent developments in sketching and optimization. Our algorithms combine (accelerated) mini-batch SGD with a new method called two-step preconditioning to achieve an approximate solution with a time complexity lower than that of state-of-the-art techniques for the low-precision case. Our idea also extends to the high-precision case, yielding an alternative implementation of the Iterative Hessian Sketch (IHS) method with significantly improved time complexity. Experiments on benchmark and synthetic datasets suggest that our methods indeed outperform existing ones considerably in both the low- and high-precision cases.
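A minimal sketch of the sketch-and-precondition idea follows, with plain gradient descent standing in for the paper's accelerated mini-batch SGD and the constraint set dropped; it assumes sketch_size is at least the number of columns of A.

```python
import numpy as np

def sketched_precondition_solve(A, b, sketch_size, n_iter=50):
    """Solve min ||Ax - b||_2 via sketch-based preconditioning: sketch A,
    QR-factorize the sketch, and iterate on the well-conditioned system
    A R^{-1}.  Assumes sketch_size >= A.shape[1]."""
    n, d = A.shape
    S = np.random.randn(sketch_size, n) / np.sqrt(sketch_size)  # Gaussian sketch
    _, R = np.linalg.qr(S @ A)
    AR = A @ np.linalg.inv(R)               # condition number O(1) w.h.p.
    z = np.zeros(d)
    eta = 1.0 / np.linalg.norm(AR, 2) ** 2  # 1 / Lipschitz constant
    for _ in range(n_iter):
        z -= eta * AR.T @ (AR @ z - b)
    return np.linalg.solve(R, z)            # map back: x = R^{-1} z
```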
Semi-Supervised Dictionary Learning via Structural Sparse Preserving
Wang, Di (Wenzhou University) | Zhang, Xiaoqin (Wenzhou University) | Fan, Mingyu (Wenzhou University) | Ye, Xiuzi (Wenzhou University)
While recent techniques for discriminative dictionary learning have attained promising results on classification tasks, their performance is highly dependent on the number of labeled samples available for training. However, labeling samples is expensive and time-consuming due to the significant human effort involved. In this paper, we present a novel semi-supervised dictionary learning method which utilizes the structural sparse relationships between the labeled and unlabeled samples. Specifically, by connecting the sparse reconstruction coefficients on both the original samples and the dictionary, the unlabeled samples can be automatically grouped with the different labeled samples, and the grouped samples share a small number of atoms in the dictionary via mixed $\ell_{2,p}$-norm regularization. This makes the learned dictionary more representative and discriminative, since the shared atoms are learned from labeled and unlabeled samples potentially belonging to the same class. Minimizing the derived objective function is challenging because it is non-convex and highly non-smooth. We propose an efficient optimization algorithm to solve the problem based on the block coordinate descent method, and provide a rigorous proof of its convergence. Extensive experiments demonstrate the superior performance of our method in classification applications.
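For intuition, here is a generic two-block coordinate descent alternation for dictionary learning, with a plain $\ell_1$ penalty standing in for the paper's structural mixed $\ell_{2,p}$-norm regularizer:

```python
import numpy as np

def dict_learn_bcd(Y, n_atoms, n_outer=20, lam=0.1, step=0.01):
    """Two-block coordinate descent: sparse coding by ISTA, then a
    gradient step on the dictionary with column renormalization."""
    d, n = Y.shape
    rng = np.random.default_rng(0)
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, n))
    for _ in range(n_outer):
        # Block 1: sparse codes via a few ISTA steps.
        L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the fit term
        for _ in range(10):
            Z = X - D.T @ (D @ X - Y) / L
            X = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)
        # Block 2: dictionary update with unit-norm atoms.
        D -= step * (D @ X - Y) @ X.T
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, X
```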
Multi-Modality Tracker Aggregation: From Generative to Discriminative
Zhang, Xiaoqin (Wenzhou University) | Li, Wei (Taobao Software Company Limited) | Fan, Mingyu (Wenzhou University) | Wang, Di (Wenzhou University) | Ye, Xiuzi (Wenzhou University)
Visual tracking is an important research topic in the computer vision community. Although there are numerous tracking algorithms in the literature, none performs better than the others under all circumstances, and the best algorithm for a particular dataset may not be known a priori. This motivates a fundamental problem: the need for an ensemble of different tracking algorithms that overcomes their individual drawbacks and increases generalization ability. This paper proposes a multi-modality ranking aggregation framework for the fusion of multiple tracking algorithms. In our work, each tracker is viewed as a `ranker' which outputs a rank list of the candidate image patches based on its own appearance model in a particular modality. The proposed algorithm then aggregates the rankings of the different rankers to produce a joint ranking. Moreover, the level of expertise of each `ranker', based on its historical ranking results, is also effectively used in our model. The proposed model not only provides a general framework for fusing multiple tracking algorithms on multiple modalities, but also offers a natural way to combine the advantages of generative and discriminative model based trackers. It does not need to directly compare the output results obtained by different trackers; such comparisons are usually heuristic. Extensive experiments demonstrate the effectiveness of our work.
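A toy illustration of expertise-weighted rank aggregation is given below; it uses a simple weighted Borda count rather than the paper's probabilistic aggregation model.

```python
import numpy as np

def aggregate_rankings(rank_lists, expertise):
    """Weighted Borda count: each tracker ('ranker') contributes a score
    for every candidate based on its position in that tracker's rank
    list, weighted by the tracker's expertise."""
    m = len(rank_lists[0])
    scores = np.zeros(m)
    for ranking, w in zip(rank_lists, expertise):
        for pos, cand in enumerate(ranking):    # ranking is best-first
            scores[cand] += w * (m - pos)
    return np.argsort(-scores)                  # joint ranking, best-first

# Two trackers rank four candidate patches; the first is trusted more.
joint = aggregate_rankings([[2, 0, 1, 3], [0, 2, 3, 1]], expertise=[0.7, 0.3])
```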
Hybrid Singular Value Thresholding for Tensor Completion
Zhang, Xiaoqin (Wenzhou University) | Zhou, Zhengyuan (Stanford University) | Wang, Di (Wenzhou University) | Ma, Yi (ShanghaiTech University)
In this paper, we study the low-rank tensor completion problem, where a high-order tensor with missing entries is given and the goal is to complete the tensor. We propose to minimize a new convex objective function, based on log sum of exponentials of nuclear norms, that promotes the low-rankness of unfolding matrices of the completed tensor. We show for the first time that the proximal operator to this objective function is readily computable through a hybrid singular value thresholding scheme. This leads to a new solution to high-order (low-rank) tensor completion via convex relaxation. We show that this convex relaxation and the resulting solution are much more effective than existing tensor completion methods (including those also based on minimizing ranks of unfolding matrices). The hybrid singular value thresholding scheme can be applied to any problem where the goal is to minimize the maximum rank of a set of low-rank matrices.
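The basic building blocks, singular value thresholding and mode-$k$ unfolding, can be sketched as follows; the paper's hybrid scheme couples the thresholds across all unfoldings through the log-sum-exp objective, which is not reproduced here.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, obtained by soft-thresholding the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def unfold(T, mode):
    """Mode-k unfolding: the mode-k fibers of the tensor become columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
```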
Simultaneous Rectification and Alignment via Robust Recovery of Low-rank Tensors
Zhang, Xiaoqin, Wang, Di, Zhou, Zhengyuan, Ma, Yi
In this work, we propose a general method for recovering low-rank third-order tensors, in which the data can be deformed by some unknown transformation and corrupted by arbitrary sparse errors. Since the unfolding matrices of a tensor are interdependent, we introduce auxiliary variables and relax the hard equality constraints via the augmented Lagrange multiplier method. To improve computational efficiency, we introduce a proximal gradient step into the alternating direction minimization method. We provide a proof of convergence for the linearized version of the problem, which is the inner loop of the overall algorithm. Both simulations and experiments show that our methods are more efficient and effective than previous work. The proposed method can be easily applied to simultaneously rectify and align multiple images or video frames. In this context, the state-of-the-art algorithms RASL and TILT can be viewed as two special cases of our work, and yet each only performs part of the function of our method.
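For reference, one inexact augmented-Lagrangian iteration for the matrix low-rank-plus-sparse decomposition $D \approx L + S$ looks as follows; the paper applies such updates to tensor unfoldings and additionally updates the unknown transformation, which is omitted here.

```python
import numpy as np

def shrink(M, tau):
    """Entrywise soft-thresholding, the proximal operator of tau*||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def ialm_step(D, S, Y, mu, lam):
    """One inexact ALM iteration for D ~ L + S: a singular value
    thresholding update for the low-rank part, soft-thresholding for the
    sparse errors, then dual ascent on the multiplier Y."""
    U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
    L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
    S = shrink(D - L + Y / mu, lam / mu)
    Y = Y + mu * (D - L - S)
    return L, S, Y
```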
Creating Human-like Autonomous Players in Real-time First Person Shooter Computer Games
Wang, Di (Nanyang Technological University) | Subagdja, Budhitama (Nanyang Technological University) | Tan, Ah-Hwee (Nanyang Technological University) | Ng, Gee-Wah (DSO National Laboratories)
This paper illustrates how we create a software agent by employing FALCON, a self-organizing neural network that performs reinforcement learning, to play a well-known first-person shooter computer game, Unreal Tournament 2004. Through interacting with the game environment and its opponents, our agent learns in real time without any human intervention. Our agent bot participated in the 2K Bot Prize competition, similar to the \emph{Turing test} for intelligent agents, wherein human judges were tasked to identify whether their opponents in the game were human players or virtual agents. To perform well in the competition, an agent must act like a human and be able to adapt to changes made to the game. Although our agent did not emerge top in terms of human-likeness, its overall performance was encouraging: it acquired the highest game score while remaining convincingly human-like in some judges' opinions.
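FALCON itself learns state-action-reward mappings with a self-organizing network rather than a table, but the reinforcement signal it learns from has the familiar temporal-difference form; a tabular sketch:

```python
import numpy as np

def td_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning (temporal-difference) update."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    return Q
```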