Zibulevsky, Michael
Primal-Dual Sequential Subspace Optimization for Saddle-point Problems
Choukroun, Yoni, Zibulevsky, Michael, Kisilev, Pavel
We introduce a new sequential subspace optimization method for large-scale saddle-point problems. It iteratively solves a sequence of auxiliary saddle-point problems in low-dimensional subspaces, spanned by directions derived from first-order information over the primal and dual variables. Proximal regularization is further deployed to stabilize the optimization process. Experimental results demonstrate significantly better convergence relative to popular first-order methods. We analyze the influence of the subspace on the convergence of the algorithm, and assess its performance in various deterministic optimization scenarios, such as bilinear games, ADMM-based constrained optimization, and generative adversarial networks.
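The subspace idea behind this line of work can be illustrated on a toy problem. The sketch below is not the paper's primal-dual algorithm; it is a minimal SESOP-style routine for a convex quadratic, where each iteration exactly minimizes the objective over the two-dimensional subspace spanned by the current gradient and the previous step. All names here are illustrative.

```python
import numpy as np

def sesop_quadratic(Q, b, x0, iters=20):
    """Sequential subspace optimization for f(x) = 0.5 x'Qx - b'x.

    Each iteration minimizes f exactly over the low-dimensional
    subspace spanned by the current gradient and the previous step,
    by solving a tiny auxiliary problem in the subspace coordinates.
    """
    x = x0.copy()
    prev_step = np.zeros_like(x)
    for _ in range(iters):
        g = Q @ x - b                          # gradient of the quadratic
        if np.linalg.norm(g) < 1e-10:
            break
        D = np.column_stack([g, prev_step])
        D = D[:, np.linalg.norm(D, axis=0) > 1e-12]  # drop degenerate dirs
        # minimize f(x + D a) over a:  (D'QD) a = -D'g
        a, *_ = np.linalg.lstsq(D.T @ Q @ D, -D.T @ g, rcond=None)
        step = D @ a
        x = x + step
        prev_step = step
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
Q = M @ M.T + 30 * np.eye(30)                  # well-conditioned SPD matrix
b = rng.standard_normal(30)
x = sesop_quadratic(Q, b, np.zeros(30))
print(np.linalg.norm(Q @ x - b))               # residual after 20 iterations
```

On quadratics this particular subspace choice recovers conjugate-gradient-like behavior; the paper's method generalizes the same construction to saddle-point problems by spanning the subspace with primal and dual first-order directions.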
SEBOOST - Boosting Stochastic Learning Using Subspace Optimization Techniques
Richardson, Elad, Herskovitz, Rom, Ginsburg, Boris, Zibulevsky, Michael
We present SEBOOST, a technique for boosting the performance of existing stochastic optimization methods. SEBOOST applies a secondary optimization process in the subspace spanned by the last steps and descent directions. The method was inspired by the SESOP optimization method for large-scale problems, and has been adapted to the stochastic learning framework. It can be applied on top of any existing optimization method with no need to tweak the internal algorithm. We show that the method can boost the performance of different algorithms and make them more robust to changes in their hyper-parameters. As the boosting steps of SEBOOST are applied between large sets of descent steps, the additional subspace optimization hardly increases the overall computational burden. We introduce two hyper-parameters that control the balance between the baseline method and the secondary optimization process. The method was evaluated on several deep learning tasks, demonstrating promising results.
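The boosting scheme described above can be sketched on a deterministic toy problem. This is not the paper's stochastic deep-learning setting: below, the "baseline" is plain gradient descent on a least-squares objective, and after every block of baseline steps a secondary optimization minimizes the objective exactly over the subspace spanned by the most recent steps. All names and parameters are illustrative.

```python
import numpy as np

def seboost_toy(A, b, x0, outer=10, inner=8, lr=0.01, hist=4):
    """SEBOOST-style boosting on f(x) = 0.5 ||Ax - b||^2.

    Baseline: `inner` plain gradient steps.  Boosting: minimize f
    exactly over the subspace spanned by the last `hist` steps
    (cheap, since the subspace is tiny).
    """
    x = x0.copy()
    for _ in range(outer):
        steps = []
        for _ in range(inner):                  # baseline descent steps
            g = A.T @ (A @ x - b)
            x = x - lr * g
            steps.append(-lr * g)
        D = np.column_stack(steps[-hist:])      # subspace of recent steps
        # secondary optimization: min_a f(x + D a), closed form here
        AD = A @ D
        a, *_ = np.linalg.lstsq(AD.T @ AD, AD.T @ (b - A @ x), rcond=None)
        x = x + D @ a
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lr = 1.0 / np.linalg.norm(A, 2) ** 2            # stable baseline step size
x = seboost_toy(A, b, np.zeros(20), lr=lr)
print(0.5 * np.linalg.norm(A @ x - b) ** 2)     # boosted final loss
```

Because a = 0 is always feasible in the secondary problem, the boosting step can never increase the objective, which is one reason the scheme composes safely with an arbitrary baseline optimizer.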
Blind Source Separation via Multinode Sparse Representation
Zibulevsky, Michael, Kisilev, Pavel, Zeevi, Yehoshua Y., Pearlmutter, Barak A.
We consider the problem of blind source separation from a set of instantaneous linear mixtures, where the mixing matrix is unknown. It was recently discovered that exploiting the sparsity of the sources in an appropriate representation, according to some signal dictionary, dramatically improves the quality of separation. In this work we use the property of multiscale transforms, such as wavelets or wavelet packets, to decompose signals into sets of local features with various degrees of sparsity. We use this intrinsic property to select the best (most sparse) subsets of features for further separation. The performance of the algorithm is verified on noise-free and noisy data. Experiments with simulated signals, musical sounds, and images demonstrate significant improvement in separation quality over previously reported results.
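The feature-selection step described above can be sketched in a few lines. The code below is not the full separation algorithm: it only decomposes a signal into multiscale subbands (a hand-rolled Haar transform stands in for the wavelet dictionary) and scores each subband's sparsity with a normalized l1/l2 ratio, so the sparsest subset of coefficients can be selected. All names are illustrative.

```python
import numpy as np

def haar_levels(x, levels=3):
    """Decompose a signal into Haar detail coefficients per scale,
    plus the final approximation band."""
    bands = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        bands.append((even - odd) / np.sqrt(2))  # detail coefficients
        approx = (even + odd) / np.sqrt(2)       # next coarser level
    bands.append(approx)
    return bands

def sparsity(c):
    """Normalized l1/l2 ratio in [1/sqrt(n), 1]; smaller = sparser."""
    nrm = np.linalg.norm(c)
    if nrm == 0:
        return 1.0                               # all-zero band: uninformative
    return np.sum(np.abs(c)) / (nrm * np.sqrt(len(c)))

# piecewise-constant signal: very sparse in the Haar dictionary
t = np.arange(256)
sig = np.where(t < 100, 1.0, 0.0) + np.where(t > 180, -0.5, 0.0)
bands = haar_levels(sig)
scores = [sparsity(b) for b in bands[:-1]]       # detail bands only
best = int(np.argmin(scores))                    # index of sparsest subband
print(best, [round(s, 3) for s in scores])
```

For comparison, a dense Gaussian band scores around 0.8 on this measure, so subbands with scores far below that are the natural candidates for the separation stage.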
An MEG Study of Response Latency and Variability in the Human Visual System During a Visual-Motor Integration Task
Tang, Akaysha C., Pearlmutter, Barak A., Hely, Tim A., Zibulevsky, Michael, Weisend, Michael P.
Human reaction times during sensory-motor tasks vary considerably. To begin to understand how this variability arises, we examined the response-time variability of neuronal populations at early versus late stages of visual processing. The conventional view is that precise temporal information is gradually lost as information passes through a layered network of mean-rate "units." We tested whether neuronal populations at different processing stages in humans behave like mean-rate "units." A blind source separation algorithm was applied to MEG signals recorded during sensory-motor integration tasks. Response latency and variability for multiple visual sources were estimated by detecting single-trial stimulus-locked events for each source.