d9812f756d0df06c7381945d2e2c7d4b-AuthorFeedback.pdf
We thank the four reviewers for their constructive comments; our responses follow. We will rewrite the formulations in the revision. The classifier is trained with a learning rate in {2, 5, 10} and a batch size of 256 for 50 epochs, and the classifier's best accuracy is reported. For NNP we test s = 25 and s = 64; for MS we use the original authors' GitHub implementation.
Accelerating the k-means++ Algorithm by Using Geometric Information
Corominas, Guillem Rodríguez, Blesa, Maria J., Blum, Christian
k-means clustering is a widely used method in data clustering and unsupervised machine learning, aiming to divide a given dataset into k distinct, non-overlapping clusters so as to minimize the within-cluster variance. The k-means clustering problem becomes NP-hard when extended beyond a single dimension [3]. Despite this complexity, there are algorithms designed to find sufficiently good solutions within a reasonable amount of time. Among these, Lloyd's algorithm, also referred to as the standard algorithm or batch k-means, is the most renowned [42]. The k-means algorithm is one of the most popular algorithms in data mining [58, 32], mainly due to its simplicity, scalability, and guaranteed termination. However, its performance is highly sensitive to the initial placement of the centers [5]. In fact, there is no general approximation guarantee for Lloyd's algorithm that applies to all scenarios, i.e., an arbitrary initialization may lead to an arbitrarily bad clustering. Therefore, it is crucial to employ effective initialization methods [24].
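The initialization method the title refers to, k-means++, seeds the centers by D² sampling: each new center is drawn with probability proportional to its squared distance from the nearest center chosen so far. A minimal sketch (plain NumPy; not the paper's accelerated geometric variant):

```python
import numpy as np

def kmeans_pp_init(X, k, rng=None):
    """D^2 seeding: draw each new center with probability
    proportional to its squared distance to the nearest
    center chosen so far."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]            # first center: uniform
    d2 = np.sum((X - centers[0]) ** 2, axis=1)
    for _ in range(k - 1):
        probs = d2 / d2.sum()                 # the D^2 distribution
        idx = rng.choice(n, p=probs)
        centers.append(X[idx])
        # keep only the distance to the nearest chosen center
        d2 = np.minimum(d2, np.sum((X - X[idx]) ** 2, axis=1))
    return np.array(centers)

# toy data: two well-separated point clouds
X = np.vstack([np.zeros((50, 2)), 10 + np.zeros((50, 2))])
C = kmeans_pp_init(X, 2, rng=0)   # one seed lands in each cloud
```

On this toy data the D² distribution puts all its mass on the far cloud after the first draw, so the two seeds always land in different clouds, which is exactly the behavior a uniform initializer cannot guarantee.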
Weakly Supervised Deep Hyperspherical Quantization for Image Retrieval
Wang, Jinpeng, Chen, Bin, Zhang, Qiang, Meng, Zaiqiao, Liang, Shangsong, Xia, Shu-Tao
Deep quantization methods have shown high efficiency on large-scale image retrieval. However, current models heavily rely on ground-truth information, hindering the application of quantization in label-hungry scenarios. A more realistic demand is to learn from inexhaustible uploaded images that are associated with informal tags provided by amateur users. Though such sketchy tags do not directly reveal the labels, they actually contain useful semantic information for supervising deep quantization. To this end, we propose Weakly-Supervised Deep Hyperspherical Quantization (WSDHQ), which is the first work to learn deep quantization from weakly tagged images. Specifically, 1) we use word embeddings to represent the tags and enhance their semantic information based on a tag correlation graph. 2) To better preserve semantic information in quantization codes and reduce quantization error, we jointly learn semantics-preserving embeddings and a supervised quantizer on the hypersphere by employing a well-designed fusion layer and tailor-made loss functions. Extensive experiments show that WSDHQ can achieve state-of-the-art performance on weakly-supervised compact coding. Code is available at https://github.com/gimpong/AAAI21-WSDHQ.
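To make the hyperspherical setting concrete, here is a minimal sketch of the core quantization step only (not WSDHQ's learned quantizer or loss functions): once embeddings and codewords are L2-normalized onto the unit hypersphere, nearest-codeword assignment reduces to a cosine-similarity argmax, i.e., a dot product:

```python
import numpy as np

def spherical_quantize(embs, codebook):
    """Assign each L2-normalized embedding to its nearest
    codeword by cosine similarity; on the unit hypersphere,
    cosine similarity is just the dot product."""
    E = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    C = codebook / np.linalg.norm(codebook, axis=1, keepdims=True)
    sims = E @ C.T                  # pairwise cosine similarities
    codes = sims.argmax(axis=1)     # index of the nearest codeword
    return codes, C[codes]          # compact codes + reconstructions

embs = np.array([[3.0, 0.1],        # mostly along the first axis
                 [0.1, 5.0]])       # mostly along the second axis
codebook = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
codes, recon = spherical_quantize(embs, codebook)
```

Storing only `codes` (an integer per image) is what makes such schemes attractive for large-scale retrieval; the quantization error is the gap between each normalized embedding and its reconstruction.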
VQC-Based Reinforcement Learning with Data Re-uploading: Performance and Trainability
Coelho, Rodrigo, Sequeira, André, Santos, Luís Paulo
Reinforcement Learning (RL) consists of designing agents that make intelligent decisions without human supervision. When used alongside function approximators such as Neural Networks (NNs), RL is capable of solving extremely complex problems. Deep Q-Learning, an RL algorithm that uses Deep NNs, achieved super-human performance in some specific tasks. Nonetheless, it is also possible to use Variational Quantum Circuits (VQCs) as function approximators in RL algorithms. This work empirically studies the performance and trainability of such VQC-based Deep Q-Learning models in classic control benchmark environments. More specifically, we investigate how data re-uploading affects both these metrics. We show that the magnitude and the variance of the gradients of these models remain substantial throughout training due to the moving targets of Deep Q-Learning. Moreover, we empirically show that increasing the number of qubits does not lead to an exponential vanishing of the magnitude and variance of the gradients for a parameterized quantum circuit (PQC) approximating a 2-design, unlike what was expected due to the Barren Plateau phenomenon. This hints at the possibility that VQCs are especially well suited as function approximators in such a context.
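The circuit layout behind data re-uploading can be sketched with a deliberately tiny toy: a single qubit that alternates trainable RY(θ) layers with RY(x) encoding layers and then measures ⟨Z⟩. This is only an illustration of the layer structure, not the paper's multi-qubit PQCs; with RY gates alone all rotations share one axis and commute, so this 1-qubit version collapses to a single rotation:

```python
import numpy as np

def ry(angle):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def reupload_expectation(x, thetas):
    """Toy 1-qubit data re-uploading circuit: alternate a
    trainable RY(theta) layer with an RY(x) encoding layer
    (the input is 're-uploaded' once per layer), then
    return <Z> of the final state."""
    state = np.array([1.0, 0.0])              # start in |0>
    for theta in thetas:
        state = ry(x) @ (ry(theta) @ state)   # train layer, then encode
    z = np.diag([1.0, -1.0])                  # Pauli-Z observable
    return float(state @ z @ state)           # expectation value

out = reupload_expectation(x=0.3, thetas=[0.1, -0.2, 0.5])
```

In a Deep Q-Learning setting, outputs like `out` would play the role of (rescaled) Q-values, and the `thetas` would be the trainable parameters updated from the temporal-difference loss.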