MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning

Neural Information Processing Systems

As a successful approach to self-supervised learning, contrastive learning aims to learn invariant information shared among distortions of the input sample. While contrastive learning has yielded continuous advancements in sampling strategy and architecture design, two persistent defects remain: the interference of task-irrelevant information and sample inefficiency, both related to the recurring emergence of trivial constant solutions. From the perspective of dimensional analysis, we find that dimensional redundancy and the dimensional confounder are the intrinsic issues behind these phenomena, and we provide experimental evidence to support this viewpoint. We further propose a simple yet effective approach, MetaMask, short for the dimensional Mask learned by Meta-learning, to learn representations robust to dimensional redundancy and the dimensional confounder. MetaMask adopts a redundancy-reduction technique to tackle the dimensional redundancy issue and introduces a dimensional mask that reduces the gradient effects of the specific dimensions containing the confounder; the mask is trained via a meta-learning paradigm whose objective is to improve the performance of the masked representations on a typical self-supervised task. We provide theoretical analyses proving that MetaMask obtains tighter risk bounds for downstream classification than typical contrastive methods. Empirically, our method achieves state-of-the-art performance on various benchmarks.
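The dimensional mask described in the abstract can be sketched in a few lines. Below is a minimal, hedged illustration, not the authors' implementation: it assumes a sigmoid-parameterized per-dimension mask fit by plain first-order gradient descent on a toy alignment loss between two views, standing in for the paper's full meta-learning inner/outer loop. All names (`phi`, `alignment_loss`, `mask_grad`) are ours, introduced for illustration only.

```python
import numpy as np

# Sketch only: a per-dimension mask m = sigmoid(phi) gates a d-dimensional
# representation z -> m * z. The mask logits phi are fit by gradient descent
# on a simple alignment loss between two masked views of the same samples --
# a first-order stand-in for the meta-learning objective of improving masked
# representations on a self-supervised task.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def alignment_loss(z1, z2, phi):
    # Small when the masked dimensions agree across the two views.
    m = sigmoid(phi)
    return np.mean((m * z1 - m * z2) ** 2)

def mask_grad(z1, z2, phi):
    # Gradient of the alignment loss w.r.t. phi (up to a constant factor),
    # chained through the sigmoid: dL/dphi = dL/dm * m * (1 - m).
    m = sigmoid(phi)
    dL_dm = np.mean(2.0 * m * (z1 - z2) ** 2, axis=0)
    return dL_dm * m * (1.0 - m)

rng = np.random.default_rng(0)
n, d = 64, 8
shared = rng.normal(size=(n, d))
z1, z2 = shared.copy(), shared.copy()
# Dimension 0 plays the "confounder": it disagrees across the two views.
z1[:, 0] = rng.normal(size=n)
z2[:, 0] = rng.normal(size=n)

phi = np.zeros(d)  # mask logits; every mask entry starts at 0.5
for _ in range(200):
    phi -= 1.0 * mask_grad(z1, z2, phi)

m = sigmoid(phi)
# The mask on the disagreeing dimension shrinks toward 0, while the masks on
# the shared dimensions receive no gradient and stay at 0.5.
```

The design choice mirrored here is that the mask is learned, not thresholded: dimensions whose content is unstable across distortions receive a persistent gradient pushing their mask toward zero, so their gradient effect on the representation is suppressed rather than hard-pruned.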


A Appendix

Neural Information Processing Systems

The appendix derives MetaMask's training paradigm from Equation 4, where M denotes the dimensional mask, and gives the proofs of Theorems 5.1 and 5.2, including the bounds on the supervised cross-entropy loss (Section A.2.1 proves the equality part; Section A.2.2 completes the proof). An evidence example in Figure 5 supports Equation 20; the reason behind this phenomenon is that, following Theorem 5.1, the self-paced dimensional mask jointly enhances the gradients. Bringing Theorem 5.2 into Theorem 5.1 yields a comparison of lower bounds: the lower bound obtained by the masked representation, i.e., MetaMask, is larger than that of typical contrastive methods, so our approach better bounds the downstream classification risk. The appendix also clarifies that the dimensional confounder is defined, from the dimensional perspective, as a negative factor that may lead to model degradation, and notes that MetaMask is trained with a fixed learning rate instead of the cosine annealing strategy.





MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning

Li, Jiangmeng, Qiang, Wenwen, Zhang, Yanan, Mo, Wenyi, Zheng, Changwen, Su, Bing, Xiong, Hui

arXiv.org Artificial Intelligence


