ss task
Self-supervised GAN: Analysis and Improvement with Multi-class Minimax Game
Tran, Ngoc-Trung, Tran, Viet-Hung, Nguyen, Bao-Ngoc, Yang, Linxiao, Cheung, Ngai-Man (Man)
Self-supervised (SS) learning is a powerful approach for representation learning using unlabeled data. Recently, it has been applied to Generative Adversarial Network (GAN) training. Specifically, SS tasks were proposed to address the catastrophic forgetting issue in the GAN discriminator. In this work, we perform an in-depth analysis to understand how SS tasks interact with the learning of the generator. From the analysis, we identify issues with these SS tasks that allow a severely mode-collapsed generator to excel at them.
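The SS task in this line of work is typically rotation prediction on the discriminator. As a rough illustration of that baseline setup (not the paper's multi-class minimax variant; all names, layer sizes, and lambda_ss below are chosen for the sketch), the discriminator gets an auxiliary rotation-classification head trained alongside the usual adversarial loss:

```python
# Minimal sketch of the rotation-based SS task in a GAN discriminator
# (baseline SSGAN idea; the paper's multi-class minimax variant differs).
# Names D, rot_head, lambda_ss are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(x):
    """Return x rotated by 0/90/180/270 degrees, plus rotation-class labels."""
    rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return torch.cat(rots, dim=0), labels

class Discriminator(nn.Module):
    def __init__(self):  # assumes 3x32x32 inputs
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        self.gan_head = nn.Linear(128 * 8 * 8, 1)  # real/fake logit
        self.rot_head = nn.Linear(128 * 8 * 8, 4)  # 4-way rotation logits

    def forward(self, x):
        h = self.features(x)
        return self.gan_head(h), self.rot_head(h)

def d_loss(D, real, fake, lambda_ss=1.0):
    """Discriminator loss: non-saturating GAN loss plus SS rotation loss."""
    real_logit, _ = D(real)
    fake_logit, _ = D(fake.detach())
    adv = F.softplus(-real_logit).mean() + F.softplus(fake_logit).mean()
    rot_x, rot_y = rotate_batch(real)  # baseline: SS task on real images only
    _, rot_logits = D(rot_x)
    ss = F.cross_entropy(rot_logits, rot_y)
    return adv + lambda_ss * ss
```

Because the baseline applies the rotation task only to real images (and rewards the generator for producing easily-classified rotations), a generator that collapses onto a few rotation-friendly modes can still score well on the SS task, which is the failure mode the analysis above identifies.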
Sentence Simplification Using Paraphrase Corpus for Initialization
Neural sentence simplification methods based on the sequence-to-sequence framework have become the mainstream approach to the sentence simplification (SS) task. Unfortunately, these methods are currently limited by the scarcity of parallel SS corpora. In this paper, we focus on reducing the dependence on parallel corpora by carefully initializing neural SS methods from a paraphrase corpus. Our work is motivated by two findings: (1) a paraphrase corpus includes a large proportion of sentence pairs that also qualify as SS pairs; (2) we can construct large-scale pseudo-parallel SS data by keeping the sentence pairs with a higher complexity difference. We therefore propose two strategies for initializing neural SS methods from a paraphrase corpus. We train three different neural SS methods with our initialization, and each obtains substantial improvements on the WikiLarge data compared with the same method without initialization.
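The second finding suggests a simple filtering recipe. A minimal sketch, assuming a crude complexity proxy (sentence length plus average word length) and an arbitrary threshold in place of whatever scoring the paper actually uses:

```python
# Sketch: build pseudo-parallel SS data from a paraphrase corpus by keeping
# pairs with a large complexity difference. The complexity proxy and the
# min_diff threshold are illustrative assumptions, not the paper's method.
def complexity(sentence: str) -> float:
    words = sentence.split()
    if not words:
        return 0.0
    avg_word_len = sum(len(w) for w in words) / len(words)
    return len(words) + avg_word_len  # longer sentences, longer words = harder

def build_pseudo_ss_pairs(paraphrase_pairs, min_diff=2.0):
    """Keep paraphrase pairs whose complexity gap exceeds min_diff and
    orient each kept pair as (complex_source, simple_target)."""
    pseudo = []
    for a, b in paraphrase_pairs:
        ca, cb = complexity(a), complexity(b)
        if abs(ca - cb) >= min_diff:
            src, tgt = (a, b) if ca > cb else (b, a)
            pseudo.append((src, tgt))
    return pseudo

pairs = [
    ("The committee deliberated at considerable length before reaching a verdict.",
     "The committee talked for a long time before deciding."),
    ("I like cats.", "I love cats."),  # near-identical complexity: filtered out
]
print(build_pseudo_ss_pairs(pairs))  # keeps only the first, oriented pair
```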
Sentence Simplification via Large Language Models
Feng, Yutao, Qiang, Jipeng, Li, Yun, Yuan, Yunhao, Zhu, Yi
Sentence Simplification aims to rephrase complex sentences into simpler sentences while retaining the original meaning. Large Language Models (LLMs) have demonstrated the ability to perform a variety of natural language processing tasks. Nevertheless, it remains unclear how LLMs perform on the SS task compared to current SS methods. To address this gap in research, we undertake a systematic evaluation of the zero-/few-shot learning capability of LLMs by assessing their performance on existing SS benchmarks. We carry out an empirical comparison of the performance of ChatGPT and the most advanced GPT-3.5 model (text-davinci-003).
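The zero-shot setup being evaluated can be pictured in a few lines. The prompt wording and the call_llm stand-in below are assumptions for illustration, not the paper's actual template or client:

```python
# Sketch of zero-shot sentence simplification with an LLM. call_llm is a
# hypothetical stand-in for whatever client is tested (ChatGPT or
# text-davinci-003); the prompt is an assumed template, not the paper's.
def zero_shot_prompt(sentence: str) -> str:
    return (
        "Rewrite the following sentence so it is simpler and easier to read, "
        f"keeping the original meaning:\n\n{sentence}\n\nSimplified:"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical: replace with a real API client for the model under test.
    raise NotImplementedError

def evaluate(test_set):
    """test_set: iterable of (complex_sentence, [reference_simplifications])."""
    outputs = []
    for source, references in test_set:
        simplified = call_llm(zero_shot_prompt(source))
        outputs.append((source, simplified, references))
    # Outputs would then be scored against the references with standard SS
    # metrics such as SARI on the existing benchmarks.
    return outputs
```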
Best of arXiv.org for AI, Machine Learning, and Deep Learning – November 2019 - insideBIGDATA
A large chunk of research on the security issues of neural networks is focused on adversarial attacks. However, there exists a vast sea of simpler attacks one can perform both against and with neural networks. This paper gives a quick introduction to how deep learning in security works, explores the basic methods of exploitation, and also looks at the offensive capabilities that deep-learning-enabled tools provide. All presented attacks, such as backdooring, GPU-based buffer overflows, or automated bug hunting, are accompanied by short open-source exercises for anyone to try out. The TensorFlow code for this paper can be found HERE.
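Of the attacks listed, backdooring is the easiest to picture in a few lines. A generic BadNets-style poisoning sketch (the trigger patch, poison fraction, and target label are illustrative assumptions, not the paper's exercises):

```python
# Generic illustration of a training-data backdoor: stamp a small trigger
# onto a fraction of the training images and flip their labels to a target
# class. All parameters here are illustrative, not the paper's exercises.
import numpy as np

def poison_dataset(images, labels, target_label=0, poison_frac=0.05, seed=0):
    """images: float array (N, H, W, C) in [0, 1]; labels: int array (N,)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)),
                     replace=False)
    images[idx, -3:, -3:, :] = 1.0   # stamp a white 3x3 trigger, bottom-right
    labels[idx] = target_label       # relabel poisoned samples to the target
    return images, labels

# A model trained on the poisoned set behaves normally on clean inputs but
# predicts target_label whenever the trigger patch appears at test time.
```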