Text Classification with Simple Transformers

#artificialintelligence

Using Transformer models has never been simpler! Yes, that's what Simple Transformers author Thilina Rajapakse says, and I agree with him, and so should you. You might have seen lengthy code with hundreds of lines to implement transformer models such as BERT, RoBERTa, etc. Once you understand how to use Simple Transformers, you will know how easy and simple it is to use transformer models. The Simple Transformers library is built on top of the Hugging Face Transformers library. Hugging Face Transformers provides state-of-the-art general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBERT, XLNet, T5, etc.) for Natural Language Understanding (NLU) and Natural Language Generation (NLG), with more than a thousand pre-trained models covering around 100 languages.
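To give a sense of how little code this takes, here is a minimal sketch of binary text classification with the library's ClassificationModel API; the toy DataFrames, label scheme, and choice of RoBERTa are illustrative assumptions, not examples from the article.

```python
import pandas as pd
from simpletransformers.classification import ClassificationModel

# Toy data: each row is (text, label); the sentences and labels are placeholders.
train_df = pd.DataFrame(
    [["This movie was fantastic", 1], ["Utterly boring and slow", 0]],
    columns=["text", "labels"],
)
eval_df = pd.DataFrame(
    [["A wonderful surprise", 1], ["I want my two hours back", 0]],
    columns=["text", "labels"],
)

# Any supported (model_type, model_name) pair works; RoBERTa is used here.
model = ClassificationModel("roberta", "roberta-base", num_labels=2, use_cuda=False)

# Train, evaluate, and predict in three calls.
model.train_model(train_df)
result, model_outputs, wrong_predictions = model.eval_model(eval_df)
predictions, raw_outputs = model.predict(["Simple Transformers makes this easy"])
```

The point of the sketch is the shape of the workflow: one constructor plus three methods replaces the hundreds of lines of boilerplate mentioned above.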


Hyundai Power Transformers $33M Expansion to Add 86 Jobs

U.S. News

News outlets reported Tuesday that Hyundai Power Transformers USA Inc. will begin the expansion of its Montgomery facility in July, which is expected to create 86 jobs. Company officials said in a statement that the expansion will increase production of its power transformers by more than 60 percent.


Michael Bay has 14 'Transformers' stories ready oh no

Mashable

The future of movies is, among other things, extremely exhausting. We've got another trilogy's worth of Fast and Furious movies, not two but four more Fantastic Beasts installments, the ongoing myth of James Cameron's Avatar sequels, and now, inexplicably, an endless string of Transformers movies that will outlive us all. Transformers: The Last Knight, releasing in June, is currently set to be the last film directed by Bay, but he told MTV News that he has 14 more stories written. In the interview, Bay also describes the upcoming standalone Bumblebee movie and Shia LaBeouf's recent arrest before dropping the "14 stories written" reveal. Now, stories doesn't mean scripts or even outlines -- it could be as simple as ideas on a napkin, but still, 14 of them.


Transformer with a Mixture of Gaussian Keys

arXiv.org Machine Learning

Multi-head attention is a driving force behind state-of-the-art transformers, which achieve remarkable performance across a variety of natural language processing (NLP) and computer vision tasks. It has been observed that for many applications, those attention heads learn redundant embeddings, and most of them can be removed without degrading the performance of the model. Inspired by this observation, we propose Transformer with a Mixture of Gaussian Keys (Transformer-MGK), a novel transformer architecture that replaces redundant heads in transformers with a mixture of keys at each head. These mixtures of keys follow a Gaussian mixture model and allow each attention head to focus on different parts of the input sequence efficiently. Compared to its conventional transformer counterpart, Transformer-MGK accelerates training and inference, has fewer parameters, and requires fewer FLOPs to compute while achieving comparable or better accuracy across tasks. Transformer-MGK can also be easily extended for use with linear attention. We empirically demonstrate the advantage of Transformer-MGK in a range of practical applications, including language modeling and tasks that involve very long sequences. On the Wikitext-103 and Long Range Arena benchmarks, Transformer-MGKs with 4 heads attain performance comparable to or better than baseline transformers with 8 heads.
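To make the idea concrete, here is a rough numerical sketch (not the authors' implementation): each key position carries a small mixture of Gaussian key centres, a query is scored by the log-density of that mixture, and a softmax over positions yields the attention weights. The number of mixture components, the shared variance, and all tensor shapes below are illustrative assumptions.

```python
import numpy as np

def mgk_attention_weights(queries, keys, pi, sigma2=1.0):
    """Attention weights from a mixture of Gaussian keys (illustrative sketch).

    queries: (n_q, d)     query vectors
    keys:    (n_k, M, d)  M Gaussian key centres per key position
    pi:      (n_k, M)     mixture weights per key position (rows sum to 1)
    Each score is log sum_m pi[j, m] * N(q_i; keys[j, m], sigma2 * I),
    up to an additive constant that cancels in the softmax.
    """
    # Squared distances between every query and every mixture component.
    diff = queries[:, None, None, :] - keys[None, :, :, :]   # (n_q, n_k, M, d)
    sq_dist = np.sum(diff ** 2, axis=-1)                     # (n_q, n_k, M)
    # Log mixture density, dropping the shared Gaussian normaliser.
    log_comp = np.log(pi[None, :, :] + 1e-12) - sq_dist / (2.0 * sigma2)
    scores = np.logaddexp.reduce(log_comp, axis=-1)          # (n_q, n_k)
    # Softmax over key positions gives the attention weights.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

# Example: 3 queries, 5 key positions, 2 Gaussian components per key, d = 4.
rng = np.random.default_rng(0)
attn = mgk_attention_weights(
    rng.normal(size=(3, 4)),        # queries
    rng.normal(size=(5, 2, 4)),     # mixture key centres
    np.full((5, 2), 0.5),           # equal mixture weights
)  # each row of attn sums to 1
```

The sketch only illustrates the scoring mechanism; the paper additionally learns the mixture parameters and integrates the scheme into full multi-head and linear attention layers.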


AdapterHub: A Framework for Adapting Transformers

#artificialintelligence

Great, now let's look at the key features of this framework! To add the adapters, the authors used something called 'Mix-Ins', which are inherited by the HuggingFace transformer so as to keep the codebases reasonably separate. You'll notice that the code mostly corresponds to regular HuggingFace transformers, and we just add two lines to add and train the adapter. Something special about this AdapterHub framework is that you can dynamically configure the adapters and change the architectures. Whilst you can use adapters directly from the literature -- for example from Pfeiffer et al. (2020a) or Houlsby et al. (2020) -- you can also modify these architectures quite easily using a configuration file, as in the sketch below.
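Here is a minimal sketch of that "two added lines" pattern, assuming the adapter-transformers package that backs AdapterHub; the adapter name, task head, and exact method signatures may differ between library versions, so treat this as an illustration of the workflow rather than the framework's canonical usage.

```python
# Sketch assuming the adapter-transformers package (pip install adapter-transformers),
# which ships drop-in replacements for the Hugging Face model classes.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# The two added lines: register a new adapter and mark only its weights as trainable.
# "sentiment" is an arbitrary adapter name chosen for this sketch; "pfeiffer" selects
# the Pfeiffer et al. (2020a) architecture and could be swapped for "houlsby"
# or a custom configuration.
model.add_adapter("sentiment", config="pfeiffer")
model.train_adapter("sentiment")

# Training then proceeds exactly as with a regular Hugging Face model
# (e.g. via the Trainer API); only the adapter parameters receive gradients.
```

Because the rest of the script is unchanged HuggingFace code, swapping adapter architectures is a matter of changing the config argument rather than rewriting the training loop.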