
Question Generation using Natural Language Processing


Automatically generate edtech assessments such as MCQs, True/False questions, and Fill-in-the-blanks using state-of-the-art NLP techniques. This course focuses on solving the problem of question generation in edtech. Open any middle school textbook and at the end of every chapter you will find assessment questions: MCQs, True/False questions, Fill-in-the-blanks, Match the following, and so on. In this course, we will see how to take any text content and generate these assessment questions using NLP. This is a very practical use case of NLP, putting everything from basic algorithms like word vectors (word2vec, GloVe, etc.) to recent advancements like BERT, OpenAI GPT-2, and T5 transformers to real-world use.
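To make the idea concrete, here is a minimal toy sketch of one of the question types mentioned above, Fill-in-the-blanks. This is not the course's actual pipeline: the keyword heuristic (capitalized words) and the function name are assumptions for illustration; in practice the keywords would come from a proper keyword extractor or a model such as T5.

```python
import re
import random

def fill_in_the_blanks(text, num_questions=2, seed=42):
    """Toy sketch: build (question, answer) pairs by blanking one
    keyword per sentence. Keyword selection here is a crude
    placeholder heuristic (capitalized words not at sentence start)."""
    random.seed(seed)
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    questions = []
    for sent in sentences:
        # Skip the first character so the sentence-initial capital
        # does not count as a keyword candidate.
        candidates = re.findall(r"\b[A-Z][a-z]+\b", sent[1:])
        if not candidates:
            continue
        answer = random.choice(candidates)
        question = sent.replace(answer, "_____", 1)
        questions.append((question, answer))
    return questions[:num_questions]

pairs = fill_in_the_blanks(
    "Isaac Newton formulated the laws of motion. "
    "The laws were published in the Principia."
)
for q, a in pairs:
    print(q, "->", a)
```

A real system would also generate distractors for MCQs, e.g. by picking nearest neighbors of the answer in a word-vector space.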

Getting Started with Tokenization, Transformers and NLP #NLP #Tokenization #MachineLearning #Transformers @huggingface @MorganFunto


Earlier this month @huggingface released a number of notebooks that walk users through some NLP basics. The three-part series, written by @MorganFunto, covers tokenizers, transformers, and pipelines using Hugging Face's transformers library. The notebooks cover the basics at a high level and get you working in code quickly. Because the notebooks are written in Colab, anyone can run the code in the browser. Before going deep into any Machine Learning or Deep Learning Natural Language Processing model, every practitioner needs a way to map raw input strings to a representation a trainable model can understand.
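That string-to-representation mapping is exactly what a tokenizer does. Below is a deliberately simplified sketch (not Hugging Face's implementation, which uses subword algorithms like BPE or WordPiece plus special tokens): a whitespace vocabulary that converts raw text into integer IDs, with an unknown-token fallback.

```python
# Toy tokenizer: map raw strings to integer IDs a model can consume.
UNK = "[UNK]"

def build_vocab(corpus):
    """Assign an integer ID to every whitespace token, with [UNK] at 0."""
    vocab = {UNK: 0}
    for text in corpus:
        for tok in text.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(text, vocab):
    """Convert a raw string into a list of token IDs."""
    return [vocab.get(tok, vocab[UNK]) for tok in text.lower().split()]

vocab = build_vocab(["the cat sat", "the dog sat"])
print(encode("the cat sat", vocab))   # [1, 2, 3]
print(encode("the bird sat", vocab))  # [1, 0, 3] -- "bird" maps to [UNK]
```

Subword tokenizers exist precisely to soften that [UNK] problem: an unseen word is split into known pieces instead of being discarded.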



Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides state-of-the-art general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet...) for Natural Language Understanding (NLU) and Natural Language Generation (NLG), with over 32 pretrained models in 100 languages and deep interoperability between TensorFlow 2.0 and PyTorch. Choose the right framework for every part of a model's lifetime. This repo is tested on Python 2.7 and 3.5 (examples are tested only on Python 3.5), PyTorch 1.0.0 and TensorFlow 2.0.0-rc1. First you need to install one of, or both, TensorFlow 2.0 and PyTorch; please refer to the TensorFlow installation page and/or the PyTorch installation page for the specific install command for your platform. Once TensorFlow 2.0 and/or PyTorch has been installed, Transformers can be installed using pip.
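The two-step install described above can be sketched as follows (commands assumed from the standard pip workflow; consult the official TensorFlow and PyTorch installation pages for the exact platform-specific commands):

```shell
# Step 1: install at least one backend framework.
pip install tensorflow   # and/or: pip install torch

# Step 2: install the Transformers library itself.
pip install transformers
```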

Easy Machine Translation with Machine Learning and HuggingFace Transformers – MachineCurve


Transformers have significantly changed the way in which Natural Language Processing tasks can be performed. This architecture, which outperforms the classic recurrent one (and even LSTM-based architectures in some cases), has been around since 2017 and is in the process of being democratized today. Many tasks can benefit from these developments: text summarization, named entity recognition, and sentiment analysis, for example, can all be performed successfully with this type of model. In this tutorial, we will be looking at the task of machine translation. We'll first take a look at how Transformers can be used for this purpose, and at how they effectively perform a sequence-to-sequence learning task.
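The sequence-to-sequence idea can be sketched with a toy example (not MachineCurve's code): a decoder produces one target token at a time, conditioned on the source sentence and everything emitted so far, until it emits an end-of-sequence marker. Here a tiny lookup table stands in for a trained encoder-decoder, and all names are illustrative.

```python
# Tiny English->German phrase table standing in for a trained model.
TABLE = {"i": "ich", "love": "liebe", "cats": "katzen"}
EOS = "<eos>"

def next_token(source_tokens, emitted):
    """Stand-in for the decoder: emit the translation of the next
    source token, or <eos> once the source is exhausted."""
    if len(emitted) >= len(source_tokens):
        return EOS
    return TABLE.get(source_tokens[len(emitted)], "<unk>")

def translate(sentence):
    """Greedy autoregressive decoding: generate one token at a time,
    conditioned on the source and the tokens emitted so far."""
    src = sentence.lower().split()
    out = []
    while True:
        tok = next_token(src, out)
        if tok == EOS:
            break
        out.append(tok)
    return " ".join(out)

print(translate("I love cats"))  # ich liebe katzen
```

A real Transformer replaces the lookup with attention over the encoded source and the partial output, but the decoding loop has the same shape.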