Microsoft researchers claim 'state-of-the-art' biomedical NLP model

#artificialintelligence

In a paper published on the preprint server arXiv.org, Microsoft researchers propose an AI technique they call domain-specific language model pretraining for biomedical natural language processing (NLP). By compiling a "comprehensive" biomedical NLP benchmark from publicly available data sets, the coauthors claim they achieved state-of-the-art results on tasks including named entity recognition, evidence-based medical information extraction, document classification, and more. Previous studies have shown that domain-specific data sets can deliver accuracy gains when training NLP models for specialized domains like biomedicine, but the prevailing assumption has been that "out-of-domain" text is still helpful; the researchers question that assumption.
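
For readers who want a concrete picture, here is a minimal sketch of what pretraining a masked language model purely on in-domain text could look like with the Hugging Face transformers and datasets libraries. The corpus file name, the reuse of a general-domain vocabulary, and all hyperparameters are illustrative assumptions, not details from the Microsoft paper.

# A minimal sketch of domain-specific masked-language-model pretraining from scratch.
# "pubmed_abstracts.txt" and all hyperparameters are placeholders, not the paper's setup.
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

# Initialize the model from scratch so it learns only from in-domain text,
# rather than continuing from a general-domain checkpoint.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # placeholder vocabulary
model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size))

dataset = load_dataset("text", data_files="pubmed_abstracts.txt")["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                      batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="biomed-bert", num_train_epochs=1,
                                         per_device_train_batch_size=8),
                  data_collator=collator,
                  train_dataset=dataset)
trainer.train()

The key design choice the paper argues for is in the data, not the code: everything the model sees, including its vocabulary in the full setup, comes from the biomedical domain.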


Google AI's ALBERT claims top spot in multiple NLP performance benchmarks

#artificialintelligence

Researchers from Google AI (formerly Google Research) and the Toyota Technological Institute at Chicago have created ALBERT, an AI model that achieves state-of-the-art results exceeding human performance. ALBERT now claims first place on major NLP leaderboards for benchmarks like GLUE and SQuAD 2.0, along with a top RACE score. On the Stanford Question Answering Dataset (SQuAD 2.0) benchmark, ALBERT achieves a score of 92.2; on the General Language Understanding Evaluation (GLUE) benchmark, 89.4; and on the ReAding Comprehension from English Examinations (RACE) benchmark, 89.4%. ALBERT is a version of the Transformer-based BERT that "uses parameter reduction techniques to lower memory consumption and increase the training speed of BERT," according to a paper published on OpenReview.net. The paper was published alongside other papers under consideration for the International Conference on Learning Representations (ICLR), which will take place in April 2020 in Addis Ababa, Ethiopia.
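
The parameter reduction the paper describes rests on two main ideas: factorizing the embedding matrix into two smaller matrices and sharing one set of Transformer layer weights across all layers. The PyTorch sketch below illustrates both ideas with illustrative sizes, not the published ALBERT configuration.

# A minimal PyTorch sketch of ALBERT-style parameter reduction:
# (1) factorized embeddings (vocab -> small E -> hidden H), and
# (2) one Transformer layer whose weights are reused at every depth.
# Sizes are illustrative only.
import torch
import torch.nn as nn

vocab_size, E, H, num_layers = 30000, 128, 768, 12

factorized_embedding = nn.Sequential(
    nn.Embedding(vocab_size, E),   # vocab_size x E parameters
    nn.Linear(E, H),               # E x H projection instead of a vocab_size x H table
)

shared_layer = nn.TransformerEncoderLayer(d_model=H, nhead=12, batch_first=True)

def encode(token_ids: torch.Tensor) -> torch.Tensor:
    hidden = factorized_embedding(token_ids)
    for _ in range(num_layers):    # the same weights serve every layer
        hidden = shared_layer(hidden)
    return hidden

out = encode(torch.randint(0, vocab_size, (2, 16)))  # shape (batch=2, seq=16, H=768)

Because the expensive vocabulary table only maps into the small dimension E, and the stack reuses a single layer's weights, the parameter count grows far more slowly than in a standard BERT of the same depth and width.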


Microsoft's UniLM AI achieves state-of-the-art performance on summarization and language generation

#artificialintelligence

Language model pretraining, a technique that "teaches" machine learning systems contextualized text representations by having them predict words based on their contexts, has advanced the state of the art across a range of natural language processing objectives. However, models like Google's BERT, which are bidirectional in design (meaning they draw on left-of-word and right-of-word context to form predictions), aren't well suited to natural language generation tasks without substantial modification. That's why scientists at Microsoft Research investigated an alternative approach dubbed the UNIfied pre-trained Language Model (UniLM), which handles unidirectional, sequence-to-sequence, and bidirectional prediction tasks and which can be fine-tuned for both natural language understanding and generation. They claim it compares favorably to BERT on popular benchmarks, achieving state-of-the-art results on a sampling of abstractive summarization, generative question answering, and language generation data sets. At its core, UniLM is a multi-layer Transformer network jointly pretrained on large amounts of text and optimized for language modeling.
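
The unifying trick described for UniLM is that a single shared Transformer can act as a bidirectional, unidirectional, or sequence-to-sequence language model depending only on its self-attention mask. The sketch below builds illustrative versions of those three masks in plain PyTorch; it is not Microsoft's released code.

# Attention masks controlling which context each token may attend to (1 = allowed, 0 = blocked).
import torch

def bidirectional_mask(n: int) -> torch.Tensor:
    return torch.ones(n, n)                      # every token sees every token (BERT-style)

def unidirectional_mask(n: int) -> torch.Tensor:
    return torch.tril(torch.ones(n, n))          # each token sees only tokens to its left

def seq2seq_mask(src_len: int, tgt_len: int) -> torch.Tensor:
    n = src_len + tgt_len
    mask = torch.zeros(n, n)
    mask[:, :src_len] = 1                        # every position sees the full source segment
    mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len))  # target side is causal
    return mask

print(seq2seq_mask(3, 2))

Swapping masks rather than models is what lets the same pretrained weights be fine-tuned for both understanding tasks (bidirectional) and generation tasks (unidirectional or sequence-to-sequence).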


Is the Stanford Rare Word Similarity dataset a reliable evaluation benchmark?

@machinelearnbot

Rare word representation is an active area of lexical semantics that deals with inducing embeddings for rare and unseen words, i.e., words for which few or no occurrences have been observed in the training corpus.
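
One widely used way to induce such embeddings is to compose them from subword information, as in FastText. The toy sketch below illustrates that character n-gram averaging idea with a randomly initialized n-gram table; it is only an illustration of the general technique and is not tied to the Stanford Rare Word Similarity dataset itself.

# Toy sketch: build a vector for any word, seen or unseen, from its character n-grams.
import numpy as np

DIM = 50
rng = np.random.default_rng(0)
ngram_vectors = {}  # maps character n-gram -> vector; in a real system these are learned

def char_ngrams(word, n=3):
    padded = f"<{word}>"                               # boundary markers, as in FastText
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def embed(word):
    # Average the vectors of the word's n-grams; unseen grams get a (toy) random vector.
    vecs = [ngram_vectors.setdefault(g, rng.normal(size=DIM)) for g in char_ngrams(word)]
    return np.mean(vecs, axis=0)

print(embed("cardiomyopathy").shape)  # (50,) -- a vector even for a word never seen in training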


Google achieves state-of-the-art NLP performance with an enormous language model and data set

#artificialintelligence

Transfer learning, a technique that entails pretraining an AI model on a data-rich task before fine-tuning it on another task, has been successfully applied in domains from robotics to object classification. But it holds particular promise in the subfield of natural language processing (NLP), where it has given rise to a diversity of benchmark-besting approaches. To advance it further, researchers at Google developed a new data set -- the Colossal Clean Crawled Corpus -- and a unified framework and model dubbed the Text-to-Text Transfer Transformer (T5), which converts language problems into a text-to-text format. They say that in experiments with one of the largest models ever submitted to the General Language Understanding Evaluation (GLUE) benchmark, they achieved state-of-the-art results on benchmarks covering question answering, text classification, and more. Generally speaking, training a model to perform NLP tasks involves ensuring it develops knowledge that lets it "understand" text -- knowledge that might range from low-level (for example, the spelling or meaning of words) to high-level (say, that a tuba is too large to fit in most backpacks).
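
The text-to-text framing means every task, whether classification, translation, or summarization, is cast as "text in, text out" so that one model, one loss, and one decoding procedure cover them all. The sketch below illustrates that framing with task prefixes of the kind the T5 paper describes; the example sentences and labels are made up.

# Minimal sketch of casting different NLP tasks into a single text-to-text format.
def to_text_to_text(task: str, example: dict) -> tuple:
    if task == "translation":
        return ("translate English to German: " + example["en"], example["de"])
    if task == "classification":
        return ("cola sentence: " + example["sentence"], example["label"])
    if task == "summarization":
        return ("summarize: " + example["document"], example["summary"])
    raise ValueError(f"unknown task: {task}")

src, tgt = to_text_to_text("classification",
                           {"sentence": "The book was read by me.", "label": "acceptable"})
print(src)  # "cola sentence: The book was read by me."
print(tgt)  # "acceptable"

Because even class labels are emitted as strings, adding a new task requires only a new prefix and examples, not a new output head.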