AI's Latest Breakthrough Will Transform Learning--Here Are 5 Ways

#artificialintelligence

The Fourth Industrial Revolution just took a huge step forward, thanks to a breakthrough artificial intelligence (AI) model that can learn virtually anything about the world -- and produce the content to tell us about it. The AI program is GPT-3 by OpenAI, which started out as a language model to predict the next word in a sentence and has vastly exceeded that capability. Now, drawing from voluminous data -- essentially all of Wikipedia, links from Reddit, and other Internet content -- GPT-3 has shown it can also compose text that is virtually indistinguishable from human-generated content. Asger Alstrup Palm, Area9's chief technology officer, explained that GPT-3 was tasked with testing the "scaling hypothesis" -- to see if a bigger model with ever-increasing amounts of information would lead to better performance. Although it's too early to call the scaling hypothesis proven, there are some strong indications that this is, indeed, the case. Further validating the potential of GPT-3, Microsoft recently announced it will exclusively license the model from OpenAI, with the intention of developing and delivering AI solutions for customers and creating new solutions using natural language generation.
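
To make the "predict the next word" framing concrete, here is a minimal sketch using the openly available GPT-2 model from the Hugging Face Transformers library as a stand-in (GPT-3 itself is only reachable through OpenAI's hosted API); the prompt text is just an example.

```python
# A minimal next-word prediction sketch. GPT-2 (public) stands in for GPT-3,
# which is only accessible via OpenAI's hosted API.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The Fourth Industrial Revolution just took a huge step"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, sequence_length, vocab_size)

# The distribution over the next word comes from the last position.
next_token_id = int(logits[0, -1].argmax())
print(prompt + tokenizer.decode([next_token_id]))
```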


Microsoft gets exclusive license for OpenAI's GPT-3 language model

#artificialintelligence

Microsoft today announced that it will exclusively license GPT-3, one of the most powerful language understanding models in the world, from AI startup OpenAI. In a blog post, Microsoft EVP Kevin Scott said that the new deal will allow Microsoft to leverage OpenAI's technical innovations to develop and deliver AI solutions for customers, as well as create new solutions that harness the power of natural language generation. "We see this as an incredible opportunity to expand our Azure-powered AI platform in a way that democratizes AI technology, enables new products, services and experiences, and increases the positive impact of AI at scale," Scott wrote. "The scope of commercial and creative potential that can be unlocked through the GPT-3 model is profound, with genuinely novel capabilities -- most of which we haven't even imagined yet. Directly aiding human creativity and ingenuity in areas like writing and composition, describing and summarizing large blocks of long-form data (including code), converting natural language to another language -- the possibilities are limited only by the ideas and scenarios that we bring to the table."


The Development of Augmented Analytics

#artificialintelligence

If data is the gas in a car, then analytics is the car itself. Currently, there are a few trends and topics in tech without which any conversation about technology and innovation is incomplete -- analytics, artificial intelligence, and blockchain, to name a few. Augmented analytics is an extension of analytics that focuses on three main areas -- machine learning, natural language generation (NLG), and insight automation. The basic premise of augmented analytics is the elimination of painstaking tasks in the process of data analysis and their replacement with automation, thus refocusing human attention on modern analytics, business processes, and business value generation. As per predictions made by Gartner, over 40% of tasks involved in data science will be automated, thus increasing productivity, quickening the process, and initiating broader usage of data and analytics.
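
As a toy illustration of the insight-automation and NLG ideas described above, the sketch below computes a simple statistic from a hypothetical sales table and renders it as a plain-English sentence; the column names, figures, and wording are made up for illustration.

```python
# Toy "insight automation" sketch: compute a statistic and narrate it.
# Column names, figures, and wording are hypothetical.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "South", "East", "West"],
    "revenue": [120_000, 95_000, 143_000, 88_000],
})

top = sales.loc[sales["revenue"].idxmax()]
share = top["revenue"] / sales["revenue"].sum()

# Natural-language rendering of the automatically derived insight.
print(f"{top['region']} was the strongest region, contributing "
      f"{share:.0%} of total revenue ({top['revenue']:,}).")
```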


Bringing AI Supercomputing To Customers - Liwaiwai

#artificialintelligence

The trend toward the use of massive AI models to power a large number of tasks is changing how AI is built. At Microsoft Build 2020, we shared our vision for AI at Scale, utilizing state-of-the-art AI supercomputing in Azure and a new class of large-scale AI models enabling next-generation AI. The advantage of large-scale models is that they only need to be trained once with massive amounts of data using AI supercomputing, enabling them to then be "fine-tuned" for different tasks and domains with much smaller datasets and resources. The more parameters a model has, the better it can capture the difficult nuances of the data, as demonstrated by our 17-billion-parameter Turing Natural Language Generation (T-NLG) model and its ability to understand language well enough to answer questions about, or summarize, documents seen for the first time. Natural language models like this, significantly larger than the state-of-the-art models of a year ago and many orders of magnitude larger than earlier image-centric models, are now powering a variety of tasks throughout Bing, Word, Outlook, and Dynamics.
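
The "train once, fine-tune per task" pattern can be sketched as follows; T-NLG itself is not publicly downloadable, so this example assumes a small public checkpoint (distilbert-base-uncased from Hugging Face) and a tiny made-up dataset as stand-ins.

```python
# "Train once, fine-tune per task" sketch. distilbert-base-uncased is a small
# public checkpoint standing in for a large pretrained model; the two-example
# dataset is made up.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

texts = ["great summary of the document", "answer is unrelated to the question"]
labels = torch.tensor([1, 0])  # hypothetical task labels
batch = tokenizer(texts, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a few gradient steps stand in for a real training loop
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```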


The Impact of AI on Journalism

#artificialintelligence

Back in 2014, the Los Angeles Times published a report about an earthquake three minutes after it happened. This feat was possible because a staffer had developed a bot (a software robot) called Quakebot to write automated articles based on data generated by the US Geological Survey. Today, AIs write hundreds of thousands of the articles that are published by mainstream media outlets every week. At first, most of the Natural Language Generation (NLG) tools producing these articles were provided by software companies like Narrative Science. Today, many media organisations have developed in-house versions.
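
Quakebot's actual code is not public, but a template-style bot of this kind can be sketched as below; the field names and values are illustrative placeholders modeled loosely on the data the US Geological Survey publishes.

```python
# Template-style news-writing bot. Quakebot's real code is not public; the
# fields and values below are illustrative placeholders.
from datetime import datetime

quake = {
    "magnitude": 4.4,
    "place": "near Westwood, California",
    "time": datetime(2020, 9, 1, 6, 25),
    "depth_km": 9.9,
}

article = (
    f"A magnitude {quake['magnitude']} earthquake struck {quake['place']} "
    f"at {quake['time']:%I:%M %p} on {quake['time']:%B %d, %Y}, at a depth of "
    f"{quake['depth_km']} km, according to the US Geological Survey. "
    "This post was created by an algorithm written by the author."
)
print(article)
```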


A Machine Predicts My Next Sentence

#artificialintelligence

Text generation falls under the branch of data science called natural language generation, commonly referred to as NLG. Whereas natural language processing, or NLP, uses libraries to clean, convert, transform, and ultimately manipulate text, NLG strives to create new text from past data. One of the prominent examples is the chatbot. It uses a variety of algorithms that model off of previous text to produce a response, so that the user thinks (somewhat) that they are talking to a person instead of a machine. This generation of text is not only awesome, but it is useful as well, because it can automate otherwise manual processes.
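
A bare-bones way to "create new text from past data" is a first-order Markov model that learns which word tends to follow which and then samples a sentence; the sketch below uses a tiny stand-in corpus and is far simpler than the neural models behind modern chatbots.

```python
# First-order Markov text generator: learn which word follows which, then
# sample. The tiny corpus is a stand-in for real training data.
import random
from collections import defaultdict

corpus = ("natural language generation creates new text from past data while "
          "natural language processing cleans converts and transforms text").split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

word = "natural"
sentence = [word]
for _ in range(8):
    if word not in follows:
        break
    word = random.choice(follows[word])
    sentence.append(word)

print(" ".join(sentence))
```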


Exploring GPT-3: A New Breakthrough in Language Generation - KDnuggets

#artificialintelligence

It seems like only last year that we were arguing about whether the slow-release rollout of the 1.5 billion parameter Generative Pretrained Transformer-2 (GPT-2) was reasonable. If the debate seems recent, that's because it is (writing from 2020): The notorious GPT-2 model was announced by OpenAI in February 2019, but it wasn't fully released until nearly 9 months later (although it was replicated before that). The release schedule was admittedly somewhat experimental, meant more to foster discussion of responsible open publishing, rather than a last-ditch effort to avert an AI apocalypse. All that is a bit moot by now because not only has OpenAI trained a much larger language model in GPT-3, but you can sign up to access it through their new API. Comparing GPT-3 to GPT-2 is like comparing apples to, well, raisins, because the model is about that much larger.
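
For reference, a completion request through the OpenAI Python client as it existed around the time of writing looked roughly like the sketch below; the engine name, prompt, and parameters are illustrative assumptions, and the client library has since changed.

```python
# Sketch of a GPT-3 completion request with the OpenAI Python client of the
# time; the engine name and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # issued after signing up for API access

response = openai.Completion.create(
    engine="davinci",          # assumed engine name
    prompt="Explain GPT-3 in one sentence:",
    max_tokens=64,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```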


Text Classification with Simple Transformers

#artificialintelligence

Using Transformer models has never been simpler! Yes, that's what Simple Transformers author Thilina Rajapakse says, and I agree with him, and so should you. You might have seen lengthy code with hundreds of lines to implement transformer models such as BERT, RoBERTa, etc. Once you understand how to use Simple Transformers, you will know how easy and simple it is to use transformer models. The Simple Transformers library is built on top of the Hugging Face Transformers library. Hugging Face Transformers provides state-of-the-art general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet, T5, etc.) for Natural Language Understanding (NLU) and Natural Language Generation (NLG), and offers more than a thousand pre-trained models covering around 100 languages.
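
A minimal text-classification sketch with Simple Transformers, using its ClassificationModel API, might look like the following; the two-row DataFrame is made-up data standing in for a real labeled dataset.

```python
# Binary text classification with Simple Transformers' ClassificationModel.
# The two-row DataFrame is made-up data; use a real labeled dataset in practice.
import pandas as pd
from simpletransformers.classification import ClassificationModel

train_df = pd.DataFrame(
    [["transformers make NLP easy", 1], ["this library is confusing", 0]],
    columns=["text", "labels"],
)

model = ClassificationModel("roberta", "roberta-base", use_cuda=False)
model.train_model(train_df)

predictions, raw_outputs = model.predict(["simple transformers saves time"])
print(predictions)
```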


Top 8 GAN-Based Projects One Can Try Their Hands-On

#artificialintelligence

Generative Adversarial Networks, popularly known as GANs, have been successfully used in various areas such as computer vision, medical imaging, style transfer, and natural language generation, to name a few. In one of our articles, we discussed the beginner's guide to GANs and how they prove to be a front-runner for attaining the ultimate artificial general intelligence (AGI). In this article, we list the top 8 GAN-based projects one can try their hands on. About: Designing your own anime characters can be time-consuming and requires a lot of creative effort. GAN Models: For anime creation, you can work with several GAN models such as IllustrationGAN, AnimeGAN, and PSGAN.
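
To show the adversarial setup these projects build on, here is a bare-bones single training step for a GAN in PyTorch; the network sizes, the batch of random "real" data, and the hyperparameters are purely illustrative.

```python
# Bare-bones GAN training step in PyTorch. Network sizes, the random "real"
# batch, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 16, 64, 32
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.randn(batch_size, data_dim)       # stands in for real samples
fake = G(torch.randn(batch_size, latent_dim))  # generator's forgeries

# Discriminator step: push real toward 1, fake toward 0.
d_loss = (bce(D(real), torch.ones(batch_size, 1)) +
          bce(D(fake.detach()), torch.zeros(batch_size, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make the discriminator output 1 on fakes.
g_loss = bce(D(fake), torch.ones(batch_size, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```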