What changes OpenAI's GPT-3 and other models have brought us

#artificialintelligence

In June of last year, OpenAI released GPT-3. Composed of 175 billion parameters, and reportedly costing tens of millions of dollars to train, it was the largest artificial intelligence language model ever produced. It covers everything from answering questions to writing articles, poems, and even slang. GPT-3 stands for Generative Pretrained Transformer-3: the third in the series of generative pre-trained transformers, with more than 100 times the parameters of 2019's GPT-2. GPT-3 has 175 billion parameters; the second-largest language model has 17 billion.
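
To put those sizes in perspective, here is a back-of-the-envelope comparison. GPT-2's widely reported size of 1.5 billion parameters is an assumption not stated in the excerpt above.

```python
# Back-of-the-envelope scale comparison between GPT-3 and GPT-2.
# GPT-2's size (1.5 billion parameters) is widely reported but is an
# assumption here, not a figure from the article itself.
gpt3_params = 175e9  # GPT-3: 175 billion parameters
gpt2_params = 1.5e9  # GPT-2 (2019): 1.5 billion parameters

ratio = gpt3_params / gpt2_params
print(f"GPT-3 is roughly {ratio:.0f}x the size of GPT-2")
# Output: GPT-3 is roughly 117x the size of GPT-2 -- "more than 100 times"
```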


What is GPT-3? Everything your business needs to know about OpenAI's breakthrough AI language program

ZDNet

GPT-3 is a computer program created by the privately held San Francisco startup OpenAI. It is a gigantic neural network, and as such, it is part of the deep learning segment of machine learning, which is itself a branch of the field of computer science known as artificial intelligence, or AI. The program is better than any prior program at producing lines of text that sound like they could have been written by a human. The reason that such a breakthrough could be useful to companies is that it has great potential for automating tasks. GPT-3 can respond to any text that a person types into the computer with a new piece of text that is appropriate to the context. Type a full English sentence into a search box, for example, and you're likely to get back a relevant response in full sentences. That means GPT-3 can conceivably amplify human effort in a wide variety of situations, from questions and answers for customer service to due diligence document search to report generation. The program is currently in a private beta for which people can sign up on a waitlist. It's being offered by OpenAI as an API accessible through the cloud, and companies that have been granted access have developed some intriguing applications that use the generation of text to enhance all kinds of programs, from simple question-answering to producing programming code. Along with the potential for automation come great drawbacks. GPT-3 is compute-hungry, putting it beyond the use of most companies in any conceivable on-premise fashion. Its generated text can be impressive at first blush, but long compositions tend to become somewhat senseless.
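
For teams admitted from the waitlist, exercising the model amounts to a single call against OpenAI's cloud API. The sketch below uses the legacy openai Python client (the 0.x series current when GPT-3 launched); the engine name, prompt, and sampling settings are illustrative assumptions rather than details from the article.

```python
# Minimal sketch of querying GPT-3 through OpenAI's cloud API, using the
# legacy openai Python client (0.x). Engine name, prompt, and sampling
# settings are illustrative assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # key issued to beta participants

response = openai.Completion.create(
    engine="davinci",  # the base GPT-3 engine exposed in the beta
    prompt="Q: What can GPT-3 automate for a customer-service team?\nA:",
    max_tokens=60,     # cap on the length of the generated continuation
    temperature=0.7,   # sampling randomness; lower values are more deterministic
)

print(response.choices[0].text.strip())
```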


AI Weekly: The promise and shortcomings of OpenAI's GPT-3

#artificialintelligence

I typically think of the dog days of summer as a time when news slows down. It's typically when a lot of people take time off work, and the lull leads local news stations to cover inconsequential things like cat shows or a little baby squirrel on a little baby Jet Ski. But these are not typical times. Fallout surrounding issues of bias and discrimination continues at Facebook, as multiple news outlets reported that Instagram's content moderation algorithm was 50% more likely to flag and disable the accounts of Black users than White users. Facebook and Instagram are now creating teams to examine how algorithms impact the experiences of Black, Latinx, and other specific groups of users.


20 AI, Data Science, Machine Learning Terms You Need to Know in 2020 (Part 2) - KDnuggets

#artificialintelligence

This is the 2nd part of our list of 20 AI, Data Science, Machine Learning terms to know for 2020. Part 1 is here: 20 AI, Data Science, Machine Learning Terms You Need to Know in 2020 (Part 1). These definitions were compiled by KDnuggets Editors Matthew Dearing, Matthew Mayo, Asel Mendis, and Gregory Piatetsky. One really interesting concept, which Pedro Domingos, a leading AI researcher, called one of the most significant advances in ML theory in 2019, is illustrated in Figure 1 of the original article.


GPT-3: The First Artificial General Intelligence?

#artificialintelligence

If you had asked me a year or two ago when Artificial General Intelligence (AGI) would be invented, I'd have told you that we were a long way off. Most experts were saying that AGI was decades away, and some were saying it might not happen at all. The consensus is -- was? -- that all the recent progress in AI concerns so-called "narrow AI," meaning systems that can only perform one specific task. An AGI, or "strong AI," which could perform any task as well as a human being, is a much harder problem. It is so hard that there isn't a clear roadmap for achieving it, and few researchers are openly working on the topic. GPT-3, the latest language model from the OpenAI team, is the first model to seriously shake that status quo.