OpenAI says the latest ChatGPT can 'think' – and I have thoughts
We are fast approaching two years of the generative AI revolution, sparked by the November 2022 release of ChatGPT by OpenAI. So far it's been a mixed bag. OpenAI recently announced it had crossed 200 million weekly active users – nothing to be sniffed at, but it got its first 100 million within two months of release. A recent YouGov study found that the inclusion of AI in a product is as likely to turn off a potential purchaser as it is to get them to hand over their cash. Nevertheless, money keeps flowing into the sector, and advances keep coming.
In 2021, linguist Emily Bender and computer scientist Timnit Gebru co-authored a paper that described language models, then a nascent technology, as "stochastic parrots". A language model, they wrote, "is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning." AI can still get better, even if it is a stochastic parrot, because the more training data it has, the more convincing it will seem. But does something like ChatGPT actually display anything like intelligence, reasoning, or thought? Or is it simply, at ever-increasing scales, "haphazardly stitching together sequences of linguistic forms"?
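To make that "probabilistic stitching" concrete, here is a minimal sketch of the idea in miniature: a toy bigram model that generates text purely from counts of which word followed which in its training data, with no reference to meaning. This is a deliberately simplistic illustration, not how ChatGPT is actually built (modern models are neural networks trained on vastly more data); the corpus and function names are invented for the example.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record, for each word, every word observed to follow it."""
    counts = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(counts, start, length, seed=0):
    """Stitch together a sequence by repeatedly sampling a next word
    in proportion to how often it followed the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = counts.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Tiny illustrative training text
corpus = "the cat sat on the mat the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the", 6))
```

Every sentence this toy produces is locally plausible, because each adjacent word pair appeared in the training data, yet the model "understands" nothing; the open question the article raises is whether scaling this kind of statistical machinery up by many orders of magnitude produces something qualitatively different.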