ChatGPT is a robot con artist, and we're suckers for trusting it
A few days after Google and Microsoft announced they'd be delivering search results generated by chatbots -- artificially intelligent software capable of producing uncannily human-sounding prose -- I fretted that our new AI helpers are not to be trusted. After all, Google's own AI researchers had warned the company that chatbots would be "stochastic parrots" (likely to squawk things that are wrong, stupid, or offensive) and "prone to hallucinating" (liable to just make stuff up). The bots, drawing on what are known as large language models, "are trained to predict the likelihood of utterances," a team from DeepMind, the Alphabet-owned AI company, wrote last year in a presentation on the risks of LLMs. "Yet, whether or not a sentence is likely does not reliably indicate whether the sentence is also correct." These chatbots, in other words, are not actually intelligent.
Feb-16-2023, 18:23:09 GMT