Global Big Data Conference
OpenAI's impressive AI language model GPT-3 has plenty of things going for it, but with 175 billion parameters no one would claim it's particularly streamlined. The Allen Institute for AI (AI2) has demonstrated a model that performs as well as or better than GPT-3 at answering questions, but is a tenth the size. Macaw, AI2's model, emerged from the nonprofit's research into creating an AI that performs at human level on standardized tests. "After we got a very high score they moved on to harder questions," said AI2 head Oren Etzioni. "There's this paradox where sometimes the questions that are easiest for people are the hardest for machines -- and the biggest gap was in common sense." For instance, he said, when asked "When did Tom Hanks land on the moon?" GPT-3 answers 1995, since that's when the film Apollo 13 came out.
Jan-26-2022, 02:35:22 GMT