Machines Beat Humans on a Reading Test. But Do They Understand?
Quanta Magazine
In the fall of 2017, Sam Bowman, a computational linguist at New York University, figured that computers still weren't very good at understanding the written word. Sure, they had become decent at simulating that understanding in certain narrow domains, like automatic translation or sentiment analysis (for example, determining whether a sentence sounds "mean or nice," he said). But Bowman wanted measurable evidence of the genuine article: bona fide, human-style reading comprehension in English. So he came up with a test.

In an April 2018 paper coauthored with collaborators from the University of Washington and DeepMind, the Google-owned artificial intelligence company, Bowman introduced a battery of nine reading-comprehension tasks for computers called GLUE (General Language Understanding Evaluation). The test was designed as "a fairly representative sample of what the research community thought were interesting challenges," said Bowman, but also "pretty straightforward for humans."