AI system not yet ready to help peer reviewers assess research quality
Artificial intelligence could eventually help to award scores to the tens of thousands of papers submitted to the Research Excellence Framework by UK universities. Credit: Yuichiro Chino/Getty

Researchers tasked with examining whether artificial intelligence (AI) technology could assist in the peer review of journal articles submitted to the United Kingdom's Research Excellence Framework (REF) say the system is not yet accurate enough to aid human assessment, and recommend further testing in a large-scale pilot scheme.

The team's findings, published on 12 December, show that the AI system generated scores identical to those of human peer reviewers up to 72% of the time. When averaged over the multiple submissions made by some institutions across a broad range of the 34 subject-based 'units of assessment' that make up the REF, "the correlation between the human score and the AI score was very high", says data scientist Mike Thelwall at the University of Wolverhampton, UK, a co-author of the report.

In its current form, however, the tool is most useful when assessing research output from institutions that submit many articles to the REF, Thelwall says. It is less useful for smaller universities that submit only a handful.
19 December 2022, 13:15 GMT