

AI system not yet ready to help peer reviewers assess research quality

#artificialintelligence

Artificial intelligence could eventually help to award scores to the tens of thousands of papers submitted to the Research Excellence Framework by UK universities. Researchers tasked with examining whether artificial intelligence (AI) technology could assist in the peer review of journal articles submitted to the United Kingdom's Research Excellence Framework (REF) say the system is not yet accurate enough to aid human assessment, and they recommend further testing in a large-scale pilot scheme. The team's findings, published on 12 December, show that the AI system generated scores identical to those of human peer reviewers up to 72% of the time. When averaged over the multiple submissions made by some institutions across a broad range of the 34 subject-based 'units of assessment' that make up the REF, "the correlation between the human score and the AI score was very high", says data scientist Mike Thelwall of the University of Wolverhampton, UK, a co-author of the report. In its current form, however, the tool is most useful when assessing research output from institutions that submit many articles to the REF, Thelwall says; it is less useful for smaller universities that submit only a handful.


How AI Startups Must Compete with Google: Reply to Fei-Fei Li

#artificialintelligence

Google is a giant in artificial intelligence. Every day, its exploits in AI make the news. As a result, AI startups can feel overshadowed by this mega-competitor, and their vision can become clouded. Fortunately, to navigate those murky waters, they can rely on Dr Fei-Fei Li, Director of the Stanford AI Lab (SAIL). She is also known as the teacher of an online course on neural networks for computer vision.