A New AI Evaluation Cosmos: Ready to Play the Game?

Hernández-Orallo, José (Universitat Politècnica de València) | Baroni, Marco (Facebook) | Bieger, Jordi (Reykjavik University) | Chmait, Nader (Monash University) | Dowe, David L. (Monash University) | Hofmann, Katja (Microsoft Research) | Martínez-Plumed, Fernando (Universitat Politècnica de València) | Strannegård, Claes (Chalmers University of Technology) | Thórisson, Kristinn R. (Reykjavik University)

AI Magazine 

We report on a series of new platforms and events dealing with AI evaluation that may change the way in which AI systems are compared and their progress is measured. The introduction of a more diverse and challenging set of tasks on these platforms can feed AI research in the years to come, shaping the notion of success and the directions of the field. However, the playground of tasks and challenges presented there may misdirect the field unless it is given some meaningful structure and systematic guidelines for its organization and use. Anticipating this issue, we also report on several initiatives and workshops that focus on analyzing the similarity and dependencies between tasks, their difficulty, and what capabilities they really measure, and, ultimately, on elaborating new concepts and tools that can arrange tasks and benchmarks into a meaningful taxonomy.
