Evaluating AI Evaluation: Perils and Prospects
As AI systems appear to exhibit ever-increasing capability and generality, assessing their true potential and safety becomes paramount. This paper contends that the prevalent evaluation methods for these systems are fundamentally inadequate, heightening the risks and potential hazards associated with AI. I argue that a reformation is required in the way we evaluate AI systems, and that we should look to the cognitive sciences, which have a longstanding tradition of assessing general intelligence across diverse species, for inspiration in our approaches. I identify some of the difficulties that must be overcome when applying cognitively-inspired methods to general-purpose AI systems, and also analyse the emerging area of "Evals". The paper concludes by identifying promising research pathways that could refine AI evaluation, advancing it towards a rigorous scientific domain that contributes to the development of safe AI systems.
arXiv.org Artificial Intelligence
Jul-12-2024