General Scales Unlock AI Evaluation with Explanatory and Predictive Power
- Lexin Zhou,
- Lorenzo Pacchiardi,
- Fernando Martínez-Plumed,
- Katherine M. Collins,
- Yael Moros-Daval,
- Seraphina Zhang,
- Qinlin Zhao,
- Yitian Huang,
- Luning Sun,
- Jonathan E. Prunty,
- Zongqian Li,
- Pablo Sánchez-García,
- Kexin Chen,
- Pablo Antonio Moreno Casares,
- Jiyun Zu,
- John Burden,
- Behzad Mehrbakhsh,
- David Stillwell,
- Manuel Cebrian,
- Jindong Wang,
- Peter Henderson,
- Sherry Wu,
- Patrick C. Kyllonen,
- Lucy G. Cheke,
- Xing Xie,
- José Hernández-Orallo
Nature, Vol. 652, pp. 58–67
Ensuring safe and effective use of artificial intelligence (AI) requires understanding and anticipating its performance on new tasks, from advanced scientific challenges to transformed workplace activities1,2,3. So far, benchmarking has guided progress in AI but has offered limited explanatory and predictive power for general-purpose AI systems4,5,6,7,8, attributed to limited transferability across specific tasks9,10,11. Here we introduce general scales for AI evaluation that elicit demand profiles explaining what capabilities common AI benchmarks truly measure, extract ability profiles quantifying the general strengths and limits of AI systems, and robustly predict AI performance on new task instances. Our fully automated methodology builds on 18 rubrics capturing a broad range of cognitive and intellectual demands, which place different task instances on the same general scales, illustrated here on 15 large language models (LLMs) and 63 tasks. Both the demand and the ability profiles on these scales yield new insights, such as assessing the construct validity of benchmarks through their sensitivity and specificity, and explain conflicting claims about whether AI has reasoning capabilities. Ultimately, the general scales make high predictive power possible at the instance level, providing estimates superior to strong black-box baseline predictors, especially in out-of-distribution settings (new tasks and benchmarks). The scales, rubrics, battery, techniques and results presented here constitute a solid foundation for a science of AI evaluation, underpinning the reliable deployment of AI in the years ahead.
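To make the demand/ability framing concrete, below is a minimal Python sketch, not the authors' released pipeline. It assumes each task instance is scored on 18 demand scales and each model on matching ability scales, that per-scale success follows a logistic curve in the ability-demand gap, and that an instance is solved only if every demand is met (a conjunctive combination). The function names, the logistic link and its slope are illustrative assumptions.

```python
import numpy as np

N_SCALES = 18  # the paper builds on 18 rubrics/general scales

def predict_success(ability: np.ndarray, demand: np.ndarray,
                    slope: float = 1.5) -> float:
    """Estimated probability that a model with the given ability profile
    solves an instance with the given demand profile.

    ability, demand: length-18 vectors of levels on the same general scales.
    Assumption: per-scale success is logistic in the ability-demand gap,
    and all demands must be met for the instance to be solved.
    """
    gaps = ability - demand
    per_scale = 1.0 / (1.0 + np.exp(-slope * gaps))  # logistic in each gap
    return float(np.prod(per_scale))                 # conjunctive combination

# Toy usage: a model slightly above most demands but short on one scale.
rng = np.random.default_rng(0)
demand = rng.uniform(1, 4, size=N_SCALES)   # hypothetical instance demands
ability = demand + 0.5                      # model clears most demands
ability[3] = demand[3] - 1.0                # one capability falls short
print(f"predicted success probability: {predict_success(ability, demand):.3f}")
```

Under this kind of model, a single unmet demand suppresses the predicted success probability even when all other abilities exceed their demands, which is what lets demand profiles explain why a system can fail on an instance despite strong aggregate benchmark scores.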