Interview with Alice Xiang: Fair human-centric image dataset for ethical AI benchmarking
Earlier this month, Sony AI released a dataset that establishes a new benchmark for AI ethics in computer vision models. The research behind the dataset, named the Fair Human-Centric Image Benchmark (FHIBE), has been published in Nature. FHIBE is the first publicly available, globally diverse, consent-based human image dataset (comprising over 10,000 human images) for evaluating bias across a wide variety of computer vision tasks. We sat down with project lead Alice Xiang, Global Head of AI Governance at Sony Group and Lead Research Scientist for AI Ethics at Sony AI, to discuss the project and the broader implications of this research.

Could you start by introducing the project and taking us through some of the main contributions?
4 December 2025