AI experts call for 'bias bounties' to boost ethics scrutiny – Government & civil service news
Experts from the private sector and leading research labs in the US and Europe have joined forces to create a toolkit for turning AI ethics principles into practice. The preprint paper, published last week, advocates paying people to find risks of bias in artificial intelligence (AI) systems – adapting a model used to check the security of new computer systems, in which hackers are paid 'bounties' for identifying weaknesses.

The paper also proposes better linking independent third-party auditing operations with government policies to foster a market in regulatory systems, and suggests that governments increase funding for academic researchers to verify performance claims made by industry.

The 80-page paper, Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, has been put together by AI specialists from 30 organisations including Google Brain, Intel, OpenAI, Stanford University and the Leverhulme Centre for the Future of Intelligence.

"In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, there is a need to move beyond [ethics] principles to a focus on mechanisms for demonstrating responsible behaviour," the executive summary reads.
Apr-25-2020, 07:26:31 GMT