

The Download: how NYC tackles tough problems, and China's AI standards

MIT Technology Review

The standards document is very detailed: it sets clear criteria for when a data source should be banned from training generative AI. It also clarifies what companies should consider a "safety risk" in AI models. But it's important to remember that these standards are not laws. Zeyi's story is from China Report, our weekly newsletter covering tech and power in China. Sign up to receive it in your inbox every Tuesday.


House Dem warns AI could be a tool of 'digital colonialism' without 'inclusivity' guardrails

FOX News

A House Democrat is warning that artificial intelligence could become a tool of "digital colonialism" if the U.S. doesn't take steps to work with Western Hemisphere nations to create AI systems that reflect diversity and inclusion. Rep. Adriano Espaillat, D-N.Y., proposed a resolution during the August break that says the U.S. must champion a "regional" AI strategy that includes Western Hemisphere nations as this new technology is developed. "United States-led investments in the development of AI in the Western Hemisphere would promote the inclusion and representation of underserved populations in the global development and deployment of AI technologies, ensuring that no individual country dominates AI but rather collaborative developments in the Western Hemisphere," his resolution asserted. Espaillat is calling on the U.S. to work closely with Western nations as it develops artificial intelligence systems and guidelines.


Rishi Sunak to pitch UK as world leader of AI during meeting with Biden: report

FOX News

British Prime Minister Rishi Sunak is reportedly hoping to pitch the United Kingdom as a world leader in artificial intelligence governance during his meeting with President Joe Biden. But a post-Brexit U.K. has been locked out of key discussions between the United States and the European Union, such as the fourth Trade and Technology Council (TTC) meeting in Sweden. The White House said both the U.S. and EU recommitted to deepening cooperation on setting AI standards in line with democratic values and universal human rights, and to working together on emerging technologies "with like-minded partners." Politico reported in March that the Biden administration, meanwhile, has quietly rebuffed British officials' repeated requests for greater dialogue between Washington, D.C., and the U.K. regarding setting AI standards.


Strengthening international cooperation on artificial intelligence

#artificialintelligence

Artificial Intelligence (AI) is a potentially transformational technology that will have broad social, economic, national security, and geopolitical implications for the United States and the world. AI is not one particular technology but a general-purpose technology combining software and hardware in enabling technologies (machine learning, knowledge representation, and other forms of computerized approximation of human intelligence). This general-purpose nature means that AI could have wide-ranging economic impacts across manufacturing, transportation, health, education, and many other sectors. In 2018, the McKinsey Global Institute estimated that AI could add around 16 percent, or $13 trillion, to global output by 2030. Since then, COVID-19 has further accelerated the use of AI. While the United States is the world leader in AI, China is catching up fast (and may lead in some areas) and other governments are expanding their own AI capacity. Rather than a zero-sum game, many such efforts can be additive, benefiting global welfare. The U.S. can encourage and support AI efforts that seek to develop and compete on fair terms. Other national policies--China's above all--seek to erect barriers to free and open development of AI, appropriating the benefits for their national champions and applying AI as a geopolitical lever. Such policies could distort the development and benefits of AI for humanity, make the world less secure for the U.S. and its allies, and make markets less receptive to U.S. products and services. Fostering AI policies that support the development of beneficial, trustworthy, and robust artificial intelligence will require international engagement by the United States and cooperation among like-minded democracies that are leaders in artificial intelligence.


Will the UK be able to shape global AI standards?

#artificialintelligence

A new initiative to shape international standards for Artificial Intelligence (AI) was launched last week by the UK government, as part of its strategy to become a global AI power. The "AI Standards Hub" will focus on governance and guidance and falls under the National AI Strategy, which aims to increase Britain's contribution to the development of global AI technical standards. The Alan Turing Institute, the London-based data science and AI organisation, has been selected to lead the pilot with support from the British Standards Institution and the National Physical Laboratory. "The new AI Standards Hub will create practical tools for businesses, bring the UK's AI community together through a new online platform, and develop educational materials to help organisations develop and benefit from global standards," the government announced, adding that the move puts the country at the "forefront" of a rapidly developing industry. "On the face of it, the AI Standards Hub offers some substance to the government's claims of Britain being a tech power and paves the way for it to play a leadership role in shaping AI at the global level," London-based political risk analyst Mikhail Sebastian told TRT World.


Can new UK Hub shape global AI standards?

#artificialintelligence

Hot on the heels of the UK's National AI Strategy - launched in September last year - comes the AI Standards Hub, a new government initiative, proposed in the Strategy, which aims to shape global standards for the technology. Britain's Alan Turing Institute, the London-based AI and data science organization founded in 2015, will lead the pilot, with support from the British Standards Institution (the BSI) and the National Physical Laboratory, the UK's metrology institute. These are three august and widely respected bodies, backed by the Department for Digital, Culture, Media and Sport (DCMS) and the UK's Office for AI, which sits across DCMS and what is still called the Department for Business, Energy, and Industrial Strategy (BEIS), even though the Prime Minister scrapped the Industrial Strategy last year (arguably the one bit of government that had been working). That aside, the move adds some much-needed substance to Whitehall claims of world leadership in AI and the UK being a "science and technology superpower". It does this by seeking to focus the debate on standards and regulation at global scale.


Strengthening international cooperation on AI

#artificialintelligence

Since 2017, when Canada became the first country to adopt a national AI strategy, at least 60 countries have adopted some form of policy for artificial intelligence (AI). The prospect of an estimated boost of 16 percent, or US$13 trillion, to global output by 2030 has led to an unprecedented race to promote AI uptake across industry, consumer markets, and government services. Global corporate investment in AI reportedly reached US$60 billion in 2020 and is projected to more than double by 2025. At the same time, work on developing global standards for AI has led to significant developments in various international bodies. These encompass both the technical aspects of AI (in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE), among others) and the ethical and policy dimensions of responsible AI.


AI standards launched to help tackle problem of overhyped studies

The Guardian

The first international standards for the design and reporting of clinical trials involving artificial intelligence have been announced in a move experts hope will tackle the issue of overhyped studies and prevent harm to patients. While the possibility that AI could revolutionise healthcare has fuelled excitement, in particular around screening and diagnosis, researchers have previously warned that the field is strewn with poor-quality research. Now an international team of experts has launched a set of guidelines under which clinical trials involving AI will be expected to meet a stringent checklist of criteria before being published in top journals. The new standards are being simultaneously published in the BMJ, Nature Medicine and Lancet Digital Health, expanding on existing standards for clinical trials – put in place more than a decade ago for drugs, diagnostic tests, and other interventions – to make them more suitable for AI-based systems. Prof Alastair Denniston of the University of Birmingham, an expert in the use of AI in healthcare and a member of the team, said the guidelines were crucial to making sure AI systems were safe and effective for use in healthcare settings.


AI Standards: From Principles to Implementation - InfoGovANZ

#artificialintelligence

With the proliferation of AI principles worldwide, industry is faced with a new challenge: how to implement these AI principles? Since 2017, the international committee responsible for the standardization of AI (SC 42) has been tackling this challenge: it is developing standards covering both technical and organisational specifications to enable responsible and trustworthy AI. Forty-four countries are currently involved in the work of SC 42, and Australia plays an active role in the development of international AI standards, having formed standards committee IT-043 to be Australia's voice at SC 42. When it comes to AI, it is essential to provide for interoperability and global governance, which is why international AI standards have buy-in from key governments (such as China, the US and the EU). Australia has also identified AI standards as an important national priority.