Standardization Trends on Safety and Trustworthiness Technology for Advanced AI
Artificial intelligence (AI) technology has evolved rapidly over the past decade. With new machine-learning models, richer data sources, and increased computational power, researchers have developed AI systems that can understand language, recognize and create images and video, write programs, and make scientific inferences. Recent advances have moved beyond traditional narrow-domain AI toward systems that approximate, or in some respects exceed, artificial general intelligence (AGI), built on large language models (LLMs) or foundation models (FMs). These advanced AI systems perform at or above human level in complex problem solving, sophisticated natural language processing, and multi-domain tasks, and have the potential to transform a wide range of fields, including science, industry, healthcare, and education. In certain task domains, such as Go, strategy games, and protein structure prediction, they already surpass human capabilities [1] [2]. For these reasons, concerns about the safety and trustworthiness of advanced AI are growing rapidly alongside its development. The increasing complexity and autonomy of advanced AI systems raise concerns about new forms of safety and security risk, such as (1) uncontrollability, (2) conflicts with human values in ethical decision-making, (3) long-term socioeconomic impacts, and (4) the difficulty of assuring safety. In response, international standardization efforts are underway to ensure the safety and trustworthiness of advanced AI: by developing internationally agreed technical standards, these efforts aim to apply consistent safety and trustworthiness criteria to the development and use of advanced AI systems and to minimize potential risks.
Essential guidance on AI-related risk management
As the uses of artificial intelligence (AI) continue to expand, there is a growing need for effective risk management to deal with issues ranging from the technical, such as algorithm failures, to the ethical, such as bias in decision-making. A new ISO/IEC standard provides essential guidance on risk management for organizations of all sizes and types that use AI in their systems or processes. ISO/IEC 23894 shows users how to manage AI-related risks effectively in order to achieve objectives and improve performance. "While AI systems are similar to traditional IT systems in many ways, they also present new aspects such as their ability to learn," says Wael William Diab, who chairs the joint IEC and ISO committee that develops AI standards. "SC 42 took the novel approach of developing a framework that employs well-established techniques around risk management. ISO/IEC 23894 provides a holistic and proactive approach to managing AI-related risks with the goal of enabling users to manage the risks effectively to harness the full potential of AI."
ISO/IEC AI meeting discusses sustainability, ethics and emerging regulation
Around 150 delegates from 50 participating countries took part in the recent plenary meeting of the ISO and IEC joint committee on artificial intelligence (ISO/IEC JTC 1/SC 42). At the meeting, delegates heard from the European Commission (EC) and approved a number of resolutions. The keynote speaker was Salvatore Scalzo, an EC Policy and Legal Officer in the field of AI. He works in the Directorate‑General for Communications Networks, Content and Technology, which develops and implements digital policies for Europe. Mr. Scalzo said that the EC was taking a strong interest in ISO/IEC AI standards as work continued on a future AI act.
Artificial intelligence: getting ML classification models right
"Classification: method of structuring a defined type of item (objects or documents) into classes and subclasses in accordance with their characteristics." Classification is about categorizing data sets into classes. A simple example is an email spam filter, which classifies incoming messages as either spam or not spam. The classifier needs examples of 'spam' and 'not spam' emails to learn how to perform the task by recognizing patterns. The spam filter will almost certainly make mistakes, which can only be ironed out by regularly evaluating its performance.
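The idea above can be sketched in a few lines of Python. This is a deliberately minimal bag-of-words classifier, not a production spam filter: the training messages and labels are hypothetical examples invented for illustration, and the "model" is nothing more than per-class word counts. It does show the two steps the passage describes, learning from labelled examples and then evaluating performance on held-out messages.

```python
from collections import Counter

# Tiny labelled training set (hypothetical messages, for illustration only).
train = [
    ("win a free prize now", "spam"),
    ("cheap meds free offer", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "not spam"),
    ("lunch tomorrow with the team", "not spam"),
    ("project report attached", "not spam"),
]

# "Training": learn simple per-class word counts (a bare-bones bag-of-words model).
counts = {"spam": Counter(), "not spam": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(text):
    """Label a message by which class's vocabulary it overlaps with more."""
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

# Evaluation: the filter will make mistakes, which is why its performance
# must be checked regularly against labelled data it has not seen.
test = [("free prize offer", "spam"),
        ("monday team meeting", "not spam")]
correct = sum(classify(t) == y for t, y in test)
print(f"accuracy: {correct}/{len(test)}")
```

A real filter would replace the word counts with a trained statistical model and a much larger corpus, but the workflow (labelled examples in, predictions out, accuracy measured on held-out data) is the same.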
Using AI to fight climate change
As the world becomes more digital, artificial intelligence (AI) is becoming a natural part of our daily lives. When travelling to a new destination, for example, one can easily pull up a phone and search for directions in a maps app. With the help of AI, the app can provide a live traffic feed and inform users of the exact distance and predicted travel time to their destination. AI also affects every industry and has revolutionized critical fields like medicine, where it supports medical professionals in diagnosis, treatment, and analysis. For example, AI is used to monitor the vital signs of patients receiving critical care and to alert clinicians if certain risk factors increase.
Computational approaches for AI systems
Darwin designed his tongue-in-cheek cost-benefit analysis to help himself make a choice. In that respect, it was an algorithm, as are recipes, business processes and just about any other instructions that we use in our daily lives either to solve problems or to complete tasks. Nowadays, algorithms are programmed into devices to automate jobs that past generations had to do by hand. We call it artificial intelligence and it has moved into the mainstream. All this has been made possible by recent improvements in software and hardware, which have boosted computational performance, data storage capabilities and network bandwidth. AI technologies are driving the digital transformation of industry and society by satisfying demands for more intelligent services and analytics.
The CPSC Digs In On Artificial Intelligence - Consumer Protection - United States
American households are increasingly connected internally through the use of artificially intelligent appliances.1 But who regulates the safety of those dishwashers, microwaves, refrigerators, and vacuums powered by artificial intelligence (AI)? On March 2, 2021, at a virtual forum attended by stakeholders across the entire industry, the Consumer Product Safety Commission (CPSC) reminded us all that it has the last say on regulating AI and machine learning consumer product safety. The CPSC is an independent agency comprised of five commissioners who are nominated by the president and confirmed by the Senate to serve staggered seven-year terms. With the Biden administration's shift away from the deregulation agenda of the prior administration and three potential opportunities to staff the commission, consumer product manufacturers, distributors, and retailers should expect increased scrutiny and enforcement.2
AI Standards: From Principles to Implementation - InfoGovANZ
With the proliferation of AI principles worldwide1, industry is faced with a new challenge: how to implement these AI principles? Since 2017, the international committee responsible for the standardization of AI (SC 42) has been tackling this challenge: it is developing standards covering both technical and organisational specifications to enable responsible and trustworthy AI. Forty-four countries are currently involved in the work of SC 42, and Australia plays an active role in the development of international AI standards, having formed standards committee IT-043 to be Australia's voice at SC 42. When it comes to AI, it is essential to provide for interoperability and global governance, which is why international AI standards have the buy-in of key governments (such as China, the US and the EU). Australia has also identified AI standards as an important national priority.
Who needs AI? IEC e-tech Issue 01/2019
It is difficult not to smile when reading the Wall Street Journal report about a guest in a robot-staffed hotel in Japan who was woken every few hours by the in-room assistant asking him to repeat his command. The hotel manager finally realized that heavy snoring by the guest had triggered the robot's voice recognition system. For every clanger, though, there is also a success story. For example, DeepMind's AI programme AlphaStar has for the first time beaten human video game players at StarCraft II, winning 10 games in a row. AlphaStar's success demonstrated the ability of AI programmes, in this case based on a reinforcement learning algorithm, to make quick decisions without any errors while operating in a complex environment.