In what could be a harbinger of the future regulation of artificial intelligence (AI) in the United States, the European Commission recently published its proposal for the regulation of AI systems. The proposal is part of the European Commission's larger European strategy for data, which seeks to "defend and promote European values and rights in how we design, make and deploy technology in the economy." To this end, the proposed regulation attempts to address the potential risks that AI systems pose to the health, safety, and fundamental rights of Europeans. Under the proposed regulation, AI systems presenting the least risk would be subject to minimal disclosure requirements, while at the other end of the spectrum AI systems that exploit human vulnerabilities, along with government-administered biometric surveillance systems, would be prohibited outright except under certain circumstances. In the middle, "high-risk" AI systems would be subject to detailed compliance reviews.
Firms can expect to hear soon, in a white paper to be published by the Office for AI, whether general AI-specific regulation will be introduced in the UK. EU lawmakers are currently scrutinising separate plans for a draft EU AI Act. Both developments are expected to focus on issues such as transparency, explainability and governance. However, any new rules would apply only to technology that fits within the definition of AI adopted in the new legislation or regulation. Figuring out whether the technology firms use will be in scope is therefore an important preliminary task for financial services businesses.
As governments around the world consider how to regulate AI, the European Union is planning first-of-its-kind legislation that would put strict limits on the technology. On Wednesday, the European Commission, the EU's executive branch, detailed a regulatory approach that calls for a four-tier system grouping AI software into separate risk categories and applying an appropriate level of regulation to each. At the top would be systems that pose an "unacceptable" risk to people's rights and safety; the EU would ban these outright under the Commission's proposed legislation. An example of software in this category is any AI that would allow governments or companies to implement social scoring systems.
Today, the European Commission proposed regulations for the European Union (EU). The proposed regulations are discussed on the EU site. They are of interest not only for facial recognition but as the start of what will be increasing regulation of many aspects of artificial intelligence (AI). It should come as no surprise that facial recognition is the first major aspect of AI to meet with government regulation: the technology is highly intrusive and can directly affect the lives of all citizens in many ways.
California is getting closer to defining what a good school should look like. But how will parents know whether their school measures up? On Thursday, the federal government released draft regulations for the Every Student Succeeds Act's provisions on school accountability. Under the guidelines, states have to tell parents how their schools are doing on a range of factors -- and also give each school an overall rating. The regulations allow that rating to take different forms, including a number, a grade, or a category.