regulation


As the world grays, Japan's aging market showcases high-tech senior care

The Japan Times

Six years ago, Atsushi Nakanishi launched Triple W with nothing but the seed of an idea and an overwhelming passion to realize it. Today, the startup is the creator and seller of DFree -- the world's first wearable device for urinary incontinence. The tiny, noninvasive device uses ultrasound to monitor the volume of urine in the user's bladder in real time. When the bladder reaches its threshold, DFree sends an alert to the user's smartphone to tell them it is time to go to the bathroom. Nakanishi credits the ground-breaking product to a eureka moment in 2013.
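
As a purely hypothetical sketch of the alert logic described above (the function names, device interface, and threshold value are invented for illustration; they are not DFree's actual software):

```python
# Hypothetical illustration of threshold-based alerting, as described in the
# article: monitor bladder fill level and notify the user's phone when it
# crosses a threshold. Names and values are invented, not DFree's real API.
ALERT_THRESHOLD_PCT = 70.0  # illustrative fill level at which to alert


def read_bladder_fill_pct() -> float:
    """Stand-in for the ultrasound sensor's estimate of bladder fill (0-100%)."""
    raise NotImplementedError("device-specific measurement")


def send_smartphone_alert(message: str) -> None:
    """Stand-in for the push notification sent to the paired smartphone app."""
    print(f"ALERT: {message}")


def check_and_alert() -> None:
    fill = read_bladder_fill_pct()
    if fill >= ALERT_THRESHOLD_PCT:
        send_smartphone_alert("Bladder is nearly full -- time to go to the bathroom.")
```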


Trust is a must: why business leaders should embrace explainable AI - Raconteur

#artificialintelligence

"Trust is a must," she said. "The EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide." Any fast-moving technology is likely to create mistrust, but Vestager and her colleagues decreed that those in power should do more to tame AI, partly by using such systems more responsibly and being clearer about how these work. The landmark legislation – designed to "guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation" – encourages firms to embrace so-called explainable AI.


Massachusetts Pioneers Rules For Police Use Of Facial Recognition Tech

NPR Technology

Surveillance cameras, like the one here in Boston, are used throughout Massachusetts. The state now regulates how police use facial recognition technology. Massachusetts lawmakers passed one of the first statewide restrictions on facial recognition as part of a sweeping police reform law.


The EU's new Regulation on Artificial Intelligence

#artificialintelligence

The Commission proposes a risk-based approach in which the level of risk presented by an AI system determines the corresponding compliance requirements. The risk categories are (i) unacceptable risk (these AI systems are prohibited); (ii) high risk; (iii) limited risk; and (iv) minimal risk.
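
As an illustration of how such a tiered scheme might be represented in code (a sketch only: the tier names come from the proposal, but the example obligations attached to each tier below are paraphrased for illustration rather than quoted from the legal text):

```python
from enum import Enum


class AIRiskTier(Enum):
    """The four risk tiers named in the Commission's proposal."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict compliance requirements
    LIMITED = "limited"            # lighter, transparency-style obligations
    MINIMAL = "minimal"            # essentially unregulated


# Illustrative summaries of the obligations each tier attracts (paraphrased).
OBLIGATIONS = {
    AIRiskTier.UNACCEPTABLE: "Prohibited: the system may not be placed on the market.",
    AIRiskTier.HIGH: "Conformity assessment, risk management, documentation, human oversight.",
    AIRiskTier.LIMITED: "Transparency duties, e.g. disclosing that users are interacting with an AI.",
    AIRiskTier.MINIMAL: "No specific new obligations beyond existing law.",
}


def compliance_summary(tier: AIRiskTier) -> str:
    """Return the illustrative compliance summary for a given risk tier."""
    return OBLIGATIONS[tier]


print(compliance_summary(AIRiskTier.HIGH))
```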


It's time to train professional AI risk managers

#artificialintelligence

Last year I wrote about how AI regulations will lead to the emergence of professional AI risk managers. This has already happened in the financial sector, where regulations patterned after the Basel rules created a financial risk management profession. Last week, the EU published a 108-page proposal to regulate AI systems. This will lead to the emergence of professional AI risk managers. The proposal doesn't cover all AI systems, just those deemed high-risk, and the regulation would vary depending on how risky a specific AI system is: since systems with unacceptable risks would be banned outright, most of the regulation concerns high-risk AI systems.


The European Union Proposes New Legal Framework for Artificial Intelligence

#artificialintelligence

On 21 April 2021, the European Commission proposed a new, transformative legal framework to govern the use of artificial intelligence (AI) in the European Union. The proposal adopts a risk-based approach whereby uses of artificial intelligence are categorised and restricted according to whether they pose an unacceptable, high, or low risk to human safety and fundamental rights. The proposal is widely considered one of the first of its kind in the world and would, if passed, have profound and far-reaching consequences for organisations that develop or use technologies incorporating artificial intelligence. It has been in the making since 2017, when EU legislators adopted a resolution and a report with recommendations to the Commission on Civil Law Rules on Robotics. In 2020, the European Commission published a white paper on artificial intelligence.


Shapash 1.3.2, announcing new features for more auditable AI

#artificialintelligence

Shapash is a Python library released by the MAIF data team in January 2021 to make machine learning models understandable by everyone. Shapash currently uses a Shap backend to compute local contributions. You will find the general presentation of Shapash in this article. Version 1.3.2 is now available, and Shapash now allows data scientists to document each model they release into production. Within a few lines of code, they can gather in an HTML report all the information about the model (and its associated performance), the data used, the learning strategy, … The report is designed to be easily shared with a Data Protection Officer, an internal audit department, a risk control department, a compliance department, or anyone who wants to understand the work.
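
A minimal sketch of what that might look like, assuming the Shapash 1.3-era API (SmartExplainer.compile followed by generate_report); the toy model, the file names, and the project_info.yml metadata file are placeholders, and argument names may differ between Shapash versions:

```python
# Sketch: producing a shareable HTML documentation report with Shapash.
# Assumes the Shapash >=1.3 API; check the Shapash docs for the exact signature.
from shapash.explainer.smart_explainer import SmartExplainer
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Toy data and model standing in for the model being released to production.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

# Compile the explainer (Shapash uses a Shap backend to compute local contributions).
xpl = SmartExplainer()
xpl.compile(x=X_test, model=model)

# Generate the standalone HTML report to share with auditors or a DPO.
# 'project_info.yml' is a placeholder metadata file (authors, purpose, data sources).
xpl.generate_report(
    output_file="model_report.html",
    project_info_file="project_info.yml",
    x_train=X_train,
    y_train=y_train,
    y_test=y_test,
    title_story="Diabetes regression -- model documentation",
)
```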


The European Union Is Proposing Regulations For Artificial Intelligence

#artificialintelligence

Today, the European Commission proposed regulations for the European Union (EU). The proposed regulations are discussed on the EU site. They are of interest not only for facial recognition but as the start of what will be increasing regulation of many aspects of artificial intelligence (AI). It should come as no surprise that facial recognition is the first major aspect of AI to meet with government regulation. The technology is highly intrusive and can directly affect the lives of all citizens in many ways.


First ship controlled by artificial intelligence prepares for maiden voyage

#artificialintelligence

The "Mayflower 400", the world's first intelligent ship, bobs gently in a light swell as it stops its engines in Plymouth Sound, off England's southwest coast, before self-activating a hydrophone designed to listen to whales. The 50-foot (15-metre) trimaran, which weighs nine tonnes and navigates with complete autonomy, is preparing for a transatlantic voyage. On its journey, the vessel, covered in solar panels, will study marine pollution and analyse plastic in the water, as well as track aquatic mammals. Eighty per cent of the underwater world remains unexplored. Brett Phaneuf, the co-founder of the charity ProMare and the mastermind behind the Mayflower project, said the ocean exerts "the most powerful force" on the global climate.


The EU's proposed AI laws would regulate robot surgeons but not the military

Engadget

While US lawmakers muddle through yet another congressional hearing on the dangers posed by algorithmic bias in social media, the European Commission (essentially the executive branch of the EU) has unveiled a sweeping regulatory framework that, if adopted, could have global implications for the future of AI development. After extensive meetings with advocacy groups and other stakeholders, the EC released both the first European Strategy on AI and the Coordinated Plan on AI in 2018. Those were followed in 2019 by the Guidelines for Trustworthy AI, then again in 2020 by the Commission's White Paper on AI and Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. Just as with its ambitious General Data Protection Regulation (GDPR) in 2018, the Commission is seeking to establish a basic level of public trust in the technology based on stringent user and data privacy protections, as well as protections against its potential misuse. "Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being. Rules for artificial intelligence available in the Union market or otherwise affecting Union citizens should thus put people at the centre (be human-centric), so that they can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights," the Commission wrote in its draft regulations.