draft regulation
The Download: meet the longevity obsessives, and how China's regulating AI
Earlier this month, I traveled to Montenegro for a gathering of longevity enthusiasts, people interested in extending human life through various biotechnology approaches. All the attendees were super friendly, and the sense of optimism was palpable. They're all confident we'll be able to find a way to slow or reverse aging--and they have a bold plan to speed up progress. Around 780 of these people have created a "pop-up city" that hopes to circumvent the traditional process of clinical trials. They want to create an independent state where like-minded innovators can work together in an all-new jurisdiction that gives them free rein to self-experiment with unproven drugs.
- Asia > China (0.52)
- Europe > Montenegro (0.28)
- Law (0.74)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.62)
China isn't waiting to set down rules on generative AI
Last week, I went on the CBC News podcast "Nothing Is Foreign" to talk about the draft regulation--and what it means for the Chinese government to take such quick action on a still-very-new technology. As I said in the podcast, I see the draft regulation as a mixture of sensible restrictions on AI risks and a continuation of the Chinese government's strong tradition of aggressive intervention in the tech industry. Many of the clauses in the draft regulation are principles that AI critics are advocating for in the West: data used to train generative AI models shouldn't infringe on intellectual property or privacy; algorithms shouldn't discriminate against users on the basis of race, ethnicity, age, gender, or other attributes; AI companies should be transparent about how they obtained training data and how they hired humans to label it. At the same time, there are rules that other countries would likely balk at. The government is asking that people who use these generative AI tools register with their real identity--just as on any social platform in China.
- Information Technology > Security & Privacy (0.34)
- Government > Regional Government > Asia Government > China Government (0.31)
Ethical Use of AI in Insurance Modeling and Decision-Making
With the increased availability of next-generation technology and data mining tools, insurance companies' use of external consumer data sets and artificial intelligence (AI)- and machine learning (ML)-enabled analytical models is rapidly expanding and accelerating. Insurers have initially targeted key business areas such as underwriting, pricing, fraud detection, marketing distribution and claims management to leverage technical innovations to realize enhanced risk management, revenue growth and improved profitability. At the same time, regulators worldwide are intensifying their focus on the governance and fairness challenges presented by these complex, highly innovative tools – specifically, the potential for unintended bias against protected classes of people. In the United States, the Colorado Division of Insurance recently issued a first-in-the-nation draft regulation to support the implementation of a 2021 law passed by the state's legislature.[1] This law (SB21-169) prohibits life insurers from using external consumer data and information sources (ECDIS), or employing algorithms and models that use ECDIS, where the resulting impact of such use is unfair discrimination against consumers on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.[2]
- Law (1.00)
- Banking & Finance > Insurance (1.00)
- Information Technology > Security & Privacy (0.91)
- Government > Regional Government > North America Government > United States Government (0.76)
California FEHC Proposes Sweeping Regulations Regarding Use of Artificial Intelligence and Machine Learning in Connection With Employment Decision Making
The California Fair Employment and Housing Council (FEHC) recently took a major step toward regulating the use of artificial intelligence (AI) and machine learning (ML) in connection with employment decision-making. On March 15, 2022, the FEHC published Draft Modifications to Employment Regulations Regarding Automated-Decision Systems, which specifically incorporate the use of "automated-decision systems" into existing rules regulating employment and hiring practices in California. The draft regulations seek to make unlawful the use of automated-decision systems that "screen out or tend to screen out" applicants or employees (or classes of applicants or employees) on the basis of a protected characteristic, unless shown to be job-related and consistent with business necessity. The draft regulations also contain significant and burdensome recordkeeping requirements. The proposed regulations will be subject to a 45-day public comment period (which has not yet commenced) before the FEHC can move toward a final rulemaking.
California Draft Regulations Would Curb Employer Use of Artificial Intelligence
A statement from Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows in late October 2021 announced the employment agency's launch of an initiative to ensure artificial intelligence (AI) and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws. "The EEOC is keenly aware that [artificial intelligence and algorithmic decision-making] tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination," Burrows said. The EEOC is not alone in its concerns about the use of AI, machine learning and related technologies in employment decision-making activities. On March 25, 2022, California's Fair Employment and Housing Council discussed draft regulations regarding automated-decision systems.
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Asia > China (0.05)
Cryptoassets and artificial intelligence in EU regulation - FinTech Perspectives
Our FinTech Perspectives series will explore the content, potential, and shortcomings of this important field of European legislation on digital transformation in the financial services sector, as well as the areas that require further clarification. Within its overarching plan to Shape Europe's Digital Future, the European Commission is determined to make the lead-up to 2030 Europe's Digital Decade. This ambition has, aside from activities in other fields, resulted in an outpouring of legislative initiatives. The great majority of these initiatives aim to get an ever-better handle on our digital reality to date. For all of these, the European legislator needs to reconcile its desire to support technology and business innovation on the one side with the necessary protection for individuals and businesses in the EU on the other side. Getting this balance right is vital for the success of each of those initiatives.
The Missing Link in Europe's AI Strategy
BRUSSELS – The European Commission's strategy for artificial intelligence focuses on the need to establish "trust" and "excellence." Recently proposed AI regulation, the Commission argues, will create trust in this new technology by addressing its risks, while excellence will follow from EU member states investing and innovating. With these two factors accounted for, Europe's AI uptake supposedly will accelerate. Unfortunately, protecting EU citizens' fundamental rights, which should be the AI regulation's core objective, appears to be a secondary consideration; and protections for workers' rights don't seem to have been considered at all. AI is a flagship component of Europe's digital agenda, and the Commission's legislative package is fundamental to the proposed single market for data.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.35)
Debate continues over the pros and cons of regulating artificial intelligence
What are the issues of most concern for businesses in the EU Commission's recently published AI Act proposals? Our virtual gathering included representatives from the UK, Netherlands and USA, stretching across the automotive, energy, education, professional services and tech sectors. As with our first AI roundtable, the discussion ranged far and wide. A notable difficulty with the Commission's draft regulation on AI (as proposed, its "AI Act") is that it assumes that an end-to-end "provider" of an AI system can be identified and fixed with liability. The AI Act defines such service providers as the person or organisation that developed the system or had it developed.
- North America > United States (0.36)
- Europe > Netherlands (0.25)
- Law > Statutes (1.00)
- Government (1.00)
Breaking Down the World's First Proposal for Regulating Artificial Intelligence
Today, artificial intelligence and machine learning tools are ubiquitous across sectors--used for everything from determining an individual's creditworthiness to enabling law enforcement surveillance--and rapidly evolving. Despite this, few nations have rules in place to oversee these systems or mitigate the harms they could cause. On April 21, the European Commission released a draft of its proposed AI regulation, the world's first legal framework addressing the risks posed by artificial intelligence. The draft regulation makes some notable strides, prohibiting certain harmful AI systems outright and reining in harmful uses of some high-risk algorithmic systems. However, the Commission's proposed regulation contains gaps that, if not addressed, could limit its effectiveness in holding some of the biggest developers and deployers of algorithmic systems accountable.
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.37)
EU unveils artificial intelligence rules to temper Big Brother fears
BRUSSELS (AFP) - The European Union unveils a plan on Wednesday (April 21) to regulate the sprawling field of artificial intelligence, aimed at making Europe a leader in the new tech revolution while reassuring the public against Big Brother-like abuses. "Whether it's precision farming in agriculture, more accurate medical diagnosis or safe autonomous driving, artificial intelligence will open up new worlds for us. But this world also needs rules," European Commission President Ursula von der Leyen said in her State of the Union speech in September last year. "We want a set of rules that puts people at the centre." The Commission, the EU's executive arm, has been preparing the proposal for over a year, and a debate involving the European Parliament and the 27 member states is expected to go on for months more before a definitive text is in force.
- North America > United States > California (0.06)
- Europe > France (0.06)
- Asia > China (0.06)
- Law (1.00)
- Government > Regional Government > Europe Government (0.57)
- Food & Agriculture > Agriculture (0.57)