Government & the Courts


Disability Bias in AI Hiring Tools Targeted in US Guidance (1)

#artificialintelligence

Employers have a responsibility to inspect artificial intelligence tools for disability bias and should have plans to provide reasonable accommodations, the Equal Employment Opportunity Commission and Justice Department said in guidance documents. The guidance released Thursday is the first from the federal government on the use of AI hiring tools that focuses on their impact on people with disabilities. The guidance also seeks to inform workers of their right to inquire about a company's use of AI and to request accommodations, the agencies said. "Today we are sounding an alarm regarding the dangers of blind reliance on AI and other technologies that are increasingly used by employers," Assistant Attorney General Kristen Clarke told reporters. The DOJ enforces disability discrimination laws with respect to state and local government employers, while the EEOC enforces such laws for private-sector and federal employers.


La veille de la cybersécurité

#artificialintelligence

Criminals are getting smarter, and the healthcare industry is no exception. In 2021 alone, the Department of Justice (DOJ) recovered more than $5.6 billion from civil fraud and false claims cases. This is the DOJ's biggest haul since 2014, but a drop in the bucket compared to the estimated $380 billion lost every year to fraud, waste, and abuse. These numbers add up to higher premiums and out-of-pocket expenses for consumers, as well as reduced benefits or coverage. What's more, relaxed telehealth mandates put in place during the COVID-19 pandemic, the increased digitization of health, and the emergence of telehealth platforms have made it easier than ever for fraudsters to operate, all of which is contributing to a growing problem.


8 Women in AI Who Are Striving to Humanize the World - KDnuggets

#artificialintelligence

Editor's note: This article was originally published on March 8, 2021. Wired reported in 2018 that a gender bias exists in AI, finding that only 12% of AI researchers are women. When I started my career as a Data Analyst, Data Science engineer positions were not widely available in Ukraine. Self-education and getting acquainted with ML algorithms took me some time and a lot of effort. Nowadays, I work as an AI engineer at MobiDev, and the more experience I get, the more willing I am to share my experiences with people in my articles and webinars.


Supreme Court reinstates Trump-era water rule, for now

FOX News

The Supreme Court on Wednesday reinstated for now a Trump-era rule that curtails the power of states and Native American tribes to block pipelines and other energy projects that can pollute rivers, streams and other waterways. In a decision that split the court 5-4, the justices agreed to halt a lower court judge's order throwing out the rule. The high court's action does not interfere with the Biden administration's plan to rewrite the rule.


Saudi launches first artificial intelligence run virtual court

#artificialintelligence

Riyadh: The Kingdom of Saudi Arabia (KSA) has launched the first of its kind virtual court that operates in a fully automated manner without human intervention, the Saudi Press Agency (SPA) reported. The virtual enforcement court was inaugurated on Sunday by Minister of Justice Walid Al-Samaani. The virtual court shortens litigation procedures from twelve steps to only two, without human intervention, from the submission of the application to the issuance of the final verdict for electronic execution bonds documented through the Nafith platform. By making these services available electronically through the portal, the virtual court eliminates seven in-person visits per request. The project establishes the use of artificial intelligence techniques in judicial facilities to achieve the goals of the justice system in keeping with Saudi Vision 2030.


Council of Europe introductory handbook on Artificial Intelligence and Human Rights

#artificialintelligence

Turning ethical Artificial Intelligence into reality means assessing the risks of AI in context, particularly its impact on civil and social rights, and then, depending on the assessed risk, defining standards or regulating the ethical design, development and implementation of algorithmic systems. This is the aim of the late-2021 introductory handbook by the Council of Europe and the Alan Turing Institute, "Artificial Intelligence, Human Rights, Democracy and the Rule of Law: A Primer". A key initiative in this process was the feasibility study prepared and approved in December by the Council of Europe's Ad Hoc Committee on Artificial Intelligence (CAHAI), which explores options for an international legal response based on Council of Europe standards in the fields of artificial intelligence, rights, democracy and the rule of law. It proposes nine principles and priorities suited to the new challenges posed by the design, development and deployment of Artificial Intelligence systems. When codified into law, these principles and priorities create a set of interconnected rights and obligations that work to ensure that the design and use of artificial intelligence technologies conform to the values of human rights, democracy and the rule of law. The key question is whether there are responses to the specific risks and opportunities presented by AI systems that can and should be addressed through binding and non-binding international legal instruments, through the agency of the Council of Europe, which is the guardian of the European Convention on Human Rights, Convention 108 on the protection of personal data, and the European Social Charter.


Can Artificial Intelligence, Machine Learning put judiciary on the fast track?

#artificialintelligence

Can artificial intelligence (AI) be used in judicial processes to reduce the pendency of cases? In response to this unstarred question in the Lok Sabha during the first part of the Budget session of Parliament, Law Minister Kiren Rijiju said that while implementing phase two of the eCourts project, under operation since 2015, a need was felt to adopt new, cutting-edge technologies of Machine Learning (ML) and Artificial Intelligence (AI) to increase the efficiency of the justice delivery system. "To explore the use of AI in judicial domain, the Supreme Court of India has constituted Artificial Intelligence Committee which has mainly identified application of AI technology in Translation of judicial documents; Legal research assistance and Process automation," Mr. Rijiju stated. Several law firms are now keen to try out new technologies for quick reference to judicial precedents and pronouncements on cases with similar legal issues at stake. Mumbai-based Riverus, a "legal tech" firm, has developed ML applications that peruse troves of cases, "understand" them, and parse cases that are similar in content -- very much like a human expert would do -- in a fraction of the time.


Texas sues Meta, saying it misused facial recognition data

NPR Technology

Texas sued Facebook parent company Meta for exploiting the biometric data of millions of people in the state - including those who used the platform and those who did not. The company, according to a suit filed by state Attorney General Ken Paxton, violated state privacy laws and should be responsible for billions of dollars in damages. The suit involves Facebook's "tag suggestions" feature, which the company ended last year, that used facial recognition to encourage users to link the photo to a friend's profile.


State of AI Ethics Report (Volume 6, February 2022)

arXiv.org Artificial Intelligence

This report from the Montreal AI Ethics Institute (MAIEI) covers the most salient progress in research and reporting over the second half of 2021 in the field of AI ethics. Particular emphasis is placed on an "Analysis of the AI Ecosystem", "Privacy", "Bias", "Social Media and Problematic Information", "AI Design and Governance", "Laws and Regulations", "Trends", and other areas covered in the "Outside the Boxes" section. The two AI spotlights feature application pieces on "Constructing and Deconstructing Gender with AI-Generated Art" as well as "Will an Artificial Intellichef be Cooking Your Next Meal at a Michelin Star Restaurant?". Given MAIEI's mission to democratize AI, submissions from external collaborators are also featured, such as pieces on the "Challenges of AI Development in Vietnam: Funding, Talent and Ethics" and using "Representation and Imagination for Preventing AI Harms". The report is a comprehensive overview of what the key issues in the field of AI ethics were in 2021, what trends are emergent, what gaps exist, and a peek into what to expect from the field of AI ethics in 2022. It is a resource for researchers and practitioners alike to set their research and development agendas and make contributions to the field of AI ethics.


A Coupled CP Decomposition for Principal Components Analysis of Symmetric Networks

arXiv.org Machine Learning

In a number of application domains, one observes a sequence of network data; for example, repeated measurements of interactions between users on social media platforms, financial correlation networks over time, or networks across subjects, as in multi-subject studies of brain connectivity. One way to analyze such data is to stack the networks into a third-order array, or tensor. We propose a principal components analysis (PCA) framework for network sequence data, based on a novel decomposition for semi-symmetric tensors. We derive efficient algorithms for computing our proposed "Coupled CP" decomposition and establish estimation consistency of our approach under an analogue of the spiked covariance model, with rates matching the matrix case up to a logarithmic term. Our framework inherits many of the strengths of classical PCA and is suitable for a wide range of unsupervised learning tasks, including identifying principal networks, isolating meaningful changepoints or outliers across observations, and characterizing the "variability network" of the most varying edges. Finally, we demonstrate the effectiveness of our proposal on simulated data and on examples from political science and financial economics. The proof techniques used to establish our main consistency results are surprisingly straightforward and may find use in a variety of other matrix and tensor decomposition problems.
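The core idea, stacking symmetric networks into a third-order tensor and reading the factor matrices as principal networks and per-observation scores, can be illustrated with a short sketch. The snippet below is not the authors' Coupled CP algorithm: it uses tensorly's generic parafac (CP) decomposition on simulated symmetric adjacency matrices as a stand-in, and all variable names and data are illustrative assumptions.

```python
# Minimal sketch of a PCA-style analysis of a sequence of symmetric networks
# via a CP decomposition. Generic parafac is used here as a stand-in for the
# paper's specialized "Coupled CP" decomposition for semi-symmetric tensors.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)

# Simulate T symmetric n x n networks (e.g., correlation networks over time).
n, T, R = 20, 30, 3
networks = [(A + A.T) / 2 for A in rng.normal(size=(T, n, n))]

# Stack the networks into a third-order tensor of shape (n, n, T).
tensor = tl.tensor(np.stack(networks, axis=-1))

# Rank-R CP decomposition. For a semi-symmetric tensor, factors[0] and
# factors[1] should roughly agree; the paper's Coupled CP method couples
# these node modes explicitly, which plain parafac does not.
weights, factors = parafac(tensor, rank=R, n_iter_max=200, random_state=0)

node_loadings = factors[0]    # (n, R): node loadings defining each "principal network"
network_scores = factors[2]   # (T, R): how strongly each observed network expresses each component

print(node_loadings.shape, network_scores.shape)
```

The third-mode scores play the role of PC scores across observations, so changepoints or outlying networks in the sequence can be screened by inspecting network_scores, in the spirit of the unsupervised tasks described in the abstract.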