Litigation


Law and Justice Powered by Artificial Intelligence? It's Already a Reality

#artificialintelligence

The AI wave in law is not coming, it's already here … and it's already transforming law firms. Change is happening faster, and more often, than we predict. Consider: China is launching an online AI arbitrator this year, and the United Nations has been working for four years to improve access to justice through AI judges.


Increasing Liabilities of AI

#artificialintelligence

Among the unfortunate is Hector Hernandez-Garcia, who, along with his wife and newborn son, became temporarily homeless after being mistakenly profiled by such an algorithm. Hernandez-Garcia sued; the company settled. Another precedent is the Michigan Integrated Data Automated System, used by the state to monitor filings for unemployment benefits, which allegedly falsely accused thousands of residents of fraud. Class action lawsuits have been filed against the state, alleging a myriad of problems with the system and demonstrating how automated systems can inflict harms that are hard to detect. Furthermore, a recent lawsuit against Clearview AI, filed in Illinois near the end of May by the ACLU (American Civil Liberties Union) and a leading privacy class action law firm, alleges that the company's algorithms breached the state's Biometric Information Privacy Act.


The term 'ethical AI' is finally starting to mean something

#artificialintelligence

Earlier this year, the independent research organisation of which I am the Director, the London-based Ada Lovelace Institute, hosted a panel at the world's largest AI conference, CogX, called The Ethics Panel to End All Ethics Panels. The title was both a tongue-in-cheek bit of self-promotion and a nod to a very real need to put to bed the seemingly endless offering of panels, think-pieces, and government reports preoccupied with ruminating on the abstract ethical questions posed by AI and new data-driven technologies. We had grown impatient with conceptual debates and high-level principles. And we were not alone. This third wave of ethical AI supersedes the two waves that came before it: the first, defined by principles and dominated by philosophers, and the second, led by computer scientists and geared towards technical fixes. Third-wave ethical AI has seen a Dutch court shut down an algorithmic fraud detection system, students in the UK take to the streets to protest against algorithmically decided exam results, and US companies voluntarily restrict their sales of facial recognition technology.


Will AI Ever Enter the Courtroom?

#artificialintelligence

In 2017, U.S. state trial courts received an astronomical 83 million court cases. The Chinese civil law system sees over 19 million cases per year, with only 120,000 judges to rule on them. In the OECD area (consisting of most high-income economies), the average length of civil proceedings is 240 days in the first instance; the final disposition of cases often involves a long process of appeals, which in some countries can take up to seven years. It's no secret that the judicial process in many countries is slow and tedious, and can cause months of misery, pain, and anxiety to individuals, families, corporations, and litigators. Moreover, when cases do see the light of day in court, the outcome is not always satisfactory, with high-profile cases especially receiving criticism for being plagued by judges' biases and personal preferences. Scholarly research suggests that in the United States, judges' personal backgrounds, professional experiences, life experiences, and partisan ideologies might impact their decision-making.


Facebook wins preliminary approval to settle facial recognition lawsuit - Reuters

#artificialintelligence

The social media company had in July raised its settlement offer by $100 million to $650 million in relation to the lawsuit, in which Illinois users accused it of violating the U.S. state's Biometric Information Privacy Act. The revised settlement agreement resolved the court's concerns, leading to the preliminary approval of the class action settlement, Judge James Donato wrote in an order filed in the U.S. District Court for the Northern District of California. "Preliminary approval of the amended stipulation of class action settlement, Dkt. No. 468, is granted, and a final approval hearing is set for January 7, 2021," the judge said in the eight-page order. Facebook allegedly violated the state's law through its "Tag Suggestions" feature, which allowed users to recognize their Facebook friends from previously uploaded photos, according to the lawsuit, which began in 2015.


Chinese Artificial Intelligence Firm Sues Apple for $1.4 Billion Over Siri

#artificialintelligence

The company is seeking 10 billion yuan ($1.4 billion) in damages and demanding that Apple cease "manufacturing, using, promising to sell, selling, and importing" products that infringe on the patent, it said in a social media post. In the lawsuit, filed in a local Chinese court, Xiao-i argued that Apple's voice-recognition technology Siri infringes on a patent that it applied for in 2004 and was granted in 2009. In a statement, Apple said that Siri does not contain features included in the Xiao-i patent, which the iPhone maker argues relates to games and instant messaging. The company also said that independent appraisers certified by the Supreme People's Court have concluded that Apple does not infringe on Xiao-i Robot's technology. "We are disappointed Xiao-i Robot has filed another lawsuit," Apple said in the statement.


Victory! Court Orders CA Prisons to Release Race of Parole Candidates

#artificialintelligence

In a win for transparency, a state court judge ordered the California Department of Corrections and Rehabilitation (CDCR) to disclose records regarding the race and ethnicity of parole candidates. This is also a win for innovation, because the plaintiffs will use this data to build new technology in service of criminal justice reform and racial justice. In Voss v. CDCR, EFF represented a team of researchers (known as Project Recon) from Stanford University and University of Oregon who are attempting to study California parole suitability determinations using machine-learning models. This involves using automation to review over 50,000 parole hearing transcripts and identify various factors that influence parole determinations. Project Recon's ultimate goal is to develop an AI tool that can identify parole denials that may have been influenced by improper factors as potential candidates for reconsideration.
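To make the approach concrete, here is a minimal sketch of what this kind of transcript analysis could look like, written in Python with scikit-learn. It is not Project Recon's actual pipeline; the transcripts, labels, and model choice below are illustrative assumptions only. The idea is to fit a bag-of-words logistic regression to hearing transcripts and then inspect which terms the model associates most strongly with a denial.

```python
# Minimal sketch (NOT Project Recon's actual pipeline): fit a bag-of-words
# logistic regression to parole-hearing transcripts, then inspect which
# terms are most associated with a denial. Transcripts and labels are
# illustrative placeholders, not real case data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

transcripts = [
    "candidate expressed remorse and completed vocational training",
    "commissioner cited prior rules violations and lack of insight",
    "candidate has a stable housing plan and family support",
    "panel noted disciplinary write-ups and minimized the offense",
]
denied = [0, 1, 0, 1]  # 1 = parole denied (placeholder labels)

# Turn each transcript into unigram/bigram TF-IDF features.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(transcripts)

model = LogisticRegression()
model.fit(X, denied)

# Rank terms by their learned weight toward denial; in a real study these
# correlations would be reviewed by researchers, not treated as causal.
terms = vectorizer.get_feature_names_out()
weights = model.coef_[0]
for weight, term in sorted(zip(weights, terms), reverse=True)[:5]:
    print(f"{term}: {weight:+.3f}")
```

At Project Recon's scale (tens of thousands of transcripts) the same pattern, richer features, and careful validation would be needed before flagging any denial for reconsideration.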


South Wales police lose landmark facial recognition case

The Guardian

The use of facial recognition technology by South Wales police broke race and sex equalities law and breached privacy rights because the force did not apply proper safeguards, the court of appeal has ruled. The critical judgment came in a case brought by Ed Bridges, a civil liberties campaigner who was scanned by the police software in Cardiff in 2017 and 2018. He argued that the capture of thousands of faces was indiscriminate. Bridges' case had previously been rejected by the high court, but the court of appeal ruled in his favour on three counts, in a significant test case for how the controversial technology is applied in practice by police. Among other findings, the appeal court held that Bridges' right to privacy, under article 8 of the European convention on human rights, was breached because there was "too broad a discretion" left to police officers as to who to put on its watchlist of suspects.


HR can reinvent artificial intelligence

#artificialintelligence

This is the third in a series on AI transforming the workplace. As the founding partner of Future Workplace, an HR advisory and research firm, Jeanne Meister spends much of her professional time thinking about artificial intelligence, HR and how the future will shake out. Currently, that's a future that is being rapidly reshaped by the pandemic. As more employers look to AI as part of the solution to the myriad challenges that will arise post-pandemic, Meister, while a strong proponent of AI-based solutions, says organizations must safeguard data, taking steps to avoid potential bias and a lack of transparency. "Employee awareness about privacy and how much they are willing to blithely share is intensifying," she says, "and must be seriously factored into any post-pandemic AI use."


Class action comedy: Is Microsoft stealing its business customers' data? (Uh, no)

ZDNet

Last week three individuals filed a lawsuit against Microsoft Corporation in the United States District Court for the Northern District of California, with a request for class action certification. Microsoft's multitude of Business and Enterprise editions offer more advanced feature sets than the Home and Personal editions, with collaborative applications and management tools designed for meeting enterprise security and compliance challenges. The plaintiffs contend that Microsoft is routinely violating the privacy of customers who pay for business subscriptions to Microsoft 365 (formerly Office 365). They allege that "Microsoft shares its business customers' data with Facebook and other third parties, without its business customers' consent." The complaint also accuses Microsoft of sharing business customers' data with third-party developers and with "hundreds of subcontractors ... without requiring the subcontractors to keep the data private and secure." And they maintain that Microsoft uses its business customers' private data "to develop and sell new products and services--and otherwise benefit itself."