Law


Can A.I. Be Taught to Explain Itself?

@machinelearnbot

In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: "Advances in A.I. Are Used to Spot Signs of Sexuality." But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski's work "dangerous" and "junk science." Within the week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: "The Invention of A.I. 'Gaydar' Could Be the Start of Something Much Worse."


Artificial Intelligences and Responsibility.

#artificialintelligence

MIT Technology Review has a meandering article, "A.I. Can Be Made Legally Responsible for Its Decisions". In its own way, it tries to chart the territory of trade secrets and corporations, threading a needle that we may actually need to change to adapt to using Artificial Intelligence (AI). What surprises me in such writing and conversations is not that they revolve around protecting trade secrets – I'm sorry, if you put your self-changing code out there and are willing to take the risk, I see that as part of it – but that they focus on the decision process. Almost all bad decisions in code I have encountered have come about because the developers were hidden in a silo behind a process that isolated them… sort of like what happens with an AI, only twofold. If the decision process is flawed, the first thing to look at is the source data behind the decisions – and in an AI, this can be a daunting task, as it builds learning algorithms based on… data.


9 Must-Have Datasets for Investigating Recommender Systems

@machinelearnbot

Lab41 is currently in the midst of Project Hermes, an exploration of different recommender systems in order to build up some intuition (and of course, hard data) about how these algorithms can be used to solve data, code, and expert discovery problems in a number of large organizations. Anna's post gives a great overview of recommenders which you should check out if you haven't already. The ideal way to tackle this problem would be to go to each organization, find the data they have, and use it to build a recommender system. But this isn't feasible for multiple reasons: it doesn't scale because there are far more large organizations than there are members of Lab41, and of course most of these organizations would be hesitant to share their data with outsiders. Instead, we need a more general solution that anyone can apply as a guideline.


UN panel to debate 'killer robots' and other AI weapons

#artificialintelligence

A United Nations panel agreed Friday to consider guidelines and potential limitations for military uses of artificial intelligence amid concerns from human rights groups and other leaders that so-called "killer robots" could pose a long-term, lethal threat to humanity. Advocacy groups warned about the threats posed by such "killer robots" and aired a chilling video illustrating their possible uses on the sidelines of the first formal U.N. meeting of government experts on Lethal Autonomous Weapons Systems this week. More than 80 countries took part. Ambassador Amandeep Gill of India, who chaired the gathering, said participants plan to meet again in 2018. He said ideas discussed this week included the creation of a legally binding instrument, a code of conduct, or a technology review process.


Panel aims to pull plug on killer robots

Boston Herald

A U.N. panel agreed yesterday to move ahead with talks to define and possibly set limits on weapons that can kill without human involvement, as human rights groups said governments are moving too slowly to keep up with advances in artificial intelligence that could put computers in control one day. Advocacy groups warned about the threats posed by such "killer robots" and aired a chilling video illustrating their possible uses on the sidelines of the first formal U.N. meeting of government experts on Lethal Autonomous Weapons Systems this week. More than 80 countries took part. Ambassador Amandeep Gill of India, who chaired the gathering, said participants plan to meet again in 2018. He said ideas discussed this week included the creation of a legally binding instrument, a code of conduct, or a technology review process.


The Tech HHS, SEC, SSA and Other Agencies Use to Ferret Out Cheaters and Crooks

#artificialintelligence

During a press conference in June 2016, leaders from the Justice and Health and Human Services departments unveiled charges in the largest takedown of Medicare and Medicaid fraud in the nation's history. The final tally was eye-opening: About 300 individuals, including 61 doctors and other medical professionals, were accused of falsifying $900 million worth of medical bills. The success, they said, was due to the Medicare Fraud Strike Force. "The Medicare Fraud Strike Force is a model of 21st century data-driven law enforcement, and it has had a remarkable impact on healthcare fraud across the country," former Assistant Attorney General Leslie Caldwell said at the time. Behind the scenes, officials praised nearly real-time data analytics as a major asset in building their case.


UN panel agrees to move ahead with debate on killer robots

Daily Mail

A U.N. panel agreed Friday to move ahead with talks to define and possibly set limits on weapons that can kill without human involvement, as human rights groups said governments are moving too slowly to keep up with advances in artificial intelligence that could put computers in control one day. Advocacy groups warned about the threats posed by such 'killer robots' and aired a chilling video illustrating their possible uses on the sidelines of the first formal U.N. meeting of government experts on Lethal Autonomous Weapons Systems this week. More than 80 countries took part. The meeting falls under the U.N.'s Convention on Certain Conventional Weapons - also known as the Inhumane Weapons Convention - a 37-year-old agreement that has set limits on the use of arms and explosives like mines, blinding laser weapons and booby traps over the years.


UN Panel Agrees to Move Ahead With Debate on 'Killer Robots'

U.S. News

A U.N. panel agreed Friday to move ahead with talks to define and possibly set limits on weapons that can kill without human involvement, as human rights groups said governments are moving too slowly to keep up with advances in artificial intelligence that could put computers in control one day.


Religion that worships artificial intelligence prepares for a world run by machines

#artificialintelligence

A newly established religion called Way of the Future will worship artificial intelligence, focusing on "the realization, acceptance, and worship of a Godhead based on Artificial Intelligence" that followers believe will eventually surpass human control over Earth. The first AI-based church was founded by Anthony Levandowski, the Silicon Valley multimillionaire who championed the robotics teams for Uber's self-driving program and Waymo, the self-driving car company owned by Google. Way of the Future "is about creating a peaceful and respectful transition of who is in charge of the planet from people to people + 'machines,'" the religion's official website reads. "Given that technology will 'relatively soon' be able to surpass human abilities, we want to help educate people about this exciting future and prepare a smooth transition." Levandowski filed documents to establish the religion back in May, making himself the "Dean" of the church and the CEO of a related nonprofit that would run it.


Does The Advance of AI Herald A 'Golden Age' for Legal Practice?

#artificialintelligence

A 2016 survey by Thomson Reuters, 'The Generational Shift In Legal Departments', highlights how unprepared the legal services industry is for the arrival, en masse, of this new generation in our workplaces. The report states that three quarters of the workforce will be made up of millennials by the year 2025. It's important to bear in mind that they won't just make up 75% of the workforce but likely 75% of potential clients for legal service providers as well. Criticisms of this new generation have included a sense of entitlement, being quicker to move jobs rather than display loyalty, and being much more sensitive to time and cost constraints than the preceding generation. I would argue that this new generation will drive even more change as clients or corporate colleagues than they will as lawyers. The practice of law runs on an archaic system characterised by time-consuming formalities, cautious advice and vast expense.