Law


7 free skills for the human rights jobs of the future

#artificialintelligence

The human rights job landscape is changing rapidly. Current and future challenges in combating human rights violations require new skills and tactics. We have compiled a list of 7 free online courses and specializations that will equip you with the knowledge and skills for the human rights jobs of the future. Machine learning and artificial intelligence create new opportunities and challenges for the protection of human rights. Artificial intelligence can help make education, health and economic systems more efficient, but it also risks amplifying polarization, bias and discrimination against certain groups.


Parsing the Shadow Docket

Slate



UK government backs AI projects for insurance industry - Reinsurance News

#artificialintelligence

The UK government has announced that it plans to invest in 40 artificial intelligence (AI) and data analytics projects to boost productivity and improve customer service in the UK insurance, accountancy, and legal services industries. The government has pledged £13 million to support these collaborative industry and research projects and develop the next generation of professional services. One of these projects, developed by Intelligent Voice Ltd, Strenuus Ltd and the University of East London, will combine AI and voice recognition technology to detect and interpret emotion and linguistics to assess the credibility of insurance claims. Another project is an analysis tool that examines images collected by drones to assess flood-damaged areas, using a 3D image recognition system to evaluate flood extent and depth alongside impacts on buildings and infrastructure to help with insurance claim assessments. Other examples include an online bot that uses AI to answer legal questions and software that analyses accounting data and suggests ways to cut expenditure.


Artificial intelligence to tackle insurance fraud and assess flood damage

#artificialintelligence

A project to develop breakthrough artificial intelligence technology for the anti-fraud sector is one of a number of new projects set to receive funding to enable the UK accountancy, insurance and legal services industries to transform how they operate. The artificial intelligence software, being developed by Intelligent Voice Ltd, Strenuus Ltd and the University of East London, will combine AI and voice recognition technology to detect and interpret emotion and linguistics to assess the credibility of insurance claims. The project is one of 40 backed by £13 million in Government investment to support collaborative industry and research projects to develop the next generation of professional services. Artificial intelligence and data are transforming industries across the world. We are combining our unique heritage in AI with our world-beating professional services to put the UK at the forefront of these cutting-edge technologies and their application. We want to ensure businesses and consumers benefit from the application of AI - from providing quicker access to legal advice for customers to tackling fraudulent insurance claims, these projects illustrate our modern Industrial Strategy in action.


Call to ban killer robots in wars

BBC News

A group of scientists has called for a ban on the development of weapons controlled by artificial intelligence (AI). The group says that autonomous weapons may malfunction in unpredictable ways and kill innocent people. Ethics experts also argue that it is a moral step too far for AI systems to kill without any human intervention. The comments were made at the American Association for the Advancement of Science meeting in Washington DC. Human Rights Watch (HRW) is one of 89 non-governmental organisations from 50 countries that have formed the Campaign to Stop Killer Robots to press for an international treaty.


Why tech giants are interested in regulating facial recognition

#artificialintelligence

Last week, Amazon made the unexpected move of calling for regulation of facial recognition. In a blog post published on Thursday, Michael Punke, VP of global public policy at Amazon Web Services, expressed support for a "national legislative framework that protects individual civil rights and ensures that governments are transparent in their use of facial recognition technology." Facial recognition is one of the fastest-growing areas of the artificial intelligence industry. It has drawn interest from both the public and private sectors and is already worth billions of dollars. Amazon has been moving fast to establish itself as a leader in facial recognition technology, actively marketing its Rekognition service to a range of customers, including law enforcement agencies.


Ocasio-Cortez is right, algorithms are biased -- but we can make them fairer

#artificialintelligence

Rep. Alexandria Ocasio-Cortez (D-N.Y.) recently began sounding the alarm about the potential pitfalls of using algorithms to automate human decision-making. She pointed out a fundamental problem with artificial intelligence (AI): "Algorithms are still made by human beings... if you don't fix the bias, then you are just automating the bias." She has continued to raise the issue on social media. Ocasio-Cortez isn't the only person questioning whether machines offer a foolproof way to improve decision-making by removing human error and bias. Algorithms are increasingly deployed to inform important decisions on everything from loans and insurance premiums to job and immigration applications.


'RoboCop' is a prescient satire worth revisiting

Mashable

There is no movie more prescient than RoboCop. The 1987 action movie may seem ridiculous on the surface -- a cop gets turned into a robot cop to fight crime in futuristic Detroit -- but it takes aim at some of the United States' biggest issues that are still affecting us today, including privatization of public institutions, gentrification, unchecked capitalism, corruption, and television media. Yes, RoboCop is over the top, but that's what makes it so fun. That, combined with its messages that still hold merit 32 years later, makes it worth revisiting. In RoboCop's version of the future, Detroit is basically a dystopia, riddled with crime and protected by an underfunded police department.


How Taylor Swift showed us the scary future of facial recognition

The Guardian

Taylor Swift raised eyebrows late last year when Rolling Stone magazine revealed her security team had deployed facial recognition technology during her Reputation tour to root out stalkers. But the company contracted for the effort uses its technology to provide much more than just security. ISM Connect also uses its smart screens to capture metrics for promotion and marketing. Facial recognition, used for decades by law enforcement and militaries, is quickly becoming a commercial tool to help brands engage consumers. Swift's tour is just the latest example of the growing privacy concerns around the largely unregulated, billion-dollar industry.


Microsoft warns investors that its artificial-intelligence tech could go awry and hurt its reputation

#artificialintelligence

Microsoft is spending heavily on its artificial-intelligence tech. But it wants investors to know that the tech may go awry, harming the company's reputation in the process. Or so it warned investors in its latest quarterly report, as first spotted by Quartz's Dave Gershgorn. "Issues in the use of AI in our offerings may result in reputational harm or liability," Microsoft wrote in the filing. "AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions."

Despite the big talk from tech companies such as Microsoft about the virtues and possibilities of AI, the truth is that the technology is not that smart yet. Today, AI is mostly based on machine learning, in which the computer has only a limited ability to infer conclusions from limited data. It must ingest many examples to "understand" something, and if that initial data set is biased or flawed, its output will be, too. Intel and startups such as Habana Labs are working on chips that could help computers better perform the complicated task of inference, which is the foundation of learning and of the ability of humans (and machines) to reason.

Microsoft has already had a few high-profile snafus with its AI tech. In 2016, it yanked a Twitter chatbot called Tay offline within 24 hours after it began spewing racist and sexist tweets, using words taught to it by trolls. More recent, and more serious, was research by Joy Buolamwini at the MIT Media Lab, reported a year ago by The New York Times. She found that three leading facial-recognition systems -- created by Microsoft, IBM, and China's Megvii -- were doing a terrible job of identifying nonwhite faces. Microsoft's error rate for darker-skinned women was 21%, which was still better than the roughly 35% of the other two.

Microsoft insists that it listened to that criticism and has improved its facial-recognition technology. And in the wake of the outcry over Amazon's Rekognition facial-recognition service, Microsoft has begun calling for regulation of facial-recognition tech. Microsoft CEO Satya Nadella told journalists last month: "Take this notion of facial recognition, right now it's just terrible."
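The filing's point about insufficient or biased datasets can be made concrete with a small illustration. The sketch below is purely hypothetical and has nothing to do with Microsoft's actual models: it trains a toy logistic-regression classifier (using numpy and scikit-learn) on synthetic data in which "group B" is heavily under-represented and drawn from a different distribution than "group A", then measures the error rate per group. The group names, sample sizes and the make_group helper are all invented for illustration.

    # Minimal sketch: a model trained mostly on one group fails on the other.
    # Hypothetical toy data; groups, sizes and rules are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        # Two-feature synthetic data; `shift` moves group B's distribution so a
        # rule learned mostly from group A does not transfer to it.
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # this group's true rule
        return X, y

    # Skewed training set: 950 examples from group A, only 50 from group B.
    Xa, ya = make_group(950, shift=0.0)
    Xb, yb = make_group(50, shift=2.0)
    model = LogisticRegression(max_iter=1000).fit(
        np.vstack([Xa, Xb]), np.concatenate([ya, yb])
    )

    # Evaluate on fresh, equal-sized test sets for each group.
    for name, shift in [("group A", 0.0), ("group B", 2.0)]:
        X_test, y_test = make_group(2000, shift)
        print(f"{name}: error rate {1 - model.score(X_test, y_test):.1%}")

On a typical run the error rate for the under-represented group comes out several times higher than for the well-represented one, an artificial but instructive echo of the kind of disparity the MIT Media Lab study measured in commercial facial-recognition systems.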