Law


McAfee predicts 'Deepfakes' as major cyber threat in 2020

#artificialintelligence

McAfee Labs released its threats predictions report for 2020, highlighting how changes in cybercrime, technology, and legislation will impact the cyberthreat landscape. The company observed that threat actors are now leveraging artificial intelligence and machine learning to produce even more convincing deepfakes, spreading misinformation that can cause massive chaos, the official press release notes. The predicted threats also include deepfakes generated to bypass facial recognition, ransomware attacks that morph into extortion campaigns, and cloud-native threats resulting from weak APIs.


DeepMind co-founder moves to Google as the AI lab positions itself for the future

#artificialintelligence

The personnel changes at Alphabet continue, this time with Mustafa Suleyman -- one of the three co-founders of the company's influential AI lab DeepMind -- moving to Google. Suleyman announced the news on Twitter, saying that after a "wonderful decade" at DeepMind, he would be joining Google to work with the company's head of AI Jeff Dean and its chief legal officer Kent Walker. The exact details of Suleyman's new role are unclear but a representative for the company told The Verge it would involve work on AI policy. The move is notable, though, as it was reported earlier this year that Suleyman had been placed on leave from DeepMind. Some speculated that Suleyman's move was the fallout of reported tensions between DeepMind and Google, as the former struggled to commercialize its technology.


Insurance Fraud Detection Market Size Worth $9.7 Billion by 2025: Grand View Research, Inc.

#artificialintelligence

The global insurance fraud detection market size is expected to reach USD 9.7 billion by 2025, registering a CAGR of 13.7% over the forecast period, according to a new report by Grand View Research, Inc. Detecting and preventing fraudulent activities is a global challenge for insurers. However, the emergence of advanced solutions such as automated business rules, self-learning models, text mining, predictive analytics, image screening, network analysis, and device identification is expected to deliver actionable insights that improve claims processes. As a result, insurance organizations are adopting fraud detection solutions that not only recognize genuine claims but also reduce the number of false positives. Fraud prevention and detection capabilities are advancing in step with increasingly aware perpetrators and more sophisticated crimes. Global concern about the ever-increasing incidence of insurance fraud, coupled with sophisticated organized crime, has signaled a need for coherent action by all insurance companies.
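As a rough illustration of how business rules and a learned model can be combined to trim false positives, here is a minimal, hypothetical sketch; the feature names, thresholds, and scores are invented for illustration and are not taken from the report:

```python
# Hypothetical hybrid claim screening: a hard business rule plus a learned
# fraud score. Only claims that trip the rule AND score high are escalated,
# which trims false positives relative to using either signal alone.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float           # claimed amount in USD
    days_since_policy: int  # age of the policy when the claim was filed
    fraud_score: float      # output of a trained model, in [0, 1]

def should_investigate(claim: Claim) -> bool:
    rule_hit = claim.amount > 10_000 or claim.days_since_policy < 30
    return rule_hit and claim.fraud_score > 0.8

claims = [
    Claim(amount=15_000, days_since_policy=12, fraud_score=0.91),   # escalated
    Claim(amount=15_000, days_since_policy=400, fraud_score=0.20),  # rule hit, low score: passed
    Claim(amount=800, days_since_policy=700, fraud_score=0.95),     # no rule hit: passed
]
flagged = [c for c in claims if should_investigate(c)]
print(len(flagged), "claim(s) escalated for review")
```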


Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces

Neural Information Processing Systems

In extreme classification settings, embedding-based neural network models are currently not competitive with sparse linear and tree-based methods in terms of accuracy. Most prior works attribute this poor performance to the low-dimensional bottleneck in embedding-based methods. In this paper, we demonstrate that theoretically there is no limitation to using low-dimensional embedding-based methods, and provide experimental evidence that overfitting is the root cause of the poor performance of embedding-based methods. These findings motivate us to investigate novel data augmentation and regularization techniques to mitigate overfitting. To this end, we propose GLaS, a new regularizer for embedding-based neural network approaches.
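For intuition, here is a minimal PyTorch sketch of a regularizer in the spirit of GLaS: it penalizes the Gram matrix of the label-embedding matrix toward a target built from label co-occurrence, so that labels that co-occur share embedding directions while unrelated labels are spread apart. The exact formulation in the paper may differ; the target matrix and shapes here are illustrative assumptions.

```python
import torch

def glas_style_regularizer(label_emb: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Sketch of a GLaS-style penalty (not the paper's exact formula).

    Pulls the Gram matrix of the label embeddings toward a target derived
    from label co-occurrence: co-occurring labels are drawn together
    (graph-Laplacian effect) and unrelated ones pushed apart (spread-out effect).
    """
    gram = label_emb @ label_emb.T           # pairwise similarities of label embeddings
    return ((gram - target) ** 2).mean()     # Frobenius-style penalty toward the target

# usage sketch: L labels, d-dimensional embeddings
L, d = 1000, 128
V = torch.randn(L, d, requires_grad=True)
C = torch.eye(L)  # stand-in target; a real target would come from co-occurrence statistics
loss = glas_style_regularizer(V, C)
loss.backward()   # gradients flow to the label embeddings as usual
```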


Beware! Criminals are using AI to steal your personal details - DataFlair

#artificialintelligence

With the wide use of artificial intelligence in various fields, expectations are high that it may become more prevalent for attack than for defense. Mark Testoni, president and CEO of the enterprise security company SAP NS2, said that hackers and criminals are just as sophisticated as the communities that develop defense systems against them. The techniques those communities use are also used by hackers: captchas and image recognition, malware development, phishing and whaling, and many more. Attackers are learning when to hide and when to strike. Instead of hiding behind masks to rob a bank, criminals now cover themselves with the help of artificial intelligence.


Counterfactual Fairness

Neural Information Processing Systems

Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example, those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group.
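In the paper's notation (A the protected attribute, X the remaining features, U the background variables of the causal model, and \hat{Y} the predictor), the criterion can be written roughly as follows:

```latex
% Counterfactual fairness: conditioned on the observed context (X = x, A = a),
% the predictor's distribution is unchanged under a counterfactual intervention
% that sets the protected attribute A to any other value a'.
P\left(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\right)
  = P\left(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\right),
\qquad \text{for all } y \text{ and all attainable } a'.
```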


Equality of Opportunity in Classification: A Causal Approach

Neural Information Processing Systems

The Equalized Odds (EO, for short) is one of the most popular measures of discrimination used in the supervised learning setting. It ascertains fairness through the balance of the misclassification rates (false positive and false negative) across the protected groups -- e.g., in the context of law enforcement, an African-American defendant who would not commit a future crime should have an equal opportunity of being released, compared to a non-recidivating Caucasian defendant. Despite this noble goal, it has been acknowledged in the literature that statistical tests based on the EO are oblivious to the underlying causal mechanisms that generated the disparity in the first place (Hardt et al. 2016). This leads to a critical disconnect between statistical measures readable from the data and the meaning of discrimination in the legal system, where characterizing discrimination requires compelling evidence that the observed disparity is tied to a specific causal process deemed unfair by society. The goal of this paper is to develop a principled approach to connecting the statistical disparities characterized by the EO with the underlying, elusive, and frequently unobserved causal mechanisms that generated such inequality.
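For reference, the standard statistical Equalized Odds criterion of Hardt et al. (2016) can be stated as follows, with \hat{Y} the prediction, Y the true outcome, and A the protected attribute:

```latex
% Equalized Odds: prediction rates are balanced across protected groups
% separately for each true outcome, equalizing both the true positive
% rate (y = 1) and the false positive rate (y = 0).
P\left(\hat{Y} = 1 \mid A = 0, Y = y\right)
  = P\left(\hat{Y} = 1 \mid A = 1, Y = y\right),
\qquad y \in \{0, 1\}.
```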


Machine Learning: Rules vs. Models in AML Platforms - Feedzai

#artificialintelligence

Fueled by mobster movies and international espionage thrillers, the phrase 'money laundering' has a mysterious, exciting edge to it. But as is often the case, the truth is far less appealing than the glitzy Hollywood version. In reality, money laundering is an activity that traps 40.3 million people in modern slavery, fuels political unrest, and finances terrorism across the globe. Considering the consequences, it's no wonder governments enact AML (anti-money laundering) regulations. These regulations have honorable and important intentions, but there's no denying the ever-evolving compliance headaches they create for financial institutions.
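To make the rules-versus-models contrast in the title concrete, here is a minimal, hypothetical sketch; the threshold, features, and model are illustrative assumptions, not Feedzai's actual platform:

```python
# A static AML rule fires on any transaction above a fixed threshold, so
# launderers can evade it by "structuring" (splitting amounts just below it).
# A simple learned model scores behavior instead, so repeated just-under-
# threshold transfers still stand out.
import numpy as np
from sklearn.ensemble import IsolationForest

THRESHOLD = 10_000.0

def rule_flag(amount: float) -> bool:
    return amount >= THRESHOLD  # classic reporting-threshold rule

# features per account: [mean transfer amount, transfers per day]
normal = np.random.default_rng(0).normal([500, 2], [200, 1], size=(500, 2))
model = IsolationForest(random_state=0).fit(normal)

structuring_account = np.array([[9_900.0, 12.0]])  # many just-under-threshold transfers
print(rule_flag(9_900.0))                 # False: the rule misses it
print(model.predict(structuring_account))  # [-1]: the model flags it as anomalous
```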


Artificial Intelligence Bolsters Physical Security

#artificialintelligence

In the wake of the May 2018 mass shooting that resulted in 10 deaths at Santa Fe (Texas) High School, the Santa Fe Independent School District looked at all possible options to improve school safety within reasonable financial constraints. The district considered enhancing its approximately 750 cameras with facial recognition technology but did not immediately see a workable solution -- both for reasons of cost and over concerns that shaky accuracy could lead to false positives, says Kip Robins, director of technology for Santa Fe ISD, which has about 4,500 students. The district ultimately contracted with a company called AnyVision, which demonstrated its Better Tomorrow product, an artificial-intelligence-based application that plugs into an existing camera network and provides the ability to do surveillance based on a certain face, body, or object. School districts or other end users can create a watch list to keep an eye out for potential pedophiles, for example, or someone known to be mentally unstable. The Santa Fe ISD's solution is part of a growing class of software offerings that use artificial intelligence to power through reams of data and notice certain predetermined visual information -- whether it's someone's face, a certain license plate, or simply human movement in a place and time where there shouldn't be any.
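As a rough illustration of how watchlist-style matching typically works (a generic sketch, not AnyVision's actual product or API; the embedding source and threshold are assumptions):

```python
# Generic watchlist matching: embed each detected face into a vector space
# and compare against stored watchlist embeddings by cosine similarity.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real system these vectors would come from a face-embedding network
# (hypothetical here); each watchlist entry stores a name and an embedding.
watchlist = {"person_a": np.random.default_rng(1).normal(size=128)}
MATCH_THRESHOLD = 0.6  # assumed; deployments tune this to limit false positives

def check_face(embedding: np.ndarray) -> list[str]:
    """Return watchlist identities whose similarity exceeds the threshold."""
    return [name for name, ref in watchlist.items()
            if cosine_similarity(embedding, ref) >= MATCH_THRESHOLD]

detected = np.random.default_rng(2).normal(size=128)  # stand-in for a face from a camera frame
print(check_face(detected))  # likely [] for a random, non-matching vector
```

The threshold is the lever behind the false-positive concern the district raised: set it too low and innocent passers-by match watchlist entries; set it too high and genuine matches are missed.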


AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing by Karen Yeung, Andrew Howes, Ganna Pogrebna :: SSRN

#artificialintelligence

In this paper, we (1) argue that the international human rights framework provides the most promising set of standards for ensuring that AI systems are ethical in their design, development and deployment, and (2) sketch the basic contours of a comprehensive governance framework, which we call 'human rights-centred design, deliberation and oversight', for ensuring that AI can be relied upon to operate in ways that will not violate human rights.