GDPR and AI: making sense of a complex relationship

#artificialintelligence

The development and deployment of artificial intelligence (AI) tools should take place in a socio-technical framework where individual interests and the social good are preserved, and where opportunities for social knowledge and better governance are enhanced, without sliding into the extremes of 'surveillance capitalism' and the 'surveillance state'. This was one of the main conclusions of the study 'The impact of the General Data Protection Regulation on Artificial Intelligence', carried out by Professor Giovanni Sartor and Dr Francesca Lagioia of the European University Institute of Florence at the request of the STOA Panel, following a proposal from Eva Kaili (S&D, Greece), STOA Chair. Data protection is at the forefront of the relationship between AI and the law, as many AI applications involve the massive processing of personal data, including the targeting and personalised treatment of individuals on the basis of such data. This explains why data protection has been the area of the law that has most engaged with AI: although AI is not explicitly mentioned in the General Data Protection Regulation (GDPR), many of its provisions are not only relevant to AI but are also challenged by the new ways of processing personal data that AI enables. This new STOA study addresses the relationship between the GDPR and AI and analyses how EU data protection rules will apply in this technological domain, and thus how they will shape both its development and deployment.


Security Think Tank: Artificial intelligence will be no silver bullet for security

#artificialintelligence

Undoubtedly, artificial intelligence (AI) can support organisations in tackling a widening threat landscape as criminals become more sophisticated. However, AI is no silver bullet when it comes to protecting assets, and organisations should be thinking about cyber augmentation rather than the automation of cyber security alone. Areas where AI can currently be deployed include training a system to recognise even the subtlest behaviours of ransomware and malware before they enter the system, and then isolating them. Other examples include automated phishing and data-theft detection, which are extremely helpful because they enable a real-time response. Context-aware behavioural analytics are also interesting, offering the possibility of immediately spotting a change in user behaviour that could signal an attack.
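As a concrete illustration of that last point, here is a minimal sketch of behavioural anomaly detection using scikit-learn's IsolationForest. The feature set (login hour, data volume, hosts contacted) and all numbers are hypothetical, chosen only to show how a model trained on a user's normal sessions can flag an out-of-pattern one:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" sessions for one user:
# [login hour, MB transferred, distinct hosts contacted]
normal_sessions = np.column_stack([
    rng.normal(9, 1.5, 500),   # logins cluster around 9 a.m.
    rng.normal(50, 15, 500),   # roughly 50 MB moved per session
    rng.poisson(5, 500),       # roughly 5 hosts contacted
])

# A suspicious session: 3 a.m. login, heavy transfer, many hosts.
suspicious_session = np.array([[3.0, 900.0, 40.0]])

# Fit on normal behaviour only; predict() returns -1 for anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)
print(model.predict(suspicious_session))  # [-1] -> flagged for review
```

In practice such a score would feed an analyst's queue rather than trigger automatic blocking, which is precisely the augmentation-over-automation point made above.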


Secure Collaborative XGBoost on Encrypted Data

#artificialintelligence

Training a machine learning model requires a large quantity of high-quality data. One way to achieve this is to combine data from many different organizations or data owners. But data owners are often unwilling to share their data with each other due to privacy concerns, which can stem from business competition or from regulatory compliance. The question is: how can we mitigate such privacy concerns? Secure collaborative learning enables many data owners to build robust models on their collective data, without revealing that data to each other.
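The article does not spell out the underlying protocol, but a toy example of additive secret sharing, one classic building block of secure collaborative computation, shows the core idea: parties jointly compute an aggregate (here a sum, standing in for a row count or gradient statistic) while no single party ever sees another's raw value. This is a simplified illustration, not the encrypted Secure XGBoost system itself:

```python
import random

PRIME = 2**61 - 1  # work in a finite field so shares look uniformly random

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Each owner's private statistic (values are hypothetical).
private_values = [1200, 3400, 560]
n = len(private_values)

# Every owner splits its value and sends one share to each party;
# any single share reveals nothing about the underlying value.
all_shares = [share(v, n) for v in private_values]

# Party p sums the shares it received from all owners.
partial_sums = [sum(all_shares[o][p] for o in range(n)) % PRIME
                for p in range(n)]

# Only the combination of all partial sums reveals the aggregate.
print(sum(partial_sums) % PRIME)  # 5160 = 1200 + 3400 + 560
```

Production systems layer far more machinery (hardware enclaves, secure aggregation, integrity checks) on top of this principle, but the privacy guarantee has the same shape: individual inputs stay hidden, only the agreed aggregate is revealed.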


Center for Security Research and Education announces seed grant awardees

#artificialintelligence

CSRE is providing a total of $300,000 in funding for the projects, with an additional $300,000 in matching and supplemental funding from other colleges, departments, and institutes. "Today's challenges to global, national, and individual security are numerous and complex," said CSRE Director James W. Houck, "and we are delighted to support these innovative and exciting initiatives." CSRE was established in 2017 to promote interdisciplinary research and education that protect people, infrastructure, and institutions from the broad range of threats and hazards confronting society today. Contributing units include the Provost and the Office of the Senior Vice President for Research, as well as the colleges of Agricultural Sciences, Earth and Mineral Sciences, Engineering, Information Sciences and Technology, and the Liberal Arts; Penn State Law and the School of International Affairs; Penn State Harrisburg; the Applied Research Laboratory; the Institute for Computational and Data Sciences; the Institutes of Energy and the Environment; the Huck Institutes of the Life Sciences; and the Social Sciences Research Institute. In its first three years, CSRE has provided over $633,000 in funding, augmented by an additional $581,000 from contributing units, to a total of 39 seed projects and faculty fellowships, and has hosted a number of guest speakers, workshops, and other events.


São Paulo subway facial recognition system slammed over user data security and privacy

ZDNet

The company responsible for the operation of São Paulo's subway system has failed to provide sufficient evidence that it is protecting user privacy in the implementation of a new surveillance system that will use facial recognition technology. This is the conclusion of a group of consumer rights bodies following legal action initiated against Companhia do Metropolitano de São Paulo (METRO) over a project aimed at modernizing the subway's surveillance system. The current legacy estate of 2,200 non-integrated cameras will be replaced by 5,200 centrally controlled, high-definition digital cameras. The platform, which will scan the faces of 4 million daily passengers, is expected to enhance operations and help authorities find wanted criminals through an integration with the police database. The consumer rights bodies that initiated the civil lawsuit noted, in a statement by the Brazilian Institute of Consumer Protection (IDEC), that METRO failed to produce a report on the impact of using facial recognition technology, or studies demonstrating the security of the databases to be used in the new surveillance system.


Microsoft AI powers better conversations between sellers and customers

#artificialintelligence

Microsoft's internal sales executives, who manage large numbers of accounts, operate in a challenging environment. They sell a rich suite of products using an assortment of different sales tools and fragmented data. As a result, they spend too much time gathering and verifying customer information, and too little time helping customers realize how they can achieve their business goals through Microsoft technologies. Microsoft is hardly alone in this: distilling compelling insights from disparate, siloed information systems has historically been a complex and time-consuming task for sales executives in all industries. A holistic view of data and insights at the commercial-account level simply hasn't been available, and too many tools take too much of sellers' time away from focusing on their customers.


Digital Threats to Democracy: Ruling with a Silicon Fist

#artificialintelligence

The first tactic in the digital authoritarian toolkit is to establish information walls through fear, friction, or flooding. While employing traditional methods of repression and punishment to censor through fear, digital authoritarians also make it more difficult for citizens to access information through internet shutdowns, firewalls, and paywalls. In addition, digital dictators target traditional democratic values and freedoms by flooding the internet and other outlets for speech, press, and assembly. Inauthentic accounts ("bots"), deepfakes, and new tools of digital propaganda help states amplify narratives, build polarization, and deepen "us versus them" divisions. With information walls, regimes can shape public opinion in newly sophisticated ways by establishing state control over the messages their populations can access, and over the information they cannot.


Abolish the #TechToPrisonPipeline

#artificialintelligence

The authors of the Harrisburg University study make explicit their desire to provide "a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime" as a co-author and former NYPD police officer outlined in the original press release.[38] At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality. Publishers and funding agencies serve a crucial role in feeding this ravenous maw by providing platforms and incentives for such research. The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world. To reiterate our demands, the review committee must publicly rescind the offer for publication of this specific study, along with an explanation of the criteria used to evaluate it. Springer must issue a statement condemning the use of criminal justice statistics to predict criminality and acknowledging their role in incentivizing such harmful scholarship in the past. Finally, all publishers must refrain from publishing similar studies in the future.


How machine learning finds anomalies to catch financial cybercriminals

#artificialintelligence

In the last few months, millions of dollars have been stolen from unemployment systems already under immense pressure from coronavirus-related claims. A skilled ring of international fraudsters has been submitting false unemployment claims for individuals who still have steady work. The attackers use previously acquired Personally Identifiable Information (PII) such as social security numbers, addresses, names, phone numbers, and bank account information to trick public officials into accepting the claims. Payouts to these employed people are then redirected to money-laundering accomplices, who pass the money around to veil the illicit nature of the cash before depositing it into their own accounts. The acquisition of the PII that enabled these attacks, and the pattern of money laundering that financial institutions failed to detect, highlight the importance of renewed security.
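As a deliberately simple sketch of one anomaly signal implicit in this scheme, consider a single bank account receiving payouts for many unrelated claimants. Real detection pipelines combine many such features in statistical or machine learning models; the records, identifiers, and threshold below are entirely hypothetical:

```python
from collections import Counter

# Hypothetical claim records: (claimant identifier, payout account).
claims = [
    ("ssn_001", "acct_A"), ("ssn_002", "acct_B"), ("ssn_003", "acct_C"),
    ("ssn_004", "acct_M"), ("ssn_005", "acct_M"), ("ssn_006", "acct_M"),
    ("ssn_007", "acct_M"), ("ssn_008", "acct_D"),
]

# Count distinct claimants paying into each account; a mule account
# collecting payouts for many unrelated people is a strong fraud signal.
claimants_per_account = Counter(acct for _, acct in set(claims))

THRESHOLD = 3  # illustrative cutoff, not a real operational value
flagged = [acct for acct, count in claimants_per_account.items()
           if count >= THRESHOLD]
print(flagged)  # ['acct_M']
```

A rule this crude would produce false positives on its own (shared household accounts, for instance), which is why such signals are typically one feature among many in an anomaly-scoring model rather than a standalone filter.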


A new US bill would ban the police use of facial recognition

MIT Technology Review

The news: US Democratic lawmakers have introduced a bill that would ban the use of facial recognition technology by federal law enforcement agencies. Specifically, it would make it illegal for any federal agency or official to "acquire, possess, access, or use" biometric surveillance technology in the US. It would also require state and local law enforcement to adopt similar bans in order to receive federal funding. The Facial Recognition and Biometric Technology Moratorium Act was introduced by Senators Ed Markey of Massachusetts and Jeff Merkley of Oregon and Representatives Pramila Jayapal of Washington and Ayanna Pressley of Massachusetts. Seize the moment: The proposed law arrives at a moment when police use of facial recognition technology is coming under increased scrutiny amid the protests that followed the killing of George Floyd in late May.