
Poppy Gustafsson: the Darktrace tycoon in new cybersecurity era

The Guardian

Poppy Gustafsson runs a cutting-edge and gender-diverse cybersecurity firm on the brink of a £3bn stock market debut, but she is happy to reference the pop culture classic The Terminator to help describe what Darktrace actually does. Launched in Cambridge eight years ago by an unlikely alliance of mathematicians, former spies from GCHQ and the US, and artificial intelligence (AI) experts, Darktrace provides protection that enables businesses to stay one step ahead of ever smarter and more dangerous hackers and viruses. Darktrace markets its products as the digital equivalent of the human body's ability to fight illness: its AI security works as an "enterprise immune system", can "self-learn and self-heal", and has an "autonomous response capability" to tackle threats as they are detected, without instruction. "It really does feel like we're in this new era of cybersecurity," says Gustafsson, the chief executive of Darktrace. "The arms race will absolutely continue. I really don't think it's very long until this [AI] innovation gets into the hands of attackers, and we will see these very highly targeted and specific attacks that humans won't necessarily be able to spot and defend themselves from. It's not going to be these futuristic Terminator-style robots out shooting each other, it's going to be all these little pieces of code fighting in the background of our businesses."


Seeing stones: pandemic reveals Palantir's troubling reach in Europe

The Guardian

24 March 2020 will be remembered by some for the news that Prince Charles had tested positive for Covid and was isolating in Scotland. In Athens it was memorable as the day the traffic went silent. Twenty-four hours into a hard lockdown, Greeks were acclimatising to a new reality in which they had to send an SMS to the government in order to leave the house. As well as millions of text messages, the Greek government faced extraordinary dilemmas. The European Union's most vulnerable economy, with one of its oldest populations (alongside Italy) and one of its weakest health systems, faced the first wave of a pandemic that had overwhelmed richer countries with fewer pensioners and stronger health provision. One Greek who did go into the office that day was Kyriakos Pierrakakis, the minister for digital transformation, whose signature was inked in blue on an agreement with the US technology company Palantir. The deal, which would not be revealed to the public for another nine months, gave one of the world's most controversial tech companies access to vast amounts of personal data while offering its software to help Greece weather the Covid storm. The zero-cost agreement was not registered on the public procurement system, nor did the Greek government carry out a data impact assessment – the mandated check to see whether an agreement might violate privacy laws. The questions that emerge from pandemic-era Greece echo those from across Europe during Covid and show Palantir extending into sectors from health to policing, aviation to commerce and even academia.


Explained: Why it is becoming more difficult to detect deepfake videos and what are the implications

#artificialintelligence

Doctored videos, or deepfakes, have been one of the key weapons used in propaganda battles for quite some time now. Donald Trump taunting Belgium for remaining in the Paris climate agreement, David Beckham speaking fluently in nine languages, Mao Zedong singing 'I Will Survive', or Jeff Bezos and Elon Musk in a pilot episode of Star Trek… all these videos have gone viral despite being fake, or, rather, because they were deepfakes. Last year, Marco Rubio, the Republican senator from Florida, said deepfakes are as potent as nuclear weapons in waging wars in a democracy. "In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, and nuclear weapons, and long-range missiles. Today, you just need access to our Internet system, to our banking system, to our electrical grid and infrastructure, and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally and weaken us deeply," Forbes quoted him as saying.


High-level Approaches to Detect Malicious Political Activity on Twitter

arXiv.org Artificial Intelligence

Our work represents another step toward the detection and prevention of these increasingly prevalent political manipulation efforts. We therefore start by focusing on understanding what state-of-the-art approaches lack -- since the problem remains, this is a fair assumption. We find concerning issues within the current literature and follow a diverging path, placing emphasis on data features that are less susceptible to malicious manipulation and on high-level approaches that avoid a level of granularity biased towards easy-to-spot, low-impact cases. We designed and implemented a framework -- Twitter Watch -- that performs structured Twitter data collection, and applied it to the Portuguese Twittersphere. We investigate a data snapshot taken in May 2020, with around 5 million accounts and over 120 million tweets (a figure that has since grown to over 175 million). The analyzed period stretches from August 2019 to May 2020, with a focus on the Portuguese elections of October 6th, 2019; the Covid-19 pandemic also showed itself in our data, and we delve into how it affected typical Twitter behavior. We performed three main analyses: content-oriented, metadata-oriented, and network interaction-oriented. We learn that Twitter's suspension patterns are not adequate to the type of political trolling found in the Portuguese Twittersphere -- identified by this work and by an independent peer -- nor to accounts that post fake news. Through two distinct analyses, we also found that the different types of malicious accounts we independently gathered are very similar in terms of both content and interaction, while being very distinct from regular accounts.
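As a rough illustration of the three-view analysis described above, the sketch below concatenates content, metadata, and network-interaction features per account and trains a simple classifier to separate suspicious accounts from regular ones. This is a minimal sketch under our own assumptions, not code from the paper or from Twitter Watch; the feature names and toy data are hypothetical placeholders.

```python
# Hedged sketch: combine the three feature views the abstract describes
# (content, metadata, network interaction) into one per-account representation.
# Feature names and data below are illustrative placeholders, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_accounts = 200

# Content-oriented view: e.g. per-account text statistics (hashtag ratio, URL ratio, ...).
content = rng.normal(size=(n_accounts, 5))
# Metadata-oriented view: e.g. account age, follower/friend ratio, posting rate.
metadata = rng.normal(size=(n_accounts, 3))
# Network-interaction view: e.g. retweet/mention graph statistics per account.
network = rng.normal(size=(n_accounts, 4))

X = np.hstack([content, metadata, network])   # one row per account
y = rng.integers(0, 2, size=n_accounts)       # 1 = suspicious, 0 = regular (toy labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```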


Problematic Machine Behavior: A Systematic Literature Review of Algorithm Audits

arXiv.org Artificial Intelligence

While algorithm audits are growing rapidly in commonality and public importance, relatively little scholarly work has gone toward synthesizing prior work and strategizing future research in the area. This systematic literature review aims to do just that, following PRISMA guidelines in a review of over 500 English articles that yielded 62 algorithm audit studies. The studies are synthesized and organized primarily by behavior (discrimination, distortion, exploitation, and misjudgement), with codes also provided for domain (e.g. search, vision, advertising, etc.), organization (e.g. Google, Facebook, Amazon, etc.), and audit method (e.g. sock puppet, direct scrape, crowdsourcing, etc.). The review shows how previous audit studies have exposed public-facing algorithms exhibiting problematic behavior, such as search algorithms culpable of distortion and advertising algorithms culpable of discrimination. Based on the studies reviewed, it also suggests some behaviors (e.g. discrimination on the basis of intersectional identities), domains (e.g. advertising algorithms), methods (e.g. code auditing), and organizations (e.g. Twitter, TikTok, LinkedIn) that call for future audit attention. The paper concludes by offering the common ingredients of successful audits, and discussing algorithm auditing in the context of broader research working toward algorithmic justice.


Beyond traditional assumptions in fair machine learning

arXiv.org Artificial Intelligence

After challenging the validity of the assumptions traditionally made in fair machine learning when they meet real-world applications, we propose ways to move forward when they are violated. First, we show that group fairness criteria purely based on statistical properties of observed data are fundamentally limited. Revisiting this limitation from a causal viewpoint, we develop a more versatile conceptual framework, causal fairness criteria, and the first algorithms to achieve them. We also provide tools to analyze how sensitive a believed-to-be causally fair algorithm is to misspecifications of the causal graph. Second, we overcome the assumption that sensitive data is readily available in practice. To this end, we devise protocols based on secure multi-party computation to train, validate, and contest fair decision algorithms without requiring users to disclose their sensitive data or decision makers to disclose their models. Finally, we accommodate the fact that outcome labels are often only observed when a certain decision has been made: to relax the traditional assumption that labels can always be recorded, we suggest a paradigm shift away from training predictive models and towards directly learning decisions. The main contribution of this thesis is the development of theoretically substantiated and practically feasible methods to move research on fair machine learning closer to real-world applications.
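To make the first point concrete, the sketch below computes two purely observational group fairness criteria from nothing but decisions, outcomes, and a binary sensitive attribute. It is a minimal illustration of what "statistical properties of observed data" means here, with toy data and function names of our own, not code from the thesis.

```python
# Hedged sketch: purely observational group fairness criteria, computed from
# decisions y_hat, outcomes y, and a binary sensitive attribute a (toy data).
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=1000)        # sensitive attribute
y = rng.integers(0, 2, size=1000)        # true outcome
y_hat = rng.integers(0, 2, size=1000)    # model decision

def demographic_parity_gap(y_hat, a):
    """|P(Y_hat = 1 | A = 0) - P(Y_hat = 1 | A = 1)|"""
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

def equal_opportunity_gap(y_hat, y, a):
    """|P(Y_hat = 1 | Y = 1, A = 0) - P(Y_hat = 1 | Y = 1, A = 1)|"""
    return abs(y_hat[(a == 0) & (y == 1)].mean() - y_hat[(a == 1) & (y == 1)].mean())

print("demographic parity gap:", demographic_parity_gap(y_hat, a))
print("equal opportunity gap: ", equal_opportunity_gap(y_hat, y, a))
```

Because criteria of this form depend only on the observed joint distribution of (a, y, y_hat), they are exactly the kind of purely statistical conditions whose limitations the abstract revisits from a causal viewpoint.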


Query-free Black-box Adversarial Attacks on Graphs

arXiv.org Artificial Intelligence

Many graph-based machine learning models are known to be vulnerable to adversarial attacks, where even limited perturbations on input data can result in dramatic performance deterioration. Most existing works focus on moderate settings in which the attacker is either aware of the model structure and parameters (white-box), or able to send queries to fetch model information. In this paper, we propose a query-free black-box adversarial attack on graphs, in which the attacker has no knowledge of the target model and no query access to the model. With the mere observation of the graph topology, the proposed attack strategy flips a limited number of links to mislead the graph models. We prove that the impact of the flipped links on the target model can be quantified by spectral changes, and can thus be approximated using eigenvalue perturbation theory. Accordingly, we model the proposed attack strategy as an optimization problem, and adopt a greedy algorithm to select the links to be flipped. Due to its simplicity and scalability, the proposed attack is not only applicable to a variety of graph-based models, but can also be easily extended when different levels of knowledge are accessible. Extensive experiments demonstrate the effectiveness and efficiency of the proposed attack on various downstream tasks and on several different graph-based learning models.
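The following is a minimal sketch of the idea as we read it from the abstract, not the authors' implementation: estimate the effect of flipping a link on the adjacency matrix's leading eigenvalue via first-order eigenvalue perturbation, then greedily flip the links with the largest estimated impact. The function name, the choice of the leading eigenvalue as the spectral quantity, and the toy graph are our own assumptions.

```python
# Hedged sketch of a perturbation-guided greedy link-flip attack (illustrative only).
import numpy as np

def greedy_flip(A, budget):
    """A: symmetric 0/1 adjacency matrix; budget: number of links to flip."""
    A = A.astype(float).copy()
    flipped = []
    for _ in range(budget):
        evals, evecs = np.linalg.eigh(A)
        u = evecs[:, -1]                      # eigenvector of the largest eigenvalue
        best, best_score = None, -np.inf
        n = A.shape[0]
        for i in range(n):
            for j in range(i + 1, n):
                if (i, j) in flipped:
                    continue
                # Flipping (i, j) perturbs A by delta at (i, j) and (j, i); the
                # first-order eigenvalue change is u^T (dA) u = 2 * delta * u_i * u_j.
                delta = 1.0 - 2.0 * A[i, j]
                score = abs(2.0 * delta * u[i] * u[j])
                if score > best_score:
                    best, best_score = (i, j), score
        i, j = best
        A[i, j] = A[j, i] = 1.0 - A[i, j]     # apply the flip
        flipped.append((i, j))
    return A, flipped

# Toy usage on a small random graph.
rng = np.random.default_rng(0)
A = (rng.random((8, 8)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T
_, flips = greedy_flip(A, budget=3)
print("flipped links:", flips)
```

The greedy loop re-estimates the leading eigenpair after each flip, which keeps the first-order approximation local to the current graph; the exhaustive pair scan is only sensible for small toy graphs.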


Does Palantir See Too Much?

#artificialintelligence

On a bright Tuesday afternoon in Paris last fall, Alex Karp was doing tai chi in the Luxembourg Gardens. He wore blue Nike sweatpants, a blue polo shirt, orange socks, charcoal-gray sneakers and white-framed sunglasses with red accents that inevitably drew attention to his most distinctive feature, a tangle of salt-and-pepper hair rising skyward from his head. Under a canopy of chestnut trees, Karp executed a series of elegant tai chi and qigong moves, shifting the pebbles and dirt gently under his feet as he twisted and turned. A group of teenagers watched in amusement. After 10 minutes or so, Karp walked to a nearby bench, where one of his bodyguards had placed a cooler and what looked like an instrument case. The cooler held several bottles of the nonalcoholic German beer that Karp drinks (he would crack one open on the way out of the park). The case contained a wooden sword, which he needed for the next part of his routine. "I brought a real sword the last time I was here, but the police stopped me," he said matter-of-factly as he began slashing the air with the sword. Those gendarmes evidently didn't know that Karp, far from being a public menace, was the chief executive of an American company whose software has been deployed on behalf of public safety in France. The company, Palantir Technologies, is named after the seeing stones in J.R.R. Tolkien's "The Lord of the Rings." Its two primary software programs, Gotham and Foundry, gather and process vast quantities of data in order to identify connections, patterns and trends that might elude human analysts. The stated goal of all this "data integration" is to help organizations make better decisions, and many of Palantir's customers consider its technology to be transformative. Karp claims a loftier ambition, however. "We built our company to support the West," he says. To that end, Palantir says it does not do business in countries that it considers adversarial to the U.S. and its allies, namely China and Russia. In the company's early days, Palantir employees, invoking Tolkien, described their mission as "saving the Shire." The brainchild of Karp's friend and law-school classmate Peter Thiel, Palantir was founded in 2003. It was seeded in part by In-Q-Tel, the C.I.A.'s venture-capital arm, and the C.I.A. remains a client. Palantir's technology is rumored to have been used to track down Osama bin Laden -- a claim that has never been verified but one that has conferred an enduring mystique on the company. These days, Palantir is used for counterterrorism by a number of Western governments.


The code-breakers who led the rise of computing

Nature

"Most professional scientists aim to be the first to publish their findings, because it is through dissemination that the work realises its value." So wrote mathematician James Ellis in 1987. By contrast, he went on, "the fullest value of cryptography is realised by minimising the information available to potential adversaries." Ellis, like Alan Turing, and so many of the driving forces in the development of computers and the Internet, worked in government signals intelligence, or SIGINT. Today, this covers COMINT (harvested from communications such as phone calls) and ELINT (from electronic emissions, such as radar and other electromagnetic radiation).


Sentinel loads up with $1.35M in the deepfake detection arms race – TechCrunch

#artificialintelligence

Estonia-based Sentinel, which is developing a detection platform for identifying synthesized media (aka deepfakes), has closed a $1.35 million seed round from some seasoned angel investors -- including Jaan Tallinn (Skype), Taavet Hinrikus (TransferWise), Ragnar Sass & Martin Henk (Pipedrive) -- and Baltics early-stage VC firm United Angels VC. The challenge of building tools to detect deepfakes has been likened to an arms race -- most recently by tech giant Microsoft, which earlier this month launched a detector tool in the hopes of helping pick up disinformation aimed at November's U.S. election. "The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology," it warned, before suggesting there's still short-term value in trying to debunk malicious fakes with "advanced detection technologies." Sentinel co-founder and CEO Johannes Tammekänd agrees on the arms race point -- which is why Sentinel's approach to this "goal-post-shifting" problem entails offering multiple layers of defence, following a cybersecurity-style template. He says rival tools -- he mentions Microsoft's detector and another rival, Deeptrace, aka Sensity -- by contrast rely only on "one fancy neural network that tries to detect defects," as he puts it.