Query-free Black-box Adversarial Attacks on Graphs

arXiv.org Artificial Intelligence

Many graph-based machine learning models are known to be vulnerable to adversarial attacks, where even limited perturbations of the input data can result in dramatic performance deterioration. Most existing works focus on moderate settings in which the attacker is either aware of the model structure and parameters (white-box), or able to send queries to fetch model information. In this paper, we propose a query-free black-box adversarial attack on graphs, in which the attacker has no knowledge of the target model and no query access to it. With the mere observation of the graph topology, the proposed attack strategy flips a limited number of links to mislead the graph models. We prove that the impact of the flipped links on the target model can be quantified by spectral changes, and thus approximated using eigenvalue perturbation theory. Accordingly, we formulate the proposed attack strategy as an optimization problem, and adopt a greedy algorithm to select the links to be flipped. Due to its simplicity and scalability, the proposed model is not only generic across various graph-based models but can also be easily extended when different levels of knowledge are accessible. Extensive experiments demonstrate the effectiveness and efficiency of the proposed model on various downstream tasks, as well as on several different graph-based learning models.
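The core approximation lends itself to a compact prototype. The following is a minimal NumPy sketch of the idea, not the authors' code: flipping link (u, v) perturbs the adjacency matrix by +/-(e_u e_v^T + e_v e_u^T), so first-order eigenvalue perturbation gives delta_lambda_i ~= 2 * delta_a_uv * u_i[u] * u_i[v], and a greedy loop flips the links with the largest estimated spectral impact. The function names, the flip budget, and the choice to track the k largest-magnitude eigenvalues are illustrative assumptions.

```python
import numpy as np

def eigen_perturbation_scores(A, k=10):
    """Score each candidate link flip by its first-order effect on the
    k largest-magnitude eigenvalues of the (symmetric) adjacency matrix,
    using delta_lambda_i ~= 2 * delta_a_uv * u_i[u] * u_i[v]."""
    n = A.shape[0]
    vals, vecs = np.linalg.eigh(A)
    top = np.argsort(-np.abs(vals))[:k]    # indices of the tracked spectrum
    U = vecs[:, top]                       # corresponding eigenvectors (n x k)
    scores = {}
    for u in range(n):                     # O(n^2) enumeration, fine for a sketch
        for v in range(u + 1, n):
            delta = 1.0 - 2.0 * A[u, v]    # +1 adds the link, -1 removes it
            scores[(u, v)] = np.sum(np.abs(2.0 * delta * U[u, :] * U[v, :]))
    return scores

def greedy_flip(A, budget=5, k=10):
    """Greedily flip `budget` links with the largest estimated spectral
    impact, re-scoring after each flip to keep the approximation local."""
    A = A.copy()
    flipped = []
    for _ in range(budget):
        scores = eigen_perturbation_scores(A, k)
        u, v = max(scores, key=scores.get)
        A[u, v] = A[v, u] = 1 - A[u, v]    # flip the chosen link
        flipped.append((u, v))
    return A, flipped
```

Re-scoring after each flip costs one eigendecomposition per flip; a cheaper variant would score once and take the top-`budget` links at the price of a looser approximation.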


Does Palantir See Too Much?

#artificialintelligence

On a bright Tuesday afternoon in Paris last fall, Alex Karp was doing tai chi in the Luxembourg Gardens. He wore blue Nike sweatpants, a blue polo shirt, orange socks, charcoal-gray sneakers and white-framed sunglasses with red accents that inevitably drew attention to his most distinctive feature, a tangle of salt-and-pepper hair rising skyward from his head. Under a canopy of chestnut trees, Karp executed a series of elegant tai chi and qigong moves, shifting the pebbles and dirt gently under his feet as he twisted and turned. A group of teenagers watched in amusement. After 10 minutes or so, Karp walked to a nearby bench, where one of his bodyguards had placed a cooler and what looked like an instrument case. The cooler held several bottles of the nonalcoholic German beer that Karp drinks (he would crack one open on the way out of the park). The case contained a wooden sword, which he needed for the next part of his routine. "I brought a real sword the last time I was here, but the police stopped me," he said matter-of-factly as he began slashing the air with the sword.

Those gendarmes evidently didn't know that Karp, far from being a public menace, was the chief executive of an American company whose software has been deployed on behalf of public safety in France. The company, Palantir Technologies, is named after the seeing stones in J.R.R. Tolkien's "The Lord of the Rings." Its two primary software programs, Gotham and Foundry, gather and process vast quantities of data in order to identify connections, patterns and trends that might elude human analysts. The stated goal of all this "data integration" is to help organizations make better decisions, and many of Palantir's customers consider its technology to be transformative. Karp claims a loftier ambition, however. "We built our company to support the West," he says. To that end, Palantir says it does not do business in countries that it considers adversarial to the U.S. and its allies, namely China and Russia. In the company's early days, Palantir employees, invoking Tolkien, described their mission as "saving the shire."

The brainchild of Karp's friend and law-school classmate Peter Thiel, Palantir was founded in 2003. It was seeded in part by In-Q-Tel, the C.I.A.'s venture-capital arm, and the C.I.A. remains a client. Palantir's technology is rumored to have been used to track down Osama bin Laden -- a claim that has never been verified but one that has conferred an enduring mystique on the company. These days, Palantir is used for counterterrorism by a number of Western governments.


The code-breakers who led the rise of computing

Nature

"Most professional scientists aim to be the first to publish their findings, because it is through dissemination that the work realises its value." So wrote mathematician James Ellis in 1987. By contrast, he went on, "the fullest value of cryptography is realised by minimising the information available to potential adversaries." Ellis, like Alan Turing, and so many of the driving forces in the development of computers and the Internet, worked in government signals intelligence, or SIGINT. Today, this covers COMINT (harvested from communications such as phone calls) and ELINT (from electronic emissions, such as radar and other electromagnetic radiation).


Sentinel loads up with $1.35M in the deepfake detection arms race – TechCrunch

#artificialintelligence

Estonia-based Sentinel, which is developing a detection platform for identifying synthesized media (aka deepfakes), has closed a $1.35 million seed round from some seasoned angel investors -- including Jaan Tallinn (Skype), Taavet Hinrikus (TransferWise), Ragnar Sass & Martin Henk (Pipedrive) -- and Baltics early-stage VC firm, United Angels VC. The challenge of building tools to detect deepfakes has been likened to an arms race -- most recently by tech giant Microsoft, which earlier this month launched a detector tool in the hopes of helping pick up disinformation aimed at November's U.S. election. "The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology," it warned, before suggesting there's still short-term value in trying to debunk malicious fakes with "advanced detection technologies." Sentinel co-founder and CEO Johannes Tammekänd agrees on the arms race point -- which is why its approach to this "goal-post-shifting" problem entails offering multiple layers of defence, following a cybersecurity-style template. He says rival tools -- mentioning Microsoft's detector and another rival, Deeptrace, aka Sensity -- are, by contrast, only relying on "one fancy neural network that tries to detect defects," as he puts it.


'Video Authenticator' is Microsoft's answer to Deepfake detection

#artificialintelligence

Deepfakes are a class of synthetic media generated by AI and represent another dark side of technology. This form of artificial intelligence stole the headlines last year when a LinkedIn user by the name of Katie Jones appeared on the platform and started connecting with the who's who of the political elite in Washington, DC. It was alarming how deep learning created a lifelike image of a person and then penetrated social media, spreading misinformation. With the U.S. presidential election looming, lawmakers in the country are worried about how deepfakes can greatly jeopardize the transparency of the democratic process. Many of the leading tech companies have been asked for help and are working on developing tools that can detect this fake synthetic media. Global software giant Microsoft has now released two new tools that can spot whether a given piece of media has been artificially manipulated.


Suspect AI: Vibraimage, Emotion Recognition Technology, and Algorithmic Opacity

arXiv.org Artificial Intelligence

Vibraimage is a digital system that quantifies a subject's mental and emotional state by analysing video footage of the movements of their head. Vibraimage is used by police, nuclear power station operators, airport security and psychiatrists in Russia, China, Japan and South Korea, and has been deployed at an Olympic Games, FIFA World Cup, and G7 Summit. Yet there is no reliable evidence that the technology is actually effective; indeed, many claims made about its effects seem unprovable. What exactly does vibraimage measure, and how has it acquired the power to penetrate the highest-profile and most sensitive security infrastructure across Russia and Asia? I first trace the development of the emotion recognition industry, before examining attempts by vibraimage's developers and affiliates to legitimate the technology scientifically, concluding that the disciplining power and corporate value of vibraimage is generated through its very opacity, in contrast to increasing demands across the social sciences for transparency. I propose the term 'suspect AI' to describe the growing number of systems like vibraimage that algorithmically classify suspects/non-suspects, yet are themselves deeply suspect. Popularising this term may help resist such technologies' reductivist approaches to 'reading' -- and exerting authority over -- emotion, intentionality and agency.


Precision Health Data: Requirements, Challenges and Existing Techniques for Data Security and Privacy

arXiv.org Artificial Intelligence

Precision health leverages information from various sources, including omics, lifestyle, environment, social media, medical records, and medical insurance claims, to enable personalized care, to prevent and predict illness, and to deliver precise treatments. It extensively uses sensing technologies (e.g., electronic health monitoring devices), computation (e.g., machine learning), and communication (e.g., interaction between health data centers). Because health data contain sensitive private information, including the identities of patients and carers and the medical conditions of patients, proper care is required at all times. Leakage of this private information can affect personal life, leading to bullying, higher insurance premiums, and loss of employment on account of one's medical history. Thus, the security and privacy of, and trust in, this information are of utmost importance. Moreover, government legislation and ethics committees demand the security and privacy of healthcare data. Hence, in light of the security, privacy, ethical, and regulatory requirements on precision health data, finding the best methods and techniques for utilizing health data is essential to precision health. To that end, this paper first explores the regulations and ethical guidelines around the world, along with domain-specific needs, then presents the requirements and investigates the associated challenges. Second, it investigates secure and privacy-preserving machine learning methods suitable for computing over precision health data, along with their usage in relevant health projects. Finally, it illustrates the best available techniques for precision health data security and privacy, with a conceptual system model that enables compliance, ethics clearance, consent management, medical innovation, and development in the health domain.
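To give a concrete flavour of the privacy-preserving computation such surveys cover, below is a minimal sketch, not taken from the paper, of the Laplace mechanism from differential privacy applied to a count query over patient records; the epsilon value and toy dataset are illustrative assumptions.

```python
import numpy as np

def laplace_count(records, predicate, epsilon=1.0):
    """Differentially private count: the true number of records matching
    `predicate`, plus Laplace noise calibrated to the query's sensitivity
    of 1 (adding or removing one record changes the count by at most 1)."""
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative usage: count patients over 60 without exposing any individual.
patients = [{"age": 72}, {"age": 45}, {"age": 63}, {"age": 58}]
print(laplace_count(patients, lambda r: r["age"] > 60, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; the same calibration idea underlies many of the private machine learning methods discussed in this literature.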


COVI White Paper

arXiv.org Artificial Intelligence

The SARS-CoV-2 (Covid-19) pandemic has caused significant strain on public health institutions around the world. Contact tracing is an essential tool to change the course of the Covid-19 pandemic. Manual contact tracing of Covid-19 cases has significant challenges that limit the ability of public health authorities to minimize community infections. Personalized peer-to-peer contact tracing through the use of mobile apps has the potential to shift the paradigm. Some countries have deployed centralized tracking systems, but more privacy-protecting decentralized systems offer much of the same benefit without concentrating data in the hands of a state authority or for-profit corporations. Machine learning methods can circumvent some of the limitations of standard digital tracing by incorporating many clues and their uncertainty into a more graded and precise estimation of infection risk. The estimated risk can provide early risk awareness, personalized recommendations and relevant information to the user. Finally, non-identifying risk data can inform epidemiological models trained jointly with the machine learning predictor. These models can provide statistical evidence for the importance of factors involved in disease transmission. They can also be used to monitor, evaluate and optimize health policy and (de)confinement scenarios according to medical and economic productivity indicators. However, such a strategy based on mobile apps and machine learning should proactively mitigate potential ethical and privacy risks, which could have substantial impacts on society (not only impacts on health but also impacts such as stigmatization and abuse of personal data). Here, we present an overview of the rationale, design, ethical considerations and privacy strategy of `COVI,' a Covid-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
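COVI's actual risk predictor is a learned model; purely as a toy illustration of what a "graded" risk estimate means, here is a sketch in which each contact event contributes an independent transmission probability and the overall risk is one minus the probability of escaping every contact. The functional forms and constants are invented for illustration, not taken from the paper.

```python
import math

def contact_risk(duration_min, distance_m, contact_infectiousness):
    """Toy per-contact transmission probability: grows with duration and
    the contact's estimated infectiousness, decays with distance."""
    exposure = duration_min * contact_infectiousness / (1.0 + distance_m ** 2)
    return 1.0 - math.exp(-0.01 * exposure)   # squashed into [0, 1)

def infection_risk(contacts):
    """Treat contact events as independent: the chance of escaping
    infection is the product of escaping each individual contact."""
    escape = 1.0
    for c in contacts:
        escape *= 1.0 - contact_risk(**c)
    return 1.0 - escape

# Illustrative usage: one long close encounter and one brief distant one.
print(infection_risk([
    {"duration_min": 30, "distance_m": 1.0, "contact_infectiousness": 0.6},
    {"duration_min": 5, "distance_m": 3.0, "contact_infectiousness": 0.2},
]))
```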


GPT-3 Creative Fiction

#artificialintelligence

"What if I told a story here, how would that story start?" Thus, the summarization prompt: "My second grader asked me what this passage means: …" When a given prompt isn't working and GPT-3 keeps pivoting into other modes of completion, that may mean one hasn't constrained it enough by imitating a correct output, and one needs to go further: writing the first few words or sentence of the target output may be necessary.
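That tactic is easy to operationalize. Here is a minimal sketch against the 2020-era OpenAI Python library's Completions API (the library has since changed); the engine name, sampling parameters, placeholder passage, and exact prompt wording are illustrative assumptions, not the essay's code.

```python
import openai  # pip install openai (legacy <1.0 interface); needs OPENAI_API_KEY set

passage = "..."  # placeholder for the text to be explained

# Frame the task as a dialogue GPT-3 will want to continue, then seed the
# first words of the target output ("It means that") so the model cannot
# pivot into some other mode of completion.
prompt = (
    'My second grader asked me what this passage means:\n\n'
    f'"{passage}"\n\n'
    'I rephrased it for him, in plain language a second grader can understand:\n\n'
    '"It means that'
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base model
    prompt=prompt,
    max_tokens=150,
    temperature=0.7,
    stop='"',           # stop when the quoted rephrasing closes
)
print("It means that" + response.choices[0].text)
```

The crucial move, per the excerpt, is the trailing open quote and the seeded "It means that": the completion has nowhere natural to go but the summary itself.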


How WWII Was Won, and Why CS Students Feel Unappreciated

Communications of the ACM

Observances of the 75th anniversary of the end of World War II in Europe (May 8, 1945) included remembrances of such searing events as the struggle on Omaha Beach on D-Day, the Battle of the Bulge, and at least some recognition of the enormous contribution made by the Russian people to the defeat of Fascism. Yet in all this, I suspect the role of the first "high-performance computing" capabilities of the Allies--known as Ultra in Britain, Magic in the U.S.--will receive too little attention. The truth of the matter is that the ability to hack into Axis communications made possible many Allied successes in the field, at sea, and in the air. Alan Turing and other "boffins" at Britain's Bletchley Park facility built the machine--a much-improved version of a prototype developed by the Poles in the interwar period--that had sufficient computing power to break the German Enigma encoding system developed by Arthur Scherbius. The Enigma machine was a typewriter-like device with three rotors, each with an alphabet of its own, so each keystroke could create 17,576 possible meanings (26 x 26 x 26).
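The 17,576 figure is simply the number of joint rotor positions, 26^3. As a toy illustration, and emphatically not a faithful Enigma, which also had wired rotor permutations, a reflector, a plugboard, and an irregular double-stepping mechanism, here is a sketch of an odometer-style three-rotor shift cipher:

```python
import string

N = 26
print(N ** 3)  # 17,576 joint positions of three 26-letter rotors

def step(positions):
    """Advance the rotors odometer-style: the fast rotor always moves;
    each slower rotor moves when the previous one completes a revolution."""
    positions[0] = (positions[0] + 1) % N
    if positions[0] == 0:
        positions[1] = (positions[1] + 1) % N
        if positions[1] == 0:
            positions[2] = (positions[2] + 1) % N

def encipher_letter(ch, positions):
    """Toy substitution: shift the letter by the sum of rotor offsets,
    so the cipher alphabet changes with every keystroke."""
    i = string.ascii_uppercase.index(ch)
    return string.ascii_uppercase[(i + sum(positions)) % N]

positions = [7, 4, 21]  # an arbitrary message-key setting
ciphertext = ""
for ch in "ATTACKATDAWN":
    ciphertext += encipher_letter(ch, positions)
    step(positions)
print(ciphertext)
```

Because the rotors step with every keystroke, identical plaintext letters encipher differently, which is exactly what made Enigma traffic resistant to simple frequency analysis and motivated Bletchley Park's machine-assisted attack.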