Project Force: AI and the military – a friend or foe?

#artificialintelligence

Artificial Intelligence is already part of our lives, and as the technology matures it will play a key role in future wars. The accuracy and precision of today's weapons are steadily forcing contemporary battlefields to empty of human combatants. As more and more sensors fill the battlespace, sending vast amounts of data back to analysts, humans struggle to make sense of the mountain of information gathered. This is where artificial intelligence (AI) comes in – learning algorithms that thrive off big data; in fact, the more data these systems analyse, the more accurate they can be. In short, AI is the ability for a system to "think" in a limited way, working specifically on problems normally associated with human intelligence, such as pattern and speech recognition, translation and decision-making.



TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing

arXiv.org Artificial Intelligence

Various robustness evaluation methodologies from different perspectives have been proposed for different natural language processing (NLP) tasks. These methods have often focused on either universal or task-specific generalization capabilities. In this work, we propose a multilingual robustness evaluation platform for NLP tasks (TextFlint) that incorporates universal text transformation, task-specific transformation, adversarial attack, subpopulation, and their combinations to provide comprehensive robustness analysis. TextFlint enables practitioners to automatically evaluate their models from all aspects or to customize their evaluations as desired with just a few lines of code. To guarantee user acceptability, all the text transformations are linguistically based, and we provide a human evaluation for each one. TextFlint generates complete analytical reports as well as targeted augmented data to address the shortcomings of the model's robustness. To validate TextFlint's utility, we performed large-scale empirical evaluations (over 67,000 evaluations) on state-of-the-art deep learning models, classic supervised methods, and real-world systems. Almost all models showed significant performance degradation, including a decline of more than 50% in BERT's prediction accuracy on tasks such as aspect-level sentiment classification, named entity recognition, and natural language inference. We therefore call for robustness to be included in model evaluation, so as to promote the healthy development of NLP technology.
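
The workflow described above (transform the evaluation data, re-score the model, and report the drop) can be illustrated with a short, library-agnostic sketch. The helper names and the character-swap transformation below are assumptions for illustration only, not TextFlint's actual API.

```python
import random
from typing import Callable, Dict, List, Tuple

def swap_adjacent_chars(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Illustrative 'universal' transformation: randomly swap adjacent letters."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def accuracy(model: Callable[[str], str], data: List[Tuple[str, str]]) -> float:
    return sum(model(x) == y for x, y in data) / len(data)

def robustness_report(model, data, transforms: Dict[str, Callable[[str], str]]):
    """Re-score the model on each transformed copy of the data and report the drop."""
    report = {"original": accuracy(model, data)}
    for name, transform in transforms.items():
        transformed = [(transform(x), y) for x, y in data]
        report[name] = accuracy(model, transformed)
    return report

# Toy usage: a keyword 'classifier' stands in for a real NLP model.
toy_model = lambda x: "pos" if "good" in x.lower() else "neg"
data = [("The movie was good.", "pos"), ("A dull, bad film.", "neg")]
print(robustness_report(toy_model, data, {"char_swap": swap_adjacent_chars}))
```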


Syntiant – Always-On Voice AI Chips at The Edge

#artificialintelligence

The ability for marketers to gauge intent these days is spooky. Performing a simple Google search for "hotels in Angeles City" while sitting in a cafe in Manila will suddenly surface "cheapest transport from Manila to Angeles City" ads in your Facebook stream. It knows you'll need cheap transport to get there so you can spend your money on other things. What you may find even more surprising is when you're talking to a mate on the phone about the carnal pleasures of Angeles City and suddenly STD test ads start appearing in your Twitter feed. Is your phone really listening to what you're saying?


"TOP READS OF THE WEEK" (for week ending 12 March)

#artificialintelligence

The latest top reads in banking, fintech, payments, cybersecurity, AI, IoT and risk management. In this week's selection:

The Pandemic
- The COVID-19 pandemic – one year in

Banks & Credit Unions
- Arizona group aims to jump-start European-style open banking in U.S.
- Wells Fargo Ditches Abbot Downing Name for Ultra-Rich Clients

Fintech
- Mood swing – China's government is cracking down on fintech
- What to expect from the fintech industry in 2021

Payments
- BNP Paribas rolls out A2A instant payments for e-commerce merchants
- New car tipping app arrives for Sonic carhops
- US, UK Firms Prioritize Innovation To Speed X-Border Payment Flows

Cybersecurity
- Fintech Cybersecurity Threats You Should Know About
- It may be worth paying, but online privacy comes at a price
- Top 10 types of information security threats for IT teams
- Watch Out! New Android Banking Trojan Steals From 112 Financial Apps

Artificial Intelligence
- Top 10 predictions of how AI is going to improve cybersecurity in 2021
- Top 10 Artificial Intelligence Technologies Making a Breakthrough in 2021
- Deep Learning vs Machine Learning: What Your Firm Needs to Know


Attack as Defense: Characterizing Adversarial Examples using Robustness

arXiv.org Artificial Intelligence

As a new programming paradigm, deep learning has expanded its application to many real-world problems. At the same time, deep learning based software has been found to be vulnerable to adversarial attacks. Though various defense mechanisms have been proposed to improve the robustness of deep learning software, many of them are ineffective against adaptive attacks. In this work, we propose a novel characterization to distinguish adversarial examples from benign ones, based on the observation that adversarial examples are significantly less robust than benign ones. As existing robustness measurements do not scale to large networks, we propose a novel defense framework, named attack as defense (A2D), to detect adversarial examples by effectively evaluating an example's robustness. A2D uses the cost of attacking an input as its robustness measure and flags less robust examples as adversarial, since less robust examples are easier to attack. Extensive experimental results on MNIST, CIFAR10 and ImageNet show that A2D is more effective than recent promising approaches. We also evaluate our defense against potential adaptive attacks and show that A2D is effective in defending against carefully designed adaptive attacks, e.g., the attack success rate drops to 0% on CIFAR10.
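
The detection rule described here can be sketched as follows: attack an input toward flipping its current label, count the iterations the attack needs, and flag cheap-to-flip inputs as adversarial. The PyTorch sketch below is a minimal illustration of that idea, not the authors' exact implementation; the PGD parameters, the step-count cost measure, and the detection threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def attack_cost(model, x, eps=0.03, step=0.007, max_iters=50):
    """Count the PGD iterations needed to flip the model's prediction on x
    (a single-example batch). A small count means x is easy to attack,
    i.e. not very robust."""
    x = x.clone().detach()
    y0 = model(x).argmax(dim=1)                  # current prediction; the attack tries to flip it
    x_adv = x.clone().detach().requires_grad_(True)
    for i in range(1, max_iters + 1):
        loss = F.cross_entropy(model(x_adv), y0)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                     # ascend the loss on the current label
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # stay inside the L-infinity ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # stay inside the valid pixel range
        x_adv = x_adv.requires_grad_(True)
        if model(x_adv).argmax(dim=1).item() != y0.item():
            return i                             # prediction flipped after i steps: low attack cost
    return max_iters + 1                         # never flipped within the budget: high attack cost

def is_adversarial(model, x, threshold=5):
    """Flag inputs whose attack cost falls below an (illustrative) threshold."""
    return attack_cost(model, x) <= threshold
```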


Multi-Task Federated Reinforcement Learning with Adversaries

arXiv.org Artificial Intelligence

Reinforcement learning algorithms, like other machine learning algorithms, face a serious threat from adversaries. An adversary can manipulate the learning process, resulting in non-optimal policies. In this paper, we analyze multi-task federated reinforcement learning algorithms, in which multiple collaborative agents in different environments try to maximize the sum of discounted returns, in the presence of adversarial agents. We argue that common attack methods are not guaranteed to carry out a successful attack on multi-task federated reinforcement learning, and we propose an adaptive attack method with better attack performance. Furthermore, we modify the conventional federated reinforcement learning algorithm to address adversaries in a way that works equally well with and without them. Experiments on small to mid-size reinforcement learning problems show that the proposed attack method outperforms other general attack methods and that the proposed modification to the federated reinforcement learning algorithm achieves near-optimal policies in the presence of adversarial agents.
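
One way to picture the kind of modification described above is a robust server-side aggregation step: instead of averaging every agent's update, the server combines them with a statistic that a minority of adversarial agents cannot dominate. The sketch below uses a coordinate-wise median, a standard Byzantine-robust choice picked purely for illustration; the paper's actual modification may differ.

```python
import numpy as np

def federated_round(local_updates, robust=True):
    """Aggregate per-agent parameter updates for one federated round.

    local_updates: list of 1-D numpy arrays, one flattened policy update per
    agent (some of which may come from adversarial agents). With robust=True,
    a coordinate-wise median replaces the plain mean, which limits how far a
    minority of adversarial updates can pull the global policy.
    """
    stacked = np.stack(local_updates)          # shape: (num_agents, num_params)
    if robust:
        return np.median(stacked, axis=0)      # Byzantine-robust aggregation (illustrative)
    return stacked.mean(axis=0)                # conventional federated averaging

# Toy usage: three honest agents and one adversary sending a scaled, flipped update.
honest = [np.array([0.10, -0.20, 0.05]) + 0.01 * i for i in range(3)]
adversary = [-10.0 * honest[0]]
global_update = federated_round(honest + adversary, robust=True)
print(global_update)   # stays close to the honest updates despite the adversary
```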


Artificial Intelligence And Online Privacy: Blessing And A Curse - Liwaiwai

#artificialintelligence

Artificial Intelligence (AI) is a beautiful piece of technology made to seamlessly augment our everyday experience. It is widely utilized in everything from marketing to traffic-light management in cities like Pittsburgh. However, swords have two edges, and AI is no different. There are a fair number of upsides as well as downsides that follow such technological advancements. One way or another, the technology is moving too quickly, while education about the risks and the safeguards in place is falling behind for the vast majority of the population. The whole situation is as much of a blessing for humankind as it is a curse.


Opinion: Artificial Intelligence's Military Risks, Potential

#artificialintelligence

Former Secretary of the Navy J. William Middendorf II, of Little Compton, lays out the threat posed by the Chinese Communist Party in his recent book, "The Great Nightfall." With the emerging priority of artificial intelligence (AI), China is shifting away from a strategy of neutralizing or destroying an enemy's conventional military assets -- its planes, ships and army units. AI strategy is now evolving into dominating what are termed adversaries' "systems-of-systems" -- the combinations of all their intelligence and conventional military assets. What China would attempt first is to disable all of its adversaries' information networks that bind their military systems and assets. It would destroy individual elements of these now-disaggregated forces, probably with missiles and naval strikes.


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.