The UK's top intelligence and security body, GCHQ, is betting big on artificial intelligence: the organization has revealed how it intends to use AI to boost national security. In a new paper titled "Pioneering a New National Security," GCHQ's analysts went to some lengths to explain why AI holds the key to better protection of the nation. The volumes of data that the organization deals with, argued GCHQ, place security agencies and law enforcement bodies under huge pressure; AI could ease that burden, improving not only the speed but also the quality of experts' decision-making. "AI, like so many technologies, offers great promise for society, prosperity and security. Its impact on GCHQ is equally profound," said Jeremy Fleming, the director of GCHQ.
GCHQ's director has said artificial intelligence software could have a profound impact on the way it operates, from spotting otherwise missed clues to thwart terror plots to better identifying the sources of fake news and computer viruses. Jeremy Fleming's remarks came as the spy agency prepared to publish a rare paper on Thursday defending its use of machine-learning technology to placate critics concerned about its bulk surveillance activities. "AI, like so many technologies, offers great promise for society, prosperity and security. Its impact on GCHQ is equally profound," he said. "While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ." AI is considered controversial because it relies on computer algorithms to make decisions based on patterns found in data.
Speaking on the record to an invited audience at RUSI on 21 January 2019, GCHQ Deputy Director for Strategic Policy Paul Killworth described how Artificial Intelligence (AI) and Machine Learning (ML) have the potential to improve the effectiveness and efficiency of various intelligence functions. However, these capabilities bring with them complex legal and ethical considerations, and there is a strong public expectation that the UK's intelligence agencies will act in a way that protects citizens' rights and freedoms. The national security community has expressed a desire to engage in a more open dialogue on these issues, with Killworth stressing that 'it is absolutely essential that we have the debates around AI and machine learning in the national security space that will deliver the answers and approaches that will give us public consent'. However, it may prove difficult to provide sufficient reassurances to the public concerning national security uses of AI, due to understandably high levels of sensitivity. Public acceptance of intelligence agencies' use of technology is driven by two conflicting sentiments.
Artificial intelligence will change the world. Because so many people and companies believe this, AI and the entire technological ecosystem in which it functions are highly valuable to private-sector organizations and nation-states. That means that nations will try to identify, steal, corrupt, or otherwise counteract the AI and related assets of others, and will use AI against each other in pursuit of their own national interests. And that presents the United States and its allies with a classic counterintelligence problem in a novel and high-stakes context: How do we protect a valuable national asset against a range of threats from hostile foreign actors, and how do we protect ourselves against the threat from AI in the hands of adversaries? In the broad and diverse discussion of artificial intelligence in the global technological and economic infrastructure of the future, this question has received remarkably little attention. In this post and others to follow, I will endeavor to explore some of the counterintelligence risks and problems presented by AI and the AI ecosystem. I'll first talk about the general problem of AI and counterintelligence and then, in later posts, dive into some of the specific areas that cause me the greatest concern in this sphere. Technological advancements often change society.
The UK has committed to a new approach to its cyber capabilities, aiming to better detect, disrupt and deter adversaries. In what has been billed as the largest security and foreign policy strategy revamp since the Cold War, the UK government has outlined new defense priorities with, at their heart, the imperative to boost the use of new technologies to safeguard the country. Prime Minister Boris Johnson unveiled the integrated review this week; it has been in the making for over a year and will guide future spending decisions. Focusing on foreign policy, defense and security, the review sets goals for the UK through 2025, and underpinning many of its targets is the objective of modernizing the country's armed forces. Johnson pledged to pump more money into defense, with a £24 billion ($33.4 billion) multi-year settlement that will represent a sizeable chunk of the UK's GDP.