AI could have profound effect on way GCHQ works, says director

The Guardian

GCHQ's director has said artificial intelligence software could have a profound impact on the way the agency operates, from spotting otherwise missed clues that thwart terror plots to better identifying the sources of fake news and computer viruses. Jeremy Fleming's remarks came as the spy agency prepared to publish a rare paper on Thursday defending its use of machine-learning technology, in an effort to placate critics concerned about its bulk surveillance activities. "AI, like so many technologies, offers great promise for society, prosperity and security. Its impact on GCHQ is equally profound," he said. "While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ." AI is considered controversial because it relies on computer algorithms to make decisions based on patterns found in data.


Artificial intelligence will be used to power cyber attacks, warn security experts | ZDNet

#artificialintelligence

Intelligence and espionage services need to embrace artificial intelligence (AI) in order to protect national security as cyber criminals and hostile nation-states increasingly look to use the technology to launch attacks. The UK's intelligence and security agency GCHQ commissioned a study into the use of AI for national security purposes. It warns that while the emergence of AI creates new opportunities for boosting national security and keeping members of the public safe, it also presents potential new challenges, including the risk of the same technology being deployed by attackers. "Malicious actors will undoubtedly seek to use AI to attack the UK, and it is likely that the most capable hostile state actors, which are not bound by an equivalent legal framework, are developing or have developed offensive AI-enabled capabilities," says the report from the Royal United Services Institute for Defence and Security Studies (RUSI). "In time, other threat actors, including cybercriminal groups, will also be able to take advantage of these same AI innovations."


How artificial intelligence is transforming the world

#artificialintelligence

Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it.[1] A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and to demonstrate how AI is already altering the world and raising important questions for society, the economy, and governance. In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values.[2]

Although there is no uniformly agreed-upon definition, AI generally is thought to refer to "machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention."[3] According to researchers Shubhendu and Vijay, these software systems "make decisions which normally require [a] human level of expertise" and help people anticipate problems or deal with issues as they come up.[4] As such, they operate in an intentional, intelligent, and adaptive manner. Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.
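To make that last description concrete, below is a minimal, purely illustrative sketch of an algorithm that learns a decision rule from patterns in data and then acts on a new input as it arrives. The spam-screening framing, the three input signals, and the use of Python with scikit-learn are all assumptions made for illustration; the overview above does not prescribe any particular task, data, or library.

```python
# A minimal sketch of "decisions based on patterns found in data": combine several
# signals, learn a rule from past examples, and act on a new observation.
# The task, features, and library choice are illustrative assumptions only.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical observations: each row merges three signals (message length,
# number of links, sender reputation score), mirroring the idea of combining
# information from different sources before deciding.
X_train = np.array([
    [120, 0, 0.90],   # short, no links, reputable sender -> legitimate
    [ 80, 1, 0.80],
    [900, 7, 0.10],   # long, many links, poor reputation -> suspicious
    [650, 5, 0.20],
    [200, 1, 0.70],
    [850, 9, 0.05],
])
y_train = np.array([0, 0, 1, 1, 0, 1])  # 0 = legitimate, 1 = suspicious

# The "pattern" is learned from labeled examples rather than hand-coded.
model = LogisticRegression()
model.fit(X_train, y_train)

# A new observation arrives in real time; the system analyzes it and acts
# on the resulting insight, without a predetermined rule for this exact input.
incoming = np.array([[700, 6, 0.15]])
p_suspicious = model.predict_proba(incoming)[0, 1]

if p_suspicious > 0.5:
    print(f"Flag for review (p={p_suspicious:.2f})")
else:
    print(f"Deliver normally (p={p_suspicious:.2f})")
```

The point of the sketch is the contrast drawn in the passage: the decision boundary is derived from data, not from a fixed mechanical response, so the system can respond to inputs it was never explicitly programmed to handle.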