New EU regulations on AI seek to ban mass, indiscriminate surveillance. For many, that is the good news. The 'not so good' news is that some consider the proposed prohibitions too vague, with serious loopholes. Most recently, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) called for a ban on the use of AI for the automated recognition of human features in "publicly accessible spaces", as well as on other uses that might lead to "unfair discrimination". Broadly speaking, this reflects the response to the EU's attempt to set a global standard for how tech is regulated.
Europe is lagging behind not only the US and Japan but also China in terms of technological innovation. None of the world's 15 largest digital firms is European. It is beyond question that Europe produces bright minds with amazing ideas and an entrepreneurial mindset. The problem is simple: European companies do not make it beyond the start-up phase, and when they do, their business is believed to be better off outside Europe. Skype, bought up by Microsoft, is one famous example.
Hacking events have increasingly been in the news this year, as a range of serious ransomware and supply-chain hacks have wreaked havoc on businesses and infrastructure. The latest (as of July 2021) is a supply-chain ransomware attack against Miami-based software firm Kaseya, affecting 1,500 of its customers, with the hackers (threat actors) demanding $70 million in cryptocurrency to release the data. According to the World Economic Forum, cyber-attacks now stand alongside climate change and natural disasters as one of the most pressing threats to humanity. No doubt ways will eventually be found to detect and pre-empt these latest styles of attack. The cybersecurity industry is defined by continual, if largely gradual, innovation: as new threats emerge, so does technology that protects against, detects and responds to the attacks. This cat-and-mouse dynamic has been a fundamental trait of the industry to date: a permanently iterating relationship that supercharges the development of new technologies on both sides, where even a small edge over adversaries can pay dividends (or ransoms).
This week the Chair of the European Parliament's committee on AI expressed concerns about the enforcement of the European Commission's proposed AI rules, which he said could create national fragmentation similar to that seen with the GDPR. So what are the issues involved, what is the proposed new EU law, and how does the GDPR already regulate AI? At the start of 2020, 42% of companies in the EU said they used technologies that depend on AI, and a further 18% said they planned to use AI in the future (European Enterprise Survey – FRA, 2020). This is clearly an area that is justifiably generating considerable activity and interest from both industry and the regulators. It is important to note, however, that the technologies currently available involve varying levels of complexity, automation and human review and that, despite some companies' optimism about their AI capabilities, many applications in use remain at the development stage.
Microsoft recently acknowledged that Russian hackers had successfully cyberattacked it. If hackers can penetrate Microsoft's internal systems, what are the chances your company will suffer the consequences of a future hack? What the Russians have done is very bad, but it is only one example of the cyber threats we all face. The cyber-threat world is an arms race: hackers are starting to use AI, and the only way to defend successfully against future threats is for your company to use AI as well.
One is how to design, develop and validate AI technologies and systems responsibly (i.e., Responsible AI), so that we can adequately address ethical and legal concerns, especially those pertaining to human values. The other is the use of AI itself as a means to achieve those Responsible AI ends. In this chapter, we focus on the former issue. In recent years, AI has continued to demonstrate its positive impact on society, though sometimes with ethically questionable consequences. Not doing AI responsibly is starting to have a devastating effect on humanity, not only on data protection, privacy and bias but also on labour rights and climate justice. Building and maintaining public trust in AI has been identified as the key to successful and sustainable innovation.
Today, the European Union Agency for Cybersecurity (ENISA) released its Artificial Intelligence Threat Landscape Report, unveiling the major cybersecurity challenges facing the AI ecosystem. ENISA's study takes a methodological approach to mapping the key players and threats in AI. The report follows up on the priorities defined in the European Commission's 2020 AI White Paper. The ENISA Ad-Hoc Working Group on Artificial Intelligence Cybersecurity, with members from EU institutions, academia and industry, provided input and supported the drafting of the report. The benefits of this emerging technology are significant, but so are the concerns, such as potential new avenues of manipulation and new attack methods.
Artificial Intelligence (AI) is one of the main weapons with which companies, including medium-sized corporations, can successfully combat numerous cyber threats. According to Warren Buffett, cyber-attacks are "the biggest threat to mankind", an even bigger threat than nuclear weapons. Therefore, organizations should consider applying the concepts of AI in their workplaces if they want to prosper in the future without compromising their digital anonymity. Continue reading this post to learn what AI is and how it is transforming cybersecurity for all the right reasons. Artificial Intelligence (AI) is a modern branch of computer science.