AI Cybersecurity


AI Cybersecurity Pros And Cons

#artificialintelligence

Artificial Intelligence (AI) is used for many purposes, such as automating repetitive tasks and processing data logically. AI technology can mimic aspects of the human mind, learning behaviour and interpreting data, and it is used across many industries for tasks such as facial recognition and assisting self-driving cars. As organisations grow more complex and their structures constantly evolve, staff can no longer rely on traditional methods to identify weaknesses and are turning to AI for cybersecurity. At the same time, that complexity gives cybercriminals more opportunities to access a company's network infrastructure: the more complex a business becomes, the more exploits hackers have to use against it.


Artificial Intelligence in Cyber Security: Benefits and Drawbacks

#artificialintelligence

You can use artificial intelligence (AI) to automate complex, repetitive tasks much faster than a human can. AI technology can sort complex, repetitive input logically, which is why it is used for facial recognition and self-driving cars. That same ability paved the way for AI cybersecurity, which is especially helpful for assessing threats in complex organizations: when business structures are continually changing, admins can't identify weaknesses using traditional methods.


Should We Start Certifying Cybersecurity for AI Solutions?

#artificialintelligence

Today, machine-learning and deep-learning techniques are part of our daily lives under the name of AI. AI technology is being advanced to counter sophisticated and destructive cyberattacks. Because AI cybersecurity is an emerging field, experts worry about the new threats that may emerge if vulnerabilities in AI technology itself are exposed. Without a certifying body regulating AI technology for use in cybersecurity, will organizations find themselves more at risk and victims of manipulation? On April 21, 2021, the European Commission (EC) published a proposal describing the "first-ever legal framework on AI".


Is AI cybersecurity's salvation or its greatest threat?

#artificialintelligence

If you're uncertain whether AI is the best or worst thing to ever happen to cybersecurity, you're in the same boat as experts watching the dawn of this new era with a mix of excitement and terror. AI's potential to automate security on a broader scale offers a welcome advantage in the short term. Yet unleashing a technology designed to eventually take humans out of the equation as much as possible naturally gives the industry some pause. There is an undercurrent of fear about the consequences if things run amok or attackers learn to make better use of the technology. "Everything you invent to defend yourself can also eventually be used against you," said Geert van der Linden, an executive vice president of cybersecurity for Capgemini.


Is AI cybersecurity the next big tech leap?

#artificialintelligence

Deep learning is a useful tool for optimising and validating security posture. But until we overcome some of its challenges, positive security models and behavioural algorithms that are deterministic and predictable remain more effective for defence and mitigation. Most successful deep-learning applications in use today are based on supervised-learning neural nets: they take an input and produce an output that gives a confidence level across a fixed set of labels. Given enough data, the neural net will usually make the right "decision".
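The fixed-label, confidence-level behaviour described above can be sketched with a tiny softmax classifier. This is an illustrative toy, not code from any real security product; the label names and scores are invented for the example.

```python
import numpy as np

# Hypothetical fixed label set for a network-traffic classifier.
LABELS = ["benign", "port_scan", "malware_c2"]

def softmax(z):
    # Subtract the max before exponentiating for numerical stability.
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def classify(logits):
    """Map raw model scores to a confidence level per fixed label."""
    conf = softmax(np.asarray(logits, dtype=float))
    return dict(zip(LABELS, conf))

# Example raw scores from some upstream model (invented numbers).
scores = classify([2.0, 0.5, -1.0])
best = max(scores, key=scores.get)
```

The key property matching the article's description is that the output is always a confidence distribution over the same fixed set of labels, regardless of the input.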


AI cybersecurity: businesses need machines to fight machines Verdict

#artificialintelligence

Businesses need to better fund AI cybersecurity to combat the growing threat of artificially intelligent (AI) cyberattacks, according to Dudu Mimran, CTO of the Cyber Security Research Center at Ben-Gurion University. Speaking at the OECD Forum, Mimran predicted that AI will become a more common weapon in the arsenal of hackers. The senior telecommunications and cybersecurity expert argued that this is partly because using AI would be "cheaper, faster and smarter" for hackers. AI-driven cyberattacks could take many forms, including phishing, identity theft and denial-of-service attacks. According to Natan Bandler, CEO and co-founder of Cy-oT, the two main areas in which AI is being used to carry out attacks are as a "tool to find exploits" and for automated hacking to "map existing exploits and weaknesses."


More funding for AI cybersecurity: Darktrace raises $75M at an $825M valuation

#artificialintelligence

With cybercrime projected to cause some $6 trillion in damages by 2021, and businesses likely to invest around $1 trillion over the next five years to mitigate that, we're seeing a rise of startups building innovative ways to combat malicious hackers. In the latest development, Darktrace -- a cybersecurity firm that uses machine learning to detect and stop attacks -- has raised $75 million, giving the startup a post-money valuation of $825 million, on the back of a strong business: the company says it has a total contract value of $200 million and 3,000 global customers, and has grown 140 percent in the last year. The funding will be used to expand the company's business operations into more markets. Notably, Darktrace also separately (not in its funding release) announced today a strategic partnership with Hong Kong-based CITIC Telecom CPC, a telecoms firm serving China and other parts of Asia, "to bring next-generation cyber defense to businesses across Asia Pacific." We have asked whether CITIC, the parent of the strategic partner, has also invested in Darktrace as part of this partnership.


More funding for AI cybersecurity: Darktrace raises $75M at an $825M valuation

#artificialintelligence

Artificial intelligence, machine learning and autonomy are central to the future of American war. In particular, the Pentagon wants to develop software that can absorb more information from more sources than a human can, analyze it and either advise the human how to respond or -- in high-speed situations like cyber warfare and missile defense -- act on its own within careful limits. Call it the War Algorithm: the holy grail of a single mathematical equation designed to give the US military near-perfect understanding of what is happening on the battlefield, and to help its human designers react more quickly than our adversaries and thus win our wars. Our coverage of this issue attracted the attention of Capt. In this op-ed, he offers something of a roadmap for the Pentagon to follow as it pursues this highly complex and challenging goal.


MIT shows how AI cybersecurity excels by keeping humans in the loop - TechRepublic

#artificialintelligence

Cybersecurity threats are among the most pressing concerns for businesses and institutions that need to protect information, but today's security systems are limited. Most security systems fall into two categories: human analyst or machine learning. Now, a new research paper from MIT shows that a combination of human experts with a machine learning system--in other words, supervised machine learning--provides better results than either human or machine alone. "AI squared," which uses a system developed by PatternEx, is 10 times better at catching threats than machine learning alone, and reduces false positives by a factor of five. This, said MIT's researchers, is three times better than current benchmarks.
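The human-in-the-loop idea behind the result above can be sketched as a simple triage loop: the model decides confident cases automatically and queues only the ambiguous ones for an analyst, whose labels would then feed back as training data. This is a minimal illustration of the concept, not PatternEx's actual system; the thresholds, scores and event fields are invented.

```python
def model_score(event):
    # Stand-in anomaly score in [0, 1]; a real system would use a trained model.
    return event["anomaly"]

def triage(events, low=0.3, high=0.7):
    """Auto-decide confident cases; queue the ambiguous ones for an analyst."""
    auto, review_queue = [], []
    for e in events:
        s = model_score(e)
        if s >= high:
            auto.append((e["id"], "threat"))
        elif s <= low:
            auto.append((e["id"], "benign"))
        else:
            # Uncertain: a human analyst labels it, and the label becomes
            # new training data for the next model iteration.
            review_queue.append(e)
    return auto, review_queue

events = [
    {"id": 1, "anomaly": 0.9},
    {"id": 2, "anomaly": 0.1},
    {"id": 3, "anomaly": 0.5},  # ambiguous, goes to the analyst
]
auto, queue = triage(events)
```

Concentrating scarce analyst attention on the uncertain middle band is what lets the combined system beat either the unaided analyst or the unaided model.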