DUBAI, United Arab Emirates (AP) -- A pair of B-52 bombers flew over the Mideast on Sunday, the latest such mission in the region aimed at warning Iran amid tensions between Washington and Tehran. The flight by the two heavy bombers came as a pro-Iran satellite channel based in Beirut broadcast Iranian military drone footage of an Israeli ship hit by a mysterious explosion only days earlier in the Mideast. While the channel sought to say Iran wasn't involved, Israel has blamed Tehran for what it described as an attack on the vessel. The U.S. military's Central Command said the two B-52s flew over the region accompanied by military aircraft from nations including Israel, Saudi Arabia and Qatar. It marked the fourth such bomber deployment into the Mideast this year and the second under President Joe Biden.
There is no denying that the future of cybersecurity lies in the hands of Artificial Intelligence (AI). Companies of all sizes, from small firms to large corporations, can counter a wide range of cyber threats using advanced AI techniques. If you want to know which AI predictions will positively influence cybersecurity in 2021 and beyond, read this post in detail. According to recent research conducted by Trend Micro, AI may replace the need for human cybersecurity professionals by the end of 2030.
Securing vast and growing IoT environments may not seem to be a humanly possible task, and when the network hosts tens or hundreds of thousands of devices the task may, indeed, be unachievable. To solve this problem, vendors of security products have turned to a decidedly nonhuman alternative: artificial intelligence. "Cyberanalysts are finding it increasingly difficult to effectively monitor current levels of data volume, velocity and variety across firewalls," Capgemini noted in a survey research report, "Reinventing Cybersecurity With Artificial Intelligence." The report also noted that traditional methods may no longer be effective: "Signature-based cybersecurity solutions are unlikely to deliver the requisite performance to detect new attack vectors." In addition to conventional security software's limitations in IoT environments, Capgemini's report revealed a weakness in the human element of cybersecurity.
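The contrast the report draws between signature matching and AI-style anomaly detection can be illustrated with a minimal sketch. The signatures, traffic samples, and threshold below are hypothetical, and the length-based z-score is a deliberately crude stand-in for a real machine learning model:

```python
import statistics

# Signature-based detection: matches only exact, previously known patterns.
SIGNATURES = ["/etc/passwd", "<script>alert(", "cmd.exe"]

def signature_match(payload: str) -> bool:
    """Return True only if the payload contains a known attack signature."""
    return any(sig in payload for sig in SIGNATURES)

# A crude statistical "anomaly" check: flag requests whose length deviates
# sharply from a baseline of normal traffic (hypothetical sample requests).
baseline = [len(p) for p in ["GET /index.html", "GET /style.css", "GET /app.js"]]
MEAN = statistics.mean(baseline)
STDEV = statistics.pstdev(baseline) or 1.0  # avoid division by zero

def anomaly_score(payload: str) -> float:
    """Standard deviations the payload's length sits from the baseline mean."""
    return abs(len(payload) - MEAN) / STDEV

# A novel attack vector with no known signature.
novel_attack = "GET /search?q=" + "A" * 500
print(signature_match(novel_attack))      # False: signature check misses it
print(anomaly_score(novel_attack) > 3.0)  # True: flagged as a length outlier
```

The signature check misses anything not already catalogued, while even this trivial statistical rule flags the unusual request, which is the gap AI-based detection aims to fill at scale.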
The National Security Commission on Artificial Intelligence (NSCAI) recently published its Final Report for 2021, outlining an integrated national strategy to empower the US in the era of AI-accelerated competition and conflict. NSCAI worked with technologists, national security professionals, business executives and academic leaders to produce the report. According to the report, the US government is a long way from being "AI-ready." Based on the findings, the commission has proposed a set of policy recommendations. The US leads most countries, including India, on almost all AI parameters.
UPDATED A new machine learning technique could make it easier for penetration testers to find SQL injection exploits in web applications. Introduced in a recently published paper by researchers at the University of Oslo, the method uses reinforcement learning to automate the process of exploiting a known SQL injection vulnerability. While the technique comes with quite a few caveats and assumptions, it provides a promising path toward developing machine learning models that can assist in penetration testing and security assessment tasks. Reinforcement learning is a branch of machine learning in which an AI model is given the possible actions and rewards of an environment and is left to find the best ways to apply those actions to maximize the reward. "It's inevitable that AI and machine learning are also applied in offensive security," Laszlo Erdodi, lead author of the paper and postdoctoral fellow at the department of informatics at the University of Oslo, told The Daily Swig.
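The action-reward loop described above can be illustrated with a toy tabular Q-learning sketch. Everything here (the candidate payload list, the single-state environment, the reward values) is a simplified illustration for exposition, not the richer state and action model used in the Oslo paper:

```python
import random

# Hypothetical toy environment: the target application is "vulnerable"
# to exactly one of these candidate payload templates.
ACTIONS = ["' OR 1=1--", "\" OR 1=1--", "') OR 1=1--", "admin'--"]
VULNERABLE_TO = 2  # index of the payload that succeeds in this toy setup

def step(action_index: int) -> float:
    """Reward +1 when the chosen payload succeeds, a small penalty otherwise."""
    return 1.0 if action_index == VULNERABLE_TO else -0.1

# Tabular Q-learning over a single stateless decision: the agent learns
# which action maximizes the expected reward through trial and error.
q_values = [0.0] * len(ACTIONS)
learning_rate, epsilon = 0.5, 0.2
random.seed(0)

for _ in range(200):
    if random.random() < epsilon:
        action = random.randrange(len(ACTIONS))                       # explore
    else:
        action = max(range(len(ACTIONS)), key=q_values.__getitem__)   # exploit
    reward = step(action)
    # Move this action's value estimate toward the observed reward.
    q_values[action] += learning_rate * (reward - q_values[action])

best = max(range(len(ACTIONS)), key=q_values.__getitem__)
print(ACTIONS[best])  # the greedy policy converges on the working payload
```

After enough episodes the value estimate for the successful payload dominates, so the greedy policy selects it; the paper applies the same principle to a far larger space of injection actions and server responses.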
AI can assist with cybersecurity and aid financial institutions with fraud prevention. Complex algorithms allow computers to solve problems that were previously solvable only by humans, such as detecting unusual fraud patterns, strange spending fluctuations, or questionable invoices. However, companies using AI for cybersecurity need to be careful not to rely on it too heavily, or they could be setting themselves up for problems. For example, firms need to pay attention to proper implementation of AI's cognitive capabilities to ensure that it can really detect a threat. Artificial intelligence does not always work, and there remains a learning curve, according to Mike Cutlip, Chief Executive Officer at Authoriti.
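The kind of spending-fluctuation check described above can be sketched as a simple statistical outlier rule. The transaction amounts and threshold are illustrative only; production fraud models combine many more signals than a single z-score:

```python
import statistics

def is_suspicious(history: list[float], new_amount: float,
                  threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates sharply from past spending."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    z_score = abs(new_amount - mean) / stdev
    return z_score > threshold

# Hypothetical purchase history for one account.
past_purchases = [42.0, 38.5, 55.0, 47.2, 41.9]
print(is_suspicious(past_purchases, 51.0))    # False: within normal range
print(is_suspicious(past_purchases, 4999.0))  # True: sudden spending spike
```

A rule this simple already catches the obvious spike, but it also shows why over-reliance is risky: a fraudster making many small purchases, or a customer with a legitimately large one-off expense, defeats a naive threshold, which is exactly the implementation care Cutlip cautions about.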
This week the American National Security Commission on artificial intelligence released its final report. Cursory inspection of its 756 pages suggests that it's just another standard product of the military-industrial complex that so worried President Eisenhower at the end of his term of office. On closer examination, however, it turns out to be a set of case notes on a tragic case of what we psychologists call "hegemonic anxiety" – the fear of losing global dominance. The report is the work of 15 bigwigs, led by Dr Eric Schmidt, the former CEO of Alphabet (and before that the adult supervisor imposed by venture capitalists on the young co-founders of Google). Of the 15 members of the commission only four are female.
Toby Walsh, a professor of AI at the University of New South Wales in Sydney, told CNBC the dangers have only "become nearer and more serious" since the letter was published. "Autonomous weapons must be regulated," he said. The Future of Life Institute, a non-profit research institute in Boston, Massachusetts, said last month there are many positive military applications for AI but "delegating life and death decisions to autonomous weapon systems is not one of them." The institute pointed out that autonomous drones could be used for reconnaissance missions to avoid putting troops in danger, while AI could also be used to power defensive anti-missile guns which detect, target, and destroy incoming threats without a human command. "Neither application involves a machine selecting and attacking humans without an operator's green light," it said.
A new report emphasizes why it is urgent that the Department of Defense and Congress work together to modernize the way defense programs and budgets develop, integrate and deploy the latest technologies in support of American national security. Released by the National Security Commission on Artificial Intelligence, a federal body created to review and recommend ways to use artificial intelligence for national security purposes, the report recommends the use of AI to update America's defense plans, predict future threats, deter adversaries and win wars. Because AI will be "incorporated into virtually all future technology," it is easy to recognize that national security threats and opportunities posed by AI should be a catalyst for necessary changes to defense requirements and resourcing processes. "Unless the requirements, budgeting and acquisition processes are aligned to permit faster and more targeted execution, the U.S. will fail to stay ahead of potential adversaries." This blunt recommendation to the Defense Department under the heading "Accelerate Adoption of Existing Digital Technologies" makes clear the urgency for cultural and structural updates to the way the department currently does business.
WHEN IT comes to using artificial intelligence (AI), intelligence agencies have been at it longer than most. In the cold war America's National Security Agency (NSA) and Britain's Government Communications Headquarters (GCHQ) explored early AI to help transcribe and translate the enormous volumes of Soviet phone-intercepts they began hoovering up in the 1960s and 1970s. Yet the technology was immature. One former European intelligence officer says his service did not use automatic transcription or translation in Afghanistan in the 2000s, relying on native speakers instead.