They have the sort of names that only teenage boys or aspiring Bond villains would dream up (REvil, Grief, Wizard Spider, Ragnar), they base themselves in countries that do not cooperate with international law enforcement and they don't care whether they attack a hospital or a multinational corporation. Ransomware gangs are suddenly everywhere, seemingly unstoppable – and very successful. In June, meat producer JBS, which supplies over a fifth of all the beef in the US, paid a £7.8m ransom to regain access to its computer systems. The same month, the US's largest national fuel pipeline, Colonial Pipeline, paid £3.1m to ransomware hackers after they locked the company's systems, causing days of fuel shortages and paralysing the east coast. "It was the hardest decision I've made in my 39 years in the energy industry," said a deflated-looking Colonial CEO Joseph Blount in an evidence session before Congress. In July, hackers attacked software firm Kaseya, demanding £50m.
Evgeniy is a specialist in software development, technological entrepreneurship and emerging technologies. In recent years, companies' growing focus on big data has led to increased demand for digitalization. The avalanche of data has forced businesses to reconsider their approach to software modernization. With that in mind, let's look at how enterprises use AI for intelligent analysis, hyperautomation and cybersecurity in the world of big data. Data orientation is the future of business, and the survival of companies depends on efficiently processing external and internal information.
Representatives from Google have told an Australian Parliamentary committee looking into foreign interference that the country has not been the target of coordinated influence campaigns. "We've not seen the sort of coordinated foreign influence campaigns targeted at Australia that we have with other jurisdictions, including the United States," Google director of law enforcement and information security Richard Salgado said. "Some of the disinformation campaigns that originate outside Australia, even if not targeting Australia, may affect Australia as collateral ... but not as a target of the campaign. We have found no instances of foreign coordinated influence campaigns targeting Australia." While acknowledging campaigns that reach Australia do exist, he reiterated they have not specifically targeted Australia. "Some of these campaigns are broad enough that the disinformation could be, sort of, divisive in any jurisdiction in which it is consumed, even if it's not targeting that jurisdiction," Salgado told the Select Committee on Foreign Interference Through Social Media. "Google services, YouTube in particular, which is where we have seen most of these kinds of campaigns run, isn't really very well designed for the purpose of targeting groups to create the division that some of the other platforms have suffered, so it isn't actually all that surprising that we haven't seen this on our services." Appearing alongside Salgado on Friday was Google Australia and New Zealand director of government affairs and public policy Lucinda Longcroft, who told the committee her organisation has been in close contact with the Australian government as it looks to prevent disinformation from emerging in the lead-up to the next federal election. Additionally, the pair said that Google undertakes a "constant tuning" of the artificial intelligence and machine learning tech it uses.
Google said it also constantly adjusts policies and strategies to avoid moments of surprise, where it could find itself unable to handle a shift in attacker strategy or in the volume of attacks. Appearing earlier in the week before the Parliamentary Joint Committee on Corporations and Financial Services, Google VP of product membership and partnerships Diana Layfield said her company does not monetise data from Google Pay in Australia. "I suppose you could argue that there are non-transaction data aspects -- so people's personal profile information," she added. "If you sign up for an app, you have to have a Google account.
Artificial intelligence is a double-edged sword when it comes to cybersecurity, with defenders using it to predict and respond to threats and attackers using it to launch even more refined attacks. For example, AI algorithms can send 'spear phishing' tweets (personalized tweets sent to targeted users to trick them into sharing sensitive information) six times faster than a human and with twice the success rate. The expansion of the attack surface and the increasing sophistication of attacks have made AI a key weapon in thwarting cyberattacks, Capgemini found. Cyber analysts are finding it increasingly difficult to effectively monitor the current volume, velocity and variety of data crossing firewalls, prompting organizations to turn to artificial intelligence. In fact, Capgemini found that 61 percent of organizations acknowledge they would not be able to identify critical threats without AI.
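As a toy illustration of the kind of statistical baselining that such AI-driven monitoring generalizes, consider flagging traffic readings that deviate sharply from a learned norm. All numbers, names and thresholds below are invented for the example; real systems use far richer models over many signals.

```python
# Sketch of anomaly detection by z-score against a learned baseline.
# Everything here (feature, values, threshold) is hypothetical.
import statistics

# Synthetic baseline: requests per minute observed from a host over an hour
baseline = [98, 102, 97, 105, 99, 101, 103, 96, 100, 104]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(requests_per_minute: float, z_threshold: float = 3.0) -> bool:
    """Flag a reading more than z_threshold standard deviations from baseline."""
    return abs(requests_per_minute - mean) / stdev > z_threshold

print(is_anomalous(101))   # ordinary traffic -> False
print(is_anomalous(750))   # burst resembling an automated attack -> True
```

The point of the sketch is the scale problem the article describes: a rule like this is trivial for one host and one metric, but monitoring it across millions of hosts, metrics and log sources is what pushes organizations toward machine-learned models rather than hand-set thresholds.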
Like many other technologies, Artificial Intelligence (AI) has been widely adopted and implemented across a variety of businesses and in everyday life. It has the potential to solve many business challenges and to give consumers a new perspective on the digital world. However, as welcome as these changes are, there is a flip side to AI and its advances. As with most technologies, there are privacy concerns involving customer and vendor data protection. In addition, AI is fueled by algorithms that create new sensitive information that can affect consumers and employees.
Artificial intelligence has shown incredible potential in countless applications and has seamlessly simplified everyday life. It has already proven its potential across many applications and industries. While maintaining that promise, AI is also advancing into cybersecurity, helping protect organizations from existing cyber threats and identify new types of malware. In a world of cyber threats where antivirus software and firewalls are regarded as antiquated tools, companies are now looking for more advanced technological means to protect their data and their confidential, sensitive information. This is where AI comes in, offering protection against digital threats around the world.
Cybersecurity is of the utmost concern for financial institutions (FIs) of all types, ranging from community credit unions to multibillion-dollar international banking conglomerates, as well as for everyday consumers. More than 2 million fraud reports were filed with the Federal Trade Commission in 2020, amounting to total losses of more than $3 billion. One survey found that 47 percent of businesses around the world reported being victimized by digital crime within the past two years, with losses totaling $42 billion. Fraudsters are also growing more advanced in their tactics, leveraging sophisticated technologies like artificial intelligence (AI) and machine learning (ML) to deploy millions of attacks simultaneously. The overwhelming volume of attacks has put organizations on the back foot, scrambling to find countermeasures to the account takeovers (ATOs), phishing attacks and other schemes they face by the thousands every day.
Bio: Angelica Lo Duca (Medium) works as a post-doc at the Institute of Informatics and Telematics of the National Research Council (IIT-CNR) in Pisa, Italy. She is Professor of "Data Journalism" for the Master's degree course in Digital Humanities at the University of Pisa. Her research interests include Data Science, Data Analysis, Text Analysis, Open Data, Web Applications and Data Journalism, applied to the fields of society, tourism and cultural heritage. She previously worked on Data Security, Semantic Web and Linked Data. Angelica is also an enthusiastic tech writer.
New EU regulations on AI seek to ban mass and indiscriminate surveillance. For many, that is the good news. The 'not so good' news is that the proposed prohibitions are considered by some to be too vague, with serious loopholes. Most recently, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) called for a ban on the use of AI for the automated recognition of human features in "publicly accessible spaces", as well as other uses that might lead to "unfair discrimination". Broadly speaking, these responses reflect the reaction to the EU's attempt to set a global standard for how tech is regulated.