Ransomware will inevitably plague self-driving cars. Ransomware features continually in the daily news and appears to be a seemingly unstoppable craze. The recent attack on the Colonial Pipeline perhaps received the most rapt attention, since it led to concerns over gasoline shortages and caused quite a stir among the general public. When ransomware is used against a particular bank, hospital, or school, it normally doesn't cause quite the same widespread disruption as the fuel pipeline incident did. The thing is, we are probably going to see a lot more ransomware fielded against all manner of businesses and governmental entities. Some would assert that we have so far seen only the tip of the iceberg when it comes to ransomware hacks. Part of the reason to expect more ransomware is that it is relatively easy for an evildoer or crook to deploy. Whereas the perpetrator once needed keen computer skills, that's pretty much no longer the case. Sadly, infecting computer systems with ransomware has become nearly trivial, opening the floodgates to just about any determined villain (ransomware programs can be cheaply purchased online via the so-called dark web). There are now plentiful Ransomware-as-a-Service (RaaS) offerings that do most of the heavy lifting for those who prefer a hands-off, chauffeured form of ransomware cyberattack.
The fight against fraud has always been a messy business, but it's especially grisly in the digital age. To keep ahead of the cybercriminals, investment in technology – particularly artificial intelligence – is paramount, says Ajay Bhalla, president of cyber and intelligence solutions at Mastercard. Since the opening salvo of the coronavirus crisis, cybercriminals have launched increasingly sophisticated attacks across a multitude of channels, taking advantage of heightened emotions and poor online security. Some £1.26 billion was lost to financial fraud in the UK in 2020, according to UK Finance, a trade association, while internet banking fraud losses surged 43% year on year. The banking industry managed to stop some £1.6 billion of fraud over the course of the year, equivalent to £6.73 in every £10 of attempted fraud.
If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Imagine the possibility of misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine. There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community.
Traditional cybersecurity isn't necessarily bad at detecting attacks; the trouble is that it often does so only after they have occurred. A better approach is to spot potential attacks and block them before they can do any damage. One possible way of doing this is via 'deep learning', allowing technology to identify the difference between good and bad. We spoke with Brooks Wallace, cybersecurity sales leader at Deep Instinct, to find out more about this innovative solution. BW: If you look at cybersecurity, there's always been this holy grail of prevention.
The European Union has introduced a proposal to regulate the development of AI, with the goal of protecting the rights and well-being of its citizens. The Artificial Intelligence Act (AIA) is designed to address certain potentially risky, high-stakes use cases of AI, including biometric surveillance, bank lending, test scoring, criminal justice, and behavior manipulation techniques, among others. The goal of the AIA is to regulate the development of these applications of AI in a way that will foster increased trust in its adoption. Similar to the EU's General Data Protection Regulation (GDPR), the AIA will apply to anyone selling or providing relevant services to EU citizens. GDPR spearheaded data privacy regulations across the United States and around the world.
We devise a novel conditional tabular data synthesizer, CTAB-GAN, that addresses the limitations of the prior state-of-the-art: (i) encoding mixed data types of continuous and categorical variables, (ii) efficient modeling of long-tailed continuous variables and (iii) increased robustness to imbalanced categorical variables along with skewed continuous variables. Furthermore, two key features of CTAB-GAN are the introduction of a classification loss in the conditional GAN, and a novel encoding for the conditional vector that efficiently encodes mixed variables and helps to deal with highly skewed distributions of continuous variables.
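To make the conditional-vector idea concrete, here is a minimal, hypothetical sketch of how such an encoding can work for mixed column types. It is not the paper's implementation: the column names, mode counts, and helper functions below are illustrative assumptions. Each column contributes a one-hot segment to the vector — categorical columns one-hot their categories, while continuous columns one-hot the mixture mode used to normalize them — and conditions are sampled column-first so that rare categories are still drawn during training.

```python
import random

# Illustrative column schema (assumed, not from the paper): a continuous
# column modeled with 3 Gaussian-mixture modes, and a categorical column.
COLUMNS = {
    "income":    {"kind": "continuous",  "modes": 3},
    "loan_type": {"kind": "categorical", "categories": ["auto", "home", "personal"]},
}

def _size(col):
    # Number of one-hot slots a column contributes to the conditional vector.
    return col["modes"] if col["kind"] == "continuous" else len(col["categories"])

def conditional_vector(column, value_index):
    """One-hot the chosen mode/category of `column`; all other segments are zero."""
    vec = []
    for name, col in COLUMNS.items():
        seg = [0] * _size(col)
        if name == column:
            seg[value_index] = 1
        vec.extend(seg)
    return vec

def sample_condition(rng=random):
    """Training-by-sampling: pick a column uniformly, then one of its
    categories/modes, which counters imbalanced categorical variables."""
    name = rng.choice(list(COLUMNS))
    return name, rng.randrange(_size(COLUMNS[name]))

print(conditional_vector("loan_type", 1))  # [0, 0, 0, 0, 1, 0]
```

During training, the generator would receive this vector alongside the noise input, and the discriminator (plus the auxiliary classifier supplying the classification loss) would penalize samples that fail to match the requested category or mode.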
But even he has been surprised by the sheer volume of complaints against digital lenders in recent years. While most of the grievances are about unauthorised lending platforms misusing borrowers' data or harassing them for missed payments, others relate to high interest rates or loan requests that were rejected without explanation, Shah said. "These are not like traditional banks, where you can talk to the manager or file a complaint with the head office. There is no transparency, and no one to ask for remedy," said Shah, founder of JivanamAsteya. "It is hurting young people starting off in their lives – a loan being rejected can result in a low credit score, which will adversely affect bigger financial events later on," he told the Thomson Reuters Foundation.
Years ago, we searched the web, bought new gadgets, and typed in our email addresses without much thought. As far as accounts went, "Hey, if it's free, sign me up," we thought. Fast forward to now, and you can't go online or turn on the news without hearing about the control Big Tech has on our lives – and the growing resentment around it. Probably due to government initiatives, tech companies are making changes to address these concerns. You can now password-protect the page that reveals all your Google searches and other activity.