An employer in Spain may not be able to fire a worker caught on a surveillance camera doing something prohibited if the company hasn't informed workers about the video system and its purpose, according to a recent trial court decision. In a case involving an employee fired after a security camera captured him in a parking-lot fight after work hours, a Pamplona labor court ruled that the video evidence was inadmissible under the European Union's General Data Protection Regulation (GDPR) and case law from the European Court of Human Rights (ECHR). "The judgment is of great interest since it is the first ruling by a Spanish court on the validity that can be given to the evidence of video recordings after the publication of the new Spanish Data Protection Law and also an interpretation of the new European Data Protection Regulation," according to a blog post from Manuel Vargas of Barcelona's Marti & Associats law firm. Under Spain's own data-protection law, employers who record a worker doing something illegal are considered to have fulfilled their duty to inform so long as they have posted a sign identifying a video surveillance zone, Vargas wrote. He also noted that recent case law from the Spanish Supreme Court endorses the idea that employers aren't obligated to notify workers that they plan to use video cameras to monitor their activity for possible disciplinary purposes.
Deep Learning Innovator Earns Spot as One of "America's Most Promising Artificial Intelligence Companies" Blue Hexagon, a deep learning and cybersecurity pioneer, announced it has earned a spot on the coveted Forbes AI 50 list. As one of America's most promising artificial intelligence (AI) companies, Blue Hexagon is the only cybersecurity company that relies on deep learning (a subfield of artificial intelligence) 100% of the time for instant, real-time cyber threat detection. Modern malware is more adaptive than ever, with new variants being created at a rate of more than four per second. The Blue Hexagon real-time deep learning platform addresses the limitations of perimeter defenses like intrusion detection systems (IDS) and sandboxes, which cannot keep up with the daily onslaught of new malware variants. Launched in Q1 2019, the company is the first to harness advanced deep learning for network threat protection and has proven more than 99.5% effective at identifying attacks in actual customer deployments.
Overview: The goal of artificial intelligence is to enable computers to do things normally done by people -- in particular, things associated with people acting intelligently. In cybersecurity, its most practical application has been automating human-intensive tasks to keep pace with attackers. Progressive organizations have begun using artificial intelligence in cybersecurity applications to defend against attackers. However, on its own, artificial intelligence is best suited to identifying "what is wrong." What today's enterprise needs is not only to know "what is wrong" in the face of a breach, but to understand "why it's wrong" and "how to fix it."
Historically, the MixMode platform has provided its users with a forensic hunting platform with intel-based Indicators and Security Events from public and proprietary sources. While these detections still have their place in the security ecosystem, the increase in state-sponsored attacks, insider threats and adversarial artificial intelligence means there are simply too many threats to your network to rely solely on intelligence-based detections or proactive hunting. Many of these threats are sophisticated enough to evade traditional threat detection or, in the case of zero-day threats, signature-based detection may not even be possible. In the face of this growing threat, the best defense is to supplement these traditional methods with anomaly detection, a term that is quickly becoming genericized as it is bandied about within the industry. Here we will discuss some of the opportunities and challenges that can arise with anomaly detection, as well as MixMode's unique approach to the solution.
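At its simplest, anomaly detection means flagging behavior that deviates sharply from a learned baseline rather than matching a known signature. The sketch below illustrates the idea with a z-score over a hypothetical bytes-per-minute network feature; the feature, baseline window, and 3-sigma threshold are illustrative assumptions, not a description of MixMode's actual model.

```python
# Minimal sketch of statistical anomaly detection on network telemetry.
# The feature (bytes per minute) and the 3-sigma cutoff are illustrative.
from statistics import mean, stdev

def anomaly_scores(baseline, observations):
    """Score each observation by its z-score against the baseline window."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(x - mu) / sigma if sigma else 0.0 for x in observations]

# Bytes-per-minute seen during a normal traffic window (hypothetical data):
baseline = [980, 1010, 1005, 995, 1002, 990, 1008, 1001]

# New observations; the last one is a sudden exfiltration-like spike:
scores = anomaly_scores(baseline, [1000, 1003, 5400])
flagged = [s > 3.0 for s in scores]  # flag anything beyond 3 sigma
```

Note that nothing here depends on a signature database: the spike is flagged purely because it is statistically unlike the baseline, which is why this approach can catch zero-day activity that signature-based detection misses.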
Born out of the degradation of AI-powered devices, malicious intelligence has the capacity to be a real threat to the modern-day business ecosystem. It's true that there are plenty of AI applications that play a useful and critical role, but focusing on the benefits of AI while forgoing the dangers is unwise. We can't say we haven't been warned. The best and brightest continue to make bleak predictions about AI usage and the dangers of ignoring the threats the technology poses. As Elon Musk put it: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that."
Was George Orwell right? Is Big Brother watching us? Undoubtedly many are alarmed by the ever-increasing level of computer-driven surveillance, particularly involving facial recognition technologies. In the past few months, San Francisco and Oakland, California, and the US state of Massachusetts have all banned police from using facial recognition tech. Meanwhile, in Europe, the General Data Protection Regulation (GDPR) introduces restrictive rules about privacy preservation in data processing. A team of researchers from the Norwegian University of Science and Technology recently proposed a new architecture that can anonymize faces in images automatically while leaving the original data distribution intact.
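The researchers' architecture synthesizes replacement faces with a generative model, but the basic anonymization idea can be shown with a far simpler stand-in: blurring a detected face region so it is no longer identifiable. In the sketch below, the grayscale image and the face bounding box are toy inputs, and box blurring is a deliberately crude substitute for the generative approach described in the article.

```python
# Minimal sketch of face anonymization by box blurring. Assumes a face
# bounding box supplied by an upstream detector; the NTNU work instead
# replaces faces with GAN-generated ones, which this does not attempt.

def blur_region(image, box, k=1):
    """Box-blur pixels inside box = (top, left, bottom, right), in place."""
    top, left, bottom, right = box
    h, w = len(image), len(image[0])
    original = [row[:] for row in image]  # read from an unmodified copy
    for y in range(top, bottom):
        for x in range(left, right):
            neighbors = [original[yy][xx]
                         for yy in range(max(0, y - k), min(h, y + k + 1))
                         for xx in range(max(0, x - k), min(w, x + k + 1))]
            image[y][x] = sum(neighbors) // len(neighbors)
    return image

# A 4x4 grayscale "image" with a bright 2x2 "face" region in the middle:
img = [[0, 0, 0, 0],
       [0, 255, 255, 0],
       [0, 255, 255, 0],
       [0, 0, 0, 0]]
blur_region(img, (1, 1, 3, 3))
```

Only the detected region is altered, so the rest of the image (the "data distribution" outside the face) is untouched, which is the property the anonymization work aims to preserve at scale.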
Artificial intelligence (AI) and machine learning (ML) are being heralded as a way to solve a wide range of problems in different industries and applications, such as reducing street traffic, improving online shopping, making life easier with voice-activated digital assistants, and more. The cybersecurity industry is no different. However, we need to be careful of the "hype" around AI and ML. And there is a lot of hype out there! A simple Google search of the term "artificial intelligence" yields about 630 million results, and AI continues to dominate the headlines and has even made its way into mainstream TV advertising.
Regardless of a company's size or type, its executives typically look for ways to help it operate as efficiently as possible. They understand the link between efficiency and profitability. If employees waste too much time on drawn-out processes or complicated tasks, it'll be hard for the enterprise to remain profitable and adapt to challenges. Fortunately, artificial intelligence (AI) supports the need for effective business operations. Here are five ways enterprises can use AI for help: Chatbots are an increasingly popular option for businesses, and they rely on AI to work.
Mobile and online banking providers have been upping their fraud protection measures over the last decade, making it more difficult for bad actors to rely on some of the schemes that previously worked in such channels. The prevalence of card-not-present (CNP) fraud, once the bread and butter of the enterprising cybercriminal, has steadily crept downward each year alongside other schemes that exploit customers' credit card numbers. Cybercriminals are still masters of a thriving trade, though. Banks are dealing with rapid rises in fraud schemes such as account takeovers (ATOs), synthetic identity fraud and account opening fraud. Opening new credit or mobile-device accounts with legitimate customers' stolen information is a popular scheme that defrauds both the customers and their financial institutions (FIs).
Artificial intelligence (AI) simulates human intelligence through the creation and application of algorithms. Advances in the field have driven adoption across industries including healthcare, education, finance, and marketing, making AI one of the most impactful technologies of recent years. It is now being used to prevent cyber-attacks in major organizations. As cybercrimes grow in number and complexity, AI is aiding in identifying and countering these attacks. AI technologies like machine learning and natural language processing allow security analysts to respond to such threats immediately.
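To make the machine-learning and natural-language-processing claim concrete, here is a toy Naive-Bayes-style text classifier that flags phishing-like messages. The training samples, whitespace tokenizer, and labels are invented for illustration; a real system would train on large labeled corpora with far richer features.

```python
# Toy Naive-Bayes-style classifier for phishing-like text. The data and
# tokenization are illustrative stand-ins, not a production model.
from collections import Counter
import math

def train(samples):
    """samples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {"phish": Counter(), "benign": Counter()}
    for text, label in samples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label with the higher smoothed log-likelihood."""
    words = text.lower().split()
    best_label, best_score = None, -math.inf
    for label, c in counts.items():
        total, vocab = sum(c.values()), len(c)
        score = sum(math.log((c[w] + 1) / (total + vocab)) for w in words)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

data = [("verify your account password urgently", "phish"),
        ("click this link to claim your prize", "phish"),
        ("meeting notes from the quarterly review", "benign"),
        ("lunch order for the team on friday", "benign")]
model = train(data)
label = classify(model, "urgently verify your password")
```

Even this crude model shows the pattern the blurb describes: the system learns statistical regularities of malicious versus benign language and can score a new message instantly, which is what lets analysts respond to threats at machine speed.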