What is Data Anonymization?

#artificialintelligence

Data anonymization is the process of mitigating direct and indirect privacy risks within data, such that there is a measurable way to ensure records cannot be attributed to a specific individual or entity. With an estimated 2.5 quintillion bytes of data generated every day and an increasing reliance on data to power new applications, machine learning models and AI technologies, implementing effective anonymization techniques and removing bottlenecks is crucial to accelerating future developments and innovations. This post is a general introduction to anonymization and the tools and techniques for providing sufficient privacy protections, so that personally identifiable information (PII) is safe from exposure and exploitation. Data anonymization should be considered a continuous process: one that can require rapid iteration, applying various privacy engineering techniques and then measuring the privacy outcomes until a desired end state is reached. In the following sections, we'll dive deeper into our core tenets of the data anonymization process, and then walk through how you might apply them to a notional dataset.
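As a minimal sketch of the iterative techniques described above, the snippet below applies two common anonymization steps to a toy record set: pseudonymizing a direct identifier with a salted one-way hash, and generalizing quasi-identifiers such as age and ZIP code. The field names, salt, and records are all hypothetical, illustrative assumptions.

```python
import hashlib

# Hypothetical toy records; the field names are illustrative assumptions.
records = [
    {"name": "Alice Smith", "zip": "94110", "age": 34, "diagnosis": "flu"},
    {"name": "Bob Jones", "zip": "94110", "age": 36, "diagnosis": "cold"},
]

SALT = b"rotate-this-secret"  # a real pipeline would manage this securely

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted, one-way token."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

def generalize_age(age: int) -> str:
    """Coarsen a quasi-identifier into a 10-year band."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

anonymized = [
    {
        "id": pseudonymize(r["name"]),    # direct identifier: tokenized
        "zip": r["zip"][:3] + "**",       # quasi-identifier: truncated
        "age": generalize_age(r["age"]),  # quasi-identifier: generalized
        "diagnosis": r["diagnosis"],      # analytic value: retained
    }
    for r in records
]
```

In the continuous-process spirit of the post, a real workflow would follow a transformation like this with a re-identification risk measurement, then iterate on how aggressively each field is generalized.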


Three opportunities of Digital Transformation: AI, IoT and Blockchain

#artificialintelligence

Koomey's law This law posits that the energy efficiency of computation doubles roughly every one-and-a-half years (see Figure 1–7). In other words, the energy necessary for the same amount of computation halves in that time span. To visualize the exponential impact this has, consider the fact that a fully charged MacBook Air, at the energy efficiency of computation of 1992, would completely drain its battery in a mere 1.5 seconds. According to Koomey's law, the energy requirements for computation in embedded devices are shrinking to the point that harvesting the required energy from ambient sources like solar power and thermal energy should suffice to power the computation necessary in many applications. Metcalfe's law This law has nothing to do with chips, but everything to do with connectivity. Formulated by Robert Metcalfe as he invented Ethernet, the law essentially states that the value of a network grows quadratically, in proportion to the square of the number of its nodes (see Figure 1–8).
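The two growth curves above can be made concrete with a little arithmetic. The sketch below computes the efficiency gain implied by Koomey's doubling period, and counts pairwise connections as a stand-in for Metcalfe's roughly n² network value; the 30-year span is an illustrative assumption, not a figure from the text.

```python
# Sketch of the two laws above (illustrative numbers, not measurements).

def koomey_gain(years: float, doubling_period: float = 1.5) -> float:
    """Factor by which energy efficiency improves over `years`,
    assuming it doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

def metcalfe_links(n: int) -> int:
    """Number of possible pairwise connections among n nodes;
    Metcalfe's law ties network value to this roughly n^2 growth."""
    return n * (n - 1) // 2

gain_30y = koomey_gain(30)  # 30 years -> 20 doublings -> 2**20, over a million-fold
```

Doubling the node count from 10 to 20 roughly quadruples the link count (45 vs. 190), which is the quadratic effect Metcalfe's law describes.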


How AI and bots strengthen endpoint security

#artificialintelligence

We are excited to bring Transform 2022 back in-person July 19 and virtually July 20 - 28. Join AI and data leaders for insightful talks and exciting networking opportunities. Fast-growing ransomware, malware and endpoint-directed breach attempts are reordering the threat landscape in 2022. It's appropriate that RSA Conference 2022's theme is 'transform,' as new threats continue to call for rapid changes in endpoint security. CISOs and CIOs are transforming their cloud infrastructure and hybrid cloud strategies, accelerating devops internally to produce new apps and platforms, and relying more on software-as-a-service (SaaS) apps than ever before to meet time-to-market goals. Vendors promoting cloud security, extended detection and response (XDR) and zero trust dominated RSAC 2022.


Wazuh and Its XDR Approach

#artificialintelligence

Key milestones in today's cyber security evolution toward effective detection and response are Endpoint Detection and Response (EDR), Managed Detection and Response (MDR), and Network Detection and Response (NDR). However, these solutions all run independently and lack correlated, high-level, processed alerts. Extended Detection and Response (XDR) emerged to fill this gap: rather than adding yet another tool, XDR aims to change the security landscape and make the existing security stack work together more effectively. What problem does XDR solve? Attackers often target endpoints, but they also target other layers of the IT domain in the corporate network, such as email servers and cloud systems, and they may bounce between layers or hide at the interfaces between them to evade detection. XDR addresses both problems at once.
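The correlation XDR adds on top of independent tools can be sketched as grouping alerts from separate streams into one incident when they touch the same host within a short time window. The alert fields, hostnames, and window size below are hypothetical assumptions for illustration, not any vendor's schema.

```python
from collections import defaultdict

# Hypothetical alerts from independent tools; fields are illustrative.
email_alerts = [{"host": "srv-01", "time": 90, "signal": "phishing attachment"}]
edr_alerts = [{"host": "srv-01", "time": 100, "signal": "suspicious process"}]
ndr_alerts = [{"host": "srv-01", "time": 130, "signal": "beaconing traffic"}]

def correlate(streams, window=60):
    """Group alerts from independent tools into per-host incidents
    when consecutive alerts fall within `window` seconds of each other."""
    by_host = defaultdict(list)
    for stream in streams:
        for alert in stream:
            by_host[alert["host"]].append(alert)
    incidents = []
    for host, alerts in by_host.items():
        alerts.sort(key=lambda a: a["time"])
        current = [alerts[0]]
        for alert in alerts[1:]:
            if alert["time"] - current[-1]["time"] <= window:
                current.append(alert)
            else:
                incidents.append({"host": host,
                                  "signals": [a["signal"] for a in current]})
                current = [alert]
        incidents.append({"host": host,
                          "signals": [a["signal"] for a in current]})
    return incidents

incidents = correlate([email_alerts, edr_alerts, ndr_alerts])
```

Here three low-level alerts from email, endpoint, and network layers collapse into a single incident on `srv-01`, which is the kind of cross-layer picture that no one tool sees on its own.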


How safe is YOUR smart device? Popular gadgets including Amazon Echo and Google Nest can be HACKED

Daily Mail - Science & tech

Smart home devices from companies such as Amazon and Google can be hacked and used to crash websites, steal data and snoop on users, an investigation reveals. Consumer group Which? has found poor security on eight smart devices, some of which are no longer supported with vital security updates due to their age. Examples include the first-generation Amazon Echo smart speaker, released in 2014, and a Virgin Media internet router from 2017. All of the products had vulnerabilities that could leave users exposed to cybercriminals, the investigation found. Domestic abuse survivors can also be tracked and controlled by ex-partners who exploit weak security on devices including Wi-Fi routers and security cameras.


What AI can (and can't) do for organisations' cyber resilience

#artificialintelligence

Technologies such as artificial intelligence (AI), machine learning, the internet of things and quantum computing are expected to unlock unprecedented levels of computing power. These so-called fourth industrial revolution (4IR) technologies will power the future economy and bring new levels of efficiency and automation to businesses and consumers. AI in particular holds enormous promise for organisations battling a scourge of cyber attacks. Over the past few years, cyber attacks have been growing in volume and sophistication. The latest data from Mimecast's State of Email Security 2022 report found that 94% of South African organisations were targeted by e-mail-borne phishing attacks in the past year, and six out of every 10 fell victim to a ransomware attack.


Google quietly bans deepfake training projects on Colab

#artificialintelligence

Google has quietly banned deepfake projects on its Colaboratory (Colab) service, putting an end to the large-scale utilization of the platform's resources for this purpose. Colab is an online computing resource that allows researchers to run Python code directly through the browser while using free computing resources, including GPUs, to power their projects. Due to the multi-core nature of GPUs, Colab is ideal for training machine learning projects like deepfake models or for performing data analysis. Deepfakes can be trained to swap faces on video clips, adding realistic facial expressions so that the result appears genuine even though it is fake. They have been used to spread fake news, create revenge porn, or simply for fun.


Ethical Principles of Facial Recognition Technology

#artificialintelligence

The sheer potential of facial recognition technology in various fields is almost unimaginable. However, the errors that commonly creep into its functionality, along with a few ethical considerations, need to be addressed before its most elaborate applications can be realized. An accurate facial recognition system uses biometrics to map facial features from a photograph or video. It compares the information with a database of known faces to find a match. Facial recognition can help verify a person's identity, but it also raises privacy issues. A few decades back, we could not have predicted that facial recognition would go on to become a near-indispensable part of our lives in the future.
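The matching step described above, comparing a probe face against a database of known faces, can be sketched as a nearest-neighbor search over feature vectors. The short 4-dimensional "embeddings", names, and similarity threshold below are illustrative assumptions; real systems derive much higher-dimensional vectors from a trained network.

```python
import math

# Hypothetical enrolled "faces" as short feature vectors (real systems
# use 128+ dimensional embeddings produced by a trained model).
database = {
    "alice": [0.9, 0.1, 0.0, 0.2],
    "bob":   [0.1, 0.8, 0.3, 0.0],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(probe, db, threshold=0.8):
    """Return the best-matching identity, or None if no enrolled
    face is similar enough (the threshold is an assumed tuning knob)."""
    best_name, best_score = None, -1.0
    for name, embedding in db.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

Where that threshold is set is exactly where the accuracy errors and ethical stakes mentioned above live: too low and the system produces false matches, too high and it fails to verify legitimate identities.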


Why AI and autonomous response are crucial for cybersecurity (VB On-Demand)

#artificialintelligence

Today, cybersecurity is in a state of continuous growth and improvement. In this on-demand webinar, learn how two organizations use a continuous AI feedback loop to identify vulnerabilities, harden defenses and improve the outcomes of their cybersecurity programs. The security risk landscape is in tremendous flux, and the traditional on-premises approach to cybersecurity is no longer enough. Remote work has become the norm, and outside the office walls, employees are letting down their personal security defenses. Cyber risks introduced by the supply chain via third parties are still a major vulnerability, so organizations need to think about not only their defenses but those of their suppliers to protect their priority assets and information from infiltration and exploitation.


Automotive Cybersecurity Market - Insights, Forecast to 2026

#artificialintelligence

The global Automotive Cybersecurity Market size is projected to grow from USD 2.0 billion in 2021 to USD 5.3 billion by 2026, at a CAGR of 21.3%. Increasing incidents of cyber-attacks on vehicles and massive vehicle recalls by OEMs have increased awareness about automotive cybersecurity among OEMs globally. Moreover, increasing government mandates on incorporating several safety features, such as rear-view camera, automatic emergency braking, lane departure warning system, and electronic stability control, have further opened new opportunities for automotive cybersecurity service providers globally. As a result, there are various start-ups present in the automotive cybersecurity ecosystem. Government initiatives toward building an intelligent transport system have also further escalated the demand for cybersecurity solutions all over the world.
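As a quick arithmetic check on the projection above (USD 2.0 billion in 2021 to USD 5.3 billion by 2026), the implied compound annual growth rate can be recomputed directly; it lands within rounding distance of the quoted 21.3%.

```python
# Recompute the CAGR implied by the figures in the text.
start_usd_b, end_usd_b, years = 2.0, 5.3, 2026 - 2021

cagr = (end_usd_b / start_usd_b) ** (1 / years) - 1
# cagr is roughly 0.215, i.e. about 21.5%, consistent with the quoted 21.3%
# given that the endpoint figures are rounded to one decimal place.

# Conversely, growing 2.0 at exactly 21.3% for 5 years gives about 5.25:
implied_end = start_usd_b * (1 + 0.213) ** years
```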