
Exclusive Talk with Toby Lewis, Global Head of Threat Analysis at Darktrace

#artificialintelligence

Toby: My role here at Darktrace is Global Head of Threat Analysis. Half of my day-to-day job involves overseeing the 100 or so cybersecurity analysts we have spread from New Zealand to Singapore, the UK, and most major time zones in the US. My main focus there is evaluating how we can use the Darktrace platform to work with our customers: how can we ensure that our customers get the most out of our cybersecurity expertise and support when using AI to secure their networks? The other half of my role at Darktrace is subject matter expertise. That involves talking to reporters like yourself, or to customers who want to hear more about what Darktrace can do to help them from a cybersecurity perspective, and discussing the context of current events. That part of my role was born out of a nearly 20-year career in cybersecurity: I started in government and was one of the founding members of the National Cyber Security Centre here in the UK.


Survey: 53% of young cybersecurity professionals fear replacement by automation

#artificialintelligence

Although the image of the tech-confused Boomer is a deeply rooted stereotype, TechRepublic has reported that it is in fact a myth: a Dropbox survey found that "people over age 55 are actually less likely than their younger colleagues to find using tech in the workplace stressful." A new report from security advisors Exabeam, the 2020 Cybersecurity Professionals Salary, Skills and Stress Survey, reinforces these findings. The research shows that although a whopping 88% of cybersecurity professionals embrace new technology, confident that automation will help them in their roles, it is the younger generation that is skeptical: 53% of respondents under the age of 45 "agreed or strongly agreed that AI and ML are a threat to their job security," according to the report. The findings, part of an annual survey of 350 cybersecurity professionals from the US, Germany, Singapore, Australia, and the UK, cover attitudes toward salary, training, innovation, and emerging technologies such as artificial intelligence (AI) and machine learning (ML). Overall, the results were positive: the findings show that cybersecurity professionals continue to be satisfied in their jobs.


Europe contemplates new rules for AI – and what this might mean in A/NZ

#artificialintelligence

At the beginning of 2021, the European Commission will propose legislation on AI that will be, in the first instance, horizontal (as opposed to sectoral) and risk-based, with mandatory requirements for high-risk AI applications. The new rules will aim to ensure transparency, accountability and consumer protection, including safety, through robust AI governance and data quality requirements. Europe's approach to regulating technology is based on the precautionary principle, which enables rapid regulatory intervention in the face of possible danger to human, animal or plant health, or to protect the environment. This perspective has helped Europe become a global leader in shaping the digital technology market. In particular, with the introduction of the General Data Protection Regulation (GDPR) in 2018, Europe considers that it has gained a competitive advantage by creating a trust mark for increased privacy protection. Historically, Australia and New Zealand have had a close relationship with the European Union (EU) and its member countries.


Precision Health Data: Requirements, Challenges and Existing Techniques for Data Security and Privacy

arXiv.org Artificial Intelligence

Precision health leverages information from various sources, including omics, lifestyle, environment, social media, medical records, and medical insurance claims, to enable personalized care, prevent and predict illness, and deliver precise treatments. It extensively uses sensing technologies (e.g., electronic health monitoring devices), computation (e.g., machine learning), and communication (e.g., interaction between health data centers). As health data contain sensitive private information, including the identities of patients and carers and patients' medical conditions, proper care is required at all times. Leakage of this private information can affect personal life, leading to bullying, higher insurance premiums, and loss of employment due to medical history. Thus, the security and privacy of, and trust in, this information are of utmost importance. Moreover, government legislation and ethics committees demand the security and privacy of healthcare data. Hence, in light of the security, privacy, ethical, and regulatory requirements of precision health data, finding the best methods and techniques for utilizing health data, and thus enabling precision health, is essential. In this regard, this paper first explores regulations and ethical guidelines around the world, along with domain-specific needs; it then presents the requirements and investigates the associated challenges. Second, it investigates secure and privacy-preserving machine learning methods suitable for the computation of precision health data, along with their usage in relevant health projects. Finally, it illustrates the best available techniques for precision health data security and privacy, with a conceptual system model that enables compliance, ethics clearance, consent management, medical innovation, and developments in the health domain.


Capgemini report shows why AI is the future of cybersecurity

#artificialintelligence

These and many other insights are from Capgemini's Reinventing Cybersecurity with Artificial Intelligence report, published this week. The Capgemini Research Institute surveyed 850 senior executives from seven industries: consumer products, retail, banking, insurance, automotive, utilities, and telecom. Enterprises headquartered in France, Germany, the UK, the US, Australia, the Netherlands, India, Italy, Spain, and Sweden are included in the report; see page 21 of the report for a description of the methodology. Capgemini found that as digital businesses grow, their risk of cyberattacks increases exponentially.


Governing autonomous vehicles: emerging responses for safety, liability, privacy, cybersecurity, and industry risks

arXiv.org Artificial Intelligence

The benefits of autonomous vehicles (AVs) are widely acknowledged, but there are concerns about the extent of these benefits, as well as about AV risks and unintended consequences. In this article, we first examine AVs and the different categories of technological risk associated with them. We then explore strategies that can be adopted to address these risks, and examine emerging government responses for addressing AV risks. Our analyses reveal that, thus far, governments have in most instances avoided stringent measures in order to promote AV development, and the majority of responses are non-binding, focusing on creating councils or working groups to better explore AV implications. The US has been active in introducing legislation to address issues related to privacy and cybersecurity. The UK and Germany, in particular, have enacted laws to address liability issues; other countries mostly acknowledge these issues but have yet to implement specific strategies. To address privacy and cybersecurity risks, strategies ranging from the introduction or amendment of non-AV-specific legislation to the creation of working groups have been adopted. Much less attention has been paid to issues such as environmental and employment risks, although a few governments have begun programmes to retrain workers who might be negatively affected.


Legal AI Co. Luminance Now Targets Reg Review, Brexit GDPR – Artificial Lawyer

#artificialintelligence

Legal AI doc review company Luminance is branching out into the regulatory world to expand its offering, covering areas such as Brexit's impact on contracts and GDPR compliance. The move follows a recent expansion into real estate documentation review, showing that the company's initial strategy of focusing only on M&A due diligence is well and truly over, with a mission now to capture a greater share of the NLP-driven doc review market across different practice areas. In other news, the company has also bagged top New Zealand law firm Russell McVeagh, as its client base widens to 75 firms around the world, operating in 23 countries – not bad considering the company only launched in September 2016. Luminance already works with Chapman Tripp, New Zealand's largest full-service commercial law firm. How much each of these firms uses their Luminance review system is currently unknown, but if market feedback is accurate then not all customers are making maximum use of the AI system they have signed up for – at least not yet.