

CALL FOR BOOK CHAPTER (Adversarial Multimedia Forensics)


It is our pleasure to invite you to submit a chapter for inclusion in the “Adversarial Multimedia Forensics” book, to be published by Springer – Advances in Information Security. The submitted chapter should be 15-20 pages, single-spaced and single-column, prepared in LaTeX, and include sufficient detail to be useful for cybersecurity applications experts and readers with […]

Exclusive Talk with Toby Lewis, Global Head of Threat Analysis at Darktrace


Toby: My role here at Darktrace is the Global Head of Threat Analysis. My day-to-day job involves looking after the 100 or so cybersecurity analysts we have spread from New Zealand to Singapore, the UK, and most major time zones in the US. My main role is to evaluate how we can use the Darktrace platform to work with our customers. How can we ensure that our customers get the most out of our cybersecurity expertise and support when using AI to secure their network? The other half of my role at Darktrace is subject matter expertise. This role involves talking to reporters like yourself, or to customers who want to hear more about what Darktrace can do to help them from a cybersecurity perspective, discussing the context of current events. That part of my role was born out of a nearly 20-year career in cybersecurity. I first started in government and was one of the founding members of the National Cyber Security Centre here in the UK.

Forecasting: theory and practice Machine Learning

Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow readers to navigate through the various topics. We complement the theoretical concepts and applications covered with large lists of free or open-source software implementations and publicly available databases.

False Data Injection Threats in Active Distribution Systems: A Comprehensive Survey Artificial Intelligence

With the proliferation of smart devices and revolutions in communications, electrical distribution systems are gradually shifting from passive, manually-operated and inflexible ones, to a massively interconnected cyber-physical smart grid to address the energy challenges of the future. However, the integration of several cutting-edge technologies has introduced several security and privacy vulnerabilities due to the large-scale complexity and resource limitations of deployments. Recent research trends have shown that False Data Injection (FDI) attacks are becoming one of the most malicious cyber threats within the entire smart grid paradigm. Therefore, this paper presents a comprehensive survey of the recent advances in FDI attacks within active distribution systems and proposes a taxonomy to classify the FDI threats with respect to smart grid targets. The related studies are contrasted and summarized in terms of the attack methodologies and implications on the electrical power distribution networks. Finally, we identify some research gaps and recommend a number of future research directions to guide and motivate prospective researchers.

Modelling and Optimisation of Resource Usage in an IoT Enabled Smart Campus Artificial Intelligence

University campuses are essentially a microcosm of a city. They comprise diverse facilities such as residences, sport centres, lecture theatres, parking spaces, and public transport stops. Universities are under constant pressure to improve efficiencies while offering a better experience to various stakeholders, including students, staff, and visitors. Nonetheless, anecdotal evidence indicates that campus assets are not being utilised efficiently, often due to the lack of data collection and analysis, thereby limiting the ability to make informed decisions on the allocation and management of resources. Advances in Internet of Things (IoT) technologies that can sense and communicate data from the physical world, coupled with data analytics and Artificial Intelligence (AI) that can predict usage patterns, have opened up new opportunities for organisations to lower costs and improve user experience. This thesis explores this opportunity via theory and experimentation, using UNSW Sydney as a living laboratory.

Europe contemplates new rules for AI – and what this might mean in A/NZ


At the beginning of 2021, the European Commission will propose legislation on AI that will be, in the first instance, horizontal (as opposed to sectoral) and risk-based, with mandatory requirements for high-risk AI applications. The new rules will aim to ensure transparency, accountability and consumer protection, including safety, through robust AI governance and data quality requirements. Europe's approach to regulating technology is based on the precautionary principle, which enables rapid regulatory intervention in the face of possible danger to human, animal or plant health, or to protect the environment. This perspective has helped Europe become a global leader in the shaping of the digital technology market. In particular, with the introduction of the General Data Protection Regulation (GDPR) in 2018, Europe considers it has gained a competitive advantage through the creation of a trust mark for increased privacy protection. Australia and New Zealand have historically had a close relationship with the European Union (EU) and its member countries.

Precision Health Data: Requirements, Challenges and Existing Techniques for Data Security and Privacy Artificial Intelligence

Precision health leverages information from various sources, including omics, lifestyle, environment, social media, medical records, and medical insurance claims, to enable personalized care, prevent and predict illness, and deliver precise treatments. It extensively uses sensing technologies (e.g., electronic health monitoring devices), computation (e.g., machine learning), and communication (e.g., interaction between health data centers). As health data contain sensitive private information, including the identity of the patient and carer and the patient's medical conditions, proper care is required at all times. Leakage of this private information can affect personal life, leading to bullying, higher insurance premiums, and loss of employment due to medical history. Thus, the security and privacy of, and trust in, the information are of utmost importance. Moreover, government legislation and ethics committees demand the security and privacy of healthcare data. Hence, in light of precision health data security, privacy, and ethical and regulatory requirements, finding the best methods and techniques for utilizing health data, and thus enabling precision health, is essential. In this regard, this paper first explores regulations and ethical guidelines around the world, as well as domain-specific needs; it then presents the requirements and investigates the associated challenges. Second, it investigates secure and privacy-preserving machine learning methods suitable for the computation of precision health data, along with their usage in relevant health projects. Finally, it illustrates the best available techniques for precision health data security and privacy, with a conceptual system model that enables compliance, ethics clearance, consent management, medical innovations, and developments in the health domain.

AI Startup Pilots Digital Masks That Counter Facial Recognition


Alethea AI, a synthetic media company, is piloting “privacy-preserving face skins,” or digital masks that counter facial recognition algorithms and help users preserve privacy in pre-recorded videos. The move comes as companies such as IBM, Microsoft, and Amazon have announced they would suspend the sale of their facial recognition technology to law enforcement agencies. “This is a new technique we developed in-house that wraps a face with our AI algorithms,” said Alethea AI CEO Arif Khan. “Avatars are fun to play with and develop, but these ‘masks/skins’ are a different, more potent, animal to preserve privacy.” The Los Angeles-based startup launched in 2019 with a focus on creating avatars that content creators could license out for revenue. The idea comes as deepfakes, or manipulated media that can make someone appear to be doing or saying almost anything, become more accessible and widespread. According to a 2019 report from Deeptrace, a company that detects and monitors deepfakes, there were over 14,000 deepfakes online in 2019, and over 850 people were targeted by them. Alethea AI wants to let creators use their own synthetic media avatars for marketing purposes, in a sense letting people leverage deepfakes of themselves for money. Khan compares the current proliferation of facial recognition data to the Napster-style explosion in music piracy in the early 2000s. Companies like Clearview AI have already harvested large amounts of facial recognition data and resold it to security services without consent, with all the bias inherent in facial recognition algorithms, which are generally less accurate on women and people of color.
Clearview AI has marketed itself to law enforcement and scraped billions of images from websites like Facebook, YouTube, and Venmo; the company is currently being sued for doing so. “We will get to a point where there needs to be an iTunes sort of layer, where your face and voice data somehow gets protected,” said Khan. One part of that is creators licensing out their likeness for a fee. Crypto entrepreneur Alex Masmej was the first such avatar: for $99 you can hire the avatar to say 200 words of whatever you want, provided the real Masmej approves the text. Alethea AI has also partnered with software firm Oasis Labs, so that all content generated for Alethea AI's synthetic media marketplace will be verified using Oasis Labs' secure blockchain, akin to Twitter's “verified” blue check mark. “There are a lot of Black Mirror scenarios when we think of deepfakes, but if my personal approval is needed for my deepfakes and it's then time-stamped on a public blockchain for anyone to verify the videos that I actually want to release, that provides a protection that deepfakes are currently lacking,” said Masmej. The privacy pilot takes this idea one step further: not only can creators license out a deepfake, but companies, or anyone else, are prevented from grabbing facial data from a recording. There are two parts to the privacy component. The first, currently being piloted, involves pre-recorded videos. Users upload a video and identify where, and which face skin, they would like superimposed on their own; Alethea AI's algorithms then map the key points of the user's face and wrap the mask around this key-point map. The video is then sent back to the client.
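The approval-and-verification flow Masmej describes boils down to three steps: fingerprint the rendered video, record the creator's signed approval of that fingerprint, and let anyone recompute the fingerprint later to check it against the public record. A minimal sketch of that flow, with hypothetical names, an HMAC standing in for a real public-key signature, and a plain dict standing in for the time-stamped blockchain ledger:

```python
import hashlib
import hmac

def content_fingerprint(video_bytes: bytes) -> str:
    """SHA-256 digest that uniquely identifies a rendered video."""
    return hashlib.sha256(video_bytes).hexdigest()

def approve(creator_key: bytes, fingerprint: str) -> str:
    """Creator signs off on one specific video (HMAC stands in for a signature)."""
    return hmac.new(creator_key, fingerprint.encode(), hashlib.sha256).hexdigest()

# A plain dict stands in for the public, time-stamped ledger.
ledger = {}

def publish(creator_key: bytes, video_bytes: bytes) -> str:
    """Record the creator's approval of this exact video on the 'ledger'."""
    fp = content_fingerprint(video_bytes)
    ledger[fp] = approve(creator_key, fp)
    return fp

def verify(creator_key: bytes, video_bytes: bytes) -> bool:
    """Anyone can recompute the fingerprint and check it against the record."""
    fp = content_fingerprint(video_bytes)
    recorded = ledger.get(fp)
    return recorded is not None and hmac.compare_digest(
        recorded, approve(creator_key, fp)
    )

key = b"creator-secret"
publish(key, b"approved deepfake video")
assert verify(key, b"approved deepfake video")   # approved content checks out
assert not verify(key, b"unapproved video")      # anything else fails
```

The point of the design is that tampering with even one byte of the video changes the fingerprint, so an altered or unapproved deepfake can never match an entry the creator actually signed.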
Alethea AI also wants to enable face masking during real-time communications, such as over a Zoom call, but Khan says computing power doesn't quite allow that yet, though he hopes it will be possible in a year. Alethea AI piloted one example of the tech with Crypto AI Profit, a blockchain and AI influencer, who used it during a YouTube video. Deepfakes, voice spoofing, and other tech-enabled mimicry seem here to stay, but Khan is still optimistic that we're not yet at the point of no return when it comes to protecting ourselves. “I'm hopeful that the individual is accorded some sort of framework in this entire emerging landscape,” said Khan. “It's going to be a very interesting ride. I don't think the battle is fully decided, although existing systems are oriented towards preserving larger, more corporate input.”
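The pre-recorded pipeline described above, mapping facial key points and then wrapping the mask around that key-point map, hinges on fitting a transform that carries the mask's reference landmarks onto the landmarks detected in each frame. A rough sketch of that single fitting step, in plain Python with hypothetical landmark coordinates (a real system would use a face-landmark detector and a full piecewise mesh warp, not the simple similarity transform shown here):

```python
# Fit a 2D similarity transform (rotation + uniform scale + translation) that
# carries the mask's reference landmarks onto the landmarks found in a frame.
# Points are represented as complex numbers: x + 1j*y, so a single complex
# coefficient `a` encodes rotation and scale together.

def fit_similarity(src, dst):
    """Least-squares a, b such that dst ~= a*src + b for paired landmarks."""
    n = len(src)
    src_mean = sum(src) / n
    dst_mean = sum(dst) / n
    num = sum((s - src_mean).conjugate() * (d - dst_mean)
              for s, d in zip(src, dst))
    den = sum(abs(s - src_mean) ** 2 for s in src)
    a = num / den                      # rotation + scale
    b = dst_mean - a * src_mean        # translation
    return a, b

def apply_transform(a, b, pts):
    """Warp mask landmarks into frame coordinates."""
    return [a * p + b for p in pts]

# Mask reference landmarks (e.g. two eye corners and the nose tip) ...
mask_pts = [0 + 0j, 2 + 0j, 1 + 1j]
# ... and where those same landmarks were detected in the video frame
# (here: the mask pattern scaled 2x, rotated 90 degrees, and shifted).
face_pts = [10 + 5j, 10 + 9j, 8 + 7j]

a, b = fit_similarity(mask_pts, face_pts)
warped = apply_transform(a, b, mask_pts)   # lands on face_pts
```

Because the example correspondences are exact, the least-squares fit recovers the transform precisely (a = 2j, i.e. scale 2 with a 90-degree rotation); with noisy detected landmarks the same formula yields the best-fit overlay for each frame.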

Artificial intelligence and the regulatory landscape Lexology


Currently, the European Union does not have any specific legislative instrument or standard to regulate the use and development of AI. However, these requirements are likely to set the stage for future legislation, similar in scope and effect to the General Data Protection Regulation (GDPR) for privacy, indicating that the European Union may be on the cusp of adopting specific and dedicated AI regulatory legislation.

Capgemini report shows why AI is the future of cybersecurity


These and many other insights are from Capgemini's Reinventing Cybersecurity with Artificial Intelligence report, published this week. The Capgemini Research Institute surveyed 850 senior executives from seven industries, including consumer products, retail, banking, insurance, automotive, utilities, and telecom. Enterprises headquartered in France, Germany, the UK, the US, Australia, the Netherlands, India, Italy, Spain, and Sweden are included in the report. Please see page 21 of the report for a description of the methodology. Capgemini found that as digital businesses grow, their risk of cyberattacks increases exponentially.