But there is no doubt that the pandemic has hastened the adoption of emerging digital technologies, ushered in a new era of remote and flexible working arrangements, increased organisations' reliance on digital infrastructure and exposed our tech-related strengths and weaknesses alike. Leaving 2020 in the rear-view mirror, we count down our top 10 predictions for 2021 and beyond in the domain of Digital Law in Australia. Despite an existing principles-based framework for the protection of privacy under the Privacy Act, in recent years the Federal Government has preferred to introduce parallel privacy requirements, such as the 13 Privacy Safeguards under the Consumer Data Right legislation and the privacy aspects of the upcoming Data Availability and Transparency Act for Government agencies. These nascent regimes are similar enough to the existing privacy regime to encourage complacency, yet different enough to give any compliance function a headache. Overlapping and often sectoral regulation adds to the increasing complexity of privacy law in Australia.
Deepfake technology (DT) has reached a new level of sophistication. Cybercriminals can now manipulate sounds, images, and videos to defraud and misinform individuals and businesses. This represents a growing threat to international institutions and individuals that needs to be addressed. This paper provides an overview of deepfakes, their benefits to society, and how DT works. It highlights the threats that deepfakes present to businesses, politics, and judicial systems worldwide. Additionally, the paper explores potential solutions to deepfakes and concludes with future research directions.
Adversarial attacks against machine learning models have become a highly studied topic in both academia and industry. These attacks, along with traditional security threats, can compromise the confidentiality, integrity, and availability of an organization's assets that depend on machine learning models. While it is not easy to predict the types of new attacks that might be developed over time, it is possible to evaluate the risks connected to using machine learning models and to design measures that help minimize those risks. In this paper, we outline a novel framework to guide the risk management process for organizations reliant on machine learning models. First, we define sets of evaluation factors (EFs) in the data domain, the model domain, and the security controls domain. We then develop a method that takes asset and task importance as input, sets the weights of each EF's contribution to confidentiality, integrity, and availability, and, based on the implementation scores of the EFs, determines the overall security state of the organization. From this information, it is possible to identify weak links in the implemented security measures and to find out which measures are missing entirely. We believe our framework can help address the security issues related to the use of machine learning models in organizations and guide them in focusing on adequate security measures to protect their assets.
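The weighted-scoring idea in that abstract can be sketched in a few lines. The EF names, scores, and weights below are purely illustrative assumptions, not values from the paper; the point is only how per-EF implementation scores and confidentiality/integrity/availability (C/I/A) weights might roll up into an overall security state that exposes the weakest property.

```python
# Minimal sketch of the weighted evaluation-factor (EF) scoring idea.
# Each EF carries an implementation score in [0, 1] and weights describing
# its contribution to confidentiality (C), integrity (I), availability (A).
# All names and numbers here are illustrative, not taken from the paper.

def security_state(efs, asset_importance):
    """Aggregate per-EF scores into C/I/A scores scaled by asset importance."""
    totals = {"C": 0.0, "I": 0.0, "A": 0.0}
    weight_sums = {"C": 0.0, "I": 0.0, "A": 0.0}
    for ef in efs:
        for prop, w in ef["weights"].items():
            totals[prop] += w * ef["score"]
            weight_sums[prop] += w
    # Weighted average per property, scaled by how important the asset is.
    return {
        prop: asset_importance * totals[prop] / weight_sums[prop]
        for prop in totals
        if weight_sums[prop] > 0
    }

efs = [
    {"name": "input data validation", "score": 0.8,
     "weights": {"C": 0.2, "I": 0.7, "A": 0.1}},
    {"name": "model access control", "score": 0.5,
     "weights": {"C": 0.6, "I": 0.3, "A": 0.1}},
    {"name": "inference rate limiting", "score": 0.3,
     "weights": {"C": 0.3, "I": 0.1, "A": 0.6}},
]

state = security_state(efs, asset_importance=1.0)
weakest = min(state, key=state.get)  # lowest-scoring property = weak link
```

With these toy numbers, availability comes out lowest, flagging rate limiting as the measure to strengthen first; a real deployment would use the paper's EF sets rather than these placeholders.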
Let us consider a scenario: one night, an executive responsible for operations at a remote downstream oil and gas refinery gets a call from one of their subordinates saying things started acting up ever since they plugged in a USB drive they brought from home. Multiple processes have become unstable and commands sent to equipment are not executed as requested. Panicking, they say there has been a cyber attack on the supervisory control and data acquisition (SCADA) system. Valves, pumps, and compressors connected to the system are going haywire, and the organisation's legacy systems are not equipped to stop whatever new malware snuck in. Production comes to a halt for two days.
Artificial intelligence (AI) is swiftly fueling the development of a more dynamic world. AI, a subfield of computer science that is interconnected with other disciplines, promises greater efficiency and higher levels of automation and autonomy. Simply put, it is a dual-use technology at the heart of the fourth industrial revolution. Together with machine learning (ML) -- a subfield of AI that analyzes large volumes of data to find patterns via algorithms -- it enables enterprises, organizations, and governments to perform impressive feats that ultimately drive innovation and better business. The use of both AI and ML in business is now widespread.
The future of corporate cybersecurity seems to lie in artificial intelligence (AI) and machine learning (ML) solutions, a new report from global IT company Wipro suggests. According to Wipro's annual State of Cybersecurity Report (SOCR), almost half (49 percent) of all cybersecurity-related patents filed in the last four years have centered on AI and ML applications. Almost half of the 200 organizations that participated in the report also said they are expanding cognitive detection capabilities to tackle unknown attacks in their Security Operations Centers (SOCs). From a global perspective, one of the main threats to organizations in the private sector seems to be potential espionage attacks from nation-states. The vast majority (86 percent) of cyberattacks that came from state-sponsored actors fall under the espionage category, and almost half (46 percent) of those attacks targeted the private sector.
Artificial intelligence (AI) applications have attracted considerable ethical attention for good reasons. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers three such risks: system malfunctions, privacy protections, and consent to data repurposing. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticists, information technologists, and data privacy and security experts. The article then speculates on the degree to which these AI risks might be embraced or dismissed by risk management.
Industry 4.0 signifies a seismic shift in the way modern factories and industrial systems operate. It entails large-scale integration across an entire ecosystem, where data from inside and outside the organization converges to create new products, predict market demand and reinvent the value chain. In Industry 4.0, we see the convergence of information technology (IT) and operational technology (OT) at scale. This IT/OT convergence is pushing the boundaries of conventional corporate security strategies, whose focus has always been on protecting networks, systems, applications and processed data involving people and information. In manufacturing industries with smart factories and industrial systems, robotics, sensor technology, 3D printing, augmented reality, artificial intelligence, machine learning and big data platforms work in tandem to deliver breakthrough efficiencies.
The Right to be Forgotten is part of the recently enacted General Data Protection Regulation (GDPR), a law that affects any data holder with data on European Union residents. It gives EU residents the ability to request deletion of their personal data, including training records used to train machine learning models. Unfortunately, deep neural network models are vulnerable to information-leaking attacks such as model inversion attacks, which extract class information from a trained model, and membership inference attacks, which determine whether an example was present in a model's training data. If a malicious party can mount such an attack and learn private information that was meant to be removed, then the model owner has not properly protected their users' rights and their models may not comply with the GDPR. In this paper, we present two efficient methods that address the question of how a model owner or data holder can delete personal data from models in such a way that the models are not vulnerable to model inversion and membership inference attacks, while maintaining model efficacy. We start by presenting a real-world threat model showing that simply removing training data is insufficient to protect users. We follow that with two data removal methods, namely Unlearning and Amnesiac Unlearning, that enable model owners to protect themselves against such attacks while remaining compliant with regulations. We provide extensive empirical analysis showing that these methods are efficient, safe to apply, and effectively remove learned information about sensitive data from trained models while maintaining model efficacy.
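The core intuition behind the amnesiac-style removal described above can be illustrated with a toy model: record the parameter update contributed by each training batch that contains sensitive records, then subtract those stored updates when a deletion request arrives. This is a one-parameter sketch under my own simplifying assumptions, not the authors' implementation, and the undo is only approximate because later gradients were computed from weights the sensitive batch had already influenced.

```python
# Toy sketch of the amnesiac-unlearning idea from the abstract:
# during training, record the parameter update produced by each batch
# containing sensitive records; on a deletion request, subtract those
# stored updates from the final parameters. Illustrative only.

w = 0.0                      # single parameter of the model y = w * x
lr = 0.1                     # learning rate
stored_updates = {}          # batch id -> update to undo on request

batches = {0: (1.0, 2.0), 1: (2.0, 4.0), 2: (3.0, 6.0)}  # (x, y) pairs
sensitive = {1}              # batch 1 holds a record subject to deletion

for bid, (x, y) in batches.items():
    grad = 2 * (w * x - y) * x          # d/dw of the squared error
    update = -lr * grad
    w += update
    if bid in sensitive:
        stored_updates[bid] = update    # remember what this batch did

# Deletion request arrives: undo every update from sensitive batches.
for bid in list(sensitive):
    w -= stored_updates.pop(bid)
```

In practice the stored updates are per-layer tensors rather than a scalar, and the paper's empirical analysis is what establishes that the resulting model resists inversion and membership inference; this sketch only shows the bookkeeping.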
Back in 2008, New York Times best-selling author and Boing Boing alum Cory Doctorow introduced Markus "w1n5t0n" Yallow to the world in the original Little Brother (which you can still read for free right here). The story follows the talented teenage computer prodigy's exploits after he and his friends find themselves caught in the aftermath of a terrorist bombing of the Bay Bridge. They must outwit and out-hack the DHS, which has turned San Francisco into a police state. Its sequel, Homeland, catches up with Yallow a few years down the line as he faces an impossible choice between acting as the heroic hacker his friends see him as and toeing the company line. The third installment, Attack Surface, is a standalone story set in the Little Brother universe. It follows Yallow's archrival, Masha Maximow, an equally talented hacker who finds herself working as a counterterrorism expert for a multinational security firm. By day, she enables tin-pot dictators around the world to repress and surveil their citizens.