OAIC determines AFP interfered with privacy of Australians after using Clearview AI

ZDNet

An investigation by the Office of the Australian Information Commissioner (OAIC) has found that the Australian Federal Police's (AFP) use of the Clearview AI platform interfered with the privacy of Australian citizens. Clearview AI's facial recognition tool is known for breaching privacy laws on numerous fronts by indiscriminately scraping biometric information from the web, collecting data on at least 3 billion people, many of them Australian. From November 2019 to January 2020, 10 members of the AFP's Australian Centre to Counter Child Exploitation (ACCCE) used the Clearview AI platform to conduct searches of certain individuals residing in Australia. ACCCE members used the platform to search for scraped images of possible persons of interest, an alleged offender, victims, members of the public, and members of the AFP, the OAIC said. Although the AFP used the Clearview AI platform only on a trial basis, Information and Privacy Commissioner Angelene Falk determined [PDF] that the federal police failed to undertake a privacy impact assessment of the platform, despite it being a high privacy risk project.


Clearview AI in hot water down under – TechCrunch - MadConsole

#artificialintelligence

After Canada, Australia has now found that the controversial facial recognition company Clearview AI broke national privacy laws when it covertly collected citizens' facial biometrics and incorporated them into its AI-powered identity-matching service -- which it sells to law enforcement agencies and others. In a statement today, Australia's information and privacy commissioner, Angelene Falk, said Clearview AI's facial recognition tool breached the country's Privacy Act 1988 on several counts. In what looks like a major win for privacy down under, the regulator has ordered Clearview to stop collecting facial biometrics and biometric templates from Australians, and to destroy all existing images and templates that it holds. The Office of the Australian Information Commissioner (OAIC) undertook a joint investigation into Clearview with the UK data protection agency, the Information Commissioner's Office (ICO). However, the UK regulator has yet to announce any conclusions. In a separate statement today -- which possibly reads slightly flustered -- the ICO said it is "considering its next steps and any formal regulatory action that may be appropriate under the UK data protection laws".


Clearview AI slammed for breaching Australians' privacy on numerous fronts

ZDNet

Australia's Information Commissioner has found that Clearview AI breached Australia's privacy laws on numerous fronts, after a bilateral investigation uncovered that the company's facial recognition tool collected Australians' sensitive information without consent and by unfair means. The investigation, conducted by the Office of the Australian Information Commissioner (OAIC) and the UK Information Commissioner's Office (ICO), found that Clearview AI's facial recognition tool scraped biometric information from the web indiscriminately and has collected data on at least 3 billion people. The OAIC also found that some Australian police agency users, who were Australian residents and trialled the tool, searched for and identified images of themselves, as well as images of unknown Australian persons of interest, in Clearview AI's database. Considering these factors together, Australia's Information Commissioner Angelene Falk concluded that Clearview AI breached Australia's privacy laws by collecting Australians' sensitive information without consent and by unfair means. In her determination [PDF], Falk explained that consent had not been provided, even though facial images of affected Australians are already available online, as Clearview AI's intent in collecting this biometric data was ambiguous.


Precision Health Data: Requirements, Challenges and Existing Techniques for Data Security and Privacy

arXiv.org Artificial Intelligence

Precision health leverages information from various sources, including omics, lifestyle, environment, social media, medical records, and medical insurance claims to enable personalized care, prevent and predict illness, and deliver precise treatments. It extensively uses sensing technologies (e.g., electronic health monitoring devices), computation (e.g., machine learning), and communication (e.g., interaction between health data centers). As health data contain sensitive private information, including the identities of patients and carers and patients' medical conditions, proper care is required at all times. Leakage of this private information can affect personal life, leading to bullying, higher insurance premiums, or loss of employment due to medical history. Thus, the security and privacy of, and trust in, this information are of utmost importance. Moreover, government legislation and ethics committees demand the security and privacy of healthcare data. In light of precision health data's security, privacy, ethical, and regulatory requirements, identifying the best methods and techniques for utilizing health data, and thus enabling precision health, is essential. In this regard, this paper first explores regulations and ethical guidelines around the world, along with domain-specific needs. It then presents the requirements and investigates the associated challenges. Second, the paper investigates secure and privacy-preserving machine learning methods suitable for the computation of precision health data, along with their usage in relevant health projects. Finally, it illustrates the best available techniques for precision health data security and privacy with a conceptual system model that enables compliance, ethics clearance, consent management, medical innovations, and developments in the health domain.
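One of the privacy-preserving computation techniques commonly covered in surveys of this kind is differential privacy. The sketch below illustrates the basic Laplace mechanism for releasing a noisy count over patient records; the dataset, parameter values (epsilon, sensitivity), and function names are illustrative assumptions for exposition, not details taken from the paper.

```python
import math
import random

# Illustrative sketch of the Laplace mechanism, a standard
# differential-privacy building block for privacy-preserving analytics.


def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(records, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    Counting queries have sensitivity 1: adding or removing one record
    changes the count by at most 1, so the noise scale is 1 / epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Hypothetical patient records for demonstration only.
patients = [
    {"age": 34, "diabetic": True},
    {"age": 58, "diabetic": False},
    {"age": 47, "diabetic": True},
]
noisy = dp_count(patients, lambda p: p["diabetic"], epsilon=1.0)
print(round(noisy, 2))  # close to the true count of 2, plus calibrated noise
```

Smaller epsilon values give stronger privacy at the cost of noisier answers, which is the core trade-off such methods must balance for sensitive health data.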


Australian Face Verification Service starts with citizenship imagery

ZDNet

Australia's descent into a federal government-backed biometric future began on Wednesday, with the first three agencies on board being the Department of Foreign Affairs and Trade (DFAT), the Australian Federal Police (AFP), and the Department of Immigration and Border Protection (DIBP). The first tranche of data to be shared through the new Face Verification Service will be citizenship images, with visa, passport, and driver licence photos to follow. In August last year, the Attorney-General's Department (AGD) said access would be expanded to include the Australian Security Intelligence Organisation, Defence, and the AGD. The AU$18.5 million system is designed to replace existing manual, ad-hoc facial image sharing arrangements between agencies to verify identities, and avoids the creation of a centralised database by having agencies that receive queries run image searches against their own databases, an AGD fact sheet [PDF] claims. "Often, this response will be a simple 'yes' or 'no' to indicate whether two images are of the same person," AGD said.
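The decentralised query flow described above -- a probe image is fanned out to participating agencies, each of which searches only its own holdings and returns just a yes/no verdict, so no central image database is ever built -- can be sketched roughly as follows. All names, embeddings, and the similarity threshold here are illustrative assumptions, not details of the actual AGD system.

```python
from dataclasses import dataclass, field
from math import sqrt

MATCH_THRESHOLD = 0.9  # assumed similarity cut-off, not from the AGD fact sheet


def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


@dataclass
class Agency:
    """Each agency keeps its own gallery; only a yes/no answer leaves it."""
    name: str
    gallery: dict = field(default_factory=dict)  # identity -> embedding

    def verify(self, probe):
        best = max(
            (cosine_similarity(probe, emb) for emb in self.gallery.values()),
            default=0.0,
        )
        return "yes" if best >= MATCH_THRESHOLD else "no"


def face_verification_service(agencies, probe):
    # The hub fans the query out; each agency answers from its own database,
    # so no centralised store of images is ever assembled.
    return {agency.name: agency.verify(probe) for agency in agencies}


# Hypothetical agencies with toy embeddings for demonstration.
dibp = Agency("DIBP", {"citizen-0001": [0.1, 0.9, 0.2]})
afp = Agency("AFP", {"poi-0042": [0.8, 0.1, 0.1]})

result = face_verification_service([dibp, afp], probe=[0.1, 0.9, 0.2])
print(result)  # {'DIBP': 'yes', 'AFP': 'no'}
```

The design choice worth noting is that only the verdict crosses agency boundaries: the queried agency never exports its gallery, which is how the system avoids becoming the centralised database the fact sheet says it is meant to prevent.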