Marriott's disclosure of a data breach affecting as many as 500 million consumers will result in technology, security, and legal expenses for years to come -- and the tab is likely to run into the billions of dollars. The hotel company said that information on about 500 million guests may have been compromised on its Starwood network since 2014. For about 327 million of those guests, personal information such as date of birth, gender, email address, passport number, and phone number may have been exposed. In some cases, payment card information may have been exposed, though that data was encrypted. A recent IBM study conducted by Ponemon on the cost of large data breaches estimated that a breach of 50 million records carries a total price tag of $350 million.
From the foreword to the CIPR's Ethics Guide to Artificial Intelligence in PR, for which the AIinPR panel and the authors are grateful for endorsements and support: In May 2020 the Wall Street Journal reported that 64 per cent of all signups to extremist groups on Facebook were driven by Facebook's own recommendation algorithms. There could hardly be a simpler case study in the question of AI and ethics -- the intersection of what is technically possible and what is morally desirable. CIPR members who find an automated or AI system used by their organisation perpetrating such online harms have a professional responsibility to try to prevent it. For all PR professionals, this is a fundamental requirement of the ability to practise ethically. The question is: if you worked at Facebook, what would you do? If you're not sure, this guide will help you work out your answer. -- Alastair McCapra, Chief Executive Officer, CIPR. Artificial Intelligence is quickly becoming an essential technology for ...
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has highlighted the need for the development of artificial intelligence (AI) in Australia to be accompanied by a robust framework, to ensure nothing is imposed on citizens without appropriate ethical consideration. The organisation has published a discussion paper [PDF], Artificial Intelligence: Australia's Ethics Framework, on the key issues raised by large-scale AI, seeking answers to a handful of questions that are expected to inform the government's approach to AI ethics in Australia. CSIRO highlights eight core principles that will guide the framework: that AI generates net benefits, does no harm, complies with regulatory and legal requirements, appropriately considers privacy, ensures fairness, is transparent and easily explained, contains provisions for contesting a decision made by a machine, and leaves an accountability trail. "Australia's colloquial motto is a 'fair go' for all. Ensuring fairness across the many different groups in Australian society will be challenging, but this cuts right to the heart of ethical AI," CSIRO wrote.
The Office of the Australian Information Commissioner (OAIC) is seeking public comment on draft resources it has published relating to Australia's impending data breach notification laws. The draft resources include guidelines on how to prepare an eligible data breach statement for when the scheme takes effect on February 22, 2018, how to assess a suspected breach, what qualifies as a reportable breach, how to notify the OAIC of an incident, and the exceptions under the legislated obligations. The new laws, mandated under the Privacy Amendment (Notifiable Data Breaches) Act, require organisations covered by the Australian Privacy Act 1988 to notify any individuals likely to be at risk of serious harm from a data breach. This notice must include recommendations about the steps individuals should take in response to the breach, the OAIC explains in its draft material. Australian Information Commissioner Timothy Pilgrim must also be notified.
This past week, everyone's been so focused on Hillary and Trump that few noticed that the Majority Staff of the House Homeland Security Committee finally released its encryption report -- with some pretty big falsehoods in it. "Going Dark, Going Forward: A Primer on the Encryption Debate" is a guide for Congress and stakeholders that makes me wonder if we have a full-blown American hiring crisis for fact-checkers. The report relied on more than "100 meetings with ... experts from the technology industry, federal, state, and local law enforcement, privacy and civil liberties, computer science and cryptology, economics, law and academia, and the Intelligence Community." The first line of the report is based on flat-out incorrect information: "Public engagement on encryption issues surged following the 2015 terrorist attacks in Paris and San Bernardino, particularly when it became clear that the attackers used encrypted communications to evade detection -- a phenomenon known as 'going dark.'"