Transaction data is like a friendship tie: both parties must respect the relationship, and if one party exploits it, the relationship sours. As data becomes increasingly valuable, firms must take care not to exploit their users or they will sour their ties. Ethical uses of data cover a spectrum: at one end, using patient data in healthcare to cure patients is little cause for concern. At the other end, selling data to third parties who exploit users is serious cause for concern. Between these two extremes lies a vast gray area where firms need better ways to frame data risks and rewards in order to make better legal and ethical choices.
Back in 2008, New York Times best-selling author and Boing Boing alum Cory Doctorow introduced Marcus "w1n5t0n" Yallow to the world in the original Little Brother (which you can still read for free right here). The story follows the talented teenage computer prodigy's exploits after he and his friends find themselves caught in the aftermath of a terrorist bombing of the Bay Bridge. They must outwit and out-hack the DHS, which has turned San Francisco into a police state. Its sequel, Homeland, catches up with Yallow a few years down the line as he faces an impossible choice between behaving as the heroic hacker his friends see him as and toeing the company line. The third installment, Attack Surface, is a standalone story set in the Little Brother universe. It follows Yallow's archrival, Masha Maximow, an equally talented hacker who finds herself working as a counterterrorism expert for a multinational security firm. By day, she enables tin-pot dictators around the world to repress and surveil their citizens.
Helen Dixon, head of Ireland's Data Protection Commission, in May submitted a draft decision to more than two dozen of the bloc's privacy regulators for review, as required under the law. Eleven regulators objected to the proposed ruling, triggering a lengthy dispute-resolution process, she said. The contents of the draft decision haven't been disclosed. Twitter's European operations are based in Dublin. "It's a long process," Ms. Dixon said at The Wall Street Journal's virtual CIO Network conference.
Infer Genetic Disease From Your Face - DeepGestalt can accurately identify some rare genetic disorders using a photograph of a patient's face. This could lead payers and employers to analyze facial images and discriminate against individuals who have pre-existing conditions or are at risk of developing medical complications.
We fuel our ambitions with hard work and persistence every day to make our lives easier and more convenient. Spider-Man is truly a visionary when he says, "With great power comes great responsibility." Machine Learning is one such power, boosting our convenience in everything from Spotify's suggestions based on our previous playlists to the filtering of spam and phishing emails. But though ML is an ingenious gift of advanced technology, it remains under constant siege from notorious malware and attacks. Every business grows on the trust of its customers and investors.
This article is a transcript of a presentation I gave to the Rotary eClub of Silicon Valley about Clearview AI, a facial recognition company which the New York Times said "might end privacy as we know it." My presentation was based on an article earlier this year in Medium's OneZero. Thanks to the whole Rotary eClub team for the opportunity to present. This is the Rotary eClub of Silicon Valley. Every week, we are trying to bring you cool and interesting material that will make you go, "Hmm. That's interesting," and hopefully will inspire you to act in some way, whether that's act in service, or perhaps even act in self defense. Because we are going to learn some really interesting stuff over the coming minutes, and that is a function of having as our speaker today, Thomas Smith. He goes by Tom when we were just speaking, so I'll refer to him as Tom. And Tom wrote an article recently that I found in OneZero, I think, via Medium. And I finished reading that article and thought, "Holy poop." So, so as a result of that, I actually reached out to him to say, "Could you speak to our Rotary eClub of Silicon Valley?" And he was gracious enough to write back.
Privacy-preserving recommendation has recently gained momentum, since decentralized user data is increasingly hard for recommendation service providers to collect due to serious concerns over user privacy and data security. This situation is further exacerbated by strict government regulations such as Europe's General Data Protection Regulation (GDPR). Federated Learning (FL) is a newly developed privacy-preserving machine learning paradigm that bridges data repositories without compromising data security and privacy. Many federated recommendation (FedRec) algorithms have therefore been proposed to realize personalized privacy-preserving recommendations. However, existing FedRec algorithms, mostly extended from traditional collaborative filtering (CF) methods, cannot address the cold-start problem well. In addition, their accuracy overhead when trained in a federated setting is often non-negligible compared to centralized recommendations. This paper studies this issue and presents FL-MV-DSSM, a generic content-based federated multi-view recommendation framework that not only addresses the cold-start problem but also significantly boosts recommendation performance by learning a federated model from multiple data sources to capture richer user-level features. The new federated multi-view setting proposed by FL-MV-DSSM opens new usage models and brings new security challenges to FL in recommendation scenarios. We prove the security guarantees of FL-MV-DSSM, and empirical evaluations of FL-MV-DSSM and its variations on public datasets demonstrate its effectiveness. Our code will be released if this paper is accepted.
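FL-MV-DSSM itself is not described in enough detail here to reproduce, but the federated training loop the abstract builds on can be illustrated with a minimal federated averaging (FedAvg) sketch: each client trains locally on data that never leaves the device, and the server only aggregates model weights. The linear model, learning rate, and client data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a linear
    model, using only that client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each holding private data the server never sees.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(np.round(global_w, 2))  # converges toward true_w
```

Only model weights cross the network; the raw interaction data stays on each client, which is the property that lets FedRec systems comply with regulations like the GDPR.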
Precision health leverages information from various sources, including omics, lifestyle, environment, social media, medical records, and medical insurance claims, to enable personalized care, prevent and predict illness, and deliver precise treatments. It extensively uses sensing technologies (e.g., electronic health monitoring devices), computation (e.g., machine learning), and communication (e.g., interaction between health data centers). Because health data contain sensitive private information, including the identities of patients and carers and the patient's medical conditions, proper care is required at all times. Leakage of this private information can affect a person's life, leading to bullying, higher insurance premiums, or loss of employment due to medical history. Thus, the security and privacy of, and trust in, this information are of utmost importance. Moreover, government legislation and ethics committees demand the security and privacy of healthcare data. Hence, in light of the security, privacy, ethical, and regulatory requirements on precision health data, finding the best methods and techniques for utilizing health data is essential to precision health. In this regard, this paper first explores regulations and ethical guidelines around the world, along with domain-specific needs; it then presents the requirements and investigates the associated challenges. Second, it investigates secure and privacy-preserving machine learning methods suitable for computing on precision health data, along with their usage in relevant health projects. Finally, it illustrates the best available techniques for precision health data security and privacy with a conceptual system model that enables compliance, ethics clearance, consent management, medical innovation, and development in the health domain.
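One concrete privacy-preserving computation technique commonly used on sensitive health data is differential privacy. As a minimal sketch (the dataset, predicate, and epsilon value are illustrative assumptions, not from the paper), the Laplace mechanism releases an aggregate count without exposing any individual record:

```python
import numpy as np

def laplace_count(records, predicate, epsilon=1.0, rng=None):
    """Release a count query under epsilon-differential privacy.
    A count has sensitivity 1 (adding or removing one record changes
    it by at most 1), so Laplace noise of scale 1/epsilon suffices."""
    rng = rng or np.random.default_rng()
    true_count = sum(predicate(r) for r in records)
    return true_count + rng.laplace(scale=1.0 / epsilon)

# Hypothetical cohort: how many patients have a given condition,
# released without revealing whether any one patient is in the count.
patients = [{"id": i, "diabetic": i % 4 == 0} for i in range(1000)]
noisy = laplace_count(patients, lambda p: p["diabetic"], epsilon=1.0,
                      rng=np.random.default_rng(42))
print(round(noisy))  # close to the true count of 250
```

A smaller epsilon adds more noise and gives a stronger privacy guarantee; choosing it is a policy decision as much as a technical one, which is where the compliance and ethics-clearance machinery the paper describes comes in.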
In a socially distanced ceremony at a Delaware high school this week, presidential hopeful Joe Biden introduced Senator Kamala Harris as his running mate in the 2020 election. Like Barack Obama, Hillary Clinton, and Biden's own campaign staff, Harris has a long relationship with Silicon Valley that includes fundraisers and soliciting advice from tech leaders and employees. Her rise in political prominence in the San Francisco Bay Area coincides with the exponential growth of the tech industry and major companies that call California home -- including Apple, Facebook, and Google. As attorney general of California and as a U.S. Senator for the state, Harris has played a role in determining what constitutes online crime, defining online privacy, hammering out parameters for the major app stores, and supporting the prosecution of revenge porn or online sexual harassment. She is someone who has both threatened legal action against tech companies and referred to them as "family."
In the last few years, companies have started using such race-detection software to understand how certain customers use their products, who looks at their ads, or what people of different racial groups like. Others use the tool to seek different racial features in stock photography collections, typically for ads, or in security, to help narrow down the search for someone in a database. In China, where face tracking is widespread, surveillance cameras have been equipped with race-scanning software to track ethnic minorities. The field is still developing, and it is an open question how companies, governments and individuals will take advantage of such technology in the future. Use of the software is fraught, as researchers and companies have begun to recognize its potential to drive discrimination, posing challenges to widespread adoption.