Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. Over two days of testimony before Congress earlier this month, Facebook founder and CEO Mark Zuckerberg dodged a litany of questions from lawmakers about how the data of 87 million Americans ended up in the hands of voter-profiling firm Cambridge Analytica. The spectacle put a spotlight on the company's murky data-collection and sharing practices and sparked a much-needed discussion about whether and how to hold companies accountable for their handling of user data. However deserved, Facebook has so far borne the brunt of public scrutiny for what has unfortunately become standard practice for web platforms and services. As the Ranking Digital Rights 2018 Corporate Accountability Index--an annual ranking of some of the world's most powerful internet, mobile, and telecommunications companies, released this week--shows, companies across the board lack transparency about what user data they collect and share, and tell us alarmingly little about their data-sharing agreements with advertisers and other third parties.
Adversaries leverage social network friend relationships to collect sensitive data from users and target them with abuse that includes fake news, cyberbullying, malware, and propaganda. Case in point: 71 out of 80 user-study participants had at least one Facebook friend with whom they never interact, either on Facebook or in real life, or who they believe is likely to abuse their posted photos or status updates, or to post offensive, false, or malicious content. We introduce AbuSniff, a system that identifies Facebook friends perceived as strangers or abusive and protects the user by unfriending, unfollowing, or restricting such friends' access to information. We develop a questionnaire to detect perceived strangers and friend abuse. We introduce mutual Facebook activity features and show that they can train supervised learning algorithms to predict questionnaire responses. We have evaluated AbuSniff through several user studies with a total of 263 participants from 25 countries. After answering the questionnaire, participants agreed to unfollow and restrict abusers in 91.6% and 90.9% of the cases, respectively, and to sandbox or unfriend non-abusive strangers in 92.45% of the cases. Without answering the questionnaire, participants agreed to take the AbuSniff-suggested action against friends predicted to be strangers or abusive in 78.2% of the cases. AbuSniff increased participants' self-reported willingness to reject invitations from strangers and abusers, their awareness of the implications of friend abuse, and their perceived protection from friend abuse.
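The abstract's core technical idea is that mutual-activity features can train a supervised classifier to predict questionnaire responses. A minimal sketch of that pipeline, with entirely hypothetical feature names and synthetic data (the paper's actual features and models are not specified here), might look like:

```python
# Hypothetical sketch, in the spirit of AbuSniff: predicting a questionnaire
# label ("stranger" vs. "friend") from mutual Facebook activity features.
# Feature names, labeling rule, and data are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Each row: [mutual_posts, mutual_photo_tags, common_friends, comments_exchanged]
X = rng.integers(0, 50, size=(200, 4)).astype(float)

# Toy ground truth: very little mutual activity -> perceived stranger (0)
y = (X.sum(axis=1) > 90).astype(int)

# Train and cross-validate a supervised classifier on the activity features
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

With real questionnaire responses as labels, the same cross-validation loop would estimate how well the activity features stand in for asking the user directly.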
Recommender systems are nowadays successfully used by all major websites (from e-commerce to social media) to filter content and make suggestions in a personalized way. Academic research largely focuses on the value of recommenders for consumers, e.g., in terms of reduced information overload. To what extent and in which ways recommender systems create business value is, however, much less clear, and the literature on the topic is scattered. In this research commentary, we review existing publications on field tests of recommender systems and report which business-related performance measures were used in such real-world deployments. We summarize common challenges of measuring the business value in practice and critically discuss the value of algorithmic improvements and offline experiments as commonly done in academic environments. Overall, our review indicates that various open questions remain, both regarding the realistic quantification of the business effects of recommenders and the performance assessment of recommendation algorithms in academia.
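The offline experiments the commentary questions typically reduce to ranking-accuracy metrics. As a concrete illustration of the gap it describes, here is a minimal precision@k computation, the kind of offline measure that is easy to optimize in academia but does not directly quantify business outcomes such as revenue or conversion (item names and data are illustrative):

```python
# Minimal offline evaluation sketch: precision@k for a ranked recommendation
# list, contrasted in the commentary with business KPIs measured in field tests.
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that the user found relevant."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

recommended = ["A", "B", "C", "D", "E"]  # system's ranked suggestions
relevant = {"B", "D", "F"}               # items the user actually engaged with

print(precision_at_k(recommended, relevant, 3))  # prints 0.3333333333333333
```

A field deployment would instead (or additionally) log click-through, conversion, or revenue per session, which is precisely the measurement gap the review surveys.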
Xu, Anbang (IBM Research - Almaden) | Liu, Haibin (IBM Research - Almaden) | Gou, Liang (IBM Research - Almaden) | Akkiraju, Rama (IBM Research - Almaden) | Mahmud, Jalal (IBM Research - Almaden) | Sinha, Vibha (IBM Research - Almaden) | Hu, Yuheng (IBM Research - Almaden) | Qiao, Mu (IBM Research - Almaden)
Brand personality has been shown to affect a variety of user behaviors such as individual preferences and social interactions. Despite intensive research efforts in human personality assessment, little is known about brand personality and its relationship with social media. Leveraging theory from marketing, we analyze how brand personality is associated with its contributing factors as embodied in social media. Based on the analysis of over 10K survey responses and a large corpus of social media data from 219 brands, we quantify the relative importance of the factors driving brand personality. The brand personality model developed with social media data achieves predicted R² values as high as 0.67. We conclude by illustrating how modeling brand personality can help users find brands suiting their personal characteristics and help companies manage brand perceptions.
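The abstract's headline result is a predictive model whose fit is reported as predicted R². A hedged sketch of that style of evaluation, using synthetic data and hypothetical stand-in features (the paper's actual factors and model are not specified here), could be:

```python
# Illustrative sketch: regressing a brand-personality score on social-media-
# derived factors and reporting R² on held-out data, the style of evaluation
# the abstract reports (its best model reaches R² = 0.67). All data synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Columns stand in for hypothetical factors, e.g. post topics, language
# style, and follower interactions.
X = rng.normal(size=(300, 3))
true_weights = np.array([0.8, -0.5, 0.3])
# Target: a personality dimension score (e.g. "sincerity"), with noise.
y = X @ true_weights + rng.normal(scale=0.5, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print(round(r2_score(y_te, model.predict(X_te)), 2))
```

Evaluating R² on a held-out split (rather than on the training data) is what makes the figure a *predicted* R², i.e. a measure of out-of-sample fit.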
Around the world, more than 2.3 billion people are on Facebook, actively communicating, posting, and consuming on the platform, a figure that continues to grow and drive record profits despite a barrage of privacy scandals and heightened scrutiny from U.S. lawmakers. Masses of people are not abandoning Facebook, according to the company's fourth-quarter earnings, released on Wednesday. In fact, the company has reversed a troubling trend in its most important market: Facebook added users in North America for the first time all year. For Facebook fans, the benefits of using the platform are clear: It's a way to stay connected with friends, to consume news and entertainment, and, for businesses, to find potential customers and audiences. In recent years, however, researchers and consumer advocates have scrutinized what the downsides of all that growth and connectivity could mean for society and for individual health and well-being.