Elon Musk's Twitter Bot Problem Is Fake News

WSJ.com: WSJD - Technology

With his professed concern about fake accounts on Twitter, Elon Musk appears to be grasping at legal straws in an attempt to back out of his commitment to buy the social networking company for $54.20 a share, or at least to pay less for it. But his gambit has shone a light on a real scourge for online companies and their users. Counting the automated accounts that mimic real people is just as slippery as valuing companies. A 2020 study by Adrian Rauchfleisch and Jonas Kaiser, which looked at thousands of Twitter accounts, including hundreds of verified politicians as well as "obvious" bots, found that Botometer, the industry-standard machine-learning algorithm trained to estimate the likelihood that an account is a bot, yields imprecise scores, leading to both false negatives and false positives.
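As a rough illustration of why such scores mislead, the sketch below shows how any fixed cutoff on a probability-like bot score produces both kinds of error at once. The account names, scores, labels, and threshold are invented for illustration, not Botometer output.

```python
# Minimal sketch of why a score-based bot detector yields both false
# positives and false negatives: the tool outputs a probability-like score,
# and any cutoff misclassifies some accounts on each side. The scores and
# ground-truth labels below are illustrative assumptions only.
accounts = [
    # (account name, bot score in [0, 1], actually a bot?)
    ("verified_politician", 0.62, False),  # chatty human that looks bot-like
    ("hobby_photographer",  0.18, False),
    ("obvious_spam_bot",    0.91, True),
    ("subtle_spam_bot",     0.35, True),   # bot that mimics human behaviour
]

THRESHOLD = 0.5  # assumed cutoff; the choice is left to the analyst

false_positives = [name for name, score, is_bot in accounts
                   if score >= THRESHOLD and not is_bot]
false_negatives = [name for name, score, is_bot in accounts
                   if score < THRESHOLD and is_bot]

print("false positives:", false_positives)  # humans counted as bots
print("false negatives:", false_negatives)  # bots counted as humans
```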


Snap introduces pocket-size camera drone called Pixy

ZDNet

Since completing a degree in journalism, Aimee has covered a range of topics, including business, retail, manufacturing, and travel, and she continues to expand her repertoire as a tech journalist with ZDNet. Snap has unveiled its yellow, pocket-size camera drone called Pixy, designed to be used in conjunction with Snapchat. According to the social media company, Pixy requires no setup or controller to operate; instead, it is activated with a tap of a button on the device. Users can choose how it flies from four preset flight modes, which can be changed using the device's camera dial.


Liberal mag mocked for knocking 'petromasculinity', hoping 'climate crisis will help change masculinity'

FOX News

Liberal magazine The New Republic (TNR) garnered the scorn of critics after publishing an article Friday celebrating "petromasculinity" being rejected by younger generations, specifically those using online dating apps. In the piece, headlined "'Petromasculinity' Is Becoming Toxic, Too--at Least to Online Daters," TNR praised what appeared to be a shift among online daters toward preferring a potential partner who cares about climate change and "rejecting petromasculinity: the climate denial, authoritarian politics, and sexism that are too often inextricably linked."


Can AI's Voracious Appetite Be Tamed?

#artificialintelligence

In the spring of 2019, artificial intelligence datasets started disappearing from the internet. Such collections -- typically gigabytes of images, video, audio, or text data -- are the foundation for the increasingly ubiquitous and profitable form of AI known as machine learning, which can mimic various kinds of human judgments such as facial recognition. In April, it was Microsoft's MS-Celeb-1M, consisting of 10 million images of 100,000 people's faces -- many of them celebrities, as the name suggests, but also many who were not public figures -- harvested from internet sites. In June, Duke University researchers withdrew their multi-target, multi-camera dataset (DukeMTMC), which consisted of images taken from videos, mostly of students, recorded at a busy campus intersection over 14 hours on a day in 2014. Around the same time, people reported that they could no longer access Diversity in Faces, a dataset of more than a million facial images collected from the internet, released at the beginning of 2019 by a team of IBM researchers. Altogether, about a dozen AI datasets vanished -- hastily scrubbed by their creators after researchers, activists, and journalists exposed an array of problems with the data and the ways it was used, from privacy, to race and gender bias, to issues with human rights.


Fine-grained Prediction of Political Leaning on Social Media with Unsupervised Deep Learning

Journal of Artificial Intelligence Research

Predicting the political leaning of social media users is an increasingly popular task, given its usefulness for electoral forecasts, opinion dynamics models, and for studying the political dimension of polarization and disinformation. Here, we propose a novel unsupervised technique for learning fine-grained political leaning from the textual content of social media posts. Our technique leverages a deep neural network for learning latent political ideologies in a representation learning task. Users are then projected into a low-dimensional ideology space where they are subsequently clustered. The political leaning of a user is automatically derived from the cluster to which the user is assigned. We evaluated our technique in two challenging classification tasks and compared it to baselines and other state-of-the-art approaches. Our technique obtains the best results among all unsupervised techniques, with micro F1 = 0.426 in the 8-class task and micro F1 = 0.772 in the 3-class task. Besides being interesting in their own right, our results also pave the way for the development of new and better unsupervised approaches for the detection of fine-grained political leaning.
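A minimal sketch of the pipeline the abstract describes is given below: represent each user's posts, project users into a low-dimensional space, cluster them, and read a leaning off each cluster. TF-IDF plus truncated SVD stand in here for the paper's deep representation learner, and the post texts and cluster labelling are invented assumptions, not the authors' data or model.

```python
# Sketch of a represent-project-cluster pipeline for political leaning.
# TF-IDF + TruncatedSVD approximate the role of the paper's deep
# representation learner; the example posts are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# One aggregated document of post text per user (hypothetical examples).
user_posts = [
    "cut taxes and shrink government spending",
    "expand public healthcare and raise the minimum wage",
    "secure the border and support small business",
    "invest in renewable energy and climate action",
]

# 1) Represent each user's posts as a vector.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(user_posts)

# 2) Project users into a low-dimensional "ideology space".
svd = TruncatedSVD(n_components=2, random_state=0)
X_low = svd.fit_transform(X)

# 3) Cluster users; each cluster would then be labelled with a leaning,
#    e.g. by inspecting the accounts or hashtags it contains.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
clusters = kmeans.fit_predict(X_low)

for user_id, cluster in enumerate(clusters):
    print(f"user {user_id} -> cluster {cluster}")
```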


The New Intelligence Game

#artificialintelligence

The relevance of the video is that the browser identified the application being used by the IAI as Google Earth. According to the OSC 2006 report, the Arabic-language caption reads "Islamic Army in Iraq/The Military Engineering Unit – Preparations for Rocket Attack", and the video was recorded on 5/1/2006; we provide, in Appendix A, a reproduction of the screenshot picture made available in the OSC report. Prior to the release of this video demonstration of the use of Google Earth to plan attacks, according to the OSC 2006 report, discussions took place in OSC-monitored online forums on the use of Google Earth as a GEOINT tool for terrorist planning. On August 5, 2005, the user "Al-Illiktrony" posted a message to the Islamic Renewal Organization forum titled "A Gift for the Mujahidin, a Program To Enable You to Watch Cities of the World Via Satellite". In this post the author dedicated Google Earth to the mujahidin brothers and to Shaykh Muhammad al-Mas'ari; the post was answered in the forum by "Al-Mushtaq al-Jannah", who warned that Google programs retain complete information about their users. This is a relevant issue, but there are two caveats. First, given the number of Google Earth users, it may be difficult for Google to flag a jihadist using the functionality in time to prevent an attack plan. One possible solution would be for Google to flag computers based on searched websites and locations, for instance computers that visit certain critical sites, but this becomes a problem when landmarks are used. Second, an attacker may not use his own computer to perform the search, or may mask the IP address. On October 3, 2005, as described in the OSC 2006 report, in a reply to a posting by Saddam Al-Arab on the Baghdad al-Rashid forum requesting the identification of a roughly sketched map, "Almuhannad" posted a link to a site that provided a free download of Google Earth, suggesting that the satellite imagery from Google's service could help identify the sketch.


Technology Ethics in Action: Critical and Interdisciplinary Perspectives

arXiv.org Artificial Intelligence

This special issue interrogates the meaning and impacts of "tech ethics": the embedding of ethics into digital technology research, development, use, and governance. In response to concerns about the social harms associated with digital technologies, many individuals and institutions have articulated the need for a greater emphasis on ethics in digital technology. Yet as more groups embrace the concept of ethics, critical discourses have emerged questioning whose ethics are being centered, whether "ethics" is the appropriate frame for improving technology, and what it means to develop "ethical" technology in practice. This interdisciplinary issue takes up these questions, interrogating the relationships among ethics, technology, and society in action. This special issue engages with the normative and contested notions of ethics itself, how ethics has been integrated with technology across domains, and potential paths forward to support more just and egalitarian technology. Rather than starting from philosophical theories, the authors in this issue orient their articles around the real-world discourses and impacts of tech ethics--i.e., tech ethics in action.


Top 20 Digital Transformation Pros you NEED To Follow - The AI Journal

#artificialintelligence

Digital transformation moved at a relatively slow pace for the past ten years, mainly focusing on improving products, employee experience, and processes. But after COVID-19 hit, IT decision-makers were forced to reprioritize their IT initiatives and increase digital investments. According to IDC, worldwide digital transformation technology investment is set to reach at least $7.4 trillion over the next four years, and for the first time DX will account for the majority of IT spending, a predicted 53% of budgets. Digital transformation is a set of methodologies and tools that modern companies use to optimize their operations, such as extending their reach, providing differentiated service, and increasing performance. It is not just a new department in the firm; it is a game-changer for technology's role in the corporate environment, which is why it is increasingly described as the 4th Industrial Revolution. "Think of digital transformation less as a technology project to be finished than as a state of perpetual agility, always ready to evolve for whatever customers want next, and you'll be pointed down the right path."


CES2022 Twitter NodeXL SNA Map and Report for Thursday, 23 December 2021 at 06:58 UTC

#artificialintelligence

The graph represents a network of 5,523 Twitter users whose recent tweets contained "CES2022", or who were replied to or mentioned in those tweets, taken from a data set limited to a maximum of 18,000 tweets. The network was obtained from Twitter on Thursday, 23 December 2021 at 07:58 UTC. The tweets in the network were tweeted over the 2-day, 13-hour, 10-minute period from Monday, 20 December 2021 at 17:44 UTC to Thursday, 23 December 2021 at 06:54 UTC. Additional tweets that were mentioned in this data set were also collected from prior time periods; these older tweets may extend the overall time period covered by the data.
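For readers unfamiliar with such reports, the sketch below shows the kind of user-to-user graph being described: nodes are Twitter accounts, and a directed edge records a reply or mention. It uses networkx and a few invented tweet records rather than NodeXL or the actual CES2022 data.

```python
# Minimal sketch of a reply/mention network like the one the report maps.
# NodeXL builds this inside a spreadsheet; networkx illustrates the same
# structure. The tweet records below are illustrative assumptions.
import networkx as nx

tweets = [
    {"author": "alice", "mentions": ["bob"], "reply_to": None},
    {"author": "bob",   "mentions": [],      "reply_to": "carol"},
    {"author": "carol", "mentions": ["alice", "bob"], "reply_to": None},
]

G = nx.DiGraph()
for tweet in tweets:
    G.add_node(tweet["author"])
    targets = list(tweet["mentions"])
    if tweet["reply_to"]:
        targets.append(tweet["reply_to"])
    for target in targets:
        # One edge per reply or mention; repeated interactions could
        # instead increment an edge weight.
        G.add_edge(tweet["author"], target)

print(G.number_of_nodes(), "users,", G.number_of_edges(), "edges")
```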


Do You See What I See? Capabilities and Limits of Automated Multimedia Content Analysis

arXiv.org Artificial Intelligence

The ever-increasing amount of user-generated content online has led, in recent years, to an expansion in research and investment in automated content analysis tools. Scrutiny of automated content analysis has accelerated during the COVID-19 pandemic, as social networking services have placed a greater reliance on these tools due to concerns about health risks to their moderation staff from in-person work. At the same time, there are important policy debates around the world about how to improve content moderation while protecting free expression and privacy. In order to advance these debates, we need to understand the potential role of automated content analysis tools. This paper explains the capabilities and limitations of tools for analyzing online multimedia content and highlights the potential risks of using these tools at scale without accounting for their limitations. It focuses on two main categories of tools: matching models and predictive models. Matching models include cryptographic and perceptual hashing, which compare user-generated content with existing and known content. Predictive models (including computer vision and computer audition) are machine learning techniques that aim to identify characteristics of new or previously unknown content.
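To make the matching-model distinction concrete, the sketch below contrasts an exact cryptographic hash with a toy average-hash stand-in for perceptual hashing. The 4x4 pixel grids are invented, and production systems use far more robust perceptual hashes (e.g. PhotoDNA or PDQ), so this is only an illustration of the two behaviours.

```python
# A cryptographic hash (SHA-256) changes completely with any edit, so it
# only catches exact copies; a perceptual-style hash (here a toy average
# hash over a tiny grayscale "image") changes little for near-duplicates,
# which can then be caught by a small Hamming distance.
import hashlib

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

original = [[10, 200, 10, 200],
            [200, 10, 200, 10],
            [10, 200, 10, 200],
            [200, 10, 200, 10]]
# Slightly brightened copy of the same picture.
near_copy = [[p + 5 for p in row] for row in original]

# Cryptographic hashes: any change yields a completely different digest.
print(hashlib.sha256(bytes([p for r in original for p in r])).hexdigest()[:16])
print(hashlib.sha256(bytes([p for r in near_copy for p in r])).hexdigest()[:16])

# Perceptual-style hashes: the near-copy stays within a small distance.
print("hamming distance:", hamming(average_hash(original), average_hash(near_copy)))
```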