Opinion: Artificial intelligence is changing hiring and firing


The BDN Opinion section operates independently and does not set newsroom policies or contribute to reporting or editing articles elsewhere in the newspaper. Keith E. Sonderling is a commissioner on the U.S. Equal Employment Opportunity Commission. The views here are the author's own and should not be attributed to the EEOC or any other member of the commission. With 86 percent of major U.S. corporations predicting that artificial intelligence will become a "mainstream technology" at their company this year, management-by-algorithm is no longer the stuff of science fiction. AI has already transformed the way workers are recruited, hired, trained, evaluated and even fired. One recent study found that 83 percent of human resources leaders rely in some form on technology in employment decision-making.

Human Detection of Machine-Manipulated Media

Communications of the ACM

The recent emergence of artificial intelligence (AI)-powered media manipulations has widespread societal implications for journalism and democracy,7 national security,1 and art.8,14 AI models have the potential to scale misinformation to unprecedented levels by creating various forms of synthetic media.21 For example, AI systems can synthesize realistic video portraits of an individual with full control of facial expressions, including eye and lip movement;11,18,34,35,36 clone a speaker's voice with a few training samples and generate new natural-sounding audio of something the speaker never said;2 synthesize visually indicated sound effects;28 generate high-quality, relevant text based on an initial prompt;31 produce photorealistic images of a variety of objects from text inputs;5,17,27 and generate photorealistic videos of people expressing emotions from only a single image.3,40 The technologies for producing machine-generated, fake media online may outpace the ability to manually detect and respond to such media. We developed a neural network architecture that combines instance segmentation with image inpainting to automatically remove people and other objects from images.13,39 Figure 1 presents four examples of participant-submitted images and their transformations. The AI, which we call a "target object removal architecture," detects an object, removes it, and replaces its pixels with pixels that approximate what the background should look like without the object.
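The abstract's "target object removal" pipeline - detect an object, remove it, and fill the hole with plausible background - can be illustrated with a toy sketch. This is not the authors' neural architecture (which combines instance segmentation with learned inpainting); it only mimics the inpainting step on a tiny grayscale grid by averaging unmasked neighbours:

```python
# Toy sketch of the "remove and inpaint" idea on a grayscale grid.
# Real systems use instance segmentation for the mask and a learned
# inpainting network for the fill; here the fill is a neighbour mean.
def remove_object(image, mask):
    """Replace masked (object) pixels with the mean of unmasked neighbours."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                neigh = [image[ny][nx]
                         for ny in (y - 1, y, y + 1)
                         for nx in (x - 1, x, x + 1)
                         if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]]
                out[y][x] = sum(neigh) / len(neigh) if neigh else 0
    return out
```

On a uniform background the masked pixel is filled with the surrounding value, which is the intuition behind approximating "what the background should look like without the object".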

A Machine Learning Pipeline to Examine Political Bias with Congressional Speeches


Machine learning, with advances in natural language processing and deep learning, has been actively used to study political bias on social media. A key challenge in modeling political bias, however, is the human effort required to label seed social media posts for training machine learning models. Although effective, this approach suffers from a time-consuming labeling process and the significantly higher cost of labeling enough data for machine learning models. The web offers invaluable data on political bias, from biased news media outlets publishing articles on socio-political issues to biased user discussions about a range of topics across social forums. In this work, we introduce a novel approach that labels political bias for social media posts directly from US congressional speeches, without any human intervention, for downstream machine learning models.
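The core idea - distant labeling from congressional speeches instead of human annotation - can be sketched minimally. The speakers, party metadata, and label names below are invented for illustration; the paper's actual pipeline is more involved:

```python
# Hypothetical sketch of distant labeling: bias labels come from the
# speaker's party affiliation in congressional speech records, so no
# human annotator is needed. Speakers and parties here are made up.
SPEAKER_PARTY = {"Speaker A": "D", "Speaker B": "R"}  # assumed metadata

def label_speeches(speeches):
    """Attach a bias label to each (speaker, text) pair from party metadata."""
    labeled = []
    for speaker, text in speeches:
        party = SPEAKER_PARTY.get(speaker)
        if party:  # skip speakers with unknown affiliation
            labeled.append((text, "liberal" if party == "D" else "conservative"))
    return labeled
```

The labeled speeches can then serve as training data for a downstream classifier applied to social media posts.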

Truth or Fake - How artificial intelligence on WhatsApp can help fight disinformation


The tagline of Spanish fact-checking outlet Maldita puts readers at the centre of the team's journalistic work: the Spanish phrase "Hazte Maldito" (meaning "Be part of Maldita!") invites the public to send in potentially fake news items and ask questions about the virus. Before the pandemic, Maldita received about 200 messages a day on their WhatsApp number, occupying a full-time journalist. After the pandemic reached Europe in March 2020, their daily messages increased to nearly 2,000. Maldita has launched a WhatsApp chatbot to automate and centralize their interactions with their community. After a user sends a social media post to the WhatsApp number - either a photo, a video, a link, or a WhatsApp channel that has been sharing questionable content - the bot analyses the content.
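A minimal sketch of what such a bot's text-matching step might look like, assuming it compares an incoming message against a database of previously checked claims. The similarity measure (token Jaccard) and threshold are illustrative choices, not Maldita's actual system:

```python
# Illustrative claim matcher for a fact-checking chatbot: match an
# incoming message against previously checked claims by token overlap.
def jaccard(a, b):
    """Jaccard similarity between the word sets of two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def match_factcheck(message, database, threshold=0.3):
    """Return the best-matching previously checked claim, or None."""
    best = max(database, key=lambda claim: jaccard(message, claim))
    return best if jaccard(message, best) >= threshold else None
```

If a match clears the threshold, the bot can reply with the existing fact-check; otherwise the item is queued for a human journalist.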

The Case for Claim Difficulty Assessment in Automatic Fact Checking Artificial Intelligence

Fact-checking is the process (human, automated, or hybrid) by which claims (i.e., purported facts) are evaluated for veracity. In this article, we raise an issue that has received little attention in prior work - that some claims are far more difficult to fact-check than others. We discuss the implications this has for both practical fact-checking and research on automated fact-checking, including task formulation and dataset design. We report a manual analysis undertaken to explore factors underlying varying claim difficulty and categorize several distinct types of difficulty. We argue that prediction of claim difficulty is a missing component of today's automated fact-checking architectures, and we describe how this difficulty prediction task might be split into a set of distinct subtasks.
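The proposed split of difficulty prediction into distinct subtasks can be sketched as independent signals combined into a score. The specific signals and weights below are invented for illustration, not the paper's taxonomy:

```python
# Hypothetical decomposition of claim-difficulty prediction into
# subtask signals; the features and scoring are made up for illustration.
def needs_numeric_evidence(claim):
    """Claims citing figures may require statistical evidence to verify."""
    return any(ch.isdigit() for ch in claim)

def is_multi_part(claim):
    """Compound claims must be decomposed before checking each part."""
    return " and " in claim or ";" in claim

def difficulty_score(claim):
    """Combine subtask signals into a crude 0-2 difficulty score."""
    return int(needs_numeric_evidence(claim)) + int(is_multi_part(claim))
```

An automated fact-checking pipeline could use such a score to route hard claims to human checkers and easy ones to fully automated verification.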

Three Sunday shows ignored NYT report on botched drone strike Pentagon now admits killed 10 Afghan civilians

FOX News

Fox News anchor Bret Baier offers analysis on that and other breaking news stories on 'Your World'. Three of the five prominent Sunday morning newscasts avoided the explosive New York Times report about the botched U.S. drone strike that the Pentagon finally admitted killed Afghan civilians rather than the ISIS-K terrorists the Biden administration had previously touted. During a Friday press conference, the Pentagon confirmed that the Aug. 28 drone strike, meant as a response to the Aug. 26 terrorist attack outside the Kabul airport that left 13 U.S. servicemen dead, was a "tragic mistake" that killed ten civilians, including seven children. This came one week after the Times published a stunning visual investigation that reached the same conclusion. The Biden administration had announced that "two high profile" ISIS-K fighters, dubbed "planners and facilitators" of the suicide bombing, were killed in the strike.

Drone footage shows thousands of migrants under bridge in Del Rio, Texas as local facilities overwhelmed

FOX News

Fox News correspondent Bill Melugin reports live from Del Rio, Texas, as the border crisis intensifies and migrant facilities are overrun. Fox News drone footage over the International Bridge in Del Rio, Texas, shows thousands of migrants being kept there as they wait to be apprehended after crossing illegally into the United States -- as local facilities are overwhelmed and the crisis at the border continues. Border Patrol and law enforcement sources told Fox News that over 4,200 migrants are waiting to be apprehended under the bridge after crossing into the United States. The new footage shows how the migrant crisis that has rocked border states, with a knock-on effect in states across the country, appears to be far from over.

Assisting the Human Fact-Checkers: Detecting All Previously Fact-Checked Claims in a Document Artificial Intelligence

Given the recent proliferation of false claims online, there has been a lot of manual fact-checking effort. As this is very time-consuming, human fact-checkers can benefit from tools that can support them and make them more efficient. Here, we focus on building a system that could provide such support. Given an input document, it aims to detect all sentences that contain a claim that can be verified by some previously fact-checked claims (from a given database). The output is a re-ranked list of the document sentences, so that those that can be verified are ranked as high as possible, together with corresponding evidence. Unlike previous work, which has looked into claim retrieval, here we take a document-level perspective. We create a new manually annotated dataset for the task, and we propose suitable evaluation measures. We further experiment with a learning-to-rank approach, achieving sizable performance gains over several strong baselines. Our analysis demonstrates the importance of modeling text similarity and stance, while also taking into account the veracity of the retrieved previously fact-checked claims. We believe that this research would be of interest to fact-checkers, journalists, media, and regulatory authorities.
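The system's output - document sentences re-ranked so that verifiable ones come first, each paired with evidence - can be sketched with a simple similarity-based ranker. The paper uses learning-to-rank with similarity, stance, and veracity features; this sketch substitutes plain token overlap as the sole signal:

```python
# Illustrative re-ranker: score each document sentence by its best
# match against previously fact-checked claims, then sort descending.
# Token overlap stands in for the paper's learned ranking features.
def overlap(a, b):
    """Fraction of sentence tokens that also appear in the claim."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa), 1)

def rerank_sentences(sentences, checked_claims):
    """Rank sentences so verifiable ones come first, with evidence."""
    scored = []
    for s in sentences:
        best = max(checked_claims, key=lambda c: overlap(s, c))
        scored.append((overlap(s, best), s, best))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [(s, evidence) for _, s, evidence in scored]
```

A fact-checker reading the re-ranked list sees checkworthy sentences at the top, each already linked to a previously checked claim.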

Hetero-SCAN: Towards Social Context Aware Fake News Detection via Heterogeneous Graph Neural Network Artificial Intelligence

Fake news, false or misleading information presented as news, has a great impact on many aspects of society, such as politics and healthcare. To handle this emerging problem, many fake news detection methods have been proposed, applying Natural Language Processing (NLP) techniques to the article text. Considering that even people cannot easily distinguish fake news from news content alone, these text-based solutions are insufficient. To further improve fake news detection, researchers suggested graph-based solutions, utilizing social context information such as user engagement or publisher information. However, existing graph-based methods still suffer from the following four major drawbacks: 1) expensive computational cost due to a large number of user nodes in the graph, 2) errors in sub-tasks, such as textual encoding or stance detection, 3) loss of rich social context due to homogeneous representation of news graphs, and 4) the absence of temporal information utilization. In order to overcome the aforementioned issues, we propose a novel social context aware fake news detection method, Hetero-SCAN, based on a heterogeneous graph neural network. Hetero-SCAN learns the news representation from the heterogeneous graph of news in an end-to-end manner. We demonstrate that Hetero-SCAN yields significant improvement over state-of-the-art text-based and graph-based fake news detection methods in terms of performance and efficiency.
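The key idea behind heterogeneous graph aggregation - treating neighbours differently by node type (publisher vs. user) rather than homogeneously - can be shown in a scalar toy sketch. The per-type weights and features below are invented; Hetero-SCAN learns such parameters end-to-end over vector features:

```python
# Toy sketch of type-aware neighbour aggregation, the core idea of
# heterogeneous graph networks. Weights and features are invented
# scalars; a real model learns them over high-dimensional embeddings.
TYPE_WEIGHT = {"publisher": 0.5, "user": 0.25}  # assumed per-type weights

def aggregate_news(news_feat, neighbours):
    """Aggregate one news node's feature with type-weighted neighbours.

    neighbours: list of (node_type, feature) pairs adjacent to the news node.
    """
    agg = news_feat
    for ntype, feat in neighbours:
        agg += TYPE_WEIGHT[ntype] * feat
    return agg
```

A homogeneous graph would use one weight for all neighbours, discarding the publisher/user distinction that the abstract calls "rich social context".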

Sequential Modelling with Applications to Music Recommendation, Fact-Checking, and Speed Reading Artificial Intelligence

Sequential modelling entails making sense of sequential data, which naturally occurs in a wide array of domains. One example is systems that interact with users, log user actions and behaviour, and make recommendations of items of potential interest to users on the basis of their previous interactions. In such cases, the sequential order of user interactions is often indicative of what the user is interested in next. Similarly, for systems that automatically infer the semantics of text, capturing the sequential order of words in a sentence is essential, as even a slight re-ordering could significantly alter its original meaning. This thesis makes methodological contributions and new investigations of sequential modelling for the specific application areas of systems that recommend music tracks to listeners and systems that process text semantics in order to automatically fact-check claims, or "speed read" text for efficient further classification.
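The recommendation scenario described here - predicting a user's next item from the order of previous interactions - can be sketched with a first-order transition model. This is a deliberately simple stand-in for the thesis's sequential models; the session data below is invented:

```python
# Minimal sequential recommender: count item-to-next-item transitions
# across sessions and recommend the most frequent follower. A stand-in
# for the learned sequence models the thesis actually develops.
from collections import Counter, defaultdict

def build_transitions(sessions):
    """Count transitions between consecutive items in each session."""
    trans = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            trans[cur][nxt] += 1
    return trans

def recommend_next(trans, last_item):
    """Most frequent follower of the user's last item, if any."""
    followers = trans.get(last_item)
    return followers.most_common(1)[0][0] if followers else None
```

Even this crude model captures the point made above: the sequential order of interactions carries the signal about what the user is interested in next.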