This new Google search engine feature will compete with Facebook, Twitter in curating news

USATODAY - Tech Top Stories

Google is developing a new feature called Big Moments, which will compete with rivals Facebook and Twitter in delivering the latest breaking news updates during major events. The COVID-19 pandemic forced the search engine to react quickly and constantly to its users' needs for the latest and most authoritative information, according to Google. A team at Google has been working on the project for over a year, after the company struggled to provide the latest updates on the U.S. Capitol attack in January and the Black Lives Matter protests last summer, says The Information, a Silicon Valley-based technology news site. Big Moments aims to build upon Google's Full Coverage feature, which launched in Google News in 2018 and was later integrated with the search engine in March 2021. Full Coverage allows users to tap on a news headline and see how that story is reported by a variety of sources.


Is Disney World the New Netflix?

#artificialintelligence

Netflix leans on machine learning to power its recommendation algorithms and shape its future … How deep will Disney go in tailoring its suggestions?


Model Bias in NLP -- Application to Hate Speech Classification

arXiv.org Artificial Intelligence

This document sums up our results for the NLP lecture at ETH in the spring semester of 2021. In this work, a BERT-based neural network model (Devlin et al., 2018) is applied to the JIGSAW dataset (Jigsaw/Conversation AI, 2019) in order to create a model that identifies hateful and toxic comments (strictly separated from offensive language) on online social platforms, in this case English-language Twitter. Three other neural network architectures and a GPT-2 model (Radford et al., 2019) are also applied to the provided data set in order to compare these different models. The trained BERT model is then applied to two further data sets to evaluate its generalisation power: another Twitter data set (Davidson et al., 2017) and the HASOC 2019 data set (Mandl et al., 2019), which includes Twitter and also Facebook comments; we focus on the English HASOC 2019 data. In addition, we show that fine-tuning the trained BERT model on these two data sets, by applying different transfer learning scenarios that retrain some or all layers, improves the predictive scores compared to simply applying the model pre-trained on the JIGSAW data set. With our results, we obtain precisions from 64% to around 90% while still achieving acceptable recall values of at least the low 60s, showing that BERT is suitable for real use cases on social platforms.
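
The transfer-learning scenarios mentioned in the abstract, retraining either all layers or only part of the network, can be illustrated with a short sketch. The following is a minimal, hypothetical example assuming the HuggingFace transformers library; the model checkpoint, layer count, and hyperparameters are illustrative and not the paper's exact configuration.

```python
# Sketch of partial vs. full fine-tuning of a BERT classifier.
# Assumes HuggingFace `transformers`; choices below are illustrative.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # hateful vs. non-hateful
)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def freeze_all_but_last_n_layers(model, n=2):
    """Freeze the embeddings and all encoder layers except the last n,
    so only those layers and the classification head are retrained."""
    for param in model.bert.parameters():
        param.requires_grad = False
    for layer in model.bert.encoder.layer[-n:]:
        for param in layer.parameters():
            param.requires_grad = True

# Scenario A: retrain all layers (full fine-tuning) -- leave everything trainable.
# Scenario B: retrain only part of the network:
freeze_all_but_last_n_layers(model, n=2)

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=2e-5
)

# One illustrative training step on a toy batch.
batch = tokenizer(["example tweet"], return_tensors="pt", padding=True)
labels = torch.tensor([0])
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```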


Truth or Fake - How artificial intelligence on WhatsApp can help fight disinformation

#artificialintelligence

The tagline of the Spanish fact-checking outlet Maldita puts readers at the centre of the team's journalistic work: the Spanish phrase "Hazte Maldito" (meaning "Be part of Maldita!") invites the public to send in potentially fake news items and ask questions about the virus. Before the pandemic, Maldita received about 200 messages a day on their WhatsApp number, occupying a full-time journalist. After the pandemic reached Europe in March 2020, their daily messages increased to nearly 2,000. Maldita has since launched a WhatsApp chatbot to automate and centralize its interactions with its community. After a user sends a social media post to the WhatsApp number, whether a photo, a video, a link, or a WhatsApp channel that has been sharing questionable content, the bot analyses the content.
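
Maldita's actual pipeline is not public, but the flow described above, receiving a message, detecting what kind of content it carries, and routing it to an analysis step, can be sketched as a simple webhook. Everything below (the payload shape, endpoint, and helper names) is a hypothetical illustration, not Maldita's implementation.

```python
# Hypothetical triage webhook for a fact-checking chatbot.
from flask import Flask, request, jsonify

app = Flask(__name__)

def analyse_content(kind: str, payload: dict) -> str:
    # Placeholder: a real system would compare the content against a
    # database of already-debunked items and/or run ML classifiers.
    return f"Received a {kind}; checking it against known fact-checks..."

@app.route("/webhook", methods=["POST"])
def webhook():
    message = request.get_json(force=True)
    # Decide which analysis path applies to the incoming content.
    if "image" in message:
        reply = analyse_content("photo", message["image"])
    elif "video" in message:
        reply = analyse_content("video", message["video"])
    elif "text" in message and message["text"].startswith("http"):
        reply = analyse_content("link", message)
    else:
        reply = "Please send a photo, video, or link you want checked."
    return jsonify({"reply": reply})

if __name__ == "__main__":
    app.run(port=8000)
```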


Gartner Identifies Four Trends Driving Near-Term Artificial Intelligence Innovation

#artificialintelligence

Increased trust, transparency, fairness and auditability of AI technologies continue to be of growing importance to a wide range of stakeholders …


How artificial intelligence on WhatsApp can help fight disinformation – Truth or Fake

#artificialintelligence

… a team from the Spanish fact-checking organisation Maldita created a WhatsApp chatbot, which uses artificial intelligence to provide automated …


Survey XII: What Is the Future of Ethical AI Design? – Imagining the Internet

#artificialintelligence

Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question. The Question – Regarding the application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...


MONITOR: A Multimodal Fusion Framework to Assess Message Veracity in Social Networks

arXiv.org Artificial Intelligence

Users of social networks tend to post and share content with little restraint. Hence, rumors and fake news can quickly spread on a huge scale. This may pose a threat to the credibility of social media and can cause serious consequences in real life. Therefore, the task of rumor detection and verification has become extremely important. Assessing the veracity of a social media message (e.g., by fact checkers) involves analyzing the text of the message, its context and any multimedia attachment. This is a very time-consuming task that can be much helped by machine learning. In the literature, most message veracity verification methods only exploit textual contents and metadata. Very few take both textual and visual contents, and more particularly images, into account. In this paper, we second the hypothesis that exploiting all of the components of a social media post enhances the accuracy of veracity detection. To further the state of the art, we first propose using a set of advanced image features that are inspired from the field of image quality assessment, which effectively contributes to rumor detection. These metrics are good indicators for the detection of fake images, even for those generated by advanced techniques like generative adversarial networks (GANs). Then, we introduce the Multimodal fusiON framework to assess message veracIty in social neTwORks (MONITOR), which exploits all message features (i.e., text, social context, and image features) by supervised machine learning. Such algorithms provide interpretability and explainability in the decisions taken, which we believe is particularly important in the context of rumor verification. Experimental results show that MONITOR can detect rumors with an accuracy of 96% and 89% on the MediaEval benchmark and the FakeNewsNet dataset, respectively. These results are significantly better than those of state-of-the-art machine learning baselines.
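
As a rough illustration of the late-fusion idea in this abstract, the sketch below builds one feature vector per post from text, social context, and image-quality-style statistics, then trains an interpretable supervised classifier. The specific features, data, and classifier are stand-ins, not MONITOR's published feature set.

```python
# Toy late-fusion rumor classifier: concatenate text, social-context,
# and image-quality-inspired features, then fit an interpretable model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

def image_quality_features(img: np.ndarray) -> np.ndarray:
    """Crude image-quality-style statistics (contrast, sharpness proxy)."""
    gray = img.mean(axis=2)
    contrast = gray.std()
    sharpness = np.abs(np.diff(gray, axis=0)).mean()  # gradient-based proxy
    return np.array([contrast, sharpness])

# Hypothetical training data: post texts, social-context counts, images, labels.
texts = ["breaking: shark swims down flooded street", "city opens new park"]
social = np.array([[1500, 12], [40, 3]])  # e.g., shares, replies
images = [np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)]
labels = np.array([1, 0])  # 1 = rumor, 0 = non-rumor

text_vec = TfidfVectorizer().fit(texts)
X_text = text_vec.transform(texts).toarray()
X_img = np.vstack([image_quality_features(im) for im in images])
X = np.hstack([X_text, social, X_img])  # late fusion by concatenation

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
# Feature importances give the kind of interpretability the paper emphasizes.
print(clf.feature_importances_)
```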


Facebook Apologizes For Embarrassing Mistake Caused By A.I.

#artificialintelligence

Some crisis situations are caused by what people say or do. On occasion, a crisis, or an embarrassing incident, is caused by technology. The New York Times reported yesterday that "Facebook users who recently watched a video from a British tabloid featuring Black men saw an automated prompt from the social network that asked if they would like to 'keep seeing videos about Primates', causing the company to investigate and disable the artificial intelligence-powered feature that pushed the message." "This was clearly an unacceptable error and we disabled the entire topic recommendation feature as soon as we realized this was happening so we could investigate the cause and prevent this from happening again," Facebook spokeswoman Dani Lever said in a statement to USA Today. "As we have said, while we have made improvements to our AI, we know it's not perfect and we have more progress to make," she said. "We apologize to anyone who may have seen these offensive recommendations." This is not the first time that advanced technology has created an embarrassing situation for an organization. The Washington Post reported yesterday that "a judge ruled that Apple will have to continue fighting a lawsuit brought by users in federal court in California, alleging that the company's voice assistant Siri has improperly recorded private conversations." Last week at the Paralympics in Tokyo, a Toyota self-driving pod injured a pedestrian. Reuters reported that "in a YouTube video, Toyota Chief Executive Akio Toyoda apologized for the incident and said he offered to meet the person but was unable to do so."


How machine learning powers Facebook's News Feed ranking algorithm

#artificialintelligence

Now that we have all the predictions, we can combine them into a single score. To do this, multiple passes are needed, both to save computational power and to apply rules, such as content-type diversity (i.e., content types should be varied so that viewers don't see redundant content, such as multiple videos one after another), that depend on an initial ranking score. First, certain integrity processes are applied to every post. These are designed to determine which integrity detection measures, if any, need to be applied to the stories selected for ranking. Then, in pass 0, a lightweight model is run to select approximately 500 of the most relevant posts for Juan (the hypothetical user in Facebook's example) that are eligible for ranking.
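
The multi-pass flow described above can be condensed into a short sketch: a lightweight pass 0 trims the candidate pool, a heavier scoring pass combines the per-event predictions, and a final pass enforces content-type diversity. This is a simplified, hypothetical illustration; the scoring weights, the stand-in models, and the 500-post cutoff are placeholders, not Facebook's production components.

```python
# Simplified multi-pass feed ranking with a content-type diversity rule.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: int
    content_type: str                                # e.g. "video", "photo", "text"
    predictions: dict = field(default_factory=dict)  # event -> predicted probability

def pass0_score(post: Post) -> float:
    # Stand-in for the lightweight pass-0 model that trims the candidate pool.
    return post.predictions.get("click", 0.0)

def combined_score(post: Post) -> float:
    # Combine the per-event predictions into a single ranking score.
    weights = {"like": 1.0, "comment": 4.0, "share": 8.0}
    return sum(w * post.predictions.get(event, 0.0) for event, w in weights.items())

def apply_type_diversity(ranked: list[Post]) -> list[Post]:
    # Greedy re-ordering: prefer the best remaining post whose content type
    # differs from the previous pick, so e.g. videos don't run back to back.
    remaining, result, last_type = list(ranked), [], None
    while remaining:
        pick = next((p for p in remaining if p.content_type != last_type),
                    remaining[0])
        remaining.remove(pick)
        result.append(pick)
        last_type = pick.content_type
    return result

def rank_feed(candidates: list[Post], pool_size: int = 500) -> list[Post]:
    # Pass 0: keep only the most relevant posts for the heavier model.
    pool = sorted(candidates, key=pass0_score, reverse=True)[:pool_size]
    # Pass 1: score the surviving posts with the full prediction set.
    pool.sort(key=combined_score, reverse=True)
    # Pass 2: apply the diversity rule to the initial ranking.
    return apply_type_diversity(pool)
```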