Twitter bags deep learning talent behind London startup, Fabula AI – TechCrunch

#artificialintelligence

Twitter has just announced it has picked up London-based Fabula AI. The deep learning startup has been developing technology to try to identify online disinformation by looking at patterns in how fake stuff vs genuine news spreads online -- making it an obvious fit for the rumor-riled social network. Social media giants remain under increasing political pressure to get a handle on online disinformation to ensure that manipulative messages don't, for example, get a free pass to fiddle with democratic processes. Twitter says the acquisition of Fabula will help it build out its internal machine learning capabilities -- writing that the UK startup's "world-class team of machine learning researchers" will feed an internal research group it's building out, led by Sandeep Pandey, its head of ML/AI engineering. This research group will focus on "a few key strategic areas such as natural language processing, reinforcement learning, ML ethics, recommendation systems, and graph deep learning" -- now with Fabula co-founder and chief scientist, Michael Bronstein, as a leading light within it.


Twitter acquires Deep Learning Startup, Fabula AI

#artificialintelligence

Twitter announced that it has acquired London-based Fabula AI. The financial terms of the transaction were not disclosed. The announcement stated that Twitter has established a research group led by Sandeep Pandey. The group will look into areas such as natural language processing, reinforcement learning, ML ethics, recommendation systems, and graph deep learning. In a post titled "Fake News revealed through artificial intelligence", it was revealed that the Fabula AI team, comprising Michael Bronstein, professor and researcher at the USI Institute of Computational Science (ICS), together with fellow ICS researchers Federico Monti and Dr Davide Eynard, developed a new method based on algorithms and artificial intelligence that could prove to be a highly effective solution to the spread of fake news on the Internet.


Twitter acquires Fabula AI to strengthen its machine learning expertise

#artificialintelligence

Machine learning plays a key role in powering Twitter and our purpose of serving the public conversation. To continually advance the state of machine learning, inside and outside Twitter, we are building out a research group at Twitter, led by Sandeep Pandey, to focus on a few key strategic areas such as natural language processing, reinforcement learning, ML ethics, recommendation systems, and graph deep learning. We are excited to announce that, to help us get there, we have acquired Fabula AI (Fabula), a London-based start-up, with a world-class team of machine learning researchers who employ graph deep learning to detect network manipulation. Graph deep learning is a novel method for applying powerful ML techniques to network-structured data. The result is the ability to analyze very large and complex datasets describing relations and interactions, and to extract signals in ways that traditional ML techniques are not capable of doing.
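Twitter's post gives no implementation details, so as a rough illustration only, here is a minimal sketch of one widely used graph convolution step (the normalised-adjacency formulation), showing how network-structured data enters such a model. The toy retweet graph, random features, and weights are placeholders, not Fabula's actual method.

```python
# Minimal sketch of one common graph convolution step: node features X are
# mixed along the edges of adjacency matrix A before a learned projection.
import numpy as np

def graph_conv(A, X, W):
    """One propagation layer: H = ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # symmetric normalisation
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ X @ W, 0.0)    # aggregate, project, ReLU

# Toy example: 4 users in a retweet graph, 3 features each, 2 hidden units.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)     # who-retweeted-whom (undirected here)
X = rng.normal(size=(4, 3))                   # per-user features (random placeholder)
W = rng.normal(size=(3, 2))                   # weights (random placeholder, not trained)
H = graph_conv(A, X, W)
print(H.shape)                                # (4, 2) -- new node embeddings
```

The key point is that the normalised adjacency matrix lets each node's representation depend on its neighbours, which is what allows such models to pick up signals in relational data that feature-by-feature ML techniques miss.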


Fake News Detection on Social Media using Geometric Deep Learning

arXiv.org Machine Learning

Social media are nowadays one of the main news sources for millions of people around the globe due to their low cost, easy access and rapid dissemination. This, however, comes at the cost of dubious trustworthiness and significant risk of exposure to 'fake news', intentionally written to mislead the readers. Automatically detecting fake news poses challenges that defy existing content-based analysis approaches. One of the main reasons is that often the interpretation of the news requires knowledge of political or social context, or 'common sense', which current NLP algorithms are still missing. Recent studies have shown that fake and real news spread differently on social media, forming propagation patterns that could be harnessed for automatic fake news detection. Propagation-based approaches have multiple advantages compared to their content-based counterparts, among which are language independence and better resilience to adversarial attacks. In this paper we present a novel automatic fake news detection model based on geometric deep learning. The underlying core algorithms are a generalization of classical CNNs to graphs, allowing the fusion of heterogeneous data such as content, user profile and activity, social graph, and news propagation. Our model was trained and tested on news stories, verified by professional fact-checking organizations, that were spread on Twitter. First, our experiments indicate that social network structure and propagation are important features allowing highly accurate (92.7% ROC AUC) fake news detection. Second, we observe that fake news can be reliably detected at an early stage, after just a few hours of propagation. Third, we test the aging of our model on training and testing data separated in time. Our results point to the promise of propagation-based approaches for fake news detection as an alternative or complementary strategy to content-based approaches.
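The abstract evaluates cascade-level classification with ROC AUC but does not reproduce the architecture here; the following hedged sketch only illustrates that final stage, assuming node embeddings have already been produced by graph convolution layers (as in the sketch above). The cascade_score helper, the weights, and the data are synthetic placeholders, not the paper's trained model.

```python
# Sketch of a cascade-level read-out: mean-pool node embeddings of each news
# propagation cascade into one vector, score it with a logistic read-out, and
# evaluate with ROC AUC. Everything below is synthetic/illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def cascade_score(node_embeddings, w, b):
    """Mean-pool one cascade's node embeddings, then apply a logistic score."""
    g = node_embeddings.mean(axis=0)              # graph-level read-out
    return 1.0 / (1.0 + np.exp(-(g @ w + b)))     # probability the story is fake

# 50 toy cascades with varying numbers of participating users, 8-dim embeddings.
embeddings = [rng.normal(size=(rng.integers(5, 40), 8)) for _ in range(50)]
labels = rng.integers(0, 2, size=50)              # 1 = fake, 0 = real (synthetic)
w, b = rng.normal(size=8), 0.0                    # placeholder read-out weights

scores = np.array([cascade_score(E, w, b) for E in embeddings])
print("ROC AUC on synthetic data:", roc_auc_score(labels, scores))
```

With random weights and labels the AUC hovers around 0.5; the paper's reported 92.7% ROC AUC comes from its trained model on fact-checked Twitter cascades, not from this toy setup.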


New algorithms for neutrinos and fake news

#artificialintelligence

Do neutrinos, the elementary particles, have something in common with fake news on social media? The surprising answer is yes, according to a group of researchers at the USI Institute of Computational Science, who show that the behaviour of both can be represented using the same data structure. This structure is based on non-Euclidean geometry and can be studied with a new class of algorithms: Graph Convolutional Neural Networks (GCNNs). These algorithms are highly complex mathematical models, and the research work carried out by Federico Monti, a member of Prof. Michael Bronstein's group, earned him the award for the best scientific contribution at ICMLA, a leading international conference in the field. Monti, together with colleagues from New York University, Berkeley and Imperial College, also had the opportunity to work with the Lawrence Berkeley National Laboratory on data acquired by the IceCube Neutrino Observatory at the South Pole.
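The article does not describe the researchers' pipeline, but one common preparatory step for applying a GCNN to irregular, non-Euclidean data (scattered detector sensors, or users and their interactions) is to build a neighbourhood graph over the raw points. The sketch below shows a k-nearest-neighbour construction on random 2D points; knn_adjacency is an illustrative helper, not the USI group's code.

```python
# Illustrative only: build a symmetric k-nearest-neighbour adjacency matrix
# over irregularly placed points, the kind of graph a GCNN then operates on.
import numpy as np

def knn_adjacency(points, k=3):
    """Symmetric 0/1 adjacency matrix of the k-nearest-neighbour graph."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-edges
    A = np.zeros((n, n))
    for i in range(n):
        A[i, np.argsort(d[i])[:k]] = 1.0      # link each point to its k closest
    return np.maximum(A, A.T)                 # symmetrise

rng = np.random.default_rng(2)
points = rng.uniform(size=(10, 2))            # 10 irregularly placed "sensors"
A = knn_adjacency(points, k=3)
print(A.sum(axis=1))                          # per-node degree (at least 3 after symmetrising)
```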