If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Machine learning plays a key role in powering Twitter and our purpose of serving the public conversation. To continually advance the state of machine learning, inside and outside Twitter, we are building out a research group at Twitter, led by Sandeep Pandey, to focus on a few key strategic areas such as natural language processing, reinforcement learning, ML ethics, recommendation systems, and graph deep learning. We are excited to announce that, to help us get there, we have acquired Fabula AI (Fabula), a London-based start-up, with a world-class team of machine learning researchers who employ graph deep learning to detect network manipulation. Graph deep learning is a novel method for applying powerful ML techniques to network-structured data. The result is the ability to analyze very large and complex datasets describing relations and interactions, and to extract signals in ways that traditional ML techniques are not capable of doing.
Twitter announced on Monday it has acquired Fabula AI, a London-based machine learning research company. Fabula AI's team will join Twitter and work alongside Sandeep Pandey as part of Twitter's research group focused on natural language processing, reinforcement learning, machine learning ethics, recommendation systems and graph deep learning. Fabula's graph deep learning research is used to detect network manipulation. With this acquisition, Twitter aims to use Fabula's capabilities to better identify bad actors and malicious behavior on the platform, in addition to enhancing its recommendations processes. While the Fabula research team will initially focus on improving the health of conversations happening on Twitter, the company said the team's efforts will expand in the future, aiming to help stop spam and abuse, as well as improve recommendations, the explore tab and the onboarding experience.
Twitter announced that it has acquired London-based Fabula AI. The financial terms of the transaction were not disclosed. The announcement stated that Twitter has established a research group led by Sandeep Pandey. The research group looks into areas like natural language processing, reinforcement learning, ML ethics, recommendation systems, and graph deep learning. In one of the posts titled "Fake News revealed through artificial intelligence", it was revealed that the Fabula AI team, consisting of Michael Bronstein, professor and researcher at the USI Institute of Computational Science (ICS), and fellow ICS researchers Federico Monti and Dr Davide Eynard, developed a new method based on algorithms and artificial intelligence that could prove to be the most effective solution to the spreading of fake news through the Internet.
Twitter has been criticized for the amount of fake news and misinformation that easily spreads on its platform. Though the company has taken steps to combat such misinformation in recent years, fake news is still a major problem for the social network.
Twitter has just announced it has picked up London-based Fabula AI. The deep learning startup has been developing technology to try to identify online disinformation by looking at patterns in how fake stuff vs genuine news spreads online -- making it an obvious fit for the rumor-riled social network. Social media giants remain under increasing political pressure to get a handle on online disinformation to ensure that manipulative messages don't, for example, get a free pass to fiddle with democratic processes. Twitter says the acquisition of Fabula will help it build out its internal machine learning capabilities -- writing that the UK startup's "world-class team of machine learning researchers" will feed an internal research group it's building out, led by Sandeep Pandey, its head of ML/AI engineering. This research group will focus on "a few key strategic areas such as natural language processing, reinforcement learning, ML ethics, recommendation systems, and graph deep learning" -- now with Fabula co-founder and chief scientist, Michael Bronstein, as a leading light within it.
UK startup Fabula AI reckons it's devised a way for artificial intelligence to help user-generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals. Even Facebook's Mark Zuckerberg has sounded a cautious note about AI technology's capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, whether well-intentioned or its nasty flip-side. "It will take many years to fully develop these systems," the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. "This is technically difficult as it requires building AI that can read and understand news." But what if AI doesn't need to read and understand news in order to detect whether it's true or false? Step forward Fabula, which has patented what it dubs a "new class" of machine learning algorithms to detect "fake news" in the emergent field of "Geometric Deep Learning", where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this 'non-Euclidean' space.
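The core idea behind graph (geometric) deep learning can be made concrete with a toy sketch. In the fake-news setting one might model the users who shared a story as nodes, retweet relations as edges, and per-user attributes as node features, then let each layer mix a node's features with those of its neighbours. The snippet below is a minimal single graph-convolution layer in the style of a standard GCN, with an invented toy graph and made-up feature names; it is an illustration of the general technique, not Fabula's patented algorithm.

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph convolution: normalised neighbour averaging, then a linear map + ReLU."""
    a_hat = adjacency + np.eye(adjacency.shape[0])   # add self-loops so a node keeps its own features
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))         # symmetric degree normalisation
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features
    return np.maximum(propagated @ weights, 0.0)     # ReLU non-linearity

# Toy spread graph: 4 users; an edge marks that one user retweeted the other.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)

rng = np.random.default_rng(0)
feats = rng.random((4, 3))   # 3 hypothetical features per user (e.g. account age, followers, activity)
w = rng.random((3, 2))       # learned projection to 2 hidden dimensions

hidden = gcn_layer(adj, feats, w)
print(hidden.shape)          # one 2-dimensional embedding per user
```

Stacking such layers lets information propagate along the structure of the spread itself, which is why the shape of how a story diffuses, rather than its text, can carry the signal.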
A hierarchical, bipartite model can characterize many complex narrative phenomena associated with coordinating plot and communication in storytelling (e.g., cinematography), but the predominant pipeline-based strategy for generating narratives has inadvertently limited the expressiveness of storytelling systems. We introduce computational steps for merging story and discourse languages in plan-based storytelling systems with hierarchical knowledge, an approach that avoids this problem and motivates more expressive narrative discourse reasoning.
This work addresses the problem of generating narrative fiction by using a plan-based language to model schematic knowledge of storyworld mechanics (fabula) and communicative plans (discourse). The paper outlines an approach to extract fabula and discourse from screenplays as a way of overcoming an authorial bottleneck problem.
Story generators typically adopt a pipelined model of generation wherein fabula structure is decided independently and prior to discourse structure. In this paper, we propose a novel story generator, PlotShot, capable of reasoning over discourse materials during fabula generation such that these materials meaningfully constrain the development of a causally and intentionally coherent story. PlotShot incorporates user-supplied photographs as optional story states through an oversubscription planning paradigm. Further, to leverage existing work on planning-based models of generation, we present a technique to compile the photo story planning problem to classical narrative planning. Our system attempts to maximize quality of an illustrated story by analyzing the affinity between a photo and the action it is meant to depict. An evaluation of generated artifacts shows advantage over heuristic baseline techniques.
In this paper we describe a system for generating textual narrations of what happened in a simulation-based serious game, focusing on the use of focalization (telling the story from the perspective of one of the characters) and flashbacks to give the player insights into the internal state of non-player characters.