fable


Deepfaking Orson Welles's Mangled Masterpiece

The New Yorker

A.I. re-creations of the "Magnificent Ambersons" stars Joseph Cotten, Agnes Moorehead, Dolores Costello, and Tim Holt.

Edward Saatchi first saw "The Magnificent Ambersons," Orson Welles's mangled masterpiece from 1942, when he was twelve years old, in the private screening room of his family's crenellated mansion, in West Sussex. Saatchi's parents had already shown him and his brother "Citizen Kane." But "Ambersons," Welles's follow-up film, about a wealthy Midwestern clan brought low, came with a bewitching backstory: R.K.O. had ripped the movie from the director's hands, slashed forty-three minutes, tacked on a happy ending, and destroyed the excised footage in order to free up vault space, leaving decades' worth of cinephiles to obsess over what might have been. Part of this outcome was the result of studio treachery, but Welles, owing to some combination of hubris and distraction, had let his film slip from his grasp. Saatchi recalled, "Around the family dinner table, that was always such a big topic: How much was Welles responsible for this? Mum was always quite tough on him."

Saatchi's father, Maurice, a baron also known as Lord Saatchi, is one of the two Iraqi-British brothers who founded the advertising firm Saatchi & Saatchi, in 1970, which made their family one of the richest in the U.K. Edward's mother, Josephine Hart, who died in 2011, was an Irish writer best known for her erotic thriller "Damage," which was adapted into a film by Louis Malle. Edward, born in 1985, grew up in London and at the sprawling country estate, surrounded by palatial gardens and classical statuary. He described his parents as "movie mad." The actor and Welles biographer Simon Callow, a Saatchi family friend, recalled, "They had a cinema of their own inside the house, and it was a ritual of theirs every week to watch a film together."
Aside from old movies, Edward was obsessed with "Star Trek"--especially the Holodeck, a device that conjured simulated 3-D worlds populated by characters who could interact with the members of the Starship Enterprise. That kind of wizardry didn't exist in the real world, at least not yet. But the young prince of the Saatchi castle had faith that someday it would, and that it could bring the original "Ambersons" back from oblivion. "To me, this is the lost holy grail of cinema," Saatchi told me recently, like Charles Foster Kane murmuring about Rosebud. "It just seemed intuitively that there would be some way to undo what had happened."


MORABLES: A Benchmark for Assessing Abstract Moral Reasoning in LLMs with Fables

Marcuzzo, Matteo, Zangari, Alessandro, Albarelli, Andrea, Camacho-Collados, Jose, Pilehvar, Mohammad Taher

arXiv.org Artificial Intelligence

As LLMs excel on standard reading comprehension benchmarks, attention is shifting toward evaluating their capacity for complex abstract reasoning and inference. Literature-based benchmarks, with their rich narrative and moral depth, provide a compelling framework for evaluating such deeper comprehension skills. Here, we present MORABLES, a human-verified benchmark built from fables and short stories drawn from historical literature. The main task is structured as multiple-choice questions targeting moral inference, with carefully crafted distractors that challenge models to go beyond shallow, extractive question answering. To further stress-test model robustness, we introduce adversarial variants designed to surface LLM vulnerabilities and shortcuts due to issues such as data contamination. Our findings show that, while larger models outperform smaller ones, they remain susceptible to adversarial manipulation and often rely on superficial patterns rather than true moral reasoning. This brittleness results in significant self-contradiction, with the best models refuting their own answers in roughly 20% of cases depending on the framing of the moral choice. Interestingly, reasoning-enhanced models fail to bridge this gap, suggesting that scale - not reasoning ability - is the primary driver of performance.
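The self-contradiction the abstract reports can be probed with a simple consistency check: ask the same moral-inference question under two framings of the answer order and see whether the model sticks to the same moral. A minimal sketch, assuming a model exposed as a callable that returns an answer letter (the prompt template and stub models below are illustrative, not MORABLES's actual harness):

```python
def moral_mcq_prompt(story, options):
    """Build a multiple-choice moral-inference prompt (illustrative format)."""
    lines = [f"Story: {story}", "Which moral best fits the story?"]
    lines += [f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options)]
    return "\n".join(lines)

def is_self_contradictory(model, story, options):
    """Ask the same question with the options in two different orders;
    a robust model should pick the same moral both times."""
    first = model(moral_mcq_prompt(story, options))
    reversed_opts = list(reversed(options))
    second = model(moral_mcq_prompt(story, reversed_opts))
    # Map the letter answers back to option text before comparing.
    pick1 = options[ord(first) - 65]
    pick2 = reversed_opts[ord(second) - 65]
    return pick1 != pick2

# Stub "model" that naively always answers "A": it contradicts itself
# as soon as the option order changes, a shortcut this check exposes.
always_a = lambda prompt: "A"
print(is_self_contradictory(
    always_a,
    "A fox flatters a crow into dropping its cheese.",
    ["Flattery is dangerous", "Haste makes waste"],
))  # True
```

Running the check across many framings of each question yields the kind of self-contradiction rate (roughly 20% for the best models) that the paper reports.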



TF1-EN-3M: Three Million Synthetic Moral Fables for Training Small, Open Language Models

Nadas, Mihai, Diosan, Laura, Piscoran, Andrei, Tomescu, Andreea

arXiv.org Artificial Intelligence

Moral stories are a time-tested vehicle for transmitting values, yet modern NLP lacks a large, structured corpus that couples coherent narratives with explicit ethical lessons. We close this gap with TF1-EN-3M, the first open dataset of three million English-language fables generated exclusively by instruction-tuned models no larger than 8B parameters. Each story follows a six-slot scaffold (character -> trait -> setting -> conflict -> resolution -> moral), produced through a combinatorial prompt engine that guarantees genre fidelity while covering a broad thematic space. A hybrid evaluation pipeline blends (i) a GPT-based critic that scores grammar, creativity, moral clarity, and template adherence with (ii) reference-free diversity and readability metrics. Among ten open-weight candidates, an 8B-parameter Llama-3 variant delivers the best quality-speed trade-off, producing high-scoring fables on a single consumer GPU (<24 GB VRAM) at approximately 13.5 cents per 1,000 fables. We release the dataset, generation code, evaluation scripts, and full metadata under a permissive license, enabling exact reproducibility and cost benchmarking. TF1-EN-3M opens avenues for research in instruction following, narrative intelligence, value alignment, and child-friendly educational AI, demonstrating that large-scale moral storytelling no longer requires proprietary giant models.
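The six-slot scaffold driving the combinatorial prompt engine can be sketched as a Cartesian product over per-slot vocabularies, with each combination filled into a fixed genre template. The slot values and template below are hypothetical stand-ins (the released dataset's vocabularies are far larger):

```python
import itertools

# Hypothetical slot vocabularies; two values per slot for brevity.
SLOTS = {
    "character": ["a fox", "a tortoise"],
    "trait": ["greedy", "patient"],
    "setting": ["an old orchard", "a riverbank"],
    "conflict": ["a famine", "a wager"],
    "resolution": ["sharing the harvest", "winning by persistence"],
    "moral": ["greed undoes itself", "slow and steady wins"],
}

# Illustrative template enforcing the genre's six-slot structure.
TEMPLATE = (
    "Write a short fable about {character} who is {trait}, set in {setting}. "
    "The story centres on {conflict}, is resolved by {resolution}, "
    "and ends with the moral: '{moral}'."
)

def generate_prompts(slots=SLOTS, template=TEMPLATE):
    """Yield one prompt per combination of slot values (the combinatorial engine)."""
    keys = list(slots)
    for values in itertools.product(*(slots[k] for k in keys)):
        yield template.format(**dict(zip(keys, values)))

prompts = list(generate_prompts())
print(len(prompts))  # 2 values per slot across 6 slots -> 64 prompts
```

Because every prompt instantiates all six slots, genre fidelity is guaranteed by construction, while the product over vocabularies covers the thematic space.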


Popular book app's AI is deemed 'bigoted' and 'racist' after calling one user a 'diversity devotee' and telling another to 'surface for the occasional white author'

Daily Mail - Science & tech

A popular book app's AI has been scrapped after being deemed 'bigoted and racist'. Fable, a social media app for book enthusiasts, used an AI to create a Spotify-like 'wrapped' experience, summarising users' reading habits throughout the year. However, outraged readers soon complained that the feature, designed to offer a 'playful roast', was lashing out with racist putdowns. One user was shocked when the app told them to 'surface for the occasional white author' after spending the year reading 'Black narratives and transformative tales'. Another was slammed by their AI summary as a 'diversity devotee', with the app questioning whether they were 'ever in the mood for a straight, cis white man's perspective'.


A Book App Used AI to 'Roast' Its Users. It Went Anti-Woke Instead

WIRED

Fable, a popular social media app that describes itself as a haven for "bookworms and bingewatchers," created an AI-powered end-of-year summary feature recapping what books users read in 2024. It was meant to be playful and fun, but some of the recaps took on an oddly combative tone. Writer Danny Groves's summary, for example, asked if he's "ever in the mood for a straight, cis white man's perspective" after labeling him a "diversity devotee." Books influencer Tiana Trammell's summary, meanwhile, ended with the following advice: "Don't forget to surface for the occasional white author, OK?" (A reader summary as shown on the 2024 stats page of the Fable app.) Trammell was flabbergasted, and she soon realized she wasn't alone after sharing her experience with Fable's summaries on Threads.


Multi-Facet Blending for Faceted Query-by-Example Retrieval

Do, Heejin, Ryu, Sangwon, Kim, Jonghwi, Lee, Gary Geunbae

arXiv.org Artificial Intelligence

With the growing demand to accommodate fine-grained user intents, faceted query-by-example (QBE), which retrieves similar documents conditioned on specific facets, has gained recent attention. However, prior approaches mainly depend on document-level comparisons using basic indicators like citations, owing to the lack of facet-level relevance datasets; this limits their use to citation-based domains and fails to capture the intricacies of facet constraints. In this paper, we propose a multi-facet blending (FaBle) augmentation method, which exploits modularity by decomposing and recomposing documents to explicitly synthesize facet-specific training sets. We automatically decompose documents into facet units and generate (ir)relevant pairs by leveraging LLMs' intrinsic distinguishing capabilities; dynamically recomposing the units then yields facet-wise, relevance-informed document pairs. Our modularization eliminates the need for pre-defined facet knowledge or labels. Further, to demonstrate FaBle's efficacy in a new domain beyond citation-based scientific-paper retrieval, we release a benchmark dataset for educational exam-item QBE. FaBle augmentation on 1K documents markedly assists training in obtaining facet-conditional embeddings.
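The decompose-and-recompose idea can be illustrated with a toy example in which documents are modelled as dicts mapping facet names to already-decomposed units (a stand-in for the paper's automatic decomposition step; the facet names and values are hypothetical):

```python
# Toy "decomposed" documents: facet name -> facet unit.
doc_a = {"topic": "photosynthesis", "difficulty": "introductory", "format": "multiple choice"}
doc_b = {"topic": "cell division", "difficulty": "advanced", "format": "short answer"}

def recompose(base, donor, facet):
    """Swap a single facet unit from `donor` into a copy of `base`."""
    blended = dict(base)
    blended[facet] = donor[facet]
    return blended

def make_pair(base, donor, facet):
    """Return a blended document plus the facets on which it still matches
    `base`: the blend is relevant to `base` w.r.t. the untouched facets and
    irrelevant w.r.t. the swapped one, giving facet-wise training signal."""
    blended = recompose(base, donor, facet)
    still_relevant = [f for f in base if blended[f] == base[f]]
    return blended, still_relevant

blended, still_relevant = make_pair(doc_a, doc_b, "difficulty")
print(blended["difficulty"])  # 'advanced'
print(still_relevant)         # ['topic', 'format']
```

Iterating the swap over every facet and many donor documents yields the facet-wise relevance-informed pairs used to train facet-conditional embeddings, with no pre-defined facet labels required.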


Fable at 20: a uniquely British video game with a complex legacy

The Guardian

In 1985, brothers Dene and Simon Carter vowed to each other that they would one day start their own development studio together. The game they imagined was ambitious, as Simon outlined in a developer diary: a fantasy role-playing game, "populated with compelling and convincing characters with real personality, people who actually reacted to what you did … We wanted each and every person who played our game to have a unique experience, to have their own stories to tell." The idea of a living, reactive game world was an obsession for many game creators (and players) at the time, largely because it had never yet been done. In the 1980s, a virtual fantasy world like this was far beyond the realms of technological possibility. Thirteen years later, they got the opportunity to make the game of their dreams, at their own studio Big Blue Box.


Seven things we learned from Gamescom opening night

BBC News

It has been a year with no major new console launches, and one in which the industry has seen strikes and cuts, with thousands of workers laid off. The opening night of Gamescom is often a big, shiny opportunity to get fans excited for the year ahead. Setting the stage for the next 12 months, here are the biggest things we found out from Europe's biggest gaming show in Germany. In a year when games became films, and films became games, the convention centre in Cologne saw a night all about the big trailers. This year, Borderlands has taken attention for its movie adaptation starring Cate Blanchett and Kevin Hart. That film received some of the year's harshest reviews, but that has not scuppered plans for a new game in the mainline series.


A Perspectival Mirror of the Elephant

Communications of the ACM

Buddhism means different things to different cultures. To Westerners, Buddhism is generally associated with spirituality, meditation, and philosophy, while many Vietnamese associate it with the lunar calendar, holidays, mother god worship, and a lifestyle capable of bringing good luck. In Nepal, people typically see Buddhism as a protector that destroys bad karma. To move beyond these local views in an attempt to see the global picture, you might type "Buddhism" in Google's search bar. Instead of helping, however, the top 50 results skew strongly toward these distinct cultural impressions depending on the language you use for your query.