Why has an AI-altered Bollywood movie sparked uproar in India?
New Delhi, India – What if Michael had died instead of Sonny in The Godfather? Or if Rose had shared the debris plank, and Jack hadn't been left to freeze in the Atlantic in Titanic? Eros International, one of India's largest production houses, with more than 4,000 films in its catalogue, has decided to explore this sort of what-if scenario. It has re-released one of its major hits, Raanjhanaa, a 2013 romantic drama, in cinemas – but has used artificial intelligence (AI) to change its tragic end, in which the male lead dies. In the AI-altered version, Kundan (played by popular actor Dhanush), a Hindu man who has a doomed romance with a Muslim woman, lives.
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
Investigating Gender Bias in LLM-Generated Stories via Psychological Stereotypes
Masoudian, Shahed, Escobedo, Gustavo, Strauss, Hannah, Schedl, Markus
As Large Language Models (LLMs) are increasingly used across different applications, concerns about their potential to amplify gender biases in various tasks are rising. Prior research has often probed gender bias using explicit gender cues as counterfactuals, or studied it in sentence-completion and short question-answering tasks. These formats may overlook more implicit forms of bias embedded in the generative behavior of longer content. In this work, we investigate gender bias in LLMs using gender stereotypes studied in psychology (e.g., aggressiveness or gossiping) in an open-ended narrative generation task. We introduce a novel dataset called StereoBias-Stories containing short stories either unconditioned or conditioned on (one, two, or six) random attributes from 25 psychological stereotypes and three task-related story endings. We analyze how the gender contribution in the overall story changes in response to these attributes and present three key findings: (1) While models, on average, are highly biased towards male characters in unconditioned prompts, conditioning on attributes independent of gender stereotypes mitigates this bias. (2) Combining multiple attributes associated with the same gender stereotype intensifies model behavior, with male-associated attributes amplifying bias and female-associated ones alleviating it. (3) Model biases align with the psychological ground truth used for categorization, and alignment strength increases with model size. Together, these insights highlight the importance of psychology-grounded evaluation of LLMs.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- North America > Dominican Republic (0.04)
- (9 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
The video games you may have missed in 2024
PS4/5, Xbox, PC, Nintendo Switch Taiwanese studio Red Candle Games broke through in 2019 with the first-person horror game, Devotion. Its follow-up, Nine Sols, is less grungy but no less distinct, a robust 2D action-platformer with an exquisite "taopunk" aesthetic. This vivid sci-fi world feels as if it is constructed as much from bamboo and jade as steel and microchips. Alongside absorbing exploration and blistering combat, you study and grow various strains of alien flora found aboard a labyrinthine spaceship. The ultimate goal is escape, but you may never actually want to leave the strange, bioluminescent garden you come to cultivate.
- North America > United States > New York (0.04)
- North America > United States > Hawaii (0.04)
- Europe > North Sea (0.04)
- Atlantic Ocean > North Atlantic Ocean > North Sea (0.04)
Crafting Narrative Closures: Zero-Shot Learning with SSM Mamba for Short Story Ending Generation
Sharma, Divyam, Santhanam, Divya
Writing stories is an engaging yet challenging endeavor. Often, authors encounter moments of creative block, where the path forward in their narrative becomes obscured. This paper is designed to address such moments by providing an innovative solution: a tool that completes stories based on given prompts. By inputting a short story prompt, users can receive a conclusion to their story, articulated in one sentence or more, thereby enhancing the storytelling process with AI-driven creativity. This tool aims not only to assist authors in navigating writer's block but also to offer a fun and interactive way for anyone to expand on story ideas spontaneously. Through this paper, we explore the intersection of artificial intelligence and creative writing, pushing the boundaries of how stories can be crafted and concluded. To create our final text-generation models, we used a pre-trained GPT-3.5 model and a newly created fine-tuned SSM-Mamba model, both of which perform well on a comprehensive list of metrics including BERT score, METEOR, BLEU, ROUGE, and perplexity. The SSM model has also been made public for the NLP community on HuggingFace as an open-source contribution; for the time being, it is a first-of-its-kind state-space model for the story-generation task on HuggingFace.
K-UniMorph: Korean Universal Morphology and its Feature Schema
Jo, Eunkyul Leah, Kim, Kyuwon, Wu, Xihan, Lim, KyungTae, Park, Jungyeul, Park, Chulwoo
We present in this work a new Universal Morphology dataset for Korean. Previously, the Korean language has been underrepresented in the field of morphological paradigms amongst hundreds of diverse world languages. Hence, we propose a Universal Morphology paradigm for the Korean language that preserves its distinct characteristics. For our K-UniMorph dataset, we outline each grammatical criterion in detail for the verbal endings, clarify how to extract inflected forms, and demonstrate how we generate the morphological schemata. This dataset adopts the morphological feature schema of Sylak-Glassman et al. (2015) and Sylak-Glassman (2016) for the Korean language, as we extract inflected verb forms from the Sejong morphologically analyzed corpus, one of the largest annotated corpora for Korean. During data creation, our methodology also includes investigating the correctness of the conversion from the Sejong corpus. Furthermore, we carry out the inflection task using three different Korean word forms: letters, syllables, and morphemes. Finally, we discuss and describe future perspectives on Korean morphological paradigms and the dataset.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > France > Île-de-France > Paris > Paris (0.06)
- Europe > France > Provence-Alpes-Côte d'Azur > Bouches-du-Rhône > Marseille (0.05)
- (16 more...)
Unsupervised Neural Stylistic Text Generation using Transfer learning and Adapters
Kumar, Vinayshekhar Bannihatti, Gangadharaiah, Rashmi, Roth, Dan
Research has shown that personality is a key driver of engagement and user experience in conversational systems. Conversational agents should also maintain a consistent persona to have an engaging conversation with a user. However, text generation datasets are often crowd-sourced and thereby have an averaging effect, where the style of the generation model is an average of the styles of all the crowd workers who contributed to the dataset. While one could collect persona-specific datasets for each task, it would be an expensive and time-consuming annotation effort. In this work, we propose a novel transfer learning framework which updates only $0.3\%$ of model parameters to learn style-specific attributes for response generation. For the purpose of this study, we tackle the problem of stylistic story ending generation using the ROCStories corpus. We learn style-specific attributes from the PERSONALITY-CAPTIONS dataset. Through extensive experiments and evaluation metrics, we show that our novel training procedure can improve the style generation by 200 over Encoder-Decoder baselines while maintaining on-par content relevance metrics with
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- Europe > Romania > Sud - Muntenia Development Region > Giurgiu County > Giurgiu (0.04)
- Europe > Italy > Tuscany > Florence (0.04)
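The parameter-efficient setup in the adapter abstract above — training only about 0.3% of model parameters — can be illustrated with a back-of-the-envelope calculation. This is a hedged sketch, not the authors' code: the layer count, hidden size, and bottleneck width below are illustrative assumptions, and `adapter_params` / `trainable_fraction` are hypothetical helpers.

```python
# Sketch: freeze a large encoder-decoder model and train only small
# bottleneck adapters, so the trainable fraction stays tiny.

def adapter_params(hidden_dim: int, bottleneck_dim: int) -> int:
    """Parameters of one adapter: down-projection + up-projection, with biases."""
    down = hidden_dim * bottleneck_dim + bottleneck_dim
    up = bottleneck_dim * hidden_dim + hidden_dim
    return down + up

def trainable_fraction(frozen_params: int, n_layers: int,
                       hidden_dim: int, bottleneck_dim: int) -> float:
    """Fraction of all parameters that the adapters contribute."""
    trainable = n_layers * adapter_params(hidden_dim, bottleneck_dim)
    return trainable / (frozen_params + trainable)

# Illustrative numbers (not from the paper): a ~110M-parameter backbone
# with 12 layers, hidden size 768, and a 64-dim adapter bottleneck.
frac = trainable_fraction(110_000_000, 12, 768, 64)
print(f"trainable fraction: {frac:.2%}")  # around 1%; a narrower bottleneck lowers it further
```

Shrinking the bottleneck (or adding adapters to fewer layers) is how such methods push the trainable share down toward fractions like the 0.3% reported above.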
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
Jang, Joel, Ye, Seonghyeon, Seo, Minjoon
Previous work has shown that there exists a scaling law between the size of Language Models (LMs) and their zero-shot performance on different downstream NLP tasks. In this work, we show that this phenomenon does not hold when evaluating large LMs on tasks with negated prompts, which instead exhibit an inverse scaling law. We evaluate 9 different tasks with negated prompts on (1) pretrained LMs (OPT & GPT-3) of varying sizes (125M - 175B), (2) LMs further pretrained to generalize to novel prompts (InstructGPT), (3) LMs provided with few-shot examples, and (4) LMs fine-tuned specifically on negated prompts; all LM types perform worse on negated prompts as they scale and show a huge performance gap relative to human performance when comparing the average score on both original and negated prompts. By highlighting a critical limitation of existing LMs and methods, we urge the community to develop new approaches to developing LMs that actually follow the given instructions. We provide the code and the datasets to explore negated prompts at this link.
- North America > United States > Washington > King County > Seattle (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- (4 more...)
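The negated-prompt evaluation described in the abstract above — scoring the same tasks under original and negated prompts and comparing the average with a human reference — can be mocked up as follows. This is a minimal sketch with made-up accuracy numbers; `avg_score` is a hypothetical helper, not code released with the paper.

```python
# Sketch: each task is scored twice, once with the original prompt and
# once with its negation; the headline comparison averages both.

def avg_score(original: list[float], negated: list[float]) -> float:
    """Average accuracy over paired original/negated prompt sets."""
    assert len(original) == len(negated)
    return sum(o + n for o, n in zip(original, negated)) / (2 * len(original))

# Made-up per-task accuracies for illustration only: a model that does
# well on original prompts but collapses on negated ones, vs. humans
# who stay consistent on both.
model_avg = avg_score([0.82, 0.74, 0.90], [0.31, 0.28, 0.40])
human_avg = avg_score([0.95, 0.93, 0.97], [0.94, 0.92, 0.96])
gap = human_avg - model_avg
print(f"model avg {model_avg:.2f}, human avg {human_avg:.2f}, gap {gap:.2f}")
```

Averaging the paired scores is what exposes the gap: a model can look strong on original prompts alone while its negated-prompt failures drag the combined score far below the human reference.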
Possible Stories: Evaluating Situated Commonsense Reasoning under Multiple Possible Scenarios
The possible consequences for the same context may vary depending on the situation we refer to. However, current studies in natural language processing do not focus on situated commonsense reasoning under multiple possible scenarios. This study frames this task by asking multiple questions with the same set of possible endings as candidate answers, given a short story text. Our resulting dataset, Possible Stories, consists of more than 4.5K questions over 1.3K story texts in English. We discover that even current strong pretrained language models struggle to answer the questions consistently, highlighting that the highest accuracy in an unsupervised setting (60.2%) is far behind human accuracy (92.5%). Through a comparison with existing datasets, we observe that the questions in our dataset contain minimal annotation artifacts in the answer options. In addition, our dataset includes examples that require counterfactual reasoning, as well as those requiring readers' reactions and fictional information, suggesting that our dataset can serve as a challenging testbed for future studies on situated commonsense reasoning.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > China > Hong Kong (0.04)
- North America > Dominican Republic (0.04)
- (8 more...)
- Research Report > Experimental Study (0.67)
- Research Report > New Finding (0.46)
An Ion Exchange Mechanism Inspired Story Ending Generator for Different Characters
Jiang, Xinyu, Zhang, Qi, Shi, Chongyang, Jiang, Kaiying, Hu, Liang, Wang, Shoujin
Story ending generation aims at generating reasonable endings for a given story context. Most existing studies in this area focus on generating coherent or diversified story endings, while they ignore that different characters may lead to different endings for a given story. In this paper, we propose a Character-oriented Story Ending Generator (CoSEG) to customize an ending for each character in a story. Specifically, we first propose a character modeling module to learn the personalities of characters from their descriptive experiences extracted from the story context. Then, inspired by the ion exchange mechanism in chemical reactions, we design a novel vector breaking/forming module to learn the intrinsic interactions between each character and the corresponding context through an analogical information exchange procedure. Finally, we leverage the attention mechanism to learn effective character-specific interactions and feed each interaction into a decoder to generate character-oriented endings. Extensive experimental results and case studies demonstrate that CoSEG achieves significant improvements in the quality of generated endings compared with state-of-the-art methods, and it effectively customizes the endings for different characters.
Passing the Turing Test: AI creates human-like text
The baseball legend Yogi Berra once had a manager tell him to think more when he was up at bat. Berra responded, "How can a guy hit and think at the same time?" It was a fair question. After all, when a pitcher throws a fastball, the batter has about 400 milliseconds to see the pitch, judge its direction, and swing the bat. The human eye takes about 80 milliseconds to react to a stimulus.
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.58)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.58)
- Information Technology > Artificial Intelligence > Issues > Turing's Test (0.40)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.39)