The Case for Intention Revision in Stories and its Incorporation into IRIS, a Story-Based Planning System

AAAI Conferences

Character intention revision is an essential component of stories, but it has yet to be incorporated into story generation systems. However, intentionality, one component of intention revision, has been explored in both narrative generation and logical formalisms. The IRIS system adopts the belief/desire/intention framework of intentionality from logical formalisms and combines it with preexisting concepts of intentionality in narrative. IRIS also introduces the crucial concept of intention revision for characters in the story. The intent of this synthesis is to create stories with dynamic and believable characters that update their beliefs, replan, and revise their intentions over the course of the story.


Scene Text Magnifier

arXiv.org Machine Learning

Scene text magnification aims to magnify text in natural scene images without recognition. It could help people with myopia or dyslexia to better understand the scene. In this paper, we design the scene text magnifier through four interacting CNN-based networks: character erasing, character extraction, character magnification, and image synthesis. The network architectures are extended from the hourglass encoder-decoder. The system takes the original scene text image as input and outputs the text-magnified image while keeping the background unchanged; intermediately, we can obtain the side-output results of text erasing and text extraction. The four sub-networks are first trained independently and then fine-tuned end-to-end. The training samples for each stage are generated through a flow that takes the original images and text annotations from the ICDAR2013 and Flickr datasets as input, and produces the corresponding text-erased image, magnified text annotation, and text-magnified scene image as output. To evaluate the performance of the text magnifier, Structural Similarity (SSIM) is used to measure the regional changes in each character region. The experimental results demonstrate that our method can magnify scene text effectively without affecting the background.
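The abstract's evaluation step, scoring Structural Similarity per character region, can be sketched as follows. This is a minimal, non-windowed SSIM (global statistics over each region, not the sliding-window variant) and assumes character bounding boxes are available from the dataset annotations; the function names are illustrative, not from the paper.

```python
import numpy as np

def ssim_region(a, b, L=255.0):
    """Simplified (global, non-windowed) SSIM between two image regions."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # standard stabilizing constants
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / (
        (mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2))

def region_changes(original, magnified, char_boxes):
    """Score each annotated character region; lower SSIM = larger change.

    char_boxes: list of (x0, y0, x1, y1) boxes, assumed known from annotation.
    """
    return [ssim_region(original[y0:y1, x0:x1], magnified[y0:y1, x0:x1])
            for (x0, y0, x1, y1) in char_boxes]
```

An unchanged region scores 1.0, so low per-region scores inside character boxes alongside high scores elsewhere would indicate text was magnified while the background stayed intact.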


Neural Character-level Dependency Parsing for Chinese

AAAI Conferences

Character-level dependency parsing covers all levels of language processing within a Chinese sentence. It simplifies the pipeline into two steps, character POS tagging and character dependency parsing, while traditional processing has to handle word segmentation, POS tagging for words, and word-level dependency parsing, as shown in Figure 2. With different processing hierarchies, we also provide complete matches (CM) as one metric for the related evaluation. The character parsing performance comparison is given in Table 1, from which the following observations are obtained. Our model shows that even integrating the least character position information is beneficial to the parser. Finally, effective integration of two levels of tags boosts the performance most. The CHAR WORD strategy is more straightforward but brings too many tags or labels, and thus will slow down the parsing and make the learning more difficult. The reason might be that characters instead of words are used, and this inconvenience makes us do necessary restorations from the character-level dependency parsing results (Table 2: character-level evaluation).
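The two-step pipeline described above can be made concrete with a toy data sketch. Everything here is illustrative rather than the paper's actual model or tag inventory: the sentence, the combined POS-plus-position tag scheme, and the `complete_match` helper are assumptions, with only the complete-match (CM) metric itself taken from the text.

```python
# Illustrative character-level representation of a short Chinese sentence.
# Two words: 中国 ("China") / 企业 ("enterprise").
sentence = "中国企业"

# Step 1: character POS tagging. Hypothetical tags combine the word-level POS
# with the character's position in the word (B = begin, E = end).
char_pos = ["NR-B", "NR-E", "NN-B", "NN-E"]

# Step 2: character dependency parsing. Heads are 1-indexed over characters,
# 0 = root. Intra-word arcs (中 -> 国, 企 -> 业) encode segmentation; the
# inter-word arc (国 -> 业) encodes the word-level dependency.
heads = [2, 4, 4, 0]

def complete_match(pred_heads, gold_heads):
    """CM metric: 1 if every character's head is predicted correctly, else 0."""
    return int(pred_heads == gold_heads)
```

Because segmentation is implicit in the arcs and tags, recovering word-level output only requires reading off the intra-word links, which is the "restoration" step the abstract mentions.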


Hello, Narratives: Character Development in Automated Narrative Generation

AAAI Conferences

Development of interesting and complex characters is the most important element of a narrative. Presented in this work is fAIble II, an automated narrative generation system that focuses on character development. fAIble II leverages a graph database, containerized modules, knowledge templates, and language structuring to produce diverse and coherent stories. Story progression is driven by character perception, emotion, personality, and interaction with the story world. The resulting system has been evaluated via an anonymous questionnaire; responses suggest its ability to create diverse, sensible narratives using character development.