Mirowski, Piotr
Dialogue with the Machine and Dialogue with the Art World: Evaluating Generative AI for Culturally-Situated Creativity
Qadri, Rida, Mirowski, Piotr, Gabrielian, Aroussiak, Mehr, Farbod, Gupta, Huma, Karimi, Pamela, Denton, Remi
This paper proposes dialogue as a method for evaluating generative AI tools for culturally situated creative practice, one that recognizes the socially situated nature of art. Drawing on sociologist Howard Becker's concept of Art Worlds, this method expands the scope of traditional AI-and-creativity evaluations beyond benchmarks, user studies with crowd-workers, or focus groups conducted with artists. Our method involves two mutually informed dialogues: 1) 'dialogues with art worlds,' placing artists in conversation with experts such as art historians, curators, and archivists, and 2) 'dialogues with the machine,' facilitated through structured artist- and critic-led experimentation with state-of-the-art generative AI tools. We demonstrate the value of this method through a case study with artists and experts steeped in non-Western art worlds, specifically those of the Persian Gulf. We trace how these dialogues help create culturally rich, situated evaluations of the representational possibilities of generative AI, evaluations that mirror how generative artwork is received in the broader art ecosystem. Putting artists in conversation with commentators also allows them to shift their use of the tools in response to their cultural and creative context. Our study can provide generative AI researchers with an understanding of the complex dynamics between technology, human creativity, and the socio-politics of art worlds, helping them build more inclusive machines for diverse art worlds.
Neural Compression of Atmospheric States
Mirowski, Piotr, Warde-Farley, David, Rosca, Mihaela, Grimes, Matthew Koichi, Hasson, Yana, Kim, Hyunjik, Rey, Mélanie, Osindero, Simon, Ravuri, Suman, Mohamed, Shakir
This paper presents a family of neural network methods for compressing simulated atmospheric states, with the aim of reducing the currently immense storage requirements of such data from cloud scale (petabytes) to desktop scale (terabytes). This need for compression has arisen over the past 50 years, a period characterized by a steady push to increase the resolution of atmospheric simulations, which inflates the size and storage demands of the resulting datasets (e.g., Neumann et al. (2019); Schneider et al. (2023); Stevens et al. (2024)), while atmospheric simulation has come to play an increasingly critical role in scientific, industrial, and policy-level pursuits. Higher spatial resolutions unlock the ability of simulators to deliver more accurate predictions and to resolve ever more atmospheric phenomena. For example, while current models often operate at 25-50 km resolution, resolving storms requires 1 km resolution (Stevens et al., 2020), and resolving the motion of (and radiative effects due to) low clouds requires 100 m resolution (Satoh et al., 2019; Schneider et al., 2017). Machine learning models for weather prediction also face both opportunities and challenges at higher resolution: while additional granularity may afford better modelling opportunities, even the present size of atmospheric states poses a significant bottleneck for loading training data and serving model outputs (Chantry et al., 2021). To put the data storage problem in perspective, storing 40 years of reanalysis data from the ECMWF Reanalysis v5 dataset (ERA5, Hersbach et al. (2020)) at full spatial and temporal resolution (i.e.
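To make the scale concrete, here is a back-of-envelope estimate in Python. The 0.25-degree grid (721 x 1440 points) and 37 pressure levels are documented properties of ERA5; the choice of five variables and float32 precision are illustrative assumptions, not figures from the paper.

GRID_POINTS = 721 * 1440        # ERA5's 0.25-degree latitude/longitude grid
PRESSURE_LEVELS = 37            # ERA5 pressure-level data
NUM_VARIABLES = 5               # assumed: e.g. temperature, u/v wind, humidity, geopotential
BYTES_PER_VALUE = 4             # assumed: float32 storage
HOURS = 40 * 365.25 * 24        # 40 years of hourly snapshots

bytes_per_snapshot = GRID_POINTS * PRESSURE_LEVELS * NUM_VARIABLES * BYTES_PER_VALUE
total_bytes = bytes_per_snapshot * HOURS

print(f"per hourly snapshot: {bytes_per_snapshot / 1e9:.2f} GB")  # ~0.77 GB
print(f"40-year archive:     {total_bytes / 1e12:.0f} TB")        # ~269 TB

Under these assumptions a single hourly snapshot is roughly 0.77 GB and the 40-year archive roughly 270 TB; including ERA5's many additional surface variables and ensemble members pushes the full archive into the petabyte range the abstract cites.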
Designing and Evaluating Dialogue LLMs for Co-Creative Improvised Theatre
Branch, Boyd, Mirowski, Piotr, Mathewson, Kory, Ppali, Sophia, Covaci, Alexandra
Social robotics researchers are increasingly interested in multi-party conversational agents. With a growing demand for real-world evaluations, our study presents Large Language Models (LLMs) deployed in a month-long live show at the Edinburgh Festival Fringe. This case study investigates human improvisers co-creating with conversational agents in a professional theatre setting. We explore the technical capabilities and constraints of on-the-spot multi-party dialogue, providing comprehensive insights from both audience and performer experiences with AI on stage. Our human-in-the-loop methodology underlines the challenges these LLMs face in generating context-relevant responses and stresses the crucial role of the user interface. Audience feedback indicates an evolving interest in AI-driven live entertainment and direct human-AI interaction, alongside a diverse range of expectations about AI's conversational competence and its utility as a creativity support tool. Human performers expressed immense enthusiasm but varied satisfaction, and evolving public opinion reflects mixed feelings about AI's role in the arts.
Collaborative Storytelling with Human Actors and AI Narrators
Branch, Boyd, Mirowski, Piotr, Mathewson, Kory W.
Large language models can be used for collaborative storytelling. In this work we report on using GPT-3 (Brown et al., 2020) to co-narrate stories. The AI system must track plot progression and character arcs while the human actors perform scenes. This event report details how a novel conversational agent was employed as a creative partner with a team of professional improvisers to explore long-form spontaneous story narration in front of a live public audience. We introduced novel constraints on our language model to produce longer narrative text and tested the model in rehearsals with a team of professional improvisers. We then field-tested the model in two live performances for public audiences as part of a live theatre festival in Europe. We surveyed both audience members and performers after each performance to evaluate how well the AI performed in its role as narrator. Audiences responded positively to AI narration and indicated a preference for AI narrators over AI characters within a scene. Performers also responded positively, expressing enthusiasm for the creative and meaningful novel narrative directions the AI introduced to the scenes. Our findings support improvisational theatre as a useful test-bed for exploring how different language models can collaborate with humans in a variety of social contexts.
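For readers curious about the mechanics, the sketch below shows one way a rolling-context narration loop can be built on GPT-3 through OpenAI's legacy Completions endpoint (as available circa 2021). The prompt format, the six-line context window, and the re-sampling rule for enforcing longer narration are illustrative assumptions, not the constraints actually used in the paper.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

story_so_far = []  # running narration, so the model can track plot and characters

def narrate(scene_description: str, min_words: int = 40) -> str:
    prompt = (
        "You are the narrator of a long-form improvised story.\n"
        + "\n".join(story_so_far[-6:])          # keep recent context in the window
        + f"\nOn stage: {scene_description}\nNarrator:"
    )
    for _ in range(3):  # re-sample if the narration comes back too short
        response = openai.Completion.create(
            engine="davinci",
            prompt=prompt,
            max_tokens=200,
            temperature=0.9,
            stop=["\n"],
        )
        text = response.choices[0].text.strip()
        if len(text.split()) >= min_words:
            break
    story_so_far.append(text)
    return text

Keeping the accumulated narration in the prompt is what lets the model track plot progression and character arcs across scenes.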
Generative Art Using Neural Visual Grammars and Dual Encoders
Fernando, Chrisantha, Eslami, S. M. Ali, Alayrac, Jean-Baptiste, Mirowski, Piotr, Banarse, Dylan, Osindero, Simon
Whilst there are perhaps only a few scientific methods, there seem to be almost as many artistic methods as there are artists. Artistic processes appear to inhabit the highest order of open-endedness. To begin to understand some of the processes of art making, it is helpful to try to automate them, even partially. In this paper, we describe a novel algorithm for producing generative art that takes a text string as input and, as a creative response to this string, outputs an image that interprets it. It does so by evolving images using a hierarchical neural Lindenmayer system and evaluating these images along the way using an image-text dual encoder trained on billions of images and their associated text from the internet. In doing so, we gain access to and control over an instance of an artistic process, allowing analysis of which aspects of that process become the task of the algorithm and which remain the responsibility of the artist.
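As a minimal sketch of the evolve-and-score loop described above, the Python below expands an L-system grammar into a drawing program, renders it with a toy turtle, and hill-climbs on a fitness score. The grammar, renderer, and (1+1) evolution strategy are simplified stand-ins for the paper's hierarchical neural Lindenmayer system, and the fitness function is a stub where the image-text dual encoder would plug in.

import random
import numpy as np

def expand(axiom: str, rules: dict, depth: int) -> str:
    """Rewrite the axiom `depth` times using the L-system production rules."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(c, c) for c in s)
    return s

def render(program: str, size: int = 64) -> np.ndarray:
    """Toy turtle renderer: F draws a step, + and - turn by 25 degrees."""
    img = np.zeros((size, size))
    x, y, angle = size // 2, size // 2, 0.0
    for c in program:
        if c == "F":
            x += 2 * np.cos(angle)
            y += 2 * np.sin(angle)
            img[int(y) % size, int(x) % size] = 1.0
        elif c == "+":
            angle += np.radians(25)
        elif c == "-":
            angle -= np.radians(25)
    return img

def fitness(img: np.ndarray, prompt: str) -> float:
    # Placeholder: the paper scores images against the text string with a
    # web-scale image-text dual encoder; substitute such a similarity here.
    return float(img.mean())

def mutate(rules: dict) -> dict:
    new = dict(rules)
    key = random.choice(list(new))
    new[key] = new[key] + random.choice(["F", "+", "-"])
    return new

# Simple (1+1) evolution strategy over the grammar itself.
prompt = "a tangled thicket"
rules = {"F": "F+F-F"}
best = fitness(render(expand("F", rules, 4)), prompt)
for step in range(100):
    candidate = mutate(rules)
    score = fitness(render(expand("F", candidate, 4)), prompt)
    if score >= best:
        rules, best = candidate, score

Swapping in a real dual-encoder similarity recovers the division of labour the abstract highlights: generation and evaluation belong to the algorithm, while the prompt and the grammar's design remain with the artist.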
The StreetLearn Environment and Dataset
Mirowski, Piotr, Banki-Horvath, Andras, Anderson, Keith, Teplyashin, Denis, Hermann, Karl Moritz, Malinowski, Mateusz, Grimes, Matthew Koichi, Simonyan, Karen, Kavukcuoglu, Koray, Zisserman, Andrew, Hadsell, Raia
Navigation is a rich and well-grounded problem domain that drives progress in many different areas of research: perception, planning, memory, exploration, and optimisation in particular. Historically these challenges have been considered separately, and solutions have been built that rely on stationary datasets, for example recorded trajectories through an environment. These datasets cannot be used for decision-making and reinforcement learning, however, and in general the perspective of navigation as an interactive learning task, in which a learning agent's actions and behaviours are learned simultaneously with perception and planning, remains relatively unsupported. Thus, existing navigation benchmarks generally rely on static datasets (Geiger et al., 2013; Kendall et al., 2015) or simulators (Beattie et al., 2016; Shah et al., 2018). To support and validate research in end-to-end navigation, we present StreetLearn: an interactive, first-person, partially observed visual environment that uses Google Street View for its photographic content and broad coverage, and we provide performance baselines for a challenging goal-driven navigation task. The environment code, baseline agent code, and the dataset are available at http://streetlearn.cc
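The interactive setting follows the familiar observe-act-reward loop of reinforcement learning. The sketch below is a hypothetical stand-in illustrating that loop; its class and method names are invented for illustration and are not the actual StreetLearn API documented at http://streetlearn.cc.

import random

class PanoramaEnvironment:
    """Stand-in for a Street View-based environment: the agent sees a
    first-person crop of the current panorama and moves along the street graph."""
    ACTIONS = ["move_forward", "turn_left", "turn_right"]

    def reset(self):
        return {"image": None, "goal": None}  # first-person view plus goal spec

    def step(self, action):
        observation = {"image": None, "goal": None}
        done = random.random() < 0.01          # stub for reaching the goal panorama
        reward = 1.0 if done else 0.0          # e.g. +1 on goal arrival
        return observation, reward, done

env = PanoramaEnvironment()
obs = env.reset()
episode_return = 0.0
for t in range(1000):
    action = random.choice(env.ACTIONS)   # replace with a learned policy
    obs, reward, done = env.step(action)
    episode_return += reward
    if done:
        obs = env.reset()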
Learning To Follow Directions in Street View
Hermann, Karl Moritz, Malinowski, Mateusz, Mirowski, Piotr, Banki-Horvath, Andras, Anderson, Keith, Hadsell, Raia
Navigating and understanding the real world remains a key challenge in machine learning and inspires a great variety of research in areas such as language grounding, planning, navigation and computer vision. We propose an instruction-following task that requires all of the above, and which combines the practicality of simulated environments with the challenges of ambiguous, noisy real world data. StreetNav is built on top of Google Street View and provides visually accurate environments representing real places. Agents are given driving instructions which they must learn to interpret in order to successfully navigate in this environment. Since humans equipped with driving instructions can readily navigate in previously unseen cities, we set a high bar and test our trained agents for similar cognitive capabilities. Although deep reinforcement learning (RL) methods are frequently evaluated only on data that closely follow the training distribution, our dataset extends to multiple cities and has a clean train/test separation. This allows for thorough testing of generalisation ability. This paper presents the StreetNav environment and tasks, a set of novel models that establish strong baselines, and analysis of the task and the trained agents.
Learning to Navigate in Cities Without a Map
Mirowski, Piotr, Grimes, Matt, Malinowski, Mateusz, Hermann, Karl Moritz, Anderson, Keith, Teplyashin, Denis, Simonyan, Karen, Kavukcuoglu, Koray, Zisserman, Andrew, Hadsell, Raia
Navigating through unstructured environments is a basic capability of intelligent creatures, and thus is of fundamental interest in the study and development of artificial intelligence. Long-range navigation is a complex cognitive task that relies on developing an internal representation of space, grounded by recognisable landmarks and robust visual processing, that can simultaneously support continuous self-localisation ("I am here") and a representation of the goal ("I am going there"). Building upon recent research that applies deep reinforcement learning to maze navigation problems, we present an end-to-end deep reinforcement learning approach that can be applied on a city scale. Recognising that successful navigation relies on integration of general policies with locale-specific knowledge, we propose a dual pathway architecture that allows locale-specific features to be encapsulated, while still enabling transfer to multiple cities. A key contribution of this paper is an interactive navigation environment that uses Google Street View for its photographic content and worldwide coverage. Our baselines demonstrate that deep reinforcement learning agents can learn to navigate in multiple cities and to traverse to target destinations that may be kilometres away. A video summarizing our research and showing the trained agent in diverse city environments as well as on the transfer task is available at: https://sites.google.com/view/learn-navigate-cities-nips18
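One way to picture the dual pathway architecture is the schematic PyTorch sketch below: a visual encoder and policy core shared across cities, plus one small locale-specific module per city that can be swapped in when transferring. Layer shapes, sizes, and module names are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class DualPathwayAgent(nn.Module):
    def __init__(self, cities, num_actions=5):
        super().__init__()
        # Shared pathway: visual encoder and policy core, reused across cities.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(256), nn.ReLU(),
        )
        self.policy_core = nn.LSTMCell(256 + 64, 256)
        self.policy_head = nn.Linear(256, num_actions)
        # Locale-specific pathway: one small module per city, encapsulating
        # knowledge of that city's landmarks and layout.
        self.locale = nn.ModuleDict(
            {city: nn.Linear(256, 64) for city in cities}
        )

    def forward(self, frame, city, state):
        features = self.encoder(frame)
        local = torch.relu(self.locale[city](features))
        h, c = self.policy_core(torch.cat([features, local], dim=-1), state)
        return self.policy_head(h), (h, c)

agent = DualPathwayAgent(cities=["london", "paris", "new_york"])
state = (torch.zeros(1, 256), torch.zeros(1, 256))
logits, state = agent(torch.zeros(1, 3, 84, 84), "london", state)

Encapsulating locale-specific knowledge this way means that transfer to a new city only requires training a fresh locale module, leaving the shared encoder and policy untouched.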
Improbotics: Exploring the Imitation Game Using Machine Intelligence in Improvised Theatre
Mathewson, Kory W., Mirowski, Piotr
Theatrical improvisation (impro or improv) is a demanding form of live, collaborative performance. Improv is a humorous and playful artform built on an open-ended narrative structure that simultaneously celebrates effort and failure. It is thus an ideal test bed for the development and deployment of interactive artificial intelligence (AI)-based conversational agents, or artificial improvisors. This case study introduces an improv show experiment featuring human actors and artificial improvisors. We have previously developed a deep-learning-based artificial improvisor, trained on movie subtitles, that can generate plausible, context-based lines of dialogue suitable for theatre. In this work, we employed it to control what a subset of human actors say during an improv performance, while giving human-generated lines to a different subset of performers; all lines are delivered to the actors through headphones, which every performer wears. This paper describes a Turing test, or imitation game, taking place in a theatre, with both the audience members and the performers left to guess who is a human and who is a machine. To test scientific hypotheses about the perception of humans versus machines, we collected anonymous feedback from volunteer performers and audience members. Our results suggest that rehearsal increases performers' proficiency and their ability to control events in the performance. That said, consistency with real-world experience is limited by the interface and the mechanisms used to perform the show. We also show that human-generated lines are shorter and more positive, and contain fewer difficult words but more grammar and spelling mistakes than the lines generated by the artificial improvisor.
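The line-level comparison at the end can be reproduced with off-the-shelf text metrics. The snippet below uses NLTK's VADER sentiment analyzer and the textstat package to measure length, positivity, and difficult-word counts; these tools and the sample lines are stand-ins for illustration, not necessarily the instruments used in the study.

import nltk
import textstat
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sentiment = SentimentIntensityAnalyzer()

def line_metrics(line: str) -> dict:
    return {
        "length_words": len(line.split()),
        "positivity": sentiment.polarity_scores(line)["compound"],
        "difficult_words": textstat.difficult_words(line),
    }

human_line = "Sure, let's go, I love a good road trip!"
machine_line = "The inexorable journey commences at the periphery of dawn."
print("human:  ", line_metrics(human_line))
print("machine:", line_metrics(machine_line))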