Scenario is a world-first 360-degree 3D cinematic installation whose narrative is interactively produced by the audience and humanoid characters imbued with artificial intelligence (AI). The title is a Commedia dell’Arte term, referring to how the dramatic action depends on the way actors and audience interact.
Scenario is inspired by the experimental television work of Samuel Beckett. A female humanoid character, along with her four children, has been imprisoned in a concealed basement by her father, who lives above ground with his daytime family. Set within this underground labyrinth, she and her children take the audience through various basement spaces in an attempt to discover the possible ways in which they and the audience can resolve the mystery of their imprisonment and so effect an escape before certain death. Watching them is a series of shadowy humanoid sentinels who track the family and the audience, physically trying to block their escape.
Rapidly interpreting and responding to audience behaviour by means of a sophisticated AI system, the humanoid sentinels work to block the audience and the family at every turn. This twofold dramatic action enables the work to create a narrative that evolves according to how the humanoids and the audience physically interact with each other. It is effected by means of a vision system that tracks the audience's behaviour, linked to an AI system that allows the humanoids to independently interpret and respond to that behaviour.
Background
Scenario investigates the differences in narrative reasoning between humanoids and human participants in interactive cinema. It proposes that when humanoids are given a modest ability to sense and interpret the actions of human participants sharing a digital cinematic environment, their interactive responses will co-evolve autonomously with those of the human participants. In an experimental encounter with human participants, inspired by Samuel Beckett's Quad 1 + 2, the study tests the narrative autonomy of humanoids, that is, their capacity to make independent decisions using an AI language.
Objectives
The project has three key research objectives:
- Explain co-evolutionary narrative as the interaction between human participants and autonomous humanoids. The narrative autonomy of humanoids is defined as their capacity to act with reference to a vision system that tracks the human participants, using an Artificial Intelligence (AI) system in consultation with a knowledge database. The implementation of narrative autonomy by humanoids is twofold. Firstly, a humanoid is obliged to act on its own decision-making in response to the meanings it ascribes to human behaviour. Secondly, a humanoid's motivation can be expressed through its behavioural response.
- Test outcomes of co-evolutionary interaction between human participants and humanoids in a cinematic experiment. Scenario involves the apprehension, interpretation and response by multiple humanoids to clusters of more than one human participant, thus focusing upon group interaction.
- Evaluate the significance of co-evolutionary narrative as a condition of: differentiated clarity in sensing and tracking; autonomy recoverable in the deliberation of humanoids; quality of aesthetic conviction in the co-evolutionary responses of humanoids and human participants.
Methodology
Scenario undertakes a narrative experiment within a framework that integrates independently proven experimental systems so as to test the hypothesis that co-evolutionary narrative can aesthetically demonstrate levels of autonomy in humanoid characters.
The materials are integrated from established interactive technologies. First among these is iCinema's world-first 360-degree 3D cinematic theatre AVIE3, which provides the setting for an experimental interactive framework incorporating digital sensing, interpretive and responsive systems. The narrative experiment is designed to dramatise distinct behavioural processes, thus probing humanoid autonomy and the cognitive gap between humanoid and human participant. The design is sufficient to provide humanoids with minimal perceptive, reasoning and expressive capabilities that allow them to track, deliberate on and react to human participants, with the autonomy and deliberation characteristic of co-evolutionary narrative.
The framework is structured so as to respect autonomous humanoid intentionality, as opposed to the simulated intentionality of conventional digital games. While narrative reasoning in human-centred interactivity focuses exclusively on human judgments, co-evolutionary narrative allows for deliberated action by humanoids. This involves providing these characters with a number of capacities beyond their rudimentary pre-scripted behaviour: first, the ability to sense the behaviour of participants; second, the facility to represent this behaviour symbolically; and third, the capacity to deliberate on their own behaviour and respond intelligibly.
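The second capacity, symbolic representation, is the hinge between the other two. The sketch below is purely illustrative (the class, function and threshold are hypothetical assumptions, not drawn from Scenario's codebase): it shows how raw tracked floor positions might be abstracted into symbolic facts that a reasoning system can deliberate over.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Set, Tuple

@dataclass
class TrackedPerson:
    pid: int    # identity maintained by the vision system
    x: float    # floor position in metres
    y: float

def to_symbolic_facts(people: Iterable[TrackedPerson],
                      zone_of: Callable[[float, float], str],
                      near_threshold: float = 1.5) -> Set[Tuple]:
    """Abstract raw tracking data into symbolic facts for deliberation."""
    people = list(people)
    facts: Set[Tuple] = set()
    for p in people:
        # each person's position is named by the spatial zone it falls in
        facts.add(("in_zone", p.pid, zone_of(p.x, p.y)))
    # pairwise proximity becomes a 'near' relation, the raw material
    # for reasoning about clustering behaviour
    for a in people:
        for b in people:
            if a.pid < b.pid and (a.x - b.x) ** 2 + (a.y - b.y) ** 2 < near_threshold ** 2:
                facts.add(("near", a.pid, b.pid))
    return facts
```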
The humanoids define their autonomy experimentally through their ability to deliberate within a performance context inspired by the experimental film and television work of Samuel Beckett. In this “Beckettian” performance, individual and group autonomy is determined by physical interactive behaviour, whereby characters define themselves and each other through their reciprocal actions in space. The reciprocal exchange of behaviour in this context is sufficiently elastic to allow for the expression of creative autonomy. As Scenario focuses on human and humanoid clustering, its experiments examine the relation between groups of humanoids and groups of humans.
Beckett’s research is extended because it provides an aesthetic definition of group autonomy as other-intentional, that is, predicated on shared actions between groups of human participants. In Beckett’s Quad, for example, characters mutually define each other by means of their respective territorial manoeuvres as they move backwards and forwards across the boundaries of a quadrant. Quad is drawn on as a way of aesthetically conceptualising the relationship between spatialisation and group consciousness. For example, in one scene, participants are confronted with several humanoids, who cluster themselves into groups in order to block the ability of the human participants to effectively negotiate their way through the space. The better the humanoids can work as a group, the more effective is their blocking activity.
This type of interaction generates a cascading series of gestural and clustering behaviours, testing and evaluating the network of meaningful decisions made by humanoids and human participants as they attempt to make sense of each other's behaviour.
Technical Features
The digital world of Scenario has a number of technical features:
AI System
The AI system is based on a variant of a symbolic logic planner drawn from the cognitive robotics language Golog developed at the University of Toronto, capable of dealing with sensors and external actions. Animations that can be performed by a humanoid character are considered actions that need to be modelled and controlled (e.g., walking to a location, pushing a character, etc.). Each action is modelled in terms of the conditions under which it can be performed (e.g., you can push a character if you are located next to it) and how it affects the environment when the action is performed. Using this modelling, the AI system plans or coordinates the actions (i.e., animations) of the humanoid characters by reasoning about the most appropriate course of action.
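Scenario's planner itself is written in a Golog dialect, which is not reproduced here. As a rough, hypothetical sketch of the style of modelling just described (the action names, fact tuples and naive search below are illustrative assumptions, not the project's code), each animation can be treated as an action with a precondition and effects, over which a planner searches for a course of action:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, List, Optional, Tuple

Fact = Tuple[str, ...]          # e.g. ("next_to", "sentinel", "character")
State = FrozenSet[Fact]

@dataclass(frozen=True)
class Action:
    name: str
    precondition: Callable[[State], bool]   # when the animation may be performed
    add: FrozenSet[Fact]                    # facts the action makes true
    delete: FrozenSet[Fact]                 # facts the action makes false

def apply_action(state: State, a: Action) -> State:
    return (state - a.delete) | a.add

# e.g. a sentinel can push a character only if it is located next to it
push = Action(
    name="push(sentinel, character)",
    precondition=lambda s: ("next_to", "sentinel", "character") in s,
    add=frozenset({("displaced", "character")}),
    delete=frozenset(),
)

def plan(state: State, goal: Fact, actions: List[Action],
         depth: int = 5) -> Optional[List[str]]:
    """Naive depth-bounded forward search: a stand-in for Golog's
    far richer situation-calculus reasoning."""
    if goal in state:
        return []
    if depth == 0:
        return None
    for a in actions:
        if a.precondition(state):
            rest = plan(apply_action(state, a), goal, actions, depth - 1)
            if rest is not None:
                return [a.name] + rest
    return None

start: State = frozenset({("next_to", "sentinel", "character")})
print(plan(start, ("displaced", "character"), [push]))
# -> ['push(sentinel, character)']
```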
AI Interface
A networked, multi-threaded interface connects the digital world to an external Artificial Intelligence system; it accepts queries about the state of the digital world (e.g. character positions, events occurring) and can be used to control humanoid characters and trigger events in that world. Currently a cognitive robotics language based on Golog is used, but the AI Interface has been developed in a modular fashion that allows any other language to be plugged in instead.
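A minimal sketch of what such a modular interface could look like, assuming a line-delimited JSON protocol and a hypothetical world object exposing query() and execute() (none of these are documented project details):

```python
import json
import socketserver

class WorldInterfaceHandler(socketserver.StreamRequestHandler):
    """Answers world-state queries and executes character commands sent
    by whichever external reasoning language is plugged in."""
    def handle(self):
        for line in self.rfile:                  # one JSON message per line
            msg = json.loads(line)
            if msg["type"] == "query":           # e.g. {"type": "query", "what": "positions"}
                reply = self.server.world.query(msg["what"])
            else:                                # e.g. {"type": "control", "action": "walk_to", ...}
                reply = self.server.world.execute(msg)
            self.wfile.write((json.dumps(reply) + "\n").encode())

class WorldInterfaceServer(socketserver.ThreadingTCPServer):
    """Multi-threaded: each connected reasoner is served on its own thread,
    so any language that speaks the protocol can be plugged in."""
    def __init__(self, addr, world):
        super().__init__(addr, WorldInterfaceHandler)
        self.world = world    # hypothetical object exposing query() / execute()
```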
Real-Time Tracking and Localisation System
The tracking system uses advanced techniques from Computer Vision and Artificial Intelligence to identify and locate persons in space and time. A total of 16 cameras identify individuals as they enter the AVIE3 environment and maintain their identities as they move around. Spatial coherence is exploited: overhead cameras provide a clear view but no height information, while oblique cameras provide better location information. A distributed architecture running over sophisticated network hardware, together with software techniques, delivers real-time results. Between camera updates, prediction techniques maintain accurate person positions at rates well above the camera update rate. The tracking system also remains robust when multiple people are present. A real-time voxel reconstruction of every individual within the environment leverages the tracking system to considerably speed up construction of the 3D model. A head and fingertip recognition and tracking system allows users to interact with the immersive environment through pointing gestures.
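The actual tracking and prediction techniques are detailed in the Sridhar and Sowmya papers listed under Publications below. As a deliberately minimal stand-in, the following sketch assumes simple constant-velocity extrapolation, just to illustrate how positions can be maintained between comparatively slow camera updates:

```python
class TrackPredictor:
    """Holds the last confirmed position and velocity of one person and
    extrapolates between camera updates (constant-velocity assumption)."""
    def __init__(self, x: float, y: float, t: float):
        self.x, self.y, self.t = x, y, t
        self.vx = self.vy = 0.0

    def correct(self, x: float, y: float, t: float) -> None:
        """Called whenever a new camera measurement arrives."""
        dt = t - self.t
        if dt > 0:
            self.vx = (x - self.x) / dt
            self.vy = (y - self.y) / dt
        self.x, self.y, self.t = x, y, t

    def predict(self, t: float) -> tuple:
        """Called at render rate, far more often than the cameras update."""
        dt = t - self.t
        return (self.x + self.vx * dt, self.y + self.vy * dt)
```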
Animation Interface
A custom software toolset, iC AI, functions as a virtual laboratory for constructing humanoid characters for use in narrative scenarios. It enables characters appearing within the AVIE space to exhibit a high level of visual quality, with realistic, human-like animation. This includes the ability to instruct characters at a higher programmatic level (walking to a point, looking at objects, turning, employing inverse kinematics) rather than through individual joint commands, and the ability to schedule these behaviours to produce believable, fluid characters.
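The flavour of this task-level instruction style can be suggested by a hypothetical sketch (the class, method names and engine interface below are assumptions, not iC AI's actual API):

```python
import collections

class Character:
    """Hypothetical wrapper mirroring the high-level command style:
    behaviours are requested at task level, not as joint rotations."""
    def __init__(self, name: str):
        self.name = name
        self._queue = collections.deque()

    def walk_to(self, x: float, y: float) -> "Character":
        self._queue.append(("walk_to", x, y))
        return self              # chaining lets a scene read like a script

    def look_at(self, target: str) -> "Character":
        self._queue.append(("look_at", target))
        return self

    def update(self, engine) -> None:
        """Called each frame: starts the next queued behaviour once the
        current one completes, so motions follow on as a fluid sequence.
        'engine' is an assumed interface with current_done() and start()."""
        if self._queue and engine.current_done(self.name):
            engine.start(self.name, self._queue.popleft())

# usage: sentinel.walk_to(3.0, 1.5).look_at("participant_2")
```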
Mixed Reality System
A custom 3D behaviour toolset, AVIE-MR, allows the creation of 'scenarios' that exhibit a cycle of cause and effect between the real world and the digital world. Its principal feature is that it allows the cognitive robotics language to implement realistic behaviour in the humanoid characters with a minimum of programming effort. This supports enhanced levels of interactive and immersive experience for participants.
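Taken together, the systems above form the cause-and-effect cycle that AVIE-MR coordinates. As a schematic summary only (every object here is hypothetical), one pass of the cycle might read:

```python
def scenario_cycle(tracker, world, reasoner) -> None:
    """One iteration of the real-to-digital cause-and-effect cycle:
    sense participants, abstract to symbols, deliberate, act back."""
    people = tracker.locate()            # real world -> tracked positions
    facts = world.symbolise(people)      # positions  -> symbolic facts
    actions = reasoner.deliberate(facts) # facts      -> chosen behaviours
    for humanoid, action in actions:
        world.perform(humanoid, action)  # digital world acts back on the real
```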
Investigators: Dennis Del Favero, Jeffrey Shaw, Steve Benford, Johannes Goebel
Programmers: Ardrian Hardjono, Jared Berghold, Som Guan, Alex Kupstov, Piyush Bedi, Rob Lawther
Project Funding: ARC DP0556659
2011-2015
Exhibition
- Jeffrey Shaw & Hu Jieming Twofold Exhibition, Chronus Art Centre, Shanghai, 2014/15
- Child, Nation & World Cinema Symposium, UNSW, Sydney, 2014
- ISEA2013, UNSW, Sydney, 2013
- Sydney Film Festival, Sydney, 2011
- 15th Biennial Film & History Conference, UNSW, Sydney, 2010
Publications
Books
Ed Scheer. (2011). Scenario: The Atmosphere Engine. Press and ZKM: Sydney and Karlsruhe.
Journal articles
Neil C.M. Brown, Timothy Barker and Dennis Del Favero. (2011). “Performing Digital Aesthetics: The Framework for a Theory of the Formation of Interactive Narratives”, Leonardo 44(3) (forthcoming – accepted July 2010).
A. Sridhar and A. Sowmya. (2011). “Distributed, Multi-Sensor Tracking of Multiple Participants within Immersive Environments using a 2-Cohort Camera Setup”, Machine Vision and Applications (forthcoming – accepted August 2010).
A. Sridhar and A. Sowmya. (2008). “Multiple Camera, Multiple Person Tracking with Pointing Gesture Recognition in Immersive Environments”, Lecture Notes in Computer Science 5358(I), G. Bebis et al. (Eds.), Berlin: Springer Verlag: 508-519.
Conference papers
Dennis Del Favero and Timothy Barker. (2010). “Scenario: Co-Evolution, Shared Autonomy and Mixed Reality”, Proceedings of IEEE International Symposium on Mixed and Augmented Reality (ISMAR2010), Seoul, 13-16 October.
Timothy Barker. (2010). “Interactive Aesthetics: iCinema, Interactive Narratives and Immersive Environments”, 15th Biennial Conference of The Film and History Association of Australia and New Zealand, Sydney, 30 November – 3 December.
Maurice Pagnucco. (2010). “What is Artificial Intelligence in Scenario?”, 15th Biennial Conference of The Film and History Association of Australia and New Zealand, Sydney, 30 November – 3 December.
Laura Aymerich. (2010). “Respuesta emocional de los participantes en un juego interactivo en un ambiente virtual” [Emotional Response of Participants in an Interactive Game in a Virtual Environment], II Congreso Internacional AE-IC Málaga 2010 Comunicación y desarrollo en la era digital (II International Congress, AE-IC), Malaga, Spain, 3-5 February.
A. Sridhar, A. Sowmya and P. Compton. (2010). “On-line, Incremental Learning for Real-Time Vision Based Movement Recognition”, IEEE 9th International Conference on Machine Learning and Applications ICMLA 2010, Washington, DC, USA, 12-14 December.
A. Sridhar and A. Sowmya. (2009). “SparseSPOT: Using A Priori 3-D Tracking for Real-Time Multi-Person Voxel Reconstruction”, Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology (VRST 2009), S. N. Spencer (Ed.), Kyoto, Japan, November, 135-138.
Laura Aymerich. (2009). “Identification with an Animated Object and its Relationship to Emotions in a Virtual Environment”, Entertainment = Emotion Conference, Benasque, Spain, 15-21 November.
Volker Kuchelmeister, Dennis Del Favero, Ardrian Hardjono, Jeffrey Shaw and Matthew McGinity. (2009). “Immersive Mixed Media Augmented Reality Applications and Technology”, Advances in Multimedia Information Processing, 10th Pacific Rim Conference on Multimedia, Paisarn Muneesawang, Feng Wu, Itsuo Kumazawa, Athikom Roesaburt, Mark Liao, Xiaoou Tang (Eds.), Bangkok, Thailand, 15-18 December, 112-118.
Volker Kuchelmeister. (2009). “Universal Capture through Stereographic Multi-Perspective Recording and Scene Reconstruction”, Advances in Multimedia Information Processing, 10th Pacific Rim Conference on Multimedia, Paisarn Muneesawang, Feng Wu, Itsuo Kumazawa, Athikom Roesaburt, Mark Liao, Xiaoou Tang (Eds.), Bangkok, Thailand, 15-18 December, 974-981.
Tim Ströder and Maurice Pagnucco. (2009). “Realising Deterministic Behavior from Multiple Non-Deterministic Behaviors”, Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI’09), Pasadena, USA, 11-17 July, 936-941.
Dennis Del Favero, Neil Brown, Jeffrey Shaw and Peter Weibel. (2007). “Experimental Aesthetics and Interactive Narrative”, ACUADS Conference Report, Sydney, University of New South Wales.
Matthew McGinity, Jeffrey Shaw, Dennis Del Favero and Volker Kuchelmeister. (2007). “AVIE: A Versatile Multi-User Stereo 360-Degree Interactive VR Theatre”, The 34th International Conference on Computer Graphics and Interactive Techniques, San Diego, 5-9 August.
Credits
Director: Dennis Del Favero
Writer: Stephen Sewell
Artificial Intelligence System: Maurice Pagnucco, Timothy Cerexhe
Real-Time Computer Vision System and Interpretation System: Anuraag Sridhar, Arcot Sowmya, Paul Compton
Composer: Kate Moore
Designer: Karla Urizar
Lead Technical Architect: Ardrian Hardjono
Software Engineers: Jared Berghold, Som Guan, Alex Kupstov, Piyush Bedi, Rob Lawther
Hardware Integration Engineer: Robin Chow
Pianist: Saskia Lankhoorn (playing Zomer)
Sound Engineer: Marc Chee
Animation Modeller: Alison Bond
Post-Doctoral Fellow: Tim Barker
Motion Capture Actors: Dianne Reid, Rebekkah Connors, Alethia Sewell, Stephanie Hutchison
Body Models: Corrie Morton, Sylvia Lam, Jennifer Munroe, Taylor Callaghan, Zachary Collie
Motion Capture: MoCap Studios Deakin University
Voice-over Actors: Noel Hodda, Steve Bisley, Heather Mitchell, Katrina Foster, Marcella and Justine Kerrigan, Bonni Sven, Chaquira Cussack
Production Managers: Sue Midgley, Joann Bowers
Australian Research Council Queen Elizabeth II Fellow: Dennis Del Favero
Australian Research Council Discovery Project Investigators: Dennis Del Favero, Jeffrey Shaw, Steve Benford, Johannes Goebel
Scenario is an experimental study for a research project supported under the Australian Research Council’s Fellowship and Discovery Projects funding schemes.