Machines on Stage: Exploring the World of Robot Opera

Posted by Evelyn Ficarra and Chris Kiefer on January 6, 2022

A number of Emute Lab members are interested in robotics and AI. A key area where music, robotics and AI meet is Robot Opera. Robot Opera, in its simplest sense, is an opera which has robots in it. You could argue that in some sense this idea goes way back, as automata and stage machinery have been part of theatre and opera for a long time (for an example of the latter, see Corneille’s Andromède, described as a tragedy represented by machines). The word ‘robot’ itself has its origins in theatre - a play by Czech playwright Karel Čapek, Rossum’s Universal Robots, coined the term in 1921.

What’s behind the current wave of operatic interest in robots? There’s a broader context at work, in both theatre and in music. At the Performing Robots Conference in Utrecht in 2019, the point was repeatedly made that theatre could be a ‘sandbox’ or ‘test bed’ for human robot interaction, with much to be learnt about how humans see themselves (through creating robotic doubles) and about how we can imagine our future relationships with robots.

Performing Robots Conference 2019, Utrecht (23rd-25th May) from TiM on Vimeo.

On the music side, an example of robotic musical innovation can be found at Georgia Tech, where researchers have developed Shimon, a marimba-playing robot who can improvise jazz with human musicians. Shimon is currently being trained to sing and write songs collaboratively with humans.

In robot opera itself, there’s a wide variety of practice out there, featuring both humanoid and non-humanoid robots, singing and performing in a range of different styles. One of the most high-profile of these is Tod Machover’s Death and the Powers (2010), produced at MIT. An iconic work of the modern genre, it featured eight non-humanoid robots alongside a human cast, video, and complex moving stage scenery.

Interestingly, the non-humanoid robots sounded, at times, a lot like humans.

Wade Marynowsky’s Robot Opera (2015), by contrast, featured non-humanoid robots that sounded very non-human, exploring the tropes of electronic music and combining sound, movement, sculpture and interactivity, blurring boundaries between set, character, performers and audience.

In a more traditional staging, a humanoid robot in Gob Squad’s 2015 robot opera My Square Lady was trained to conduct and sing as humans do, in a quest to learn about human emotion.

In the Royal Opera House’s 2015 commission Glare, a human singer played a robot character, who was trying to ‘pass’ as human.

Clearly this opera did not have an actual robot on stage; nevertheless, such scenarios allow for serious exploration of human/robot identities and relationships. Who gets to decide who is human, and how important are relationships in these definitions? On stage, if a ‘real’ robot performs, how are we relating to that as an audience, compared to a human performer pretending to be a robot? These sorts of questions are also being explored extensively in France, where the Ecole Universitaire de Recherche hosts an ongoing international webinar entitled Robots on Stage: Interdisciplinary Convergences between Robotics and Theatre.

Conférence - Robots and theatre : creations, challenges and research prospects from EUR ArTeC on Vimeo.

Robot Opera @ Sussex

At the Centre for Research in Opera and Music Theatre (CROMT) we started a robot opera project in 2017, and we have had an additional two research events, in 2019 and 2021. Our focus is on:
• voice, embodiment and performance
• creative exploration of human robot interaction
• scholarly reflection on robotic otherness and human/robot hybridity, including ethical dimensions of our relationships with robots
• how new technologies create new forms of expression and new forms for opera, including how explorations of robotic vocal virtuosity might feed back into human music making

How would a robot sing if it sang like a robot, rather than emulating a human? What would it sound like if its voice came from its own materiality and physicality? How do we read a robot’s presence on stage? Can a robot perform, or is it, in some sense, always performing? Music software can enable computers to ‘improvise’ musically - if a robot is really just a computer that can move around, how might that movement be used to influence its vocal production? How could embodiment feed into robotic voice? What would robotic vocal virtuosity sound like? On a more philosophical level, as Dr Ron Chrisley has explored, what does it mean for a robot to sing, or alternatively - who is singing, when a robot sings?

2017 Robot Opera: A Mini Symposium

Using commercially available robots already deployed in the School of Engineering and Informatics at Sussex, we launched the robot opera project with a half-day event in June 2017. This consisted of a set of papers delivered by Sussex researchers (Ron Chrisley, Thanos Polymeneas-Liontiris, and Chris Kiefer) followed by performances of two five-minute operas, both directed by Tim Hopkins: The Opposite of Familiarity, composed by Ed Hughes with a libretto by Eleanor Knight, and O, One by Evelyn Ficarra, for two Nao robots. The robots were programmed by Ron Chrisley and the human musicians were Alice Eldridge (cello) and Joe Watson (piano). The talks ranged over subjects such as Learning and Performing with AIs (Kiefer); Operatic Bots (Polymeneas-Liontiris); and What would it be for a robot to sing? (Chrisley), introduced by Evelyn Ficarra. Full documentation of the event, including talks and performances, can be seen here.

Our creative work with the Nao robots (two toddler-sized robots designed and built by Softbank Robotics) revealed some interesting challenges. Our aim was that the primary ‘on stage’ performers would be the two robots, with the human musicians entirely in support in the background. We were determined that the Naos would sing with their own voices - that is, we would not simply record humans and put the voices inside the robots. We would create the ‘singing’ with the same voice the robots used to speak, accessed through the text-to-speech software that comes with the robot operating and animation system. Ron Chrisley and the composers met weekly to try out different techniques, with Ron programming in Python to control the pitch and rhythmic pace of the robots. One interesting hitch was that the robots could not elongate a vowel sound, they could only repeat it - so instead of ‘aaaaaah’ stretched out, they would sing ‘a, a, a, a, a’, pronouncing each letter separately.
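The vowel-repetition workaround lends itself to a small illustration. The sketch below is hypothetical, not the project’s actual code: it assembles a string of repeated syllables wrapped in NAOqi’s inline text-to-speech tags (\vct for pitch, \rspd for speaking rate, both expressed as percentages of the default), which is roughly the mechanism a Python script can use to bend the Nao’s speaking voice toward melody. The function name and parameters are our own invention for this example.

```python
# Hypothetical sketch of "singing" via the Nao's text-to-speech engine.
# Because the engine cannot sustain a vowel, each note simply repeats
# the syllable, each repetition wrapped in a \vct=n\ pitch tag.

def sing_phrase(syllable, pitches, rate=70):
    """Return a tagged TTS string approximating a melody.

    pitches: pitch values as percentages of the default voice pitch,
    one per note, e.g. [100, 112, 126].
    rate: speaking rate as a percentage of the default (slower < 100).
    """
    notes = [f"\\vct={p}\\ {syllable}" for p in pitches]
    return f"\\rspd={rate}\\ " + " ".join(notes)

phrase = sing_phrase("a", [100, 112, 126, 112])
print(phrase)
# On the robot, the string would then be spoken, e.g. via the NAOqi
# ALTextToSpeech service:
#   tts = ALProxy("ALTextToSpeech", robot_ip, 9559)
#   tts.say(phrase)
```

The pitch values here are illustrative; in practice the usable range, and how ‘sung’ the result feels, depends on the voice and engine version installed on the robot.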

Although the Naos do have some speech recognition and facial recognition, and we wanted to give a sense of them acting and relating to each other, in reality they were more like puppets, enacting pre-programmed sequences triggered either by a tap on their heads (where they have a button) or by showing them a symbol they were programmed to recognise. Tim Hopkins’s direction set the robots out on a sort of runway between two strips of seated audience members, with musicians at either end, so that the audience were effectively looking at each other over the heads of the performing robots. There were also robots in the audience, as Sussex has a fleet of Naos and we had asked all the Nao caretakers on campus to bring their robots with them, further blurring identities between robots and humans as both audience and performers. The ’set’ was a simple strip of white paper on the floor, marking out the performance space, on which ‘cave’ drawings showing the evolution of machines (an imagined past for the robots) were drawn, and on which we wrote out words of the libretto in real time during the performance. For further details on the music of O, One by Evelyn Ficarra, see

2019 Robot Opera - What’s Next?

In 2019, we developed the project to focus on human robot interaction, particularly on how robotic vocal virtuosity might inform human vocal practice. This time we used a Pepper robot (Softbank Robotics) and brought in a human opera singer, Loré Lixenberg of Apartment House, together with her collaborator, cellist Anton Lukoszevieze, and a robot cat, because - of course you need a robot cat. Evelyn worked with two programmers, Deepeka Khosla and Kopiga Kugananthaval, to animate the robot and again explored how to push the robot’s ‘natural’ speaking voice into operatic and virtuosic expressive shapes. We commissioned Sam Bilbow to make a ‘robot cello’, so that both our human performers, Anton and Loré, had robotic counterparts. The performance was co-devised by the creative team and again the event concluded with a reflective panel, this time chaired by Nick Till, with Ron Chrisley as first respondent and a Q & A featuring Evelyn and the performers. Taster clip:

Robot Opera 2019 "I Want Your Job" excerpt from Evelyn Ficarra on Vimeo.

2021 Robo_Po /// Robo_Op

This was a two-day event consisting of a robot poetry reading and an operatic performance featuring Cleo, a robot designed and built by Engineered Arts. Both days were streamed live to an international audience and concluded with a reflective panel discussion by artists, academics and industry professionals. Cleo, unlike the Nao robots or Pepper, is a hyper-realistic humanoid robot, with silicone skin and real human hair. Cleo is full size, taller than many humans, and has face recognition and some limited AI-driven features such as the ability to mimic certain movements. She has cameras in her eyes, and you can tap into the video stream of what she is seeing and project it. You can also ventriloquize her voice using a microphone - though we did not do that directly in real time.

Cleo’s hyperrealistic appearance brought us into different dramatic and musical territory with this project, raising questions of the uncanny. This allowed us to explore ideas around human / robot mirroring and hybridity, both visually in the staging, and vocally in the music. Evelyn considered how human voices emerge from our physical materiality of muscle and bone, and wanted to reflect the robot’s physicality in its voice. Therefore in this project she did not limit herself to using the robot’s built-in text-to-speech mechanism. Instead, she visited the robot factory in Falmouth and recorded sounds of robot body movement and robot manufacture. She also met with the opera singer Loré Lixenberg to record her voice, and then chopped up and fragmented both machine and human recordings to create a hybrid vocal world for the robot, eventually also including its own speaking voice in heightened ways. The robot sounds also formed the basis of an electronic soundscape which supported the voices.

We were very pleased in this new phase of the project to bring in a choreographer, Janine Fletcher, from South East Dance. Having learned through working with the Pepper robot about the importance of robotic movement in making the robot voice believable, we knew we needed an expert in movement this time, who could also work with the human performers. Janine Fletcher added a valuable dimension to the performance, particularly in her duet with the robot, and spoke very interestingly afterwards about the complexities of performing with a body that has no awareness of others in the space - Cleo can see faces but does not have sensors on her body to prevent collisions, for example.

See a short documentary on the Robo_Op project


Full documentation of the performance and panel discussion:

A panel discussion on Robo_Op:

You can also see the robot poetry reading which occurred on day one of    Robo_Po /// Robo_Op here:

For further scholarly reflection and context, see this excellent article by Alexander Sigman: Sigman, A. (2020). Robot Opera: Bridging the Anthropocentric and the Mechanized Eccentric. Computer Music Journal, 43(1), 21–37.

For further information about robot opera at Sussex contact Evelyn Ficarra or Chris Kiefer.