
DESCENDENT, 2021

Original performance conceived and written by: Roberto Alonso Trillo and Peter A C Nelson

Musical Composition: Stylianos Dimou and Roberto Alonso Trillo

Violin performance: Roberto Alonso Trillo

Dance and choreography: Sudhee Liao 

3D Design and Animation: Peter A C Nelson and MetaObjects 

Motion Synthesis System: Chen Jie and Ryan Au 

Interactive Sound System (Max): Stylianos Dimou 

Costume design: Irene Kiriiaka 

Film Director: Vincent Ip 

This project was commissioned and supported by the TRS Grant led by Professor Yike Guo and Professor Johnny Poon

A performance for violin, dance and machine learning, 2022. By Peter A C Nelson, Roberto Alonso Trillo and Chen Jie.

Descendent is a performance conceived for violin and dance, augmented by a sophisticated motion synthesis system. It began as a weekend project between Sudhee Liao, Dr Roberto Alonso Trillo (HKBU Music Department & Augmented Creativity Lab member) and Dr Peter A C Nelson (HKBU Academy of Visual Arts & Augmented Creativity Lab member). It was then integrated into the Theme-based Research project and joined by Dr Chen Jie (HKBU Computer Science).


On a technical level, the project develops several new approaches to music-to-motion synthesis. We recorded a unique motion capture dataset, which we use to synthesise movements in real time. We also created a ‘live’ stage, where the dancer’s touch connects to a musical synthesiser, allowing her to ‘play’ the stage back to the violinist, who functions as the director of the performance.
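As a rough illustration of how the ‘live’ stage could feed the interactive sound system, the sketch below (in Python, using the python-osc library) forwards a single touch reading to a Max patch over OSC. The host, port, address pattern, and pressure range are assumptions made for illustration, not the production setup.

# Hypothetical sketch: forwarding a stage touch-sensor value to a Max patch over OSC.
# Host, port, OSC addresses, and the raw pressure range are assumed, not the actual system.
from pythonosc.udp_client import SimpleUDPClient

MAX_HOST = "127.0.0.1"   # machine running the Max interactive sound patch
MAX_PORT = 7400          # assumed OSC listening port of the patch

client = SimpleUDPClient(MAX_HOST, MAX_PORT)

def on_touch(sensor_id: int, pressure: float) -> None:
    """Map one touch event from the stage surface to a synthesiser parameter."""
    # Normalise an assumed 0-1023 raw pressure reading to 0.0-1.0 for the synth.
    amplitude = max(0.0, min(pressure / 1023.0, 1.0))
    client.send_message(f"/stage/pad/{sensor_id}/amp", amplitude)

# Example: a single touch on pad 3 at roughly half pressure.
on_touch(3, 512.0)

In a setup like this, the Max patch would listen for the incoming OSC messages and route them to synthesis parameters, so each region of the stage behaves like a playable instrument.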


Together, we are striving for a performance that communicates critical philosophical concepts in art and technology in a way that is transparent and understandable for an audience. By using two identical digital avatars to compare the real-time movement of a human dancer with the synthesised movement of an algorithmic system, we encourage the audience to speculate on authorship and human agency. When a human dancer is dancing with a synthesised version of herself and a violinist is playing with his own sounds synthesised back to him, who is leading the performance?


If such a system is then augmented with machine learning, could a performance create the illusion of artificial creativity and agency? Exploring these questions is made possible by the close collaboration within our interdisciplinary team, where we constantly invent new tools and experiment together on how best to use them to communicate with our audience.

