A Live Coding Session With the Cloud and a Virtual Agent
Abstract
This live coding performance is a collaboration between a human live coder and a virtual agent (VA). MIRLCa is a self-built SuperCollider extension that follows up on the earlier self-built extension MIRLC. The system combines machine learning algorithms with music information retrieval techniques to retrieve crowdsourced sounds from the online database Freesound.org, resulting in a sound-based music style. In this performance, the live coder explores the online database by retrieving only those sounds predicted as “good” by the live coding system's retrieval methods. This approach aims to facilitate serendipity rather than randomness in the retrieval of crowdsourced sounds. The VA has been trained to learn the musical preferences of a live coder in context-dependent decisions, or ‘situated musical actions’. A binary classifier based on a multilayer perceptron (MLP) neural network is used for sound prediction. The themes of legibility, agency and negotiability in performance are explored through the collaboration between the human live coder, the virtual agent live coder and the audience. This project has been funded by the EPSRC HDI Network Plus Grant - Art, Music, and Culture theme.
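To illustrate the core idea described above, the following is a minimal sketch in Python, not the MIRLCa implementation itself (which is a SuperCollider extension): a binary MLP classifier is trained on feature vectors of sounds labelled by the live coder and then used to filter retrieval results, keeping only sounds predicted as “good”. The feature dimensionality, the helper names and the use of scikit-learn are assumptions made for the sake of the example.

```python
# Illustrative sketch only (MIRLCa itself is written in SuperCollider):
# a binary MLP classifier that filters retrieved sounds according to a
# live coder's labelled preferences.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: one feature vector per sound (e.g. summary
# statistics of audio descriptors) and a 0/1 label given by the live coder
# ("bad"/"good"). Both the 13-dim features and the labels are placeholders.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 13))
y_train = rng.integers(0, 2, size=200)

# A small multilayer perceptron, as mentioned in the abstract.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

def filter_good_sounds(candidates):
    """Keep only the candidate sounds predicted as 'good'.

    `candidates` is a list of (sound_id, feature_vector) pairs; this
    structure and the feature extraction step are assumptions for the
    sketch, not part of MIRLCa's API.
    """
    kept = []
    for sound_id, features in candidates:
        if clf.predict(np.asarray(features).reshape(1, -1))[0] == 1:
            kept.append(sound_id)
    return kept

# Example: filter a batch of hypothetical retrieval results.
batch = [(i, rng.normal(size=13)) for i in range(10)]
print(filter_good_sounds(batch))
```

In use, the prediction step would sit between the query to the sound database and playback, so that only sounds classified as matching the performer's preferences reach the performance.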