Live Coding with the Cloud and a Virtual Agent

Date

2021-04-29

ISSN

2220-4806

Type

Conference

Peer reviewed

Yes

Abstract

The use of crowdsourced sounds in live coding can be seen as an example of asynchronous collaboration. It is not uncommon for crowdsourced databases to return unexpected results to the queries submitted by a user. In such a situation, a live coder is likely to require some degree of additional filtering to adapt the results to her/his musical intentions. We refer to these context-dependent decisions as situated musical actions. Here, we present directions for designing a customisable virtual companion to help live coders in their practice. In particular, we introduce a machine learning (ML) model that, based on a set of examples provided by the live coder, filters the crowdsourced sounds retrieved from the Freesound online database at performance time. We evaluated a first illustrative model using objective and subjective measures. We tested a more generic live coding framework in two performances and two workshops, where several ML models were trained and used. We discuss the promising results for ML in education, live coding practices and the design of future NIMEs.
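
For illustration only (this is not the authors' implementation): the example-based filtering described in the abstract can be sketched as a small classifier trained on a few coder-labelled sounds and then applied to descriptors of each sound returned by a query. The descriptor values, identifiers and choice of classifier below are assumptions; in practice the features would come from an audio analysis of the Freesound results.

    # Minimal sketch: filter retrieved sounds with a classifier trained on
    # a handful of coder-provided examples. Feature vectors stand in for
    # audio descriptors (e.g. spectral centroid, duration); all names and
    # values are hypothetical.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Example sounds picked by the live coder, with a keep/discard label.
    example_features = np.array([
        [2500.0, 1.2],   # bright, short  -> keep
        [2300.0, 0.8],   # bright, short  -> keep
        [400.0, 6.0],    # dark, long     -> discard
        [550.0, 5.5],    # dark, long     -> discard
    ])
    example_labels = np.array([1, 1, 0, 0])  # 1 = keep, 0 = discard

    model = KNeighborsClassifier(n_neighbors=3)
    model.fit(example_features, example_labels)

    def filter_sounds(sounds, features):
        """Return only the retrieved sounds the model predicts as 'keep'.

        sounds   -- list of sound records (e.g. ids returned by a query)
        features -- matching array of descriptor vectors, one row per sound
        """
        keep = model.predict(features) == 1
        return [s for s, k in zip(sounds, keep) if k]

    # Hypothetical query results arriving at performance time.
    retrieved_ids = ["snd_17", "snd_42", "snd_99"]
    retrieved_features = np.array([[2600.0, 1.0], [480.0, 5.8], [2200.0, 1.5]])
    print(filter_sounds(retrieved_ids, retrieved_features))  # ['snd_17', 'snd_99']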

Keywords

AI, live coding, virtual agents, MIR

Citation

Xambó, A., Roma, G., Roig, S., and Solaz, E. (2021) Live Coding with the Cloud and a Virtual Agent. International Conference on New Interfaces for Musical Expression, NYU Shanghai, China, June 2021.
