Inspired by and adapted from the Timbre Resources webpage, this is a living document of resources to support the development of hacks and prototypes for the AI Performance Playground organised by Sónar+D 2025, whose goal is to explore and deepen the use of machine learning tools, AI, and related technologies from a critical perspective. Some of the tools have been mentioned by Hacklab participants as part of their arsenal. The resources focus mainly on sound, with a section devoted to visuals. The aim of the Hacklab is to learn collectively, through collaboration and the exchange of skills and knowledge, how to critically apply new tools in musical and performative practices and what surrounds them.
Audiostellar | AI-powered experimental sampler
Wekinator | Machine learning for building new musical instruments, gestural game controllers, computer vision or computer listening systems, among others.
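Wekinator receives its inputs as OSC messages over UDP (by default on port 6448, at the address /wek/inputs). As a sketch of what a sender looks like, here is a minimal hand-rolled OSC message with float arguments in Python, using only the standard library; in practice a library such as python-osc does this encoding for you:

```python
import socket
import struct

def osc_message(address: str, *floats: float) -> bytes:
    """Encode a minimal OSC message with float32 arguments."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode())
    msg += pad(("," + "f" * len(floats)).encode())  # type tag string, e.g. ",ff"
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg

# Wekinator's default input: UDP port 6448, address /wek/inputs
packet = osc_message("/wek/inputs", 0.25, 0.75)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 6448))
```

The number of floats per message must match the input count configured in Wekinator's project settings.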
Stable Audio SPA | a web app that generates three-minute songs in 10 seconds.
diffusion.cam | an artificial social network that allows you to use the camera to transform your photograph into text, which gets turned back into an image using an AI diffusion model: img2text2img.
Concatenator (DataMind Audio) | AI-powered audio mosaicing plug-in.
Morpho (Neutone) | a real-time tone-morphing plugin powered by machine learning.
Dicy2 | a package for Max and a plugin for Ableton Live implementing interactive agents using machine-learning to generate musical sequences that can be integrated into musical situations.
Pitchshop | upload your own sound file and let the machine clone and manipulate the pitch.
SP-Tools | A set of machine learning tools that are optimised for low latency and real-time performance. The tools can be used with Sensory Percussion sensors, ordinary drum triggers, or any audio input.
FluCoMa | The FluCoMa software consists of objects for decomposing and describing audio, and for manipulating collections of sonic data by querying, matching, learning and transforming. The complete toolkit is available for Max, SuperCollider and Pure Data, and the decomposition / description tools are available for the command line.
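The querying-and-matching idea behind corpus tools like FluCoMa can be sketched outside Max or SuperCollider as a nearest-neighbour search over audio descriptor vectors. This is a conceptual illustration, not FluCoMa's API; the file names and descriptor values are invented, and real use would normalise each descriptor dimension before measuring distance:

```python
import math

# A toy corpus: each entry maps a sample name to a descriptor vector
# (hypothetical values for [spectral centroid in Hz, loudness in dB])
corpus = {
    "kick.wav":  [120.0, -6.0],
    "snare.wav": [1800.0, -9.0],
    "hat.wav":   [6500.0, -18.0],
}

def nearest(query, corpus):
    """Return the corpus entry whose descriptors are closest to the query."""
    return min(corpus, key=lambda name: math.dist(query, corpus[name]))

print(nearest([1500.0, -8.0], corpus))  # → snare.wav
```

In a real mosaicing patch this lookup would run once per analysis frame of the live input, triggering playback of the matched grain.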
Stable Audio Open | an open source text-to-audio model optimised for generating short audio samples, sound effects and production elements using text prompts.
MusicFX DJ (Google) | an extension of Google Music FX, a generative AI tool. In DJ Mode, you can create constantly evolving and generative soundscapes by inputting text prompts.
Stable Audio Open model available on Hugging Face.
Stable Audio Tools | Generative models for conditional audio generation.
nn~ | At its core, nn~ is a translation layer between Max/MSP or Pure Data and the libtorch C++ interface for deep learning. Alone, nn~ is like an empty shell and requires pretrained models to operate.
neutone.space | A platform where researchers can share real-time AI audio processing models for creators to experiment with transformative AI audio instruments.
vschaos2 | a vintage neural audio synthesis package.
You can find a few RAVE models here.
Shuoyang Zheng/Jasper's RAVE models.
You can find a few vschaos2 models here.
Brunzit | a C++ application for experimenting with flocks of sonic agents through live coding, where agents traverse a data terrain: a 2D visualisation created from a greyscale image or an audio file.
Mob | a SuperCollider program that allows you to live-code a group of agents animated on a 2D surface, where each agent controls a synth.
Fuzz | an Autocoder agent producing Tidal patterns, plus an Atom auto-suggestion package.
MIRLCa | a set of SuperCollider extensions for live coding using music information retrieval (MIR) and machine learning.
SOL | IRCAM's instrumental sound database, from the Studio On Line project.
NSynth | 305,979 musical notes, each with a unique pitch, timbre, and envelope.
Freesound | a collaborative collection of 618,244 free sounds.
OpenMIC-2018 | 20,000 audio clips with annotations of the presence or absence of 20 instrument classes.
URMP | 44 pieces of orchestral recordings with note-level and frame-level annotations.
MIS | single-instrument notes with different playing techniques.
Medley-solos-DB | an instrument recognition dataset, audio extracted from MedleyDB and solosDB.
Using nn~ for Pure Data, RAVE can run in real time on embedded platforms such as Bela or the Raspberry Pi 4.
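On embedded platforms, the practical constraint is the trade-off between the processing block size and latency: larger blocks give the CPU more headroom, but each block delays the output by its own duration. A quick back-of-the-envelope calculation, assuming a 44.1 kHz sample rate (actual usable block sizes depend on the model and the board):

```python
def block_latency_ms(block_size: int, sample_rate: int = 44100) -> float:
    """Milliseconds of audio covered by one processing block."""
    return 1000.0 * block_size / sample_rate

# Latency contributed by a few common block sizes at 44.1 kHz
for n in (512, 2048, 4096):
    print(n, round(block_latency_ms(n), 1), "ms")  # 11.6, 46.4, 92.9 ms
```

If a model cannot keep up at a small block size on a Raspberry Pi, doubling the block size doubles this latency figure but halves how often the model must produce a result.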
Hugging Face | a resourceful collaborative platform with models, datasets, and applications.
Cursor | AI code editor.
MIMIC | a creative coding platform (courses, tutorials, code...) spanning graphics, music, and machine learning, including material based on TensorFlow.js.
- MIMIC examples:
- Classification - pose as input, controlling a classifier
- Regression - pose as input, controlling audio parameters using regression
Stable Diffusion | a deep learning, text-to-image model based on diffusion techniques.
Figma AI | a visualisation tool for idea representation.
ComfyUI | a node-based interface for generating video, images, 3D, and audio with AI.
Luma AI | software for creating realistic 3D images, videos, and game assets from an iPhone or the web.
Luma AI API | Luma's NeRF and meshing models are accessible via their API, offering developers advanced 3D modelling.
[Book] Nao Tokui (2023) "Surfing human creativity with AI — A user’s guide"
[Talk] Lauren Klein (April 24, 2024) "Data Feminism for AI" (1:16:07)
[Article] Lauren Klein, Catherine D'Ignazio (2024) "Data Feminism for AI" (FAccT ’24, June 03–06, 2024, Rio de Janeiro, Brazil). DOI: https://doi.org/10.1145/3630106.3658543
[Article] Fabio Morreale (2021) Where Does the Buck Stop? Ethical and Political Issues with AI in Music Creation. Transactions of the International Society for Music Information Retrieval, 4(1), pp. 105–113. DOI: https://doi.org/10.5334/tismir.86
[Article] Karolina Jawad and Anna Xambó (2023) Feminist HCI and narratives of design semantics in DIY music hardware. Frontiers in Communication, Volume 8 - 2023. DOI: https://doi.org/10.3389/fcomm.2023.1345124
Gerard Roma: Big fry-up | Dialogues Festival, Edinburgh College of Art, on the 7th of July 202
AI and Music - Holly Herndon presents Holly+ feat. Maria Arnal, Tarta Relena and Matthew Dryhurst
Jennifer Walshe: A Late Anthology Live | LIVE +RAIN Film Festival 2023
Nao Tokui: Emergent Rhythm | LIVE +RAIN Film Festival 2023
Albert Barqué-Duran: Slowly fading into data | LIVE +RAIN Film Festival 2023
Anna Xambó: Ceci n'est pas une usine | LIVE +RAIN Film Festival 2023
Performing Critical AI I: Feedback Cell featuring Ollie Bown | Cafe OTO, London, November 27, 2022
Performing Critical AI I: 4 Boxes at Cafe OTO (Anna Xambó) | Cafe OTO, London, November 27, 2022