Echorroes: Reflections on Tchaikovsky is a human-centered approach to Artificial Intelligence through interactive play with classical music, for children ages 1-3.
Created by the interdisciplinary team Echorroes and commissioned by the Greek National Opera, the project establishes a playful dialogue between AI and Tchaikovsky, enabling children to co-create and transform his compositions.
At its core, a real-time algorithm processes each movement and sound, generating instant musical responses. Children play with custom cylindrical instruments fitted with wireless microphones and motion sensors, while a costume and set woven with tactile materials—and complemented by dynamic lighting—expand the stage into an immersive soundscape.
Structured around selected Tchaikovsky themes, children reconstruct music through playful exploration without verbal instruction. The facilitator minimally intervenes, providing a safe environment for free expression through interactive sound play.
“Reflections on Tchaikovsky” is part of Echorroes’ greater vision to eradicate music illiteracy by giving everyone the tools to appreciate and create music through playful, experiential learning.
Naphántasto is an interplay between performing gesture, sound sculpture, and the listener’s personal illusions. A shared space where composition, improvisation, and nonverbal storytelling coexist.
Sometimes the hands move through the air, playing an invisible instrument, with sounds leaping out of each micro-movement. At other times the hands reshape a malleable sonic sculpture, carving, stretching, or twisting the details of a precomposed phrase.
A sound may be born in the very moment you hear it, or it may begin as a phrase composed in the past and then be resculpted live.
Gestures become music, music becomes gesture, and meaning hovers between the movements of the hands, the sounds themselves, and the listener’s perception.
PROJECT OVERVIEW
Naphántasto explores the ground between live electronics, acousmatic listening, and perceptual illusion. It uses a real-time gesture-controlled set of instruments combined with pre-constructed musical fragments, blending malleable noise textures, structured composition, and gestural narratives.
Taking inspiration from the craft of mentalists (mental magic), the work plays with hidden cause and perceived effect, inducing musical associations within the audience’s imagination.
The performer, using bare hands, enacts gesture-controlled instruments while in parallel recomposing fragments of their own music live, modulating layers, modifying density, and altering texture through a process of gestural orchestration. Gesture-controlled modulations suggest direct sonic causality.
Even pre-composed materials remain flexible, with chosen layers responsive to real-time gestures.
The performance shifts constantly between three personas:
The reckless noise sculptor (wild, messy, indulgent sound exploration)
The gourmet composer (crafting detailed layered structures)
The emotional storyteller (an empathic approach, invoking direct connection)
The aesthetic axis of the project is an amalgam of orchestral, cinematic, ambient, and cyberpunk-leaning synth-wave.
The core inspiration behind this soundtrack is the tension between precision and abrasion. I wanted the music to feel engineered, but never sterile.
My writing method is iterative and pragmatic: I build by qualities (pulse, tension, movement), assemble and “taste” the result, and wait for the moment where the mist becomes a concrete idea.
Mixing and post-production were a dialogue with Charis Karantzas, one that informed parts of the music: clarifying musical ideas or challenging compositional intentions to make them more robust.
THE GAME
The Lost Glitches is a digital card game set in a virtual world made of merged collective memories.
It is a game in development by Mimunga, created with the team of Honig Studios.
Interactive music installation for ten computers and a hand-tracking sensor. Commissioned by DELL Computers for the IFA Berlin International Exhibition.
The work is based on an interactive music composition created to be performed by ten computers dispersed in space, recreating the setting of a classical orchestra.
The computers were arranged in three rows, each row representing a family of classical instruments: woodwinds, brass, and strings. Each computer performs a subgroup of instruments within its family, while a visualisation follows the sound.
With her gestures, the listener/user controlled different combinations of the instruments through a gesture-tracking device (Leap Motion). By moving up-down, left-right, or front-back, she could conduct the orchestration in real time.
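The three-axis conducting idea above can be sketched in code. This is an illustrative reconstruction only, not the installation’s actual software (which the text does not detail); the axis assignments and the `conduct` function are assumptions made for demonstration.

```python
# Hypothetical sketch: a normalised 3-D hand position (as a Leap Motion
# might report, rescaled to [0, 1]) is mapped to mix levels for the
# three instrument families of the virtual orchestra.

def conduct(x: float, y: float, z: float) -> dict:
    """Map a hand position in [0, 1]^3 to family mix levels.

    x (left-right)  -> balance between woodwinds and brass
    y (up-down)     -> overall dynamic level
    z (front-back)  -> presence of the strings
    """
    clamp = lambda v: max(0.0, min(1.0, v))
    x, y, z = clamp(x), clamp(y), clamp(z)
    return {
        "woodwinds": round((1.0 - x) * y, 3),
        "brass":     round(x * y, 3),
        "strings":   round(z * y, 3),
    }

# A centred hand at medium height blends all three families equally.
levels = conduct(0.5, 0.5, 0.5)
```

In a real setup these levels would drive the gains of the ten computers’ synthesis patches; here they are simply returned as a dictionary.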
Project A non-violent stealth video game in which a young boy tries to escape and find his mother. A tribute to Sergio Leone’s westerns and the music of Ennio Morricone.
Music The music for the multi-award-winning video game “El Hijo” is an interactive orchestral score that changes dynamically throughout the game. Thematically, the score engages in a musical dialogue with Ennio Morricone’s iconic work on Spaghetti Westerns, to whom the music is dedicated. It revisits the Wild West settings of the Spaghetti Western genre in a non-violent context.
The music interactively leverages the player’s movements within the game to influence the symphonic orchestration. As the central game character, El Hijo, moves around various environments such as a dark monastery, a dusty desert, or a bustling town, different aspects of the orchestration are revealed, thus embodying the diversity of the journey experienced by the young boy.
Space not only unfolds the narrative journey of the main character, El Hijo, but also expands the orchestral composition: a dynamic orchestration in space, guided by the user’s journey as El Hijo. A turn at a corner might reveal an angry monk, underscored by a slide guitar; a threatening cowboy might approach, accompanied by a crescendo from a string trio. An ostinato from the timpani and the double bass punctuates the overwhelming ride on a cart, while moving in the darkness is met with the sonorous tones of low brass instruments.
This composition represents the culmination of my previous musical research into the expressive nature of physical space, exploring how space can play an active creative role equal to other musical dimensions, such as melody and harmony. I designed the musical interactions specifically for this game and they were implemented by programmer Stephan Schüritz.
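As a rough illustration of this kind of space-driven orchestration (the actual implementation was written by Stephan Schüritz and is not reproduced here), one might crossfade orchestral stems according to the zone the character occupies. The zone names, layer lists, and crossfade step below are hypothetical:

```python
# Illustrative sketch, not the El Hijo implementation: orchestral stems
# are revealed or hidden depending on the player's current zone,
# crossfading so the score changes smoothly as the character moves.

ZONE_LAYERS = {            # hypothetical zone -> active stems
    "monastery": {"low_brass", "strings"},
    "desert":    {"slide_guitar", "strings"},
    "town":      {"strings", "timpani", "double_bass"},
}
ALL_LAYERS = {"low_brass", "strings", "slide_guitar", "timpani", "double_bass"}

def target_mix(zone: str) -> dict:
    """Return the target gain (0 or 1) per orchestral stem for a zone."""
    active = ZONE_LAYERS.get(zone, set())
    return {layer: 1.0 if layer in active else 0.0 for layer in ALL_LAYERS}

def crossfade(current: dict, zone: str, step: float = 0.1) -> dict:
    """Move each stem's gain a small step toward the zone's target."""
    target = target_mix(zone)
    return {
        layer: round(current[layer] + step * (target[layer] - current[layer]), 4)
        for layer in ALL_LAYERS
    }
```

Called once per game tick, `crossfade` would slide each stem toward full or zero volume rather than switching layers abruptly.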
3rd prize at Iannis Xenakis International Electronic Music Competition 2025
PROGRAM NOTES
Everything you hear in this piece tonight is not real. A sonic kaleidoscope unfolds before your ears, made of synthetic reflections. Sound after sound is extracted from the human collective memory. Composed entirely of generative AI fragments—hundreds of them—this piece uses an algorithm that has hoovered up the music of every artist ever recorded, non-consensually incorporating it into an artificially constructed representation of collective memory. Fed with fragments of my own music, this algorithm spits out a conjured stream of sound drawn from the codified collective mind.
But do not be fooled.
This is not Music.
This is Fake.
ADDITIONAL INFO
AI-Generated Fragments as Synthetic Reflections of Collective Memory
“Fake” (2024) is a multi-channel electroacoustic non-music piece created entirely from AI-generated fragments. By feeding a generative AI algorithm segments of the composer’s electroacoustic composition “Cosmodemonia” (2022), the work explores artificial intelligence as a vessel for representing collective memory.
The resulting composition presents a sonic kaleidoscope of synthetic reflections extracted from human collective memory, questioning notions of musical authenticity.
The compositional methodology combines the generation, collection, and organization of AI-generated fragments with their spatial distribution.
The work reexamines the concept of what is “real” in music; it is an acousmatic narrative in which each element may be a fragment of the past or a simulacrum of the present.
The listener is invited to navigate—through perception—a constructed sonic kaleidoscope of synthetic reflections drawn from the human collective memory, composed entirely of over 300 generative AI fragments. All of them are generative reflections on the composer’s work, ‘Cosmodemonia’.
Whether or not it qualifies as music, it is a reflective commentary on the deep learning representation of collective memory, and the act of attributing musical meaning to organised sound.
Spatial counterpoint is developed through spatialization of the generative layers. The structure creates multiple, simultaneous spatial trajectories—each representing a discrete fragment of collective memory, a distinct presence in the overall soundscape.
A sonic battle of ideas unfolds between structured, stable sections and bursts of chaotic energy—emerging from interconnected chains of thought within the latent space of AI memories—a reimagining of traditional counterpoint in a multidimensional synthetic sonic space. Spatial choreography guides, misdirects and then redirects the listener’s attention—up, down, across.
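One simple way such spatial trajectories could be realised, sketched here purely as an assumption (the speaker count, ring layout, and cosine panning law are illustrative, not taken from the piece), is to give each generative layer its own circular path around a loudspeaker ring:

```python
# Hypothetical sketch: each generative layer follows its own circular
# trajectory around a ring of loudspeakers, with per-speaker gains
# computed by a simple cosine amplitude-panning law.
import math

def ring_gains(angle: float, n_speakers: int = 8) -> list:
    """Amplitude-pan a source at `angle` (radians) across a speaker ring."""
    gains = []
    for i in range(n_speakers):
        speaker_angle = 2 * math.pi * i / n_speakers
        # Cosine of the angular distance, floored at zero so speakers
        # on the far side of the ring stay silent.
        diff = math.cos(angle - speaker_angle)
        gains.append(round(max(0.0, diff), 3))
    return gains

def trajectory(t: float, layer_index: int, speed: float = 0.25) -> list:
    """Each layer circles the ring at the same speed but its own phase."""
    angle = 2 * math.pi * (speed * t + layer_index / 4.0)
    return ring_gains(angle)
```

Running several layers through `trajectory` with different `layer_index` values yields the simultaneous, independent spatial paths the text describes.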
Rituale Ex Machina is a technological ritual merging nature and AI-tech environments through a multi-sensory experience.
Once upon a time, we were surrounded by trees, animals, and rivers. They provided fruits, flesh, and water, but they could poison us, eat us, or drown us. We gave them anthropomorphic characteristics, built myths around them, and offered sacrifices to tame and understand them mentally. They were the Unknown yet beneficial. Today, every river and sea is mapped, every plant indexed, every animal classified.
Now, we are surrounded by machines running AI algorithms that emulate thinking and perception. Once again, we give them anthropomorphic characteristics (like intelligence and life) and offer daily sacrifices with our clicks and likes.
This is a new technological ritual.
Inside a wooden barn, a hybrid techno-forest was created, with smartphones hanging from trees, illuminated by dynamic lighting. Participants’ sounds were captured by suspended microphones, and their every touch by the hanging phones, transforming the sonic space into a live, interactive sound-art experience.
PROJECT Voice technologies (VT), such as biometric identification and emotion recognition, give rise to concerns regarding the surveillance, profiling and tracking of individuals. Voice cloning poses inherent risks, including identity theft, impersonation, and the dissemination of misinformation.
VOICES is an interactive sound installation, combining Interactive Music (MaxMSP), Interactive Visuals (Unity) & AI Voice Cloning (RVC/Python) interacting through a squeezing device (Arduino micro-controller).
Inside the cubicle of the VOICES installation (Photo: Pieter Kers)
The work invites participants to co-create a sound-art composition with their voices. Participants are asked to record a secret. In return, they can experience a composition of secrets from different synthetic voices. This immersive experience aims to shed light on the otherwise opaque inner workings of VT.
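As a hedged illustration of how the squeezing device might map onto the installation’s behaviour (the threshold, value range, and action names below are hypothetical, not VOICES’ actual logic), a 10-bit analog reading such as an Arduino produces could be interpreted like this:

```python
# Hypothetical sketch: a squeeze-sensor reading crosses a threshold to
# start recording a secret, and the squeeze intensity scales how many
# synthetic voices answer back. All values here are assumptions.

def interpret_squeeze(raw: int, threshold: int = 300, max_raw: int = 1023) -> dict:
    """Map a 10-bit analog reading (0-1023) to installation actions."""
    squeezing = raw >= threshold
    intensity = 0.0
    if squeezing:
        intensity = (raw - threshold) / (max_raw - threshold)
    return {
        "record_secret": squeezing,
        # 1 to 5 synthetic voices while squeezing, none when idle.
        "voice_count": (1 + int(intensity * 4)) if squeezing else 0,
    }
```

In the real system such a mapping would sit between the Arduino micro-controller and the MaxMSP/Unity layers, most likely over serial or OSC.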
VOICES was premiered at Ars Electronica Festival (Linz), funded by European Festivals Fund for Emerging Artists (EFFEA). It was initiated as part of CODE2023 by IMPAKT (Utrecht)/Transmediale (Berlin). The work was also presented at IMPAKT Festival (Utrecht).
CREDITS
Phivos-Angelos Kollias – Interactive Music/MaxMSP, Systems Integration & Team Coordination
Ahnjili ZhuParris – Python Programming/AI Voice Analysis
Mohsen Hazrati – Unity Artist/Visuals
Yu Zhang – Arduino & Set Design
PROJECT We interact daily with algorithms that emulate human perception and collective memory. By trying to communicate with us, the algorithms sound, look and behave more and more like us by reflecting our perception and memory back to us.
What if we could play, through those AI tools, with the listener’s sense of familiarity by using shared cultural signs, tropes, or archetypes? Or what if those AI tools became instruments of manipulation, taking advantage of the spectator’s intimate memories?
Third prize at the Musicworks Electronic Music Composition Contest, Toronto
Duchamp’s AI Transformations
We investigate & explore the relationship between collective & individual memory reflected & manipulated through AI: the concept of the “found object” & its algorithmic transformation of meaning.
The network, fed with familiar audiovisual “found objects”, generates a progressive alteration, a dream-like transformation of meaning, an endless change of scale to a disorienting and familiar dream reality.
Giacometti’s AI Transformations
On the musical side, an AI feedback network that listens to familiar sound objects generates a continuous sound transformation. On the visual side, a generative adversarial network represents a collective artificial memory and perception.
Xenakis’s AI Transformations
Each time, the sound and image transformations create a personal narrative, a phrase, a gesture for the spectator. The results generate a collective manipulation of nostalgia experienced as a series of short music video screenings or in an exhibition format made from a multi-screen and multi-speaker installation.
Project A series of sonic-gustatory experiences: experimental electroacoustic ASMR video performances. A seemingly mundane cooking performance interacts with an autonomous music algorithm built on a complex feedback network. A celebration of the mundane, a festival of insignificance, contrasted with the meticulous video production of a complex-sounding sound-art performance.
A short-circuit of the gustatory experience with the acousmatic experience: the auditory is feeding the gustatory while the latter is listening back.
The composer performs a supposedly instructional video of cooking an omelette or a pizza. A set of microphones is set up to listen very closely and record every micro-movement of his performance. From preparing the ingredients, kneading the dough, or cracking eggs, to cutting vegetables and herbs or tasting the resulting meal, all sounds of the cooking process are fed in real time into a naively intelligent autonomous algorithm, which sonically analyses and reinterprets every sound into a new, extended sonic reality. The performer’s movements tend to be gently slow and meditative, while the interactive sound emerges as a complex reflection. Every sound and gesture is interconnected in a complete sonic-gustatory performance, tickling and teasing the senses of the auditor. The spectator-listener can comfortably sit back, observe, and listen to the evolution of the sonic-nutritional process while letting it influence her autonomous sensory response.
The overall project includes the following three volumes:
1. How to make an omelette – with interactive sound: a four-minute performance, posted in three parts over three weeks, aimed at an Instagram audience.
2. How to make a pizza – with interactive sound: a one-hour video performance commissioned and presented as an online premiere event for Ensemble Ipse, New York. Part 2 develops the idea and explores the experience in its extensive form, with a more detailed compositional process over its one-hour duration.
3. Re-pizzing: the third part returns to short-form videos, mainly intended for an Instagram audience. Each small video is a remix in terms of music and editing, creating a new series of sensory-packed experiences compressed in both visuals and sound, and experimenting further with parallel music-layering and remixing combinations.
The seemingly mundane cooking performance interacts with a complex sonic feedback network: Ephemeron, an autonomously driven music algorithm. As an eco-systemic installation, its sonic process creates a dynamic living environment for the performer-listener. Sound processes arise solely from the sounds in the space, all connected into a single sonic entity that surrounds the performer-listener, forming a sonic ecosystem of autonomous, constantly interacting sonic entities. The work emerges from the interactions among space, software, and performer-listener as an ever-changing living algorithmic ecosystem. This live ecosystem results from a live algorithm with the ability to ‘listen’ through the microphones and ‘express’ itself through the loudspeakers. Through this software, the work organises its own structure; it is able to self-organise and adapt in space.
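The self-regulating loop described here can be suggested by a deliberately oversimplified toy model: an agent that adapts its output gain to the measured level of the room. This is an assumption-laden sketch, not the actual Ephemeron algorithm, which involves far richer analysis and processing.

```python
# Toy sketch of the eco-systemic idea: the agent 'listens' to the room
# level, raises its gain when the space is quiet and withdraws when it
# is loud, so the feedback loop stays alive without runaway escalation.

class SonicAgent:
    def __init__(self, target_level: float = 0.5, rate: float = 0.2):
        self.gain = 0.5          # current output gain, 0..1
        self.target = target_level
        self.rate = rate

    def listen_and_respond(self, room_level: float) -> float:
        """Nudge the gain toward keeping the room at the target level."""
        error = self.target - room_level
        self.gain = max(0.0, min(1.0, self.gain + self.rate * error))
        return round(self.gain, 4)

agent = SonicAgent()
# A quiet room draws the agent out; a loud room pushes it to withdraw.
quiet = agent.listen_and_respond(0.1)   # gain rises
loud = agent.listen_and_respond(0.9)    # gain falls back
```

Several such agents listening to one another through the air would already exhibit the mutual adaptation the description calls a sonic ecosystem.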
Inspiration comes from the internet trend of cooking videos, which entertain mainly by sharing a gustatory experience and connecting with an audience. At the same time, the work is in rapport with the interestingly peculiar trend of ASMR internet performances, which aim to tune sympathetically into the sensory phenomenon triggered mainly by sound.
The work treats the theme of insignificance, from its futility to an appreciation of the beauty of things as they are. It is a silent challenge of closely observing micro-movements, listening closely to the surrounding microsounds, and distorting them through the power of an artificial, imagination-like sonic mirror.
Credits Concept | Music Composition | Algorithmic Development | Performance | Direction & Video Editing: Phivos-Angelos Kollias