PROJECT Voice technologies (VT), such as biometric identification and emotion recognition, give rise to concerns regarding the surveillance, profiling and tracking of individuals. Voice cloning poses inherent risks, including identity theft, impersonation, and the dissemination of misinformation.
VOICES is an interactive sound installation combining Interactive Music (MaxMSP), Interactive Visuals (Unity) & AI Voice Cloning (RVC/Python), all interacting through a squeezing device (Arduino microcontroller).
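To illustrate the interaction chain, here is a minimal, hypothetical sketch (not the installation's actual code) of how raw squeeze readings from the Arduino might be parsed and normalized before being routed to the sound and visual engines. The serial message format `SQZ:<value>` and the calibration range are assumptions for illustration only.

```python
# Hypothetical sketch: normalizing Arduino squeeze-sensor readings into a
# single 0.0-1.0 control parameter shared by the sound and visual engines.

def parse_reading(line: str) -> int:
    """Parse one raw serial line from the Arduino, e.g. 'SQZ:512'."""
    tag, value = line.strip().split(":")
    if tag != "SQZ":
        raise ValueError(f"unexpected message tag: {tag}")
    return int(value)

def normalize(raw: int, lo: int = 40, hi: int = 980) -> float:
    """Map the 10-bit ADC reading to a 0.0-1.0 squeeze intensity,
    clamping sensor noise outside the (assumed) calibrated range."""
    x = (raw - lo) / (hi - lo)
    return max(0.0, min(1.0, x))

if __name__ == "__main__":
    for line in ["SQZ:40", "SQZ:510", "SQZ:980"]:
        print(round(normalize(parse_reading(line)), 2))
```

In practice such a value would be forwarded to MaxMSP and Unity (e.g. over OSC or serial), but the transport layer is omitted here.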
The work invites participants to co-create a sound-art composition with their voices. Participants are asked to record a secret. In return, they can experience a composition of secrets from different synthetic voices. This immersive experience aims to shed light on the otherwise opaque inner workings of VT.
VOICES premiered at the Ars Electronica Festival (Linz), funded by the European Festivals Fund for Emerging Artists (EFFEA). It was initiated as part of CODE2023 by IMPAKT (Utrecht)/Transmediale (Berlin). The work was also presented at IMPAKT Festival (Utrecht).
CREDITS
Phivos-Angelos Kollias – Interactive Music/MaxMSP, Systems Integration & Team Coordination
Ahnjili ZhuParris – Python Programming/AI Voice Analysis
Mohsen Hazrati – Unity Artist/Visuals
Yu Zhang – Arduino & Set Design
Ephemeron is a living music organism. During each concert, the work is born, evolves, and dies at the end of the performance. The performance space, including the audience, is the “natural environment” of this music organism. The organism is fed by the sound of its environment through microphones, while it feeds sound back into the environment through the speakers. Music organism and environment form an interconnected and inseparable ecosystem.
There is no pre-recorded material at any stage. The work emerges as a music organism from the interactions among space, algorithm and (if any) user-performer. In the original version, the initial material of the composition is the applause from the previous piece.
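The core ecosystemic principle can be sketched in a few lines. This is an illustrative toy model only (the actual Ephemeron algorithm is a Max/MSP patch): a feedback loop that regulates its own gain so the “organism” neither dies out nor explodes; all names and the energy target are assumptions.

```python
# Toy model of an ecosystemic feedback loop: the system 'listens' to the
# room's energy plus its own output, adapts its gain toward a target level,
# and 'expresses' sound back into the room on the next step.

def adapt_gain(gain: float, energy: float, target: float = 0.5,
               rate: float = 0.1) -> float:
    """Nudge the loop gain so the sensed energy homes in on a target level."""
    return max(0.0, gain + rate * (target - energy))

def run_loop(room_energy: list[float], gain: float = 1.0) -> list[float]:
    """Simulate the loop over a series of room-energy measurements.
    The organism's output becomes part of the room sound at the next step."""
    out = []
    fed_back = 0.0
    for e in room_energy:
        total = e + fed_back           # organism 'listens' to room + itself
        gain = adapt_gain(gain, total) # self-regulation keeps it alive
        fed_back = total * gain * 0.5  # organism 'expresses' into the room
        out.append(fed_back)
    return out
```

With a steady room input the loop settles into a bounded, self-sustaining regime rather than runaway feedback, which is the behaviour the piece depends on.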
The recording is a fixed acousmatic version based on the material of Ephemeron. It uses material from two organisms in different environments, with different audiences’ reactions (the De Montfort University concert hall, U.K., and the Kubus of Z.K.M., Germany). The material of these live organisms was taken from particular spatio-temporal situations to construct realities transferable in time and space.
Now, there is no live organism any more. Here, you are in this place, observing the history of another reality. You are experiencing the construction of a new reality surrounding you, the reality of your actual perception.
The recording can be considered a ‘photograph’ of the music organism that existed under particular circumstances. The recording provided is a stereo mix of the original version.
Ephemeron was commissioned by the Z.K.M. institution and saxophonist Pedro Bittencourt. Originally, it was conceived as a real-time electronics “concert work”.
It was initially developed in the concert hall of Z.K.M. Karlsruhe and the Maison des Sciences de l’Homme/Université de Paris 8. It premiered at the Z.K.M. Kubus Concert Hall.
Ephemeron has been presented as a concert piece or installation numerous times worldwide. The algorithm has become an ecosystemic music instrument, the backbone of several projects of Kollias.
PROJECT We interact daily with algorithms that emulate human perception and collective memory. By trying to communicate with us, the algorithms sound, look and behave more and more like us by reflecting our perception and memory back to us.
What if we could use those AI tools to play with the listener’s sense of familiarity through shared cultural signs, tropes, or archetypes? Or what if those AI tools became instruments of manipulation, taking advantage of the spectator’s intimate memories?
We investigate & explore the relationship between collective & individual memory reflected & manipulated through AI: the concept of the “found object” & its algorithmic transformation of meaning.
The network, fed with familiar audiovisual “found objects”, generates a progressive alteration, a dream-like transformation of meaning, an endless change of scale into a disorienting yet familiar dream reality.
On the musical side, an AI feedback network that listens to familiar sound objects generates a continuous sound transformation. On the visual side, a generative adversarial network represents a collective artificial memory and perception. Each time, the sound and image transformations create a personal narrative, a phrase, a gesture for the spectator. The results generate a collective manipulation of nostalgia experienced as a series of short music video screenings or in an exhibition format made from a multi-screen and multi-speaker installation.
AI Music Creativity Conference-Festival (Japan)
Sound-Image Festival (London)
Resilience Festival (Italy)
Passages of Time/A.D. Gallery (University of North Carolina, USA)
PROJECT A series of sonic-gustatory experiences: experimental electroacoustic ASMR video performances. A seemingly mundane cooking performance interacts with an autonomous music algorithm built on a complex feedback network. A celebration of the mundane, a festival of insignificance, contrasted with the meticulous video production of a complex-sounding sound-art performance.
A short-circuit of the gustatory experience with the acousmatic experience: the auditory is feeding the gustatory while the latter is listening back.
The composer performs a supposedly instructional video of cooking an omelette or a pizza. A set of microphones is set up to listen very closely and record every micro-movement of his performance. From preparing the ingredients, kneading the dough or cracking eggs, to cutting vegetables and herbs or tasting the resulting meal – all sounds of the cooking process are fed in real time into a naively intelligent autonomous algorithm, which sonically analyses and reinterprets every sound into a new, extended sonic reality. The performer’s movements tend to be gently slow and meditative, while the interactive sound emerges as a complex reflection. Every sound and gesture is interconnected into a complete sonic-gustatory performance, tickling and teasing the listener’s senses. The spectator-listener can comfortably sit back, observe and listen to the evolution of the sonic-nutritional process while letting it influence their autonomous sensory response.
The overall project includes the following three volumes:
1. How to make an omelette – with interactive sound: a four-minute performance, posted in three parts over three weeks, aimed at an Instagram audience.
2. How to make a pizza – with interactive sound: a one-hour video performance commissioned by and presented as an online premiere event for Ensemble Ipse, New York. Part 2 develops the idea and explores the experience in its extensive form, with a more detailed composition process over its one-hour duration.
3. Re-pizzing: the third part returns to short-form videos, mainly intended for an Instagram audience. Each short video is a remix in terms of music and editing, creating a new series of sensory-packed experiences, compressed in both visuals and sound, and experimenting further with parallel music layering and remixing combinations.
The seemingly mundane cooking performance interacts with a complex sonic feedback network, the autonomously driven music algorithm Ephemeron: an eco-systemic installation whose sonic process creates a dynamic living environment for the performer-listener. Sound processes are created solely from the sounds in the space, all connected into a single sonic entity surrounding the performer-listener and forming a sonic ecosystem of autonomous sonic entities in constant interaction. The resulting work emerges from the interactions among space, software and performer-listener as an ever-changing, living algorithmic ecosystem. This live ecosystem results from a live algorithm with the ability to ‘listen’ through the microphones and ‘express’ itself through the loudspeakers. Through this software, the work organises its own structure; it is able to self-organise and adapt in space.
Inspiration comes from the internet trend of cooking videos, used mainly to entertain by sharing a gustatory experience and connecting with an audience. At the same time, there is a rapport with the interestingly peculiar trend of ASMR internet performances, which aim to tune sympathetically into a sensory phenomenon triggered mainly by sound.
The work treats the theme of insignificance, from its futility to an appreciation of the beauty of things as they are: a silent challenge of closely observing micro-movements, listening closely to the surrounding micro-sounds, and distorting them through the power of an artificial, imagination-like sonic mirror.
Credits
Concept | Music Composition | Algorithmic Development | Performance | Direction & Video Editing: Phivos-Angelos Kollias
A Symphony of Noise is a music VR experience, letting the user experience a fantastic world made and governed by sound.
Immerse yourself in a sensational symphony of everyday sounds, lifted into the unusual through mixing, superimposition and modulation.
Loosely based on Matthew Herbert’s book “The Music”, A Symphony of Noise aims to inspire users to think about the way they understand music. The complete soundscape of the application refrains from the usage of musical instruments and instead works with familiar and common everyday noises. These are complemented with the user’s voice or other auditory input in order to create an individual and personal experience.
The user embarks on a journey through the inner and outer dimensions of the human being – starting with the first heartbeat, they gradually move into the worlds of nature, the man-made, and the supernatural. Visual hints invite the viewer to participate: conscious breathing, talking, singing, and hand movements are all interactions that trigger something special in the digital worlds and leave a personal auditory fingerprint.
All music comes from the outside, created from other minds & intentions, entering our inside through ears & perception. How would it be for music to emerge from your mind, body & intentions? Listening to music directly emerging from processes acknowledging your current states?
What if the external sounding world & the inner perceptual reflection of music were to become one? What if, instead of an external instrument, we had our body expressing itself musically?
We want to explore the possibilities and what-ifs of an everyday experience of ambient intelligence, based primarily on the auditory – enhancing everyday life and supporting personal development and eudaemonia. An everyday experience away from visual interaction, based on biofeedback and aural interactions with an ambient intelligence, where self-generated music is an integral part of the living experience, enhancing life and productivity. It uses techniques of guided meditation, bio-feedback technology and real-time generated music.
Concept of artistic research
My interest in this artistic research is equally philosophical, aesthetic & technological.
On a technological level, I will explore the direct connection of bio-feedback tools & their rapport with interactive sonification algorithms. In our early prototype, we are using bio-feedback tools and mapping them to sound processes as the first step of the technical investigation. Currently, those tools are an EEG (brain sensor), a heart-rate monitor, a breathing sensor & gesture tracking.
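One such mapping could be sketched as follows. This is a hypothetical illustration, not the prototype's actual code: the parameter names, sensor ranges and mapping targets are assumptions chosen only to show the principle of routing bio-signals into a sound process.

```python
# Hypothetical bio-feedback-to-sound mapping: raw sensor streams are scaled
# into control parameters for a (real-time) music engine.

def scale(x: float, in_lo: float, in_hi: float,
          out_lo: float, out_hi: float) -> float:
    """Linearly map x from one range to another, clamped to the output range."""
    t = (x - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def map_biosignals(alpha: float, heart_bpm: float, breath_hz: float) -> dict:
    """Turn sensor values into control parameters for the sonification layer."""
    return {
        # higher (calmer) alpha activity closes the filter: darker texture
        "filter_cutoff_hz": scale(alpha, 0.0, 1.0, 4000.0, 400.0),
        # heart rate drives the musical tempo directly
        "tempo_bpm": scale(heart_bpm, 50.0, 120.0, 50.0, 120.0),
        # breathing rate modulates an amplitude-envelope LFO
        "lfo_rate_hz": scale(breath_hz, 0.1, 0.5, 0.1, 0.5),
    }
```

In the actual prototype such parameters would be streamed continuously to the sound engine; the one-shot dictionary here only shows the mapping step.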
On a deeper level, I am interested in studying internal human states & their direct relation to creating music – in other terms, by inverting an internal process, how to exteriorise the direct creation of music.
The proposed artistic research aims to contribute to the fields of self-organising music & bio-art, and more broadly to interactive music & art. It is the continuation of my music research into interdisciplinary self-organising principles (complexity science, cybernetics, systems theories), equally found in nature, societies & the brain, & how we can use them to create music. This is ongoing research through peer-reviewed publications & a PhD (see list of publications).
I aim to contribute an inspiring new step for the field of self-organising music (technologically, aesthetically & creatively). As an artistic example of cognitive research, it can also help us understand more about human perception & music.
The experimental research will include the continuous development of a prototype that can serve as a testament to the findings & at the same time as an inspiring interactive music artwork.
In addition, a series of open experiments will be conducted at Honig Studios in Berlin, inviting a representative sample of people to test & explore these ideas. At a later stage, a series of presentations in the form of workshops will be organised, and other specialists will be invited.
Pic. 1. – EEG brain-wave sensor, connects as a wireless wearable to the head
Pic. 2. – Brain-wave monitor, reporting in real time on an iPad
Pic. 3. – Gesture control, translates hand gestures into controllable data
PROJECT A small Virtual Reality music experience created as part of the Patchathon hack/jam at the Nordic Game Conference. The work was created within a few days in the Virtual Reality environment of the software Patch.XR, which gives creators the opportunity to program sound and visuals directly in VR.
The project was part of my research activity in music and VR: learning and testing a new Virtual Reality programming app still in development, while interacting directly with the development team to improve the program and carry out user testing on the spot.
Music-Visuals The music VR experience is a study of some of the capabilities of the VR software Patch.XR. I created a grotesque nightmare for the user, in both visuals and music, with a score composed and orchestrated in space. Increasingly over-sized babies turn the cogs of a machine while the sounds control their rotation speed.
We interact daily with algorithms that emulate human perception and collective memory. By trying to communicate with us, the algorithms sound, look and behave more and more like us, reflecting our perception and memory back to us.
What if these AI tools became instruments of manipulation, appealing to the spectator’s sense of familiarity through shared cultural signs, tropes or archetypes?
On the musical side, an AI feedback network that listens to familiar sound objects generates a continuous sound transformation. On the visual side, a generative adversarial network represents a collective artificial memory and perception.
Each time, the sound and image transformations create a personal narrative, a phrase, a gesture for the spectator. The results can be experienced in a multi-screen, multi-speaker installation that generates a collective manipulation of nostalgia.
We have developed a prototype combining sound/image transformation: MaxMSP for sound, CLIP (OpenAI) and VQ-GAN (ImageNet) for image
a series of short audiovisual études in the form of short videos, to document the research and test audience reactions