VOICES

PROJECT
Voice technologies (VT), such as biometric identification and emotion recognition, give rise to concerns regarding the surveillance, profiling and tracking of individuals. Voice cloning poses inherent risks, including identity theft, impersonation, and the dissemination of misinformation.

VOICES is an interactive sound installation combining Interactive Music (MaxMSP), Interactive Visuals (Unity) & AI Voice Cloning (RVC/Python), all interacting through a squeezing device built on an Arduino microcontroller.
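
The exact plumbing of the installation is not documented here; as a minimal sketch of how such a bridge could look, assuming the Arduino prints one squeeze-sensor reading per line over USB serial and that the MaxMSP and Unity patches listen for OSC on local ports (device path, ports and OSC address are all hypothetical):

```python
# Hypothetical bridge: forward Arduino squeeze readings to MaxMSP and Unity via OSC.
import serial                                      # pip install pyserial
from pythonosc.udp_client import SimpleUDPClient   # pip install python-osc

SERIAL_PORT = "/dev/ttyACM0"                       # hypothetical device path
maxmsp = SimpleUDPClient("127.0.0.1", 7400)        # hypothetical MaxMSP OSC port
unity = SimpleUDPClient("127.0.0.1", 7500)         # hypothetical Unity OSC port

with serial.Serial(SERIAL_PORT, 115200, timeout=1) as ser:
    while True:
        line = ser.readline().strip()
        if not line:
            continue
        try:
            pressure = int(line) / 1023.0          # normalise a 10-bit ADC reading to 0..1
        except ValueError:
            continue                               # skip malformed serial lines
        maxmsp.send_message("/squeeze", pressure)  # drives the interactive music
        unity.send_message("/squeeze", pressure)   # drives the interactive visuals
```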

The work invites participants to co-create a sound-art composition with their voices. Participants are asked to record a secret. In return, they can experience a composition of secrets from different synthetic voices. This immersive experience aims to shed light on the otherwise opaque inner workings of VT.

VOICES premiered at the Ars Electronica Festival (Linz), funded by the European Festivals Fund for Emerging Artists (EFFEA). It was initiated as part of CODE2023 by IMPAKT (Utrecht) and Transmediale (Berlin), and was also presented at the IMPAKT Festival (Utrecht).

CREDITS
Phivos-Angelos Kollias – Interactive Music/MaxMSP, Systems integration & Team Coordination
Ahnjili ZhuParris – Python programming/AI voice analysis
Mohsen Hazrati – Unity artist/Visuals
Yu Zhang – Arduino & set design

Nostophiliac AI

PROJECT
We interact daily with algorithms that emulate human perception and collective memory. By trying to communicate with us, the algorithms sound, look and behave more and more like us, reflecting our perception and memory back to us.

What if we could use those AI tools to play with the listener’s sense of familiarity through shared cultural signs, tropes, or archetypes? Or what if those AI tools became instruments of manipulation, exploiting the spectator’s intimacy with their own memories?

Third prize at the Musicworks Electronic Music Composition Contest, Toronto

Duchamp’s AI Transformations

We investigate & explore the relationship between collective & individual memory as reflected & manipulated through AI: the concept of the “found object” & the algorithmic transformation of its meaning.

The network, fed with familiar audiovisual “found objects”, generates a progressive alteration, a dream-like transformation of meaning, an endless change of scale into a disorienting yet familiar dream reality.

Giacometti’s AI Transformations

On the musical side, an AI feedback network that listens to familiar sound objects generates a continuous sound transformation. On the visual side, a generative adversarial network represents a collective artificial memory and perception.

Xenakis’s AI Transformations

Each time, the sound and image transformations create a personal narrative, a phrase, a gesture for the spectator. The results generate a collective manipulation of nostalgia, experienced as a series of short music-video screenings or in exhibition format as a multi-screen, multi-speaker installation.

SCREENINGS/PERFORMANCES

  • AI Music Creativity Conference-Festival (Japan)
  • Sound-Image Festival (London)
  • Resilience Festival (Italy)
  • Passages of Time/A.D. Gallery (University of North Carolina, USA)
  • Technarte Conference (Bilbao, Spain)
  • Provocation Ideas Festival (Toronto, Canada)
  • MusLab Festival (Guayaquil, Ecuador)
  • International Short Film Week (JukeBoxx NewMusic Award – Germany)
  • Filmgalerie Leerer Beutel cinema (Regensburg, Germany)

CREDITS
All music and visuals created by Phivos-Angelos Kollias
Funded by MusikFonds e.V., Berlin

Cooking Music Algorithms

Project
A series of sonic-gustatory experiences: experimental electroacoustic ASMR video performances. A seemingly mundane cooking performance interacts with an autonomous music algorithm built on a complex feedback network. A celebration of the mundane, a festival of insignificance, contrasted by the meticulous video production of a complex-sounding sound-art performance.

A short-circuit of the gustatory experience with the acousmatic experience: the auditory feeds the gustatory while the latter listens back.

The composer performs a supposedly instructional video on cooking an omelette or a pizza. A set of microphones is set up to listen very closely and record every micro-movement of his performance. From preparing the ingredients, kneading the dough or cracking eggs, to cutting vegetables and herbs or tasting the resulting meal – all sounds of the cooking process are fed in real time into a naively intelligent autonomous algorithm, which sonically analyses and reinterprets every sound into a new, extended sonic reality. The performer’s movements are gently slow and meditative, while the interactive sound emerges as a complex reflection. Every sound and gesture is interconnected into a complete sonic-gustatory performance, tickling and teasing the senses of the listener. The spectator-listener can comfortably sit back, observe and listen to the evolution of the sonic-nutritional process while letting it influence her autonomous sensory response.

The overall project includes the following three volumes:   

  1. How to make an omelette – with interactive sound: a four-minute performance, posted in three parts over three weeks, aimed at the audience of Instagram.

  2. How to make a pizza – with interactive sound: a one-hour video performance commissioned and presented as an online premiere event for Ensemble Ipse, New York. Part 2 develops the idea and explores the experience in its extensive form, with a more detailed compositional process over its one-hour duration.

  3. Re-pizzing: the third part returns to short-form videos, mainly intended for the Instagram audience. Each small video is a remix in terms of music and editing, creating a new series of sensory-packed experiences, compressed in both visuals and sound, and experimenting further with parallel music layering and remix combinations.

The seemingly mundane cooking performance interacts with a complex sonic feedback network: the autonomously driven music algorithm Ephemeron. An eco-systemic installation, its sonic process creates a dynamic living environment for the performer-listener. Sound processes are created solely from the sounds in the space, all connected to a single sonic entity surrounding the performer-listener, forming a sonic ecosystem of autonomous sonic entities in constant interaction. As a sound result, the work emerges from the interactions among space, software and performer-listener as an ever-changing living algorithmic ecosystem. The live ecosystem results from a live algorithm with the ability to ‘listen’ through the microphones and ‘express’ itself through the loudspeakers. Through this software, the work is able to organise its structure; it is able to self-organise and adapt in space.
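
Ephemeron itself is realised in MaxMSP; purely as an illustration of the listen/respond loop described above – not the actual algorithm – here is a minimal feedback-ecosystem sketch in Python (delay time and feedback gain are hypothetical):

```python
# Minimal sketch of a listen/respond feedback loop in the spirit of Ephemeron.
# The only sound sources are the sounds in the room: the microphone input is
# recycled through a delay buffer and released back into the same space.
import numpy as np
import sounddevice as sd  # pip install sounddevice

SR = 48000
DELAY = int(0.35 * SR)                  # hypothetical delay time (0.35 s)
buf = np.zeros(DELAY, dtype=np.float32)
pos = 0

def callback(indata, outdata, frames, time, status):
    global pos
    idx = (pos + np.arange(frames)) % DELAY
    recycled = buf[idx]
    # feedback with soft clipping keeps the ecosystem from blowing up
    buf[idx] = np.tanh(indata[:, 0] + 0.6 * recycled)
    outdata[:, 0] = recycled
    pos = (pos + frames) % DELAY

# full-duplex stream: 'listen' through the microphone, 'express' through the speaker
with sd.Stream(samplerate=SR, channels=1, callback=callback):
    sd.sleep(60_000)  # run the ecosystem for one minute
```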

Inspiration comes from the internet trend of cooking videos, mainly used to entertain by sharing a gustatory experience and connecting with an audience. At the same time, there is a rapport with the interestingly peculiar trend of ASMR internet performances, which aim to sympathetically tune into the sensory phenomenon triggered mainly by sound.

The work treats the theme of insignificance, from its futility to an appreciation of the beauty of things as they are: a silent challenge of closely observing micro-movements, listening closely to the surrounding micro-sounds, and distorting them through the power of an artificial, imagination-like sonic mirror.

Credits
Concept | Music Composition | Algorithmic Development | Performance | Direction & Video Editing: Phivos-Angelos Kollias

Cinematography | Camera: Jade Wu

Shot at: Honig Studios, Berlin

Thanks to: Jiannis Sotiropoulos for Honig Studios | Mike Robbins for the feedback

Long form performance (vol.2) commissioned by: Ensemble Ipse | Max Giteck Duykers | Joseph Di Ponio

Music Against Police Brutality

Project
A music work, accompanied by a video edited from footage posted by Twitter users, created as a spontaneous reaction against police brutality in Greece. The result of thoughts and feelings about the tragic incidents happening in the streets of Greece, where democracy and freedom are daily sacrificed in the name of order and security.

“When a human body sends its own antibodies to attack healthy cells, we are talking about symptoms of cancer.
When a human society sends its own defense mechanisms to attack peaceful civilians, we are talking about symptoms of police violence.”

This is part of the musical dialogue against police brutality started and encouraged by fellow musicians Andreas Polyzogopoulos and Spyros Polychronopoulos, which, together with 34 other musicians, ended up as a collective music album: “PONAO – 37 composers against police brutality”.

The album is released on Bandcamp, available for free, with voluntary contributions supporting the victims of police brutality:

https://ponao.bandcamp.com/album/ponao

Interoception

Bio-Feedback & Music

https://vimeo.com/589820167/f34c3ce7b9

Project

All music comes from the outside, created by other minds & intentions, entering our inside through ears & perception. How would it be for music to emerge from your own mind, body & intentions? To listen to music directly emerging from processes that acknowledge your current states?

What if the external sounding world & the inner perceptual reflection of music were to become one? What if, instead of an external instrument, we had our body expressing itself musically?

We want to explore the possibilities and what-ifs of an everyday experience of ambient intelligence, based primarily on the auditory: to enhance everyday life and to support personal development and eudaimonia. An everyday experience away from visual interaction, based on biofeedback and aural interactions with ambient intelligence, where self-generated music is an integral part of the living experience, enhancing life and productivity, using techniques of guided meditation, biofeedback technology and real-time generated music.

Concept of artistic research

My interest in this artistic research is equally philosophical, aesthetic & technological.

On a technological level, I will explore the direct connection of biofeedback tools & their rapport with interactive sonification algorithms. In our early prototype, we are using biofeedback tools and mapping them to sound processes as the first step of the technical investigation. Currently, those tools are an EEG sensor (brain activity), a heart-monitoring sensor, a breathing sensor & gesture tracking.
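
The concrete mappings are still an open research question; as a toy sketch of this first step, assuming the sensor drivers already deliver normalised values and the sound engine (e.g. a MaxMSP patch) listens for OSC – all addresses, ports and scalings are hypothetical:

```python
# Toy sketch: map normalised biofeedback values to sound-process parameters via OSC.
import random
import time
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

sound_engine = SimpleUDPClient("127.0.0.1", 7400)  # hypothetical sound-engine port

def read_sensors():
    """Stand-in for real EEG / heart / breathing / gesture drivers (values in 0..1)."""
    return {
        "eeg_alpha": random.random(),  # relaxation estimate from the EEG
        "heart": random.random(),      # normalised heart rate
        "breath": random.random(),     # breathing depth
    }

while True:
    s = read_sensors()
    # hypothetical mappings: relaxation slows the generative tempo,
    # heart rate brightens the timbre, breathing depth opens the amplitude
    sound_engine.send_message("/tempo", 60 + 60 * (1 - s["eeg_alpha"]))
    sound_engine.send_message("/cutoff", 200 + 4000 * s["heart"])
    sound_engine.send_message("/amp", s["breath"])
    time.sleep(0.05)  # ~20 Hz control rate
```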

On a deeper level, I am interested in studying internal human states & their direct relation to creating music – in other terms, by inverting an internal process, how to exteriorise the direct creation of music.

The proposed artistic research aims to contribute to the fields of self-organising music & bio-art, within the wider fields of interactive music & art. It continues my music research into interdisciplinary self-organising principles (complexity science, cybernetics, systems theories) – principles equally found in nature, societies & the brain – & how we can use them to create music: ongoing research through peer-reviewed publications & a PhD (see list of publications).

I aim to contribute an inspiring new step for the field of self-organising music (technologically, aesthetically & creatively). As an artistic example of cognitive research, it can also help us understand more about human perception & music.

The experimental research will include the continuous development of a prototype that can serve as a testimony of the findings & at the same time as an inspiring interactive music artwork.

In addition, a series of open experiments will be conducted at Honig Studios in Berlin, inviting a representative sample of people to test & explore those ideas. At a later stage, a series of presentations in the form of workshops will be organised, and other specialists will be invited.

Photo documentation

Pic. 1 – EEG brain-wave sensor, worn as a wireless wearable on the head

Pic. 2 – Brain-wave monitor, reporting in real time on an iPad

Pic. 3 – Gesture control, translating hand gestures into controllable data

BabyMachine VR

Project
A small Virtual Reality music experience created as part of the Patchathon hack/jam at the Nordic Game Conference. The work was created within a few days inside the Virtual Reality environment of the software Patch.XR, which gives creators the opportunity to program sound and visuals directly in VR.

The project was part of my research activity in music and VR: learning and testing a new Virtual Reality programming app that is still in development, with direct interaction with the development team to improve the program and do user testing on the spot.

Music-Visuals
The music VR experience is a study of some of the capabilities of the VR software Patch.XR. I created a grotesque nightmare for the user, in both visuals and music, with a score composed and orchestrated in space. Increasingly over-sized babies turn the cogs of a machine while the sounds control their rotation speed.

Nostophiliac AI

AI-generated audiovisual experiences

We interact daily with algorithms that emulate human perception and collective memory. By trying to communicate with us, the algorithms sound, look and behave more and more like us, reflecting our perception and memory back to us.

What if those AI tools became instruments of manipulation, appealing to the spectator’s sense of familiarity through shared cultural signs, tropes or archetypes?

Duchamp’s AI Transformations

On the musical side, an AI feedback network that listens to familiar sound objects generates a continuous sound transformation. On the visual side, a generative adversarial network represents a collective artificial memory and perception.

Giacometti’s AI Transformations

Each time, the sound and image transformations create a personal narrative, a phrase, a gesture for the spectator. The results can be experienced in a multi-screen, multi-speaker installation that generates a collective manipulation of nostalgia.

Xenakis’s AI Transformations

We developed a prototype combining sound and image transformation: MaxMSP for sound, CLIP (OpenAI) and VQ-GAN (ImageNet) for image.
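
The VQ-GAN generator is too involved to reproduce here; as a small sketch of the CLIP half only – scoring how closely generated frames match a text prompt – assuming the standard public ViT-B/32 checkpoint (prompt and file names are hypothetical):

```python
# Sketch of the CLIP half of the prototype: ranking generated frames against a prompt.
import torch
import clip                # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

prompt = clip.tokenize(["a dream-like transformation of a found object"]).to(device)
frames = ["frame_000.png", "frame_001.png", "frame_002.png"]  # hypothetical files

with torch.no_grad():
    text_feat = model.encode_text(prompt)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)
    for path in frames:
        image = preprocess(Image.open(path)).unsqueeze(0).to(device)
        img_feat = model.encode_image(image)
        img_feat /= img_feat.norm(dim=-1, keepdim=True)
        score = (img_feat @ text_feat.T).item()  # cosine similarity
        print(f"{path}: {score:.3f}")
```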

Xenakis’s AI Transformations

a series of short audiovisual études in the form of short videos, documenting the research and testing audience reactions

CREDITS

Music and visuals created by Phivos-Angelos Kollias
Funded by MusikFonds e.V., Berlin

Ephemeron: Cooking Music Algorithms

a sonic-gustatory interactive music performance

Project

A series of sonic-gustatory experiences: experimental electroacoustic ASMR video performances. A seemingly mundane cooking performance interacts with an autonomous music algorithm built on a complex feedback network. A celebration of the mundane, a “festival of insignificance”, contrasted by the meticulous video production of a complex-sounding sound-art performance.

A short-circuit of the gustatory experience with the acousmatic experience: the auditory feeds the gustatory while the latter listens back.

The composer performs a supposedly instructional video on preparing an omelette or a pizza. A set of microphones is positioned to listen very closely and record every micro-movement of his performance. From preparing the ingredients, kneading the dough or cracking eggs, to cutting vegetables and herbs or tasting the finished dish – all sounds of the cooking process are fed in real time into a naively intelligent autonomous algorithm, which analyses every sound and reinterprets it into a new, extended sonic reality. The performer’s movements are slow and meditative, while the interactive sound emerges as a complex reflection. Every sound and gesture is connected into a complete sonic-gustatory performance that tickles and teases the listener’s senses. The spectator-listener can comfortably sit back, observe and listen to the evolution of the sonic-nutritional process while letting it influence her autonomous sensory response.

Part of the project was commissioned by Ensemble Ipse, New York, and premiered online as part of its annual programme. A compressed version was presented at the online festival of Expresiones Contemporáneas in Mexico. The project was also presented physically at the Sound-Image Festival (London).

Credits
Concept | Music Composition | Algorithmic Development | Performance | Direction & Video Editing: Phivos-Angelos Kollias

Cinematography | Camera: Jade Wu

Shot at: Honig Studios, Berlin

Thanks to: Jiannis Sotiropoulos for Honig Studios | Mike Robbins for the feedback

Long-form performance (vol. 2) commissioned by: Ensemble Ipse | Max Giteck Duykers | Joseph Di Ponio