PORTRAIT XO

INSIGHT 002

Few artists can claim to be so demonstrably ahead of the cultural curve as Portrait XO. She was experimenting with AI to process her own vocals and generate a wide variety of sonic and visual outputs long before anyone had heard of ChatGPT, and she continues to push forward with inventive multimedia initiatives and curations. Recognized as a visionary, she has collected numerous awards and recently presented at this year’s Sónar+D. We caught up with her on the occasion of her latest full-length, WIRE, out now.

 

MFA: How do you generally start a track? What's your approach to those first moments of a blank project?

Portrait XO: This is a hard question because I don't have a set formula for how I translate inspirations as they happen. If I'm in the mood to write a song, lately my favourite go-to has been pulling up a random audio clip from ten hours' worth of AI-generated audio that Dadabots made from their custom SampleRNN model trained on my voice. Hearing my own voice sing an unexpected lyric and melody has been the most fascinating journey for me with AI. If I'm in the mood to create something more instrumental or experimental, I usually like making a visual and then scoring to it. I love sonifying visual textures.
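As a purely illustrative sketch of that "random seed of inspiration" habit (the folder name and clip format here are hypothetical, not part of her actual setup), the Python snippet below picks one clip at random from a directory of generated audio and suggests a short excerpt to pull into a DAW:

import random
import wave
from pathlib import Path

GENERATED_DIR = Path("samplernn_vocal_output")  # hypothetical folder of rendered AI clips
EXCERPT_SECONDS = 8

def pick_random_excerpt(directory: Path, excerpt_seconds: int) -> tuple[Path, float]:
    """Choose a random WAV file and a random start time for a short excerpt."""
    clips = sorted(directory.glob("*.wav"))
    if not clips:
        raise FileNotFoundError(f"No WAV clips found in {directory}")
    clip = random.choice(clips)
    with wave.open(str(clip), "rb") as wav:
        duration = wav.getnframes() / wav.getframerate()
    start = random.uniform(0.0, max(0.0, duration - excerpt_seconds))
    return clip, start

if __name__ == "__main__":
    clip, start = pick_random_excerpt(GENERATED_DIR, EXCERPT_SECONDS)
    print(f"Listen to {clip.name} from {start:.1f}s for {EXCERPT_SECONDS}s")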

That’s super interesting to hear how you integrate visuals into your production work. So beyond the generated material from Dadabots, how else does AI factor into your creative process?

I love using different AI models to make different types of music and visuals. For example, I'll take a finished track and use the audio as input for one AI model to generate animated, audio-reactive AI visuals, then take that video output, upload it to another AI model, and engineer text prompts to transform the first video into different textures and visuals that support the story I'm trying to tell. Sometimes the visuals are more abstract, but I've loved this two-step process for creating all the 4K visuals I use when performing my album.
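To make the data flow concrete, here is a schematic sketch of that two-step pipeline; the function names and file paths are hypothetical stand-ins for whichever AI tools are actually used, not a real API:

def render_audio_reactive_video(track_path: str) -> str:
    # Step 1 (hypothetical stand-in): an audio-to-visual model renders
    # animated, audio-reactive footage from a finished track.
    return track_path.replace(".wav", "_reactive.mp4")

def restyle_video(video_path: str, prompt: str) -> str:
    # Step 2 (hypothetical stand-in): a video-to-video model restyles the
    # footage according to an engineered text prompt.
    print(f"Restyling {video_path} with prompt: {prompt!r}")
    return video_path.replace(".mp4", "_restyled.mp4")

def two_step_visuals(track_path: str, prompt: str) -> str:
    raw_video = render_audio_reactive_video(track_path)  # audio in, video out
    return restyle_video(raw_video, prompt)              # video + prompt in, restyled video out

final_video = two_step_visuals("finished_track.wav", "molten glass textures, slow dissolve")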

Hearing my own voice sing an unexpected lyric and melody has been the most fascinating journey for me with AI.

I think AI is opening artists up to co-create with these technologies and make the most intimate expressions of human-machine collaboration. When there's an opportunity for it, I love making interactive AI audio-visual installations. My most recent installation, COLLECTIVE VOICE ID, was part of Refraction Festival’s NYC exhibition at Zerospace. It integrated the MFA Note Composer on the generative audio side, Dubler by Vochlea, whose AI calibration software detects vowels from their proprietary microphone, and Ebosuite to map those vowels onto the visuals dynamically. This was the first time I got to create a workflow that allowed people to live-mint a few seconds of an animated, generative audio-visual sonic portrait from their own input. My goal with this installation was to give people sonified digital-collectible portraits of themselves as a reminder to celebrate their most unique sonic identity: the voice.
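The vowel-to-visual idea at the heart of the installation can be illustrated with a small sketch. The real piece runs on Dubler's vowel detection and Ebosuite in a live setup, so the vowel labels, visual parameters, and mapping below are all assumptions made for illustration only:

from dataclasses import dataclass

@dataclass
class VisualParams:
    hue: float         # position on the colour wheel, 0.0-1.0
    turbulence: float  # how strongly the texture is distorted
    scale: float       # relative size of the animated form

# Hypothetical mapping from a detected vowel class to visual parameters.
VOWEL_TO_VISUALS = {
    "a": VisualParams(hue=0.02, turbulence=0.8, scale=1.4),
    "e": VisualParams(hue=0.18, turbulence=0.5, scale=1.1),
    "i": VisualParams(hue=0.55, turbulence=0.2, scale=0.8),
    "o": VisualParams(hue=0.72, turbulence=0.6, scale=1.6),
    "u": VisualParams(hue=0.88, turbulence=0.9, scale=1.2),
}

def frame_params(detected_vowels: list[str]) -> list[VisualParams]:
    """Turn a short history of detected vowels into per-frame visual parameters."""
    return [VOWEL_TO_VISUALS[v] for v in detected_vowels if v in VOWEL_TO_VISUALS]

if __name__ == "__main__":
    # e.g. a few seconds of a visitor's voice, reduced to one vowel class per analysis frame
    print(frame_params(["a", "a", "o", "i", "u"]))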

That sounds like a very intricate and impressive project. So what happens when you hit a wall? What are your strategies for getting unstuck?

My usual go-to strategy would be to look up new plug-ins, instruments, and interfaces, and hunt for strange new sounds from old field recordings — or look for new samples on Splice. Sometimes I'll just play around with manipulating audio and create a collage of sounds I find interesting. But usually inspiration hits me at times when I'm not thinking about making music — going for long walks, days in nature, bike rides, listening to interviews with interesting people, or just engaging with a lot of different types of people.

Over the last few years, my pool of friends has expanded to include some really interesting scientists — I really love the way scientists describe curiosities and doubts about life. Their scientific approach to doubt really struck me, and it has helped me review every moment I feel stuck or doubtful in a more granular and productive way. Getting more detailed with whatever I feel challenged by has helped me create more detailed actions to try new things, and now I rarely feel stuck. On the contrary, I now have the opposite problem of too many ideas and inspirations, so my list of visions I'd like to bring to life is overflowing. I never imagined I'd have this problem of not feeling like I'll have enough time in this lifetime to make everything I dream.

When I can't express myself through words or visuals, I always turn to sound. Music is the space that translates complex emotions that are maybe too difficult to articulate or don't have words to describe them through any other medium.

That’s fascinating. So more on the music production side — how do you know when a track is finished? Do you have a system for this you can share?

This is really tough to answer. I need deadlines to set a definition of “finished” for me. I can create a ton of new work and rarely feel like anything is truly finished. I try my best to stay sane by sharing my works at different stages: first, as soon as I have an initial demo capturing the seed of inspiration, I share it with another artist friend I look up to for feedback; second, as soon as I feel like I've achieved 80% of a new production and mix, I send it to Michele Balduzzi, one of the most incredible sound designers and producers I know, who has a surgical ear and can give me really constructive feedback.

I also journal and meditate regularly to check in with myself. My journaling process when I'm trying to finish something is to create my own list of goals I'm trying to achieve. I stay focused on growing and learning, and that's helped tame my inner childish tantrums that sometimes kick up when I don't feel like something sounds as perfect as I'd like. If I’m really not satisfied with something, I'm happy to accept it as a fun practice and move on to another idea rather than dwell on something for too long.

It's really great when I get to co-write or co-produce with others because, when the collaboration works really well, it can speed up the process of getting from ideation to execution. I try to balance out my solo works with collaborations to keep myself inspired in different ways. For example, it's been fun collaborating with Moritz Simon Geist, who's been great at taking a long jam-session take I'll record of my voice and keys and turning it into a track.

What fuels your creative appetite? What drives you to keep making music? Where do you find inspiration?

When I can't express myself through words or visuals, I always turn to sound. Music is the space that translates complex emotions that are maybe too difficult to articulate or don't have words to describe them through any other medium. It's been a tricky balance for me because of all the different types of creative works I do, but music is always the most all-encompassing and consuming emotionally, mentally, and physically. By the time I get through new musical work, there's a sense of relief and connection that I don't get through working with other media. Sometimes making music comes from the need to transmute trauma and seek catharsis through sound, while other times it's because I feel the need to translate something more explicitly from something really abstract like AI.

Are there any Manifest Audio devices you can talk about using?

I need to try more! But beyond the Note Composer rack I used for COLLECTIVE VOICE ID, lately I've been having some fun experimenting with X-Ponder and X-Translate.

I imagine as these tools and workflows become more accessible, artists and producers will create more of their own AI models based on their own music that they can use in a co-creative context.

With the profusion of generative AI music tools, how do you see them fitting into the creative process of producers going forward?

Witnessing more artists and creative technologists create AI plugins for music production is really exciting. I love Moisés Horta Valenzuela a.k.a. hexorcismos’ forthcoming Max device called SEMILLA. It allows real-time neural synthesis, which I used to perform live with SOMI-1 sensors that let me travel through so-called latent space in real time in 3D — spatialized as part of MONOM's first Spatial Music Festival in October 2022. I imagine as these tools and workflows become more accessible, artists and producers will create more of their own AI models based on their own music that they can use in a co-creative context.
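As a rough illustration of what "travelling through latent space" with motion sensors can mean, the sketch below maps three sensor axes onto coordinates in a model's latent space and interpolates between two points; the sensor range and the three-dimensional latent slice are assumptions, and SEMILLA's and SOMI-1's actual interfaces are not shown:

import numpy as np

SENSOR_RANGE = (-1.0, 1.0)  # assumed normalised range for each sensor axis

def sensors_to_latent(xyz: tuple[float, float, float]) -> np.ndarray:
    """Clamp raw sensor values into the assumed range and treat them as a latent point."""
    lo, hi = SENSOR_RANGE
    return np.clip(np.array(xyz, dtype=np.float32), lo, hi)

def interpolate(start: np.ndarray, end: np.ndarray, steps: int) -> np.ndarray:
    """Linear path between two latent points, one row per synthesis frame."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1 - t) * start + t * end

if __name__ == "__main__":
    a = sensors_to_latent((0.1, -0.4, 0.9))
    b = sensors_to_latent((-0.8, 0.3, 0.0))
    path = interpolate(a, b, steps=5)  # points a decoder would turn into audio frames
    print(path)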

An AI self-portrait by Portrait XO.

Do you have any recent and/or upcoming projects you’re excited to share?

I self-released my first-ever research-based NFT-to-vinyl AI audio-visual album, WIRE, on 9th December 2022. The vinyl has been supported by twelve x twelve, and the physical copies will be arriving in my hands by July. Later this summer, I'll be performing a new show with kinetic, new media, and sound artist Neil Mendoza at Pop-Kultur. And I have a new short AI audio-visual composition called ALMOST LIFE that I'm showcasing at various events and festivals.

I was always fascinated by how different instruments and interfaces made me make music differently.

Wow — busy! What can you tell us about what inspired these projects?

CJ Carr is the unicorn human who got me into AI. I'd never met a musician who's also a data scientist before him. CJ introduced me to AI in 2015 when we first met, and he poked me about trying some experiments at the time, when it was all far more abstract than it is now. My journey with AI started from hitting my own writer's block during a bout of creative depression. In 2019, CJ and I got to collaborate as part of the Factory Berlin x Sónar+D Artist Residency program. I've been obsessed with AI, art, and science ever since. Before AI, I was always fascinated by how different instruments and interfaces made me make music differently. Whether it was a synth, a sequencer, a plug-in, or a completely new instrument, I became obsessed with music technology.

The thing I've become really obsessed with lately is the unexpected and unpredictable nature of AI. While AI is used by corporations to achieve different types of (so-called) perfection, it's the glitches and strange artifacts I've fallen in love with. Witnessing machines behave in these in-between states of latent space has inspired my mind to co-imagine with AI in fun and inspiring ways. Seeing and hearing things somewhere between form and non-form created new spaces to contemplate what form even means. Making sense, then not making sense — this is why using AI to create art and music has been inspiring, because to me what I hear and see is the most explicit translation of our current state of human-machine collaboration. We're trying to understand what's happening in machine learning, and even the experts don't fully understand it. It’s emergent.

For more information on Portrait XO, check out her website and follow her on Instagram.