Fi Sullivan’s music intersects science, coding and pop
For a week, singer and producer Fi Sullivan was unsure whether she would be able to take part in important career milestones after a devastating cycling accident left her in intensive care. But last week she was given the green light for a marathon of creative engagements, including three nights on an interactive stage at Bonnaroo and a headlining concert tonight, June 24, at Globe Hall in support of her new EP, Shades of Forest.
Sullivan’s compositions sit at the crossroads of high technology and the natural world. A Thomas J. Watson Fellow with degrees in computer science and music, she has traveled the world integrating her studies with her creativity. Though pop-inspired and accessible, her music is layered with code that warps, stretches and bends the planet’s timbres and underlying mathematics into lush, empowering and fleeting tones. Leading the compositions is a three-octave vocal range that undulates and vibrates with the rhythms, delivered in a way reminiscent of the chaotic order of the Anthropocene.
Shades of Forest is mainly inspired by nature. She first started the EP while on her fellowship, researching “human vocal continuity at the intersection of music and technology” by traveling the world to learn how the human voice exists, evolves and expands in different forms across time, technological innovation and cultures. Through the fellowship, Sullivan lived in a wide range of cultures, with stints in Europe, Australia, South America and the Arctic Circle, where she encountered many types of forests: urban, rural, frozen, rocky and tropical. When COVID escalated, she was forced to cut short her explorations and return to Colorado. She spent much of that time in the evergreen forests of the Rocky Mountains, thinking about the forests she had passed through the previous year.
Westword met Sullivan just as she arrived at Bonnaroo and spoke to her about how technology influences her compositions, the human voice, and sound in general.
Westword: What was the process of creating this EP like?
Sullivan: All of my compositions begin with mind mapping and sound — imagining and visualizing sounds that I physically and mentally bring together, all alive, layered and interacting with the natural environment and humans in new worlds, their own worlds. My songs start with a dream scene that pops into my mind, either before or while I’m playing guitar or jamming through Ableton on my computer. I usually find the chorus or main dance moment first, then the vocals and lyrics come intermittently in waves as I try to describe my soundscape and feelings. I tend to fall into a feverish flow of creative energy when writing and producing songs; it’s difficult for me to detach myself from it.
On your new EP, how do you integrate technology into the productions, beyond your DAW (Digital Audio Workstation) and standard plug-ins?
While writing a few of the songs on the new EP, I was exploring and researching certain algorithms and ideas that then shaped the character and sound of the tracks. The influence of the natural environment appears organically in my sound and compositions after that.
“West Water” was originally composed during my research into the evolution and analysis of art, when I was a research assistant in Professor Andrews’ Analytical Arts Laboratory at Middlebury College. The original “West Water” piece was generative and evolving, unfolding slowly and patiently over a ten-to-twelve-minute Ableton Live session. I played it live with a sax improviser friend and also used “The Cave” vocal patch in Max MSP. The lyrics came later, from a rafting trip through Westwater Canyon in Utah. I was thinking about the evolution of the sandstone towers there.
For “Shades of Forest,” I was exploring knot theory and algorithms, as well as algorithmic rave music, at the Australian National University in Canberra while researching as a Thomas J. Watson Fellow in 2020. I was there during the tragic bushfires, and had to wear a P2 respirator mask every day, tape my windows, and put a damp towel under my door to keep the smoke out. The piece takes influence from the growth and movement of the bushfires — the lag between peaks in the fire data and the overall back-and-forth of the fire. “Shades of Forest” has that delayed call-and-response interaction between its spikes.
About your thesis: how does the intersection of technology and the human voice create continuity in music and sound art?
The intersection of technology and the human voice creates continuity in music and sound art because technology allows the human voice to expand into new realms and forms of sound, which can themselves continue and evolve as the technology continues and evolves. I’m obsessed with sound — how it appears and disappears so instantly, with grandeur or subtlety; the way it is sculpted, as an invisible medium, to convey intense emotion. It’s magic to me. That’s why the intersection of technology and the human voice is so fascinating: the idea that this sacred, inherent instrument that all humans carry with them at all times can be layered, delayed, harmonized and transformed into new dimensions of sound is exciting.
Does vocal continuity occur naturally?
Yes! In many forms and senses. There is vocal continuity across vocal traditions and extended vocal techniques such as overtone singing, throat singing, kulning and [First Nations] songlines. All of these vocal techniques have been perpetuated over time, carried across generations as forms of art, but also of survival and play.
What are some of your favorite music technologies and how do you use them in innovative ways?
Max MSP is by far my favorite music technology, because you can imagine anything and find a way to create it within the Max MSP environment — it’s so beautiful! I love using Max MSP to develop generative music, visuals and sound synthesis. I also like to use it to build my own generative digital vocal instruments that I can use in live performance or for production.
My vocal instruments tend to be inspired by the natural world — [I call them] biomorphic digital vocal instruments. One of my favorites is a patch I created called “The Cave,” which lets me improvise with my voice in a randomized, cave-like soundscape. I love it for live performances, because I’ll improvise a line and that vocal line will come back to me ten minutes later, modified by the system, and then I can sing and improvise with myself. I also use Max MSP to create generative instruments. My favorite has been an instrument called “weather pattern,” which I use to control sounds in Ableton during live session performances by turning Denver weather data into sound.
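The core idea behind an instrument like “weather pattern” — mapping environmental data onto synthesis parameters — can be sketched in a few lines. Sullivan’s actual Max MSP patch is not public, so the function names, input ranges and parameter choices below are invented for illustration; this is a minimal data-sonification sketch, not her implementation.

```python
# Hypothetical sketch of weather-data sonification: one weather
# observation is mapped onto synthesis parameters (pitch, filter
# cutoff, reverb mix). All ranges and names are illustrative.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a reading from an input range to a parameter range."""
    value = max(in_lo, min(in_hi, value))  # clamp out-of-range readings
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def weather_to_params(temp_c, wind_kph, humidity_pct):
    """Turn one weather observation into sound-control values."""
    return {
        # colder air -> lower pitch (MIDI note numbers 36..84)
        "midi_note": round(scale(temp_c, -20, 40, 36, 84)),
        # stronger wind -> brighter filter (cutoff in Hz)
        "cutoff_hz": scale(wind_kph, 0, 60, 200.0, 8000.0),
        # higher humidity -> wetter reverb (dry/wet mix 0..1)
        "reverb_mix": scale(humidity_pct, 0, 100, 0.0, 1.0),
    }

# A mild, breezy Denver afternoon as an example observation:
params = weather_to_params(temp_c=5, wind_kph=30, humidity_pct=40)
print(params)  # {'midi_note': 56, 'cutoff_hz': 4100.0, 'reverb_mix': 0.4}
```

In a live setup, values like these would typically be sent on to the DAW as MIDI or OSC messages each time a new observation arrives, so the weather literally plays the patch.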
Is there a particular technological evolution that you think hasn’t been applied properly to music yet?
I’d love to see wearable technology evolve and interact more with music – I’ve always been obsessed with Imogen Heap’s MiMu gloves. It would be amazing to see this kind of wearable technology – even the attachment of an accelerometer to a guitar, drumstick, or jacket sleeve – become more accessible and seamlessly integrated into live performances, even classic rock concerts. The idea of your hand movements and gestures being your instruments and sound control is so cool! Wow, that would be amazing!
What do you think of generative music?
I’m excited about it for so many reasons — generative soundscapes for a 4D sound experience or sound-art installation; video game music; generative music for theater or cinema; generative music as an environment for improvisation, like another musician on stage with you. I’ve even seen the coolest research from Ars Electronica in Linz, Austria, on using machine learning to communicate with birds.
Do you believe that algorithms could one day support music composition at the same level that algorithms support music selection?
I sincerely believe that the computer will never replace the human musician. Sorry, computers, I love you, but music needs the human heart and soul — not to be cheesy, but it’s true. I wouldn’t want musicians to be replaced by generative music for monetary or operational convenience; that would be tragic. I envision, and hope to see, generative music as a tool to aid human composition and creation. But the human composer will always have the artistic, creative and beautiful edge that the algorithms won’t, so I don’t think algorithms will take over; I hope they will primarily become tools that help humans compose and create.
Shades of Forest is out on all platforms. Fi Sullivan headlines Globe Hall, 4483 Logan Street, Friday, June 24; tickets are $15.