The ever-inventive YACHT used AI to deconstruct their entire back catalogue and then painstakingly put the whole lot back together to create ‘Chain Tripping’, an album of the year contender that has to be heard to be believed

“When you cut into the present, the future leaks out,” observed William Burroughs, discussing the creative cut-up techniques pioneered by Brion Gysin, developed by Burroughs himself, and later adopted to great effect by the likes of David Bowie.

The roots of those experiments lie in rudimentary technology, sometimes as simple as literally cutting up and re-assembling words on paper, at others manipulating reel-to-reel tape. Staying true to their spirit in the age of artificial intelligence proved to be an altogether more complex and, at times, literally painful process. ‘Chain Tripping’, the latest album from the Los Angeles-based (although they originally hail from Portland, Oregon) electronic pop conceptualists YACHT, is the fruit of that labour, the result of taking those ideas as far as modern advances allow them to be stretched.

It is a remarkable record, both in the story of its construction and its otherworldly sound. It couldn’t have been made by humans alone, but is almost more human as a result.

YACHT – it’s an acronym, not just shouty caps for the sake of it, standing for Young Americans Challenging High Technology – used two central questions as a starting point for the album’s creation.

“We made ‘Chain Tripping’ by stringing a handful of disparate artificial intelligence processes together with a very specific goal in mind: to make new YACHT songs,” explains the band’s multi-instrumentalist Jona Bechtolt. “That raised two questions immediately: how do we define a YACHT song? And how do we work with AI when we’re not programmers?”

The answer to the first involved breaking themselves down, piece by painstaking piece. They paid someone sourced via freelancers-for-hire site Fiverr to notate their entire 82-song back catalogue into MIDI information. Then they went through every song and chose their favourite patterns in two- and 16-bar chunks, until they’d collected together the best drum patterns, keyboard lines, vocal melodies and guitar riffs from their 17-year career.
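The chunking itself is simple to picture: once the catalogue existed as timestamped MIDI notes, patterns could be grouped bar by bar. A minimal sketch of the idea, where the note representation and function name are illustrative assumptions rather than the band’s actual pipeline:

```python
def chunk_by_bars(notes, beats_per_bar, bars_per_chunk):
    """Group (start_beat, pitch) note events into fixed-size chunks of bars."""
    span = beats_per_bar * bars_per_chunk  # beats covered by one chunk
    chunks = {}
    for start, pitch in notes:
        chunks.setdefault(int(start // span), []).append((start, pitch))
    return [chunks[k] for k in sorted(chunks)]

# Eight beats of 4/4: the first two notes fall in bar one, the rest in bar two
notes = [(0.0, 60), (1.5, 62), (4.0, 64), (6.5, 65)]
one_bar_patterns = chunk_by_bars(notes, beats_per_bar=4, bars_per_chunk=1)
```

From fragments like these, the band could pick the patterns worth keeping.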

“This data didn’t necessarily define us, but it was a start,” says Bechtolt.

“We then ran pairs of patterns through an open-source latent space interpolation model, an AI system that allowed us to navigate the high-dimensional mathematical space between melodies,” explains singer Claire L Evans (third member, all-round collaborator Rob Kieswetter, was otherwise engaged on the day we chatted). “By repeatedly running our back catalogue through this model, we generated hundreds of new weird patterns. We worked through all this melodic data, found what we loved most and assigned each melody a role: as a bassline or a vocal, a guitar riff, a keyboard part. Drum patterns worked in the same way. We then pieced these fragments into songs, one by one, and then learned and performed each song live.”
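Latent space interpolation is easier to picture than it sounds: the model encodes each melody as a vector of numbers, and new melodies are decoded from points along the line between two such vectors. A toy sketch of the blending step only (the vectors here are made up, and the band’s actual tooling was built on open-source models rather than this code):

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps):
    """Return evenly spaced blends between two latent vectors.

    In a real system each blend would be decoded back into a melody;
    here we only show the interpolation itself.
    """
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1 - a) * z_a + a * z_b for a in alphas]

# Toy latent codes standing in for two encoded melodies
z_melody_a = np.array([0.2, -1.1, 0.5, 0.9])
z_melody_b = np.array([-0.7, 0.3, 1.4, -0.2])

blends = interpolate_latents(z_melody_a, z_melody_b, steps=5)
# blends[0] is melody A, blends[-1] is melody B, the rest sit in between
```

Running every pairing of favourite patterns through a step like this is how hundreds of “new weird patterns” pile up so quickly.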

Simple, right? But learning to play some of these generated melodies was harder than you might expect.

“Of course, they have no relationship to the human body,” says Evans. “A seemingly simple two-bar drum loop took Jona 30 minutes to learn.”

For the lyrics, they worked with creative technologist Ross Goodwin to build a text-generating model trained on a massive corpus of song lyrics, based on two million words from songs they’d previously written, as well as songs they love, and grew up listening to.

“The idea was to replicate the influences already mixed up in our heads,” continues Evans. “Ross’ model generated thousands of lines of text, which we printed out on one giant continuous sheet of dot-matrix printer paper and brought into the studio with us.

“We literally sat on the floor for two weeks, highlighting the best lines from this endless document of lyrical nonsense. Sometimes it blew us away. The model was programmed to also name the songs, and, once trained, could generate infinite songs at varying temperatures.”

Temperature, for the uninitiated, is a concept in machine learning that determines how much risk the model takes when generating output.

“Low-temperature models don’t take many risks,” explains Bechtolt, “and high-temperature models are all over the map. In the case of our lyrics model, low temperature results in lizard-brain, punk-rock lyrics: think “I want your mind” over and over across seven pages. High temperature gives us nonsense words – telepathetic, foodsteps, commuscle…”
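In practice, temperature is a divisor applied to the model’s raw scores before they are turned into probabilities: divide by a small number and the favourite choice dominates, divide by a large one and the field flattens out. A minimal sketch of the mechanism, illustrative only and nothing to do with the band’s actual lyrics model:

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Scale raw scores by 1/temperature, softmax them, sample one index."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()            # shift for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(42)
logits = [2.0, 1.0, 0.1]  # the model's raw preference for three next words

# Low temperature: the top-scoring word wins almost every time
cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(20)]
# High temperature: the choices spread out across all three words
hot = [sample_with_temperature(logits, 5.0, rng) for _ in range(20)]
```

The “I want your mind” pages are the cold regime; telepathetic and foodsteps live in the hot one.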

To cope with the sheer volume of material these systems generated, and to ensure the finished product would feel like YACHT, they created a strict set of rules.

“We couldn’t add a single note or word to the generated material, but we could remove and arrange things however we wanted,” says Evans. “It was essentially a cut-up method, which has a long history in both the avant-garde and in pop music. Our muses were William Burroughs and Brion Gysin, who pioneered the cut-up, and David Bowie, who used custom software in the 90s to generate lyrical fragments. We were attracted to the idea of taking a super low-tech approach to working with really hi-tech systems. Ultimately, technology is only as interesting as what you do with it.”

Evans admits that they went into the project with zero programming knowledge and only the vaguest sense of what was possible. But, she says, YACHT have always appreciated that technology and art are deeply interrelated, often pushing each other forward.

“We had a sense that machine learning might be like the synthesiser: an opportunity for music to evolve in unexpected ways. We wanted to understand what was happening, and we always learn by making.”

However, they needed help to get their heads around the technology, which, especially when they started experimenting a few years back, was not particularly friendly to non-programmers.

They were fortunate to connect with collaborators early on, building a dream team of “research scientists, gonzo coder poets, visual artists and designers” who share their fascination with machine aesthetics.

The aforementioned Ross Goodwin, a former Obama speechwriter and creative technologist, trained the text-generating model on their influences, handing them a 10,000-word document to work from. Machine learning research scientists at Google Brain’s Magenta project tweaked their open-source models for YACHT to experiment with in the studio. And Dadabots, a duo of creative technologist/producers, pointed a raw audio model called SampleRNN at stems of previous YACHT songs, making hundreds of samples to act as a sonic library of source material.


Even the album’s artwork, typography, band photos and music videos are all showcases for collaborations with different artists working in the machine learning field. No easy task in itself, as Evans points out, as the disciplines of art and science still remain very separate entities, especially when it comes to ways of working.

“There are very, very few people on this Earth who have both the cognitive capacity to code at the level necessary to work with machine learning and the specific madness it takes to make interesting art,” she says. “It’s just too much brain for one person. So making music this way often requires collaboration between technologists and musicians, either in the form of close collaboration, with the technologist building models to execute ideas, cluing the artist into what’s possible, adapting and refining models in an iterative, conversational process, or more abstractly, with the technologist building tools that can be adapted by artists to suit their purposes.

“We’ve done it both ways. At their best, these collaborations force both parties to move away from what’s comfortable, face their assumptions, learn new languages, and find common ground. Artists deal in the subjective, and engineers deal in the quantifiable, and so even the simplest things can prompt long philosophical discussion. Defining ‘success’ for example. What makes an AI’s output interesting, or good? Scientists want metrics, something they can feed back into the system to improve it. All we can do is respond to what these systems generate and make judgments about what speaks to us. Sometimes tools don’t do what they are meant to do at all, but they do something totally different, and maybe better.”


By feeding their entire back catalogue into the process, we suggest, they’ve created an alternative to a greatest hits album. A sort of greatest hits they never wrote, maybe?

“You’re right,” reckons Bechtolt, “but instead of it being the greatest hits we never wrote, it’s more like the greatest hits that could never exist under capitalism, or in this dimension, for that matter.”

In actual fact, the results, far from being a homogenised YACHT product that simply rolled off the production line, were so much weirder than anything they’d ever dare to write.

“Quite often, the stuff these AI models generated barely hung together,” says Evans.

“Sentences led in one direction and then took a sharp turn sideways. We’d get 10 pages of repetitive lyrics and then one bonkers digression. Notes seemed to form a melody before falling down the stairs completely. Meaning, when it emerged, came out of nowhere and knocked us sideways. Honestly, there was nothing cold about it. We taught a machine how to write songs by feeding it only the songs we love, and its surrealist attempts to emulate that data, our influences, were a constant joy.”

It’s true. The parts, both lyrical and musical, that were plucked out for use in the songs definitely have a weird poetry to them. From the album’s opening line “Welcome to your pleasure” on ‘(Downtown) Dancing’, with its inverted ESG-style bassline and instinct-defying onbeat cowbells, to the dub/synthpop hybrid ‘Sad Money’ and its chorus of “I’m gonna have sad money”, a new way of expressing meaning seems to be taking shape.

‘Loud Light’, for instance, offers up a priceless chorus: “I’m so in love / I can feel it in my car”, a line that arguably no human would ever write, but one that has an uncanny grasp of the wayward language of pop. Contrary to what you might think, Bechtolt says, the process wasn’t about discovering “crazy” moments. Quite the opposite in fact. It was more about being shocked at the sheer beauty of a melody, or being delighted at the wonkiness of a drum pattern. He points to the instrumental break a minute and a half into the album’s third track ‘Scatterhead’ as a good example.

“That was the first melody that really surprised us,” he says. “It was a loose, aimless, over-long digression that breaks all the rules and yet somehow still works. It’s the melody that convinced us this whole project was worthwhile. A melody like that is worth building a house around.”

The hardest part was trying to bypass the highly instinctual, intuitive creative process that was second nature to them.

“We tend to lean on what feels natural and fun for our brains and bodies,” says Bechtolt. “The specific process we imposed on ourselves for this album was not natural, and it often wasn’t fun at all. Since these AI models have no relationship to the human body, and even less relationship to their very specific musical limitations, some of the parts are really weird and hard to perform. Learning the bassline for ‘(Downtown) Dancing’ nearly dislocated my shoulder!”

He acknowledges it is very much a case of no pain, no gain.

“We’re not alone in our belief that the good stuff lives in the space between ambition and means,” he says. “Our favourite music was made by people struggling to just get through the songs.”

The proof is in the pudding, of course, and ‘Chain Tripping’ proves beyond doubt that all that pain, both mental and physical, was well worth it. Far from feeling cold and emotionless, it actually has a feel of vivid hyper-reality. The whole process has also helped to break YACHT out of the habits of a lifetime, something which will stay with them beyond this point and can only be a good thing for future creativity.

“It’s been extremely challenging,” says Bechtolt in conclusion, “but for every habit we break, we make space for something new. AI is a strange mirror that helps us to see ourselves. When used in this way, we genuinely believe it makes us better.”

‘Chain Tripping’ is out on DFA
