Alex McLean on Embedded with the Open Data Institute
Every year, Sound and Music shine a light on the work of the composers who are currently in residence on our Embedded and Portfolio programmes. These are our New Voices of 2016. They are creating new, exciting and innovative music, across disciplines, all over the UK.
We caught up with live-coding composer and Algorave pioneer Alex McLean to discuss his motivations for creating generative dance music, the future of music programming, and his crowdfunding project Sponge Spicule. Sharing his application to Sound and Music’s Embedded programme, McLean also describes how his residency at the Open Data Institute allowed him to work freely and “develop a new strand of work,” focusing on the ODI’s Data as Culture series. As an advocate of performance technology, his works exude a richness of experimentation and innovation, exploring music and code as a means to ‘think out loud’…
Alex – you normally work under an alias. How did this come about?
Yes – my solo project is called Yaxu, made with a kind of algorithm that I came up with while slightly inebriated some years ago… You go through every letter of the alphabet, and decide which letter you like starting words with the best (in my case it was ‘y’). Then you go through again to decide which fits best after the first letter… I then just kept going, adding letters until there was no letter that I thought improved the word. This happened quickly (maybe I was getting bored of repeatedly considering the relative aesthetics of every letter in the alphabet), so I ended up with the short word, ‘Yaxu’.
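The greedy letter-by-letter procedure McLean describes can be sketched in code. This is an illustrative reconstruction, not anything from the interview: the `toy_score` function stands in for his subjective taste, which of course no program can reproduce.

```python
def build_alias(score, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Greedily append whichever letter most improves the word;
    stop as soon as no letter improves it (McLean's stopping rule)."""
    word = ""
    while True:
        best_letter, best_score = None, score(word)
        for letter in alphabet:
            candidate = score(word + letter)
            if candidate > best_score:
                best_letter, best_score = letter, candidate
        if best_letter is None:  # no letter improves the word
            return word
        word += best_letter

# A hypothetical preference: reward rare letters, penalise length
# quadratically so the word stays short, as his did.
rarity = {c: i for i, c in enumerate("etaoinshrdlcumwfgypbvkjxqz")}

def toy_score(word):
    return sum(rarity[c] for c in word) - 5 * len(word) ** 2
```

With this toy preference the loop terminates after a couple of letters, mirroring how quickly the real process converged on ‘Yaxu’.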
If you had to replace your alias with a symbol, what would you choose and why?
Oh, tricky. There are so many symbols around these days; we live in the time of the emoji. I think maybe the humble tilde: it has an air of vagueness which I identify with, and also looks like a wave or oscillation which fits with my love of sound and repetition.
When did you first take an interest in music coding and live-coding, and more importantly, why?
Around the year 2000, when I started my first job and was able to afford my first synthesiser (a Roland JV-1080)! As a programmer I thought, ‘why not write some code to control it, and hear what it sounds like when it's massively overloaded’ - what's now known as ‘black midi’. My friend Ade was already making generative art, including writing code and experimental interfaces that generated music; he really encouraged me, and we ended up forming a band called Slub. The motivation for this was really making people dance; we were listening to Autechre a lot and loved the idea of pushing dance music forward by making our own software. When we started, creative coding wasn't in the public consciousness – people didn't understand that programming could be creative at all, rather than a means to implement ready-formed ideas.
So live coding was a natural way to counter this - rather than trying to argue our case, we could just demonstrate it. Then there was a point around 2003 when a live coding community came together at a meeting in Hamburg. It turned out that there was just something in the air, and that a few different communities were exploring live coding techniques. That's when TOPLAP (http://toplap.org) was formed and we became an international organisation of people working out how to turn programming into live performance.
You recently started working with the Open Data Institute on Sound and Music’s Embedded scheme. What attracted you to the residency?
As someone who is both a musician and technologist, and enjoys working in an open way using and making free/open source software, it seemed a great fit. But also, I was just in need of some time out to reflect on what has been a busy couple of years, and develop a new strand of work. So, I wrote my application with the dream of doing a residency where I didn't have any real plan or expectations about what I'd make. For this reason I was very surprised I got it.
In the spirit of openness I've shared my application here:
What has been your highlight so far?
It's just so friendly there, and the people are so full of curiosity. The focus so far has been working with curator-in-residence Hannah Redler on an exhibition in the ODI HQ, as part of the ODI's Data as Culture series. We decided not to base it on my own work, and it's been great bringing together various collaborators and influences into what should be a really interesting collection of work, offering different perspectives on ‘thinking out loud’.
What do you think the future of music programming and performance technology entails?
My utopian hope is that live coding will break away from computer programming and become its own distinct set of technologies, practices and cultural meanings. As programming languages designed especially for creative coding and live coding develop, I think they will become closer to natural languages: combining gestural and symbolic expressions in a similar way, but allowing us to work with abstractions beyond our imagination. On the other hand, my fear is that as live coding becomes more mainstream it will become mundane: less about risk taking and improvisation, and ending up largely dismissed as a gimmick. Probably the truth will be a mix of both.
Do you feel music programming liberates the composer musically?
Yes, in a way – it allows the composer to set their own constraints, and think at a level of abstraction that suits their music. However, the ultimate freedom is being able to place each note or sound one by one, so in truth programming doesn't let you do anything that you couldn't do normally. It's a big trade-off really – by working with coded abstractions you lose the direct connection with the musical surface of sound, but you get closer to the higher level structures. Of course, there's nothing to stop you jumping between these different compositional levels.
When live coding, do you always know exactly what to expect after executing a command?
No, I think that would be pointless. I work with pattern, which for me involves exploring interferences that create a musical result in perception, and not in the original idea. I was heavily influenced by reading Paul Klee's pedagogical sketchbook when I was developing my first live coding environment; Klee puts a lot of focus on perception in a feedback loop of action and reaction - you make a mark, and only then are you able to perceive what you have made and decide what to do next based on that. This is how I think about live coding: as an exploration, working with code as a material, rather than implementing some abstract idea. I'm actually starting a five-year project led by mathematician and weaver Ellen Harlizius-Klück, which explores this way of making from multiple perspectives but is centred on ancient textiles.
How does live coding enhance the experience for the audience?
I don't know! You'll have to come along to an Algorave and find out! Live coders do generally project their screens as a gesture of openness, and I think that this is appreciated in different ways by different people. Actually, I've had the most positive responses to this from non-coders. Karen Burland at the University of Leeds specialises in audience research, and has turned her attention to live coding, so hopefully we'll have some clearer answers soon.
Tell us about your crowdfunding campaign, Sponge Spicule.
This is for my first solo album as Yaxu, and I think it could be an interesting direction to take for live coding - using it for making fixed tracks, without an audience in the usual sense, but live streaming the whole process of making it online. So the making process is a core part of the music itself, and it all gets grounded in feedback from the good people joining the crowdfund. The campaign is partly about that, but also supports the creation of a hardware device for live coding based on my TidalCycles live coding environment. I'm lucky to be working with some great institutions in Sheffield: Computer Club, Human and Pimoroni. Having moved here about six years ago – in part as I was attracted by the history of electronic music here – this feels like a real privilege. Anyway, there will be more announcements on that soon.
Do you have any recommendations of composers/artists to look out for in the next year?
I'm really excited by new collaborations forming in the world of live coding and Algorave. One new duo is AlgoBabez (Shelly Knotts and Joanne Armitage) who have been doing some amazing solo work but teamed up to completely destroy an Algorave in Leeds recently, taking things to the next level. It was phenomenal.
Interview by Emma Sugarman (Communications Intern - Sound and Music)