Providing resources and training in the practices and tools of the digital humanities

Faculty Interview: Kip Haaheim

Interviewed by Brendan Allen, November 2012

Would you like to introduce yourself, your department, and your general areas of work?

I’m Kip Haaheim, and I am a composer and an Associate Professor of Music Composition in the School of Music. I teach composition and music theory, and my specialty is using music technology – that’s where the digital part comes in. I’m the only person on the faculty who really specializes in that, so I get lots of business.

We have a studio set up for composition students and other students to use computers to compose music. That entails both recording music and using computers to process sound, or to use sound in less conventional ways. In other words, we compose – like Beethoven would have done with pencil and paper – but we also use the technology either as an aid for that, or as an actual processing tool to create things that you couldn’t do any other way.

Within the field of electronic music, I adhere to a lineage of what they call musique concrète, a French movement founded in Paris in the late 1940s. Because recording technology had finally gotten to the point where it could be used out in the field instead of only in a big studio, composers could use real-world sounds as part of musical composition. That has been one of the possible threads of electronic music ever since, and it’s one of my main interests. I guess the reason I like that is because there’s something about using sounds that have a built-in organicism to them – things that we may have a built-in association with, an internal richness that I really like to work with. That would be in contrast to, say, using synthesizers, where you build the sounds from the ground up. They don’t have quite the same appeal to me as using processed, recorded sound.

How does “found sound” relate to your own work?

Almost all of my electronic music involves some use of recorded live sounds. There are basically two different ways it works. One would be, let’s say, I’m writing a piece for clarinet. What I might do is record the clarinet – not just playing notes, but also the key clicks and other nontraditional sounds you can make with a clarinet. Then I use the computer to process those sounds in a way that alters or changes them dramatically in terms of their color, their affect, and their rhythms. I find that when the clarinet player plays along with that, there’s a built-in connection between the electronic computer part and the clarinet part – since all the sounds were developed from the clarinet, there’s a built-in coherency.

Now, another thing I do is use real-world sounds, but in a musical way. For example, I wrote a piece last year for cello, and it’s based on a poem. One of the poem’s main images is rain, so we recorded rain and used those recordings to develop a kind of soundscape, using the textures and sound of rain as part of the raw material. The piece is actually for cello and voice – when we perform it live, there’s a cellist and I speak the poem out loud as the cellist plays. The computer takes the two sounds and blends them together into a hybrid, so it sounds like the cello is actually talking. The background is made up of these sounds of rain, because that came from the imagery of the poem itself. That’s another way that I might use real-world sounds.

Another piece I did explored the interface between wilderness, the natural world, and the developed world – especially where conflicts of interest arise. I recorded sounds of lumber milling, trains pulling coal, and mines, and I went out to some actual lumbering sites where they were cutting down trees and recorded the sounds. I also recorded sounds of nature – waterfalls, streams, birds. Then I composed a piece that involves those sounds interacting with each other. In that piece, the notion of what we would normally think of as music – harmony and chord progressions and melodies – isn’t really so much a part of it. I like to play around with that, with the interface between those two ideas.

How do you go about modifying and incorporating these sounds?

To start, you need to actually record these sounds into the digital domain. I have a little, portable, high-quality recording device that I can take into the field – or I’ll record sounds in the studio. Most recording programs, at least the ones on the professional level, have a significant amount of what we call signal-processing capabilities. For example, you can repeat sounds, you can make them go backwards, you can change the pitch, you can filter them in various ways – there are all kinds of things you can do with the computer that alter the basic sound. Sometimes, by using multiple steps, you can end up with something that bears very little resemblance to what you began with, because the capabilities are really quite remarkable.
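[Editor’s note: as a rough illustration of the operations described above – reversal and pitch change – here is a minimal sketch in Python using the numpy and soundfile libraries. The file names and the resampling factor are hypothetical; professional recording programs wrap operations like these in graphical tools.]

    import numpy as np
    import soundfile as sf

    audio, rate = sf.read("clarinet.wav")   # hypothetical source recording

    reversed_audio = audio[::-1]            # play the sound backwards

    # Crude pitch change by resampling: reading the samples twice as
    # fast raises the pitch an octave (and halves the duration), much
    # like tape played at double speed.
    factor = 2.0
    indices = np.arange(0, len(audio), factor).astype(int)
    octave_up = audio[indices]

    sf.write("clarinet_reversed.wav", reversed_audio, rate)
    sf.write("clarinet_octave_up.wav", octave_up, rate)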

One of the pieces of software that I like to use allows me to modify the sound in various ways, and sort of blend the characteristics of one sound with another. The result is that you end up with a sound where you can actually hear the qualities of both, but the sound isn’t really either one. It’s a process called convolution, which is quite interesting. You can do some particularly beautiful things with it.
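[Editor’s note: convolution blends two sounds by multiplying their frequency spectra, so the spectral character of one filters the other. Below is a minimal sketch with numpy and soundfile, using hypothetical mono recordings – not the specific software Haaheim describes.]

    import numpy as np
    import soundfile as sf

    a, rate = sf.read("cello.wav")          # hypothetical recordings
    b, _ = sf.read("rain.wav")
    if a.ndim > 1: a = a.mean(axis=1)       # fold any stereo down to mono
    if b.ndim > 1: b = b.mean(axis=1)

    # Convolve via the FFT: multiply the two spectra, transform back.
    n = len(a) + len(b) - 1
    hybrid = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

    hybrid /= np.max(np.abs(hybrid))        # normalize to avoid clipping
    sf.write("hybrid.wav", hybrid, rate)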

I use different types of synthesis techniques, like spectral synthesis and vocoder synthesis – different things that computers are really good at. A lot of it is just basic stuff that they used to be able to do with tape, like cutting a piece of it and flipping it upside down so it goes backwards when you play it through a recorder. A lot of times that yields very interesting results. I also use samplers a lot – the same kind of samplers you might use if you were playing in, say, Lady Gaga’s band. It’s the same technology; I just use it very differently.

Those are the kinds of software I use for the production of the music itself. When I actually perform the music live, I use a different set of software, designed to manipulate the sounds and handle them in a live performance situation. What I use for that is called Max/MSP. It’s particularly good for live performance audio, and it’s also good for creating automated, interactive kinds of things – like when you go someplace where you’re actually interacting with a computer, and that creates different effects in the audio you’re hearing, say in an art museum installation.
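[Editor’s note: Max/MSP is a visual patching environment, so a patch can’t be reproduced in text. As a rough analogue only, this hypothetical Python sketch uses the sounddevice library to apply one simple live effect – ring modulation – to microphone input in real time.]

    import numpy as np
    import sounddevice as sd

    RATE = 44100
    phase = 0

    def callback(indata, outdata, frames, time, status):
        global phase
        t = (np.arange(frames) + phase) / RATE
        # Ring-modulate the live input against a 220 Hz sine wave
        outdata[:] = indata * np.sin(2 * np.pi * 220 * t)[:, None]
        phase += frames

    # Duplex stream: microphone in, processed sound out, continuously.
    with sd.Stream(channels=1, samplerate=RATE, callback=callback):
        sd.sleep(10000)                     # run live for ten seconds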

With everything I do, I like to be able to do it live. I’m not the kind of electronic musician who only makes things that can be done in the studio. I like to use the studio as a tool, but ultimately it’s about the live performance – how do we bring this to life? I almost always involve a live performer; in some cases that performer is a musician who’s playing an instrument, like a cello. Other times it will be me, playing the computer. I have a piece that I did with one of those Tibetan bowls that you play with a stick to make a ringing sound – it’s a beautiful sound. I play that live, and the computer processes the sound in various ways. I like to do that because I think it’s part of the humanities part of the IDRH – there’s a performance aspect that I find important. I don’t like to just live in the studio. The digital part is important as a way of getting the ideas out there. But, for me, the focus is the audience-performer-composer interaction.

How have these modern digital audio techniques allowed you to reach new levels of interaction between performance and technology?

First of all, you can think of the computer as kind of an extension of your own capabilities, handling things you couldn’t possibly manage if you had to do it all yourself. That’s one aspect. There are certain themes, musical effects, and levels of complexity that you can obtain with a computer that you couldn’t possibly achieve as a performer playing a musical instrument.

Another aspect of my work is that I like to create a kind of immersive environment. You go to see a concert and the sound actually surrounds the audience – it’s not just going from the stage to the audience. The audience is really in the middle of it. Letting a computer handle some of those things – which you couldn’t possibly control with knobs and buttons on a mixing console – allows for a much more elaborate kind of immersive experience. It just would not be possible without a bunch of people; you can imagine an orchestra of people performing with mixers – it would be impractical, at the very least. Ultimately, it’s very accessible with a computer.

Some of the processes themselves also lend a kind of accessibility. There are things you can do with sound in a computer that are really quite beautiful but would not be achievable any other way. Just as an example, can you imagine what a vibraphone would sound like if it were 30 feet long, instead of, say, 6 feet long? You can go beyond the normal physical limitations of things. There’s a type of synthesis called “physical modeling,” where the computer really just describes the physics of a musical instrument, and then it creates the sound based on that, so you’re not limited to actual physical instruments. You could say, “Well, let’s have a guitar that’s 200 feet long and has a string that’s as thick as your arm. What would that sound like?” Sometimes those can be very interesting and beautiful.
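[Editor’s note: a classic textbook example of physical modeling is the Karplus-Strong plucked-string algorithm, sketched below in Python. The “giant string” frequency is invented for illustration, and this is not necessarily the synthesis software Haaheim has in mind.]

    import numpy as np
    import soundfile as sf

    def pluck(frequency, duration, rate=44100, damping=0.996):
        # Karplus-Strong: a burst of noise circulates in a delay line
        # whose length sets the pitch; averaging neighboring samples
        # models the energy loss of a real vibrating string.
        period = int(rate / frequency)
        buf = np.random.uniform(-1.0, 1.0, period)   # the initial "pluck"
        out = np.empty(int(rate * duration))
        for i in range(len(out)):
            out[i] = buf[i % period]
            buf[i % period] = damping * 0.5 * (buf[i % period] + buf[(i + 1) % period])
        return out

    # A string tuned to 20 Hz – far below any real guitar string.
    sf.write("giant_string.wav", pluck(20.0, 6.0), 44100)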

The analytical tools of a computer can give you access to manipulating sound in a really interesting way too. Physical instruments are subject to the laws of physics, of course – they work in a certain way. With a computer, you can figure out how that works and then change something about it, so that it’s no longer exactly in line with the physics. That can create a really interesting and unusual effect; it’s kind of like finding a new kind of color in a painting.

I find it kind of liberating – but also intimidating – to try to keep up with the technology. Things are moving so fast that it’s just not possible to read everything and try everything out. Every week something else comes out saying, “Oh, try this new way of processing sound!”

You’ve mentioned before that the field of music has always been waiting on technology to catch up. Is an inverse relationship beginning to emerge as technology gets faster and faster?

Yeah, I think that has definitely been true since about the 1970s. The technology began to move forward in a way that sort of precludes developing a normal sense of “virtuosity.” Think about a violin – it’s been around for a long time, and a lot of really fabulous musicians have learned how to play it. They’ve developed it to a point where, when we go hear a concert or see a violinist play, they have to attain a certain level of virtuosity even to be taken seriously. With music technology as it has been since, say, the ’70s, it’s just not possible to develop that – it takes many years to develop that kind of virtuosity. It would be as if you were a violinist, and every year they redesigned the violin so that all the fingerings you had learned didn’t work anymore.

So on the downside, it does make it difficult to really attain a kind of depth with the technology. But on the good side, it does favor innovation and not becoming confined. The tradeoff with instruments like the violin is that there’s this fabulous virtuosity and tradition, but that ends up being a limiting factor too – how we associate with the violin, and what kind of music is written for it, is kind of predetermined in a way that isn’t the case with music technology. It all depends on how you look at it.

Do you collaborate with anyone in particular?

I like doing collaborative work a lot. I would say that most of my work is collaborative on some level. For example, last year I did a piece where I worked with a video artist and we created a video – he did most of the video work and I did most of the audio work, but we influenced each other in the production of the movie. It’s a short, experimental film. That kind of work I really like – certain things arise when you’re working with one or more other people who are creative and expert in some aspect. He was able to bring things to the film process and the creation of the visual aspects that I had no real expertise in, but that were natural for him.

With the cello piece I mentioned earlier, that was a collaborative process that involved both the poet who wrote the poem and the cellist – we got together and worked it through. Ultimately, I wrote the music for it, but I was clearly influenced by their input in a variety of ways. From the standpoint of a kind of editor, I would run ideas by them and see how they reacted, but they were also able to generate ideas that I wouldn’t have thought of in a million years – ideas that were really natural and easy for them – which I would then incorporate into the piece. It ends up being something that is a lot more than any one of us could have come up with on our own.

How has collaboration worked for you in terms of multimedia?

Every piece you write is going to be somewhat different. A lot of times, it’s driven by what instrument it’s for. If you write for a string quartet, you come up with music that will play well for a string quartet, but the exact same music might not work very well for a group of brass players, for example. The techniques and strengths of the instruments are very different.

It’s the same when you’re working with multimedia – you adapt to the aesthetic of what you’re working with. I did the music for Kevin Willmott’s movie, The Only Good Indian, and in that case it was a commercial venture for a wide audience. That feeds into a certain factor – you have to write music that’s like other movies. The subject matter of the film also comes into play – it determines what kind of music you might write, and it has to be appropriate to the mood of the scene. There are many different opinions when you’re working in a group like that – each person has their own take on what the music ought to be like, and there’s a group of people you have to please. There’s this whole process that pushes the music in one direction or the other. You have to go with that – if you stand rigid, nobody’s happy, and the product ends up not being very good.

In other situations, you have much more flexibility or control. In the video piece, for instance, I created the music and we cut the video to the music – we realized that, since it was a non-narrative film, the timing of the visual events could be adapted to the music more easily than the other way around. That’s a little bit unusual in film music – usually you have to know that, “This scene takes 1 minute and 13 seconds, and you can’t write 1 minute and 14 seconds.” It all depends on the situation.

That’s one of the things I really like about it – every new piece is a different set of challenges, a different kind of learning experience for me. I can’t remember the last time I was bored; it just doesn’t happen.

To tie back into the digital aspect, the computer nowadays allows for a much faster turnaround of ideas. In many cases, if I’m working on an idea with somebody – in audio or video, or even with a choreographer – it’s something I can do right then. I can say, “Let’s try this,” and spend two minutes making some kind of adjustment that ten years ago would have taken two hours. It’s the same with video editing – what used to be a much more cumbersome process taking a week, now you can just cut the section five seconds shorter and watch it right away. That’s been a tremendous boon for collaborative work.

What’s the funding process look like for your projects? Have any grants helped your work come to fruition?

A lot of the time, I’m the creative person who says, “This is what I got, let’s see what I can make with that,” instead of having some grand vision and seeking a way to make that happen. I see that as a strength and a weakness at the same time.

Is there anything else that you’d like to say about your work and its relationship with the Digital Humanities?

I think that part of my personal aesthetic has to do with beauty, and with a kind of humane, kinder, gentler electronic music. I don’t tend to go for music that involves extremes of virtuosity. There are certain things about that kind of music that can be quite interesting, but I’m more interested in exploring beauty. I’m more attracted to the uses of the computer that still have a strong human component – I think of it more as a tool, a means to an end, rather than the end itself. That’s not the only approach you can take, and many of my colleagues in electronic music are pushing the envelope in other ways that involve more technical aspects. In my heart, I want the music to be beautiful in some way, to have at least a kind of richness to it. I sort of fall more on the humanities side of the Digital Humanities. That’s just one way to be, I suppose.


