8 micro-studies in mapping: a collection of single-fader instruments

ABSTRACT

This paper presents a collection of Csound instruments designed to be controlled with a single fader each. These instruments try to bridge the gap between the physicality of tape or vinyl manipulation and the possibilities of digital instrument design. The focus is on the expressivity and quality of control of each instrument's sound output. The resulting collection is meant to be part of a toolbox for live electro-acoustic improvisation.

1. INTRODUCTION

This is a practical exploration of parameter mapping in digital instrument design. It stems from a personal desire to develop a live electro-acoustic improvisation practice, and from the experiential observation that, in order to find one's voice, the appropriation of one's sound-generating devices is of the utmost importance.

Being primarily a composer of fixed media music, my experience with performing electronic or acoustic instruments is limited, and I am used to exerting precise control over sound. Thus I tend to find that most existing self-contained instruments are either too specific, requiring great effort to make them do what I want, or too idiosyncratic, forcing their identity onto my sound.

Starting from there, the path of least resistance is to use a general-purpose device —the computer— to create instruments tailored to my will. Of course, the problem with using the computer as a musical instrument lies in the physicality of control, in the relationship between intent, gesture and sound. Based on my previous literature review on digital instrument design, I have been exploring this path in a focused, specific way, which I present in this document.

2. FRAME OF THE RESEARCH

2.1 Controller interface

Solving the problem of control in digital instrument design means making binding choices about the subsequent stages of design. The way we interact with a sounding device, whether digital or physical, can be expected to greatly influence the outcome, not only on the purely acoustic level but also in the gestural aspect and its relationship to what we hear, as a performer or as an audience.

After reviewing all sorts of controllers, I decided to focus on faders. Whereas rotating knobs or encoders are widely used in electronic and digital instruments, faders are seen more rarely, and I wanted to give them some appreciation. For the most part, fader interfaces in computer music seem to be relegated to utility duties: mixing, recording automation. A notable exception is the crossfader present in controllers used by DJs. This "special" horizontal fader perpetuates its performative function, notably developed in turntablism. In tape music, particularly in the French acousmonium tradition, faders are an important part of the performance, sending a fixed stereo track to pairs of specialized loudspeakers, but they still keep to their function of controlling levels.

Having used faders extensively as a board operator in radio, I find that they afford two important dimensions of control: tiny adjustments as well as very fast, large gestures can be made without compromising precision. This versatility seemed in line with the kind of sound-producing processes that I imagined for my live practice.

After choosing the form of control, I experimented with a few designs and quickly decided to use a single fader for each instrument. The rationale is that in a live situation, having too many different faders on an instrument quickly leads to difficulty remembering the interactions between parameters, and turns "playing" the instrument into an often unnecessary micro-management chore.

I also wanted to embrace some kind of minimalist ethos, and adopted this stance as an attempt to explore the types of complex mapping that are found in acoustic instruments and contribute to their richness.

2.2 Sound generation

In the first stages of conception, the main envisioned use case was real-time sampling of other musicians. But it quickly became obvious that not all kinds of sounds would be interesting through any given process. Moreover, sound files would be better suited to the development stage, as they would facilitate the testing and validation of an instrument with a range of varied material. The ability to sample external signals and the modalities of assignment are indeed out of the scope of this project. So it was decided to focus on buffer manipulation. In a way, this can also be seen as following the path of early tape music experimentation, when the manipulation of recorded sound was paramount.

Another decision was to keep all instruments monophonic. In my theoretical context of chamber electroacoustic music, the sound of the computer musician is equal to that of all other musicians, that is, a point source placed in an acoustic space. Thus spatialization is considered irrelevant to this instrument design project.

2.3 Technical choices

For the implementation of this collection, Csound seemed the best-suited language, since I prefer textual environments over visual ones. The profusion of opcodes makes it relatively easy to derive various dimensions from a single control signal, and the similarity to an actual modular synthesizer means that I can transfer ideas from the world of wires. Using Csound also means choosing, to an extent, platform and architecture independence.

Regarding the controller, I decided to make extensive use of an open-source DIY fader controller, the 16n faderbank[^1]. Its form factor influenced the project in the sense that having all instruments accessible at once means that the instruments do not need to be overly complex on their own. The faderbank is used as a USB MIDI controller. It sends 7-bit values which, with a fader length of 60 mm, roughly translates to one value every half-millimeter. That is probably subtle enough in our case.
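As a reference for the sketches in the next section, here is a minimal way of reading one fader in Csound. The MIDI channel and CC number are assumptions (the 16n's assignments are configurable), and none of the code shown in this paper is the collection's actual source; these are illustrative sketches only.

```csound
; minimal sketch: reading one fader as a 0-1 control signal
; (enable MIDI input with e.g. -M0 in the options)
sr     = 44100
ksmps  = 32
nchnls = 1
0dbfs  = 1

instr 1
  kPos ctrl7 1, 32, 0, 1   ; channel 1, CC 32: 7-bit MIDI rescaled to 0-1
  kPos portk kPos, 0.005   ; light slewing masks the 128-step quantization
  printk 0.25, kPos        ; inspect the incoming values
endin
```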

3. IMPLEMENTATION DETAILS

The goal of this section is not to thoroughly explain the code, but rather to highlight the thought process behind it and the main points developed in each instrument.

This collection is divided into three groups of instruments. Each group corresponds to a metaphorical mode of interacting with the fader to produce sound.

These instruments were mostly written in a linear fashion, without rewriting older instruments in light of newer findings. I want this project to keep track of how different solutions may emerge from each other.

3.1 to interfere

The paradigm in this group is that the fader movement interferes with a mostly autonomous process (e.g. a running virtual tape loop). Two dimensions of the fader are used: its position, and its movement or absence thereof. If the control is left alone and not at zero, sound flows. When the control is moved, the sounding process is disturbed and goes on with its new parameters.

Positioning a fader at zero stops the sound output of the corresponding instrument, in whatever manner is sensible for that instrument. This was the obvious way to allow the performer to mute an otherwise infinite process.

3.1.1 instr 100

*Audio example: instr100*

This is a very basic yet effective instrument based around the diskin2 opcode. A file is looped continuously. Every time the fader is moved, the speed at which the file plays is randomly changed, within a range proportional to the fader opening. This can yield scratch-like sounds, as if someone were grabbing the tape and scrubbing it against the playhead. In an improvisational context, not knowing exactly what will happen is also interesting and stresses the importance of listening. As the speed changes often, I found the results far more playable and coherent with sounds bearing no obvious tonal or harmonic content.
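A minimal sketch of this idea could look as follows; the file name, CC number and speed range are placeholders, not the collection's actual code.

```csound
instr 100
  kFader ctrl7 1, 32, 0, 1
  kMoved changed kFader                 ; 1 on any k-cycle where the fader moved
  kSpeed init 1
  if kMoved == 1 then
    kSpeed random -2*kFader, 2*kFader   ; new random speed; range grows with opening
  endif
  kSpeed portk kSpeed, 0.05             ; slew the jumps into tape-like glides
  kOn    = (kFader > 0 ? 1 : 0)
  kGate  portk kOn, 0.02                ; fader at zero mutes the loop
  aSig   diskin2 "sample.wav", kSpeed, 0, 1  ; loop a mono file continuously
  out    aSig * kGate
endin
```

Negative speeds make diskin2 play backwards, which contributes to the scratch-like character.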

3.1.2 instr 101

*Audio example: instr101*

This instrument is basically the same as the previous one, with a quirk. When the fader goes above half-way, a healthy dose of reverberation is instantly added to the signal before progressively decreasing. Since the playhead never jumps around in the buffer, this allows spotting interesting moments and throwing them into the reverb.
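The reverb burst could be sketched like this; the reverb settings and decay rate are assumptions, and the plain loop stands in for the speed logic of instr 100.

```csound
instr 101
  kFader ctrl7 1, 33, 0, 1
  aSig   diskin2 "sample.wav", 1, 0, 1  ; stand-in for the instr 100 loop
  kCross trigger kFader, 0.5, 0         ; fires once when the fader crosses 0.5 upwards
  kSend  init 0
  if kCross == 1 then
    kSend = 1                           ; instantly open the reverb send...
  endif
  kSend  = kSend * 0.9995               ; ...then let it decrease progressively
  aWetL, aWetR reverbsc aSig*kSend, aSig*kSend, 0.85, 12000
  out    aSig + aWetL
endin
```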

3.1.3 instr 102

*Audio example: instr102*

Here, I introduce the use of mincer. This opcode allows changing the timescale of a buffer without affecting its pitch, which is exactly what is needed to expand the idea of instr 100 to tonal material. The fader course is divided into several zones, each featuring a speed range from very fast to almost frozen. The slowest range is also the longest on the fader, and features some playhead jitter. The fastest zone makes it easy to dip in and quickly relocate in the buffer before going back to the slow zone.
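Reduced to two zones for brevity, the mechanism could be sketched as follows; the zone boundaries, rates and jitter amount are assumptions.

```csound
giBuf ftgen 0, 0, 0, 1, "sample.wav", 0, 0, 1  ; sound file in a deferred-size GEN01 table

instr 102
  iDur   = ftlen(giBuf) / sr            ; buffer duration in seconds (mono file assumed)
  kFader ctrl7 1, 34, 0, 1
  kJit   init 0
  if kFader > 0.8 then                  ; short fast zone at the top of the course:
    kRate = 4                           ; dip in to quickly relocate in the buffer
    kJit  = 0
  else                                  ; long slow zone, almost frozen at the bottom
    kRate = 0.02 + kFader * 0.2
    kJit  randomi -0.05, 0.05, 2        ; slight playhead jitter
  endif
  kTime  init 0
  kTime  = kTime + kRate * ksmps / sr   ; advance the time pointer each k-cycle
  kTime  = kTime % iDur                 ; wrap around the buffer
  aTime  interp kTime + kJit
  aSig   mincer aTime, 1, 1, giBuf, 1   ; time-stretch without changing pitch
  out    aSig
endin
```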

3.1.4 instr 103

*Audio example: instr103*

This instrument takes the slow zone of the former and expands it over the entire fader, except for a tiny fast zone at the beginning of the range, for the relocation purpose described above. The output is then passed through filters whose response depends on the control input. Anticipating the second group of instruments, it makes use of the speed of the fader movement by deriving a control signal from it.
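The speed-derived filter control could be sketched like this; the filter choice and ranges are assumptions, and a plain loop stands in for the slow mincer playback.

```csound
instr 103
  kPos  ctrl7 1, 35, 0, 1
  kSpd  diff kPos                      ; first derivative of the fader position
  kSpd  portk abs(kSpd), 0.1           ; rectified and slewed: instant speed is too jumpy
  kCf   = 200 + kSpd * 40000           ; faster movements open the filter wider
  kCf   portk kCf, 0.05
  aSrc  diskin2 "sample.wav", 1, 0, 1  ; stand-in for the slow mincer layer
  aOut  moogladder aSrc, kCf, 0.3
  out   aOut
endin
```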

3.2 to impulse

This group focuses on using an initial gesture on the fader to impulse a sound. The parameters of the sound should be derived from properties extracted from the gesture. After the initial impulse, the controller may still influence the sound, but not significantly extend its duration.

3.2.1 instr 200

*Audio example: instr200*

This instrument is actually implemented with two Csound instruments. The first one recognizes and conditions the gesture, and instantiates the second (instr 2000), which actually makes the sound. In the analysis of the gesture, it is posited that the speed is equal to the maximum rate of change recorded during the entire gesture. This "initial" speed determines how long the generated event will be. For a gesture to trigger a sound, it must start with the fader at zero. This is needed to be able to use the fader to modulate the ongoing sound without triggering unwanted instances.
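This two-instrument scheme could be sketched as follows; the end-of-gesture heuristic, the speed-to-duration mapping and the placeholder sound are all assumptions (and, as noted in the conclusion, the scale of the diff output is tied to ksmps).

```csound
instr 200                              ; gesture recognizer
  kPos   ctrl7 1, 36, 0, 1
  kSpd   diff kPos
  kAbs   = abs(kSpd)
  kMax   init 0
  kStill init 0
  kArmed init 1
  kStill = (kAbs == 0 ? kStill + ksmps/sr : 0)  ; time since the fader last moved
  if kPos == 0 then                    ; gestures must start from zero:
    kArmed = 1                         ; re-arm the recognizer and reset the
    kMax   = 0                         ; recorded maximum rate of change
  elseif kArmed == 1 then
    if kAbs > kMax then
      kMax = kAbs                      ; track the gesture's maximum speed
    endif
    if kStill > 0.05 then              ; 50 ms without movement ends the gesture
      schedkwhen 1, 0, 0, 2000, 0, 0.2 + kMax * 400  ; faster gesture, longer event
      kArmed = 0                       ; further movement only modulates the sound
    endif
  endif
endin

instr 2000                             ; sounding instrument (placeholder sound)
  aEnv linseg 1, p3, 0                 ; simple decay over the computed duration
  aSig diskin2 "sample.wav", 1
  out  aSig * aEnv
endin
```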

3.2.2 instr 201

*Audio example: instr201*

This instrument uses the same mechanism as before, but drives a different sounding instrument, instr 2001, for which the time constants defining the gesture have been modified. Gesture speed influences the resonance frequencies, faster gestures yielding subtly brighter results. instr 2001 also makes use of the fader speed after the initial gesture, to modulate parts of the resonating sound, temporarily sending more signal into the reverb.
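The post-gesture modulation could be sketched as follows inside the sounding instrument; the internals of instr 2001 are not detailed in this paper, so everything here (source sound, scaling, reverb settings) is an assumption.

```csound
instr 2001
  kPos   ctrl7 1, 36, 0, 1
  kSpd   diff kPos
  kSend  portk abs(kSpd) * 20, 0.2   ; movement energy opens the send, then decays
  kSend  limit kSend, 0, 1
  aEnv   linseg 1, p3, 0
  aSig   diskin2 "sample.wav", 1
  aSig   = aSig * aEnv
  aWetL, aWetR reverbsc aSig*kSend, aSig*kSend, 0.9, 10000
  out    aSig + aWetL
endin
```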

3.3 the fader as a bow

The metaphor of using the fader as a bow, as in requiring a movement to keep sound going, is not new. In this group I take it rather literally, choosing to make the fader movement either the source of many tiny excitations of a larger sound, or a direct table pointer.

3.3.1 instr 300

*Audio example: instr300*

Fader movements, through the changed opcode, generate triggers at a rate directly related to the movement itself. This makes a straightforward bow-like use of the fader quite easy. In this instrument, the triggers are converted into a smoothly decaying curve (using portk); this curve controls the level of the main output, to which an ever-playing loop is connected. I have used slewing via portk extensively in this collection, but this is the first instrument where the smoothed triggers are so closely tied to the sound when playing it.
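The core of the mechanism fits in a few lines; the time constant is an assumption.

```csound
instr 300
  kFader ctrl7 1, 37, 0, 1
  kTrig  changed kFader                 ; a trigger on every fader movement
  kLev   portk kTrig, 0.3               ; smooth the trigger train into a decaying curve
  aLoop  diskin2 "sample.wav", 1, 0, 1  ; the ever-playing loop
  out    aLoop * kLev
endin
```

Sustained movement keeps feeding 1s into portk, which holds the level up; as soon as the movement stops, the input falls back to 0 and the level decays with the given half-time, much like a string dying out after the bow is lifted.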

3.3.2 instr 301

*Audio example: instr301*

In this instrument, as in instr 300, the movement of the fader allows sound to pass. But on top of the steady trigger output, the fader also scrubs an audio file stored in a table to modulate amplitude. The actual audio output is no longer a loop, but a two-layer sound composed of a mincer buffer that the fader can address in time, and a granular gesture dependent on the energy applied to the fader. When the energy is high enough, a resonant filter adds texture and sustains into feedback-like territory.
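The bow gate combined with the fader-addressed mincer layer could be sketched as follows; the granular layer and resonant filter are omitted, and all ranges are assumptions.

```csound
giBuf ftgen 0, 0, 0, 1, "sample.wav", 0, 0, 1  ; sound file in a deferred-size GEN01 table

instr 301
  iDur   = ftlen(giBuf) / sr        ; buffer duration in seconds (mono file assumed)
  kPos   ctrl7 1, 38, 0, 1
  kTrig  changed kPos
  kLev   portk kTrig, 0.3           ; bow-like gate, as in instr 300
  aTime  interp kPos * iDur         ; fader position addresses the buffer in time
  aSig   mincer aTime, 1, 1, giBuf, 1
  out    aSig * kLev
endin
```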

4. CONCLUSION

The making of this collection has brought up interesting questions regarding expressiveness. While some instruments work pretty well in this respect, getting a wide range of fine control is not as simple as it might have seemed at the start of the project. The multiplication of small-scoped instruments is probably a way to mitigate the issue. Because all these instruments derive time-related features from ctrl7, they are closely tied to the ksmps value. This might be a shortcoming when optimizing for lower CPU usage (i.e. raising ksmps), in the perspective of using more instruments at once.

To map a linear fader to parameters in the instrument, two main properties have been extracted: the position (value) of the fader and the rate of change of this value (speed). Speed, as the first derivative of position, was obtained with the diff opcode.

Those two properties have been conditioned in various ways to make them meaningful with regard to the physical gesture. Notably, instant speed was almost unusable: slowing the response with portk was necessary. Some control flow conditions (particularly in group 2) helped define how gestures were recognized. Remapping the position of the fader from a linear range through an exponential or quadratic function helped make some sound parameters behave more interestingly. Finally, small amounts of randomness controlled by value and speed were essential to enhance the playability and the sonic results.
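Put together, these conditioning steps amount to a few lines of code; the curve choice and amounts below are illustrative assumptions.

```csound
instr 1
  kPos   ctrl7 1, 32, 0, 1
  kSpd   diff kPos
  kSpd   portk abs(kSpd), 0.1       ; instant speed is unusable: slow the response
  kMap   = kPos * kPos              ; quadratic remap of the linear fader course
  kRnd   randomi -1, 1, 8           ; randomness, scaled below by gesture energy
  kParam = kMap + kRnd * kSpd * 10  ; a conditioned parameter, ready for mapping
  printk 0.25, kParam
endin
```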

For the sound itself, the perceived parameters mostly influenced were length, texture (rugosity, vibrato, ...), brightness/distortion, and resonance. The precise mapping from gesture to sound is best examined directly in the instruments' code. Of course, in their minimalism, these instruments are not as versatile as most. I think of each of them as a sound object that can be presented in a dynamic way, and it is in that light that they were made. More diversity can be attained by changing the audio files. Although playability seems to depend on the performer's knowledge of the audio material, this knowledge can be constructed while playing, with no need for prior exposure.

While using a single fader might have been a somewhat radical starting point, I found many different ways to interact with a fader when testing and playing the instruments. Given the kind of physical engagement that emerged, it would be worth continuing this research in two ways. First, by making an inventory of the various modes of interaction with the fader, and by trying different kinds of faders: the ones I used have a nice feel but are rather short and present a lot of friction. The long faders found on large mixers, or fast ones like DJ crossfaders, would probably open up new interactions. Second, I think instruments should be made with an even more specific sonic gesture in mind, that is, narrowing the sonic possibilities to gain precision and granularity over the defining aspects of the sound object to be played.

5. APPENDIX

The Csound code and sound samples are downloadable [here](http://deferlements.audio/sons/one-fader_collection_20200629.zip) (~20 MB).


[^1]: https://16n-faderbank.github.io/