Hey David!
"It sounded like the more-or-less standard FL Studio strings to me."
But how would you define "standard FL Studio strings"? A preset from one of
the stock synths? Or a preset from one of the stock effects?
One of the interesting paradoxes of minimal music is that it sounds very
simple, but there's usually much more to it than meets the eye. I would
dare anyone to re-create Healing Fountain
<https://soundcloud.com/louigiverona/healing-fountain> using "standard FL
Studio strings". And I really mean it - try it! You can even download the
demo version of FL and run it through WINE.
If I were asked to try to reproduce it, I would probably take whatever
strings or pads I could find that sound similar and put a phaser on them.
But it's the details that matter. The transparency of the texture, the many
subtle movements. And this is where, I think, this track shines.
Having said that, I also don't want to oversell the originality or quality
of my work 😂
So, here's how I approached this track.
I am using two synths. Each plays a chord: one plays C#-D#-G#, the other
plays A#-F#-A#. Together they cover a full pentatonic scale, a time-honored
way to avoid worrying about chord compatibility. I pan them to different
channels and then use an EQ on each, not to clean anything up, but to
actually shape the sound - I cut out some frequencies completely. At this
point I am not yet creating a track per se, but sculpting what would become
the source material.
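(As an aside, here's a quick Python check of the scale math - nothing to do
with the FL project itself, just the pitch classes of the two chords above:)

# Pitch classes of the two chords (0 = C, 1 = C#, and so on).
NOTE = {"C#": 1, "D#": 3, "F#": 6, "G#": 8, "A#": 10}

synth_one = ["C#", "D#", "G#"]
synth_two = ["A#", "F#", "A#"]   # the A# is doubled an octave apart

pcs = sorted({NOTE[n] for n in synth_one + synth_two})
print(pcs)                       # [1, 3, 6, 8, 10] - five pitch classes

# The intervals around the octave come out as 2, 3, 2, 2, 3 semitones,
# a rotation of the major pentatonic pattern (F# major pentatonic here).
print([(b - a) % 12 for a, b in zip(pcs, pcs[1:] + pcs[:1])])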
I then apply a bit of reverb to both synths to blur the details slightly.
Now, you would think that I am using a phaser here, but I'm actually not!
Instead, I take one of the EQ bands, raise it and then automate it: this
gives me total control over the glissando effect, which is thus fully
derived from just the played chords - I am sort of stroking its vibrating
strings by moving through the spectrum and gently picking out note after
note. It's not impossible to get that effect with a phaser, but you'll have
much less control and you'll have the phaser do other things to the sound.
And here, I am just focusing on the notes and getting this really clean
"singing".
Okay, the source is ready. I render the result to a FLAC file.
After this I open it in another project. I then play the render at note C5
and note C4, apply reverb to glue the whole thing together, do some
broad-strokes EQing (mostly to clean up the mid sections), and then apply a
subtle filter and automate it throughout the track. Only at this point does
the track take shape and actually begin to sound like simple strings with a
phaser applied. I then also add a stream recording and create a bunch of
sounds that pop up from time to time in the track.
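For anyone curious what playing the render at C5 and C4 amounts to: it is
the same audio layered with a copy an octave apart - the lower one is
simply the same recording played back at half speed. A quick Python sketch
(assuming python-soundfile; the filenames are made up):

import numpy as np
import soundfile as sf

audio, fs = sf.read("render.flac")     # the rendered source material
if audio.ndim > 1:
    audio = audio.mean(axis=1)         # mono is enough for the sketch

# An octave down = playback at half speed: resample by linear interpolation.
idx = np.arange(0, len(audio) - 1, 0.5)
octave_down = np.interp(idx, np.arange(len(audio)), audio)

# Layer the two copies, padding to the longer one.
n = max(len(audio), len(octave_down))
mix = np.zeros(n)
mix[:len(audio)] += 0.5 * audio
mix[:len(octave_down)] += 0.5 * octave_down

sf.write("layered.wav", mix / np.max(np.abs(mix)), fs)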
Was it done the hard way? I would argue - no! I think that this process
allowed me to create a texture that is deceptively simple, but would be
very difficult to reproduce with just some strings and a phaser.
And with electronic music, the sound you end up with matters. It's as much
sculpting as it is composing.
"Do you repeatedly listen through each and tweak as you make one?"
Most dronings were made much the way cooking is done: you put things in
boiling water and see what happens. Brian Eno used the metaphor of
gardening: you put seeds in the ground and see what comes out.
I would have things running while I play around with the effects and try to
create interesting movement. Usually the process would involve three
stages: preparing some sort of source material; then loading it up into
Kluppe (or a separate project in FL Studio) and having multiple copies of
it play at different speeds while passing the result through a bunch of
effects; and finally, finishing the mix, adding more details or even
running another looping session with different sounds.
In the case of droning142, the reason it's so long is that it uses phasing
<https://en.wikipedia.org/wiki/Phase_music>: there are two copies of a
sequence that play against each other but have different lengths. One
can argue that this is actually not phasing, but a form of polymeter, but
both terms are usually applied to notes, whereas I am simply going through
audio recordings of a sequence, so the difference doesn't really matter.
And so I think I made a rough calculation of how long the track should be
to exhaust all the permutations of the sequences. I am actually not sure
that it did exhaust them, but I gave it enough time to explore the
permutations. The sequence changes frequently - and each time it's a
slightly new rhythm.
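One way to do that rough calculation: two loops of different lengths line
up again only after the least common multiple of their lengths. In samples,
with made-up loop lengths:

from math import gcd

fs = 44100
loop_a = int(7.30 * fs)     # length of copy A (made-up value)
loop_b = int(7.45 * fs)     # length of copy B (made-up value)

realign = loop_a * loop_b // gcd(loop_a, loop_b)    # lcm in samples
print(f"the two copies realign after {realign / fs / 60:.1f} minutes")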
I don't remember exactly, but I am quite sure that I first recorded the
sequences and then separately went through the same process with the
strings/pads. I played them manually into Kluppe and then phased them
against each other, sending them through Rakarrack.
L.V.