[LAU] placement.

From: alex stone <compose59@email-addr-hidden>
Date: Sun Jan 04 2009 - 03:32:14 EET

Ok, this might be a bit of a curly question; I don't know whether what I'm
asking is even possible, or valid.

The subject is placement, and pertains to orchestral recording. (My own work,
composed in the box with LinuxSampler, from MIDI in RG, and recorded in
Ardour.)

I'd like to place my instruments as close as possible to an orchestral
setup, in terms of recorded sound. That is, once i've recorded, i'd like to
use convolution and other tools to 'correctly' place instruments within the
overall soundscape.

example:

With the listener sitting 10 metres back from the stage, facing the
conductor (central), my 1st violins are on the listener's left. The section
occupies a portion of the overall soundscape from a point approximately 2
metres to the left of the conductor out to an outside-left position
approximately 10 metres from the conductor, with 8 desks (2 players per
desk) and about 4 metres deep at the section's deepest point, in the shape
of a wedge, more or less. That's the pan width of the section.

Now, as I understand it, a metre of distance represents approximately 3ms of
delay. So, taking the leading edge of the section across the stage as
'zero', the sound of the first violin players furthest from the front of the
stage should, in theory, arrive about 12ms later than that of those at the
front. (I know this is approximate only; I sat as a player in orchestras for
some years, and understand the instinctive timing compensation that goes
on.) Using the ears, and experimenting, this actually translates to about
6ms before the sound becomes unrealistic, using layered violin samples, both
small section and solo. (Highly subjective, I know, but I only have my own
experience as a player and composer to fall back on here.)
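For what it's worth, the arithmetic above can be sketched in a few lines. This
is just my reading of the 3ms-per-metre figure (speed of sound ~343 m/s) turned
into delay samples at an assumed 48kHz session rate; the function names are my
own, not from any tool:

```python
# Distance-to-delay arithmetic: ~343 m/s speed of sound gives
# roughly 3 ms of extra arrival time per metre of stage depth.
SPEED_OF_SOUND = 343.0   # metres per second
SAMPLE_RATE = 48000      # assumed session rate, samples per second

def depth_to_delay_ms(depth_m):
    """Extra time-of-flight, in ms, for a desk sitting depth_m
    behind the 'zero' leading edge of the section."""
    return depth_m / SPEED_OF_SOUND * 1000.0

def delay_in_samples(depth_m, rate=SAMPLE_RATE):
    """The same delay, rounded to whole samples."""
    return round(depth_to_delay_ms(depth_m) / 1000.0 * rate)

# Rear desk of the 1st violins, 4 m deep: ~12 ms on paper...
print(round(depth_to_delay_ms(4.0), 1))   # ~11.7 ms
# ...though by ear, about half of that sounds more natural.
print(delay_in_samples(4.0))
```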

A violin has its own unique characteristics in the distribution of sound
emanating from the instrument. The player sits facing the conductor, and the
bulk of the overall sound goes up at an angle of more or less 30 degrees
towards the ceiling, to a 'point' roughly equivalent to directly over the
listener's right shoulder. Naturally the listener 'hears' this direct sound
most prominently (both with the ears, and via the 'visual perception' he
gains from listening with his eyes). Secondly, the violin also radiates, to
a lesser degree, downwards, and in varying proportions, in a reasonably
'spherical' sound creation model, with the possible exception of the sound
hitting the player's body and those in his immediate vicinity (with other
objects, like stands, sheet music, etc., all playing a part too).

I've experimented with this quite a bit, and the best result seems to come
from a somewhat inadequate, but acceptable, computational model based on
using, you guessed it, experienced orchestral ears.

So I take one 'hall' impulse and apply it to varying degrees, mixed with as
precise a pan model as possible. (I use multiple desks to layer with, more
or less, so there's a reasonably accurate depiction of a panned, placed
section, instead of the usual approach of either shifting the whole section
with a stereo pan, or the inadequate right-channel-down, left-channel-up
method.)
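The per-desk pan model could be sketched something like this — a constant-power
pan law applied to each desk individually, spread across the section's wedge.
The geometry (desks from 2 m to 10 m left of the conductor, a 10 m stage
half-width) is taken from my example above; the pan law and names are my own
assumptions, not any particular plugin's:

```python
import math

# Constant-power per-desk panning: each of the 8 first-violin desks
# gets its own position across the section, rather than panning the
# whole section as one stereo source.
STAGE_HALF_WIDTH = 10.0   # metres from conductor to the outside edge

def pan_gains(x_m):
    """Constant-power (L, R) gains for a source x_m from centre;
    negative x_m means left of the conductor, mapped to pan -1..0."""
    pan = max(-1.0, min(1.0, x_m / STAGE_HALF_WIDTH))  # -1 = hard left
    angle = (pan + 1.0) * math.pi / 4.0                # 0 .. pi/2
    return math.cos(angle), math.sin(angle)            # (left, right)

# Spread the 8 desks evenly from 2 m to 10 m left of the conductor:
desks = [-(2.0 + i * (10.0 - 2.0) / 7) for i in range(8)]
for x in desks:
    left, right = pan_gains(x)
    print(f"desk at {x:5.1f} m: L={left:.2f} R={right:.2f}")
```

The outside desk lands hard left, the inside desk just left of centre, and
every position keeps L² + R² = 1, so the layered desks sum without a level bump
in the middle.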

To make this more complicated (not by intent, I assure you), I'm attempting
to add a degree of pseudo mike bleed from my 1st violins into the cellos
sitting deeper on the stage, and in reduced amounts into the violas and
second violins sitting on the other side of the digital stage.

All of this is with the intent of getting as lifelike a sound as possible
from my digital orchestra.

The questions:

In terms of convolution, can I 'split' a convolution impulse with some sort
of software device, so as to emulate the varying degrees of spherical sound
from instruments as described above?

So: one impulse (I use Jconv by default, as it does a great job, far better
than most GUI-bloated offerings in the commercial world) that can, by way of
sends and returns, be 'split' or manipulated not only in terms of length of
impulse, but fed as 'panned', so as to put more impulse 'up', less impulse
'down', just a twitch of impulse 'forward' of the player, and near enough to
none on the sound going back into the player.

I've written this rather clumsily, but i hope some of you experts may
understand what i'm trying to achieve here.
Can the impulse be split down its middle, separating left from right,
aurally speaking? And if that's possible, can I split the impulse into
'wedges' emulating that sphere I wrote of, more or less?
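One crude way I can imagine doing this with existing tools: run several
convolution instances (e.g. several Jconv sends), each fed a differently
weighted copy of the same hall impulse, and pan/place each return separately.
The wedge names and weights below are invented for illustration — this is not a
Jconv feature, just a sketch of the routing idea:

```python
# 'Splitting' one hall impulse into directional wedges: one weighted
# convolution per wedge, each routed to its own placed return.
# Weights loosely follow the violin radiation described above.
WEDGE_WEIGHTS = {
    "up":      1.0,    # bulk of the violin's sound
    "forward": 0.25,   # just a twitch toward the listener
    "down":    0.5,    # lesser, downward energy
    "back":    0.05,   # near enough to none into the player
}

def convolve(signal, impulse):
    """Plain direct convolution (fine for a short demo impulse)."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def wedge_outputs(dry, hall_ir):
    """One weighted convolution per wedge; each result would go to a
    differently panned/placed return bus."""
    return {name: convolve(dry, [w * h for h in hall_ir])
            for name, w in WEDGE_WEIGHTS.items()}

# Toy example: a single click through a 3-tap 'hall'.
outs = wedge_outputs([1.0, 0.0, 0.0], [1.0, 0.6, 0.3])
print(outs["up"])     # the full impulse shape
print(outs["back"])   # the same shape, 20x quieter
```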

If there's a way to do this, then I'm all ears, as my mike bleed experiments
suffer from a 'generic' impulse per section affecting everything to the same
degree, including the instruments bled in. I should note here, this is not
about gain, but about a wedge of impulse, cut out of the overall chunk, that
represents a 'window' or pan section of the whole.
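If the 'wedge' is taken as a time window cut out of the impulse, extracting it
is straightforward — keep only the slice between two arrival times, with short
fades so the cut edges don't click. The helper and its parameters here are my
own, not part of Jconv:

```python
# Cut a 'window' out of an impulse response: zero everything outside
# samples start..end, with linear fades at each cut edge.
def cut_window(impulse, start, end, fade=8):
    """Return a copy of `impulse` keeping only samples start..end,
    faded in/out over `fade` samples at each edge."""
    out = [0.0] * len(impulse)
    for i in range(start, min(end, len(impulse))):
        g = 1.0
        if i < start + fade:
            g = (i - start + 1) / fade     # fade in
        elif i >= end - fade:
            g = (end - i) / fade           # fade out
        out[i] = impulse[i] * g
    return out

ir = [1.0] * 100                   # stand-in hall impulse
window = cut_window(ir, 30, 70)    # the 'metre of ribbon'
print(sum(1 for s in window if s > 0))   # non-zero region: 40 samples
```

The trimmed impulse then loads into its own convolver instance, representing
just that pan section of the whole.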

This might seem somewhat pedantic in recording terms, but I'm trying to
build a model per 'hall', and create half a dozen templates to represent
different-size ensembles in different halls.
So my small string section hall template might be a small church or
performance room, like a library or dancing hall. I would then only need to
be creative with where I put my 'human' soloists, as emulators of slightly
different interpretations of where notes start and finish, velocity
variation, etc....., and not have to reconstruct a convoluted model each
time, for......my particular orchestra. (Tongue firmly in cheek here.)

Any help would be appreciated. I'm open to suggestions of placement
convolution models, etc, but would prefer to use Jconv, as it works all day
every day, and would be a constant across all templates.

I suppose an analogy for the chunk of impulse idea would be to stretch a
ribbon across a stage, and cut a metre out of the middle. That metre would
be the bit i'd use, as a portion of the whole, in an aural soundscape, to
manipulate, or place, instruments, to a finer degree, in the attempt to
create a more realistic '3d' effect for the listener. That metre along with
other cut out sections of the impulse soundscape could help me introduce a
more....'human' element to a layered instrument section.

I still rue the day orchestral sample devs decided to record sections at a
time. This would have been so much easier if they'd recorded a number of
desks for each section instead.

Alex.

p.s. I'm having a modest degree of success using Ardour sends as a mike
bleed template. A lot of work, and a lot of sends, but it's slowly coming
together, and has reached the stage where I no longer think it's
impossible.....

_______________________________________________
Linux-audio-user mailing list
Linux-audio-user@email-addr-hidden
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-user
Received on Sun Jan 4 04:15:01 2009
