Re: [LAU] placement.

From: alex stone <compose59@email-addr-hidden>
Date: Sun Jan 04 2009 - 16:16:09 EET

Jörn, thanks for the feedback. I've just tried one of Fons' AMB plugins, and it
repeatedly crashed Ardour, so I think I'd better get that sorted out before
going further.

As for mic bleed, it's not a full multi-mic blend, but more a 'hint' of signal
from adjacent instruments. It's artificial, IMHO, to completely remove any
resonant blend between adjacent instruments, and I've already had a modicum of
success in getting a 'more lifelike' response using this method. I'm also
using orchestral samples here, not a live orchestra, so I'm keen to explore
just how far down the 'real' road we can get before limitations prevail.
As an aside, the VSL orchestral sample library team has already started a not
dissimilar project called MIR, so the concept is not just mine, or even
theirs... :)

I knew I was being optimistic when I asked about cutting an impulse into
chunks, so I'm not surprised at all.

Now to get this AMB problem sorted out.
Alex.

On Sun, Jan 4, 2009 at 1:32 PM, Jörn Nettingsmeier <
nettings@email-addr-hidden-hochschule.de> wrote:

> Alex Stone wrote:
> > OK, this might be a bit of a curly question; I don't know if what I'm
> > asking is even possible, or valid.
> >
> > The subject is placement, and pertains to orchestral recording. (My own
> > work, composed in the box with LinuxSampler, from MIDI in RG, and
> > recorded in Ardour.)
> >
> > I'd like to place my instruments as close as possible to an orchestral
> > setup, in terms of recorded sound. That is, once I've recorded, I'd like
> > to use convolution and other tools to 'correctly' place instruments
> > within the overall soundscape.
> >
> >
> > Example:
> >
> > With the listener sitting 10 metres back from the stage and facing the
> > conductor (central), my 1st violins are on the listener's left. They
> > occupy a portion of the overall soundscape from a point approximately
> > 2 metres to the left of the conductor out to an outside-left position
> > approximately 10 metres from the conductor, with 8 desks (2 players per
> > desk) about 4 metres deep at the section's deepest point, in the shape
> > of a wedge, more or less. That's the pan width of the section.
> >
> > Now, as I understand it, a metre represents approximately 3 ms, so taking
> > the leading edge of the section across the stage as 'zero', the first
> > violin players furthest from the front of the stage should, in theory,
> > sound about 12 ms later than those at the front. (I know this is
> > approximate only; I sat as a player in orchestras for some years, and
> > understand the instinctive timing compensation that goes on.) Using my
> > ears and experimenting, this actually translates to about 6 ms before the
> > sound becomes unrealistic, using layered violin samples, both small
> > section and solo. (Highly subjective, I know, but I only have my own
> > experience as a player and composer to fall back on here.)
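> >
> > As a minimal Python sketch of that distance-to-delay arithmetic (assuming
> > a speed of sound of about 343 m/s; the desk depths are made-up,
> > illustrative values):
> >
> >   SPEED_OF_SOUND = 343.0  # metres per second, at roughly 20 degrees C
> >
> >   def delay_ms(distance_m):
> >       """Milliseconds for sound to travel distance_m metres."""
> >       return distance_m / SPEED_OF_SOUND * 1000.0
> >
> >   # Hypothetical desk depths behind the section's leading edge, in metres.
> >   desk_depths = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0]
> >
> >   for desk, depth in enumerate(desk_depths, start=1):
> >       print(f"desk {desk}: {delay_ms(depth):.1f} ms")  # 4 m -> ~11.7 ms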
>
> Make sure you are using different samples for each desk if you use
> individual delays; otherwise you will introduce comb-filtering artefacts.
> But I doubt these delays will have any perceptible benefit.
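>
> To illustrate the comb-filtering point, a small Python sketch (assuming
> numpy is available): summing a signal with a delayed copy of itself
> notches out frequencies at odd multiples of 1/(2*delay), which using a
> different sample per desk avoids.
>
>   import numpy as np
>
>   tau = 0.006                     # 6 ms inter-desk delay, in seconds
>   f = np.linspace(0, 2000, 2001)  # frequencies to inspect, in Hz
>
>   # Signal plus a delayed copy of itself: H(f) = 1 + e^(-j*2*pi*f*tau),
>   # so the magnitude dips to zero wherever cos(pi*f*tau) crosses zero.
>   mag = np.abs(1 + np.exp(-2j * np.pi * f * tau))
>
>   # Deep notches near odd multiples of 1/(2*tau): ~83 Hz, 250 Hz, ...
>   print(f[mag < 0.05])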
>
> > A violin has its own unique characteristics in the distribution of sound
> > emanating from the instrument. The player sits facing the conductor, and
> > the bulk of the overall sound goes up at an angle of more or less
> > 30 degrees towards the ceiling, to a 'point' almost directly over the
> > listener's right shoulder. Naturally the listener 'hears' the direct
> > sound most prominently (both with his ears and with the 'visual
> > perception' he gains from listening with his eyes). Secondly, the violin
> > also radiates downwards, to a lesser degree and in varying proportions,
> > in a reasonably 'spherical' sound model, with the possible exception of
> > the sound hitting the player's body and those in his immediate vicinity
> > (with other objects, like stands and sheet music, all playing a part
> > too).
> >
> > I've experimented with this quite a bit, and the best result seems to
> > come from a somewhat inadequate, but acceptable, computational model
> > based on, you guessed it, ears and orchestral experience.
> >
> > So I take one 'hall' impulse and apply it to varying degrees, mixed with
> > as precise a pan model as possible. (I use multiple desks to layer with,
> > more or less, so there's a reasonably accurate depiction of a pan-placed
> > section, instead of the usual approach of either shifting the whole
> > section with a stereo pan, or the inadequate right-channel-down,
> > left-channel-up method.)
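> >
> > One way to make that per-desk layering concrete, as a Python sketch (all
> > placements and numbers here are hypothetical, just to show the idea):
> > each desk gets its own pan position, arrival delay, and reverb-send
> > level, rather than one stereo pan over the whole section.
> >
> >   import math
> >
> >   SPEED_OF_SOUND = 343.0  # m/s
> >
> >   # Hypothetical 1st-violin desks: (pan angle in degrees, -45 = hard
> >   # left, 0 = centre; depth behind the section's leading edge, metres)
> >   desks = [(-40, 0.0), (-35, 0.5), (-30, 1.0), (-25, 1.5),
> >            (-20, 2.0), (-15, 2.5), (-12, 3.0), (-10, 4.0)]
> >
> >   for i, (pan_deg, depth_m) in enumerate(desks, start=1):
> >       theta = math.radians(pan_deg + 45.0)  # map -45..45 deg to 0..pi/2
> >       left, right = math.cos(theta), math.sin(theta)  # constant power
> >       delay = depth_m / SPEED_OF_SOUND * 1000.0       # arrival delay, ms
> >       send = 0.2 + 0.1 * depth_m     # deeper desks get a wetter send
> >       print(f"desk {i}: L={left:.2f} R={right:.2f} "
> >             f"delay={delay:.1f}ms send={send:.2f}")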
>
> Phew! Ambitious!
>
> > To make this more complicated (not by intent, I assure you), I'm
> > attempting to add a degree of pseudo mic bleed from my 1st violins into
> > the cellos sitting deeper on the stage, and in reduced amounts into the
> > violas and second violins sitting on the other side of the digital
> > stage.
> >
> > All of this is with the intent of getting as lifelike a sound as
> > possible from my digital orchestra.
>
> Why simulate mic bleed? I thought you were after creating a "true"
> orchestral sound, not one including all the unwanted multi-miking
> artefacts... I'd rather concentrate on the instruments and the room.
>
> > The questions:
> >
> > In terms of convolution, can I 'split' a convolution impulse with some
> > sort of software device, so as to emulate the varying degrees of
> > spherical sound from instruments as described above?
>
> You could get a B-format response from every place in the orchestra
> (with all the other musicians sitting there, for damping), and then
> convolve it with the violin (which would also have to be shoehorned into
> B-format, simulating the desired radiation pattern).
> But if you have the room and the orchestra, you might as well let them
> play your stuff ;)
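>
> In case it helps, a minimal Python sketch of what that four-channel
> convolution would look like (assuming numpy/scipy; the random arrays are
> stand-ins for a measured B-format response and a dry violin track):
>
>   import numpy as np
>   from scipy.signal import fftconvolve
>
>   sr = 48000
>   # Stand-in for a measured B-format impulse response (W, X, Y, Z);
>   # in practice this would come from a Soundfield or TetraMic capture.
>   ir_wxyz = 0.01 * np.random.randn(4, int(1.5 * sr))
>
>   violin = np.random.randn(sr)  # stand-in for one second of dry violin
>
>   # Convolving the mono source with each IR channel yields a B-format
>   # (first-order Ambisonic) version of the track, placed in the room.
>   bformat = np.stack([fftconvolve(violin, ir) for ir in ir_wxyz])
>   print(bformat.shape)  # (4, len(violin) + IR length - 1)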
>
> > So: one impulse (I use Jconv by default, as it does a great job, far
> > better than most GUI-bloated offerings in the commercial world) that
> > can, by way of sends and returns, be 'split' or manipulated not only in
> > terms of impulse length, but fed 'panned', so as to put more impulse
> > 'up', less impulse 'down', just a twitch of impulse 'forward' of the
> > player, and next to none on the sound going back into the player.
>
> I'm not sure I understand 100%, but you might want to look into
> Ambisonics for that. Ardour can do it just fine; all you need to do is
> bypass the panners and use Fons' AMB plugins instead. As to target
> format, you could use UHJ stereo. If you desire 5.1, you might want to
> consider working in second-order Ambisonics.
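>
> For reference, the standard first-order (B-format) encoding equations
> such panners implement, as a small Python sketch (the example angle is
> arbitrary; positive azimuth is to the listener's left in the usual
> convention):
>
>   import math
>
>   def encode_bformat(sample, azimuth_deg, elevation_deg=0.0):
>       """First-order Ambisonic (B-format) encode of one mono sample."""
>       az = math.radians(azimuth_deg)
>       el = math.radians(elevation_deg)
>       w = sample / math.sqrt(2.0)               # omni component
>       x = sample * math.cos(az) * math.cos(el)  # front/back
>       y = sample * math.sin(az) * math.cos(el)  # left/right
>       z = sample * math.sin(el)                 # up/down
>       return w, x, y, z
>
>   # E.g. a 1st-violin desk roughly 30 degrees to the listener's left:
>   print(encode_bformat(1.0, azimuth_deg=30.0))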
>
> > I've written this rather clumsily, but I hope some of you experts
> > understand what I'm trying to achieve here.
> > Can the impulse be split down its middle, separating left from right,
> > aurally speaking? And if that's possible, can I split the impulse into
> > 'wedges', emulating that sphere I wrote of, more or less?
>
> No, I don't think so. You will need a spatial impulse response. The
> simplest way to obtain one is to use a Soundfield microphone (or a
> TetraMic, for that matter).
>
> > If there's a way to do this, then I'm all ears, as my mic-bleed
> > experiments suffer from a 'generic' impulse per section affecting
> > everything to the same degree, including the instruments bled in. I
> > should note here that this is not about gain, but about a wedge of
> > impulse, cut out of the overall chunk, that represents a 'window' or
> > pan section of the whole.
>
> I still don't understand why you're after "mic bleed".
>
> > I suppose an analogy for the chunk-of-impulse idea would be to stretch a
> > ribbon across a stage and cut a metre out of the middle. That metre
> > would be the bit I'd use, as a portion of the whole, in an aural
> > soundscape, to manipulate or place instruments to a finer degree, in
> > the attempt to create a more realistic '3D' effect for the listener.
> > That metre, along with other cut-out sections of the impulse soundscape,
> > could help me introduce a more... 'human' element to a layered
> > instrument section.
>
> Yeah, well, *if* we had a way of capturing a sound field completely over
> such a vast area, we would all be very happy indeed. It can be recreated
> using wave field synthesis or very-high-order Ambisonics, but currently
> there is no way of capturing it, other than measuring a set of
> individual points in that sound field.
>
>
> HTH,
>
> Jörn
