Subject: Re: [linux-audio-dev] EVO spec 0.0.1
From: larry d byrd (larrybyrd_AT_juno.com)
Date: Thu Jul 20 2000 - 00:41:44 EEST
Okay, I'm going to try to further explain what I mean by the made-up term
GESTURE MODELING... and again, these are higher-level functions that won't
help in development at this point, but just something to keep in mind.
This one's for you, Richard...!! *BIG BUCK-TOOTHED GRIN*
Gigasampler has something called behavior sampling... *THAT'S A JOKE*
(NOTE: You could do the same type of thing with a conventional sampler
like the K2000.
They simply throw in the ability to name your controller assignments
differently than the names of your keymaps.)
Here is a scenario for building awesome sample instruments that mimic
instrument techniques.
THE GOAL IS TO BE ABLE TO MIMIC EITHER A PARTICULAR MUSICIAN'S PLAYING
STYLE OR A PARTICULAR INSTRUMENT'S TECHNIQUE IN REAL-TIME.
(Not such a new idea, but it has many problems to be addressed.)
Take a famous guitarist like "STEVIE RAY VAUGHAN", who in this case we
need to go dig up... "NO DISRESPECT INTENDED"... and record him with his
own unique sound and style.
Now, the whole idea is to be able to model this man's original sound and
techniques.
You would have sample keymaps for each technique, and controllers
assigned to complement the technique that is being replicated.
Say we have one keymap for his regular picked notes mapped across the
range, including velocity switching for this technique. (That's a lot of
damn samples there, but we can handle that.)
Now, in Stevie's style he tends to pick at varying distances from the
bridge, which is one of the unique elements that makes him sound like he
does.
Well, we need to sample another keymap of regular picked notes at varying
velocities in varying positions from the bridge, to have samples that
somewhat mimic this particular technique. (Well, give me a month and I
will have it somewhat completed?)
Now Stevie is being difficult, because he also likes to apply pinch
harmonics to his notes at varying distances from the bridge.
So now we have:
1. PICKED SAMPLES (SAY 6 VELOCITIES PER NOTE):
We just happen to be sampling in one out of seven playing positions on
the guitar, so we only sample, say, three octaves (12 * 3 * 6 = 216
samples).
Now we want to be able to mimic picking distances from the bridge.
We will sample new keymaps in three discrete positions.
So now we have 216 * 3 = 648 samples, if we keep every keymap with the
same number of velocities and pitches.
Then there are the pinch harmonics we need to sample as well... (Stevie's
really starting to stink by now.)
So now we multiply 648 samples [the total number of samples that make up
this element] by 2 [the number of techniques we are up to], and we have
1,296 samples total for that instrument. We could add more techniques at
this point, like finger picking, slide guitar, tap-ons, etc.
But we need to get another 40 GB hard drive ready before we do that.
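The sample-count bookkeeping above can be sketched in a few lines, just to show how quickly the multiplication blows up as techniques are added (the constant names here are made up for illustration, not from any sampler's API):

```python
# Hypothetical sample-count estimate for the keymap scheme described above.
NOTES_PER_OCTAVE = 12
OCTAVES = 3            # we only sample three octaves of the range
VELOCITY_LAYERS = 6    # six velocities per note
BRIDGE_POSITIONS = 3   # three discrete picking distances from the bridge
TECHNIQUES = 2         # regular picked notes + pinch harmonics

picked = NOTES_PER_OCTAVE * OCTAVES * VELOCITY_LAYERS   # one keymap: 216
per_technique = picked * BRIDGE_POSITIONS               # with positions: 648
total = per_technique * TECHNIQUES                      # both techniques: 1296
print(picked, per_technique, total)
```

Every new dimension (position, technique) multiplies the total, which is why the disk-space joke isn't really a joke.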
So let's go map out those samples...
Without the auto-mapping features I suggested, at this point we need to
schedule in some vacation time so that we can stay up all night for the
next week, staring at zero-crossings, mapping samples to their velocity
ranges and key ranges, and applying control sources to each technique, so
that I can perform the instrument in real-time without standing on one
leg with a mouth full of breath controllers.
Well, I might do that anyway... <G>
The point is that physical modeling is not at the point of emulating a
person's playing style. But I can play Stevie's wave file and, well...
there you have him.
But if we are going to get even close to making this instrument sound
convincing, we need at least the number of samples I was talking about
above, or more.
Unless we come up with a magic DSP that can fill in the gaps between,
say, bridge position one and bridge position three... In graphics we can
morph to recreate this effect.
Can we not apply the same to audio? (I know, it's like me saying "can we
clone a human".)
We probably can, but would you want to...
But it is, for me, the missing link between physical modeling and
sampling.
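As a very rough sketch of what such a "gap-filling" morph DSP might do, here is a naive spectral interpolation between two equal-length sample frames (say, the same note recorded at bridge positions one and three). This is my own toy illustration, not a real morphing algorithm; `morph_frames` and its crude phase handling are assumptions:

```python
import numpy as np

def morph_frames(a, b, t):
    """Naively morph between two same-length audio frames.

    Blends the magnitude spectra by factor t in [0, 1]; phase is
    crudely taken from whichever source dominates. A real morpher
    would do proper phase interpolation and spectral-peak matching.
    """
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    mag = (1 - t) * np.abs(A) + t * np.abs(B)   # interpolated magnitudes
    phase = np.angle(A if t < 0.5 else B)       # phase from dominant source
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(a))
```

At t = 0 you get frame a back, at t = 1 frame b, and in between something that at least has the in-between spectrum, which is the "bridge position two" we never sampled.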
How do you implement it?
You got me... I'm sure that's real helpful...
Now, guitar is unique in its physical properties. Most of the time the
note will decay naturally after the initial attack... unless it's
feedback.
So we usually wouldn't ever want to sustain a note and fade through the
velocity layers without re-triggering a note (except volume swells).
But with many instruments, especially wind instruments, we would want to
fade through multiple velocity layers without re-triggering the MIDI
note...
Because we want to hear the change in timbre when we blow harder, not
when we strike a MIDI note harder... I'm still not sure this can be done
with simple crossfading?
It sounds like more of a morphing thing... Maybe I'm wrong?
If we can crossfade between multiple keymaps without re-triggering, then
we still need a DSP function that we can apply at the sample level... Then
we can fade between the morphs that the DSP created for us, and that
should be okay.
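The "fade through velocity layers without re-triggering" part, at least, is straightforward to sketch: map a continuous controller (breath, mod wheel) to equal-power gains over the layers, so exactly two adjacent layers are sounding at any moment. The function name and the 0.0-1.0 controller convention here are my assumptions, not any sampler's actual interface:

```python
import math

def layer_gains(ctrl, n_layers):
    """Map a continuous controller value (0.0-1.0) to per-layer gains.

    Crossfades between the two adjacent velocity layers with an
    equal-power (cos/sin) curve, so total perceived loudness stays
    roughly constant while the timbre shifts.
    """
    pos = ctrl * (n_layers - 1)          # continuous position across layers
    lo = min(int(pos), n_layers - 2)     # lower of the two active layers
    frac = pos - lo                      # 0.0 = all lower, 1.0 = all upper
    gains = [0.0] * n_layers
    gains[lo] = math.cos(frac * math.pi / 2)       # lower layer fading out
    gains[lo + 1] = math.sin(frac * math.pi / 2)   # upper layer fading in
    return gains
```

Whether a plain crossfade like this sounds convincing is exactly the open question above; between distant layers the timbres may be too different, which is where the morphing DSP would have to take over.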
Even better would be a function that can transpose elements of one sample
to another.
I think this is comparable to raytracing in graphics...?
Is that possible with audio signals?
I know we can match frequencies from one mix and apply them to another...
I have a plug-in that does that. But that's not enough... We need
waveshaping and such.
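For what it's worth, the frequency-matching trick that plug-in does can be approximated crudely: measure the magnitude spectrum of the reference, and scale the source's spectrum toward it while keeping the source's own phase. This is a toy single-frame version (a real matching EQ works on smoothed, long-term averages); `match_spectrum` is a name I made up:

```python
import numpy as np

def match_spectrum(src, ref):
    """Crudely impose ref's magnitude spectrum onto src.

    Keeps src's phase, so the 'performance' stays src's while the
    tonal balance is pulled toward ref. eps guards against division
    by near-zero bins.
    """
    S, R = np.fft.rfft(src), np.fft.rfft(ref)
    eps = 1e-12
    gain = (np.abs(R) + eps) / (np.abs(S) + eps)   # per-bin correction
    return np.fft.irfft(S * gain, n=len(src))
```

As the text says, this kind of magnitude matching is not enough on its own; it says nothing about transients, waveshape, or how the player excites the string.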
Maybe it's possible to integrate physical modeling algorithms with
waveforms, so that you could take any sample set of waves and manipulate
them in the same way as computer-generated waveforms? You would have
different phymod algorithms for different instruments.
Well, I look forward to explaining this again soon... <G>
MIKE
This archive was generated by hypermail 2b28 : Wed Jul 19 2000 - 23:14:30 EEST