Re: [linux-audio-dev] News about sequencers (not my own though!)


Subject: Re: [linux-audio-dev] News about sequencers (not my own though!)
From: David Olofson (audiality_AT_swipnet.se)
Date: Wed Jan 19 2000 - 20:25:56 EST


On Wed, 19 Jan 2000, Fredrik wrote:
> I grew up on trackers, where you
> have precise control over every parameter at every time. Sequencers
> are built on the assumption that you specify parameter changes
> beforehand in the synth, with EG's and LFO's, and just trigger those
> with a simple "play note at pitch X and velocity Y now"-event. I'd
> like to see some new ideas.

And in addition to designing a new kind of GUI, I think the MIDI
protocol has to be dropped to really rival the trackers in the areas
where they still have significant advantages. (Such as building complex
loops with lots of samples and effects - programming that on a
sequencer + a sampler is nothing but frustrating, even with a good
editor...)

This is what I'll do on top of the MuCoS event system, rather than
throwing in a "MIDI emulation layer", like they did in VST 2.0...
Haven't looked into the details, but I think the concept of using
pitch to refer to notes should be removed - it made sense with a
simple protocol like MIDI, but with more direct control than that, it
turns into a nightmare.

Haven't put much effort into this yet, but my current idea is that the
protocol should be kind of object oriented instead. That is, rather
than just starting a note and using the pitch of that note for further
references, use a "voice index", somewhat like a MIDI channel
controlling a monophonic patch. (Or, if you prefer, like a tracker
channel.) Voice index 0 could be used as a code for "affect all
voices", to support the normal way MIDI handles controllers.

The big logical difference from MIDI is actually just that you *always*
tell the synth/sampler/whatever what you want to control (a specific
voice, or everything on the channel), rather than the normal case
being "control everything".

Note: Voice allocation isn't affected by this protocol (other than
that you can explicitly say what notes to steal, if you like) - it
can still be handled by the synth. These protocol voices are "voice
contexts" rather than real synth voices.

With this system you can

 * Send controller events to individual notes on the same channel

 * Send events to adjust parameters *before* starting a note

 * Eliminate the monophonic/polyphonic distinction in patches
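
To make the idea concrete, here's a minimal sketch in Python (the names
Channel, ALL_VOICES and the parameter dictionaries are mine, just for
illustration - this is not MuCoS code) of how voice-indexed events could
cover all three points:

```python
ALL_VOICES = 0  # voice index 0 = "affect all voices on the channel"

class Channel:
    """One channel holding a set of voice contexts (not real synth voices)."""

    def __init__(self):
        self.voices = {}  # voice index -> dict of parameter values

    def send(self, voice, param, value):
        """Set a parameter on one voice context, or on all of them."""
        if voice == ALL_VOICES:
            # MIDI-style channel-wide controller.
            for params in self.voices.values():
                params[param] = value
        else:
            # A voice context is created on first reference, so parameters
            # can be set *before* the note is started.
            self.voices.setdefault(voice, {})[param] = value

    def note_on(self, voice, pitch, velocity):
        """Start a note in a voice context; pitch is just another parameter."""
        params = self.voices.setdefault(voice, {})
        params.update(pitch=pitch, velocity=velocity, gate=True)

ch = Channel()
ch.send(1, "cutoff", 0.3)         # adjust a parameter before note-on
ch.note_on(1, pitch=60, velocity=100)
ch.note_on(2, pitch=64, velocity=90)
ch.send(2, "cutoff", 0.8)         # per-note controller on the same channel
ch.send(ALL_VOICES, "pan", 0.5)   # affect all voices, MIDI-controller style
```

Note that nothing here distinguishes monophonic from polyphonic patches -
a monophonic part is just a part that only ever uses one voice index.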

There is a lot of work to do on this, but I thought it could be worth
pointing out that MIDI is just a very restrictive communication
protocol - not the definition of how synths and samplers should work!

//David

.- M u C o S -------------------. .- A u d i a l i t y ----------------.
| A Free/Open Multimedia        | | Rock Solid, Hard Real Time,        |
| Plugin & Integration Standard | | Low Latency Signal Processing      |
`------> www.linuxdj.com/mucos -' `--> www.angelfire.com/or/audiality -'
.- D a v i d O l o f s o n --------------------------------------------.
| Audio Hacker, Linux Advocate, Open Source Advocate, Singer/Composer  |
`-------------------------------------------> audiality_AT_swipnet.se -'



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:23:26 EST