Subject: Re: [linux-audio-dev] News about sequencers (not my own though!)
From: David Olofson (audiality_AT_swipnet.se)
Date: Tue Jan 18 2000 - 18:39:21 EST


On Tue, 18 Jan 2000, David Slomin wrote:
> David Olofson wrote:
>
> > Kind of one extra dimension... I've thought some about this audio +
> > MIDI integration thing, but haven't been able to think of any perfect
> > solution. It's just too many dimensions for a flat screen! :-)
>
> Actually, just representing something as simple as MIDI data is
> too many dimensions for a flat screen: start time, duration, pitch,
> velocity, release velocity, channel, patch, etc, etc, etc. For PEGS,
> I actually came up with a way to display all of these at once in
> a novel but easy to get used to manner. Other than start-time being
> the X axis, any other parameter can be set to render using any of
> the following means: First, the obvious ones: y-axis, color, and
> pen shape (ie: a plus sign instead of a circular dot). Then, a
> clever thing I call "spikes", which are rays drawn from the
> event's central dot. The angle of the ray tells which parameter it
> represents, and the length tells the value. For instance, try the
> following:
>
>         |
> |       |
> *------ *---
>
> If duration was the East spike and amplitude was the North spike,
> the second note would be twice as loud but half as long as the
> first. This gives you eight additional easily recognisable
> dimensions (eight from the compass rose) which can be displayed on
> a flat screen.
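
Neat - and cheap to render, too! A rough sketch of how the spikes
could be drawn, assuming some draw_line() toolkit primitive (invented
for the occasion), with screen Y growing downwards:

    /* One ray per compass direction: E NE N NW W SW S SE.
     * value[i] == 0 means "no spike" for that parameter. */
    static const int dx[8] = { 1,  1,  0, -1, -1, -1,  0,  1 };
    static const int dy[8] = { 0, -1, -1, -1,  0,  1,  1,  1 };

    extern void draw_line(int x1, int y1, int x2, int y2);

    void draw_spikes(int x, int y, const int value[8])
    {
        int i;
        for(i = 0; i < 8; ++i)
            if(value[i] > 0)
                draw_line(x, y, x + dx[i] * value[i],
                          y + dy[i] * value[i]);
    }
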
>
> > The point is that I don't want to be forced to shuffle my audio data
> > between different systems (samplers, HDR,...) with very similar
> > hardware and engines, just because they have slightly different UIs.
>
> If the programs are lightweight enough and play well together, why
> not?

Moving megabytes of data from one application to another can never be
lightweight... Unless it all happens in real time - which is where the
communication/integration part of MuCoS comes in! :-) Using that
interface, they could all use the same database and streaming daemon
(which is required anyway, unless you have one drive per application),
and integrate seamlessly on the engine level. Then the UI is the only
remaining problem - still not trivial, but at least there's a point
in trying, as it'll actually work the way it was intended for a
change. (That is, no spending minutes moving/converting data, even if
that could be automated...)
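
Just to make the idea a bit more concrete, here's a wild sketch of
what the client side of such a shared database/streamer could look
like. Note that none of these calls exist in MuCoS - the names are
invented on the spot:

    #include <stddef.h>

    typedef int mcs_clip;  /* handle into the shared sample database */

    /* Open by name - no copying; every application sees the same data. */
    mcs_clip mcs_open(const char *name);

    /* Pull frames from the streaming daemon, in real time. */
    size_t mcs_read(mcs_clip c, float *buf, size_t start, size_t frames);

    /* Shared preview waveforms ("peak data"), as min/max pairs. */
    size_t mcs_peaks(mcs_clip c, float *minmax, size_t start, size_t blocks);

    void mcs_close(mcs_clip c);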

> I personally can't think of a way to combine an event sequencer
> and a signal editor into a single interface in a way that doesn't
> make one or the other completely useless. Then again, maybe I just
> haven't thought long enough on the subject.

Hmm... This is where it's getting complicated! *hehe*

Displaying waveforms inside piano roll style clips could be a useful
visual representation, but it hardly turns the sequencer into a
useful audio editor. (The point is to be able to see the contents
of drum loop samples and similar stuff without switching to an audio
clip editor where you can't see the non-audio events.)
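
(For the record, drawing those in-clip waveforms from peak data is
trivial - a sketch, assuming one min/max pair per pixel column and
the hypothetical draw_line() primitive from above:)

    /* Paint one vertical min/max line per pixel column of the clip
     * rect at (x, y), w by h pixels, sample values in [-1, 1]. */
    void draw_clip_wave(int x, int y, int w, int h, const float *minmax)
    {
        int col;
        for(col = 0; col < w; ++col)
        {
            int top = y + (int)((1.0f - minmax[2 * col + 1]) * 0.5f * h);
            int bot = y + (int)((1.0f - minmax[2 * col]) * 0.5f * h);
            draw_line(x + col, top, x + col, bot);
        }
    }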

<brainstorm>

There probably has to be a separation between the sequencer and a
detailed audio editor... Unless the audio clips automatically turn
into audio editing applets when zoomed in close enough. :-) A fast
zoom in/out method would be needed; perhaps

        Zoom In shortcut key down -->
                Show rect representing the boundaries of
                the new view

        Zoom In shortcut key up -->
                Zoom in the area indicated by the rect

        Zoom Out shortcut key press -->
                Return to previous scale and position
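
In code, the "return to previous scale and position" part is just a
stack of views; a minimal sketch (view_t and the fixed depth are made
up for illustration):

    typedef struct { double start, length, top, height; } view_t;

    static view_t stack[16];       /* previous views, innermost last */
    static int depth = 0;
    static view_t view;            /* the currently displayed area */

    void zoom_in(view_t rect)      /* key released over the rect */
    {
        if(depth < 16)
            stack[depth++] = view; /* remember where we were */
        view = rect;               /* show the indicated area */
    }

    void zoom_out(void)            /* Zoom Out key press */
    {
        if(depth > 0)
            view = stack[--depth]; /* previous scale and position */
    }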

The audio editor would preferably be an embedded external
application, so that the user can pick his/her favourite audio
editor. It should, however, support the same audio database as the
sequencer, so that preview waveforms ("peak data") can be reused.
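
(Peak data is cheap to build, by the way - one min/max pair per block
of samples; a sketch, with the block size left to the database:)

    #include <stddef.h>

    /* Write one (min, max) pair into 'minmax' per 'block' frames. */
    void build_peaks(const float *smp, size_t frames,
                     float *minmax, size_t block)
    {
        size_t i, j;
        for(i = 0; i < frames; i += block)
        {
            float lo = smp[i], hi = smp[i];
            for(j = i; j < i + block && j < frames; ++j)
            {
                if(smp[j] < lo) lo = smp[j];
                if(smp[j] > hi) hi = smp[j];
            }
            *minmax++ = lo;
            *minmax++ = hi;
        }
    }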

The whole point of this insane idea is that the user shouldn't lose
track of where he/she is in the project when doing some audio
editing.

Note that this is probably only useful for sequencer/sampler or
tracker style editing. Using the Y axis for pitch isn't very handy
when dealing with normal HDR tracks... Reserve a part of the Y axis
display range for an area with a different mapping? Split the window
vertically? Just make the HDR events use the Y axis for display only,
and use some other dimension for pitch? They all sound like hacks to
me, but anything to avoid this jumping around between uncoordinated
views! It can't get much worse than that... (OK, you *can* use a
sampler with a tiny LCD display for editing, together with a
stopwatch style UI sequencer, instead of an HDR! :-)

Next thought: this could replace the pop-up-a-new-window thing when
editing events that contain other events. (What I used to call sub
clips - what about "container events"?) If you zoom in far enough (or
just double-click for direct full zoom), the clips turn into embedded
edit windows. One problem: what about Y axis = pitch when zooming
works like that? Wouldn't that make it practically impossible to edit
multiple container events at once? You could multiselect and tile the
edit windows or something...
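
(Data-wise, a container event doesn't have to be anything fancy -
just an event that owns a list of child events; struct and field
names invented on the spot:)

    struct event
    {
        unsigned start, duration;  /* in sequencer ticks */
        int pitch, velocity;
        struct event *children;    /* non-NULL for container events */
        struct event *next;        /* next sibling on this track */
    };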

</brainstorm>

Ok, that's it for now...

//David

.- M u C o S -------------------. .- A u d i a l i t y ----------------.
| A Free/Open Multimedia        | | Rock Solid, Hard Real Time,        |
| Plugin & Integration Standard | | Low Latency Signal Processing      |
`------> www.linuxdj.com/mucos -' `--> www.angelfire.com/or/audiality -'
.- D a v i d O l o f s o n -------------------------------------------.
| Audio Hacker, Linux Advocate, Open Source Advocate, Singer/Composer |
`------------------------------------------> audiality_AT_swipnet.se -'

