Re: [linux-audio-dev] multitrack and editor separate?

Subject: Re: [linux-audio-dev] multitrack and editor separate?
From: Josh Green (jgreen_AT_users.sourceforge.net)
Date: Sun Oct 28 2001 - 01:16:12 EEST


On Sat, 2001-10-27 at 11:47, Paul Davis wrote:
>
> ok, i've been thinking about it for the last couple of days. here's my
> proposal for ardour's handling of such an idea:
>
> * user selects "edit region externally"
> * relevant data is written to a file
> * external editor is forked using some environment or config variable
> (e.g. "snd %f")
> * when editor process exits, check the exit status:
> if 0: take the new file
> ...
>
> no. this is pretty bad. as you note, you really need to be able to
> tell ardour that the file has been changed before exiting, so that you
> can hear the result "in the mix".
>
> defining Yet Another API for inter-editor communication? i'd rather
> just write or merge an existing wave editor.
>
> --p
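
For reference, here is a minimal sketch of the fork-and-wait flow described in the quoted proposal, assuming a C host program and a user-configured command template like "snd %f" (the function and names are illustrative, not from ardour):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Fork an external editor on a temp file and act on its exit status.
 * editor_cmd would come from user config, e.g. "snd %f". */
static int edit_region_externally(const char *editor_cmd, const char *tmpfile)
{
    int status;
    pid_t pid = fork();

    if (pid < 0)
        return -1;

    if (pid == 0) {
        /* child: substitute %f with the temp file, hand the result to a shell */
        char cmd[1024];
        const char *pct = strstr(editor_cmd, "%f");
        if (pct)
            snprintf(cmd, sizeof(cmd), "%.*s%s%s",
                     (int)(pct - editor_cmd), editor_cmd, tmpfile, pct + 2);
        else
            snprintf(cmd, sizeof(cmd), "%s %s", editor_cmd, tmpfile);
        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        _exit(127);  /* exec failed */
    }

    if (waitpid(pid, &status, 0) < 0)
        return -1;

    /* exit status 0 means "take the new file", anything else means discard */
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : 1;
}

The obvious hole, as noted above, is that nothing happens until the editor exits, so there is no way to hear the edit "in the mix" while it is being made.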

I think an API for embedding an audio editor would be nice, though. Since
an EDL-based wave editor is responsible for rendering its own edits, why
not use something like JACK to access the rendered audio? I guess you
would need some control stream from the client program to the wave
editor, so maybe something like this:

client <---- JACK audio stream <----- Wave Editor
   |                                       ^
   +------------Control stream-------------+
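
As a rough illustration of the audio side, here is a minimal sketch of the client end: one JACK input port that the wave editor's rendered output would be connected to. The calls are written against the JACK C API as currently documented (which postdates this idea and is still in flux), and the client and port names are made up:

#include <stdio.h>
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *editor_in;

/* Rendered audio from the wave editor arrives here once per JACK cycle. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in = jack_port_get_buffer(editor_in, nframes);
    (void)in;   /* mix it, monitor it, draw it, etc. */
    (void)arg;
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("smurf", JackNullOption, NULL);
    if (client == NULL) {
        fprintf(stderr, "could not connect to a JACK server\n");
        return 1;
    }

    editor_in = jack_port_register(client, "editor_return",
                                   JACK_DEFAULT_AUDIO_TYPE,
                                   JackPortIsInput, 0);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);

    /* ... run the GUI, talk to the wave editor over the control stream ... */
    for (;;)
        pause();
}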

The client program would hand off the audio to the wave editor in some
defined fashion (a file name, shared memory, or a pipe for smaller wave
files). The wave editor would then be responsible for rendering that audio
on demand. There would need to be some API for the control stream,
containing commands for Play, Stop, Seek, Speed, etc. I guess there would
also need to be a status stream, or some other way to get the current
length of the edited audio back to the client. When the user is done
editing, the audio could be rendered to disk and control handed back to
the client.

I don't really see how the EDL data could be stored along with the audio
file, unless the file was always rendered by the wave editor. In that
case, perhaps the wave editor could provide a way to save its EDL as a
byte stream, which the client could then store in whatever manner it
chooses.
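
To make that concrete, here is a minimal sketch of what the message set might look like. It is entirely hypothetical (not an existing protocol) and assumes a pipe or socket between client and editor:

#include <stdint.h>

/* Hypothetical wire format for the control and status streams: a fixed
 * header followed by an optional payload, sent over a pipe or socket. */
enum editor_command {
    ED_PLAY     = 1,
    ED_STOP     = 2,
    ED_SEEK     = 3,  /* payload: uint64_t frame position          */
    ED_SPEED    = 4,  /* payload: float playback ratio             */
    ED_SAVE_EDL = 5   /* ask the editor for its EDL as a byte blob */
};

enum editor_status {
    ED_LENGTH_CHANGED = 100, /* payload: uint64_t new length in frames        */
    ED_EDL_DATA       = 101, /* payload: serialized EDL, opaque to the client */
    ED_DONE           = 102  /* editing finished, control returns to client   */
};

struct editor_msg {
    uint32_t type;   /* one of the enum values above        */
    uint32_t length; /* number of payload bytes that follow */
    /* payload follows on the stream */
};

The ED_SAVE_EDL / ED_EDL_DATA pair is one way the editor could hand its EDL back as an opaque byte stream for the client to store, and ED_LENGTH_CHANGED covers getting the current length back over the status stream.
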
Does this make any sense? I still see some things missing from it, like
being able to show a waveform display of the edited file in the client
program. For my application (Smurf Sound Font Editor) I've always wanted
to hand editing of the actual sample data off to an external editor, but I
still need to provide a way for the user to set a loop on the sample. I
remember discussions about coming up with a standard way of displaying
waveforms; I seem to remember them always ending without resolution.
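
If the same hypothetical message set were extended a little, both of those gaps could ride on the existing control stream, e.g.:

/* Hypothetical extensions: waveform peaks for display in the client,
 * and loop points set by the client on the sample being edited. */
enum editor_command_ext {
    ED_GET_PEAKS = 6, /* payload: uint64_t start, uint64_t end, uint32_t bins */
    ED_SET_LOOP  = 7  /* payload: uint64_t loop start, uint64_t loop end      */
};

enum editor_status_ext {
    ED_PEAK_DATA = 103 /* payload: one float min/max pair per requested bin */
};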

-- 
    Josh Green
    Smurf Sound Font Editor (http://smurf.sourceforge.net)

