Re: [linux-audio-dev] multitrack and editor separate?


Subject: Re: [linux-audio-dev] multitrack and editor separate?
From: Paul Davis (pbd_AT_Op.Net)
Date: Sun Oct 28 2001 - 19:44:15 EET


>I think an API for embedding an audio editor would be nice though. Since
>an EDL based wave editor is responsible for rendering its own edits, why
>not use something like JACK to access the rendered audio? I guess you

JACK handles real-time streaming with sample-accurate sync. It's not a
general protocol for shipping audio from A to B.

>Does this make any sense? I still see some things missing from this.

Frankly, no. Once again, there are two different levels of operation:
working with specific samples, and working in an audio sequencer.
Take a look at the main interface in DP, ProTools, Samplitude, SONAR,
Nuendo and the rest. They are not (primarily) waveform editors. The
data that each "track" represents is not (necessarily) stored in a
single file. The same is true even of each "region" in each "track".
There is no simple 1:1 correspondence between data appearing in the
track and bytes on a disk. Forcing this correspondence would break all
kinds of other cool things about nonlinear, nondestructive editing.
If you don't force it, then everything that operates on the data has
to understand the nature of the "track" (i.e. a playlist or EDL style
thing), and there has to be a communication protocol between all of
them (not just point-to-point, since there might be 3 things operating
on the same data at the same time) to notify of changes and handle
mutual exclusion.
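To make the playlist/EDL idea concrete, here is a minimal sketch of a
track as a list of regions, each mapping a span of the track timeline
onto a span of some source file. All names (region_t, locate) are
invented for illustration; this is not Ardour's actual data model.

```c
#include <stddef.h>

/* A region: a window onto part of a source file, placed somewhere
 * on the track timeline. The same file can back many regions. */
typedef struct {
    int  source_id;   /* which source file holds the audio */
    long file_offset; /* first frame used within that file */
    long position;    /* where the region starts on the track */
    long length;      /* length in frames */
} region_t;

/* Map a track-timeline frame to (source_id, frame-within-file).
 * Returns 1 on a hit, 0 if the frame falls in silence between
 * regions -- i.e. there is no 1:1 track-to-disk correspondence. */
int locate(const region_t *regions, int n, long frame,
           int *source_id, long *file_frame)
{
    for (int i = 0; i < n; i++) {
        const region_t *r = &regions[i];
        if (frame >= r->position && frame < r->position + r->length) {
            *source_id  = r->source_id;
            *file_frame = r->file_offset + (frame - r->position);
            return 1;
        }
    }
    return 0;
}
```

With two regions drawing on the same file out of order, reading the
track in timeline order visits the file's bytes in a completely
different order, and gaps between regions render as silence, which is
why anything operating on the track must understand the playlist rather
than just opening a file.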

Compare these issues with the relatively simple task of taking the
code to an existing waveform editor and using it within a system like
Ardour as a dedicated "region editor".

--p



This archive was generated by hypermail 2b28 : Sun Oct 28 2001 - 19:42:31 EET