Re: [linux-audio-dev] ardour, LADSPA, a marriage


Subject: Re: [linux-audio-dev] ardour, LADSPA, a marriage
From: Tom Pincince (stillone_AT_snowcrest.net)
Date: Tue Nov 14 2000 - 22:25:14 EET


>>This added functionality raises a question. Do you plan to develop
>>ardour into a monolithic DAW application?
>
>That has always been the goal, where "monolithic" is actually a
>"monolithic+plugin" system.

Just checking :)

Onward and upward, Pro Tools and beyond...

The reason I asked is that, while many open projects rally around a
single program, LAD looks to me like a collection of individuals
developing and sharing their personal knowledge of digital audio
through the process of writing their own apps. Ardour clearly has the
potential to unify the efforts of developers aiming for a complete
Linux DAW. I wanted to consider how to make ardour as accessible to
other developers as possible, and a modular approach would be nice if
it could be done within acceptable latency limits. In any case, it is
always a good idea to pause for a moment when standing on the
threshold of greatness. Let the fun begin!

EDLs

The move to standardize EDLs for platform-neutral portability is
definitely a work in progress. There is no reason why a standard
metasession file can't include plugin configuration as well as EDL
info (an EDL here meaning just track-specific, time-referenced
pointers that define regions of soundfiles for playback). Don't wait
for the audio industry to expand the definition, though, since such
developments assume proprietary plugins as the norm; those will not
reside on every DAW and can't be shared because of copy protection. So
a portable metasession file can only include elements that can safely
be predicted to be resident on all DAWs. A fully GPL-compliant system
has no such limitations. This could become a large point in favor of
Linux DAW adoption.
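To make the EDL definition above concrete, here is a minimal sketch of
what one EDL entry might carry; all of the names and field choices are
hypothetical, assuming frame-based offsets into the soundfiles:

#include <stdint.h>

/* One time-referenced pointer into a soundfile, placed on a track. */
typedef struct {
    char     soundfile[256]; /* path to the source soundfile         */
    uint32_t track;          /* destination track in the session     */
    uint64_t file_offset;    /* first frame to read from the file    */
    uint64_t length;         /* region length, in frames             */
    uint64_t timeline_pos;   /* placement on the session timeline    */
} edl_region;

A metasession file would then be a list of such entries plus, as
argued above, the plugin configuration per track.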

Busses

The great thing about digital audio is that there is no such thing as
a buss. Once the audio is fetched from the soundcard or hard disk,
there are just static blocks of data residing in buffers. Audio is not
pushed from source to destination; it is pulled. Maybe this suggests
that instead of determining the destination of a channel's output, we
should think in terms of the input for the next item in the signal
path. A buss then becomes a list of pointers to buffer locations whose
contents are added frame by frame, possibly with individual scale
factors (aux and monitor busses generally use individual scale
factors; subs and main busses generally don't), as in the sketch
below. Of course the GUI can look like the signal is source-driven,
since this is the prevailing human model (electronic circuit theory
was originally modeled on positively charged particles flowing from
source to ground, and so now we have holes).
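Here is a minimal sketch of that pull model, with all names
hypothetical: a buss is just a list of pointers to already-rendered
source buffers, each with a scale factor, summed frame by frame:

#include <stddef.h>

typedef struct {
    const float *buf;  /* source buffer, rendered upstream            */
    float        gain; /* per-source scale factor (1.0f for subs/mains) */
} buss_input;

/* Pull one block: mix every input into `out`, frame by frame. */
void buss_pull(float *out, size_t nframes,
               const buss_input *in, size_t ninputs)
{
    for (size_t f = 0; f < nframes; f++) {
        float acc = 0.0f;
        for (size_t i = 0; i < ninputs; i++)
            acc += in[i].buf[f] * in[i].gain;
        out[f] = acc;
    }
}

An aux or monitor buss would fill in per-source gains; a sub or main
buss would simply leave every gain at 1.0f.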

Tom


