Re: [linux-audio-dev] ardour, LADSPA, a marriage


Subject: Re: [linux-audio-dev] ardour, LADSPA, a marriage
From: David Olofson (david_AT_gardena.net)
Date: Tue Nov 14 2000 - 11:40:30 EET


On Mon, 13 Nov 2000, Paul Barton-Davis wrote:
> >...if you plan on developing a monolithic software daw. You could go
> >modular, allowing ardour to stay as ardour. This approach would require
> >the identification of the main components of a hardware studio,
> >developing equivalent software modules, and developing a standard
> >communication protocol between modules.
>
> The moment someone defines such a protocol, and it can work with 5 msec
> end-to-end latencies, I promise you that Ardour will be converted to
> use it. Until that time, there is no way of doing what you describe
> with individual processes.

The *protocol* could be done, but we'll need a fully preemptive RTOS
kernel to run it on... :-)

(They have existed for quite some time, but they're non-Free, they
have small user bases, lack drivers and applications, and they're
not very efficient if you try to use them as all-purpose OSes. I'm
not expecting Linux to ever become one, even if it eventually becomes
preemptive. Reliable peak latencies of a few µs are way out of reach,
at least for systems that even slightly resemble PCs or
workstations.)

> >because there is less communication overhead. However, new developments
> >like 5.1 mixing are harder to implement if they must be integrated into
> >a pre-existing monolithic application. Assuming that hardware
> >technology development will continue to advance, communication overhead
> >will be a non-issue (and for modest track counts it probably is
> >already).
>
> This is not true. OS overhead has increased as chips have gotten
> faster, not decreased. I spent 4-1/2 yrs working in a research group at
> UWashington that was focused on figuring out why this happened and
> how to reverse it (ask me more if you really want to know). Anyway,
> it's not track counts that are the problem, it's the number of elements
> in the processing chain. Each time we have to do a context switch, we
> deterministically add time, and we run the risk of actually blowing up
> due to unforeseen scheduling decisions on the part of the kernel.

There is a shortcut when it comes to streaming, actually. However, we
were talking about *chaining* processes here, as opposed to
parallel processing, so the shortcut does not apply. (And even when
it does apply, hardware or kernel driver mixing is still required so
as not to increase the latency - there must never be more than one
thread between an input and an output.)

> >The modular approach will allow users to assemble virtual
> >studios, with individual modules plugging into each other in a fully
> >compatible mix-and-match way just like current analog technology.
>
> I'm afraid that for the foreseeable future, the closest we can get to
> this are plugins with their own GUIs.

And that's not entirely horrible. Indeed, the DSP parts of all plugins
will have to run inside a single thread, but the user interfaces can
be "protected" from that mess.

There will always have to be lock-free, thread-safe communication
between user interfaces and plugins (a sketch of one way to do that
follows the list below), and there are basically three ways to
achieve it:

1) Custom solutions for every plugin.

2) Standard API that allows GUI modules to be loaded and executed by
   hosts.

3) Standard API that allows external applications to connect to
   plugins running inside other applications.

Well, there may be more variants, but I think these three
demonstrate some fundamentally different ways of designing a system
of this kind.
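
For 2) or 3), the obvious building block on the plugin side is a
single-writer/single-reader ring buffer, so the GUI can never block the
audio thread. Something like this (just a sketch - the names are made
up, not from LADSPA or any host, and real code would also need memory
barriers on SMP machines):

#define RING_SIZE 256			/* must be a power of two */

typedef struct {
	int   port;			/* which control port to change */
	float value;			/* new value from the GUI */
} ui_event;

typedef struct {
	ui_event buf[RING_SIZE];
	volatile unsigned write_pos;	/* written only by the GUI thread */
	volatile unsigned read_pos;	/* written only by the DSP thread */
} ui_ring;

/* GUI thread: returns 0 and drops the event if the ring is full. */
int ring_push(ui_ring *r, const ui_event *e)
{
	if (r->write_pos - r->read_pos >= RING_SIZE)
		return 0;
	r->buf[r->write_pos & (RING_SIZE - 1)] = *e;
	r->write_pos++;
	return 1;
}

/* DSP thread: returns 0 when there is nothing to read; never blocks. */
int ring_pop(ui_ring *r, ui_event *out)
{
	if (r->read_pos == r->write_pos)
		return 0;
	*out = r->buf[r->read_pos & (RING_SIZE - 1)];
	r->read_pos++;
	return 1;
}

Each index is written by exactly one thread, so neither side ever waits
on the other: a slow or hung GUI can at worst drop control changes, but
it can never stall the audio thread.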

My take on it:

Use a "streamable" command protocol that can be used for
applications (IPC) as well as plugins ("event system").

The point of that is that, below the top-level APIs (calls for
sending/receiving events between applications, or
handling/generating events inside plugins), both applications and
plugins use the same interface. This interface allows more or less
direct connection of IPC event queues to plugin event ports, which
means that hosts only have to implement the basic infrastructure -
not a massive bunch of API calls to/from plugins.
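
Just to make that concrete (a sketch only - this is not an existing
MuCoS or LADSPA structure, and all field names are made up): an event
that carries its own timestamp and size can sit in a shared-memory or
pipe queue between applications, or be handed straight to a plugin's
event port by the host, without any per-event marshalling:

#include <stdint.h>

typedef struct {
	uint32_t frame;		/* timestamp in frames, relative to the block */
	uint16_t target;	/* destination port/parameter on the plugin */
	uint16_t type;		/* control change, note, transport, ... */
	uint32_t size;		/* payload size in bytes, so a reader can
				 * skip events it does not understand */
	/* 'size' bytes of packed payload follow, so a whole queue of
	 * events can be streamed over a pipe or dropped into shared
	 * memory exactly as it is laid out here */
} event_header;

A host then only needs one mechanism: pull events off an IPC queue and
append them, unchanged, to the right plugin's event port before running
the plugin for the next block.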

*still trying to keep the task of implementing the above from
drowning in the scheduling overhead...*

//David

.- M u C o S -------------------------.  .- David Olofson -----------.
|         A Free/Open Source          |  |       Audio Hacker        |
|   Plugin and Integration Standard   |  |      Linux Advocate       |
|                 for                 |  |   Open Source Advocate    |
|      Professional and Consumer      |  |          Singer           |
|             Multimedia              |  |        Songwriter         |
`-----> http://www.linuxdj.com/mucos -'  `---> david_AT_linuxdj.com -'

