Re: [linux-audio-dev] acid, linux


Subject: Re: [linux-audio-dev] acid, linux
From: 'Stefan Westerfeld' (stefan_AT_space.twc.de)
Date: Thu Dec 02 1999 - 16:35:20 EST


   Hi!

On Thu, Dec 02, 1999 at 03:04:02PM -0000, Richard W.E. Furse wrote:
> Perhaps we should talk - MNLib (http://www.muse.demon.co.uk/mn_index_html )
> has a VERY powerful internal architecture for configuring and linking
> objects together. I've not published source mostly because I ought to look
> into publishing a paper or two.

So I can't really see how it is built or how aRts compares to it.

> The approach you are describing works fine
> until the object structures start to become complex. A trivial example is a
> multi-channel mixer for which the number of channels is configurable, a
> more complex example I've been working with is an acoustic space model.
> Also, would you want to allow object grouping? Polyphonic synthesisers
> built from component objects? A few other things handled by the approach
> I've been using are feedback support and latency compensation.

Well, yes, I want to do all of that. The current aRts already provides something
like a "multi-channel mixer" with a configurable channel count, though the
current implementation is a hack ;)

Object composition (which seems to be what you mean with your last two points)
should be supported as well. For instance, you can draw the flow graph for
one voice of a synthetic instrument (such as a synthetic string) and
parametrize it by velocity, pitch and whatever other parameters you may want
to control from a GUI, a MIDI effect, or anything else. Then you take some
kind of routing object, which starts per-voice instantiations of that
structure as required while playing an incoming MIDI stream (see the sketch
below).
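
To make the idea concrete, the routing side could look roughly like this -
just a sketch in plain C++, not the actual aRts code; VoiceStructure,
VoiceRouter and the pitch formula are only there for illustration:

    #include <map>
    #include <cmath>

    // One instance of this corresponds to one voice of the instrument,
    // i.e. one copy of the flow graph you drew, parametrized by pitch
    // and velocity.
    struct VoiceStructure {
        float frequency, velocity;
        VoiceStructure(float f, float v) : frequency(f), velocity(v) {}
        void calculateBlock(unsigned long cycles) { /* run the voice's modules */ }
    };

    // The routing object: starts a per-voice instantiation on note-on and
    // frees it on note-off (a real router would handle release phases too).
    class VoiceRouter {
        std::map<int, VoiceStructure*> voices;   // key: MIDI note number
    public:
        void noteOn(int note, int velocity) {
            float freq = 440.0f * (float)std::pow(2.0, (note - 69) / 12.0);
            voices[note] = new VoiceStructure(freq, velocity / 127.0f);
        }
        void noteOff(int note) {
            std::map<int, VoiceStructure*>::iterator i = voices.find(note);
            if (i != voices.end()) { delete i->second; voices.erase(i); }
        }
        void calculateBlock(unsigned long cycles) {
            for (std::map<int, VoiceStructure*>::iterator i = voices.begin();
                 i != voices.end(); ++i)
                i->second->calculateBlock(cycles);   // mix all active voices
        }
    };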

The current aRts does that already. Feedback support has also been in aRts
for quite some time (though it is not yet ported to the MCOP version in CVS).

What you mean by latency compensation isn't quite clear to me. If you mean
timestamping incoming events from the MIDI sequencer and then playing them
at exact times (I mean: late, but every event late by exactly the same
amount) - I've thought of that. It is not in aRts currently, but if somebody
comes up with a plan for implementing it properly and consistently, no
problem.

The main problem seems to be that if your sequencer says "play that at 1.324s
from start", you need an exact source of timing that is identical for all
partners in the communication - which is a problem when MIDI events can only
be scheduled against "now" or the "MIDI clock", while audio can only be
scheduled against the "sample clock", because those are not identical sources
of time.
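
For what it's worth, the conversion itself is trivial once everybody agrees
on one master clock; the hard part is getting them to agree. A sketch,
assuming the audio sample clock is the master and a fixed latency offset
(the function name is made up):

    // Map a sequencer timestamp ("play that at 1.324s from start") onto the
    // sample clock, adding a fixed latency so that every event is late, but
    // late by exactly the same amount.
    unsigned long eventSamplePos(double eventTimeSec, double latencySec,
                                 unsigned long samplingRate)
    {
        return (unsigned long)((eventTimeSec + latencySec) * samplingRate);
    }

    // e.g. eventSamplePos(1.324, 0.050, 44100) == 60593 samples into the stream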

> Such issues might be considered too complex to bother about for
> standardised portable interfacing - it's been hard enough in raw C++, so
> fair enough. In which case I'm at least interested at level of providing
> support for 'plugins' using some straightforward interface or other and
> yours sounds sensible - perhaps I should write an aRt skeleton generator
> for MNLib.

As I don't know how MNLib looks internally, I can't comment on that. However,
about complexity:

The idea behind aRts is that you handle the complexity (synchronization,
recursive flow, communication, threading, I/O, scheduling) outside the
plugins, so that the plugin implementations themselves stay really simple.

Using the IDL, the skeletons and so on, I think that goal is achievable.
The plugin writer won't be bothered with those things, yet they will still
work, because the aRts framework takes care of them.
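
Just to illustrate that division of labour (a rough sketch, not the real aRts
scheduler - the real one also has to deal with buffers, synchronization and
so on):

    #include <vector>

    // What a plugin author sees: one virtual function to implement.
    struct Module {
        virtual ~Module() {}
        virtual void calculateBlock(unsigned long cycles) = 0;
    };

    // What the framework does around it: it owns the flow graph, decides the
    // execution order, manages buffers and timing, and just calls the plugins.
    void runOneBlock(std::vector<Module*>& modulesInDataflowOrder,
                     unsigned long cycles)
    {
        for (unsigned int i = 0; i < modulesInDataflowOrder.size(); i++)
            modulesInDataflowOrder[i]->calculateBlock(cycles);
    }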

> I'd prefer a dynamic loading approach to a process switching
> approach as last time I looked process switching wasn't terribly fast
> (well, compared to a virtual function call) although the flexibility to use
> both would be nice.

Yes. The idea is that if you do it with the IDL and middleware layer, you can
decide later whether you want threading, virtual function calls or multiple
processes.

> Intuitively I prefer the idea of handling distributed
> processing using 'bridge' units (shm/TCP) maintained by higher-level
> organisation rather than leaving the OS to handle it - using the latter
> model makes timing information much harder to quantify.

Yes - and that higher-level organisation would be the aRts MCOP layer, which
also makes it possible to replace SHM/TCP with a specially adapted kernel
extension just for aRts, if you really want that. (MidiShare seems to do it
that way, implementing message passing in the kernel, though it wasn't stable
when I tried it.)
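
The key is that the higher-level layer only ever talks to an abstract
connection, so the transport can be swapped without touching it. A sketch of
the idea (not the actual MCOP classes, the names are made up):

    // The layer above only uses this interface; whether bytes travel over
    // TCP, shared memory or a kernel extension is decided at connection setup.
    class Connection {
    public:
        virtual ~Connection() {}
        virtual void send(const char *data, long len) = 0;
        virtual long receive(char *buffer, long maxLen) = 0;
    };

    class TCPConnection : public Connection {
    public:
        void send(const char *data, long len) { /* write() to a socket */ }
        long receive(char *buffer, long maxLen) { /* read() from it */ return 0; }
    };

    class SHMConnection : public Connection {
    public:
        void send(const char *data, long len) { /* push into a shared ring buffer */ }
        long receive(char *buffer, long maxLen) { /* pop from it */ return 0; }
    };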

Regarding the MNLib issue: as I see it, the most sensible thing to do would
be to link MNLib against libmcop (and perhaps libartsflow), if it offers
something that can't be broken up into aRts components. That way you would
have access to the aRts components, flow graphing, scheduling, plugin loading
and so on, while doing whatever magic isn't implementable with the aRts flow
system separately.

If MNLib can be broken into aRts components completely, that would be even
better.

Regarding licensing issues: of course I can't make aRts depend on something
(currently) closed-source like MNLib - I could just as well use Windows and
VST instead ;). If, on the other hand, you want to make MNLib depend on the
aRts infrastructure, that would be no problem; the relevant parts of aRts
are supposed to be LGPLed.

Merging would only be possible if MNLib was free, too.

Switching aRts over to MNLib completely would be an option if getting where I
want to go took more work starting from aRts than starting from MNLib. But as
I can't even look at the code, I can't say anything about MNLib right now.

   Cu... Stefan

> -----Original Message-----
> From: Stefan Westerfeld [SMTP:stefan_AT_space.twc.de]
> Sent: Thursday, December 02, 1999 1:19 AM
> To: MOULET Xavier CNET/DMR/ISS
> Cc: 'linux-audio-dev_AT_ginette.musique.umontreal.ca'
> Subject: Re: [linux-audio-dev] acid, linux
>
> Hi!
>
> On Wed, Dec 01, 1999 at 05:56:56PM +0100, MOULET Xavier CNET/DMR/ISS wrote:
> > [...]
> >
> > Thus, I am eagerly awaiting an implementation of the API discussions (or
> > is there one?), and of course an engine.
> >
> > Are there already one or more good engine foundations for such a thing?
> > Would Quasimodo (without the interface) do the job, or is it too Csound
> > oriented?
> > Raw Esd? THE API (mucows (?)), aRts? xmms?
>
> I can tell you what aRts is doing right now.
>
> The idea I have is to move away from CORBA towards a lighter, consistent
> object model which spreads throughout aRts. The thing will be called
> MCOP, which expands to multimedia communication protocol. But keep in
> mind that it takes care of communication and the object model (the things
> that were handled by CORBA before). One will write things like
>
> interface Synth_ADD {
>     in audio stream signal1, signal2;
>     out audio stream result;
> };
>
> in an idl file, and it will generate C++ skeleton classes, which you
> only need to implement. For Synth_ADD, the implementation could look
> something along these lines:
>
> void SynthAdd_impl::calculateBlock(unsigned long cycles)
> {
>     float *end = result + cycles;
>
>     while(result != end) *result++ = *signal1++ + *signal2++;
> }
>
>
> that's about all you need to write. By the way, you could also have
> something like
>
> interface BinaryOperation {
>     in audio stream signal1, signal2;
>     out audio stream result;
> };
>
> interface Synth_ADD : BinaryOperation {
>     //
> };
>
> which means you get the wonders of polymorphism and inheritance in your
> plugin system.
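>
> To make the skeleton part concrete, this is roughly the kind of class the
> idl compiler spits out - just a sketch, the real generated code contains
> more (dispatching, type registration and so on):
>
> class BinaryOperation_skel {
> public:
>     float *signal1, *signal2;   // input streams, connected by the framework
>     float *result;              // output stream
>     virtual void calculateBlock(unsigned long cycles) = 0;
>     virtual ~BinaryOperation_skel() {}
> };
>
> class Synth_ADD_skel : public BinaryOperation_skel {
> };
>
> // the only handwritten part:
> class SynthAdd_impl : public Synth_ADD_skel {
> public:
>     void calculateBlock(unsigned long cycles);
> };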
>
> The component communication will be handled transparently to the programmer;
> whether it happens in the same process space (linked as a library), in
> another thread, in another process, or on a computer somewhere on the
> network shall be transparent as well.
>
> So using aRts as a library should finally be possible for those who want
> to do that (this is really important for wave editors, for instance). Also,
> people will no longer need to install mico to use aRts. Another advantage
> is that the overhead is much, much lower.
>
> Perhaps other people who are still considering whether they want to do
> something similar could also join in. Basically, writing a middleware
> layer like MCOP is a lot of work. In its current stage of development it
> is more or less as complete as CORBA (at least the relevant parts I used),
> though not everything works fine yet. It supports:
>
> - remote method invocations
> - inheritance
> - multiple inheritance
> - using components transparently as libraries or remotely
> - marshalling
> - defining your own datatypes (e.g. structs and such) in the idl file,
>   such as struct MidiEvent or struct FFTPacket (see the sketch below)
> - a C++ language binding (if you want, write a C implementation of MCOP;
>   at least apps/plugins will then be able to communicate properly)
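>
> For example, an own datatype from the idl file ends up as an ordinary C++
> struct plus generated marshalling code. A sketch (the Buffer stand-in and
> the exact MidiEvent fields are made up here, not the real MCOP code):
>
> #include <vector>
>
> // minimal stand-in for the marshalling buffer, only for this sketch
> struct Buffer {
>     std::vector<unsigned char> data;
>     void writeByte(unsigned char b) { data.push_back(b); }
>     void writeLong(long l) {
>         for (int i = 3; i >= 0; i--)
>             writeByte((unsigned char)((l >> (8 * i)) & 0xff));
>     }
> };
>
> // "struct MidiEvent { long time; byte command, param1, param2; };" in the
> // idl file would become:
> struct MidiEvent {
>     long time;
>     unsigned char command, param1, param2;
>
>     void writeType(Buffer& b) const {     // generated, not handwritten
>         b.writeLong(time);
>         b.writeByte(command); b.writeByte(param1); b.writeByte(param2);
>     }
> };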
>
> From my speed tests, it wins by a factor of three against mico for
> synchronous invocations - and the remaining slowness comes from using a TCP
> transport; rewriting that to shared memory should give another major boost.
> Also, the real gain is that you can support streaming, QoS, event transport,
> etc. in your own communication layer, which was impossible to do with CORBA
> before.
>
> KDE2 will use aRts as its multimedia framework, which also means the sound
> server that will be running for the desktop will be implemented on top
> of it. It will also not necessarily remain an audio-only thing - video,
> for instance, would also be great - but that will take some time.
>
> The engine is partially ported from CORBA, but currently it can't do more
> than a beep (but at least that: fully modular) - also the plugins aren't
> ported yet, and some infrastructure is still completely missing (the Qt GUI
> stuff).
>
> So the dependencies aRts has are reduced to: C++ with the STL (no Qt, no
> Mico, no KDE, no X11; if you really want, not even Unix is required).
>
> The code is available from the KDE CVS (kdemultimedia/arts), which you can
> also get anonymously via CVSup if you like. It is still in its experimental
> phase, and I'll port all of the aRts infrastructure over to MCOP next,
> though that may take a while.
>
> Cu... Stefan
>
> PS: Please - everybody who is now writing something similar - step back a
> second and think about whether you couldn't join aRts instead. What is
> missing?
> --
> -* Stefan Westerfeld, stefan_AT_space.twc.de (PGP!), Hamburg/Germany
> KDE Developer, project infos at http://space.twc.de/~stefan/kde *-

-- 
  -* Stefan Westerfeld, stefan_AT_space.twc.de (PGP!), Hamburg/Germany
     KDE Developer, project infos at http://space.twc.de/~stefan/kde *-


