Re: [alsa-devel] Re: [linux-audio-dev] Re: Toward a modularization of audio component


Subject: Re: [alsa-devel] Re: [linux-audio-dev] Re: Toward a modularization of audio component
From: Scott McNab (sdm_AT_fractalgraphics.com.au)
Date: Fri May 04 2001 - 10:02:09 EEST


Abramo Bagnara wrote:
>
> Paul Davis wrote:
> >
> > >In the past few days I've thought a lot about whether, how and why
> > >the current alsa-lib PCM model is not suitable for an
> > >all-in-one-process model (which seems the only way to write
> > >applications with pseudo-RT needs).
> > >
> > >The result is the following simple proposal.
> > >
> > >The short summary is:
> > >- every audio producer (a synthesizer, an mp3 file decoder, a hardware
> > >capture stream, a .wav reader, etc.) has a PCM capture API
> > >- every audio consumer (an mp3 file encoder, a .voc writer, a hardware
> > >playback stream, etc.) has a PCM playback API
> > >
> > >Following this model, for example, an FX plugin is an audio producer
> > >that takes as input one or more audio producers.
> >
> > It's a cool (though not new) model.

I was actually thinking of this a while back and I don't think it would
be too hard to do. You can even get the kernel to do the scheduling in
the usual manner (although the jury is still out as to whether this
would indeed be the best idea...).
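
For illustration, the producer model might look something like this in C
(all the names here are made up for the sketch -- this is not an existing
ALSA interface). The key point is that an FX plugin exposes the same
read() entry point as any other producer, so producers compose:

    #include <stddef.h>

    /* Every producer exposes a capture-style read() entry point that
     * fills 'buf' with up to 'frames' frames of interleaved samples
     * and returns the number of frames actually produced. */
    struct audio_producer {
            size_t (*read)(struct audio_producer *self,
                           short *buf, size_t frames);
    };

    /* An FX plugin is itself a producer: it pulls from an upstream
     * producer, processes the data, and hands it on. */
    struct gain_fx {
            struct audio_producer producer; /* must stay first member */
            struct audio_producer *source;  /* upstream producer */
            float gain;
            unsigned channels;
    };

    static size_t gain_read(struct audio_producer *self,
                            short *buf, size_t frames)
    {
            /* valid cast: 'producer' is the first member of gain_fx */
            struct gain_fx *fx = (struct gain_fx *)self;
            size_t n = fx->source->read(fx->source, buf, frames);
            size_t i;

            for (i = 0; i < n * fx->channels; i++)
                    buf[i] = (short)(buf[i] * fx->gain);
            return n;
    }

A synthesizer, an mp3 decoder or a hardware capture stream would each
provide their own read() in the same way, and a consumer just calls
read() on whatever chain of producers it has been handed.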

I guess one possible implementation would be to have a "dummy" ALSA
kernel driver which would basically hang together as follows (a rough
FIFO-based sketch of the blocking behaviour follows the list):

 - application opens this device with whatever number of channels and
   data size it supports (e.g. 44.1 kHz/stereo/16-bit or whatever) and
   this call always succeeds.

 - the writing application then sits blocked, since no one is reading
   from it yet.

 - a second application now queries the dummy device, finds that it has
   a stereo pair at 44.1 kHz (or whatever is there), and then opens it
   for reading as any normal PCM device (using alsa-lib plugins if
   necessary for format conversion etc.).

 - the second application does a read to get audio data and blocks.
   Meanwhile the first application is unblocked by the kernel and able
   to write a chunk of data, which the second application then receives
   when its call completes and it is unblocked.
   
   If this second application is in turn writing data to the sound card
   then obviously it will block waiting for the hardware. Thus the whole
   chain ends up paced by the one final output (be it sound card, disk,
   network, whatever).
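
To make the blocking behaviour concrete, here is roughly what the
writer side could look like if you stand a plain FIFO in for the dummy
device (a crude user-space analogy, not the proposed driver itself):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define FRAMES   1024
    #define CHANNELS 2

    int main(void)
    {
            short buf[FRAMES * CHANNELS];
            int fd;

            /* The FIFO stands in for the dummy PCM device. */
            mkfifo("/tmp/dummy-pcm", 0600);

            /* open() blocks until a reader shows up, just as the
             * writing application above sits blocked until someone
             * opens the stream for reading. */
            fd = open("/tmp/dummy-pcm", O_WRONLY);
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            for (;;) {
                    /* ... synthesize/decode FRAMES frames into buf ... */

                    /* write() blocks whenever the pipe is full, so the
                     * kernel only wakes us once the reader has drained
                     * some data -- the "scheduling" falls out of
                     * ordinary blocking I/O. */
                    if (write(fd, buf, sizeof buf) < 0) {
                            perror("write");
                            break;
                    }
            }
            close(fd);
            return 0;
    }

The reader side is just the mirror image: open the FIFO for reading and
do blocking read()s, writing the result to the sound card (or wherever).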

Obviously there are heaps of issues that would need to be resolved to
realise this, but it does seem like an extremely simple and scalable
solution (especially for SMP).

Having the kernel do all the scheduling for us seems like a fairly
natural and desirable thing to do (it saves us having to write a
scheduler, for one!), and is also a good idea since we would benefit
directly from whatever optimisations the kernel folk come up with.

Anyway, just my 2c worth. Please ignore me if this has been discussed
before... I only monitor these lists from a distance these days.

Cheers,
Scott


