
Subject: Re: [gst-devel] Re: [linux-audio-dev] Toward a modularization of audio component
From: Erik Walthinsen (omega_AT_temple-baptist.com)
Date: Fri May 04 2001 - 09:24:17 EEST


On Thu, 3 May 2001, Paul Davis wrote:

> some people have observed to me that GStreamer is strong on the
> component model, graphing, and avoiding jitter at the end of the
> stream, but weak on latency characteristics.

GStreamer has effectively no bearing on the latency of a pipeline. The
application that constructs the pipeline, and the elements used in the
pipeline, have everything to do with it. If you build a pipeline that has
an osssrc connected directly to an osssink, you'll get poor performance,
because the defaults are tuned (by OSS, not the element) for 'consumer'
apps. If you proceed to set the parameters to tell OSS to minimize
latency, and osssrc to read some small number of samples at a time, you
suddenly have a very low-latency graph. That's because the only thing
GStreamer does is enable the actual data flow, in a very direct manner
(put the buffer pointer in a holding pen, *cothread* switch in the worst
case, call on the stack best case, hand the buffer to peer element).
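
For the OSS side of that, the knob is the fragment size. Roughly what an
element like osssrc has to do underneath (a minimal sketch of the raw OSS
calls, not actual element code; the fragment and format values here are
just illustrative):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>
    #include <unistd.h>

    int main(void)
    {
        int fd, frag, fmt, channels, rate, n;
        short buf[64 * 2];           /* 64 frames of 16-bit stereo */

        fd = open("/dev/dsp", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* Ask for small fragments: upper 16 bits = max fragments
         * queued (4), lower 16 bits = log2 of the fragment size
         * (2^8 = 256 bytes = 64 16-bit stereo frames). */
        frag = (4 << 16) | 8;
        if (ioctl(fd, SNDCTL_DSP_SETFRAGMENT, &frag) < 0)
            perror("SNDCTL_DSP_SETFRAGMENT");

        fmt = AFMT_S16_LE; channels = 2; rate = 48000;
        ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);
        ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);
        ioctl(fd, SNDCTL_DSP_SPEED, &rate);

        /* Read one small fragment at a time, as osssrc would. */
        n = read(fd, buf, sizeof(buf));
        printf("read %d bytes\n", n);

        close(fd);
        return 0;
    }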

If the application decides to use explicit pthreads (as opposed to *just*
cothreads), then you're going to have latency problems, unless you have a
kernel that likes you (and even then....).

Several recent changes make it very easy to construct a new 'scheduler'
that decides what order to run the elements in. If you have a large mixer
pipeline with the same chain of elements for each of N channels, you then
have a decision to make, depending on whether you're more interested in
keeping the code or the data in cache. If you're dealing with 64 samples
at a time and lots of effects, you want to run the same effect across all
N channels before moving on to the next effect, so the effect's code stays
in cache. If you're dealing with a few effects and a large buffer, you may
want to do it the other way around (push a channel's worth of data through
all its effects, then go on to the next channel), so the data stays hot
instead.
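
To make the two orderings concrete, here's a rough sketch (plain C, with
a made-up process() function standing in for an element; nothing
GStreamer-specific):

    #define N_CHANNELS 64
    #define N_EFFECTS   8
    #define N_SAMPLES  64

    /* Hypothetical per-buffer effect function (stands in for an
     * element's chain function). */
    static void process(int effect, int channel,
                        float *buf, int nsamples)
    {
        /* ... DSP work on buf ... */
    }

    static float bufs[N_CHANNELS][N_SAMPLES];

    /* Code-in-cache ordering: run one effect across every channel,
     * then move to the next effect.  Wins with small buffers and
     * many effects, since each effect's code stays hot. */
    static void schedule_by_effect(void)
    {
        int e, c;
        for (e = 0; e < N_EFFECTS; e++)
            for (c = 0; c < N_CHANNELS; c++)
                process(e, c, bufs[c], N_SAMPLES);
    }

    /* Data-in-cache ordering: push one channel's buffer through all
     * of its effects, then go on to the next channel.  Wins when the
     * buffer is large, since the data stays hot instead. */
    static void schedule_by_channel(void)
    {
        int e, c;
        for (c = 0; c < N_CHANNELS; c++)
            for (e = 0; e < N_EFFECTS; e++)
                process(e, c, bufs[c], N_SAMPLES);
    }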

The point is that this is very pluggable, and if you run into a situation
that causes some grief, you can either enhance the existing scheduler, or
write a new one.

> imagine: an audio interface is asking you to supply 64 frames of audio
> data at a time, generated/mutated/processed on demand. you've got
> 1.3ms (best case) in which to do it, maybe just half that.

This is the application I had in mind when I originally started the
GStreamer project some 2 years ago. I want to eventually have a
fully-automated mixing surface that controls a computer (and vice versa),
in order to do *live* mixing. When someone steps up to a specific mic, a
script fires that lowers all the other channels, for instance. Large
pipelines for this kind of stuff are going to be the norm, and that's why
I built GStreamer the way I did.

> how do you plead?
Not guilty.

It's been suggested that I write up a quick program to measure the latency
of a buffer passing through some number of elements. I'll do that soon and
get back with some numbers; I want to get the LADSPA wrapper fully
functional first, then take measurements with the Pentium TSC.
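
For the curious, reading the TSC from userspace is a one-instruction
affair. Something like this is what I have in mind (the clock rate below
is an assumed placeholder; you'd read the real one from /proc/cpuinfo):

    #include <stdio.h>

    /* Read the Pentium time stamp counter (GCC inline asm, ix86
     * only).  The "=A" constraint returns the 64-bit count in
     * edx:eax. */
    static inline unsigned long long rdtsc(void)
    {
        unsigned long long t;
        __asm__ __volatile__("rdtsc" : "=A" (t));
        return t;
    }

    int main(void)
    {
        unsigned long long start, end;
        double mhz = 500.0;  /* assumed CPU clock, MHz */

        start = rdtsc();
        /* ... push one buffer through the pipeline here ... */
        end = rdtsc();

        /* cycles divided by cycles-per-microsecond gives us. */
        printf("latency: %llu cycles (%.2f us at %.0f MHz)\n",
               end - start, (double)(end - start) / mhz, mhz);
        return 0;
    }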

TTYL,
    Omega

      Erik Walthinsen <omega_AT_temple-baptist.com> - System Administrator
        __
       / \ GStreamer - The only way to stream!
      | | M E G A ***** http://gstreamer.net/ *****
      _\ /_

