Re: [linux-audio-dev] those latency numbers


Subject: Re: [linux-audio-dev] those latency numbers
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Sat Mar 23 2002 - 21:15:30 EET


On Saturday 23 March 2002 04:46 am, you wrote:
>
> there is no "threads" mode unless you mean "IPC" mode. that works fine
> at a hardware interrupt time of 64 frames @ 48kHz with at least 3
> clients, at least on some user's machines (other people seem to have
> problems that are hard to identify). "in process" ("plugin") mode is
> close to zero overhead compared to writing a dedicated audio program.

Yes, I meant the IPC mode. I like it a lot that you implemented both modes,
because as an audio developer you are not restricted to a single model while
still getting a nice abstraction layer.
Hoping to get some time to play around with jack soon :-)
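If I understand the in-process ("plugin") mode right, a client boils down to
something like the sketch below (written from memory against the jack headers,
so take the exact names with a grain of salt):

/* minimal sketch of an in-process style client: the engine calls
   process() once per hardware interrupt period (e.g. 64 frames) */
#include <string.h>
#include <unistd.h>
#include <jack/jack.h>

static jack_port_t *out_port;

static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *buf =
        jack_port_get_buffer(out_port, nframes);
    memset(buf, 0, nframes * sizeof(*buf));   /* write silence */
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_new("silence");
    if (client == NULL)
        return 1;
    jack_set_process_callback(client, process, NULL);
    out_port = jack_port_register(client, "out",
                                  JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_activate(client);
    sleep(60);                                /* run for a while */
    jack_client_close(client);
    return 0;
}

The nice part is that the same process() callback could presumably run over the
IPC transport without the client code changing, which I guess is exactly the
abstraction layer I was talking about.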

On a related note: I saw Karl MacMillan's paper:
http://mambo.peabody.jhu.edu/~karlmac/publications/latency-icmc2001.pdf
The results about MacOS X are interesting.
I'm a reader of the German "Keyboards" magazine and they sometimes test
latencies using real-world apps (e.g. VST instruments/Cubase/Logic etc.) on the
various OSes. They say lots of positive things about MacOS X.
I'm not familiar with CoreAudio, so I was wondering what the audio engine
model looks like: is it similar to jack's IPC mode or is it a "plugin" mode?

When the guys from the "Keyboards" magazine test the lowest achievable
latencies under Windows (the latest tests I've read were on a Hammerfall card),
they talk about the "maximum CPU load until overruns occur" (most tests
performed with VST instruments).
In most of these tests this value lies around 50% of CPU load.
I'm just curious why this is so CPU-load dependent (speaking of Windows).
Perhaps it is due to the use of only two buffers, or more probably due to
Windows chewing up a fixed amount of CPU time in high-load situations, which,
added to the 50% of CPU time used by the audio software, causes a CPU overload
that leads to an overrun.
In my latency tests on Linux I always drive the CPU load up to 80-85%, so this
should reflect a pretty worst-case situation. Testing in a 50% CPU load
environment is just cheating. Who wants to waste half of the available CPU
horsepower (1GHz) of a 2GHz P4 system? :-)
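For the record, pushing the load to a given percentage is basically a busy loop
with a duty cycle running next to the audio process, something like this toy
sketch (just to illustrate the idea, not the actual latencytest code):

/* toy CPU load generator: burns roughly LOAD percent of one CPU by
   busy-looping for LOAD ms out of every 100 ms */
#include <sys/time.h>
#include <unistd.h>

#define LOAD 85   /* target load in percent */

static double now_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

int main(void)
{
    for (;;) {
        double start = now_ms();
        while (now_ms() - start < LOAD)
            ;                            /* busy phase */
        usleep((100 - LOAD) * 1000);     /* idle phase */
    }
    return 0;
}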

 
PS: I'm now using Red Hat 7.2 boxes, and I noticed artsd is started by default.
I don't like this that much because it messes up the correct functioning of
some audio apps.
E.g. mplayer AFAIK outputs to OSS (it does not support arts output), and while
artsd is running mplayer does not start the video and waits for something.
I'm testing on a SB Live card and the driver usually supports several apps
simultaneously (e.g. two OSS apps at the same time works).
So my question is whether arts plays tricks like LD_PRELOADing.
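If it does, I imagine the mechanism would be a small preloaded library that
overrides open() and catches accesses to /dev/dsp, roughly like the sketch
below (purely illustrative, not taken from the arts sources):

/* sketch of the kind of interposer an LD_PRELOAD wrapper could use */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    mode_t mode = 0;
    va_list ap;

    if (!real_open)
        real_open = (int (*)(const char *, int, ...))
                        dlsym(RTLD_NEXT, "open");

    if (flags & O_CREAT) {
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }

    if (strcmp(path, "/dev/dsp") == 0) {
        /* a real wrapper would hand back a descriptor connected to the
           sound daemon here instead of the hardware device */
    }

    return real_open(path, flags, mode);
}

Such a library would be built with something like
gcc -shared -fPIC -o preload.so preload.c -ldl and only take effect for apps
actually started with LD_PRELOAD pointing at it, so this is only a guess at
what is going on with mplayer.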

cheers,
Benno

