Subject: Re: [linux-audio-dev] Re: minimum tick time
From: Paul Davis (pbd_AT_Op.Net)
Date: Wed Nov 07 2001 - 18:55:51 EET
>I don't think so, but I am probably completely missing the point.
>Let's start all over. I have a patched kernel, and I want to have
>low latency. I use latest alsa (cvs), and I run the latency test.
>(As you might have noticed, I submitted a filtersweep effect for
>the latency test, which Jaroslav added to the CVS. Try it with -e)
if i may say so, this is absurd :)
>- if I run in nonblock mode, it eats all CPU. Latency is excellent,
> but it would be nice if I could do something else in the meantime,
> for example run a GUI.
> $ latency -m 128 -M 128
>- if I run in block mode, i get XRUNs, even with larger bufsize
> $ latency -e -b -m 256 -M 256
>- if I run in poll mode, idem
> $ latency -e -b -m 256 -M 256 -p
>
>Is this expected behaviour?
i don't know if it's properly written to measure these things at low
latency settings. i don't have the recent source code for this, but
the version i have doesn't have a poll mode in it. the program isn't
meant to test "low latency"; it's meant to test round-trip latency,
which is related, but not the same.
all i can tell you is that, as you probably know by now, my apps run
just fine with latency settings down as low as period_frames = 64 at
48kHz (about 1.3msec per interrupt). admittedly, that's on a dual CPU
system. but it's basically fairly easy to get this performance out of
any program that is engineered properly. latency.c is not the right
place to start such a program from, however. it's a test tool, not a
viable application.
what are you trying to do?
--p
This archive was generated by hypermail 2b28 : Wed Nov 07 2001 - 18:55:04 EET