Subject: Re: [linux-audio-dev] best method for timing
From: Tim Goetze (tim_AT_quitte.de)
Date: Thu Apr 18 2002 - 01:20:39 EEST
Juan Linietsky wrote:
>Well, besides that, I just tried it and my test program doesn't go above
>0.00%, and sometimes 0.01%, on my Duron 850. I agree that it would be
>kinda overkill anyway. Still, that's not the most practical behavior for
>sharing the RTC between programs. What I meant to say with max speed is
>that if you have, say, 3 apps running, one at 256 Hz, another at 512 Hz
>and another at 64 Hz, the RTC can easily be configured to always respond
>at the highest rate while delivering events to the divisors too. Also,
>from what I remember from using int8 in the old DOS days, you can switch
>the frequency with no delay involved (and even if there was a little
>delay, it won't be nearly as much as you get with a kernel syscall in
>some situations), so programs currently using the RTC won't get any sort
>of bad skip while a new process opens the device. This approach seems to
>me a lot more transparent for the end user and the developer than having
>to use ALSA timing or piping the RTC. What do you think about it?
>I feel like going and trying to implement this in rtc.c and making it
>selectable via some parameter or proc variable.
Go ahead, I would not object to sharing the RTC. I haven't yet felt I
needed it, either. ;)
If your test program did not do any work, it will hardly spend more
than a few hundred cycles per interrupt. The kernel, however, which
handles these interrupts before your program does, will do a little
more work that is not accounted for if you run under time(1). On this
450 MHz box that is about 7-9% of constant load, and I cannot imagine
this number dropping to zero within one hardware generation.
tim
This archive was generated by hypermail 2b28 : Thu Apr 18 2002 - 01:40:23 EEST