Re: [linux-audio-user] This criticism of jackd valid?

From: Maarten de Boer <mdeboer@email-addr-hidden>
Date: Sun Jan 21 2007 - 23:21:47 EET

Hi Paul,

I have a question about something you say in your Slashdot post:

"The overhead of calling the graph associated with the data flow for
the frames is not insignificant, even on contemporary processors.
Therefore, calling the graph the minimum number of times is of some
significance, significance that only grows as the latency is reduced.
Because of this, all existing designs, including ASIO and CoreAudio
(with the proviso that CoreAudio is *not* driven by the interrupt from
the audio interface) call the graph only once for every hardware buffer
segment/period/whatever."

Do you have some numbers to show how relevant this overhead actually is?
I mean, if I use a specific internal buffer size (say 128 samples),
independent of the system buffer size, would that really be noticeable?
I can think of some situations where this would be preferable. For
example, if you have many points in your call graph where a fixed buffer
size is required (say some FFTs), then rather than doing buffering at all
these points, it seems to make more sense to run the whole call graph
with that buffer size. I hope I make myself clear...
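
To make this concrete, here is a rough sketch (in C, against the JACK
API) of what I mean: the process callback slices whatever period JACK
delivers into fixed 128-frame blocks and runs the whole internal call
graph on each block. run_internal_graph and the port handles are just
placeholders, and it assumes the JACK period is a multiple of the
internal block size:

#include <jack/jack.h>

#define INTERNAL_BLOCK 128   /* hypothetical fixed internal buffer size */

/* assumed to be registered elsewhere with jack_port_register() */
extern jack_port_t *in_port, *out_port;

/* placeholder for the internal call graph; always sees INTERNAL_BLOCK frames */
void run_internal_graph(const jack_default_audio_sample_t *in,
                        jack_default_audio_sample_t *out,
                        jack_nframes_t nframes);

int process(jack_nframes_t nframes, void *arg)
{
    const jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
    jack_default_audio_sample_t       *out = jack_port_get_buffer(out_port, nframes);

    /* run the internal graph in fixed-size slices of the hardware period;
       a real implementation would carry a remainder across callbacks if
       nframes is not a multiple of INTERNAL_BLOCK */
    for (jack_nframes_t off = 0; off < nframes; off += INTERNAL_BLOCK)
        run_internal_graph(in + off, out + off, INTERNAL_BLOCK);

    return 0;
}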

I did some experiments and did not notice any significant difference
when using different internal buffer sizes for my call graph. I am
talking about a call graph within a single application; maybe you were
talking about a call graph that involves context switches?
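
For what it's worth, the experiments were along these lines: push the
same amount of audio through a graph at different block sizes and
compare the wall-clock time. The graph below is just a trivial stand-in,
so the absolute numbers don't mean much:

#include <stdio.h>
#include <time.h>

/* trivial stand-in for the real internal call graph */
static void run_internal_graph(float *buf, int nframes)
{
    for (int i = 0; i < nframes; i++)
        buf[i] *= 0.5f;
}

int main(void)
{
    static float buf[4096];
    const int total_frames = 48000 * 60;             /* one minute at 48 kHz */
    const int block_sizes[] = { 64, 128, 256, 1024 };

    for (unsigned b = 0; b < sizeof block_sizes / sizeof block_sizes[0]; b++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int done = 0; done < total_frames; done += block_sizes[b])
            run_internal_graph(buf, block_sizes[b]);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("block size %4d: %.2f ms\n", block_sizes[b],
               (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6);
    }
    return 0;
}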

maarten
 
