Re: [LAD] Jack audio questions to aid in my development of audiorack

From: Fons Adriaensen <fons@email-addr-hidden>
Date: Sun Nov 17 2019 - 12:03:21 EET

On Sat, Nov 16, 2019 at 05:50:16PM -0700, Ethan Funk wrote:

> A very big structural difference between the APIs is how "rendering"
> sample buffers is accomplished.
> ...
> After years of tweaking the PID filter, I had it working very well,
> with no cost (other than processor overhead of the re-sampling)

If I understand what you write correctly, it seems to me that
zita-j2a and zita-a2j are doing exactly the same thing. They
contain the separate callback, the resampler, the buffering,
and the control loop.

E.g. for an output you'd have

  buffer -> resampler -> hardware

Your separate callback would read the buffer, run the control
loop and the resampler, and write to the hardware.
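
In code that callback would be something like the sketch below
(a sketch only: mono, Jack's ringbuffer as the buffer, and
invented resample() / hw_write() stand-ins for your actual
resampler and card interface; a plain proportional term stands
in for your PID filter):

  #include <string.h>
  #include <jack/ringbuffer.h>

  /* Hypothetical stand-ins for the real resampler and card interface:
     resample() produces exactly 'nout' output frames from at most
     'nin' input frames and returns how many input frames it consumed,
     hw_write() hands one period to the hardware.                      */
  extern int  resample (void *rs, const float *in, int nin,
                        float *out, int nout, double ratio);
  extern void hw_write (void *card, const float *out, int nout);

  /* The separate callback, driven by the slave card (mono for brevity). */
  void slave_callback (jack_ringbuffer_t *rb, void *rs, void *card,
                       double *ratio, int period, int target_fill)
  {
      float in [4096], out [4096];
      int   nin, used, fill;

      /* Control loop: a plain proportional correction on the buffer
         fill error stands in for the PID filter, nudging the ratio
         so the average fill level stays near the target.             */
      fill = jack_ringbuffer_read_space (rb) / sizeof (float);
      *ratio *= 1.0 + 1e-6 * (fill - target_fill);

      /* Peek what the resampler may need (ratio = fout / fin), produce
         one hardware period, then advance by what was actually used.
         An underrun simply plays the zeroed padding.                   */
      nin = (int)(period / *ratio) + 16;
      memset (in, 0, nin * sizeof (float));
      jack_ringbuffer_peek (rb, (char *) in, nin * sizeof (float));
      used = resample (rs, in, nin, out, period, *ratio);
      jack_ringbuffer_read_advance (rb, used * sizeof (float));
      hw_write (card, out, period);
  }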

The input to the buffer is where you connect to the world
controlled by the master card. If your master period size
is N, you write N samples into that buffer during each
master card callback.
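
Under the same sketch assumptions (one channel, Jack ringbuffer,
client and port setup omitted), that side reduces to little more
than this, where 'port' is the Jack input port corresponding to
one playback channel of the slave card, exactly like a zita-j2a
port:

  #include <jack/jack.h>
  #include <jack/ringbuffer.h>

  /* Set up elsewhere: the Jack input port for one playback channel
     of the slave card, and the buffer in front of its resampler.   */
  extern jack_port_t       *port;
  extern jack_ringbuffer_t *rb;

  /* Master card (Jack) process callback: copy N samples per period
     into the buffer, the slave callback does everything else.      */
  int master_process (jack_nframes_t nframes, void *arg)
  {
      float *src = (float *) jack_port_get_buffer (port, nframes);
      jack_ringbuffer_write (rb, (const char *) src,
                             nframes * sizeof (float));
      return 0;
  }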

In other words, the input to the buffer is used in exactly
the same way as you would use zita-j2a's Jack ports. So I'd
be surprised if you really need any fundamental restructuring.

...

But what really surprises me is that you had to do the
resampling and buffering explicitly as part of your
application. Doesn't OS X provide this at the system
level (IIRC, by creating an 'aggregate' sound card)?

Ciao,

-- 
FA