Re: [LAD] Jack audio questions to aid in my development of audiorack

From: Ethan Funk <ethan@email-addr-hidden>
Date: Sun Nov 17 2019 - 19:04:50 EET

Yes, same idea rolled into my application. That's why I was hoping the
zita bridges would work well. I'm still troubleshooting the glitching
with zita. Maybe it's just my test machine. I may end up having to
look at the source code to understand what the various command flags
actually do, and what the extra third number in the verbose stats
printout means.

Audiorack was written way back in the OS X 10.4 days. Back then, the OS
did not have aggregate devices, so I coded the equivalent into my
application. Even now, with OS support, I suspect the OS adds latency
across all the devices in an aggregate. With Apple, of course, I can't
look at their source code to see what it actually does. I need the
master device to have the lowest latency, so I kept my own code in the
application.

Ethan...

On Sun, 2019-11-17 at 11:03 +0100, Fons Adriaensen wrote:
> On Sat, Nov 16, 2019 at 05:50:16PM -0700, Ethan Funk wrote:
>
> > A very big structural difference between the APIs is how "rendering"
> > sample buffers is accomplished.
> > ...
> > After years of tweaking the PID filter, I had it working very well,
> > with no cost (other than processor overhead of the re-sampling)
>
> If I understand what you write correctly, it seems to me that
> zita-j2a and a2j are doing exactly the same thing. They contain
> the separate callback, the resampler, buffering, and the control
> loop.
>
> E.g. for an output you'd have
>
> buffer -> resampler -> hardware
>
> Your separate callback would read the buffer, run the control
> loop and the resampler, and write to the hardware.
>
> The input to the buffer is where you connect to the world
> controlled by the master card. If your master period size
> is N, you would write to that buffer N samples during each
> master card callback.
>
> In other words, the input to the buffer is used in exactly
> the same way as you would use zita-j2a's Jack ports. So I'd
> be surprised if you really need any fundamental restructuring.
>
> ...
>
> But what really surprises me is that you had to do the
> resampling and buffering explicitly as part of your
> application. Doesn't OSX provide this at the system
> level (IIRC, by creating an 'aggregated' sound card) ?
>
> Ciao,
>

_______________________________________________
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
https://lists.linuxaudio.org/listinfo/linux-audio-dev

Received on Wed Jan 1 02:16:49 2020

This archive was generated by hypermail 2.1.8 : Wed Jan 01 2020 - 02:16:49 EET