On 03/01/2013 12:41 PM, Harry van Haaren wrote:
> Hey all,
>
> I'm currently attempting to stress-test my setup of -rt kernel, rtirq
> scripts, and a JACK client program I've been working on.
> So my idea is to create a script that runs the programs, plus a
> CPU-load-generating program (cpuburn or an alternative).
There's a tool to cause JACK DSP load:
http://subversion.ardour.org/svn/ardour2/branches/3.0/tools/jacktest.c
which may or may not come in handy.
I hacked it somewhat a while ago for testing CPU frequency scaling:
http://gareus.org/gitweb/?p=jackfreqd.git;a=blob;f=tools/busyjack.c
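If you only need something small to script against, a JACK client that
busy-waits for a fraction of every process cycle is enough to push the DSP
load to an arbitrary level. A rough sketch along those lines (not taken from
either tool above; the client name, the load argument and the build line are
just illustrative):

/* dspload.c - sketch of a minimal JACK client that burns a configurable
 * fraction of each process cycle, to drive the DSP load up during a
 * stress test.  Not one of the tools linked above.
 * Build: gcc -O2 -o dspload dspload.c $(pkg-config --cflags --libs jack)
 */
#include <jack/jack.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static jack_client_t *client;
static jack_nframes_t srate;
static double load = 0.5;           /* fraction of the period to burn */

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

static int process(jack_nframes_t nframes, void *arg)
{
    /* busy-wait for 'load' of this period's duration */
    const double period = (double)nframes / (double)srate;
    const double t_end  = now_sec() + load * period;
    while (now_sec() < t_end)
        ;
    return 0;
}

int main(int argc, char **argv)
{
    if (argc > 1)
        load = atof(argv[1]);       /* e.g. ./dspload 0.8 */
    client = jack_client_open("dspload", JackNoStartServer, NULL);
    if (!client) {
        fprintf(stderr, "cannot connect to JACK\n");
        return 1;
    }
    srate = jack_get_sample_rate(client);
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);
    for (;;)
        sleep(1);                   /* run until killed */
    return 0;
}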
> Then collecting stats based on Xruns, % DSP load, etc.
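For the xrun count and the DSP-load figure themselves, JACK's C API already
exposes both (jack_set_xrun_callback() and jack_cpu_load()), so a tiny
observer client can log them once a second while the stress test runs.
Minimal sketch; client name and output format are made up:

/* xrunstat.c - sketch: count xruns and print JACK's DSP-load estimate
 * once a second while a test runs.
 * Build: gcc -O2 -o xrunstat xrunstat.c $(pkg-config --cflags --libs jack)
 */
#include <jack/jack.h>
#include <stdio.h>
#include <unistd.h>

static volatile unsigned int xruns = 0;

static int on_xrun(void *arg)
{
    ++xruns;                        /* called by JACK on every xrun */
    return 0;
}

int main(void)
{
    jack_client_t *c = jack_client_open("xrunstat", JackNoStartServer, NULL);
    if (!c) {
        fprintf(stderr, "cannot connect to JACK\n");
        return 1;
    }
    jack_set_xrun_callback(c, on_xrun, NULL);
    jack_activate(c);
    for (;;) {
        printf("xruns=%u dsp_load=%.1f%%\n", xruns, jack_cpu_load(c));
        fflush(stdout);
        sleep(1);
    }
    return 0;
}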
drobilla and I did a similar analysis for LV2 plugins. Basically:
measure the time it takes to process N samples for various block sizes
and input data. The total execution time is only meaningful for a
single machine; what we were really after was the variation (error bars).
If processing always takes the same amount of time per sample and is
independent of the input data, the algorithm is very likely RT-safe.
Furthermore, if there is no significant dependence on block size,
that's even better.
The scripts and data are available at
http://drobilla.net/2012/08/22/benchmarking-lv2-plugins/
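If you'd rather roll your own harness than reuse those scripts, the general
shape is simply: time many blocks per block size and look at the mean and the
spread. A sketch, with dsp_run() as a stand-in for whatever code is under
test, and arbitrary block sizes / run count:

/* bench.c - sketch of a block-size timing harness: run a DSP routine
 * over many blocks for several block sizes and report the mean and
 * spread of the per-sample time.  dsp_run() is only a placeholder, not
 * part of the LV2 benchmarking scripts linked above.
 * Build: gcc -O2 -o bench bench.c -lm
 */
#include <math.h>
#include <stdio.h>
#include <time.h>

#define NRUNS 1000

static float buf[4096];

/* placeholder for the plugin / client code actually being measured */
static void dsp_run(float *b, int nframes)
{
    for (int i = 0; i < nframes; i++)
        b[i] = b[i] * 0.5f + 0.1f;
}

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    static const int sizes[] = { 64, 128, 256, 512, 1024 };
    for (unsigned s = 0; s < sizeof(sizes) / sizeof(sizes[0]); s++) {
        const int n = sizes[s];
        double sum = 0.0, sumsq = 0.0;
        for (int r = 0; r < NRUNS; r++) {
            const double t0 = now_sec();
            dsp_run(buf, n);
            const double dt = now_sec() - t0;
            sum   += dt;
            sumsq += dt * dt;
        }
        const double mean = sum / NRUNS;
        const double var  = sumsq / NRUNS - mean * mean;
        const double sd   = sqrt(var > 0.0 ? var : 0.0);
        /* a flat per-sample mean and a small spread across block sizes
         * is the pattern you want to see for an RT-safe algorithm */
        printf("blocksize %4d: %.4f +/- %.4f us/sample\n",
               n, 1e6 * mean / n, 1e6 * sd / n);
    }
    return 0;
}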
> I intend to show (through brute force) that an application is RT-capable on
> machine X with a latency of Y ms.
> Of course this won't be 100% representative, but the stats will give some
> indication of RT-safeness.
>
>
> Has anybody done this kind of profiling / stress testing with JACK before?
> Hints / tips / advice / etc welcomed! -Harry