[LAD] Threading Model for a Multi-Track Audio Timeline?

From: Richard Spindler <richard.spindler@email-addr-hidden>
Date: Thu Feb 14 2008 - 23:06:39 EET

Hi,

I was planning to spend some effort at LAC this year on getting JACK
support in my video editor right, and I was wondering which model
would be best to implement.

My app has a simple timeline that can hold an arbitrary number of
tracks, and each track can hold clips at arbitrary positions. Each
clip is an audio file, or the audio track of a video file; the
distinction does not matter much. I may also apply per-clip
operations such as resampling or filters.

Potential Models:
A)
One thread that connects to the JACK callback through a ringbuffer
and does everything: mixing, effects, disk I/O, etc.
Advantage: probably not very difficult to implement.
Disadvantage: compute-intensive work that is not I/O-bound happens
outside the JACK thread, which I guess is not optimal.

B)
One disk thread + ringbuffer per clip.
Advantage: the compute-intensive work stays in the JACK thread.
Disadvantage: does not scale to long timelines with many clips, and
therefore many threads.

C)
One disk thread + ringbuffer per track.
Advantage: could be implemented so that the compute-bound work
happens in the JACK thread, or at least some of it does. The number
of threads is likely to stay small enough to be manageable.

D)
1-n disk thread(s) for everything, and one ringbuffer per track.
Limit the number of disk threads, possibly adding more on
multi-core/SMP machines, and let them feed the per-track ringbuffers
according to some home-brew scheduling algorithm. This wouldn't waste
a thread per track.

E)
Be lazy and reuse something that someone else has written. ;-)

So, what would you suggest I do? I know there will be some
not-so-nice corner cases: seeking, the need to reset ringbuffers,
scrubbing during playback, and so on.

My latency requirements are probably fairly moderate, so I could
compromise in that respect.

Another use case I would like to implement is project nesting,
preferably in real time, with rendering the nested project into a
file only as an additional option. For this I think it would be OK to
have a big enough buffer between the current project and the embedded
project, and to do the on-the-fly rendering and I/O in the disk
thread.

What do you think?

Cheers
-Richard

-- 
Don't contribute to the Y10K problem!
_______________________________________________
Linux-audio-dev mailing list
Linux-audio-dev@email-addr-hidden
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev
Received on Fri Feb 15 00:15:04 2008
