Re: [linux-audio-dev] App intercommunication issues, some views.


Subject: Re: [linux-audio-dev] App intercommunication issues, some views.
From: Phil Kerr (phil_AT_plus24.com)
Date: Tue Jul 23 2002 - 00:11:49 EEST


One of the strong points of Linux, and UNIX in general, is the way small
apps can be strung together; this is the most flexible approach.

I think the level of audio app integration is coming on strong, but we
need to think about how we can interoperate using MIDI, preferably over
TCP/IP.

Being able to distribute audio applications across machines means you
can use lower-powered, and cheaper, hardware for processing. Being able
to run MIDI applications in a similar fashion follows on from this.

I'm in the process of finishing off a few DMIDI applications (mixer
controllers, sequencers and synth editors), and Jon Trinder has written
a DMIDI Palm mixer and piano keyboard controller app which he should
release in a few days.

Moving MIDI onto the network is the way to fully distribute things.
Being able to integrate MIDI hardware into a distributed studio allows
real kit to interface with softsynths and effects.

So my suggestion is for MIDI application developers to start looking at
integrating networked MIDI protocols.

To start off with, try DMIDI [http://www.dmidi.org] and MWPP
[http://www.cs.berkeley.edu/~lazzaro/sa/index.html].

Both focus on the same problem but target different application spaces.
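
To give a flavour of what's involved, here is a minimal sketch of
pushing a raw MIDI event across the network over UDP. The destination
address, port number and bare three-byte payload are illustrative
assumptions on my part; a real DMIDI or MWPP stream wraps events in its
own packet format (see the URLs above).

  /* send_note.c -- one MIDI note-on as a UDP datagram.
   * Address, port and bare payload are made up for the example;
   * a real DMIDI/MWPP packet carries more framing than this.
   * Build: gcc -o send_note send_note.c */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <arpa/inet.h>
  #include <sys/socket.h>

  int main(void)
  {
      struct sockaddr_in dest;
      /* note-on, channel 1, middle C, velocity 100 */
      unsigned char note_on[3] = { 0x90, 0x3C, 0x64 };
      int sock = socket(AF_INET, SOCK_DGRAM, 0);

      if (sock < 0) { perror("socket"); return 1; }

      memset(&dest, 0, sizeof(dest));
      dest.sin_family = AF_INET;
      dest.sin_port   = htons(9000);             /* hypothetical port */
      inet_aton("192.168.0.2", &dest.sin_addr);  /* remote synth box  */

      sendto(sock, note_on, sizeof(note_on), 0,
             (struct sockaddr *)&dest, sizeof(dest));
      close(sock);
      return 0;
  }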

Cheers

Phil

Juan Linietsky wrote:

> We've been discussing this issue a lot recently on the #lad IRC
> channel (irc.openprojects.net), and I thought it was worth posting to
> the LAD list...
>
> There seem to be two strong positions on how to develop audio
> software, and on how to provide Linux with a "modular" audio desktop.
> Both positions aim to solve the common linux-audio problem of how to
> make things "more modular" and improve app reusability. We can look at
> Windows/Mac apps such as Reason/Logic/Sound Forge/etc. and wonder "Why
> don't we have this for Linux?". And I think the answer is "not many in
> the community have enough time to focus on such a huge project". But
> at the same time, the community is full of developers who may not have
> an eternity of time, but have enough to write _something_, and the
> will to do so. This could be pictured as a classical "bandwidth vs.
> speed" issue :). Because of this, I think it's extremely important to
> focus development on modularity and inter-communication as much as
> possible. Discussing this with the IRC folks, people grouped into two
> camps.
>
> The first position is to write big apps, with extra functionality
> added as hosted processing. This is the de facto standard on Windows
> and Mac. Using this model, all kinds of "extra" processing features
> (such as audio DSP, MIDI instruments/ports, and even some MIDI
> drivers) are loaded into the program at runtime through a dynamic
> library model. This model has clear advantages in the sense that it
> can produce solid, monstrous apps rich in functionality, and in
> general user-friendly for the new user who comes from other operating
> systems. But for us to develop these apps, we find ourselves with huge
> disadvantages. The first is that, no matter how much of the DSP is
> loaded into the host, the app will end up being huge, and whoever
> develops it will need an enormous amount of time. The other issue is
> managing a common interface for the hosted plugins. The good thing
> about this design is that it is compatible with the second position
> (modular apps), since a host app can easily be turned into a network
> node.
>
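To make the "dynamic library model" concrete: on Linux the host side of
such a model already exists in LADSPA, and it is small. A sketch (the
plugin path below is just an example; a real host would scan the
directories in LADSPA_PATH):

  /* list_plugins.c -- dlopen() a LADSPA library and list what it
   * exports, i.e. the bare bones of a plugin host.
   * Build: gcc -o list_plugins list_plugins.c -ldl */
  #include <stdio.h>
  #include <dlfcn.h>
  #include <ladspa.h>

  int main(void)
  {
      const LADSPA_Descriptor *d;
      LADSPA_Descriptor_Function desc_fn;
      unsigned long i;
      void *lib = dlopen("/usr/lib/ladspa/cmt.so", RTLD_NOW);

      if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

      desc_fn = (LADSPA_Descriptor_Function)
                    dlsym(lib, "ladspa_descriptor");
      if (!desc_fn) { fprintf(stderr, "not a LADSPA library\n"); return 1; }

      /* the library exports one descriptor per plugin type */
      for (i = 0; (d = desc_fn(i)) != NULL; i++)
          printf("%lu: %s (%s)\n", d->UniqueID, d->Name, d->Label);

      dlclose(lib);
      return 0;
  }
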
> The second approach is to develop smaller, distributed apps. In this
> model, apps offer specific functionality over a solid, well-defined
> API which provides a good way of intercommunicating with other apps.
> This is probably the closest approach to the current status of Linux
> audio right now, with APIs such as JACK and the ALSA sequencer. The
> good thing about developing this way is that it takes full advantage
> of the "bandwidth" of developers in the community. The problem with
> this approach is that since we're using many apps instead of one,
> managing them can become annoying, especially when it's time to save
> our work/project and layout. I think this can be solved by developing
> a metadata protocol between apps, so they can communicate status and
> other things, and by having a "master" app that manages projects and
> the like, by just retrieving/storing status in the other apps.
>
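Nothing like this metadata protocol exists today, so the following is
purely hypothetical, but the message vocabulary the "master" app needs
could start out as small as this:

  /* Hypothetical sketch only -- no such protocol exists yet.
   * Each client would exchange these with the "master"/project
   * manager over a socket. */
  enum meta_msg_type {
      META_HELLO,      /* client announces its name on connect       */
      META_SAVE_STATE, /* master asks client to dump state to a file */
      META_LOAD_STATE, /* master asks client to restore saved state  */
      META_QUIT        /* master shuts the client down               */
  };

  struct meta_msg {
      enum meta_msg_type type;
      char client_name[64];   /* e.g. "muse", "beatbox"         */
      char state_path[256];   /* file to dump to / restore from */
  };

With something like that, saving or loading a whole project reduces to
the master walking its client list and sending the right message.
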
> Examples of common problems, and how they can be solved by each
> approach:
>
> 1- We are writing some music. We want to use a sequencer to do some
> MIDI, which controls a couple of softsynths and beat machines; then we
> want to add effects to the output, maybe equalize it, and from time to
> time, when we see that CPU usage is eating the machine, dump some
> audio tracks to a multitrack recorder/player. When we are done, we
> save the project.
>
> Problem we have right now:
>
> Apps exist to do this: MusE, Jazz++, etc. work as sequencers, but we
> soon run into problems. Softsynths and beat machines can be triggered
> using MIDI and the ALSA sequencer API, but we can't capture their
> output. Also, each softsynth provides its own set of effects, or its
> own plugin interface (LADSPA), so when you use them all together the
> CPU usage grows to insanity. Finally, a track can be recorded back by
> capturing a specific track playing solo through JACK (or the
> output->input feedback of the soundcard, if supported), but no sync is
> provided, so it has to be done by hand. At save time, we have to save
> what we did in each app and put it all in a certain directory :) or
> even write a shell script!
>
> Using the first approach:
>
> In this approach, the softsynths/beat machines would be hosted inside
> the sequencer, and would probably have to be written as libs; the
> sequencer takes care of configuring them. The sequencer also takes
> care of the audio routing by offering a graphical interface where you
> trace audio routes, place LADSPA plugins, etc. Finally, the sequencer
> also provides a multitrack recorder. At save time, you just write a
> sequencer project file. As can be seen, this requires more work on the
> side of the sequencer programmer, but offers an app rich in
> functionality. This is how Windows/Mac software works. For this, a set
> of standard libs/APIs would need to be defined, so each hosted
> component can be written accordingly. Another advantage of this
> approach is that wrappers can easily be written to allow the use of
> existing VST/DXi plugins. This approach is good, and probably the most
> user-friendly, but it forces you to build big monolithic apps that
> take care of just about everything, and we'd also need to develop APIs
> for what is missing. Another problem is that we'd have great
> difficulty importing/exporting projects between apps, and finding a
> common GUI for working this way.
>
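Worth noting that the plugin side of such a standard API already has a
working model in LADSPA. Stripped to the bones (fixed gain, no port
range hints, no error handling), a processing plugin is little more
than this:

  /* gain_demo.c -- minimal LADSPA plugin: fixed -6 dB gain.
   * UniqueID 9999 is a placeholder, not a registered ID.
   * Build: gcc -shared -fPIC -o gain_demo.so gain_demo.c */
  #include <stdlib.h>
  #include <ladspa.h>

  #define PORT_IN  0
  #define PORT_OUT 1

  typedef struct { LADSPA_Data *in, *out; } Gain;

  static LADSPA_Handle instantiate(const LADSPA_Descriptor *d,
                                   unsigned long rate)
  {
      return calloc(1, sizeof(Gain));
  }

  static void connect_port(LADSPA_Handle h, unsigned long port,
                           LADSPA_Data *buf)
  {
      Gain *g = (Gain *)h;
      if (port == PORT_IN)  g->in  = buf;
      if (port == PORT_OUT) g->out = buf;
  }

  static void run(LADSPA_Handle h, unsigned long nframes)
  {
      Gain *g = (Gain *)h;
      unsigned long i;
      for (i = 0; i < nframes; i++)
          g->out[i] = g->in[i] * 0.5f;   /* about -6 dB */
  }

  static void cleanup(LADSPA_Handle h) { free(h); }

  static const char *port_names[] = { "Input", "Output" };
  static const LADSPA_PortDescriptor port_desc[] = {
      LADSPA_PORT_INPUT  | LADSPA_PORT_AUDIO,
      LADSPA_PORT_OUTPUT | LADSPA_PORT_AUDIO
  };
  static const LADSPA_PortRangeHint hints[2];  /* zeroed: no hints */

  static const LADSPA_Descriptor desc = {
      9999, "gain_demo", 0, "Gain demo", "example", "None", 2,
      port_desc, port_names, hints, NULL,
      instantiate, connect_port, NULL, run, NULL, NULL, NULL, cleanup
  };

  /* the one symbol every LADSPA library must export */
  const LADSPA_Descriptor *ladspa_descriptor(unsigned long i)
  {
      return i == 0 ? &desc : NULL;
  }
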
> Using the second approach:
>
> We start up a sequencer. The sequencer "connects" to the ALSA
> sequencer API; through it we connect to the softsynths, the beat
> machines, and the multitrack recorder. The softsynths/beat machines
> connect to the JACK API, providing output ports for the main mix and
> for the effect send buffers; they won't do global effect processing
> (i.e. chorus/reverb) themselves. Using an interface to JACK (is there
> any available?), the user can connect the send buffers of the
> softsynths to a program which does in->out processing through LADSPA
> plugins; in this case, the chorus/reverb sends of the softsynths
> connect here. At the end, we connect those, plus the global mix
> buffers of the softsynths, to a multitrack recorder (and, why not, to
> the output). At play time, the multitrack recorder uses MIDI sync to
> record the final result. All the programs used will be connected to a
> "metadata server". This will be able to spawn/kill clients (programs)
> and retrieve/store configurations (and maybe other stuff?). So this
> will be a "Project Manager" app, and with it you will be able to
> save/load projects and come back to them in no time. A good thing
> about this is that it more closely resembles how studios work, with
> all their equipment interconnected, so in some way this should make it
> easier to integrate external gear into our "projects".
>
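On the "is there any interface to JACK available?" question:
connections can be requested by any JACK client, so a tiny patchbay is
easy to sketch. The two port names below are invented for the example:

  /* patch.c -- wire one JACK port to another by name.
   * The port names are made up; list real ones with jack_lsp.
   * Build: gcc -o patch patch.c -ljack */
  #include <stdio.h>
  #include <jack/jack.h>

  int main(void)
  {
      jack_client_t *client = jack_client_new("patchbay");
      if (!client) {
          fprintf(stderr, "jack server not running?\n");
          return 1;
      }
      jack_activate(client);

      /* route a softsynth's effect send into an effects host */
      if (jack_connect(client, "softsynth:reverb_send", "fxrack:in_1"))
          fprintf(stderr, "connect failed\n");

      jack_client_close(client);
      return 0;
  }
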
> So, what do you think about this issue? I'd really like to hear views
> on it, since I guess it's probably the biggest problem the audio
> community is facing.
>
> Regards
>
> Juan Linietsky

