Re: [linux-audio-dev] more fundamental questions

Subject: Re: [linux-audio-dev] more fundamental questions
From: Jay Ts (jay_AT_toltec.metran.cx)
Date: Tue May 22 2001 - 19:29:47 EEST


Paul Davis wrote:
>
> 3) Typed Ports
> --------------
> Although right now we are 100% focused on ports for handling PCM audio
> data, I want to keep our eyes on the idea that we need to offer easy
> ways to extend the Port types.

Yes, definitely! I've been (rabidly :) wanting to make a few comments
about MIDI support, and have been holding off while other matters were
being discussed. Now that Pandora's box has been opened, remember:
Paul did it! :-)

I consider MIDI support to be nearly essential, because it is needed
for at least the following:

- "VST Instrument" technology to implement software synthesizers that
  function similarly to plugins, or should I say "soundboxes". (I'm
  getting used to the term "soundbox", and for softsynths, the term
  fits rather nicely, IMO.)

  For those who need a quick summary: VST Instruments are a technology
  introduced by Steinberg with VST 2.0 that allows software synthesizers
  to function hand-in-hand with Cubase and other audio software. Usually
  they are controlled by MIDI, which may originate from a MIDI controller
  (hardware) or the Cubase MIDI sequencer (software). Cubase handles the
  redirection/filtering/whatever of the MIDI stream and sends it to the
  VSTi, which then sends its audio stream back to Cubase, where it goes
  into a Cubase audio track. The nice thing is that the output of the
  VSTi never leaves the digital domain (or the computer). For a
  sequencer-driven VSTi, another result is "zero latency": the sequencer
  knows the events in advance, so there are no delays involved.

  Since Steinberg introduced VST 2.0, the market has been flooded by
  a wonderful variety of VST Instruments, including:

  company              product
  -------              -------------------------
  Native Instruments   B4 (Hammond B3 recreation)
                       Pro5 (Prophet 5 recreation)
                       Reaktor (fully modular virtual analog synth)
  Propellerheads       Reason (multifunction synth, mixer and effects)
  Steinberg            HALion (sampler, yet to be released)
  Waldorf              PPG (recreation of the 1980s PPG Wave synth)
                       Attack (drum module)

  (I'm not trying to market these products, but to give Windows-unaware
  Linux audio developers some ideas!)

  I'm personally very excited about this technology, and am very much
  looking forward to being able to do this kind of thing with Linux.
  (And a new computer with a 100 GHz processor. :)

- MIDI-controlled effects "plugins". Another hot topic "out there"
  in the Windows and Mac worlds is control surfaces. These range from
  16-slider (or knob) MIDI controller boxes, to console-like boxes
  that have controls (including displays) that correspond to the
  software GUI controls in Cubase or other recording software.

  Consider what we could do with effect plugins (er, soundboxes) that
  allow the user to adjust ("play") effects in realtime while recording
  or performing live. Changing the cutoff frequency (3 dB point) and
  resonance (Q) of a filter is one popular application in techno dance
  music.

Consider further that the division between software synthesizers and
effects is becoming more and more blurry... Having soundboxes that
respond to MIDI controls (including Note On and Note Off events!)
carries them from the lowly existence of digital effects to being
playable and expressive instruments.
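
To make the "typed MIDI port" idea a little more concrete, here is
the sort of data structure I imagine a soundbox seeing. Everything
below is hypothetical (none of these names exist anywhere yet); it's
just a sketch to give a flavor:

    #include <stddef.h>

    /* One MIDI message, stamped with its position in the current
       audio block so a soundbox can align control changes with
       sample processing. */
    typedef struct {
        unsigned int  frame;    /* offset in frames within the block */
        size_t        size;     /* number of bytes in the message */
        unsigned char data[3];  /* status byte plus data bytes */
    } midi_event_t;

    /* What a MIDI input port would hand to the soundbox each block. */
    typedef struct {
        size_t       event_count;
        midi_event_t events[1]; /* event_count events, sorted by frame */
    } midi_buffer_t;

    /* A softsynth soundbox (MIDI in, audio out) might then have a
       process callback along these lines: */
    int process(midi_buffer_t *midi_in, float *audio_out,
                size_t nframes);

A filter soundbox of the kind described above would be the same thing
with an audio input added, scanning midi_in for controller events and
updating its cutoff and Q as it goes.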

> 4) Ensuring correct scheduling
> ------------------------------
> In the MP case, each client must ensure that its audio thread is
> running SCHED_FIFO (or maybe just SCHED_RR, not sure about this
> yet). How do we do this easily? Do we care enough to try to enforce
> it? Presumably, its important that they run at the appropriate
> priority level within SCHED_FIFO as well ...

This is something that came up in the process of my own audio programming.
I forget my exact reasoning, but somehow I decided that SCHED_FIFO was
the better one to use. (???) But then I had no idea of what to give as
a priority! I set it *very* arbitrarily to 50, and made a note to add
a command line option to allow it to be user-specified. But that
doesn't help much if there are no standard practices to use as a guide.
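
For concreteness, here is roughly what my code does; this is the
standard POSIX call, and the 50 is, as I said, pulled out of thin air:

    #include <sched.h>
    #include <stdio.h>

    /* Put the calling process into the SCHED_FIFO realtime class.
       Requires root.  Valid priorities run from
       sched_get_priority_min(SCHED_FIFO) to
       sched_get_priority_max(SCHED_FIFO), which is 1 to 99 on Linux. */
    static int go_realtime(void)
    {
        struct sched_param param;

        param.sched_priority = 50;  /* arbitrary -- this is the problem! */
        if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
            perror("sched_setscheduler");
            return -1;
        }
        return 0;
    }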

I think we really need to establish some sort of guidelines for this, so that
all of the developers can create code that works with everyone else's.

Just to provide an example to chew on, let me ask for some advice.
If I have one process that reads MIDI from /dev/midi, and another
that plays the notes through /dev/dsp, which of these should I do?

        a) set the priorities equal (50 and 50) and let
           the Linux scheduler handle it
        b) set the priority of the audio process higher,
           to make sure there are no audio glitches
        c) set the priority of the MIDI process higher,
           to minimize MIDI response time (latency)
        d) stop worrying about it; it doesn't matter
        e) provide command line options, and pass the problem to the user :)
        f) there is no correct answer to this question
   
(If everyone picks f, then I think we are in trouble! :-)
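
Just to show the mechanics (not the answer!), each of the two
processes could take its priority on the command line, per option e);
the program names below are made up:

    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Common skeleton for both the /dev/midi reader and the
       /dev/dsp player: the first argument is the SCHED_FIFO
       priority to run at. */
    int main(int argc, char **argv)
    {
        struct sched_param param;

        param.sched_priority = (argc > 1) ? atoi(argv[1]) : 50;
        if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ... open /dev/midi or /dev/dsp and do the real work ... */
        return 0;
    }

Answer b) would then be "midiread 40 & dspplay 50", and answer c)
the reverse. That still leaves the real question open, of course.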

- Jay Ts
jayts_AT_bigfoot.com

