Re: [linux-audio-dev] A Python loop sequencer, was re: using csound as a sampling engine


Subject: Re: [linux-audio-dev] A Python loop sequencer, was re: using csound as a sampling engine
From: Paul Winkler (slinkp_AT_ulster.net)
Date: Mon Sep 06 1999 - 01:37:36 EDT


est_AT_hyperreal.org wrote:
> Now that I've got csound's -L option working, I'm taking a quite
> different approach based on it. It will allow many parameters at note
> startup, more than 16 channels with parameters for each, and other
> goodies. :)
>
> Eric

Eric, I just played with s1 and was impressed by several things:
1) it works nicely.
2) it's in python (I was wondering if anyone on the list used python!).
3) it's in very few lines of code, as a result of (2).

Your last statement above about doing it with -L input instead is very
interesting to me because ... well, this seems like a good opportunity
to go off about my current mental project. :)

I was thinking about building a vaguely 303-ish sequencer that, while
not pretending to be a general-purpose sequencer, does a LOT more than a
303-clone. Sort of like a graphical tracker with unlimited parameters.
I was thinking of writing it in python with a Tkinter interface. And
most relevant to your post, I was thinking of ignoring midi entirely and
using Csound as the synthesis engine, via stdin or a pipe.

So, comments from the list about the following rant would be most
welcome. These are my notes to myself about the project, so some things
may not make any sense. :)

THE HYPOTHETICAL PYTHON/TKINTER SEQUENCER THINGIE
-------------------------------------------------

possible name: NATOTS (not another three-oh-three sequencer)
               NYATS (not yet another three-oh-three sequencer)
               or something pretentious and "kewl"
                Hmm, those are terrible names.

General Idea:
-------------

Compositional tool that controls Csound in realtime.
Based on a sort of 303 / drum machine paradigm.

Inspired by 303Seq (a dumb but fun Windows sequencer) and all the
things I wish it did, and Cecilia (a Tcl/Tk interface to Csound) and
some things I wish _it_ did.

THE BIG QUESTION
----------------

Need to do some feasibility testing to see if this _can_ work in
realtime. How accurate/reliable is application-generated timing sent
to csound -L <devicename>?

So I need to at least construct an event-generating script that really
works the pipe... and does so in a way that I can verify (in)accuracy
of timing. Try it with both stdin and FIFOs -- does that make a
difference?

Previous experiments with csound events from stdin are not especially
encouraging: more than about 10 simultaneous events were audibly not
simultaneous. This is, at heart, a drum machine -- if simultaneous
hits are smeared, it'll sound awful. This may be an inherent problem
with mkfifo and stdin... Is that stuff tweakable?
But those experiments were done by flushing the buffer after each
line. If I want to send 50 simultaneous events, maybe they should fill
the buffer and _then_ flush it once before going on to the next batch
of events? If so, my engine would need an efficient way to know when
it's done with the "current" events.
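
Here's roughly the test I have in mind -- a minimal sketch, assuming
csound is already running with something like "csound -L
/tmp/csound_in my.orc" so there's a reader on the FIFO (the path and
the instrument number are made up):

    import time

    N_EVENTS = 50   # simultaneous hits to stress-test

    def send_per_line(pipe):
        # flush after every line -- what the earlier experiments did
        for i in range(N_EVENTS):
            pipe.write("i1 0 0.5\n")
            pipe.flush()

    def send_batched(pipe):
        # build the whole batch first, then flush exactly once
        pipe.write("i1 0 0.5\n" * N_EVENTS)
        pipe.flush()

    # NB: opening a FIFO for writing blocks until a reader opens it
    pipe = open("/tmp/csound_in", "w")
    t0 = time.time()
    send_batched(pipe)   # swap in send_per_line to compare smearing
    print("wrote %d events in %.3f ms" % (N_EVENTS, (time.time() - t0) * 1000))

If the batched version is audibly tighter, the engine just has to
group events by start time before writing.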

And another thing... how many simultaneous events am I really going to
have in practice?

      Things to try:
      --Add progressively more simultaneous events
      --Try sending events with non-zero start times WHILE sending
      streams of zero-start-time-events, e.g.:
              at 1:00:00 send "i1 0\n i1 1 \n i1 2 \n i1 3"
              at 1:00:01 send "i2 0" (should synchronize with i1 1)
              at 1:00:02 send "i2 0" (should synchronize with i1 2)
              at 1:00:03 send "i2 0" (should synchronize with i1 3)
        Issues: does the above start in sync? Can it be made to do so
        with compensatory delays? Does it drift out of sync (can't do
        much about that)? If the latter, that would rule out sending
        big blocks of events with different start times..
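
      A sketch of that test with the sched module (FIFO path and
      instruments are again made up; I've added a 1-beat p3 so the
      lines are complete statements):

          import sched, time

          pipe = open("/tmp/csound_in", "w")
          s = sched.scheduler(time.time, time.sleep)

          def send(text):
              pipe.write(text)
              pipe.flush()

          start = time.time() + 0.5   # headroom to get scheduled
          # at t=0, one block of i1 events with start times 0..3
          s.enterabs(start, 1, send, ("i1 0 1\ni1 1 1\ni1 2 1\ni1 3 1\n",))
          # at t=1,2,3, zero-start i2 events that should land on i1 1/2/3
          for beat in (1, 2, 3):
              s.enterabs(start + beat, 1, send, ("i2 0 1\n",))
          s.run()

      Recording the output and looking at whether each i2 lands on its
      i1 should answer the sync and drift questions.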

      A big question is whether a python script is even capable of
      REALLY accurately timing its output. The built-in time and sched
      modules are probably what I'll end up using. Try them! How fast
      are they? Do some experiments. Can't find anything on dejanews
      about this. For my purposes, timing resolution down to 1 msec is
      probably plenty. I bet 10 msec would even be usable for a lot of
      stuff (though probably ultimately frustrating).
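
      For example, something like this gives a first answer about
      resolution before csound even enters the picture:

          import time

          # how fine-grained is the clock itself?
          t0 = time.time()
          while time.time() == t0:
              pass
          print("clock tick: %.6f sec" % (time.time() - t0))

          # how badly does a requested 10 msec sleep overshoot?
          worst = 0.0
          for i in range(100):
              t0 = time.time()
              time.sleep(0.010)
              worst = max(worst, abs((time.time() - t0) - 0.010))
          print("worst 10 msec sleep error: %.3f msec" % (worst * 1000))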

This all probably depends on the speed of getting stuff into and out
of the FIFO or pipe...

If this proves to be unworkable, is there any other (faster) way to get
realtime events to Csound? Some sort of IPC? I don't think so.
Somebody was working on reading k-rate values in realtime from X11
scrollbars, but I don't know if they ever finished it, or if it could
be used for "i" events...

Or if Python is the source of timing problems, could I implement the
core event-sending mechanism in C? I don't want to have to do
this... In fact, I'm NOT going to do that. If the feasibility test
fails, the project will probably die. Let's hope it works.

WISH LIST:
----------

     --"Panic button" -- kills Csound.

     --Editable Csound command to toggle displays, message level,
       output buffer size, etc.
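
     Since the sequencer would own the csound process anyway, the
     panic button is probably just killing the child. A sketch (the
     command line is only an example):

         import subprocess

         # -d / -m0 are the kind of display and message-level options
         # the editable command line above would expose
         csound = subprocess.Popen(
             ["csound", "-d", "-m0", "-L", "/tmp/csound_in", "my.orc"])

         def panic():
             csound.terminate()   # hard stop: no fade, just kill it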

     --Unlimited parallel tracks in _each pattern_.

     --"Continuous controller" line graphs can be created and assigned
     to any parameter

     --303Seq-style rows of scrollbars can be used as an alternative to
       the line graphs.

     --Can convert (by interpolation or down-sampling) back and
       forth between line graphs and scrollbar rows
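
     The conversion itself is simple enough to sketch now: a line
     graph is a list of (time, value) breakpoints, a scrollbar row is
     one value per step (names and the step grid are placeholders):

         def graph_to_row(points, n_steps, length):
             # down-sample breakpoints to one value per step
             row = []
             for step in range(n_steps):
                 t = step * length / float(n_steps)
                 for (t0, v0), (t1, v1) in zip(points, points[1:]):
                     if t0 <= t <= t1:
                         frac = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
                         row.append(v0 + frac * (v1 - v0))
                         break
                 else:
                     row.append(points[-1][1])   # past the last point
             return row

         def row_to_graph(row, length):
             # each scrollbar value becomes a breakpoint at its step
             step = length / float(len(row))
             return [(i * step, v) for i, v in enumerate(row)]

         print(graph_to_row([(0.0, 0.0), (4.0, 1.0)], 4, 4.0))
         # -> [0.0, 0.25, 0.5, 0.75]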

     --Events are all Csound "i" events, so they can really be
     anything; a track could be used just for csound global (or ZAK)
     parameter control

     --While I'm at it, maybe there's a more useful & general way to
     organize stuff:
     A "Pattern" (a period of time) is divided into "tracks" (a
     series of parameter values in time). So the "Notes" parameter
     would be the same as any other parameter -- a list of "at" values
     for p2. Does this make sense? Seen as a line graph, the "at" or
     "notes" graph would just be a collection of zeroes and ones --
     "yes, a note starts here" or "no, no note starts here".
     But it seems like that's a waste of a parameter; zero should
     always mean no note, but a non-zero value should be assignable to
     anything you want, right? (Most often, initial volume or some
     other "intensity" parameter.)

     --Would be nice if I could display more than one parameter's
       graph at a time.

     --Unlimited user-definable parameters per pattern/track.

     --Parameters may be "linked" and modified: e.g. volume(track(1),
     pattern(B)) = (1 / pitch(track(2), pattern(B)))^2 * 100
     ... though I'm sure I can come up with a better way to say that.
     The graph for the link should show the currently evaluated values
     and somehow indicate very obviously that it's a link, what it's
     linked to,
     and any modifications. Linked graphs are not directly editable
     unless they are first frozen (see below).
     QUESTION: Can this and all continuous-controller type stuff be
     better implemented within a Csound orc?
     Certainly it would be faster. Maybe I should think about having
     my app write orcs as well as scores! But then it couldn't be
     modified during runtime, which would be a bummer. And it would be
     hard to get / edit displays. And it would make it harder to have
     the orc be totally user-designable.
     I should take another look at Cecilia and see if I can figure out
     how they create, edit, and send their graphs... probably just as
     ftables... hmmm: is there a way I could use ftables to do
     everything I want?
 
     --Linked parameters may be "frozen", i.e. the currently evaluated
     values are written to a line graph and saved.

     --Copying patterns is just a function or method that implements
      a special case of the previous two features:
      freeze(all_parameters(track(all), pattern(B)) =
             all_parameters(track(all), pattern(A)))
      Or something like that. A more OO syntax might be better. What
      I'm saying is: "All parameters for all tracks of pattern B are
      linked to all parameters for all tracks of pattern A. Now freeze
      pattern B."

     --Some sort of "song-editor": possibly just a text representation
     of named patterns. There's still a lot you can do with
     this. E.g.:
           "a a a b a c a2 tag chorus breakdown ending"
           or
           "intro (a * 3) b end"
           or
           "(a UPTO 8) * 2 a"
             ... that will play a up to beat 8 twice, then the full "a"
           or
           "((a * 3) WHILE (b * 2)) c"
           ...that will play a 3 times, and b 2 times
           simultaneously; c will play whenever a and b are both done.
           This kind of parallelism gets clunky in a text
           representation...
           Better would be a visual meta-piano-roll style, with
           "tracks" for placing patterns.

     --There really needs to be a way to save interactive stuff, in
       the same way that 303Seq writes a midi file whenever you start
       it. That's a really cool feature. Is there a way that the
       csound events can be simultaneously written to a file with
       meaningful timestamps?
       Actually this would be a nice feature in Csound itself: a flag
       to write all received events into a file with the times
       replaced by the times they were received.
        In any case, I really need to be able to play with it in
        realtime and capture the results for later editing / rendering.
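
       Meanwhile the sequencer can do the capturing itself: every
       event goes both to csound and to a logfile, with p2 rewritten
       to the wall-clock time it was sent. A sketch (assumes lines
       shaped like "iN p2 p3 ..."):

           import time

           class EventTee:
               def __init__(self, pipe, logname):
                   self.pipe = pipe
                   self.log = open(logname, "w")
                   self.t0 = time.time()

               def send(self, line):
                   self.pipe.write(line + "\n")
                   self.pipe.flush()
                   fields = line.split()   # e.g. ["i1", "0", "0.5"]
                   fields[1] = "%.3f" % (time.time() - self.t0)
                   self.log.write(" ".join(fields) + "\n")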

     --Envelopes: I will probably start with ADSR and simple gates
       because it's easy and handles 99% of what I want to do.
       Anything more complicated can be assigned to a user-defined
       linseg. Hmm: How to control ADSR? Would that be four separate
       parameter graphs??

     --Held notes: I want a way to optionally say "Do this until I
       tell you to
       stop" -- that is, follow the "noteoff" paradigm rather than the
       "dur" paradigm. Sometimes this just makes more sense.
       Probably held notes would be a special note type implemented by
       the sequencer, and Csound wouldn't have to know about it -- dur
       would be calculated by the sequencer. Problem is, that would
       mean changing noteoff after the note starts would have no effect.
       Hmmm.
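
       The pattern-side version is easy to sketch: a note-off marker
       just ends the previous note, and dur is computed up front when
       the i-statement is generated -- which is exactly why editing
       the note-off after the note has been sent can't do anything:

           def held_to_events(instr, marks, pattern_len):
               # marks: list of (beat, "on" | "off") pairs
               events, on_at = [], None
               for beat, kind in sorted(marks):
                   if kind == "on":
                       on_at = beat
                   elif kind == "off" and on_at is not None:
                       events.append("i%d %g %g"
                                     % (instr, on_at, beat - on_at))
                       on_at = None
               if on_at is not None:   # no note-off: hold to the end
                   events.append("i%d %g %g"
                                 % (instr, on_at, pattern_len - on_at))
               return events

           print(held_to_events(1, [(0, "on"), (2.5, "off"), (3, "on")], 4))
           # -> ['i1 0 2.5', 'i1 3 1']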

     --"Global" tracks: Need a way to control various parameters
        independent of patterns. E.g. fades, some effects parameters,
         whatever. How to best implement this?

SUGAR (features I don't care so much about, yet):
-----

    --It should be possible to run an arbitrary csound score at the
      same time as the realtime input from my sequencer. I'm not sure
      what I would use this for, but it might be useful for something,
      and probably not hard to implement.

    --The piano-roll-style "song editor"

    --It would be cool if, possibly in the song editor, you could
      place an arbitrary .wav file at an arbitrary time, so you could
      pre-render complicated stuff and keep working from there. (Much
      like Tkscore.)
      This would probably be implemented by always having a "diskin"
      instrument in the orc...

    --That "swing" scrollbar in 303Seq is kind of a cool
       feature. Consider adding something similar.

    --I should probably think about making the events generic so they
      can be mapped to MIDI or whatever instead of csound. That would
      make it a lot easier to, for instance, collaborate with James.

    --"Randomize" button for current parameter; should operate on only
      selected part of the window. For that matter, there could be
      lots of interesting parameter-generate/edit tools like that.

    --A button to pop up your favorite text editor with the current
      orc for modifications.

TKINTER QUESTIONS AND RESOURCES
-------------------------------

        --I want to be able to build Cecilia-style graphs (at the very
          least, simple click-and-drag line segments). Freehand line
          drawing would be very nice too.
          BLT has a graph element... maybe it could be used.
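
          Even without BLT, a plain Canvas can already do the simple
          click-to-add-breakpoints case. A bare-bones sketch (the
          module is spelled Tkinter on 1.5.x):

              import tkinter as tk

              root = tk.Tk()
              canvas = tk.Canvas(root, width=400, height=200, bg="white")
              canvas.pack()
              points = []

              def add_point(event):
                  points.append((event.x, event.y))
                  canvas.delete("curve")   # redraw from scratch
                  for x, y in points:
                      canvas.create_oval(x-2, y-2, x+2, y+2, tags="curve")
                  if len(points) > 1:
                      canvas.create_line(sorted(points), tags="curve")

              canvas.bind("<Button-1>", add_point)
              root.mainloop()

          Dragging would mean binding <B1-Motion> to move the nearest
          point and redrawing.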

        --Beat indicators:
          This is not so important.
          But it would be nice if we could see where we are. At the
          very least, like a tracker, show the current pattern name
          somewhere.
          Aside from the difficulty of synchronizing anything to audio
          output (I don't want it to be as useless as 303seq...), how
          do I display this? I think a light that blinks green at
          start of a new phrase, and another that blinks at a
          user-settable multiple of ticks (default 4). Look for a
          widget set that implements this. I think there's a blinker
          in foztools:
          http://www.python.org/ftp/python/contrib/Graphics/Tkinter/
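
          Tkinter's own after() timer can fake a blinker without any
          extra widget set. A sketch with a hardcoded 500 msec tick
          (real sync to the audio output is still the hard part):

              import tkinter as tk

              root = tk.Tk()
              light = tk.Canvas(root, width=30, height=30)
              light.pack()
              blob = light.create_oval(5, 5, 25, 25, fill="gray")
              beat = [0]

              def tick():
                  beat[0] += 1
                  # green on the downbeat of each 4-tick phrase
                  color = "green" if beat[0] % 4 == 1 else "gray"
                  light.itemconfig(blob, fill=color)
                  root.after(500, tick)   # placeholder tempo

              tick()
              root.mainloop()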

-- 

---------------- paul winkler ------------------
slinkP arts: music, sound, illustration, design, etc.
zarmzarm_AT_hotmail.com --or-- slinkp AT ulster DOT net
http://www.ulster.net/~abigoo/
======================================================


