Re: [linux-audio-dev] A Python loop sequencer, was re: using csound as a sampling engine


Subject: Re: [linux-audio-dev] A Python loop sequencer, was re: using csound as a sampling engine
From: est_AT_hyperreal.org
Date: Mon Sep 06 1999 - 12:11:56 EDT


Paul Winkler discourseth:
>
> Eric, I just played with s1 and was impressed by several things:
> 1) it works nicely.
> 2) it's in python (I was wondering if anyone on the list used python!).
> 3) it's in very few lines of code, as a result of (2).

Paul, this is most kind of you to say. It was a Saturday afternoon
hack and `my first csound .orc'. :)

Re (2): www.hyperreal.org/~est/oolaboola (major new release in two
weeks) has a gui written in Python. It communicates with the dsp over
a pair of pipes using Scheme symbolic expressions (which are logged
with timestamps to a file).
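
To illustrate the general idea of a gui talking to a dsp over a pipe with timestamped symbolic expressions, here is a minimal sketch. The function names and the exact wire format are my own invention for illustration, not oolaboola's actual protocol:

```python
import time

def make_event(name, *args):
    # Format a control event as a Scheme-style symbolic expression,
    # e.g. ("set-volume", 0.8) -> "(set-volume 0.8)".
    body = " ".join(str(a) for a in args)
    return "(%s %s)" % (name, body) if args else "(%s)" % name

def send_and_log(pipe, log, expr):
    # Write an s-expression to the dsp pipe and record it, with a
    # wall-clock timestamp, in the session log.
    pipe.write(expr + "\n")
    pipe.flush()  # deliver immediately rather than sitting in a stdio buffer
    log.write("%.6f %s\n" % (time.time(), expr))
```

The timestamped log is what makes sessions replayable later.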

> Your last statement above about doing it with -L input instead is very
> interesting to me because ... well, this seems like a good opportunity
> to go off about my current mental project. :)
>
> I was thinking about building a vaguely 303-ish sequencer that, while
> not pretending to be a general-purpose sequencer, does a LOT more than a
> 303-clone. Sort of like a graphical tracker with unlimited parameters.
> I was thinking of writing it in python with a Tkinter interface.

oola is written using pygtk. This may have some bearing on using
python/tkinter for a similar project. In particular, the following
points may be of interest:

1) I think pygtk is faster than Tkinter (because there's no tcl level
   involved).

2) Even so, I perceive a subliminal mushiness to a pygtk gui, really only
   noticeable by comparison with the `solidity' of a C/gtk app.

3) I'm experiencing (2) even though I've migrated a fair amount of
   code to Python extensions in C.

4) When the gui is saturated with midi controller events it takes up
   to 40% of my processor time. Worse, there's no hot-spot..nothing I
   can migrate to C except the whole midi control architecture (the
   parsing is already in C) which I wanted to keep in Python for easy
   programmability. On the plus side, my midi parsing module does
   event compression if it runs out of cpu.

5) I may be abandoning pygtk anyhow because of X freezes people have
   been experiencing with the 0.6.x series (I recommend 0.5.12 at the
   moment). Perhaps I'm overreacting though.
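
Point (4) mentions event compression under cpu saturation. One plausible reading, sketched below (this is my guess at the technique, not oolaboola's actual code), is that when a backlog of controller events piles up, only the latest value per (channel, controller) pair matters, so intermediate values can be coalesced away:

```python
def compress_controllers(events):
    # Coalesce a backlog of (channel, controller, value) MIDI events,
    # keeping only the most recent value for each (channel, controller)
    # pair while preserving first-seen order.
    latest = {}
    order = []
    for chan, ctrl, value in events:
        key = (chan, ctrl)
        if key not in latest:
            order.append(key)
        latest[key] = value
    return [(c, k, latest[(c, k)]) for c, k in order]
```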

> Inspired by 303Seq (a dumb but fun Windows sequencer) and all the
> things I wish it did, and Cecilia (a Tcl/Tk interface to Csound) and
> some things I wish _it_ did.

I wish it worked for me. It freezes on the very pretty splash screen.

> Need to do some feasibility testing to see if this _can_ work
> realtime. How accurate/reliable is application-generated timing sent
> to csound -L <devicename>?

If it's a problem, it may be worthwhile to fix.

> So I need to at least construct an event-generating script that really
> works the pipe... and does so in a way that I can verify (in)accuracy
> of timing. Try it with both stdin and FIFOs -- does that make a
> difference?

It shouldn't..stdin *would* be an (unnamed) fifo if you're piping to
it.
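
A minimal sketch of driving csound's line-event input from Python might look like the following. It assumes csound was started with `-L <path>` on a FIFO you created with mkfifo, and that it accepts newline-terminated `i` statements there; the helper names are mine:

```python
def open_event_pipe(path):
    # Open the named FIFO that csound is reading (-L path) for writing.
    # Note: open() blocks until csound has the read end open.
    return open(path, "w")

def send_note(pipe, instr, start, dur, *pfields):
    # Send one `i` statement down the pipe and flush immediately so it
    # is not delayed by stdio buffering.
    fields = " ".join(str(p) for p in (instr, start, dur) + pfields)
    pipe.write("i %s\n" % fields)
    pipe.flush()
```

Any writable file-like object works, which makes the event generator easy to test without a running csound.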

> Previous experiments with csound events from stdin are not especially
> encouraging: more than about 10 simultaneous events were audibly not
> simultaneous.

Hmm..what was your k-rate, and your overall system configuration?
A test case that reliably reproduces this problem would be quite useful.

> This is, at heart, a drum machine -- if simultaneous
> hits are smeared, it'll sound awful. This may be an inherent problem
> with mkfifo and stdin... Is that stuff tweakable?

Generally not. Pipes typically have a 4K buffer internally, but I
don't see why that should cause a problem here.

> But those experiments were done by flushing the buffer after each
> line. If I want to send 50 simultaneous events, maybe they should fill
> the buffer and _then_ flush it before going to the next event? If so, my
> engine would need to efficiently know when it's done with "current"
> events.

This shouldn't make too much difference. I haven't checked the csound
code, but I'll bet it checks the -L input on every k-cycle and
processes anything it finds there. It may need to be tweaked in some
ways..we'll see! Also, if you really want to gather up your events
before sending them, the wait-to-flush approach will be unreliable
since stdio may autoflush anyhow.
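
If you do want simultaneous events delivered together, a more reliable approach than playing with stdio flushing is to build the whole batch in memory and issue a single write() on the raw file descriptor: POSIX guarantees that a single write of at most PIPE_BUF bytes (at least 512, 4096 on Linux) to a pipe is atomic. A sketch, with names of my own choosing:

```python
import os

def send_batch(fd, events):
    # Send a batch of simultaneous score events as ONE atomic write so
    # they all reach csound together rather than trickling out line by
    # line.  fd is a raw descriptor, e.g. from os.open() on the FIFO.
    data = "".join("i %s\n" % " ".join(str(p) for p in ev) for ev in events)
    payload = data.encode()
    if len(payload) > 512:  # conservative portable PIPE_BUF lower bound
        raise ValueError("batch too large for one atomic pipe write")
    os.write(fd, payload)
```

Whether csound then *processes* them on the same k-cycle is a separate question, but at least the delivery side stops being a variable.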

> A big question is whether a python script is even capable of
> REALLY accurately timing its output. The built-in time and sched
> modules are probably what I'll end up using. Try them!

I don't think there's a problem here except for the limitations of the
kernel you're using. I think we linux-audio people generally want a
>=1000Hz timer-tick kernel anyhow.
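
One way to find out empirically is to schedule against absolute deadlines (so error doesn't accumulate) and measure the worst-case drift. A small sketch, using modern Python's monotonic clock:

```python
import time

def measure_jitter(interval, ticks):
    # Sleep toward an ideal grid of absolute deadlines and report the
    # worst-case deviation in seconds.  Scheduling against absolute
    # deadlines (rather than sleeping a fixed interval per loop) keeps
    # error from accumulating tick after tick.
    start = time.monotonic()
    worst = 0.0
    for n in range(1, ticks + 1):
        deadline = start + n * interval
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        worst = max(worst, abs(time.monotonic() - deadline))
    return worst
```

On a stock kernel the answer is bounded below by the timer-tick granularity, which is exactly why the kernel HZ setting matters here.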

> This all probably depends on the speed of getting stuff into and out of
> the FIFO or pipe...

I doubt you need to worry about FIFO/pipe performance per se.

> --There really needs to be a way to save interactive stuff, in
> the same way that 303Seq writes a midi file whenever you start
> it. That's a really cool feature. Is there a way that the
> csound events can be simultaneously written to a file with
> meaningful timestamps?

Sure. :)
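
For example, if each event is logged with a wall-clock timestamp as it is sent, the log can later be turned back into an ordinary score by rebasing the timestamps onto p2. A sketch of one way to do it (the log format here is my own assumption: "<timestamp> i <pfields...>"):

```python
def log_to_score(log_lines):
    # Convert a timestamped event log into csound score statements.
    # The first event's timestamp becomes time zero, and each event's
    # p2 (start time) is rewritten as its offset from that origin.
    score = []
    t0 = None
    for line in log_lines:
        stamp, stmt = line.split(None, 1)
        t = float(stamp)
        if t0 is None:
            t0 = t
        parts = stmt.split()        # ["i", p1, p2, p3, ...]
        parts[2] = "%g" % (t - t0)  # rewrite p2
        score.append(" ".join(parts))
    return score
```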

> --Held notes: I want a way to optionally say "Do this until I
> tell you to
> stop" -- that is, follow the "noteoff" paradigm rather than the
> "dur" paradigm. Sometimes this just makes more sense.

I don't know enough about csound yet. Is there a way for an
instrument to explicitly say "I'm done, deallocate me now!"? If so,
then this could be handled via table entries.
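
[For the record: csound does support a noteoff paradigm. A negative p3 holds a note indefinitely, and a later `i` statement with the same fractional p1 negated releases that specific instance; there is also a `turnoff` opcode by which an instrument deallocates itself. A tiny sketch of generating such on/off event pairs:]

```python
def note_on(instr, tag):
    # Start a held note: the fractional p1 (instr.tag) names the
    # instance, and a negative p3 means "hold until released".
    return "i %d.%d 0 -1" % (instr, tag)

def note_off(instr, tag):
    # Release the instance started by the matching note_on: an `i`
    # statement with the negated fractional p1 turns it off.
    return "i -%d.%d 0 0" % (instr, tag)
```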

Thanks for posting these thoughts! :)

Eric



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:27:11 EST