[linux-audio-dev] Synth APIs, pitch control


Subject: [linux-audio-dev] Synth APIs, pitch control
From: Sami P Perttu (perttu_AT_cc.helsinki.fi)
Date: Tue Dec 10 2002 - 10:38:52 EET


Hi everybody. I've been reading this list for a week. Thought I'd "pitch"
in here because I'm also writing a softstudio; it's pretty far along
already and the first public release is scheduled for Q1/2003.

First, I don't understand why you want to design a "synth API". If you
want to play a note, why not instantiate a DSP network that does the job,
connect it to the main network (where system audio outs reside), run it
for a while and then destroy it? That is what events are in my system -
timed modifications to the DSP network.
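To make the idea concrete, here is a minimal sketch of a "note" as a timed modification to a DSP network. All names (Network, Sine, NoteEvent) are hypothetical illustrations, not MONKEY's actual API: a subgraph is instantiated at note-on, summed into the main network, and destroyed at note-off.

```python
import math

class Sine:
    """A trivial DSP node: a sine oscillator."""
    def __init__(self, freq, rate=48000):
        self.freq, self.rate, self.phase = freq, rate, 0.0
    def render(self, nframes):
        out = []
        for _ in range(nframes):
            out.append(math.sin(self.phase))
            self.phase += 2 * math.pi * self.freq / self.rate
        return out

class Network:
    """The main network: sums the output of all connected nodes."""
    def __init__(self):
        self.nodes = []
    def add(self, node):
        self.nodes.append(node)
        return node
    def remove(self, node):
        self.nodes.remove(node)
    def render(self, nframes):
        out = [0.0] * nframes
        for node in self.nodes:
            out = [a + b for a, b in zip(out, node.render(nframes))]
        return out

class NoteEvent:
    """A timed modification: add a subgraph at on_time, remove it at off_time."""
    def __init__(self, on_time, off_time, freq):
        self.on_time, self.off_time, self.freq = on_time, off_time, freq
        self.node = None
    def apply(self, net, now):
        if self.node is None and self.on_time <= now < self.off_time:
            self.node = net.add(Sine(self.freq))  # instantiate and connect
        elif self.node is not None and now >= self.off_time:
            net.remove(self.node)                 # disconnect and destroy
            self.node = None
```

A scheduler would call `apply` on each pending event before rendering every block, so "playing a note" is nothing more than graph surgery at the right times.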

On the issue of pitch: if I understood why you want a synth API I would
prefer 1.0/octave because it carries less cultural connotations. In my
system (it doesn't have a name yet but let's call it MONKEY so I won't
have to use the refrain "my system") you just give the frequency in Hz,
there is absolutely no concept of pitch. However, if you want, you can
define functions like C x = exp((x - 9/12) * log(2)) * middleA, where
middleA is another function that takes no parameters. Then you can give
pitch as "C 4" (i.e. C in octave 4), for instance. The expression is
evaluated, and when the event (= modification to the DSP network) is
instantiated it becomes an input to it: constant if the expression is
constant, otherwise linearly interpolated at a specified rate. I should
explain more about MONKEY for this to make much sense, but maybe later.
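A sketch of such a pitch helper, with one caveat: if "middleA" means A4 = 440 Hz, the formula as quoted needs the octave taken relative to middleA's octave (x - 4) for "C 4" to come out near middle C; equivalently, the formula can be read literally with a base of A0 = 27.5 Hz. The function names below are assumptions for illustration.

```python
import math

def middle_a():
    # A function of no parameters, as in the post; assumed here to be A4.
    return 440.0

def C(x):
    # C in octave x: 9 semitones below the A of the same octave, with the
    # octave measured relative to middleA's octave (an assumed correction).
    return math.exp(((x - 4) - 9 / 12) * math.log(2)) * middle_a()
```

With this reading, `C(4)` evaluates to roughly 261.63 Hz (middle C) and `C(5)` to roughly 523.25 Hz, one octave up, so pitch stays just another frequency-valued expression with no special status in the system.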

Anyway, the question I'm most interested in is: why a synth API?

--
Sami Perttu                       "Flower chase the sunshine"
Sami.Perttu_AT_hiit.fi               http://www.cs.helsinki.fi/u/perttu



This archive was generated by hypermail 2b28 : Tue Dec 10 2002 - 10:45:47 EET