Hello,
Paul Davis wrote:
> the slightly longer answer is that pure data (and max) are derived from
> the original Music N language's conception of how to manage this kind of
> thing (the same ideas still found in CSound and SuperCollider). even
> though they have gone far beyond it, they continue to distinguish
> between audio & control datastreams for reasons that are mostly related
> to efficiency. there are many, many cases where control data being
> delivered at a bits-per-second rate significantly below that of audio
> (or even as a stream of events rather than a constant flow) is more than
> adequate, and saves a lot of CPU cycles.
I think an important distinction to make is the one between constantly
flowing signals and sporadic events. In computer music a typical
example would be the different time scales of audio signals and MIDI
events, or between an audio recording and a MIDI file, or between a real
performance and a written score: these are fundamentally different
ways to describe music, but both have a place in music making.
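The score-versus-recording contrast can be made concrete with a small sketch (hypothetical, not any real MIDI or audio API): the same two notes expressed as a sparse event list and as the dense sample stream rendered from it.

```python
import math

SR = 8000  # deliberately low sample rate to keep the rendered list small

# Event view, like a score or MIDI file: (onset_seconds, frequency_hz, duration_seconds)
score = [(0.0, 440.0, 0.5), (0.5, 660.0, 0.5)]

def render(events, sr=SR):
    """Expand the sparse event list into a dense sample stream (the 'recording' view)."""
    total = max(onset + dur for onset, _, dur in events)
    out = [0.0] * int(total * sr)
    for onset, freq, dur in events:
        start = int(onset * sr)
        for n in range(int(dur * sr)):
            out[start + n] += math.sin(2 * math.pi * freq * n / sr)
    return out

signal = render(score)
```

Two events suffice to describe what takes thousands of samples to "say" as audio, which is why event streams and signal streams each deserve their own representation and time scale.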
Ciao
--
Frank Barknecht   Do You RjDj.me?  _ ______footils.org__

Received on Sat Nov 15 12:15:02 2008