Re: [linux-audio-dev] Developing a music editor/sequencer

From: NadaSpam <NadaSpam@email-addr-hidden>
Date: Sun Jan 30 2005 - 17:58:15 EET

On Sunday 30 January 2005 05:54 am, you wrote:
> On Sat, Jan 29, 2005 at 09:24:58PM -0500, NadaSpam wrote:
> > End Notes for the Curious
> > ...
> > My degree is in applied mathematics.
>
> Since I am curious, are you also a musician or composer ? Would you be a
> _user_ of the kind of system you propose ?

Yes, and being able to use such a system is the primary reason for creating
it. My personality isn't that of the mountain climber who scales the mountain
"because it's there" or "for the challenge of it". I'm more in the George
Lucas vein -- the mentality of "I have a story I want to tell. The technology
to put the picture in my head on the screen doesn't exist, so we have to
create it." And I suppose, like Lucas, the basic story isn't anything new or
extraordinary, but it's the telling of it that makes it unique. In short,
developing the technology isn't the point for me. Using it to create the
music that's in my head is. Since that music isn't all that avant-garde, the
tools don't need to be radical (IMO), but tools such as Rosegarden have a
fundamental design limitation which seems to be insurmountable. (You mention
it below.)

> If the answer is yes, and you want such a tool, then my pragmatic response
> would be to bite the bullet and learn to use things like SuperCollider.
> They will give you complete freedom (and a hard time to exploit it), and
> virtually complete absence of the 'cultural bias' of traditional tools.

I don't know anything about SuperCollider. I'll look into it, though.
But I'm a pretty lazy user, so if it's hard to set up and record things, I
probably won't be happy. I grew up with Windoze (back to 2.0), and it seems
to have had its drowsing effect on me. I'll do the down-and-dirty programming,
but when I want to use it, it had better be simple.

> Some other points.
>
> 1. I don't think it will be a good idea to put everything in an 'integrated
> environment'. We have even now all it takes to make applications work
> together and to sync them to sample accuracy. Why should instruments
> be built in or limited to what MIDI banks have to offer ? We have good
> synths, sample players and general synthesis engines such as scsynth.
> Why should a sequencer have audio tracks ? Just kick up Ardour and make
> the two work together. While it would be nice (in some cases) to have
> a WYSIWYG editor, in many cases that's just a pain (if parts of the score
> are defined algorithmically, for instance). Anyway, if you look at some
> contemporary scores, you'll see they start off with some pages that
> just define the notation - there is no standard for many things.

I'd like to have a program that gives me one location where I can organize an
entire work -- primarily of music, but I was thinking that something like a
multimedia slide show would also be possible. (Triggering audio events, video
clips, and image display all operate very similarly at a high enough level.)

My thought is of a system where MIDI, audio, etc. would be handled by plug-in
modules. In this way, new formats could be added. I'm not sure if a track of
type "algorithm" would be possible or not. I think an algorithmic section
within a track could be accomplished, though. I haven't thought much about
compositional algorithms. I don't use them. I think of them more in terms of
effects, to be handled in a similar way to echo, delay, etc.
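To make the plug-in idea a bit more concrete, here's a rough sketch of what I have in mind -- track types registered by name, so a new format (MIDI, audio, maybe video) can be added without touching the core. All the names here are invented for illustration; nothing is implemented:

```python
# Sketch only: track-type handlers registered by format name.
# All names (TRACK_TYPES, register_track_type, ...) are hypothetical.

TRACK_TYPES = {}

def register_track_type(name):
    """Decorator that registers a track handler class under a format name."""
    def wrap(cls):
        TRACK_TYPES[name] = cls
        return cls
    return wrap

@register_track_type("midi")
class MidiTrack:
    def render(self):
        return "midi events"

@register_track_type("audio")
class AudioTrack:
    def render(self):
        return "audio buffers"

def make_track(kind):
    # Unknown formats raise KeyError; a real system would report
    # a missing plug-in to the user instead.
    return TRACK_TYPES[kind]()

print(make_track("audio").render())  # audio buffers
```

A track of type "algorithm" would then just be one more entry in the registry, which is why I think an algorithmic section within a track is feasible even if I never write one myself.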

I can envision a system that operates more on AI principles, where players
are sophisticated algorithms that subtly modify the score, and where they
play off each other as real musicians do. But I think such a system is well
beyond my programming capacity.

It brings up an interesting point, though. The relationship of the sequencer
to the score has implications that will shape what can be done with the
system.

> Starting with an existing sequencer (Rosegarden or any other I know of)
> could be hard. They have many built-in cultural dependencies (such as
> using a 'beat count' as the independent variable), and these ripple
> through from the initial design assumptions down to all levels of the
> architecture and the interfaces. It could be very hard to change that.

Yes, that 'beat count' is (IMO) the single most limiting factor of
Rosegarden. It's something I don't think I can sidestep because it's so
fundamental to the design philosophy. The master timing mechanism should be in
seconds (probably microseconds, actually, so that we can deal with integers).
All other convenience timings (beats, SMPTE) should map to seconds. (For MIDI
sequencing, seconds eventually needs to map to MIDI ticks, but this is a
trivial function.)
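As a rough sketch of what I mean -- the names and the single-constant-tempo assumption are mine, purely for illustration (a real sequencer would hold a list of tempo change points):

```python
# Sketch: master clock in integer microseconds; beats and MIDI ticks
# are derived views.  Names (TempoMap, PPQ) are hypothetical.

PPQ = 480  # MIDI ticks per quarter note (a common resolution)

class TempoMap:
    """Maps absolute time in microseconds to beats and MIDI ticks,
    assuming one constant tempo for the whole piece."""

    def __init__(self, bpm: float):
        self.us_per_beat = 60_000_000 / bpm  # microseconds per quarter note

    def beats_at(self, t_us: int) -> float:
        return t_us / self.us_per_beat

    def ticks_at(self, t_us: int) -> int:
        return round(self.beats_at(t_us) * PPQ)

tm = TempoMap(bpm=120)          # 120 BPM -> 500,000 us per beat
print(tm.beats_at(1_000_000))   # 2.0 beats after one second
print(tm.ticks_at(500_000))     # 480 ticks after one beat
```

The point is the direction of the mapping: microseconds are primary, and beats, SMPTE, and ticks are all computed from them, never the other way around.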

> I have a long term project of developing a sequencer that would be free
> of these kinds of limits, but don't wait for it. It will record events
> (e.g. notes) and parameter trajectories and arbitrary data as functions
> of time, and allow you to edit all of this. It will probably accept
> MIDI (sometimes it's practical to just play things on a keyboard rather
> than having to write them) and OSC, and output mainly OSC.
> If there is any notion of tempo or meter it could be defined in a
> hierarchical way, down to being local to a track, and the final
> interpretation of these elements will not always be done by the sequencer
> itself but could for example be delegated to an (external) instrument.
>
> 2. The model you use is based on a 'score' and 'instruments'. That's
> too simple to reflect the realities of making music. In between them
> there are players, and maybe a conductor. All of them interpret part
> of or some aspects of the score, and they interact to achieve the end
> result. Learning to interpret (rather than just play) a score and to
> play together in an ensemble or orchestra is an important part of any
> musician's education and training, and much of the 'magic' of music
> happens right at that level.

This is very true. I suppose I should think about this some. I could probably
come up with a class structure that would include 'musician' and 'conductor'.
They would be just stubs for a long, long time, but the potential would be
there to have them execute some sort of algorithm.
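Sketching what those stubs might look like (again, all names invented; this is just to show where the hooks would go):

```python
# Sketch only: 'Musician' and 'Conductor' as stub interpreters
# sitting between the score and the instruments.

class Event:
    def __init__(self, time_us: int, pitch: int, velocity: int):
        self.time_us = time_us
        self.pitch = pitch
        self.velocity = velocity

class Musician:
    """Stub player: passes events through unchanged for now; the
    interpret() hook is where a playing algorithm could later live."""
    def interpret(self, events):
        return list(events)  # future: humanize timing, shape dynamics, ...

class Conductor:
    """Stub conductor: hands each part of the score to a musician."""
    def __init__(self, musicians):
        self.musicians = musicians

    def perform(self, score):
        # score: dict mapping part name -> list of events
        return {part: m.interpret(events)
                for (part, events), m in zip(score.items(), self.musicians)}

score = {"violin": [Event(0, 60, 96)]}
out = Conductor([Musician()]).perform(score)
print(out["violin"][0].pitch)  # 60 -- unchanged by the stub
```

For a long time perform() and interpret() would do nothing interesting, but having the classes in the hierarchy from the start means the AI-flavored ideas above wouldn't require redesigning the core later.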

Long ago, I worked on a project (two of them, actually) that needed to account
for something similar. It wasn't music, but it was about characteristic behavior
(in one case of people, in the other of clouds). Users could write "plug-in"
algorithms (via scripts) to modify the behavior of the system. I could
incorporate some of those design elements in order to leave the door open for
future development.
Received on Sun Jan 30 20:15:09 2005

This archive was generated by hypermail 2.1.8 : Sun Jan 30 2005 - 20:15:10 EET