Re: [LAD] Music, Undecidability, and the tiling problem (was Re: update: OT-ish: realtime 2d placement algorithms :-/)

From: Charles Henry <czhenry@email-addr-hidden>
Date: Thu May 27 2010 - 00:35:59 EEST

On Tue, May 25, 2010 at 3:33 PM, Chris Cannam
<cannam@email-addr-hidden-day-breakfast.com> wrote:

> I think the point Neils has is just that the outcome of your noodling
> is somewhat independent of your explicit intention.  Notes that sound
> satisfying together are probably going to sound satisfying largely
> because of some intrinsic mathematical relationship, or at least
> something that is probably open to analysis to some extent but that
> you don't yourself understand or plan.  Quite an interesting
> philosophical avenue here, and one that's fairly well trodden in other
> fields (ask an English theory student about Wimsatt and Beardsley).

I've been reluctant to weigh in, because I just know I'm going to blah
blah about math, waste everyone's time, and no one will care :) I'm
good at that.

When we consider the analysis of what sounds good, we are probing a
psychological question. The mind, being unobservable and studied as
distinct from the brain itself, is impossible to measure directly. We
can model the mind and analyze whether or not our model fits observed
behavior. The actual intrinsic "math" that goes on in the brain, and
in its counterpart the mind, cannot be known exactly. So what we do is
model the various interacting processes that make up our direct
experience of music and sound. This approach is objective, even though
we may not be able to explain all of the significant interpersonal and
moment-to-moment sources of variation that affect our experience.

It's really an exciting time for the study of music psychology (and
I've been saying so for 10 years). The degree to which computers can
compose music depends on how well we can model musical experience in
humans. As musicians and composers, we approach the "tiling problem"
with a set of techniques, instruments, and vocabulary. We can get
direct, immediate feedback on the effectiveness of a given tiling,
which computers, at present, cannot. So far, computers have expanded
our techniques and instruments, while people have expanded their
vocabulary to compose novel music with them.

The point I'm getting at: the structure isn't in the music itself,
it's in the mind of the listener.

I've been toying around with the idea of modeling high-dimensional
psychoacoustic spaces as non-linear manifolds (I'll skip the details
for now, b/c I'm not sure I can describe it well yet). Because I
intend to work on it, I do think successive approximations through
modeling can push the outer boundary of musical vocabulary and
instruments further than musicians and composers alone could.
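
For the curious, here's a minimal sketch of the kind of thing I mean.
It's just one common way of doing non-linear manifold learning (Isomap
over a bank of toy spectral features), not a description of any
finished method; the features and parameters are placeholders I made
up for illustration:

  import numpy as np
  from sklearn.manifold import Isomap

  # Toy "psychoacoustic" feature vectors: one synthetic tone per row,
  # described by the magnitudes of its first few harmonics.  In a real
  # study these would come from listening data or an auditory model.
  rng = np.random.default_rng(0)
  n_tones, n_harmonics = 200, 16
  f0 = rng.uniform(100.0, 800.0, size=n_tones)      # fundamental (Hz)
  decay = rng.uniform(0.3, 0.9, size=n_tones)       # spectral rolloff
  features = np.array([d ** np.arange(n_harmonics) for d in decay])
  features *= (f0 / 800.0)[:, None]                 # crude level scaling

  # Ask Isomap for a 2-D embedding: if the tones really live on a
  # low-dimensional non-linear manifold inside the 16-D feature space,
  # the embedding should recover something close to the (f0, decay)
  # parameters that generated them.
  embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(features)
  print(embedding.shape)   # (200, 2)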

Chuck
_______________________________________________
Linux-audio-dev mailing list
Linux-audio-dev@email-addr-hidden
http://lists.linuxaudio.org/listinfo/linux-audio-dev