I think it's the *computers* that need all the help they can get, not the composers.
I think we are in basic agreement on this stuff. As for the last: Dang! It must be a lot easier for you to write music than it is for me
No ... it's a lot easier for me to write *software* than music. But my own excursions into algorithmic composition have far less lofty goals. First of all, my music is purely digital electronic -- I don't compose for conventional instruments. Second, it's purely a studio, recorded music -- I don't do live performances. And third, mostly I work in conventional general-purpose programming languages, rather than using other peoples' packages that are music-specific.
I don't have a problem with coding up something that generates many hours of music, then going through the output and editing it down to a piece.
Joined: Jul 07, 2007 Posts: 1602 Location: Berks County, PA
Audio files: 34
Posted: Wed Oct 31, 2007 8:15 am Post subject:
Having sampled this thread numerous times, I figure this post is a good place to finally jump in.
Other than that, 'algorithmic randomness' is something of an oxymoron: an algorithm is a procedure composed of logical steps; it has a beginning and an end. There can be no true randomness in computers.
My work on real-time improvisational accompaniment driven by MIDI guitar playing (see http://home.ptd.net/~dparson/dafx06_dparson_final.pdf and http://home.ptd.net/~dparson/elmu06talk.pdf) pulls nondeterminism out of the nondeterminism in a human string player's real-time performance. Variations in a player's state, venue acoustics, player-to-player interaction and player-audience interaction (to name a few) can introduce nondeterminism into the human performance, and an accompaniment algorithm that derives its behavior in part from nondeterministic input is itself nondeterministic. I don't know whether this qualifies as 'random,' and I am not really sure what 'random' means in this context, but if it means 'not correlated to the original code,' then there are at least some aspects of this computer behavior that are not correlated, hence surprising.
Of course there are conventional 'randomizing' tricks like using the real-time clock to seed a pseudo-random number generator, but these rapidly degenerate into noise. They are fundamentally non-correlated with the code. My code is looking for nondeterministic correlation with the playing.
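To make the contrast concrete, here is a minimal C++ sketch of the two approaches. The function names and the note-sum heuristic are my own illustration, not code from the actual system:

```cpp
#include <cstdint>
#include <numeric>
#include <random>
#include <vector>

// The conventional trick: seed a pseudo-random generator from a clock
// or hardware source. Its output stream is uncorrelated with the music.
inline std::mt19937 clock_seeded_rng() {
    return std::mt19937(std::random_device{}());
}

// Input-correlated variation: derive a small perturbation from the
// performance itself (here, a hypothetical window of recent MIDI note
// numbers), so the variation stays tied to what the player is doing.
inline int correlated_offset(const std::vector<std::uint8_t>& recent_notes) {
    if (recent_notes.empty()) return 0;
    int sum = std::accumulate(recent_notes.begin(), recent_notes.end(), 0);
    return (sum % 5) - 2;  // small offset in [-2, 2], driven by the playing
}
```

The first generator drifts off into noise on its own; the second changes only when the playing changes.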
In the search for interactive and easy-to-use systems, my personal quest was not very fruitful either: some years ago I developed an application in Visual C++ for my BSc thesis that uses evolutionary algorithms and the MIDI protocol to compose music. One of the most important observations was that telling users how the system reacts to their input did not prove helpful at all. I found that users adapt much more easily if they are not informed at all and are instead left to explore the sonic possibilities.
. . .
Things can get very complex when you start working with larger "compositional" blocks: there are programs that work with individual notes, but once you start introducing chords, motives or whole compositions it can easily spin out of control. For my program I used chords -- four voices (soprano, alto, tenor and bass) -- and criteria derived from classical harmony. The resultant music was often very "pointillistic," or it had sustained notes, depending on the input material and on how mutations and crossovers interacted.
It is a promising field nevertheless. I might as well draw a parallel with electronic music: in its early days, very few people believed that it could be done either.
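A minimal sketch of the kind of four-voice representation and classical-harmony criterion described above. The types and the simplified parallel-fifths rule are illustrative assumptions, not the thesis code:

```cpp
#include <array>
#include <cstdlib>

// A four-voice chord: soprano, alto, tenor, bass as MIDI note numbers.
using Chord = std::array<int, 4>;

// A perfect fifth (or compound fifth) between two voices.
inline bool is_fifth(int a, int b) { return std::abs(a - b) % 12 == 7; }

// One classical-harmony criterion, much simplified: flag parallel
// fifths between any pair of voices moving from chord `from` to `to`.
// A genetic fitness function could penalize chromosomes that trip it.
inline bool has_parallel_fifths(const Chord& from, const Chord& to) {
    for (int i = 0; i < 4; ++i)
        for (int j = i + 1; j < 4; ++j) {
            bool both_moved = from[i] != to[i] && from[j] != to[j];
            if (both_moved && is_fifth(from[i], from[j]) &&
                is_fifth(to[i], to[j]))
                return true;
        }
    return false;
}
```

A real rule set would of course cover spacing, doubling, hidden fifths and octaves, and so on; this is just the flavor of such criteria.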
My system analyzes input rhythm, chords, and drones, and matches the output of that analysis to stored motives and a score. Being driven fundamentally by my guitar finger picking, it is not composing from a blank slate, but it does sometimes come up with interesting accompaniment. Two points are worth noting:
1. In the analysis stages of extracting rhythm, chords, etc. from my guitar playing, mistakes are not deadly. They are usually reasonably well correlated with my playing; small bugs serve as another source of nondeterministic variation. Of course, I may later fix a bug and destroy some aspect of a particular accompaniment, but so it goes. My own playing changes over time, too. I am not looking to freeze a moment; recordings are good enough for that.
2. From above:
uses evolutionary algorithms and the MIDI protocol to compose music
One of my goals, stated in those cited papers, was to use the genetic algorithm to generate accompaniment, based on both the analysis of the incoming MIDI stream, and the matching of that to motives/score. I put a lot of work into that GA back end in late 2006, and it is the part that has FLOPPED so far, and the part that I was planning to discuss in this thread some time ago.
The GA is an example of what's called a 'weak method' in artificial intelligence jargon. It is 'weak' not because it's a faulty algorithm, but because its application does not require one to 'load the dice,' so to speak, with cheats and heuristics and special-purpose code. It's a general-purpose algorithm, in this case examining application-specific data (my MIDI + the analysis data) and generating application-specific data (output MIDI), using a general approach to preserving good musical 'genes' across generations (genes are code strings that conform to a typed R-H-M-T pattern: Rhythm, Harmony, Melody and Timbre).
The GA part of this has flopped so far, and I have therefore put the GA on a back burner, because, as I see the system's behavior, *the search space is too big and mostly boring.* It's like going into deep space and trying to bump into interesting stars and planets -- interesting musical structures in this case -- by starting out with an initial random sample of space (no long-range telescopes). There is just too much boring, empty space, so you wind up with chromosomes that are all boring, i.e., nothing interesting on which the GA can select and cross over behaviors. Of course, I can seed the gene pool by hand, at which point we no longer have a 'weak method.' But I find it even less work to write accompanist code that is basically correlated to ideas I have for a given piece, and skip the GA altogether. It's much more productive and converges on interesting accompaniment quickly *when I am in a good compositional state!* Just like any other kind of composing, except that it *is* nondeterministic, depending in part on my own state at performance time.
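For concreteness, here is a toy sketch of what typed R-H-M-T chromosomes and a single-point crossover might look like. The integer encoding and the variety-based fitness stand-in are my assumptions for illustration, not the actual back end:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// One gene conforms to the typed R-H-M-T pattern: Rhythm, Harmony,
// Melody, Timbre, each encoded here as a small integer.
struct Gene { int rhythm, harmony, melody, timbre; };
using Chromosome = std::vector<Gene>;

// A stand-in fitness: reward variety between adjacent genes. This is
// exactly the hard part -- a boring random population scores near
// zero, so selection has nothing interesting to work with.
inline int fitness(const Chromosome& c) {
    int variety = 0;
    for (std::size_t i = 1; i < c.size(); ++i)
        variety += (c[i].melody != c[i - 1].melody)
                 + (c[i].rhythm != c[i - 1].rhythm);
    return variety;
}

// Single-point crossover at a gene boundary, so each child remains a
// well-typed sequence of R-H-M-T genes.
inline std::pair<Chromosome, Chromosome>
crossover(const Chromosome& a, const Chromosome& b, std::size_t point) {
    Chromosome c1(a.begin(), a.begin() + point);
    c1.insert(c1.end(), b.begin() + point, b.end());
    Chromosome c2(b.begin(), b.begin() + point);
    c2.insert(c2.end(), a.begin() + point, a.end());
    return {c1, c2};
}
```

The mechanics are simple; the problem described above is that a random initial population scores uniformly low on any musically meaningful fitness, so crossover has nothing good to recombine.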
I have run into an interesting problem that requires attention, although I haven't worked on this system in a few months because my limited musical time has been going into novel signal processing on audio streams lately. The problem is outlined by the attached graph below from my talk at electro-music 2006. The minima in that graph are places where score matching was getting stuck. Basically, score matching takes the current chord-mode-drone triplet from my incoming MIDI stream and tries to match it to a saved trace of a previous reference performance (actually a merger of chord-mode-drone triplets from a number of previous performances, in an attempt to average out the errors). When it matches a location in the 'score trace,' this gives downstream accompaniment modules a 'plan for the future' in predicting where to go.
What the graph showed from 2006 is that I found it necessary to give score matching more leeway in moving forward than in going backward. The original algorithm allowed forward or backward motion only if the triplet in the score trace matched my extracted playing to within one note. (The extracted playing is not exactly what I played -- chord, mode and drone have already been fitted into a framework, and they are octave-invariant, so a max error of one note is not as tight as it sounds.) The trouble was, the score matcher tended to be too conservative, getting stuck in matches because moving forward would have required accepting mismatches bigger than one note. Any accompaniment based on score matching would get stuck and keep playing the same phrase over and over, because it was 'afraid' to move.
My solution in 2006 was to give it more leeway for moving forward in the score: it could allow two notes of error when moving forward, but only one when moving backward. Statistically, the score matcher started reaching the end of the score about the time I reached the end of the piece, so this seemed like an acceptable kludge.
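The asymmetric-tolerance idea can be sketched roughly like this. The Triplet encoding and the distance measure are simplified illustrations of the chord-mode-drone matching, not the real matcher:

```cpp
#include <cmath>
#include <vector>

// A chord-mode-drone triplet, octave-invariant, as described above.
struct Triplet { int chord, mode, drone; };

// Illustrative note-distance between the live triplet and a score-trace
// entry: sum of absolute field differences.
inline int distance(const Triplet& a, const Triplet& b) {
    return std::abs(a.chord - b.chord) + std::abs(a.mode - b.mode)
         + std::abs(a.drone - b.drone);
}

// The 2006 kludge: allow up to two notes of error when moving forward
// in the score trace, but only one when moving backward, so the
// matcher is less 'afraid' to advance.
inline int match_step(const std::vector<Triplet>& trace, int pos,
                      const Triplet& live) {
    const int kForwardTol = 2, kBackwardTol = 1;
    if (pos + 1 < (int)trace.size() &&
        distance(trace[pos + 1], live) <= kForwardTol)
        return pos + 1;  // advance in the score
    if (pos > 0 && distance(trace[pos - 1], live) <= kBackwardTol)
        return pos - 1;  // retreat
    return pos;          // stay put
}
```

With this bias the matcher drifts forward rather than freezing, which is also consistent with the end-of-score sticking behavior described next: once `pos` reaches the last trace entry, there is nowhere left to advance.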
In working with this again during summer, 2007, I found that a bass accompanist I wrote for a rather longer piece than previously tried (about 15 minutes) would sound really interesting at the start of the piece -- the first time my wife walked into the house during a duet with the machine, she asked whether my son was on the bass -- but it would seemingly *run out of ideas* near the end of the piece and start noodling around in a dull phrase again. I haven't actually debugged the problem yet, but it appears that score matching is now being nudged to the end of the piece ahead of me, at which point it gets stuck in a local minimum at the end of the score.
My planned solution is to make long-distance jumps back into the score more feasible when stuck at the end. I am not too worried about getting to the "correct place," because this stuff is improvised and length tends to vary from performance to performance anyway. As long as it jumps to a reasonably consonant spot in the score, it will sound fine.
Score matching is also the most computationally expensive part of the running process, so I have additional work to do in there.
I guess my summary of the GA issue, which is the issue related to the original post in this thread, is that most of the potential musical space is empty space, entropic and dead boring, and it needs a serious compositional nudge from a composer who probably has no real clue where his/her musical ideas really come from. The search space is big, and people who are good at this have a wealth of space-reduction strategies that I have no clue how to put into a computer. The GA needs interesting genes to start with; I find it faster just to compose the chromosomes directly out of C++ library calls!
There's an early example of this program in my audio files on this forum, called "Opposing Force," which uses the program to generate weapon and machine noises in response to my guitar playing. This is mostly percussive, bursty accompaniment; I don't have an example of more harmonic accompaniment posted at present. My son Jeremy is playing bass on this sample.
_________________ one finds oneself counting,
one knows not what,
notes in a stream, steps in a forest,
years in a life, items in a list of todo's;
always getting ready to come down on the one
What if we didn't ask "can computers write expressive music?" and instead asked whether writing algorithms can be an expressive act?
This is pretty much the issue I've been struggling with, and have asked members of this forum for help on.
Not so much whether, as how; and not so much how, as how without getting bogged down in the minutia of the process.
There are algorithmic composers who are really algorithmic music programmers, programming in a text-based language with or without built-in music composition extensions: Forth/HMSL, SAIL/Sampson and Lisp/Symbolic Composer, to name a few I'm personally familiar with.
These do the job. You can get any level of nuance, any level of complexity, any level of subtlety you are willing to put the time in to achieve. Which is why a lot of serious composers use them and generally won't consider looking elsewhere.
My problem with them is twofold: (1) they are not interactive, and (2) they require a huge effort for relatively little payback.
Regarding the effort, I'm not looking for shortcuts, because by and large you get what you pay for. As mentioned before, art takes time. But just putting in a ton of tedium has no virtue in and of itself.
Also, the effort is not just in writing the code, it's also in knowing what you've written later. Dozens upon dozens of seriously clever sections that made the piece work so well artistically end up as write-only code you have little hope of modifying or reusing later because you no longer know what the hell they are doing. Most of us who've gone this route recognize the problem. We have perhaps one or two pieces we're very proud of, but they are pretty well frozen because the code became impenetrable.
A computer based algorithmic composing environment needs not only to be capable of dealing with large pieces without sacrificing nuance and subtlety, it needs to do it at multiple levels of abstraction, so the process for the composer need not be a coding grind, and cool processes either laboriously developed or serendipitously discovered can be reused as variations, transformations, and so on. If not effortlessly then at least with low enough overhead that it becomes routinely practical.
This is basically the wall I mentioned in an earlier post, that I believe bacus and I are trying to deal with coming from two different directions. More on that, perhaps, in another post.
Regarding interactivity, this is a personal requirement -- perhaps not one others need in order to write perfectly expressive music -- but I gotta have it. And, frankly, I'm suspicious of those who claim they don't. Beethoven may have done fine without it, but even he, in his earlier years, composed with a piano within reach.
In any event, I think that for the time being, a computer-based algorithmic composing program that is not fully interactive is probably not going to be able to provide the environment needed to create expressive music. I don't claim it has to be real-time and immediate, as I prefer and as my software has always been constructed, but there at least needs to be a very short write-listen-modify cycle. Very short.
Given the above, I think the answer is yes, algorithmic composing can be an expressive act.
I hear what you're saying about algorithmic composition software needing a very short write-listen-modify cycle. Something similar can be said about software development -- if it takes too long to see the results, it's not very encouraging.
The improvisor software I built (see http://land.percussa.com/audiocubes-improvisor/) is based on the idea that you draw a series of patterns you want to compose algorithmically with, and then combine these patterns in various ways using physical cubes, each representing one of the patterns. A video makes this clearer:
_________________ AudioCubes - smart blocks and software for composition, sound design and live performance. Learn more at http://www.percussa.com