Acoustic Interloper

Joined: Jul 07, 2007 Posts: 2073 Location: Berks County, PA
Audio files: 89
Posted: Tue Jun 23, 2009 7:41 am
Post subject: Multi-rate sequencing of algorithmic synthesis?
Subject description: Generating music at different per-channel rates.
Composition seems as good a place as any to pose this question.
Is anybody aware of a multi-rate approach to performance-time algorithmic generation of music, with one rate per channel, where a channel for these purposes is a MIDI channel? A channel could be any voice being generated in the piece. I mention MIDI only because that is what I used in my Summer Solstice piece, and multi-rate MIDI voices are where that software is headed.
Simple example: in the good old days Steve Reich would run two identical tape loops at slightly different speeds, so that one would lag behind the other. The music was cyclic, so the leader would eventually get an entire cycle ahead of the lagger, at which point they would sync up again.
You could emulate this by playing a static MIDI sequence simultaneously at two different tempos. That might be my next step in enhancing the Scrabble-to-music program. Right now it generates a sequence of one or more tracks after each move in the game, based on the latest board configuration and the placement of the most recent word. Tempo is currently global across all MIDI channels.
I could have a per-channel tempo (all other game-to-MIDI mapping parameters are per channel), but the only problem is, if I want all subsequences to end at the same time, a sequence can become *very* long when the least common multiple of all the subsequence lengths is much larger than the individual lengths. For a simple example, if one subsequence is 4 time units and another 5, I have to generate 5 of the former and 4 of the latter, getting an overall sequence length of 20. Depending on the number of channels and the lengths involved, the total length can become very large, requiring too much time to generate and introducing noticeable delays between a move in the game and the next set of sounds being produced.
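To put numbers on the blow-up, here's a throwaway Java sketch (not from the actual program, purely illustrative) that computes the combined cycle as the least common multiple of the per-channel lengths:
Code:
// Illustration of the blow-up: the combined cycle is the LCM of the per-channel lengths.
public class LcmBlowup {
    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }
    static long lcm(long a, long b) { return a / gcd(a, b) * b; }

    public static void main(String[] args) {
        long[] lengths = {4, 5, 7, 9};   // per-channel subsequence lengths, in time units
        long total = 1;
        for (long len : lengths) total = lcm(total, len);
        System.out.println(total);       // prints 1260; the 4-unit channel repeats 315 times
    }
}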
An alternative, where I am headed, is not to generate a fixed sequence at all, but rather to generate all the notes and other MIDI data when needed. Generation for each channel can run at a different tempo.
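Roughly what I have in mind, as a minimal Java sketch - the nextNote() generator below is just a random stand-in for the real game-to-MIDI mapping, and the 300/299 ms tempos are arbitrary:
Code:
import javax.sound.midi.*;
import java.util.Random;

// Minimal sketch of on-the-fly, per-channel generation: one thread per MIDI channel,
// each free-running at its own tempo.  nextNote() is a placeholder for whatever the
// game-to-music mapping would actually supply.
public class MultiRateSketch {
    static final Random rng = new Random();

    static int nextNote(int channel) {              // placeholder generator
        return 48 + rng.nextInt(24);
    }

    static void startChannel(Receiver out, int channel, long msPerBeat) {
        Thread t = new Thread(() -> {
            try {
                while (true) {
                    int note = nextNote(channel);   // generated when needed, never pre-sequenced
                    out.send(new ShortMessage(ShortMessage.NOTE_ON, channel, note, 90), -1);
                    Thread.sleep(msPerBeat);
                    out.send(new ShortMessage(ShortMessage.NOTE_OFF, channel, note, 0), -1);
                }
            } catch (InterruptedException | InvalidMidiDataException e) {
                // stop this channel on interrupt or bad data
            }
        });
        t.setDaemon(true);
        t.start();
    }

    public static void main(String[] args) throws MidiUnavailableException {
        Receiver out = MidiSystem.getReceiver();    // default synthesizer or MIDI port
        startChannel(out, 0, 300);                  // channel 0 at 300 ms per beat
        startChannel(out, 1, 299);                  // channel 1 slowly drifts against it
        try { Thread.sleep(60_000); } catch (InterruptedException e) { }
    }
}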
I was just wondering if anyone has seen a similar technique used -- maybe it's common, what do I know?? -- and if there are some helpful pointers.
Thanks.
Dale
_________________
When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
Octahedra

Joined: Nov 29, 2008 Posts: 149 Location: Cheshire, UK
Audio files: 7
Posted: Tue Jun 23, 2009 10:10 am
I've done this sort of thing in a piece called Mountains, but none of it was in realtime. All of it was done with copy/paste etc. in a tracker (MED Soundstudio), with external MIDI synths.
I had a sequence of notes that repeated but kept getting longer, with new notes inserted between the existing ones from time to time - a lot like Philip Glass's 'additive' process. Then I took 7 copies of the sequence to create 7 different instrument parts. I divided the tempo by a different factor each time, ranging from 7 for the fastest to about 60 for the slowest, so the finished music is quite slow - a different sound world from the early minimalist stuff we've just been talking about.
The faster the tempo of a part, the higher its octave transposition. I designed the 'additive' effect so that the notes more likely to clash with the starting note - the root of the main chord - would only appear late in the sequence, so only the fast high-pitch parts would get far enough into the sequence to play those notes, before time ran out and the next sequence came in. The bass would be relatively safe.
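In code terms the growth was something like this rough Java sketch (I actually did it all by hand in the tracker; the notes and insertion points here are invented for illustration):
Code:
import java.util.ArrayList;
import java.util.List;

// Rough sketch of the 'additive' growth: the sequence repeats, and every so often a
// new note is inserted between existing ones, so each repetition is a little longer.
public class AdditiveGrowth {
    public static void main(String[] args) {
        List<Integer> seq = new ArrayList<>(List.of(60, 67, 64));  // starting notes
        int[] newNotes = {62, 69, 65, 71};                         // notes that appear later
        for (int i = 0; i < newNotes.length; i++) {
            seq.add(1 + i, newNotes[i]);      // insert between existing notes
            System.out.println(seq);          // each longer version becomes the new loop
        }
    }
}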
I got lucky. The accidental counterpoint between all those parts doing their own thing could have been awful, but it turned out quite well I think...
Sorry I can't suggest a setup that you could alter during performance - anything realtime is beyond me really.
Gordon
robsol
Stream Operator

Joined: Apr 24, 2009 Posts: 2494 Location: Bristol UK
Audio files: 495
Posted: Tue Jun 23, 2009 11:16 am
Have you heard of standard MIDI files (SMF) type 2?
They can contain several tracks each with independent tempos. I have not seen much software that supports type 2 import/generation but I must admit that I haven't looked for it.
Even though it sounds quite interesting, I must admit that I haven't experimented with this format either; I merely know that it exists...
_________________
Muied Lumens Sub Forum
Bandcamp
Acoustic Interloper

Joined: Jul 07, 2007 Posts: 2073 Location: Berks County, PA
Audio files: 89
Posted: Tue Jun 23, 2009 12:11 pm
Octahedra wrote: "so the finished music is quite slow - a different sound world from the early minimalist stuff we've just been talking about."
My piece Saturday started out sounding minimalist, but by the end it had transformed into something more along the lines of Miles Davis' On the Corner with respect to rhythmic density. Minimalism is, I think, a subset of the space touched by this technique.
Quote: "I got lucky. The accidental counterpoint between all those parts doing their own thing could have been awful, but it turned out quite well I think..."
I know exactly what you mean. By the end of Saturday's piece I had most voices playing from different scales (albeit all with the same tonic), and when I ran out of ideas, I just started hacking the lengths of the per-channel accent patterns, which gave an effect similar to what I am aiming for here. The harmonies seemed to work because the tonic was the same in all cases, and the fifths as well, and their octaves, of course. From there the scales diverged, but since I mapped each Scrabble letter in each scale to the same corresponding position in the other scales (some scales had fewer notes and no exact correspondences, so I'd just reuse some intervals in those scales), the different voices would cyclically diverge and converge with respect to harmony. It could have sucked, but it didn't!
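Very roughly, the mapping works like the following invented Java illustration - not my actual code, and the scales and the wrap-around rule here are just one plausible way to do it:
Code:
// Hypothetical illustration: every letter maps to the same position in each scale;
// scales with fewer degrees wrap around and reuse intervals, so all channels share
// the tonic but diverge above it.
public class LetterToPitch {
    static int pitchFor(char letter, int[] scale, int tonicMidi) {
        int pos = Character.toUpperCase(letter) - 'A';   // 'A' -> 0, 'B' -> 1, ...
        return tonicMidi + 12 * (pos / scale.length) + scale[pos % scale.length];
    }

    public static void main(String[] args) {
        int[] major = {0, 2, 4, 5, 7, 9, 11};     // semitone offsets from the tonic
        int[] pentatonic = {0, 2, 4, 7, 9};       // fewer degrees, so positions wrap sooner
        System.out.println(pitchFor('H', major, 60));       // 72
        System.out.println(pitchFor('H', pentatonic, 60));  // 76: same letter, different scale
    }
}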
I think the same is true of the polyrhythmic accent patterns. They, too, diverge and converge, and so it is that second-order rhythm of divergence and convergence that catches the ear.
Quote: "Sorry I can't suggest a setup that you could alter during performance - anything realtime is beyond me really."
Good! I like novel work that pans out!
Muied Lumens wrote: "Have you heard of standard MIDI files (SMF) type 2? They can contain several tracks each with independent tempos. I have not seen much software that supports type 2 import/generation but I must admit that I haven't looked for it."
I can do this statically using sequences now, with or without files, but the plan is to do it on the fly in real time, with storage of the sequence in a file purely optional.
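For the record, Java's javax.sound.midi will at least report whether the installed MIDI file writer can produce a type 2 file - a quick check, nothing more (my guess is that most providers only offer types 0 and 1):
Code:
import javax.sound.midi.*;

// Ask the default MIDI file writer which SMF types it can produce for a
// multi-track Sequence; type 2 is the one whose tracks are independent sequences.
public class SmfType2Check {
    public static void main(String[] args) throws InvalidMidiDataException {
        Sequence seq = new Sequence(Sequence.PPQ, 480);
        seq.createTrack();                        // each track in a type 2 file could carry
        seq.createTrack();                        // its own tempo meta events (0x51)
        for (int type : MidiSystem.getMidiFileTypes(seq)) {
            System.out.println("Writable SMF type: " + type);
        }
        // MidiSystem.write(seq, 2, new java.io.File("twotempo.mid"));  // only if 2 is listed
    }
}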
My working assumption for now is that all pre-cooked sequencers are single-rate. If anybody knows of a counterexample, please let me know.
Thanks for the responses.
_________________
When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
Antimon
Joined: Jan 18, 2005 Posts: 4145 Location: Sweden
Audio files: 371
G2 patch files: 100
Posted: Tue Jun 23, 2009 2:06 pm
Not sure I am completely understanding everything here, but I hack up independent sequencers in ChucK all the time, though I haven't experimented that much with slight differences in tempo. You can do stuff like this:
Code:
fun void firstPlayer() {
    while (true) {
        // get note
        // MIDI on
        300::ms => now;  // wait 300 milliseconds
        // MIDI off
    }
}

fun void secondPlayer() {
    while (true) {
        // get note
        // MIDI on
        299::ms => now;  // wait 299 milliseconds
        // MIDI off
    }
}

spork ~ firstPlayer();
spork ~ secondPlayer();
// keep the parent shred alive, or the sporked players are removed along with it
while (true) 1::second => now;
This starts two sequencers, one (firstPlayer) with a 300 millisecond delay between each beat, and another (secondPlayer) with a 299 millisecond delay. Because of the way ChucK's clock is constructed, the drift between the two sequencers' cycles is exact and repeatable, and each sequencer stays precisely on its own schedule. ChucK is also great for real-time control, via MIDI, OSC, joysticks or other things.
Since you're doing stuff in Java I guess you may already be doing stuff like this, and I just don't understand what the issue is.
/Stefan
_________________
Antimon's Window
@soundcloud @Flattr home - you can't explain music
Acoustic Interloper

Joined: Jul 07, 2007 Posts: 2073 Location: Berks County, PA
Audio files: 89
Posted: Tue Jun 23, 2009 2:18 pm
Antimon wrote: "Since you're doing stuff in Java I guess you may already be doing stuff like this, and I just don't understand what the issue is."
Yep. I actually did some of this in Python and ChucK with chess-to-music last year, and switched over to Java mostly because I teach Java regularly and needed an interesting project for my classes.
The issue is mostly a search for references on applying this technology, musical technique, etc. I plan to implement the next round of this, but before writing code, I am collecting references in the analysis phase, so to speak. I just wondered whether there are essays similar to Reich's on doing this, or any representative examples I should look at and listen to, before commencing the next round of coding.
Thanks.
_________________
When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
Antimon
Joined: Jan 18, 2005 Posts: 4145 Location: Sweden
Audio files: 371
G2 patch files: 100
Posted: Tue Jun 23, 2009 2:40 pm
Acoustic Interloper wrote: "The issue is mostly a search for references on applying this technology, musical technique, etc. ... I just wondered whether there are essays similar to Reich's on doing this, or any representative examples I should look at and listen to, before commencing the next round of coding."
Then I'm with you. Nothing like a dimwitted entry to bring out a simpler explanation.
My thoughts go to Brian Eno's liner notes on his LP Discreet Music, about his treatment of the Pachelbel canon.
Church bells also come to mind. I've always found the drifting in and out of sync of two almost-synchronized church bells to be a hypnotic effect (I wrote a G2 patch inspired by this feature of church bells a while back). It can also be heard on some railway crossing bells here.
/Stefan
_________________
Antimon's Window
@soundcloud @Flattr home - you can't explain music
Acoustic Interloper

Joined: Jul 07, 2007 Posts: 2073 Location: Berks County, PA
Audio files: 89
Posted: Tue Jun 23, 2009 3:03 pm
Antimon wrote: "Then I'm with you. Nothing like a dimwitted entry to bring out a simpler explanation."
I write too much text.
Quote: "My thoughts go to Brian Eno's liner notes on his LP Discreet Music, about his treatment of the Pachelbel canon."
Thanks, I'll get it. I am interested in studying canons for reasons both related and tangential to this post, so this sounds like a good pick.
Quote: "Church bells also come to mind. I've always found the drifting in and out of sync of two almost-synchronized church bells to be a hypnotic effect (I wrote a G2 patch inspired by this feature of church bells a while back). It can also be heard on some railway crossing bells here."
My office is under an old college bell, but only one. Time for some field work.
Thanks!
_________________
When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
Octahedra

Joined: Nov 29, 2008 Posts: 149 Location: Cheshire, UK
Audio files: 7
Posted: Wed Jun 24, 2009 8:17 am
Acoustic Interloper wrote: "I mapped each Scrabble letter in each scale to the same corresponding position in the other scales (some scales had fewer notes and no exact correspondences, so I'd just reuse some intervals in those scales), the different voices would cyclically diverge and converge with respect to harmony. It could have sucked, but it didn't!"
I just found the mp3 and had a listen - I agree it works very well. If you're going to have the sequences running at different speeds, it might be useful to keep most of them quiet or muted most of the time, so that at any time you can concentrate on the rhythm relationships between just a few parts, and over a long timespan you can change the atmosphere quite a lot by trying different combinations.
If you ever find you need more data, you could always use a slightly more complicated game - like, say, Twilight Imperium!
Gordon
Acoustic Interloper

Joined: Jul 07, 2007 Posts: 2073 Location: Berks County, PA
Audio files: 89
Posted: Wed Jun 24, 2009 9:02 am
Octahedra wrote: "I just found the mp3 and had a listen - I agree it works very well."
Glad you liked it!
Quote: "If you're going to have the sequences running at different speeds, it might be useful to keep most of them quiet or muted most of the time, so that at any time you can concentrate on the rhythm relationships between just a few parts, and over a long timespan you can change the atmosphere quite a lot by trying different combinations."
That's a good description of the overall modus operandi. I pulled out most of the stops only at the end. A lot of times I'll play around with 2 to 4 voices and gradually work one out of the mix as I work another in.
I like the fact that this really works like an interactive instrument, that practice improves performance, etc. It's also interesting that, when I get stuck for an idea, the simple act of putting a few more words out on the board can change the groove noticeably. The sequence of chords branches out from the most recently formed word, so adding a new word in a region remote from the last add changes the sequence while keeping the overall sound. The biggest drawback to Scrabble is that, once you have the majority of the words formed, each subsequent move tends to be a smaller and smaller delta in relation to the overall space. Some games would not be prone to that dynamic.
Quote: "If you ever find you need more data, you could always use a slightly more complicated game - like, say, Twilight Imperium!"
This is always a topic for discussion. I think the next classroom move, so to speak, will be to further develop a game design environment, and then design a game intended for music generation. I don't teach Java again until Spring 2010, and the most I am likely to get done this summer is adding this multi-rate bit. I may get funding to write an extensible lab book for related Java projects over the next three summers, in which case I'll be putting some serious time into this next summer.
Thanks for the insights, everyone.
_________________
When the stream is deep
my wild little dog frolics,
when shallow, she drinks.