electro-music.com   Dedicated to experimental electro-acoustic and electronic music

Acoustic Interloper: "Toward a Quantum Computer Music"
Page 1 of 1 [7 Posts]
dewdrop_world
Joined: Aug 28, 2006 | Posts: 858 | Location: Guangzhou, China | Audio files: 4

Posted: Mon Jul 23, 2007 6:59 pm    Post subject: Acoustic Interloper: "Toward a Quantum Computer Music"

Acoustic Interloper recently posted some papers that I think are interesting enough to warrant their own thread.

Acoustic Interloper wrote:
Yeah, interacting with reactions coming out of one's code can be a lot more alpha-interesting than interacting with the static code. You might be interested in this paper ( http://home.ptd.net/~dparson/dafx06_dparson_final.pdf ) from DAFx06, or the electro-music 2006 presentation that preceded it ( http://home.ptd.net/~dparson/elmu06talk.pdf ), or even better the somewhat speculative draft ( http://home.ptd.net/~dparson/icmc06_dparson.pdf ). A quote from the latter:

"Instead of using fixed meter, tonal
progressions, scales, chords, pedal points and melody notes
as the basis for composition, this method extracts and stores
digital superpositions of these musical structures from MIDI
performance data. Subsequent performance, which
invariably includes nondeterministic contributions from
performers, instruments and performance environments,
triggers alternative bindings for superpositional musical
structures that result in novel computer-collaborative
accompaniment. This approach unifies composition and
improvisation."

There has been academic study of matching probabilistic 'scores' to performance data -- I'm about halfway through David Temperley's *Music and Probability* since starting it in January -- but most of these impotent-old-wizard-in-a-tower studies are aimed at controlled applications like score matching. I haven't read much on using it in live performance/improv situations, although it may be that, unlike the impotent wizards, all the hot stud performers spend less time writing about it and more time doing it. Except on web forums, of course.


I'm getting verklempt... there's your topic... discuss!

James

_________________
ddw online: http://www.dewdrop-world.net
sc3 online: http://supercollider.sourceforge.net
dewdrop_world

Posted: Mon Jul 23, 2007 7:19 pm

I just read the papers this afternoon, haven't quite fully digested yet.

For good or ill, these days research for me is primarily opportunistic, or parasitic... what can I steal for my own work? So I find myself most interested in the general concepts of the stage 1 and stage 2 "writers" to parse incoming note data into efficient structures for later recombination. Right now I'm using relatively simple array manipulations on pre-seeded note data that I code by hand (and getting a lot of mileage out of it), but I do want to play lead material live and right now the interaction is more one-sided than I would like. The computer generates a harmony and I respond to it. There are great ideas in here about inferring harmonic context from melodic data, which I'm sure will make their way into my code.

For my purposes, the training and "mapping" phases represent a specific working method which may or may not be generally applicable. My gut feeling is that I (speaking only for myself) probably don't want to work that way. I'm not so much of an instrumentalist anymore -- used to be a pretty decent flutist but those chops have decayed significantly. I'm better at composing rules and shaping them into a composition.

Anyway, great stuff, really got me thinking!

James

Acoustic Interloper
Joined: Jul 07, 2007 | Posts: 2073 | Location: Berks County, PA | Audio files: 89

Posted: Tue Jul 24, 2007 9:25 am

dewdrop_world wrote:
I just read the papers this afternoon, haven't quite fully digested yet.


Glad you liked it enough to start a thread! Still digesting myself, actually, because I don't always get completely satisfactory results with a reasonable amount of work on the back-end MIDI generator plugins, and of course sometimes don't get satisfactory results at all and wind up ChucKing a composition. On the other hand, having outlined my first proposed piece for EM2008 -- strictly mic'd banjo running through an audio FX chain in Live, with some added hints from Steve Reich's book, no MIDI at all -- I suddenly see structural elements of the non-MIDI stuff I am doing that I can work back into the MIDIME software of the papers. Music as a gradual process, indeed!

Quote:

For good or ill, these days research for me is primarily opportunistic, or parasitic... what can I steal for my own work?


Perfect. An engineer at heart! That's stock in trade for us folkies.

Quote:

So I find myself most interested in the general concepts of the stage 1 and stage 2 "writers" to parse incoming note data into efficient structures for later recombination. Right now I'm using relatively simple array manipulations on pre-seeded note data that I code by hand (and getting a lot of mileage out of it), but I do want to play lead material live and right now the interaction is more one-sided than I would like. The computer generates a harmony and I respond to it. There are great ideas in here about inferring harmonic context from melodic data, which I'm sure will make their way into my code.


On a non-MIDI note, I discovered over the weekend that if I am playing acoustic banjo with a bass pedal point note that is hit periodically at the same location in a finger picking pattern, and I filter the mic'd banjo with a notch filter for just that note and then delay it by 1 measure (or whatever the time interval of that pedal point is in the finger pattern), then I've got myself a self-generating metronome. True, it entrains my playing to whatever tempo I've set for the delay, but of course I can tie that to an expression pedal if I want. Even without that, though, there is enough elasticity in my finger picking that the 1-measure-metronome breathes with me.
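The self-generating metronome above can be sketched at the note-event level (a simplification of mine; the actual setup filters mic'd audio in Live, and the pitch numbers and function names below are my own illustration): isolate hits on the pedal-point pitch and re-emit each one exactly one measure later.

```python
# Event-level sketch of the "self-generating metronome" idea: hits on the
# pedal-point pitch are isolated (standing in for the notch/band-pass stage)
# and echoed one measure later, so the player's own bass note becomes the click.

MEASURE = 4.0  # delay in beats; tie this to an expression pedal to vary tempo

def metronome_echoes(events, pedal_pitch, measure=MEASURE):
    """events: list of (time_in_beats, midi_pitch). Returns the echo events."""
    return [(t + measure, pitch)
            for (t, pitch) in events
            if pitch == pedal_pitch]

# A finger-picking pattern with the pedal point (D2 = MIDI 38) on the downbeat:
performance = [(0.0, 38), (0.5, 62), (1.0, 66), (4.0, 38), (4.5, 62)]
echoes = metronome_echoes(performance, pedal_pitch=38)
# Each pedal-point hit returns one measure later, entraining the next measure.
```

The elasticity Dale describes comes from the fact that the echo tracks the player's actual hit times, not a fixed grid.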

A promising philosophy generally is to try to get the computer to do the ten other things that might be rolling around in the back of one's head while the body is playing whatever it can manage.

Quote:

For my purposes, the training and "mapping" phases represent a specific working method which may or may not be generally applicable. My gut feeling is that I (speaking only for myself) probably don't want to work that way. I'm not so much of an instrumentalist anymore -- used to be a pretty decent flutist but those chops have decayed significantly. I'm better at composing rules and shaping them into a composition.


Yep, the software and design flow is definitely customized to my way of working, which is part of the genesis of the name MIDIME. My little clone. Whatever means is most effective for getting your impulses into digital form without blocking your way is the way to go.

Quote:

Anyway, great stuff, really got me thinking!

James


Glad you found it useful.

One new observation about the less immediate, 'superposition' aspect, especially of the second paper. I was just reading Steve Reich's *Writings on Music 1965-2000* this morning, and in the short essay "15. Steve Reich and Musicians [1972]" he says this, in the context of his phase-varying-pattern-driven-compositions:

Quote:

The question often arises as to what contributions the performers make to the music. The answer is that they select the resulting patterns in all compositions that have resulting patterns, and that certain details of the music are worked out by members of the ensemble during rehearsals. Resulting patterns are melodic patterns that result from the combination of two or more identical instruments playing the same repeating melodic pattern one or more beats out of phase with each other. During the selection of resulting patterns to be sung in the second section of *Drumming*, Joan LaBarbara, Jay Clayton, Judy Sherman, and I all contributed various patterns we heard resulting from the combination of the three marimbas. These patterns were selected, and an order for singing them worked out, with the help of tape loops of the various marimba combinations played over and over again at my studio during rehearsals held throughout the summer of 1971. Similarly, in the resulting patterns for *Six Pianos*, Steve Chambers, James Preiss and I worked out the resulting patterns and the order in which to play them during rehearsals at the Baldwin Piano Store during the fall and winter of 1972-73.

During the summer of 1973, in Seattle, I worked with different singers in the marimba section of *Drumming* who heard and sang very different resulting patterns than the singers in New York. When I returned to New York, I showed the new resulting patterns to Jay Clayton and Joan LaBarbara, who decided to incorporate some of these patterns into their own version. The details of the music changed when the performers changed.

Selecting resulting patterns is not improvising; it is actually filling in the details of the composition itself. It offers the performers the opportunity to listen to minute details and to sing or play the ones he or she finds most musical.


The second paragraph in particular really hit me, because this is musical superposition par excellence, in the sense I had in mind in the second paper. Reich's compositions are superpositions of emergent patterns akin to interference patterns in wave physics, and the performers' act of deciding which notes to play constitutes collapsing the superposition into concrete notes. The second paragraph is especially important because it demonstrates, through the selection of concrete notes by a second set of performers, that the composition is indeed a superposition.

Incidentally, the notion of a musical superposition came to me while debugging the stage 2 pattern matcher while in the midst of a fever at about 2 am in Feb. of 2006. I finger pick arpeggios, and so the chords are not on-again-off-again the way a chord chart would imply; they probably aren't for any manual style of playing. Anyway, in looking at the chord debug traces, I saw that chord matching was hovering in a cloud around a space of closely related chords rather than sitting still on a stable chord. It struck me that this was like quantum mechanics, where the *position* of an electron is actually an abstraction, usually represented as a probability function. Actually the electron is in all those places at the same time. In the multiverse interpretation of quantum mechanics, you could say that some number of universes exist that hold each distinct position for the electron, at the time that the position is actually measured. Having stumbled across the Reich reference this morning, I am more convinced than ever that musical superposition is a pretty deep vein of gold to be mined.
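The "cloud around a space of closely related chords" can be rendered as a toy model (my illustration, not the MIDIME stage 2 matcher itself): the matcher's state is a weighted distribution over candidate chords, and a performance event "measures" it, collapsing the cloud to one concrete chord.

```python
import random

# Toy model of a musical superposition: a weighted cloud of closely related
# chords (weights here are invented for illustration). "Measurement" at
# performance time collapses the cloud to a single concrete chord.

superposition = {"G": 0.4, "Em": 0.3, "Cmaj7": 0.2, "Am7": 0.1}

def collapse(cloud, rng=random.Random(2006)):
    """Pick one chord with probability proportional to its weight."""
    chords, weights = zip(*cloud.items())
    return rng.choices(chords, weights=weights, k=1)[0]

chord = collapse(superposition)   # one universe's worth of accompaniment
```

Repeated collapses with different performance input would wander through the cloud the way the debug traces did, rather than settling on one stable chord.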

Take care.

Dale

_________________
When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
Acoustic Interloper

Posted: Thu Jul 26, 2007 7:27 am

In thinking about this business of musical superposition some more, a Herbie Hancock quote from Ian Carr's Miles Davis biography came to mind, concerning the music of the mid 60's Davis-Shorter-Hancock-Williams acoustic quartet.
Quote:

By the time we got to ESP, Miles said, 'I don't want to play chords any more' ... I guess what he wanted to go for was the core of the music ... Here's how I look at it ... now I don't know if this is the way Miles looks at it, but a composition is an example of a conception, so Miles, rather than play the composition, he wants to play the conception that the composition came from ... That's why you hear melody fragments and you kind of hear the momentum and the sound of the tune somewhere -- something that distinguishes that tune from another one ... but maybe the chords are not there. Even when we were playing 'Walkin'' or any of those other [familiar] things, he didn't want to play the chords after we played the melody. (p. 201)

The music from that period is very abstract and understated. Much of the force is latent. Superpositions are not concrete notes and sounds; it is the collapse of a superposition into one of its possible bound states that emits sound. So, in writing and playing a conception, the musician seems to provide a scaffold for the listener to fill in, somewhat similar to sparse, understated, and seemingly ambiguous poetry or lyrics, that imply multiple simultaneous meanings. Puns are like this at a silly level. Instrumental puns.

Davis' later electric music, which I also love, seems to head in an opposite direction, in the sense that whereas in the music of the mid 60's much of the energy was latent and capable of pointing in various directions (fingers pointing at the moons), some of the electric stuff made the latent energy manifest, not by taking it in one or two of the implied directions, but in all of them, sometimes simultaneously (the moons, planets and stars together). I've read a quote from one of the electric band members (can't recall who or find it now) who said that in going from the above abstract composing/playing to his electric periods, Miles 'got tired of playing with quantum mechanics and went back to playing the blues.' I'm not sure that's all there is to it. In the above period Miles & band were implying multiple directions for a piece simultaneously, and in the latter periods (there are a couple of different periods from 1968..1975) Miles & band were fleshing out multiple directions, sometimes overlaying them simultaneously. They weren't illustrating the generators, they were letting them run.

Getting back to the point of the second paper on superposition, the main idea is that you can store superpositions of fragments of music in a computer, and use it to generate real notes at play time. Of course, this is what everybody does. Notes are latent in any musical software until it runs. But what comes out of this perspective is a search for compositional strategies for uncovering or creating new and interesting superpositional structures. The papers are about a pattern matching network that builds up superpositional models of finger picked string music, but that's just one example. I've been thinking about using a formal grammar of some sort to write sentences about musical relationships, generate abstract machines that outline the relationships of those sentences graphically (for example, finite state machines, which for non-programmers are conceptually like wildcard search strings, where some parts of a search string are fixed text and some parts can be bound by the context of the body of text you are searching), and then click through this finite state machine at run time to bind & send concrete notes to a sound synth. You could do most of this in ChucK, maybe feed it from a Python GUI. Live coding where, instead of typing code, you generate code on the fly from graphical code templates.
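The finite-state-machine idea can be sketched as follows (a minimal illustration of mine, not code from the papers; the state names, pitch-class sets, and input notes are all invented): states are score positions, transitions carry sets of admissible pitch classes like wildcard slots, and a live input note binds each slot to a concrete pitch.

```python
# Superpositional score as a finite state machine: each transition admits a
# *set* of pitch classes (the superposition); a played note collapses the
# slot to that concrete note and advances the machine.

score_fsm = {
    "start":   [("cadence", {0, 4, 7})],      # any tone of the tonic triad
    "cadence": [("start",   {2, 5, 7, 11})],  # any tone of the dominant seventh
}

def step(state, midi_note):
    """Advance the machine if the note's pitch class matches a transition."""
    for nxt, admissible in score_fsm[state]:
        if midi_note % 12 in admissible:
            return nxt, midi_note       # bind the wildcard slot to this note
    return state, None                  # no match: stay put, emit nothing

state, out = "start", []
for note in [64, 67, 71, 60]:           # performer plays E4, G4, B4, C4
    state, bound = step(state, note)
    if bound is not None:
        out.append(bound)               # send to the synth
```

Here B4 (pitch class 11) arrives while the machine expects a tonic tone, so it is silently absorbed; a fancier version might weight transitions probabilistically rather than reject outright.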

All examples of superpositional scores. The score takes on a life of its own. Scores have always had dimensions for performer 'interpretation.' A superpositional perspective to composition makes superposition an explicit part of a score, amenable to new bindings at performance time.

dewdrop_world

Posted: Mon Aug 20, 2007 7:35 pm

Picking up again from "One trick pony":

Acoustic Interloper wrote:
Balancing load across multiple threads can be difficult.


OK... here you're talking about threading client-side processes. My comment about two instances of scsynth is for audio rendering only, just to clarify.

scsynth is good at audio but it's pretty dumb when it comes to deep pattern analysis. The SC client can handle it, and it can be threaded with a little care. Threads in the SC virtual machine are not preemptive, so you have to yield explicitly from each. If you want to distribute work among different OS threads, you can create as many TempoClocks as you want; each TempoClock starts its own C++ thread in the backend.

I wrote some code a while back to watch an audio stream for triggers, pass the time index back to the client, and then have the client pull a short snippet of audio and use a crude algorithm to compare its FFT against other events in the incoming audio stream. Since the nth trigger requires n - 1 comparisons, it can get expensive, so I threaded it. It works well: it never kills the audio or disrupts sequencing, and since it's a pipeline model, it's doing comparisons while new time indices are coming in.
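A rough Python analog of this pipeline (the real thing lives in sclang/scsynth; the queue, worker, and similarity stand-in below are my own sketch): the audio side pushes trigger time indices onto a queue, and a worker thread compares each new event against all earlier ones, so the nth trigger costs n - 1 comparisons without ever blocking the audio side.

```python
import queue
import threading

def similarity(a, b):
    """Placeholder for the crude FFT comparison; any pairwise metric works."""
    return -abs(a - b)

def worker(inbox, results):
    """Consume trigger times; compare each against every earlier event."""
    seen = []
    while True:
        event = inbox.get()
        if event is None:          # sentinel: the stream has ended
            break
        results.append([(event, old, similarity(event, old)) for old in seen])
        seen.append(event)

inbox, results = queue.Queue(), []
t = threading.Thread(target=worker, args=(inbox, results))
t.start()
for trig in [0.5, 1.25, 2.0]:      # trigger time indices arriving from audio
    inbox.put(trig)                # returns immediately; audio side never waits
inbox.put(None)
t.join()
# results[n] holds the n comparisons performed for the (n+1)th trigger
```

The queue decouples the producer from the quadratic consumer, which is exactly why the sequencing never hiccups.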

That's just to say it definitely can be done, just takes some planning.

James

Acoustic Interloper

Posted: Tue Aug 21, 2007 7:54 am

It's good to know that hooks for thread control are there if you need them. I am sure that with up to 28 DSPs, Kyma's software manages the processor scheduling. With flow graphs of audio transforms (coded either as pictures or as chained function calls), it's possible to allocate cores or hardware threads to subgraphs automatically. And it should be possible to roughly estimate both the processor cost of each function and the communication cost of each function-to-function link, as a percentage of the overall graph. Cycles in the graph and non-linear decisions would complicate the estimates.

Partitioning processes across cores, and partitioning signal-flow subgraphs across hardware threads, both become increasingly important as these features become more prevalent in everybody's music computers. I think this is a place where ChucK's 'strongly timed' language model has a lot of potential. Of course the language implementation isn't scheduling threads yet, but the language structure should make this possible without requiring a lot of programmer attention to scheduling detail.

Morgan
Joined: Aug 03, 2008 | Posts: 23 | Location: TX | Audio files: 1 | G2 patch files: 2

Posted: Mon Aug 04, 2008 7:49 pm

I've been wondering about this stuff!

. . . haven't gone through the material yet, but will.
Powered by phpBB © 2001, 2005 phpBB Group
Copyright © 2003 through 2009 by electro-music.com - Conditions Of Use