electro-music.com   Dedicated to experimental electro-acoustic
and electronic music
 
 Forum index » Discussion » Composition » Project Zylaphon
tonality
Moderators: bachus
Page 1 of 1 [23 Posts]
bachus



Joined: Feb 29, 2004
Posts: 2921
Location: Up in that tree over there.
Audio files: 5

Posted: Sun Dec 12, 2010 8:46 pm    Post subject: tonality

In Zylaphon, a tonality is any global system of organization of certain musical elements that may be said to have a root or key, where the value of that root is any one of the enharmonic diatonic pitch-class names (A, A:sharp, B:flat, B, B:sharp, C, etc.).

Tonalities possess an ordered set of unique pitches with unique pitch-class (pc) names.

For a given “key” of a given “tonality” some specific pc takes the role of root, which makes it the first element of the tonality's ordered pitch-class set. For example, A in the pc set (A, B, C, D, E, F, G) is the root of a subset of tonality. Specifically, this pc set would serve for “A minor” as well as countless other potential tonalities. Implicit in this is a “special role” for the root in a tonality's system of organization, which makes it the highest-ranking, aurally logical starting and stopping point for various kinds of sequences within its tonality.
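As a toy illustration of the rooted, ordered pitch-class set described above (the names here are mine, purely hypothetical, not Zylaphon's):

```python
# Hypothetical sketch: one unordered pitch-class collection serves many
# tonalities; choosing a root just rotates the ordered set.

PITCH_CLASSES = ["A", "B", "C", "D", "E", "F", "G"]

def rooted_pc_set(pcs, root):
    """Rotate an ordered pitch-class set so the chosen root comes first."""
    if root not in pcs:
        raise ValueError(f"{root} is not in the pitch-class set")
    i = pcs.index(root)
    return pcs[i:] + pcs[:i]

# The same seven pc names yield "A minor", "C major", and countless others:
a_minor_pcs = rooted_pc_set(PITCH_CLASSES, "A")  # root A first
c_major_pcs = rooted_pc_set(PITCH_CLASSES, "C")  # root C first
```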

It is desired that the “systems of organization” provide a transform to a value of concordance, which would be a real-valued scalar expressing the “degree” to which a temporal point in a composition was experienced (by an informed listener) as “being in” that tonal system. It should be possible to use such scalars as the magnitudes of vectors coincident with basis vectors of the tonal space of a composition.

It is also desired of these transforms that, in such a tonal space, computations using tonal-space tuples with certain dimension magnitudes set to constant zero be equivalent to the same computations in a space where the zero-valued dimensions do not exist (as basis vectors).

Edit: To be clear, the above is a given within the linear algebra. But it is a constraint for the algorithms.
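The zero-dimension requirement can be sanity-checked with a toy example (my illustration, not Zylaphon code): for linear operations such as norms and dot products, a dimension pinned at zero contributes nothing, so it might as well not exist.

```python
# Concordance tuples in a 3-tonality space vs. the reduced 2-tonality space.
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def dot(v, w):
    return sum(x * y for x, y in zip(v, w))

# Magnitudes along basis vectors for, say, (C major, G major, A minor):
full    = [0.8, 0.6, 0.0]   # A-minor dimension held at constant zero
reduced = [0.8, 0.6]        # the same point, with that basis vector removed

assert norm(full) == norm(reduced)
assert dot(full, [1.0, 0.5, 7.0]) == dot(reduced, [1.0, 0.5])
```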

At this point I will posit the above as Zylaphon's tentative, partial working definition of tonality. Informed suggestions, modifications, extensions, etc., would be very welcome.

_________________
The question is not whether they can talk or reason, but whether they can suffer. -- Jeremy Bentham

Last edited by bachus on Mon Dec 13, 2010 6:26 pm; edited 1 time in total
bachus

Posted: Mon Dec 13, 2010 6:15 am

I should make explicit that Zylaphon's implementation of this definition of tonality would have to be as fine-grained as it is general. Taking the example of A minor as it would be recognized by post-Schenkerian generative analysis: under Zylaphon it would map to very many different tonalities, each a "flavor" of the standard definition of A minor.
bachus

Posted: Mon Jan 03, 2011 9:29 am

Moving another small step towards class definitions, I'd really like to know whether this is intelligible, if not agreeable, to others.

Quote:
Many of the styles of music that fit within the constraints of the diatonic system exhibit a property best denoted by the word Tonality. The meaning of this word is highly context-dependent. In terms of the music of the Classical period, ~1750 to 1830, its meaning is highly elaborated and has wide general acceptance. In the Romantic period things get a bit fuzzy, and some of the best theories of Classical tonality begin to fail. Moving to the first half of the 20th century, the forces of modalism and chromaticism break completely with the earlier conception of its meaning – not to mention serial composition, which intentionally and systematically avoids anything that could even be mistaken for tonality.

Zylaphon's aim is to be as general as possible, and so must be its definition of tonality and the data structures, classes, and methods that model it. To that end:

A Tonality is a collection of forces that, in a given context, organize and direct the flow of music in relation to a weighted pitch-class set. The significance of the weighting is that at any given time the various members of this set may have varying degrees of importance in their “relationship” to the forces of the tonality. At any given time the tone with the greatest weight is the “key” in which the music resides (for that instant). Depending on the style of music, over moderate to large spans of time one tone will exhibit the strongest relationship with the tonality, and the one that does so over the course of the entire composition will be designated the “key” of that composition.
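A toy rendering of the weighted pitch-class set just described (names are mine, not Zylaphon's): each pc carries a weight, and the heaviest pc at a given instant is read off as the momentary "key".

```python
# Momentary key = the pitch class with the greatest weight right now.

def momentary_key(weighted_pcs):
    """weighted_pcs: dict mapping pitch-class name -> weight."""
    return max(weighted_pcs, key=weighted_pcs.get)

# Hypothetical snapshot of weights at one instant in a piece:
weights = {"A": 0.9, "B": 0.2, "C": 0.55, "D": 0.3, "E": 0.6, "F": 0.25, "G": 0.4}
```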

bachus

Posted: Mon Jan 03, 2011 10:13 am

Perhaps it would be helpful, by way of example, to say that in traditional tonality it is the forces of tonality that modulate the minor keys between their pure, melodic, and harmonic forms and in fact require the existence of the latter two.

Last edited by bachus on Mon Jan 03, 2011 4:29 pm; edited 2 times in total
bachus

Posted: Mon Jan 03, 2011 10:35 am

The need for stylistic coherency, at least on a local but more typically on a global level, drives the forces of a tonality to prefer certain configurations of notes over others; contrariwise, the occurrence of certain note configurations influences the forces of the tonality. It is this bidirectional relationship that makes tonality virtually impossible to crystallize into a fixed formulation, even within a given composition.
Acoustic Interloper



Joined: Jul 07, 2007
Posts: 1663
Location: Berks County, PA
Audio files: 39

Posted: Tue Jan 04, 2011 8:25 am

bachus wrote:
I should make explicit that Zylaphon's implementation of this definition of tonality would have to be as fine-grained as it is general. Taking the example of A minor as it would be recognized by post-Schenkerian generative analysis: under Zylaphon it would map to very many different tonalities, each a "flavor" of the standard definition of A minor.


bachus wrote:
The need for stylistic coherency, at least on a local but more typically on a global level, drives the forces of a tonality to prefer certain configurations of notes over others; contrariwise, the occurrence of certain note configurations influences the forces of the tonality. It is this bidirectional relationship that makes tonality virtually impossible to crystallize into a fixed formulation, even within a given composition.

These descriptions seem to fit nicely with my mental model of composition as a superposition of potential states. A tonality is a superposition of conventional scales, I think.

Some of my more successful Scrabble+stringed duets have worked in the context of assigning overlapping but slightly different "scales" to different instrument voices (MIDI channels) being generated from the Scrabble game. I do something similar with accent patterns, i.e., some accent patterns will be prefixes of others, so that alignment will be periodic. I usually think in terms of "polychords" and "polyrhythms," but perhaps tonality in your terms is a better fit than polychord. Or should it be polytonality?

With these two superpositions, one harmonic and the other rhythmic, being generated by the Scrabble game, two useful effects emerge. If the game is running by itself, it tends to cycle through perceived (and related) rhythms and harmonies. More importantly, when I start to finger-pick the banjo or MIDI guitar, one gets the aural impression that Scrabble "locks in" to my playing, supporting me. I believe that a form of masking is taking place. I bind a sufficient number of "free variables" with my playing -- or, equivalently, emphasize one or two scales while deemphasizing others -- so that the emphasized ones coming out of Scrabble appear intentionally locked with my playing, while the others (now a smaller percentage of the overall) come off as passing notes.

Not sure how much this has to do with your posts, but they brought it to mind. I am used to playing in an approximate "scale," conditioned in part by listening to enough "modal jazz" and "modal Appalachian folk music" before that to acquire this taste in (multi?)tonality. Most (but not all) local folkies are used to standard diatonic chords and scales in Ionian or Aeolian. If I pick banjo with them and slip into Mixolydian or Dorian, it overlaps what they are playing. We have meeting points at the root and fifth, and often the third, but we diverge elsewhere.

Diverge, converge, diverge, converge . . .

That sequence describes my taste in both harmonic and rhythmic composition, with the harmonic cycle and the rhythmic cycles often taking place at different rates.

_________________
When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
bachus

Posted: Tue Jan 04, 2011 6:57 pm

Acoustic Interloper wrote:
These descriptions seem to fit nicely with my mental model of composition as a superposition of potential states. A tonality is a superposition of conventional scales, I think.


As I understand it, for each moment in a composition you create, when needed, a set of music-state vectors, where each vector represents a concrete description of a musical event or configuration of such events. The magnitude of the vector is the probability of occurrence of the vector's musical state at the moment with which the vector set is associated. When the moment of creation of the actual event is reached, the vector set is “collapsed” with a random distribution determined by the magnitudes of the vectors in the set, in analogy to the collapse of a Hilbert-space vector in QM.
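The "collapse" as I've paraphrased it amounts to a weighted random choice. A minimal sketch (my illustration, not Acoustic Interloper's actual system; all names are hypothetical):

```python
# Pick one concrete musical state at random, with probability proportional
# to each state vector's magnitude -- the QM-collapse analogy above.
import random

def collapse(states, rng=None):
    """states: dict mapping a musical state to its vector magnitude."""
    rng = rng or random
    names = list(states)
    return rng.choices(names, weights=[states[n] for n in names], k=1)[0]

# Hypothetical vector set for one moment:
moment = {"C-major triad": 0.7, "A-minor triad": 0.25, "F-major triad": 0.05}
picked = collapse(moment, random.Random(0))  # seeded for reproducibility
```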

bachus wrote:
At any given time the tone with the greatest weight is the “key” in which the music resides (for that instant). Depending on the style of music, over moderate to large spans of time one tone will exhibit the strongest relationship with the tonality and the one that does so over the course of the entire composition will be designated as the “key” of that composition.


If a system such as I describe were implemented and it generated an assertion that at point X the music is in the key of Y, that computation would be for the user and would have very little computational musical value except at the last note of a cadence. Again, this is all in the context of the traditional concept of tonality. The weighted multiple keys of which I speak are not expressions of probability but of ambiguity, and we simply denote the least ambiguous key as “the key the music is in at X” to comfort those of us steeped in traditional music theory. Better would be to reduce the representation no further than a bar graph of the key ambiguities and say the music is in “those keys”. Anyway, the real distinction between the two conceptions here is the distinction between probabilities and inherent ambiguities. And this is because a key location is, for the most part, pure abstraction; there is no reducing it further, AFAICT.

That said, any music computational system eventually gets to the point when one note must be selected from among some set of others. At that point your methodology looks very attractive to me.

bachus

Posted: Tue Jan 04, 2011 8:10 pm

Acoustic Interloper wrote:

Some of my more successful Scrabble+stringed duets have worked in the context of assigning overlapping but slightly different "scales" to different instrument voices (MIDI channels) being generated from the Scrabble game. I do similar with accent patterns, i.e., some accent patterns will be prefixes of others, so that alignment will be periodic. I usually think in terms of "polychords" and "polyrhythms," but perhaps tonality in your terms is a better fit than polychord. Or, should it be polytonality?


That's hard to say from here. The “mix” of C major and G major scales and tonalities, for a concrete example, is virtually essential to Classical tonality, and no one would call that polytonal. On the other hand, the simultaneous occurrence of the V chord in G major with the II in C major would indeed be polytonal. The details are everything.

And thanks for the thought-provoking post.

Acoustic Interloper

Posted: Tue Jan 04, 2011 8:38 pm

bachus wrote:
The weighted multiple keys of which I speak are not expressions of probability but of ambiguity, and we simply denote the least ambiguous key as being “the key the music is in at X” to comfort those of us steeped in traditional music theory.


The distinction between probability and ambiguity is noteworthy! Would simultaneity be an appropriate word -- containing variables for which there are values that can satisfy all the equations: simultaneous equations?

One small nugget of insight into this matter came when I was looking at chord-pattern-matching traces while debugging the research system that I have posted previously. Rather than getting precise traces of chords, the system detected "chord clusters" that hovered about some center, shifting about a point of attraction as notes entered or left the "current chord." Of course finger-picking fast arpeggios heightens this effect, since there is no instantaneous production of one chord: fingers arrive at different times. But that statement is also true of keyboards, MIDI channels, and any sound creation or recognition device with some serial limits. The system was detecting a simultaneous set of related chords.

Probability is secondary, derivative. Use of probability is a way to engineer a system. I had to scan this page to see where "probability" came in, because I didn't think that I had introduced it. The system is in each of those states to some degree. Probability becomes a useful tool when a computation must serialize the phenomenon. But it is somewhat of an engineering kludge.

Quote:
Better would be to reduce the representation no further than a bar graph of the key ambiguities and say the music is in “those keys”

I agree wholeheartedly.
Quote:
Anyway, the real distinction between the two conceptions here is the distinction between probabilities and inherent ambiguities. And this is because a key location is, for the most part, pure abstraction; there is no reducing it further, AFAICT.

That said, any music computational system eventually gets to the point when one note must be selected from among some set of others. At that point your methodology looks very attractive to me.

Attractive is good.

bachus

Posted: Wed Jan 05, 2011 8:49 pm

Acoustic Interloper wrote:
Probability is secondary, derivative. Use of probability is a way to engineer a system. I had to scan this page to see where "probability" came in, because I didn't think that I had introduced it. The system is in each of those states to some degree. Probability becomes a useful tool when a computation must serialize the phenomenon. But it is somewhat of an engineering kludge.


Right, sorry about that. I had inferred it from analogies I thought you had drawn with QM. My bad.

Acoustic Interloper wrote:
The distinction between probability and ambiguity is note worthy! Would simultaneity be an appropriate word -- Containing variables for which there are values that can satisfy all the equations: simultaneous equations.


What I am trying to model are human perceptions and subjective evaluations, and IMO our sense of location in tonality space is not of simultaneous tonalities but of uncertainties and expectations.

Again, IMO, music is not amenable to large-scale mathematical treatment, interpretation, or calculation. It is to a great extent an ad hoc system: partly based on the physics of harmonics (à la Hindemith et al.); partly a system of aural acculturation; partly associations, both physiological and psychological; and partly purely idiosyncratic invention.

bachus

Posted: Wed Jan 05, 2011 9:09 pm

Quote:
Better would be to reduce the representation no further than a bar graph of the key ambiguities and say the music is in “those keys”


I have been mixing definitions of Key and tonality in the last few posts, which will become very confusing. But this is rather hard not to do, because in the traditional conception of tonality we might say that song X is in the key of G major, and entailed within that "key of G major" would be all the workings of Classical/traditional tonality.

In Zylaphon, “Key” refers to the combination of a root tone and a mode (a sequence of melodic intervals), while a tonality is a body of domain knowledge and methodology for manipulating musical resources in the context of one or more modes (in its general application) or one or more Keys (in a specific actual instance). Again, a Key is a specific form of a modal generalization.
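A hypothetical sketch of this Key definition (a root tone plus a mode as an interval sequence). I use semitone steps purely for illustration; Zylaphon's actual representation may differ:

```python
# Realize a Key (root + mode) as a concrete pitch-class sequence.

CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
AEOLIAN = [2, 1, 2, 2, 1, 2, 2]   # natural-minor melodic-interval sequence

def key_scale(root, mode):
    """Walk the mode's intervals upward from the root."""
    i = CHROMATIC.index(root)
    scale = [root]
    for step in mode:
        i = (i + step) % 12
        scale.append(CHROMATIC[i])
    return scale
```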

bachus

Posted: Wed Jan 05, 2011 10:44 pm

bachus wrote:
Again IMO music is not amenable to large scale mathematical treatment, interpretation or calculation.


By that I am referring to a unified formal/semiformal system of composition, and to its ability to be simultaneously generalizable and useful. I am not implying that such systems can't make music, particularly in interaction with a human nervous system.

bachus

Posted: Thu Jan 06, 2011 5:01 am

I'm afraid another mix-up I'm making here is between what Zylaphon is and what I personally want to do with it. Zylaphon is an extensible infrastructure for music composition and analysis. The class definition towards which I am working here is the base class for all tonalities that a user/developer might want to create. So I'm really looking for those things that are common to anything that would function as a tonality. When I said:

Quote:
What I am trying to model are human perceptions and subjective evaluations, and IMO our sense of location in tonality space is not of simultaneous tonalities but of uncertainties and expectations.


I was expressing one of my personal goals. The class for the model above would inherit from layers of superclasses, each more general than the one below it. Different implementations of models would branch out from that structure at differing points, depending on their needs in terms of infrastructure.

So what would be common to all tonalities? I think it might be a list of one or more Keys and a scalar associated with each such Key. A model of the tonality of Gregorian chant would likely have only one Key, but a list of one is still a list. So what else? Ideas?

And immediately I see that I need to create a subclass for Key so that micro-tonal systems could use the tonality base class. Clearly I do a lot of stumbling while getting to where I'm going.
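One possible shape for the tonality base class floated above -- just a list of weighted Keys, which degenerates gracefully to a single-Key model such as Gregorian chant. This is my sketch, not Zylaphon's actual code; every name is hypothetical:

```python
# Minimal tonality base class: a list of (Key, scalar weight) pairs.

class Key:
    def __init__(self, root, mode_name):
        self.root, self.mode_name = root, mode_name

class Tonality:
    def __init__(self, weighted_keys):
        # weighted_keys: list of (Key, weight) -- a list of one is still a list
        self.weighted_keys = list(weighted_keys)

    def strongest_key(self):
        """The Key with the greatest associated scalar."""
        return max(self.weighted_keys, key=lambda kw: kw[1])[0]

# A chant-like tonality with a single Key:
chant = Tonality([(Key("D", "dorian"), 1.0)])
```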

Acoustic Interloper

Posted: Thu Jan 06, 2011 8:02 am

bachus wrote:

And immediately I see that I need to create a subclass for Key so that micro-tonal systems could use the tonality base class. Clearly I do a lot of stumbling while getting to where I'm going.

And once you start considering micro-tones, you might as well consider granular tones as well. I just finished reading Roads' *Microsound* last week. It was very readable, more so than much of Xenakis' *Formalized Music*, for example. One interesting aspect of studying microsound is looking at the transitions where rhythm becomes pitch and pitch becomes timbre as the timebase becomes narrower. I did a piece for electro-music summer solstice 2010 that started out with two identical harmonica samples, one of which I would detune in increasing amounts, generating rhythm in the form of low-frequency beats. Tone difference served as rhythm.

Have you read anything on the application of formal grammars and context-free parsers to the analysis and representation of the phrase structures of music? I've attached two papers that I read recently; they seem possibly relevant to your work here.

I have been working on integrating a context-free sentence parser into a software instrument for reorganizing spoken-word and instrumental phrases at performance time, based on parsing mouse-click phrase boundaries as markers into the audio stream. I also use intentional inter-phrase lulls in the audio passage to align the markers. The attached papers are interested in formal grammars from a perspective closer to yours, I think. I decided to read Roads' book because, when the system parses and restructures spoken sentences at a fine enough temporal grain, it gets into the realm of granular phrase restructuring. This was unanticipated on my part, but of course it's obvious once it happens. I am hoping to present at ICMC 2011 (and also perform a Scrabble-to-MIDI piece).

As with any engineered system, one has to decide where to cut bait and actually finish something. I understand and agree with your last post about mathematics. Of course, I have been an engineer and folk musician for most of my life, so I have no problem accepting expedient technical means.


GrammarsRepresentationMusicRoads.pdf
 Description:
Roads' survey from 1979, still one of the better papers, on using context-free grammars in music analysis and representation. It has a good overview of the mechanisms of formal grammars.

Download
 Filename:  GrammarsRepresentationMusicRoads.pdf
 Filesize:  1.87 MB
 Downloaded:  182 Time(s)


ReplyGenerativeTheoryTonalMusic.pdf
 Description:
Authors' response to a critique of one of the better-known books on applying formal grammars to music.

Download
 Filename:  ReplyGenerativeTheoryTonalMusic.pdf
 Filesize:  1.45 MB
 Downloaded:  170 Time(s)


bachus

Posted: Thu Jan 06, 2011 9:12 am

I'm quite familiar with Lerdahl and Jackendoff's work but had not seen this paper, so many thanks.

The first work I have not read, and it looks very worthwhile, so many, many thanks.

Good references are truly invaluable and deeply appreciated.

bachus

Posted: Thu Jan 06, 2011 9:40 am

Before starting, let me denote as V->I tonalities the class of tonalities that are recognizable as being, or as closely related to, Classical tonality.

Thinking on this Key/Tonality stuff, I clearly have them all wrong. And I need to add another concept to begin to hash things out. That concept is tonality space. Since the latter part of the 19th century, composers have mixed different tonalities in their compositions, in the beginning moving back and forth between V->I tonalities and non-V->I tonalities, both Ionian and others. So I propose the tonality space as a class whose domain is the entirety of a composition, movement, or other such division. It would provide a method that would, for a given point in time, return the tonalities that might be construed to have musical significance at that point (as determined from the music preceding it). But I'm afraid it will take me a while to get Key and Tonality better sorted out.
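A sketch of the proposed tonality-space class, with the interface guessed from the description above (nothing here is Zylaphon's real API): its domain is a whole composition or movement, and it answers which tonalities plausibly have musical significance at time t, judged only from the music before t.

```python
# Hypothetical TonalitySpace: time-ordered observations of tonality significance.

class TonalitySpace:
    def __init__(self):
        # (start_time, tonality_name, significance) tuples, in time order
        self._observations = []

    def observe(self, time, tonality, significance):
        self._observations.append((time, tonality, significance))

    def tonalities_at(self, t):
        """Tonalities with musical significance at t, from the music preceding t,
        most significant first."""
        seen = {}
        for when, tonality, sig in self._observations:
            if when <= t:
                seen[tonality] = sig   # later observations supersede earlier ones
        return sorted(seen.items(), key=lambda kv: -kv[1])

# A toy composition mixing V->I and non-V->I tonalities:
space = TonalitySpace()
space.observe(0.0, "C major (V->I)", 0.9)
space.observe(4.0, "G major (V->I)", 0.6)
space.observe(8.0, "E phrygian", 0.7)
```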

I see that there are a number of people reading this, and I apologize that you are watching sausage being made. I only do it in public because occasionally someone will contribute a valuable idea, question, objection, or reference.

Acoustic Interloper

Posted: Thu Jan 06, 2011 9:44 am

Quote:
Good references are truly invaluable and deeply appreciated.

My pleasure. You are most welcome.

bachus

Posted: Thu Jan 06, 2011 5:36 pm

bachus wrote:
Thinking on this Key/Tonality stuff I clearly have them all wrong.


No, I think they are almost all right. Given the definition of Key and the nascent definition of tonality, I can convince myself that it is perfectly reasonable to say, for example, that a composition X is in the Mozart-1 Tonality in the Key of E minor. The list of weighted Keys I want to associate with the Tonality is a contextual property, while the Tonality's Key and its root tone are fixed, essential properties. This leaves the problem of hashing out the relation of the Tonality-Space's contextual tonality set to its Tonalities' weighted Key sets. I think that was the part that was making my head spin a bit.

But onward!

bachus

Posted: Fri Jan 07, 2011 6:48 pm

Acoustic Interloper wrote:
And once you start considering micro-tones, you'd might as well consider granular tones as well.


Before I get to the proposition, I will argue against the comparison. :) I see micro-tonal composition as the most promising and important developing force in "classical" composition. I think its development would be helped greatly by a congenial music-computational infrastructure providing notation, generation, and analysis for it, which is just what Zylaphon is designed to provide in a general sense. So I am very committed to seeing that its class hierarchy and function provide for smooth and easy extension in that direction, should someone wish to take that on. In that spirit I have implemented a virtual base class Key, renamed the diatonic key class KeyDiatonic, and made it a subclass of Key. I have also done a minor rewrite of the Key and Mode manager dialog so that it is also ready for extension into the realms of micro-tonalities as well as atonal and indefinite-pitch systems. To be clear, I've done nothing toward implementing these things, only making sure that Zylaphon can be extended without rewriting while simultaneously providing maximum leverage to those ends through its existing classes.

Regarding the proposition, I think, but am not certain, that such microsound systems would necessarily have to begin by inheriting from Zylaphon's lowest levels of abstraction, i.e., directly from class ChannelBar and ChannelEvent (one level up from ChannelElement). ChannelEvents can be “worked” at a granularity of 417372849653760 parts per whole note, so I don't see any real problems. But again, Zylaphon cannot be extended to mix real-time and metered-time channels unless one inhibits all tempo changes and rubato while such channels are concurrent.
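As a quick arithmetic check on the granularity figure quoted above: 417372849653760 is a product of small primes only (I verified the factorization below by hand), so any tuplet whose prime factors are at most 29 subdivides a whole note exactly.

```python
# The quoted granularity factors as 2^11 * 3^3 * 5 * 7^2 * 11 * 13 * 17 * 19 * 23 * 29,
# so n-tuplets built from primes <= 29 divide a whole note with no remainder.

GRANULARITY = 417372849653760

product = (2**11) * (3**3) * 5 * (7**2) * 11 * 13 * 17 * 19 * 23 * 29
assert product == GRANULARITY

# Exact subdivision for common (and uncommon) tuplets:
for n in [3, 5, 6, 7, 9, 11, 13, 15, 17, 19, 21, 23, 29, 32]:
    assert GRANULARITY % n == 0
```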

_________________
The question is not whether they can talk or reason, but whether they can suffer. -- Jeremy Bentham
Acoustic Interloper



Joined: Jul 07, 2007
Posts: 1663
Location: Berks County, PA
Audio files: 39

PostPosted: Sat Jan 08, 2011 7:48 am    Post subject:

bachus wrote:
Acoustic Interloper wrote:
And once you start considering micro-tones, you might as well consider granular tones as well.

Regarding the proposition: I think, but am not certain, that such microsound systems would necessarily have to begin by inheriting from Zylaphon's lowest levels of abstraction, i.e., directly from class ChannelBar and ChannelEvent (one level up from ChannelElement). ChannelEvents can be "worked" at a granularity of 417372849653760 parts per whole note, so I don't see any real problems. But again, Zylaphon cannot be extended to mix real-time and metered-time channels unless one inhibits all tempo changes and rubato while such channels are concurrent.

So, it seems that you have now considered microsounds and ruled out their integration into Zylaphon :)

I am a novice in that area, although it seems a promising source of ideas for some of the things I have been doing lately. I am especially interested in the "phase transitions" in frequency ranges, where rhythm becomes pitch and pitch becomes timbre. I'm not really suggesting integrating it into a notation system at this point.

_________________
When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
bachus




PostPosted: Sat Jan 08, 2011 9:39 am    Post subject:

Perhaps I have not made it clear, but Zylaphon is not in any way, shape, or form real time. Real-time captured data could, however, be collected, imported into Zylaphon, and stored as a resource outside of working memory, without conflicts between real-time data and metered-time data. This real-time data could be transformed into metered data and written into composition data with bidirectional links between the two. One could also create an abstract division within working memory by allowing it only Resource-category Channels and inhibiting tempo changes in that division, so that the metered data and the real-time data could be worked interactively. When that was finished, the metered data could be written into one of the composition's regular divisions.
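The bidirectional-link idea can be illustrated with a toy model (pure Python; every class and function name here is invented for the sketch, none of it is Zylaphon's actual API): each imported real-time event keeps a reference to the metered event derived from it, and vice versa, with quantization valid only under one fixed tempo, exactly as the no-rubato restriction requires.

```python
class RealtimeEvent:
    """An imported event stamped in seconds (resource data)."""
    def __init__(self, seconds, payload):
        self.seconds = seconds
        self.payload = payload
        self.metered = None          # forward link, filled in on quantization

class MeteredEvent:
    """The same event re-expressed in ticks at a fixed tempo."""
    def __init__(self, ticks, payload, source):
        self.ticks = ticks
        self.payload = payload
        self.source = source         # back-link to the real-time original
        source.metered = self        # complete the bidirectional link

def quantize(rt_events, bpm, ticks_per_quarter):
    """Map seconds -> ticks under one fixed tempo (no tempo changes, no rubato)."""
    ticks_per_second = ticks_per_quarter * bpm / 60
    return [MeteredEvent(round(e.seconds * ticks_per_second), e.payload, e)
            for e in rt_events]

rt = [RealtimeEvent(0.0, "C4"), RealtimeEvent(0.5, "E4")]
metered = quantize(rt, bpm=120, ticks_per_quarter=480)
print(metered[1].ticks)   # 0.5 s at 120 bpm = one quarter note = 480 ticks
```

Because each side holds a reference to the other, edits made while working the two interactively can always be traced back; if the tempo were allowed to change mid-stream, the seconds-to-ticks mapping would no longer be a single function and the links would lose their meaning.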

So again, I don't think it's a problem, but I will have to read up on microsounds to say for sure.


To clarify a bit: at present only diatonic channels have been implemented. But virtually any data that is a function of time can be modeled as a channel under the restrictions discussed here.

Edit:

If, after the fact, one wanted to combine the imported streams with the composition-level data, that would be very difficult, requiring time-stretching to be applied to the audio under strict control of the metered composition data. I don't see that in Zylaphon's future, but it is, as a matter of principle, not impossible.

Edit: Again!

"Perhaps I have not made it clear but Zylaphon is not in any way, shape, or form real time."

It will of course produce a real-time MIDI data stream, but it doesn't create in real time.

Edit: Again, again.
Removed, "generative audio"


Last edited by bachus on Sat Jan 08, 2011 4:19 pm; edited 2 times in total
bachus




PostPosted: Sat Jan 08, 2011 3:57 pm    Post subject:

Does that response seem either inadequate or irrelevant to the issues you raise?
Acoustic Interloper




PostPosted: Sat Jan 08, 2011 5:23 pm    Post subject:

bachus wrote:
Does that response seem either inadequate or irrelevant to the issues you raise?

Your response makes sense. My observations come more from phenomena that have run into me lately, and that I must deal with in my real-time work, than from any concrete suggestion for Zylaphon. They are tangential observations, I think.
