electro-music.com - Dedicated to experimental electro-acoustic and electronic music
Time and event data structures
bachus
Posted: Thu Nov 05, 2009 7:01 am    Post subject: Time and event data structures

The long-range goal here is to define a data structure that adequately models musical events. Since time is the fundamental domain parameter for music composition systems, its consideration comes first. I begin with a number of assertions, about which I invite questions, extensions, corrections, and objections.

Performance Time:

Performance time is essentially real time. For the most part, creating playable MIDI streams from a music application's data structures requires only knowledge defined for the instant being transformed into MIDI. That is, the MIDI values at time t depend only on application data defined at time t, or:

MIDI(t) = f(t)

This is, however, a limited approximation. Many musical instruments can connect consecutive notes with various kinds of phrasing, and many of these phrased connections require prior knowledge of the characteristics of the next event. To my knowledge, only Csound and Synful Orchestra can handle such processing. For this case we would say

MIDI(t) = f(t, t1)

where t is the time for which we are constructing MIDI data and t1 is the time of the next event.

These two expressions encompass everything that is needed to convert an app's internal music representation to MIDI.
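
To illustrate the two cases, here is a minimal sketch in Python; the NoteEvent type, its fields, the half-second default duration, and the phrasing rule are all invented for illustration, not any particular application's format:

Code:
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class NoteEvent:
    start: float     # performance time in seconds
    pitch: int       # MIDI note number
    velocity: int    # MIDI velocity

def render_midi(event: NoteEvent,
                next_event: Optional[NoteEvent] = None) -> List[Tuple]:
    """MIDI(t) = f(t) when next_event is None; MIDI(t) = f(t, t1)
    when the next event is known and a phrased connection is wanted."""
    messages = [("note_on", event.start, event.pitch, event.velocity)]
    if next_event is None:
        # f(t): no lookahead; end the note after a fixed default duration.
        messages.append(("note_off", event.start + 0.5, event.pitch))
    else:
        # f(t, t1): a crude legato; overlap slightly into the next event.
        messages.append(("note_off", next_event.start + 0.01, event.pitch))
    return messages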

Composition Time:

Many if not most compositions have meter. Meter provides an abstraction of time that is regular with regard to rational event subdivision but elastic in its transform (mapping) to real time. The way time is represented in a score in the context of meter we will call composition time and denote by T.
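
As a concrete, if simplistic, model of just the elastic part of that mapping (it ignores the subdivision structure), here is a piecewise-constant tempo map in Python; the tempo values are made up:

Code:
# (beat at which the tempo takes effect, tempo in BPM)
TEMPO_MAP = [(0.0, 120.0), (8.0, 90.0), (16.0, 120.0)]

def to_performance_time(T, tempo_map=TEMPO_MAP):
    """Map composition time T (in beats) to performance time t (in seconds)."""
    t = 0.0
    for i, (beat, bpm) in enumerate(tempo_map):
        next_beat = tempo_map[i + 1][0] if i + 1 < len(tempo_map) else float("inf")
        span = min(T, next_beat) - beat   # beats spent in this tempo segment
        if span <= 0:
            break
        t += span * 60.0 / bpm            # 60 / bpm = seconds per beat
        if T <= next_beat:
            break
    return t

print(to_performance_time(4.0))   # 2.0 s at 120 BPM
print(to_performance_time(12.0))  # 4.0 s, plus 4 beats at 90 BPM: about 6.67 s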

If we are constructing a composition in non-real time, we have the opportunity to directly associate an event with its context. An event's position in a score provides only indirect associations with the rest of the score, and this kind of association is lossy in terms of data connections compared to direct association. For example, if a note is the third event in a motif, that association can be established only by extensive data analysis and might even be missed. On the other hand, if the note's data construct possesses a reference to its position in its source motif, we have that fact directly. A source motif is an abstraction that has no location in time even though it may have a duration.
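
A sketch of what such a direct reference might look like; Motif, Note, and all field names here are illustrative guesses, not the actual design:

Code:
from dataclasses import dataclass
from fractions import Fraction
from typing import Optional

@dataclass
class Motif:
    # A source motif: it has a duration but no location in time.
    name: str
    duration: Fraction        # in beats, e.g. Fraction(3, 2)

@dataclass
class Note:
    pitch: int
    T: Fraction               # position in composition time, in beats
    # Direct association: the source motif and the note's ordinal
    # position within it are recoverable here without any analysis.
    source_motif: Optional[Motif] = None
    motif_index: Optional[int] = None

m = Motif("opening figure", Fraction(2))
third = Note(pitch=64, T=Fraction(5, 2), source_motif=m, motif_index=2)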

Generally, in human-composed music, the data values of an event depend on the events that precede and follow it. At em2009, ark asked whether a pause included in a score to account for applause would be considered to be in composition time or performance time. My answer was that it is part of composition time, because the existence and duration of such an event would depend on the dynamics of the salience of the music for some duration both before and after the time when the pause occurs.

From the above it is clear that at time T the data values of a composition are diverse functions of multiple parameters, which may include points and spans of composition time as well as data concerning many other non-temporal abstractions. Thus

event(T) = f(T1, T2, ..., Tn, A, B, C, ...)
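
To make the expression concrete, a toy example: a note's velocity at time T computed from two other composition-time points (the enclosing phrase's boundaries) and non-temporal parameters (a base dynamic and a swell amount). Every name and number here is invented:

Code:
def velocity_at(T, phrase_start, phrase_end, base_dynamic=80, swell=20):
    # event(T) = f(T1, T2, A, B): the value at T depends on two other
    # time points and two non-temporal parameters. The phrase swells
    # toward its midpoint and relaxes afterward.
    mid = (phrase_start + phrase_end) / 2.0
    half = (phrase_end - phrase_start) / 2.0
    return round(base_dynamic + swell * (1.0 - abs(T - mid) / half))

print(velocity_at(2.0, 0.0, 4.0))  # phrase midpoint: 100
print(velocity_at(0.0, 0.0, 4.0))  # phrase start: 80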

_________________
The question is not whether they can talk or reason, but whether they can suffer. -- Jeremy Bentham
newdendrite
Posted: Thu Nov 05, 2009 9:19 am    Post subject: Re: Time and event data structures

bachus wrote:
Many musical instruments can connect consecutive notes with various kinds of phrasing, and many of these phrased connections require prior knowledge of the characteristics of the next event. To my knowledge, only Csound and Synful Orchestra can handle such processing.


If I understand the model correctly thus far, PD, Max, and by extension Max4Live under Live 8 also have this capability.

You've laid an interesting foundation here; please continue!

Best regards,
Michael
bachus
Posted: Thu Nov 05, 2009 9:23 am

Here is a link to a thread describing work I am doing on implementing an efficiently computable abstraction of meter.
_________________
The question is not whether they can talk or reason, but whether they can suffer. -- Jeremy Bentham
bachus
Posted: Thu Nov 05, 2009 9:27 am    Post subject: Re: Time and event data structures

newdendrite wrote:
If I understand the model correctly thus far, PD, Max, and by extension Max4Live under Live 8 also have this capability.


Thanks for the update! I find it difficult to keep abreast of developments.

_________________
The question is not whether they can talk or reason, but whether they can suffer. -- Jeremy Bentham
Inventor
Posted: Thu Nov 05, 2009 9:42 am

Bachus, I am really confused by all of this. In fact, I'm having trouble even attempting to rephrase it back to you so you can correct my misunderstanding.

I see the f(t) performance time definition, but why would you have f(t, t1), where t1 is the time of the next note to appear?

Anyway, I would suggest that ChucK has the capability to deal with the situation only because it has a full C-like programming language, and as such you can do whatever you want with it.

Can you explain this in more pedestrian terms so that I can understand?

Les

_________________
"Let's make noise for peace." - Kijjaz
bachus
Posted: Thu Nov 05, 2009 12:22 pm

I think this would be a good place to start. Does this help?
_________________
The question is not whether they can talk or reason, but whether they can suffer. -- Jeremy Bentham
Antimon
Posted: Thu Nov 05, 2009 2:59 pm

I'm a pragmatist, so while I find this very interesting, it's hard to take it all in without a clear understanding of the goal.

You say that we need to adequately model musical events. What would you say happens when we have this model in place, i.e. what do you count as "adequate"?

I gather the quest is for some kind of notation, with the focus at this stage being on timing. Who or what will read this notation? Man or machine? Who will write it? Will it be edited on a computer or written on paper? What is more important, ease of noting music down or ease of reading?

My first thought when reading this was that what we need is a sequence of events, plus for each event we need to know the current state of the music (i.e. what has happened before and what will happen afterwards) and how the event affects the state.

One thing I thought about was the state machines I've seen in computer science: you have a diagram of circles with lines between them. One of the circles is the start circle. Each line has a note on it, and each circle does some calculation to decide which line to follow to the next circle. This would create a tune; the difficulty in making this manageable would be describing the conditions in the circles.
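
Something like this, as a minimal sketch in Python, with the per-circle "calculation" reduced to a seeded random choice; the states and notes are arbitrary:

Code:
import random

# state -> list of (note, next_state) lines leaving that circle
STATES = {
    "start": [("C4", "a"), ("E4", "b")],
    "a":     [("D4", "b"), ("G4", "a")],
    "b":     [("E4", "a"), ("C4", "end")],
}

def generate(max_notes=16, seed=None):
    rng = random.Random(seed)
    state, tune = "start", []
    while state != "end" and len(tune) < max_notes:
        note, state = rng.choice(STATES[state])  # the circle's "calculation"
        tune.append(note)
    return tune

print(generate(seed=1))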

This looks like a cool project. :)

/Stefan

_________________
Antimon's Window
@soundcloud @Flattr home - you can't explain music
bachus
Posted: Thu Nov 05, 2009 4:27 pm

Antimon wrote:
I'm a pragmatist, so while I find this very interesting, it's hard to take it all in without a clear understanding of the goal.


The ultimate goal is another attempt at this, but as an open-source project.

Antimon wrote:
I gather the quest is for some kind of notation, with the focus at this stage being on timing. Who or what will read this notation? Man or machine? Who will write it? Will it be edited on a computer or written on paper? What is more important, ease of noting music down or ease of reading?


I raised the issue of time simply to clarify the distinction between performance time and composition time. The system I am working on presents the data behind a composition through various views of various abstractions; standard score notation is the one I am focusing on first. Any abstraction that is modeled by a function whose domain is time is defined within some channel (not a MIDI channel). The system has channel view windows that can display the user's choice of channels and related data: staff views, MIDI piano-rolls, figured bass, and so on.
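
In rough pseudocode, one way such channels and views could be structured, purely for illustration (these are not the system's actual types):

Code:
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Channel:
    # Any abstraction modeled by a function of composition time T.
    name: str
    f: Callable[[float], object]

@dataclass
class ChannelView:
    # One window: renders a chosen set of channels in some style,
    # e.g. "staff", "piano-roll", "figured-bass".
    style: str
    channels: List[Channel] = field(default_factory=list)

    def render(self, T: float) -> dict:
        return {c.name: c.f(T) for c in self.channels}

melody = Channel("melody", lambda T: 60 + int(T) % 12)
view = ChannelView("piano-roll", [melody])
print(view.render(3.0))   # {'melody': 63}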

_________________
The question is not whether they can talk or reason, but whether they can suffer. -- Jeremy Bentham
Acoustic Interloper
Posted: Fri Nov 06, 2009 9:14 pm    Post subject: Re: Time and event data structures

bachus wrote:
newdendrite wrote:
If I understand the model correctly thus far, PD, Max, and by extension Max4Live under Live 8 also have this capability.

Thanks for the update! I find it difficult to keep abreast of developments.

ChucK as well! And I imagine SuperCollider.

In fact, ChucK is first and foremost strongly timed. Delay/synchronization is one of the fundamental programming constructs in ChucK.

_________________
When the stream is deep
my wild little dog frolics,
when shallow, she drinks.