mosc
Site Admin

Joined: Jan 31, 2003 Posts: 18274 Location: Durham, NC
Audio files: 232
G2 patch files: 60
Posted: Thu Aug 16, 2007 10:28 am Post subject:
I don't know how many DSPs it would take, but I think an 8-processor system would probably work -- just a guess. The Kyma uses a 24-bit Motorola DSP; I'm not sure which one. _________________ --Howard
my music and other stuff
dewdrop_world

Joined: Aug 28, 2006 Posts: 858 Location: Guangzhou, China
Audio files: 4
Posted: Thu Aug 16, 2007 1:12 pm Post subject:
Acoustic Interloper wrote:
Was talking with Jeremy's bass teacher, who moved into a nearby house last fall, during my nightly walk the other evening. After doing the finger-picked delays in Live, I was thinking about doing different delays on different strings of the finger-picked MIDI guitar, since the MIDI channel carries the string number. Interleave the strings in time in modulo polyrhythms. We got onto how, with fast signal processing and high-Q bandpass filters on a stringed audio signal, you might not be able to ID the strings per se, but you could pull out different drones or recurring central notes, and do wonderful non-linear things when they appear. Sort of a generalization of the idea of sympathetic strings, but without the fuss of building yet another stringed instrument. To never have to buy another banjo . . .
That sounds at least approachable in SuperCollider...
hjh _________________ ddw online: http://www.dewdrop-world.net
sc3 online: http://supercollider.sourceforge.net
Acoustic Interloper

Joined: Jul 07, 2007 Posts: 2074 Location: Berks County, PA
Audio files: 89
Posted: Fri Aug 17, 2007 2:02 pm Post subject:
mosc wrote:
I don't know how many DSPs it would take, but I think an 8 processor system would probably work - just a guess. The Kyma uses the 24 bit Moto DSP. I'm not sure which one.
http://www.symbolicsound.com/press-AES98.html gets into some detail.
"Based on the new Motorola DSP-56309 chip running at 80 MHz" (this was 1998)
Quote:
More Background on the Motorola 56309
Announced in January of this year (1998), the 80 MHz 56309 DSP from Motorola is the latest in the 56000 series of DSPs long favored by professional audio hardware manufacturers--including Digidesign, TC Electronics, Eventide, Peavey, Event, and others.
Besides running at the faster clock speed of 80 MHz, the 56309 executes an instruction on each cycle, has a normalize instruction, and can do a multiply and accumulate in one instruction. It does 24-bit arithmetic and has a 56-bit accumulator, providing a signal-to-noise ratio of 336 dB.
Assuming the peak parallelism of a multiply, an add, and three on-chip memory fetches per instruction, at the 80 MHz clock rate, the 56309 provides an effective clock rate of 5 * 80 = 400 MHz. Thus, a fully-loaded Capybara-320 with 28 processors provides an effective clock speed of 28 * 400 = 11,200 or 11.2 Gigahertz devoted exclusively to generating and processing audio.
If you prefer to think in terms of operations per second, assuming a peak parallelism of a multiply and an add on each instruction, the 56309 can reach 2 * 80 = 160 million operations per second. Multiply that by 28 processors and you get 4.48 Giga operations per second--that's 4.48 billion operations per second.
I love DSP benchmarks. I love DSPs even more. That "5 *" may be a wee bit suspect -- it gives each DSP data tap, coefficient, and instruction fetch its own multiplier (or maybe these are data fetch, store, and coefficient fetch -- not sure). The 2 ops per clock figure appears more reasonable; signal-processing instructions are typically single cycle. So for 8 processors that'd be 1.28 Gops.
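For what it's worth, both the press-release arithmetic and the more conservative per-cycle figure check out in a few lines. This is just a Python sketch of the quoted numbers, not a measurement of anything:

```python
# Re-deriving the quoted Capybara-320 numbers, plus the conservative
# 2-ops-per-cycle estimate for an 8-processor system. Pure arithmetic.

CLOCK_MHZ = 80     # DSP56309 clock
PROCESSORS = 28    # fully loaded Capybara-320

# Press-release "effective clock": multiply + add + three memory fetches,
# each counted as a separate operation per cycle.
effective_mhz = 5 * CLOCK_MHZ              # 400 "MHz" per DSP
total_mhz = PROCESSORS * effective_mhz     # 11,200 MHz = 11.2 GHz

# Counting only the single-cycle multiply-accumulate (2 ops per cycle):
mops_per_dsp = 2 * CLOCK_MHZ                     # 160 MOPS per DSP
total_gops = PROCESSORS * mops_per_dsp / 1000    # 4.48 GOPS for 28 DSPs
eight_proc_gops = 8 * mops_per_dsp / 1000        # 1.28 GOPS for 8 DSPs

print(total_mhz, total_gops, eight_proc_gops)    # 11200 4.48 1.28
```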
Freescale (the Motorola spinoff that, unlike Agere, made some money before it was acquired -- post-Agere LSI just had another layoff, by the way) listed its latest 24-bit DSP in Jan. 2007 at http://media.freescale.com/phoenix.zhtml?c=196520&p=irol-newsArticle&ID=946152&highlight
Quote:
The Symphony audio DSP56720 and DSP56721 single-chip solutions were designed using dual DSP56300 24-bit cores, which handle both the latest decoding standards and advanced post processing on the same chip. Each core operates at 200 MIPs with a 200 MHz clock, enabling the chip to meet the high performance requirements of many audio applications including HD audio standards such as DTS-HD, Dolby® Digital+ and Dolby TrueHD. Currently, many of today's high performance audio products use multi-chip DSP implementations. The Symphony audio multi-core DSPs eliminate the need for a multi-chip solution, dramatically reducing board space and the cost of the design.
So they are also quoting single-cycle instructions. OK: 200/80 = 2.5x faster per core, and the dual cores make it 5x overall if you can keep both busy via multithreading.
Some Intel benchmarks (http://www.intel.com/ca/multiply/hub_core2duo_benchmarks_070307.pdf) show dual-core Pentiums at 19,000 to 34,000 MIPS and about 13,000 Mflops, plus some multimedia integer-operation speeds that require more digging to understand. The buses operate at up to 800 MHz. They also offer a 4-core package.
Hardware speed obviously is not the entire question, although these multicore processors offer the opportunity to offload some of the OS BS onto separate cores. The other part of the equation is the software environment. Kyma has obviously spent a lot of time and effort on that. So far I've ordered the manual. I like reading manuals.
dewdrop_world wrote:
That sounds at least approachable in SuperCollider...
hjh
Yeah, I gotta study this some more. Alienware ain't telling me what's up with comatose machine #1, and Apple ain't telling me when they're gonna ship machine #2, so I finally added some memory to the "fallback machine" here so I can load my Edirol ASIO driver & get back to work. Decided to download SC before ChucK strictly for performance reasons: Ge told me at EM2007 that ChucK doesn't use hardware multithreading right now, whereas SC's 2-process architecture guarantees that it'll use at least 2 cores (I'm guessing the engine pulls harder than the interpreter, though -- wonder if it's multithreaded).
Haven't really decided; I may be able to justify doing both, although my time is likely the bottleneck (which may mean Kyma). I remember when I transferred from Bell Labs Reading to Allentown in 1988, there was a TI LISP machine sitting in my office. That would have been fun, but there was no time for that, either.
I did take on a Sparc, though, when SunOS running on Mot 68000 was the only stable, supported Sun at the time. The more things change . . .  _________________ When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
Acoustic Interloper

Joined: Jul 07, 2007 Posts: 2074 Location: Berks County, PA
Audio files: 89
|
Posted: Mon Aug 20, 2007 10:22 am Post subject:
|
 |
|
dewdrop_world wrote:
That sounds at least approachable in SuperCollider...
hjh
Is there a comprehensive doc for the SC UGens? http://supercollider.sourceforge.net/docs/UGens/Tour_of_UGens.html is a tour, but not a complete reference. I am estimating the difficulty of using the FFT UGens for pitch shifting, time stretching, and related FX on live audio.
Thanks. _________________ When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
dewdrop_world

Joined: Aug 28, 2006 Posts: 858 Location: Guangzhou, China
Audio files: 4
Posted: Mon Aug 20, 2007 3:29 pm Post subject:
There are help files for most UGens. Usually what I do is use the command "UGen.dumpSubclassList" to get an alphabetical list of all UGens (with superclasses), and if one of them looks intriguing, I'll select the text and hit the help button to see if it's documented. (Not all of them are.)
Also see this -- Dan Stowell just did a graphical UGen browser. http://www.nabble.com/UGen-thematic-browser-p12213572.html
FFT for pitch shifting should be pretty straightforward (see PV_BinShift). Also the granular pitch shifter PitchShift is pretty good. For time stretching of live input, you'll run into the same problem as you were discussing with Kassen in the ChucK section -- you have the record head and playback head running at different rates, so a small circular buffer would be a problem. You can have a big circular buffer -- the scsynth executable is pretty small, and you'll have most of the 32-bit address space available. Or it would be possible to record the audio to disk in one buffer and use a routine in the client to update another buffer piecemeal. Should be possible to do this without glitches... then just read the second buffer at whatever rate you need.
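The record-head vs. playback-head issue above comes down to simple buffer-sizing arithmetic. Here is a language-neutral sketch in Python (the function name and numbers are illustrative, not SC API):

```python
# Toy model of the live time-stretch buffer problem: the record head
# advances one sample per sample period while the stretched playback head
# advances `rate` samples, so the lag grows linearly with time and bounds
# how big the circular buffer must be.

def max_lag_samples(rate: float, duration_sec: float, sr: int = 44100) -> int:
    """Samples by which playback falls behind recording after
    duration_sec of stretched playback at 0 < rate <= 1."""
    ticks = duration_sec * sr
    write_pos = ticks          # record head: 1 sample per tick
    read_pos = ticks * rate    # playback head: rate samples per tick
    return int(write_pos - read_pos)

# Half-speed playback sustained for 60 seconds needs ~30 s of headroom:
print(max_lag_samples(0.5, 60))   # 1323000 samples (~30 s at 44.1 kHz)
```

This is why a small circular buffer fails: the required headroom grows with how long you sustain the stretch, which motivates the big-buffer or record-to-disk approaches James describes.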
Acoustic Interloper wrote:
SC's 2-process architecture guarantees that it'll use at least 2 cores (I'm guessing the engine pulls harder than the interpreter, though -- wonder if it's multithreaded).
Yes, the interpreter's CPU usage usually doesn't top 5%. scsynth has only one real-time thread for audio rendering (there are async commands that run on a lower-priority thread). If you run two instances of scsynth at the same time and distribute the load roughly equally, the OS seems to put the two servers' real-time threads on different cores, and there's a marked increase in the amount of audio work the machine can do. This is pretty easy to show with a simple benchmark.
James _________________ ddw online: http://www.dewdrop-world.net
sc3 online: http://supercollider.sourceforge.net
Acoustic Interloper

Joined: Jul 07, 2007 Posts: 2074 Location: Berks County, PA
Audio files: 89
Posted: Mon Aug 20, 2007 4:04 pm Post subject:
dewdrop_world wrote:
FFT for pitch shifting should be pretty straightforward (see PV_BinShift). Also the granular pitch shifter PitchShift is pretty good. For time stretching of live input, you'll run into the same problem as you were discussing with Kassen in the ChucK section -- you have the record head and playback head running at different rates, so a small circular buffer would be a problem. You can have a big circular buffer -- the scsynth executable is pretty small, and you'll have most of the 32-bit address space available. Or it would be possible to record the audio to disk in one buffer and use a routine in the client to update another buffer piecemeal. Should be possible to do this without glitches... then just read the second buffer at whatever rate you need.
Thanks once again! In thinking about this, I think a workable strategy is just to let the slow variant play for some modulo time period: establish BPM, meter, and some number of measures, at which point the slow copy is considered "back in sync," after which I can empty the buffer and start over. Just another cyclic process, like a washing machine!
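The "back in sync" bookkeeping works out to exact rational arithmetic: if the slow copy runs at rate p/q (in lowest terms), one measure of material takes q/p measures of real time, so both land on a measure boundary together after p measures of material, i.e. q measures of real time. A small sketch (helper name is hypothetical):

```python
# Compute how long until a rate-shifted copy realigns with real time on
# a shared measure boundary, using exact fractions.

from fractions import Fraction

def resync_period(rate):
    """Return (measures of material, measures of real time) until the
    slowed copy and real time realign on a shared measure boundary."""
    r = Fraction(rate)   # reduced to lowest terms automatically
    return r.numerator, r.denominator

print(resync_period(Fraction(1, 2)))   # (1, 2): 1 slow measure = 2 real
print(resync_period(Fraction(3, 4)))   # (3, 4)
```

So at half speed the buffer can be emptied every 2 real measures; at 3/4 speed, every 4.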
Listening to Reich et al. lately (Drumming just arrived late last week; I've gone through both CDs 3 times now) suggests a lot of generalizations of looping that have some promise, different from Reich and, I guess, from typical looping. A better name than looping might be "reincarnation." Identify a phrase, copy the phrase, apply some transform (I'm mostly interested in non-lossy transforms, i.e., reversible transforms that preserve all of the info in the original audio waveform), and then reincarnate the phrase in some compatible phrase slot. Suppose a control pedal identifies the start and end of a phrase. (Much simpler than real-time pattern matching.) Instead of typical pedals, have enough pedals that we can ID the "type" of a phrase, and a phrase of a given type can be reincarnated only in a type-compatible phrase slot. So now you compose a piece by writing a grammar for it, and you perform it both by playing something (banjo, obviously) and by concurrently copying phrases out of the ongoing stream using your feet or whatever, having them reincarnate in places that are type-compatible.
Need to work up a demo by em2008. Grammars are very powerful. It seems like looping is really just a special case of what's called a right-regular grammar -- the nonterminal "wildcard" appears only at the right end of the structure to be looped -- and right-regular grammars are the simplest class of grammars.
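The looping-as-grammar idea can be sketched concretely. In the toy grammar below, the nonterminal S appears only at the far right of each production (so expansion can only append phrases), and "A" and "B" stand for phrase types constraining which recorded phrases may be reincarnated where. The grammar and names are illustrative, not from any real looper:

```python
# Looping as a right-regular grammar: repeat A-phrases, then cadence
# with a B-phrase. Terminals are phrase-type slots.

import random

GRAMMAR = {
    "S": [["A", "S"],   # keep looping: another A-phrase, then continue
          ["B"]],       # cadence: close with a B-phrase
}

def expand(symbol, rng, depth=0, max_depth=8):
    if symbol not in GRAMMAR:   # terminal: a phrase slot of that type
        return [symbol]
    # Past max_depth, force the terminating production so expansion ends.
    options = GRAMMAR[symbol] if depth < max_depth else [GRAMMAR[symbol][-1]]
    out = []
    for sym in rng.choice(options):
        out.extend(expand(sym, rng, depth + 1, max_depth))
    return out

print(expand("S", random.Random(1)))   # a run of 'A' phrases ending in 'B'
```

A performance could then map each terminal slot to a type-compatible recorded phrase; richer pieces just mean richer (still right-regular, or beyond) grammars.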
Quote:
Yes, the interpreter's CPU usage usually doesn't top 5%. scsynth has only one real-time thread for audio rendering (there are async commands that run on a lower priority thread). If you run two instances of scsynth at the same time and distribute the load roughly equally, the OS seems to put both servers' real-time threads on the different cores and there's a marked increase in the amount of audio work the machine can do. This is pretty easy to show through a simple benchmark.
James
Balancing load across multiple threads can be difficult. Midime uses a thread per pipeline stage, and the threads work correctly on a multithreaded Pentium, but the problem with threading a pipeline is that the second thread doesn't kick in unless an incoming MIDI event arrives before the previous one is fully processed. That helps with throughput, but not a bit with latency. I want to look at how well I could parallelize an individual pipeline stage, e.g., do rhythm extraction in parallel with harmony/melody extraction; they live in the same pipeline stage. Even better, parallelize score matching, which I suspect is the real CPU eater.
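The intra-stage idea can be sketched generically. This is not Midime's actual code; the extractors below are hypothetical placeholders, and note that CPython's GIL limits the speedup for pure-Python work (extractors written in C, or run as separate processes, would parallelize fully across cores):

```python
# Sketch of parallelizing a single pipeline stage: run the rhythm and
# harmony/melody extractors on the same event window concurrently,
# attacking stage latency rather than just pipeline throughput.

from concurrent.futures import ThreadPoolExecutor

def extract_rhythm(events):
    """Placeholder analysis: reduce MIDI note numbers to pitch classes."""
    return [e % 12 for e in events]

def extract_harmony(events):
    """Placeholder analysis: collect the distinct pitches in the window."""
    return sorted(set(events))

def analyze(events):
    # Both extractors start immediately instead of running back-to-back.
    with ThreadPoolExecutor(max_workers=2) as pool:
        rhythm = pool.submit(extract_rhythm, events)
        harmony = pool.submit(extract_harmony, events)
        return rhythm.result(), harmony.result()

print(analyze([60, 64, 67, 60]))   # ([0, 4, 7, 0], [60, 64, 67])
```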
I assume this is why Intel et al. and the PCs and Macs are migrating from multithreaded cores to multiple cores. (I don't know if these multiple cores are also multithreaded.) A multithreaded core requires explicit thread creation & scheduling by the application or compiler; all hardware threads live in a single process and its address space. A multiple-core chip, on the other hand, has distinct processors that run distinct processes concurrently, for example some OS process at the same time as a musical application. It still helps, since it increases the probability of available CPU bandwidth for the musical app. _________________ When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
Acoustic Interloper

Joined: Jul 07, 2007 Posts: 2074 Location: Berks County, PA
Audio files: 89
Posted: Tue Aug 21, 2007 7:32 am Post subject:
OK. _________________ When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
Acoustic Interloper

Joined: Jul 07, 2007 Posts: 2074 Location: Berks County, PA
Audio files: 89
Posted: Thu Jan 24, 2008 1:15 pm Post subject:
mosc wrote:
John Paul Jones is a big Kyma user. I have seen several of his sounds (Kyma parlance for patches). I didn't know he was famous.
Hey, Howard, Led Zeppelin had a reunion concert in London in December 2007, after not playing a concert together since their drummer died in 1980. Here's what John Paul Jones did to ease his nerves before the big concert, as told in the January 2008 issue of Rolling Stone, p. 10.
Quote:
On the day of the show, says Jones, "I tried to keep the enormity of it all as far away as possible until the last minute. I sat around playing banjo all day. It calms me down."
Kyma *and* banjo. I've watched video of him processing acoustic guitar through Kyma. I don't know how long he's been playing banjo, but I know he's producing some Appalachian-folk-derived musicians in the U.S. and Canada.
I just found out that there's a genre to describe electro-processed folky music, Folktronica. Also, my nephew suggested that my new Farewell ballad and electro-banjo pieces might fit into the New Weird America genre. I feel so trendy.  _________________ When the stream is deep
my wild little dog frolics,
when shallow, she drinks.
mosc
Site Admin

Joined: Jan 31, 2003 Posts: 18274 Location: Durham, NC
Audio files: 232
G2 patch files: 60
Posted: Sat Jan 26, 2008 12:29 pm Post subject:
Cool. I guess that puts a little more weight on the positive side of your Kyma decision....  _________________ --Howard
my music and other stuff
Acoustic Interloper

Joined: Jul 07, 2007 Posts: 2074 Location: Berks County, PA
Audio files: 89
Posted: Sat Jan 26, 2008 1:19 pm Post subject:
mosc wrote:
Cool. I guess that puts a little more weight on the positive side of your Kyma decision....
Ah, well, I'm not so sure it's a decision so much as a concession to a set of constraints, particularly time spent on job hunting. I suspect that in a parallel universe somewhere I am banging my head against Kyma signal transforms, whereas today in this universe it is Max/MSP. With some help from EdisonRez and cbm, I can now make my purest-tone banjo sound like I am playing bed springs. The good news on this front is that I now have a second processed-banjo tune in the works in addition to Ordinary Machinery. I ought to be in good shape for em2008.
I haven't given up on exploring Kyma -- for now it's deferred until I have more time. Having completed Volume I of Musimathics, I am now slowly working my way through Volume II, which is the signal processing volume, and making up lab exercises in Max/MSP as I go along. Between my freelance programming gigs and job hunting, that's about all I have time for. I am happy to be making progress on musical and non-musical fronts, even though both are largely "catch as catch can" for another year. _________________ When the stream is deep
my wild little dog frolics,
when shallow, she drinks.