electro-music.com   Dedicated to experimental electro-acoustic
and electronic music
 
 Forum index » DIY Hardware and Software » Microcontrollers and Programmable Logic
FPGA oscillator aliasing
gdavis



Joined: Feb 27, 2013
Posts: 359
Location: San Diego
Audio files: 1

Posted: Mon Mar 13, 2017 7:16 pm    Post subject: FPGA oscillator aliasing

Is this worth discussing here? I did a quick search and didn't find much. Curious if anyone has thoughts on this.

Now that I'm able to control my Zybo synth with a MIDI keyboard, I've noticed the aliasing of the oscillator (just a simple saw at this point). This surprised me at first but after doing a little research it makes perfect sense.

The question is, how to address aliasing in an oscillator implemented in FPGA logic?

I've read up a bit on BLIT and BLEP which in concept are pretty much what I was thinking would be a way to address this. I can see implementing them in a processor, but they seem a bit complicated to do in FPGA (at least at my level, which is pretty noobie in regards to implementing DSP). My gut tells me it would chew up FPGA resources fairly quickly.

Oversampling seems to be a simpler solution, though I was hoping to use that bandwidth for polyphony. The FPGA resources for the oscillator are pretty low though so maybe this is better use of resources?

And finally, there's a lookup table. If I used board memory, this would be very light in terms of FPGA resources, though I'd worry a little about the access latency (especially with multiple oscillators trying to read it). FPGA block memory might be possible for simple periodic waveforms.

Does that pretty much cover the available options? Thoughts?
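
For anyone following along, the aliasing itself is easy to demonstrate offline. Here's a rough Python sketch (not FPGA code; the sample rate and fundamental are arbitrary choices for illustration) showing how a naive phase-accumulator saw folds a harmonic back below Nyquist:

```python
import numpy as np

fs = 48000                    # assumed sample rate; 1 s of audio -> 1 Hz FFT bins
f0 = 5000.0                   # fundamental chosen so the 5th harmonic exceeds Nyquist
n = fs
t = np.arange(n) / fs
phase = (f0 * t) % 1.0        # phase accumulator, 0..1 per cycle
saw = 2.0 * phase - 1.0       # naive (trivially sampled) sawtooth

spec = np.abs(np.fft.rfft(saw)) / n
# True harmonics sit at 5, 10, 15, 20 kHz; the 5th harmonic (25 kHz) is above
# Nyquist (24 kHz) and folds back to 48000 - 25000 = 23000 Hz, where no
# harmonic of 5 kHz belongs.
assert spec[23000] > 10 * np.median(spec)
```

The folded component at 23 kHz is inharmonic, and lower fold-backs land right in the audible range as the pitch rises, which is exactly the effect heard from the keyboard.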

_________________
My synth build blog: http://gndsynth.blogspot.com/
JovianPyx



Joined: Nov 20, 2007
Posts: 1988
Location: West Red Spot, Jupiter
Audio files: 224

Posted: Mon Mar 13, 2017 8:15 pm

start here:
http://www.dspguide.com/
you should have fun there.

_________________
FPGA, dsPIC and Fatman Synth Stuff

Time flies like a banana.
Fruit flies when you're having fun.
BTW, Do these genes make my ass look fat?
corruptio optimi pessima
gdavis




Posted: Tue Mar 14, 2017 3:55 pm

JovianPyx wrote:
start here:
http://www.dspguide.com/
you should have fun there.


Oh, I've already been spending time there. Theory is great, but I'm still having trouble wrapping my head around what the most practical solution would be in this case. Bouncing ideas off another sentient being usually helps me :)

Everything other than a table or oversampling seems to involve an excruciating number of calculations to implement in FPGA logic, unless I'm missing something.

I'm not sure how to implement a table scheme that produces all the possible harmonics below Nyquist over a large fundamental frequency range - say 16 Hz to 5 kHz. Split it up into smaller ranges with a table for each range?

Oversampling is starting to look like a good option, but I can't help thinking there's a way to calculate band-limited samples in real time while keeping the calculations required to a reasonable level for FPGA implementation. I'm looking at deriving a band-limited continuous function from the unlimited waveform function combined with a sinc function (or a similar low pass), trying to come up with something that produces a reasonable approximation with a reasonable amount of calculation.
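
For what it's worth, the brute-force version of that band-limited function is just a truncated Fourier series - too many multiplies for real-time use at low fundamentals, but handy as an offline reference. A hedged Python/NumPy sketch (sample rate and frequencies are arbitrary):

```python
import numpy as np

def bandlimited_saw(f0, fs, n):
    """Sawtooth as a truncated Fourier series: harmonics 1..kmax only."""
    t = np.arange(n) / fs
    kmax = int((fs / 2) // f0)            # highest harmonic at or below Nyquist
    out = np.zeros(n)
    for k in range(1, kmax + 1):          # one sine + multiply-add per harmonic
        out += (-1) ** (k + 1) * np.sin(2 * np.pi * k * f0 * t) / k
    return (2 / np.pi) * out

saw = bandlimited_saw(1000.0, 48000, 48000)   # alias-free by construction
```

For a 16 Hz fundamental at 44.1 kHz, that loop runs over 1300 times per sample period, which is exactly the pile of MACC operations that makes the direct approach unattractive in fabric.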

JovianPyx




Posted: Tue Mar 14, 2017 5:51 pm

heh, I see. In my experience, there's just no easy way to deal with aliasing (which is not to say you asked for an easy way). Tables work, but you need multiple tables because those that suffice at high pitches produce muddy tones at the low pitch end. And of course some logic is needed to decide which tables to use, and then the interpolation.
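
To make the multi-table idea concrete, here's a rough offline sketch of the usual "mipmap" arrangement - one band-limited table per octave, each built for the top of its range so nothing exceeds Nyquist. All sizes and ranges here are illustrative assumptions, and the per-sample interpolation is omitted:

```python
import numpy as np

FS = 48000
TABLE_LEN = 2048
BASE = 20.0                      # lowest supported fundamental (assumption)

def make_table(max_harmonic):
    """One cycle of a saw band-limited to harmonics 1..max_harmonic."""
    x = np.arange(TABLE_LEN) / TABLE_LEN
    tbl = np.zeros(TABLE_LEN)
    for k in range(1, max_harmonic + 1):
        tbl += (-1) ** (k + 1) * np.sin(2 * np.pi * k * x) / k
    return (2 / np.pi) * tbl

# One table per octave; each is built for the TOP of its octave so even the
# highest fundamental in that range keeps all harmonics below Nyquist.
tables = []
f = BASE
while f < FS / 2:
    tables.append(make_table(max(1, int((FS / 2) // (2 * f)))))
    f *= 2

def table_for(f0):
    """Select the table whose octave range contains fundamental f0."""
    i = int(np.floor(np.log2(f0 / BASE)))
    return tables[min(max(i, 0), len(tables) - 1)]
```

The low-octave tables carry hundreds of harmonics and the top ones almost none, which is why the high tables sound muddy if reused at low pitch - hence the selection logic and (ideally) crossfading between adjacent tables.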

My own approach - I had a Spartan 3E Starter Kit which has a 12 bit DAC capable of 1 MHz sample rate. I completely ignored the idea of aliasing by running the synth at the DAC's top sample rate. This way, I could use naive waveforms because alias artifacts were at such a low amplitude they couldn't be heard.

I have also tried the "upsample and brick wall filter" method at more reasonable sample rates. This works, but suffers from fluctuating amplitude at high pitches.

As for "all possible harmonics below Nyquist", that may not be necessary since human hearing drops off below what would be considered the lowest "acceptable" Nyquist frequency for high fidelity.

One of my tasks in DSP design is deciding what "good enough" is. In Karplus-Strong designs, I make no effort at all to suppress aliasing even though I know it's there (theoretically). Think about this system for a moment - it's a waveform that loops through a delay line, with a digital low pass filter in that loop. Any alias problems will be present only during the very first few cycles of the waveform - during the attack, or "piano-forte thump" portion of the sound. So in that case, I tolerate alias artifact noise for a period of time so short that it makes no noticeable, or even perceptible, difference in the timbre of the sound.

What you're facing is why so many of my projects go through so many versions. I'll get it working one way and then find or realize a better way that takes less FPGA fabric or improves the sound.

A large factor is usually "how many clocks do I have to make each sample?". Working with the fastest clock I can has always been beneficial since logic can be shared. More clocks means more processing power to do more clever things to the signal.

I've also done other synth types such as an additive flute synth. A flute produces pretty much a fundamental and one harmonic at 2F. Such a synth can easily just forget about aliasing - look at the actual frequency range of a real flute and you'll see why.

With regard to completely random waveforms, I wrote a synth that generated the basis of a waveform using random numbers to fill a table with rectangular waveform pieces. Since there were many obvious transients, the waveform as produced would alias horribly. I wrote code to replace vertical step changes with variable, controllable-slope splines, which eliminated the aliasing artifacts satisfactorily. In that case, I used a cubic spline. The same thing is also possible using a single pole low pass filter passed over the wavetable both forward and reverse (which eliminates phase shift). The difference in computational effort between the two methods was not large, so for me, "try it and see" was at least educational. The synth does the random waveform generation in realtime with crossfades to change waveforms, and could do the changes at low audio rates (about 3 kHz).
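
That forward-then-backward one-pole trick can be sketched in a few lines of Python. This is an offline illustration only; the coefficient value and the cyclic warm-start are my own assumptions:

```python
import numpy as np

def zero_phase_smooth(table, a=0.25):
    """One-pole low pass run forward, then backward, over a wavetable.
    The backward pass cancels the phase shift introduced by the forward pass."""
    def one_pole(x):
        y = np.empty_like(x)
        state = x[-1]                 # crude cyclic warm start (assumption)
        for i, v in enumerate(x):
            state += a * (v - state)  # y[n] = y[n-1] + a * (x[n] - y[n-1])
            y[i] = state
        return y
    return one_pole(one_pole(table)[::-1])[::-1]
```

Run over a table with a hard step, the step gets spread over several samples in both directions but stays centered where it was, which is the "eliminates phase shift" property.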

The sinc function idea is something I've seen many times, but I've never employed it. As far as I've seen, it's been an FIR filter, and the longer the kernel, the better it works. But kernel length adds latency. Since I do MIDI synths, I'm always concerned about latency. This is a place where higher sample rates help.

I've worked only with a few Xilinx products, but even among them, the way you use fabric resources can matter. Some methods generate less routing clutter than others which can affect timing. That is a factor when you're trying to get the FPGA clock as high as possible.

Dunno if any of that helps, but I hope so.

gdavis




Posted: Tue Mar 14, 2017 8:20 pm

JovianPyx wrote:

A large factor is usually "how many clocks do I have to make each sample?". Working with the fastest clock I can has always been beneficial since logic can be shared.

I've been considering this in two scenarios:
1. Share logic to create more voices - use the faster clock during one sample period to calculate a single sample of each voice for multiple voices. So a multiplier for example would serve a single function in the sample calculation, but be shared for a number of voices.

2. Share logic to implement more complex calculations - so a multiplier would be used to calculate one sample for one voice, but would be shared for a number of different functions of calculating that sample.

Scenario 1 seems more straightforward. Scenario 2 seems like I'd essentially be implementing a processor, at which point I should just use an existing processor and do this in software rather than FPGA ;)


Quote:
The sinc function idea is something I've seen many times, but I've never employed it. As far as I've seen it, it's been an FIR filter. And the longer the kernel, the better it works. But Kernel length adds latency. Since I do MIDI synths, I'm always concerned about latency. This is a place where higher sample rates help.

Well, I was thinking of not doing the calculation in hardware, but rather that since it's a well defined periodic waveform, I could just calculate the final wave function by hand - doing the convolution integral on the sawtooth and sinc functions to come up with the final continuous equation that I could sample in the FPGA. Turns out these functions aren't integrable in closed form :D There's a series approximation, but then it becomes essentially the same problem as just doing a Fourier series - multiplying and adding a lot of terms.

Quote:
Dunno if any of that helps, but I hope so.


It did, thanks :) You may not have provided a specific solution (which isn't necessarily what I was asking for), but most if not all of your points mirrored my own thoughts, which at least gives me some comfort that my head is in the right space.


Everything involving real-time calculations seems to come down to a bunch of MACC operations (which I suppose is the essence of DSP after all). Maybe I should just make use of the high clock rate and the Zynq DSP blocks. I think there are 80 of them, each of which implements a MACC. Should still be able to get a decent number of voices from that, huh?

Off the top of my head, I think I can do about 60 MACC operations per sample with one DSP block - more if I up the clock frequency. I guess I should be able to get a few hundred MHz, but I can't figure out Xilinx's speed grading system to get an actual number. How many harmonics does a 16 Hz sawtooth need to recreate a reasonable audio representation?
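
The budget arithmetic is easy to run: below a 20 kHz audibility limit, a 16 Hz saw has 20000/16 = 1250 partials, so brute-force additive synthesis at the 60-MACC-per-sample figure above would need about 21 DSP blocks for a single voice. A quick sanity check (the 20 kHz limit and the 60-MACC figure are the assumptions from this thread):

```python
f_low = 16.0                      # lowest fundamental under consideration
audible_top = 20000.0             # assumed top of the audible band

harmonics = int(audible_top // f_low)        # partials below 20 kHz
maccs_per_block = 60                         # per sample, per DSP block (estimate above)
blocks = -(-harmonics // maccs_per_block)    # ceiling division

print(harmonics, blocks)                     # 1250 21
```

So direct additive synthesis of the lowest notes would eat roughly a quarter of the Zynq's 80 DSP48 blocks per voice, before any filter or VCA - which is the case for tables, BLEP-style corrections, or oversampling instead.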

JovianPyx




Posted: Wed Mar 15, 2017 3:54 am

There is also the "BLIT" family of methods - BLIT, or Band Limited Impulse Train, can be used to create a band limited sawtooth (for example) by adding band limited impulses (hence "train"). All of the band limiting is performed on the impulse first, and then the output of that (which has no alias components) can be combined with adds to make whatever can be made, with no alias components in its output. There are also several other methods like BLEP that begin with BL to read about. These are all rather specific about which problems they can solve. I've not used them, mainly because I like simple stuff and BL* tends to be a bit complex for my 13 cell brain. But that's me.
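
A rough offline illustration of the BLIT idea: drop a band-limited (windowed-sinc) impulse at each period boundary, with sub-sample placement, remove the DC, and integrate the train into a sawtooth. The tap count, window, and leak factor here are arbitrary sketch choices, not a tuned design:

```python
import numpy as np

def blit_saw(f0, fs, n, half_taps=32):
    """Sawtooth from a band-limited impulse train: windowed-sinc impulses at
    every period boundary, DC removed, then a leaky integrator."""
    period = fs / f0
    sig = np.zeros(n + 2 * half_taps)
    k = np.arange(-half_taps, half_taps)            # sinc tap indices
    t = 0.0
    while t < n:
        frac = t - np.floor(t)                      # sub-sample impulse position
        taps = np.sinc(k - frac) * np.hanning(2 * half_taps)
        i = int(np.floor(t)) + half_taps
        sig[i - half_taps:i + half_taps] += taps    # place one band-limited impulse
        t += period
    sig = sig[half_taps:half_taps + n] - 1.0 / period   # subtract the train's DC
    out = np.zeros(n)
    acc = 0.0
    for j in range(n):                              # leaky integrator -> ramp
        acc = 0.999 * acc + sig[j]
        out[j] = acc
    return out
```

Because the impulses are band-limited before the integration, the resulting ramp inherits the saw's 1/k harmonic rolloff without the fold-back products a naive ramp produces.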

You mentioned that working in certain ways causes you to create a "processor" in FPGA fabric... Yes! You do! :) But it's not just a processor - it's a highly specialized processor designed to do exactly what you need. I've done those in synths more than once, although it was not in a Zynq where I had a choice between ARM code or FPGA. Still, I think there is a place for that even in a hybrid like the Zynq. My idea with the Zybo was to create a mostly self contained synthesizer in FPGA fabric, control it from an ARM core, and use the second core to process some audio effect like reverb. I thought this might work because the ARM has easy access to big external RAM. I will admit that the Raspberry Pi has been a distraction from that.

I've admitted this before: I've no formal training nor education in DSP. I did pass college calculus with an A grade 30+ years ago, but that's just the math basis for DSP, not how to use an FPGA to do it. So I've also got no formal FPGA technique training; all of my work has been a sort of blind man's bluff (not complaining either). I do know that it is taught that a pipelined design is the preferred target, such that you create a chain of arithmetic and logic elements between registers so that you just clock the crap out of it and answers come out. I've never been able to accomplish that sort of system, but maybe synths really don't fall into that category of project (I dunno). My FPGA code has always worked out to a state machine that uses pipelining wherever possible, but invariably I seem to get odd process structures. They work, but as you say, they do look like a processor and not a streamlined logic design.

As for how many harmonics a 16 Hz saw requires, well, that's a question I think only ears and your own tolerance can answer. Everyone's ears are different and everyone's tolerance for missing harmonics is different. The most extreme answer is "all of them up to just below Nyquist". That conforms best to theory, but ignores the fact that hearing drops off before Nyquist on a high fidelity system. And I think you've hit the devil in the details squarely on the nose. Here's a question I ask myself: do I sit and listen to a sawtooth waveform being critical of its harmonic construction - really? Probably not. I listen to a sawtooth that is produced by an instrument playing a piece of music. So if your music is drones that depend on high frequency harmonic interactions with other stuff, like filters or beating (say two closely tuned saw waves), then perhaps you need to consider higher harmonic frequencies for your personal limit. If you're playing classical fugues with the synth, I think these harmonics become less important due to the complexity of the melodies - your brain won't have time to consider "oh, a harmonic at the top is missing"... :)

gdavis




Posted: Wed Mar 15, 2017 1:31 pm

Dude, you're spooky! BLIT is exactly where my investigation headed and I've already been reading up on it. Looking like a strong contender at this point. I guess I'll need a sine lookup table (which should be relatively easy - no harmonics :) ), but I'm worried about the division. Change it to multiplication with a 1/sin lookup table? But 1/sin is going to shoot off to infinity...

I'm also wondering if my "hand derivation" approach would work with a non-ideal LPF like a butterworth, or maybe doing the convolution in frequency domain (where it becomes a simple multiplication) and doing a reverse transform back to time domain. My gut tells me the reverse transform won't be easy but I wonder if there's a way to at least get a good approximation.

Regarding the "processor", technically it's all a processor, but I guess what I was trying to get at was the varying degrees of specificity or generality. I feel FPGA design is more suited to the more specific end of the spectrum. I don't really want to wander into the realm of creating a general purpose calculating machine. However, it's a continuum with trade-offs that need to be balanced.

Sounds like I might have a little bit of a leg up on you in the DSP theory category, but you've got a head start on me with actual implementation. I have a Computer Engineering degree (a combination of Computer Science and Electrical Engineering, mostly focused on digital electronics). So that was, what, about 3 courses of calculus, differential equations, linear algebra, signal analysis, transforms... ugh. And I did actually study DSP theory. It's been a little over 20 years now for me. I never did attain much competence with the DSP math and have forgotten most of the specifics; mostly just some of the intuitive principles have stuck. So it's fun to be relearning this and hopefully really getting it this time. There was a lot I struggled with in school that finally started to "click" for me when I started doing synth DIY.

One of the things I do remember is that DSP implementations basically break everything down to 3 elements: addition (accumulation), multiplication and delay. If you look at chapter 19 Recursive Filters in the DSP Guide, you'll see this is exactly what they do. It's a little obscured by the use of an array, but a[i-1], a[i-2] etc. are the input signal delayed by 1 sample, 2 samples etc. So hardware DSP (or DSP specific processors) typically implement delay elements (flip-flops) and MACC blocks (streamlined multiply and accumulate). Combining these in a pipeline with enough clock cycles you can implement any DSP algorithm.
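
As a concrete illustration of that add/multiply/delay decomposition (using the DSP Guide's convention of a-coefficients feeding forward and b-coefficients feeding back), here's a recursive filter written the way the hardware would see it - explicit delay registers and a fixed number of MACC operations per sample:

```python
def biquad(x, a0, a1, a2, b1, b2):
    """Recursive filter as hardware sees it: 4 delay registers, 5 MACCs/sample.
    DSP Guide convention: a-coefficients feed forward, b-coefficients feed back."""
    x1 = x2 = y1 = y2 = 0.0          # delay elements (flip-flops in fabric)
    out = []
    for v in x:
        y = a0 * v + a1 * x1 + a2 * x2 + b1 * y1 + b2 * y2   # 5 MACC ops
        x2, x1 = x1, v               # shift the input delay line
        y2, y1 = y1, y               # shift the output delay line
        out.append(y)
    return out

# A one-pole decay as a degenerate case: y[n] = x[n] + 0.5 * y[n-1]
print(biquad([1, 0, 0, 0], 1.0, 0.0, 0.0, 0.5, 0.0))   # [1.0, 0.5, 0.25, 0.125]
```

In fabric, each of those five multiply-adds maps naturally onto a DSP48 slice (or onto one slice time-shared over five clocks), and the delay registers are just flip-flops.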

Of course my ears will tell me how many harmonics I need, I was just kind of thinking out loud :D I would like this to be suitable for people other than just me, so I'm curious what the "average" would be. I just installed an open source "Matlab" type of program called Octave to play around with this and get a sense of the limitations I'm dealing with.

Thank you for your thoughtful replies, this has been a helpful and enjoyable discussion. I don't really have any people around to talk about this sort of stuff with.

gdavis




Posted: Sat Mar 18, 2017 12:57 pm

So I've gone ahead and implemented a polyBLEP solution. It's not perfect, but it's a noticeable improvement. I might play around with it a little more, but I'm pretty happy with it for now, so I think I'll move on to other things to make this synth more playable :)

PolyBLEP is pretty much exactly the type of solution I was looking for. No lookup tables - there's a pre-derived polynomial function used to calculate corrections to the naive waveform in real time. I haven't wrapped my head around the derivation of the polynomial, but I found some examples that were straightforward and didn't have too many calculations to implement in FPGA.
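
For reference, the widely circulated two-sample polyBLEP saw looks roughly like this in Python (a sketch of the same scheme as the C examples floating around, not the exact VHDL in question); an FPGA version would replace the division by dt with a reciprocal multiply:

```python
import numpy as np

def polyblep(t, dt):
    """Two-sample polynomial BLEP residual around a phase wrap (t, dt in cycles)."""
    if t < dt:                        # sample just after the discontinuity
        t /= dt
        return 2.0 * t - t * t - 1.0
    if t > 1.0 - dt:                  # sample just before the discontinuity
        t = (t - 1.0) / dt
        return t * t + 2.0 * t + 1.0
    return 0.0

def polyblep_saw(f0, fs, n):
    dt = f0 / fs                      # phase increment (the tuning word)
    phase = 0.0
    out = np.zeros(n)
    for i in range(n):
        # naive ramp minus the polynomial correction near each wrap
        out[i] = 2.0 * phase - 1.0 - polyblep(phase, dt)
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out
```

The correction only ever touches the two samples straddling the wrap, which is why the hardware cost stays at a handful of multipliers plus that one awkward divide.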

The polyBLEP calculation added 4 dedicated multipliers (DSP48 blocks in the Zynq) and one divider. I think the divider was synthesized in CLBs; I may have to look into a better way to do it. There's no pipelining and I had to add a multicycle constraint for it to meet timing at 50 MHz. Not really a problem as it stands, but it could get tricky when I start oversampling or interleaving voices.

As it stands, I'm using 13 of the 80 available DSP48 blocks for one voice with one oscillator, a simple 4-pole LPF and a VCA - about 16% of the CLBs and DSP48 blocks used. So I still have some room to grow, but I'm not going to get much polyphony without interleaving the voices between sample periods.

JovianPyx




Posted: Tue Mar 21, 2017 7:40 am

As for dividers - I've never liked using division in DSP, but that may be because I'm an "old timer" who lived through the days of slow divide logic or completely absent divide logic.

I've noticed that divides are fairly fast in the Rpi ARM CPU, but I still avoid where possible, especially dividing by a constant (like N/3 would be N*0.33333333). In FPGA logic, the DSP48 blocks contain only multipliers, adders and routing to block RAMs, so this made for very fast "divides".

I'm sure this bit is obvious - for FPGA work, when I was stuck with providing a divide operation, I coded it as either a table lookup or as a reciprocal (which might also be a table lookup) and a multiply. Given something like an SVF which needs (at the human level) Q, the logic actually needs q (which is defined as 1/Q). This q value, created from 1/Q, was supplied through a human interface on a PC into the uProc that acted as a MIDI controller and synth register stuffer.
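
The reciprocal-table-plus-multiply scheme can be sketched offline like this. The 16-entry table, the divisor normalized to [1, 2) (as in floating point), and the linear interpolation are all illustrative assumptions, not a recommendation for any particular bit width:

```python
import numpy as np

# Hypothetical 16-segment reciprocal table covering divisors in [1, 2).
RECIP = 1.0 / np.linspace(1.0, 2.0, 17)      # 17 endpoints, 16 segments

def recip_mul(x, d):
    """Approximate x / d for d in [1, 2): table lookup + interpolation + multiply."""
    pos = (d - 1.0) * 16                     # position within the table
    i = int(pos)
    frac = pos - i
    r = RECIP[i] + frac * (RECIP[i + 1] - RECIP[i])   # interpolated 1/d
    return x * r                             # the "divide" is now one multiply
```

A divisor outside [1, 2) would first be shifted into range and the result shifted back, exactly the normalization a floating point divider performs, so the table stays small regardless of operand width.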

Of course, if whatever you use for divide is fast enough (I am a firm believer in "good enough is good enough"), all is well, but for me, I always tried to get things to happen as fast as possible - mainly so that I could increase either voice or feature count or feature complexity.

I'm curious though, you mentioned your polyBLEP technique wasn't perfect - what is imperfect about it? And did you find a core or did you code it yourself?

One of the reasons I always do a state machine for a MIDI synth is that it then becomes rather simple to re-use the same logic for multiple voices (not that it can't be done other ways). In doing that, because of voice complexity, it may not be possible (at a given clock rate) to produce all of the needed voices with the state machine, so if there is enough unused fabric, I just replicated the state machine as many times as needed (or as would fit) and used a controlling state machine to do the final audio sums. I was able to use this technique on the Spartan-3E to produce 16 voice FM (simple bell configuration), 8 voice KS and several others. I'm sure if I looked at those designs today, I could probably find ways to increase the voice count and the quality of the audio, but at this point, the computation power of the Spartan-3E pales in comparison to a fast multicore ARM with NEON.

gdavis




Posted: Wed Mar 22, 2017 12:38 pm

JovianPyx wrote:
As for dividers - I've never liked using division in DSP, but that may be because I'm an "old timer" who lived through the days of slow divide logic or completely absent divide logic.

Yep, me too. I'd managed to avoid it up to this point, but I haven't done a resonant filter yet. I was having difficulty getting more complex filters to work, so I ended up going with a simple 1-pole LPF that I duplicated 4 times to get a 4-pole filter. I'm planning to try another filter now that I've got a better grasp of how to implement this DSP stuff, so I'll need to take your advice into consideration.

Quote:
I'm curious though, you mentioned your polyBLEP technique wasn't perfect - what is imperfect about it? And did you find a core or did you code it yourself?

It seems to work great at eliminating the high frequency aliasing, but I still have a lot of low frequencies present. I found a couple of examples in C code which I implemented in VHDL. I'm not sure if I made a mistake or if what I'm hearing is actually caused by something else, because I also implemented the algorithm in Octave, where it seemed to work much better. Very interesting to see the effect on the scope :)

Since my last post I've added 4x oversampling, which really helps a lot. Much cleaner sounding overall, to the point that I don't really hear a difference with the polyBLEP.

Quote:
One of the reasons I always do a state machine for a MIDI synth is that it then becomes rather simple to re-use the same logic for multiple voices (not that it can't be done other ways).

Oh, I see. I actually just started working on this, but it's more like a counter and muxes. I'd like to keep it as streamlined as possible and avoid the "odd process structures" you mentioned earlier. ;)

JovianPyx




Posted: Wed Mar 22, 2017 1:06 pm

Regarding detecting alias artifacts, there is more than one method, including using an FFT to create a spectral graph. Once you have the graph, you can analyze the peaks which should be present according to the periodic waveform's harmonic components. The ones that don't belong (not part of the true harmonic sequence) are most likely alias artifacts. In my case, I didn't have easy access to FFT at the time (many years ago).

However, I've always been able to hear the artifacts given the right setup. When I worked with the brick wall filter method, I had two setups, one that simply output the naive sawtooth and another that passed the naive waveform through the brickwall filter. But that's not enough - I also made each setup smoothly vary the pitch of the naive sawtooth. While the oscillator is running (with the pitch variation), the naive waveform alone would have these obvious ghost signals also varying in pitch, but not coincident with the fundamental's pitch variation - a bit like several police cars running their sirens. These oddities were not present in the filtered version. If you've not heard this effect, it's worthwhile to listen to it; it's only a tiny bit subtle. I can hear higher frequency alias artifacts better than I can low frequency ones. Doing this experiment, I was able to convince myself that my brickwall filter was actually removing most of the alias artifacts.

In my view, the "listen method" isn't scientific. That is, it can't tell us how much aliasing there is. However, it's a basic go/no-go test, such that if there's no anti-alias activity, you'll hear it. Getting a scientific result would require sampling your output signal and analyzing it with a program like RMAA or using Audacity's analysis tools. Remember that it will still be OK to sample an aliased signal like this, because aliasing forces all harmonics above Nyquist to wrap back below Nyquist, so the FFT peaks you see will be real, even if they are aliases of the original signal.
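
One way to script that "peaks that don't belong" check offline (the window is chosen to hold an integer number of cycles so the harmonic bins line up exactly; the thresholds are arbitrary):

```python
import numpy as np

def alias_level(sig, fs, f0):
    """Fraction of signal energy at FFT bins that are NOT harmonics of f0.
    Assumes the analysis window holds an integer number of cycles of f0."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    spec[0] = 0.0                            # ignore DC
    step = int(round(f0 * len(sig) / fs))    # FFT bins per harmonic
    harmonic = (np.arange(len(spec)) % step) == 0
    return spec[~harmonic].sum() / spec.sum()
```

A clean oscillator scores near zero; a naive saw whose fold-backs land off the harmonic grid scores measurably higher, which puts a number on what the listening test only confirms qualitatively.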

HTH

gdavis




Posted: Thu Mar 30, 2017 7:00 pm

Yeah, something's not right. I must have made a mistake somewhere.

I can hear the problem even without doing a sweep; playing semitones on the keyboard is enough. If you watched the show Emergency 51 as much as I did as a kid, you'll know the station alert tones it reminds me of.

I recorded a sweep, which I'll attach. When it gets to the higher frequencies, you can start to hear what sounds like "yoi yoi yoi".

I also recorded a continuous tone and did an FFT on it, and there are still all kinds of peaks that shouldn't be there. Running the algorithm through Octave gives much better results than what I'm seeing from the FPGA output.


Attachment: aliasing.wav (2.06 MB, downloaded 897 times)


gdavis




Posted: Thu Mar 30, 2017 11:41 pm

After a closer look at the simulation, I found that the divider is always producing a result of 0. So the polynomial correction factor is always calculated to be +/-0.5 which obviously isn't correct.

I looked into the IP catalog divider generator but it's showing a latency of 32 clocks because of the bit length of the values I'm working with.

I'm afraid this divider may be beyond my abilities at this point, and I may be better off just using oversampling. 4x seemed to work quite well, and it seems like it would actually use fewer clock cycles per sample than a basic divider.

JovianPyx




Posted: Fri Mar 31, 2017 4:06 am

Aha. Now you know why I never liked dividers... Is there a way you could use a lookup table in block RAMs with the quotients pre-calculated? That has worked for me under the right circumstances.

As for oversampling, you also need to get rid of transients and other problems with the input signal. I had done a brickwall FIR filter, but it needs to have a large kernel (I think mine was 256 samples which isn't even very large). Such large kernel filters are also clock expensive...

gdavis




Posted: Fri Mar 31, 2017 11:47 am

Haha, yeah, I always had a bad feeling about dividers, but for a little while there I thought it was actually working :D

The divider generator actually has a LUT mode which uses a table to obtain the reciprocal of the divisor and then does a multiply. I think the problem with any lookup table method is that I'm working with too many bits, so the table will be large unless I employ a lot of interpolation. Before pursuing a divider any further, I think I need to take a more careful look at how many bits are actually needed. Now that I think about it, it would be a simple matter to keep the larger number of bits just in the accumulator for frequency accuracy, then chop it down before the polyBLEP calculation instead of after. Should have thought of that in the first place.

I also thought about just calculating the reciprocal in software and writing it to a register, though that seems a bit of a kludge. I might try it anyway since it's simple, just to see how it works.

Not sure I'm seeing the problems with oversampling, could you elaborate? I figure the harmonics folding back at 4x will be so small by the time they fold back down below 20 kHz that they're pretty much unnoticeable. For the harmonics between 20 kHz and 80 kHz, which would alias when downsampled, I was just using the normal LPF of the synth, limited to an upper cutoff of 20 kHz, to filter those out. It's not a brick wall, but it seemed to be working OK when I tried it briefly before dunno Am I missing something? (I'm not being precise with the frequencies, just rounding off to easy numbers.)

JovianPyx




PostPosted: Fri Mar 31, 2017 12:59 pm

I am perhaps making an incorrect assumption -

I assumed that you are using 44.1 kHz sample rate for DAC output. That is where the problem will occur.

Yes, you can upsample by 4, so the effective sample rate while generating the waveforms will be 176.4 kHz, and certainly harmonics above 88.2 kHz might be unnoticeable - if your DAC ran at 176.4 kHz.

However, if your DAC sample rate is 44.1 kHz, you will at some point need to downsample from 176.4 to 44.1, and that is where the problem occurs; this is why the brickwall filter needs to sit between those two operations. If you filter at 176.4 kHz SR so that next to nothing over 20 kHz remains, then downsampling to 44.1 kHz won't produce alias artifacts.

If your synth filter (running at 176.4 kHz SR) really does eliminate harmonics above 20 kHz, then you don't need the brick wall, and that is entirely possible - the best way to know is to try it and look at an FFT, or do the sweep test - and make sure to test with the filter wide open.
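The ordering described above - generate at the high rate, filter at the high rate, and only then drop samples - can be sketched as (hypothetical helper names, Python for illustration):

```python
def oversampled_osc(gen_sample, lowpass, n_out, ratio=4):
    """Run oscillator and filter at ratio x the DAC rate, then decimate.

    gen_sample: callable producing one high-rate oscillator sample.
    lowpass:    callable filtering one sample (may hold state).
    The lowpass sits BETWEEN generation and the sample-rate drop.
    """
    out = []
    for _ in range(n_out):
        y = 0.0
        for _ in range(ratio):
            y = lowpass(gen_sample())   # filter runs on every high-rate sample
        out.append(y)                   # keep 1 of every `ratio` samples
    return out
```

Skipping the filter, or running it after the decimation, is what lets the 20 kHz to 88.2 kHz content wrap into the audio band.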

I've no idea what sort of filter your synth filter is, but I'll guess an SVF because it's easy to tune and easy to code. At 176.4 kHz SR, your top Fc setting is 29.4 kHz, so it can go above the 20 kHz limit and that would cause aliasing. You could, however, limit the SVF Fc range so that its top end is 20 kHz, but even then note that an SVF is 2 poles with only -12 dB per octave falloff: with Fc at 20 kHz, a harmonic at 40 kHz is only -12 dB down, and that might easily be heard once it wraps. Even a 4 pole filter, at -24 dB/oct, could allow some crap through at the high end.
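For reference, a Chamberlin-style SVF written to consume q = 1/Q directly looks something like this Python sketch (an assumption about the topology, not the poster's actual code; f = 2*sin(pi*Fc/SR) is the standard tuning coefficient):

```python
import math

class SVF:
    """Chamberlin state-variable filter sketch, lowpass tap.

    Takes the damping term inv_q = 1/Q directly, so no divide is
    needed inside the filter - matching the point made below about
    letting a controlling CPU supply 1/Q.
    """
    def __init__(self, fc, sr, inv_q):
        self.f = 2.0 * math.sin(math.pi * fc / sr)  # tuning coefficient
        self.q = inv_q                              # damping = 1/Q
        self.low = 0.0
        self.band = 0.0

    def process(self, x):
        high = x - self.low - self.q * self.band
        self.band += self.f * high
        self.low += self.f * self.band
        return self.low                             # lowpass output
```

The 29.4 kHz ceiling quoted above comes from the stability limit of this structure, roughly SR/6 for the 2*sin() tuning.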

I'm not at all advocating a brick wall filter; I tried it and got dubious results, and the huge kernel adds a bunch of latency. Yeah, it made the aliasing go away, but it also had weird amplitude variations when the tone was swept, especially at the high end of the spectrum. I'd say your polyBLEP is a better way to do this if you can resolve the divide issue. You suggested doing the divide in a CPU outside the FPGA - why not? I've done that often. Take an SVF which needs to be supplied a Q value - the SVF actually needs 1/Q, not Q. Feeding it Q would require computing 1/Q in the FPGA, so I would use the controlling CPU to compute q = 1/Q and supply q to the FPGA - problem goes away. Nothing at all "wrong" with that IMO.

gdavis




PostPosted: Fri Mar 31, 2017 5:07 pm

OK, I got it, thanks Smile

So ya, my DAC (and all my processing) is running at a 48 kHz sample rate, so 4x puts me at 192 kHz.

The filter is a basic single-pole LPF algorithm I found somewhere; I run the signal through 4 copies of it for a 4-pole filter. That's 24 dB/oct, running at 192 kHz SR but limited to a 20 kHz Fc, with a little extra margin below the downsampled 48 kHz SR. Not great, but better than nothing Smile
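A cascade of that sort might look like the following Python sketch (the exact one-pole recurrence from the post is unknown; this assumes the common y += a*(x - y) form):

```python
import math

def one_pole_coeff(fc, sr):
    # standard smoothing coefficient for y += a*(x - y);
    # an assumption, not necessarily the algorithm referenced above
    return 1.0 - math.exp(-2.0 * math.pi * fc / sr)

class OnePoleCascade:
    """Four identical one-pole lowpasses in series: 24 dB/oct total."""
    def __init__(self, fc, sr, stages=4):
        self.a = one_pole_coeff(fc, sr)
        self.state = [0.0] * stages

    def process(self, x):
        for i in range(len(self.state)):
            self.state[i] += self.a * (x - self.state[i])
            x = self.state[i]
        return x
```

One caveat: cascading four identical one-poles gives 24 dB/oct far above cutoff, but the composite -3 dB point lands noticeably below the per-stage Fc, so a 20 kHz setting rolls off earlier than a single 20 kHz pole would.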

If more AA filtering is needed, maybe a Chebyshev would work well.

If I only needed the divided value I wouldn't be concerned about doing the division in software, but it's actually t/dt, where t is the current phase of the oscillator and dt is the phase increment. So I'd need to provide both dt (to increment the oscillator) and 1/dt (for the polynomial calculation). This creates the potential for errors where dt and 1/dt don't correspond correctly, either from a programming error or from the delay between writing the two registers. Not the end of the world, but one of those things I like to avoid if I can. I'll have a go at it and see how it works out. It might still be a better trade-off than the oversampling hassles.

blue hell
Site Admin


Joined: Apr 03, 2004
Posts: 24079
Location: The Netherlands, Enschede
Audio files: 278
G2 patch files: 320

PostPosted: Fri Mar 31, 2017 5:47 pm

After a quick look at my code ... some random thoughts ... no idea if it's useful or even correct Laughing

In the oscillator you'd have something like Phase = Phase + PhaseDelta .. where PhaseDelta is calculated from the wanted frequency .. in the BLEP you'd need 1 / PhaseDelta .. I don't know if you calculate PhaseDelta or do a table lookup for it .. but if the latter, a second lookup table could work (and I'd assume the latter, as frequency to PhaseDelta needs a power calculation in it, something like 2^((WantedNoteNr - MiddleNoteNr) / NotesPerOctave) * ReferenceNoteFrequency).

To calculate PhaseDelta you need something like PhaseDelta = Frequency / SampleRate .. so for 1 / PhaseDelta you need SampleRate / Frequency .. so with a lookup for 1 / Frequency you could do it as well, I think.
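Those two relations are easy to sanity-check numerically; a quick Python sketch (assuming the usual MIDI convention where note 69 = A = 440 Hz, matching the 2^(...) formula above):

```python
def phase_delta(midi_note, sample_rate):
    """PhaseDelta in cycles per sample: Frequency / SampleRate,
    with Frequency = 2^((note - 69) / 12) * 440 (note 69 = A440)."""
    freq = 2.0 ** ((midi_note - 69) / 12.0) * 440.0
    return freq / sample_rate

def inv_phase_delta(midi_note, sample_rate):
    # 1 / PhaseDelta = SampleRate / Frequency, so a 1/Frequency
    # (or 1/PhaseDelta) lookup table avoids the runtime divide
    return 1.0 / phase_delta(midi_note, sample_rate)
```

Going up an octave exactly doubles PhaseDelta and halves 1/PhaseDelta, which is why a one-octave table plus bit shifts covers the whole range.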

Or alternatively you could pre-compute it on a controlling CPU ... however, when you want to do frequency modulation (or phase modulation) you cannot do it that way .. unless you make the signal flow there bidirectional.

Table lookups would need interpolation, but that could be scaled so it can be calculated with shifts instead of divisions .. I think.
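The shifts-instead-of-divisions point works out when the fraction field has a power-of-two width: linear interpolation then needs one multiply and one shift, no divide. A fixed-point Python sketch (hypothetical helper, not anyone's actual code; Python's >> is an arithmetic shift, matching hardware behaviour for the signed difference):

```python
def lerp_lut(table, phase, frac_bits):
    """Linearly interpolate a LUT with a fixed-point phase whose low
    frac_bits bits are the fraction. Computes
    a + (b - a) * frac / 2^frac_bits, with the divide done as >>."""
    idx = phase >> frac_bits                    # integer table index
    frac = phase & ((1 << frac_bits) - 1)       # fractional part
    a = table[idx % len(table)]
    b = table[(idx + 1) % len(table)]           # wrap for a periodic wave
    return a + (((b - a) * frac) >> frac_bits)
```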

_________________
Jan
also .. could someone please turn down the thermostat a bit.
blue hell

PostPosted: Fri Mar 31, 2017 6:00 pm

Oh oops .. linear interpolation needs no division - I was looking at all-pass interpolation, which does need it.
gdavis




PostPosted: Fri Mar 31, 2017 6:48 pm

Oh hi! Thanks for stopping by welcome Three people officially make it a party, right? party time!

Ya, I guess I am doing a table for the phase increment, though it's in software. I defined an array of 12 values for the first octave of MIDI notes, then just shift the value left one bit (multiply by two) for each octave above that. The result is written to a register which the oscillator reads to get the increment value.

I totally forgot about frequency modulation. Now that you mention it, I was thinking a while back that I'd have to move the above into the FPGA so I could do modulation with FPGA-based sources. I guess having a phase delta table and a 1/phase-delta table wouldn't be too bad - just enough entries for one octave, with a bit shift to get the others.

That leads me to the question of what sort of pitch value I'm going to send to the oscillator, and how I convert that to a LUT address and bit-shift value. I did a div and a mod in software, which is what I'm trying to avoid in the FPGA...

gdavis




PostPosted: Fri Mar 31, 2017 8:27 pm

Actually, I guess my question above becomes simple if my LUT is a power of 2 in size. Then it's just a matter of using the x MSBs for the octave, the next y bits for the LUT address, and the remaining z bits for the interpolation factor.
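That bit-slicing scheme can be sanity-checked in a few lines of Python (the field widths below are made-up illustrations, not the actual register layout):

```python
def split_pitch(pitch, lut_bits, frac_bits):
    """Slice a fixed-point pitch word laid out as
    [ octave : MSBs ][ LUT address : lut_bits ][ fraction : frac_bits ]."""
    frac = pitch & ((1 << frac_bits) - 1)                  # z LSBs
    index = (pitch >> frac_bits) & ((1 << lut_bits) - 1)   # next y bits
    octave = pitch >> (frac_bits + lut_bits)               # x MSBs
    return octave, index, frac
```

The interpolated table value is then shifted left by `octave` to get the final phase increment, so no div or mod is needed anywhere.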
gdavis




PostPosted: Sat Apr 01, 2017 10:57 am

Alright, I calculated the reciprocal in software and the polyBLEP is definitely working much better now Laughing

Now working on implementing the LUTs in the FPGA.

blue hell

PostPosted: Sat Apr 01, 2017 11:41 am

Ah, great :-)
gdavis




PostPosted: Sat Apr 01, 2017 9:11 pm

OK, the lookup table is in the FPGA now. That wasn't as bad as I'd feared, except for Vivado not being able to manage the path to the ROM load file correctly and getting my IP sources out of sync. That was a pain Rolling Eyes But it all seems to be working well now Very Happy

I'm utilizing 25 of the 80 fixed multipliers for one voice with 2 oscillators, a VCF and a VCA. With everything pipelined, I should be able to get a good amount of polyphony without having to duplicate hardware.

I guess that brings my adventure in anti-aliasing oscillators to a close for now. Thanks to both of you for all your input; it was very helpful. beer

blue hell

PostPosted: Sun Apr 02, 2017 2:20 am

Cool beer
Powered by phpBB © 2001, 2005 phpBB Group
Copyright © 2003 through 2009 by electro-music.com - Conditions Of Use