As long as the phase shifts are linear there's no problem.

But what about the start/end of the wave? I suppose that will no longer nicely fit together in a loop unless you sort of crossfade that together or something?

Personally I'd just do it continuously on the whole stream.

That said, I'll definitely check out what's done in OpenMSX in that area; it seems some very interesting stuff is done there.

*erikd* wrote:

... I suppose that will no longer nicely fit together in a loop ...

You almost answered your own question :-)

The SCC waveforms are played in a loop, so you also have to treat them like a "loop". Or rephrased, like a "circular buffer". FFT does this automatically (that's how it's defined mathematically). For sinc-interpolation, when part of the sinc function goes past the end of the buffer, you'll have to wrap it around so that it re-enters at the other side of the buffer.
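To make the wrap-around concrete, here is a minimal Python sketch of sinc interpolation on a looped buffer: any tap that runs past either end of the buffer simply re-enters at the other side via a modulo index. (Purely illustrative; the taps are truncated but not windowed.)

```python
import math

def wrapped_sinc_interpolate(buf, t, taps=16):
    """Evaluate a looped sample buffer at fractional position t.
    Sinc taps that run past either end of the buffer wrap around
    to the other side (circular buffer)."""
    n = len(buf)
    center = math.floor(t)
    total = 0.0
    for k in range(center - taps, center + taps + 1):
        x = t - k
        # normalized sinc: sinc(0) == 1, zero at all other integers
        s = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
        total += buf[k % n] * s  # k % n wraps past either end
    return total
```

At integer positions this returns the original sample exactly (all other sinc taps are zero there); at fractional positions it approximates the underlying band-limited signal.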

Turns out that the FFT-zeropad-IFFT interpolation I suggested is discrete sinc interpolation :). (See section 8.4.1 here.) It seems interpolation with that approach is limited to integer multiples though (and powers of two for a fast radix-2 FFT). There are also other, more flexible ways to implement discrete sinc interpolation (I haven’t gotten that far into the article yet), which I think you’re referring to.

That Wikipedia link earlier talks about continuous sinc interpolation btw, not directly applicable to samples.

For now I’m assuming that the FFT-zero padding approach doesn’t introduce any phase shift.

You almost answered your own question :-)

But then you'll agree with me that the phase shift of a filter is actually an issue (you said it wasn't?), and that it'll be better (and simpler) to filter the output streams continuously; changing waveforms will sound better that way, I suspect.

But anyway, just using an LPF isn't the most 'correct' solution, and that discrete sinc interpolation looks interesting.

*erikd* wrote:

... I suppose that will no longer nicely fit together in a loop ...

Unless I’m missing something, phase shift just means the wave components are offset (or maybe just the entire wave?), but they sound the same. Because the waveform can only contain harmonic frequencies, they will just wrap around when shifted. Also, the FFT preserves phase (in the complex coefficients).
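A quick way to convince yourself: circularly shifting a looped waveform changes only the phases of its DFT bins, never the magnitudes, so the harmonic content (what the ear mostly hears) stays identical. A small pure-Python check:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * i * k / n) for k in range(n))
            for i in range(n)]

# a looped waveform: fundamental plus a quieter 3rd harmonic
wave = [math.sin(2 * math.pi * k / 32) + 0.5 * math.sin(2 * math.pi * 3 * k / 32)
        for k in range(32)]
# circularly shift it by 5 samples; components wrap around the loop
shifted = wave[5:] + wave[:5]

mags_wave = [abs(c) for c in dft(wave)]
mags_shifted = [abs(c) for c in dft(shifted)]
# the magnitude spectra match bin for bin; only the phases differ
```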

Also, can’t you cancel out phase shift using an all-pass filter?

*erikd* wrote:

But then you'll agree with me that a phase shift of a filter is actually an issue (you said it wasn't?)

Can you describe the issue? Are you worried about phase cancellation?

*erikd* wrote:

...and it'll be better (and simpler) to filter the output streams continuously; changing wave forms will sound better that way I suspect.

By the way, when filtering a stream, keep in mind that the filter needs to follow the frequency of the tone to properly smooth out the stepping in the waveform.
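For illustration, a sketch of such a tracking filter: a one-pole low-pass whose coefficient is recomputed whenever the tone frequency changes. Placing the cutoff at 32× the tone frequency (the stepping rate of a 32-sample wavetable) is an assumption made for this sketch, not a measured value.

```python
import math

class TrackingLowpass:
    """One-pole low-pass whose cutoff is re-tuned to follow the tone.
    For an SCC voice with a 32-sample wavetable, the stepping in the
    waveform sits around 32x the tone frequency, so the cutoff is
    placed there (assumption for this sketch)."""

    def __init__(self, sample_rate):
        self.sample_rate = sample_rate
        self.state = 0.0
        self.alpha = 1.0

    def set_tone(self, tone_hz):
        # keep the cutoff below Nyquist
        cutoff = min(tone_hz * 32, self.sample_rate * 0.45)
        # standard one-pole coefficient derived from the cutoff
        self.alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / self.sample_rate)

    def process(self, sample):
        self.state += self.alpha * (sample - self.state)
        return self.state
```

Whenever the emulated channel's period register changes, `set_tone` would be called again so the smoothing follows the pitch.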

*Grauw* wrote:

Turns out that the FFT-zeropad-IFFT interpolation I suggested is discrete sinc interpolation.

Both approaches give (theoretically) the same result, so it's not surprising there is a mathematical connection.

*Grauw* wrote:

That wikipedia link earlier talks about continuous sinc interpolation btw, not directly applicable to samples.

Actually also here there's a connection. I personally find the following reasoning useful/insightful:

- We start from a sampled signal (e.g. the SCC circular buffer).

- According to the Nyquist–Shannon sampling theorem this discrete sampled signal corresponds to a unique band-limited continuous signal (and vice versa).

- When fitting sinc-functions through all the sampled points we can (theoretically) exactly (re)construct that continuous signal. Note that the sinc-functions take on the value '1' at exactly one of the sample points and '0' on all other points. So fitting sinc-functions is relatively easy.

- Now that we have a mathematical description for the continuous signal, we can evaluate (=sample) it at arbitrary points. If we choose to evaluate at regular intervals we have effectively resampled the original signal.

So we started from a discrete signal, then via a (theoretical) continuous signal we ended up with another discrete signal at a different sample rate. Although this is a nice theoretical description of a resampling algorithm, it's not possible to implement in practice, because the sinc-functions only fall very slowly to zero when going to +/- infinity. So in practice the sinc-functions are 'windowed'. This windowing corresponds to applying some filter to the signal (and there are various trade-offs you can make while designing the filter/window).
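The windowed-sinc resampler described above might be sketched like this (the Hann window is an arbitrary choice for the sketch; a practical implementation would also stretch the kernel to band-limit the input when downsampling, and would precompute the kernel):

```python
import math

def resample(signal, ratio, taps=16):
    """Resample `signal` by `ratio` (output rate / input rate) using a
    Hann-windowed sinc kernel. Suitable for upsampling; downsampling
    additionally needs the kernel widened to band-limit the input
    (omitted for brevity)."""
    out_len = int(len(signal) * ratio)
    out = []
    for m in range(out_len):
        t = m / ratio                  # position in input samples
        center = int(math.floor(t))
        acc = 0.0
        for k in range(center - taps, center + taps + 1):
            if not 0 <= k < len(signal):
                continue               # no wrap: this is a plain stream
            x = t - k
            sinc = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
            window = 0.5 + 0.5 * math.cos(math.pi * x / (taps + 1))  # Hann
            acc += signal[k] * sinc * window
        out.append(acc)
    return out
```

Note how the window tapers the slowly decaying sinc to zero within ±(taps+1) samples, which is exactly the filter/window trade-off mentioned above.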

*erikd* wrote:

But then you'll agree with me that a phase shift of a filter is actually an issue ...

I'm not sure how you come to that conclusion. Let me give some more background info, maybe that will clear things up.

- The ideal is to have no phase-shift in the system (system here means resampling, filtering, or any other operation you can think of). Both the FFT and the sinc-interpolation can achieve this given that you know _all_ past and future samples (this is the case for the SCC buffer).

- Often you don't know all the future samples, for example when resampling an arbitrary audio stream. So what practical algorithms do is introduce a small delay between the input and the output (so that effectively they can look in the future for some amount of samples). If we split the input signal in harmonics, we'd like that each of these harmonics is delayed by the same amount of time in the output. Shifting a harmonic signal in time is the same as changing its phase. Though phase shifts are expressed in radians, not in time. So to delay all components with the same amount of time, we need the phase shifts to vary linearly with the frequency of the harmonic. Thus linear phase shift means the signal has some delay, but that delay (in time) is the same for all harmonic components.

- As said before the FFT and (theoretical) sinc-interpolators can achieve zero-phase shift. Practical resamplers and FIR filters need to introduce a (short) delay so they can only achieve linear phase shift (when they are designed with this goal in mind). But with e.g. an IIR filter it's hard to achieve a linear phase shift, meaning that not all harmonic components will be delayed by the same amount of time.

- As a first approximation the human ear is phase-deaf. A sine wave with a particular frequency and phase will sound exactly the same as another sine wave with the same frequency but different phase. So as a first approximation you can ignore phase shifts in audio processing. Though phase cannot be completely ignored when precise localization in time is important (e.g. in stereo sounds when the difference in arrival time in the left/right ear is important).
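To illustrate the linear-phase point with a small check (using a normalized Hann window as example taps): a symmetric FIR filter delays every harmonic by exactly (N−1)/2 samples, so dividing that pure delay out of its frequency response leaves zero phase at every tested frequency.

```python
import cmath
import math

# a symmetric (hence linear-phase) FIR: normalized Hann window as taps
taps = [0.5 - 0.5 * math.cos(2 * math.pi * k / 8) for k in range(9)]
norm = sum(taps)
taps = [t / norm for t in taps]

def freq_response(h, f):
    """Complex response of FIR h at normalized frequency f (cycles/sample)."""
    return sum(c * cmath.exp(-2j * math.pi * f * k) for k, c in enumerate(h))

delay = (len(taps) - 1) / 2  # expected group delay: 4 samples for all harmonics
for f in (0.01, 0.03, 0.05, 0.08):
    h = freq_response(taps, f)
    # remove a pure 4-sample delay; the residual should have zero phase
    residual = h * cmath.exp(2j * math.pi * f * delay)
    assert abs(cmath.phase(residual)) < 1e-9
```

An IIR filter has no such symmetry, which is why its delay varies with frequency.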

Can you describe the issue? Are you worried about phase cancellation?

No, no, the only thing that I'm worried about is that the start and end of the waveform will not match up if the issue is ignored, leading to a distorted waveform.

Even if the phase shift is completely linear, you'll have an issue when the waveform transitions from one to the next: the start of the next waveform will have delayed information from the end of its own waveform instead of the end of the previous waveform.

So my concern is: The signal will not be nicely continuous if the signal is not continuously filtered (possibly leading to ticks and crackles when the waveform changes), so doing it as a one-off on the internal wave form in the SCC instead of the output streams might have side effects.

I'm sure there are workarounds, and maybe the issue is overstated, but anyway this was my reasoning fwiw.

But again, this is assuming just using a traditional IIR LPF, which was mentioned at the time, and which was why I brought up the phase shifts in the first place.

If a phase-shift free algorithm can be used, then this is a non-issue.

By the way, when filtering a stream, keep in mind that the filter needs to follow the frequency of the tone to properly smooth out the stepping in the waveform.

Well, yes, it'll have to be set to 5 octaves above the main freq. in case of SCC I suppose. That's not an issue, is it?

Sorry, but I like emulation that enhances the original: I set BlueMSX to a TurboR with a 21 MHz CPU and NUTS plays so well.

I think it would be good if we patched some games that are choppy on MSX2 (Xak 2, Xak 3, Valis 2 and some more) to make the graphics move every 2 or 4 pixels; then, on the emulator, we can play so much better thanks to the CPU power (do not forget that BlueMSX has a faster VDP thanks to its not-exact emulation).

What do you think?

Hi all,

I recorded the Valis OST using 3 sound chips, with MSX plugin for Winamp:

I recorded first track using YM2413 (center)

I recorded second track using VRC7 (50% Left)

I recorded third track using YMF281B (50% Right)

Mixed all in one wonderful track, take it from here: Valis OST 3chips stereo, recorded by Toni Galvez

Maybe we can implement this in OpenMSX or any other emulator; please listen to my recording. With the same FMPac music we have from the games, we can play it in this configuration and we will obtain a wonderful stereo effect.

Hi Toni, I just listened to your mp3, it sounds nice!

I did however notice a bit of a comb-filter effect, perhaps because those 3 chips output at a slightly different phase? Maybe it pays off to tweak the phase of the 3 streams from the FM chips to get rid of the comb-filter effect so that it'll sound even fuller.

It's certainly an interesting approach, but personally I'm more interested to see what can be done if we forget about 'real' hardware of the era.

Patching games can be part of that. For example a nice gentleman here patched some Konami MSX1 games to help me making the Nemesis games scroll smoothly in my emulator.