# Thread: An efficient storage scheme for Fourier data

1. ## An efficient storage scheme for Fourier data

The Fourier transform (FT), of course, maps a real signal to the complex plane. Unfortunately, this also means that the result is twice as large as the original signal. As it turns out, though, fully half of that information is completely redundant! Let's see how this is possible. Suppose we are working with a signal containing 8 samples. After performing the FT, a pattern emerges:

Reals:
[W][A][b][C][X][C][b][A]
Imaginaries:
[Y][D][E][F][Z][-F][-E][-D]

Notice how all of the coefficients straddling X are "reflections" of each other, and those straddling Z are "negative reflections". Also note that Y and Z are *always* zero (or extremely close to it). In other words, all we really need to reconstruct the two planes is:

Composite:
[W][A][b][C][X][-F][-E][-D]

Translated to C:

Code:
```c
int is_power_of_two( int n )
{
    /* a power of two has exactly one bit set */
    int bits = 0;
    while( n )
    {
        if( n & 1 )
            ++bits;
        n >>= 1;
    }
    return bits == 1;
}

void compress_fft_result( const double reals[ ], const double imags[ ], double composite[ ], int size )
{
    if( !is_power_of_two( size ) )
        return;
    for( int idx = 0, mid = size >> 1; idx < size; ++idx )
    {
        if( idx <= mid )
            composite[ idx ] = reals[ idx ];   /* first half: the real parts */
        else
            composite[ idx ] = imags[ idx ];   /* second half: the imaginary parts */
    }
}

void decompress_fft_result( const double composite[ ], double reals[ ], double imags[ ], int size )
{
    if( !is_power_of_two( size ) )
        return;
    int mid = size >> 1;
    for( int idx = 0; idx < size; ++idx )
    {
        if( idx <= mid )
        {
            reals[ idx ] = composite[ idx ];
            /* mirror the stored imaginaries: imag[idx] = -imag[size - idx];
               idx == 0 and idx == mid are always zero anyway, and skipping
               them also avoids reading composite[size], one past the buffer */
            imags[ idx ] = ( idx == 0 || idx == mid ) ? 0 : -composite[ size - idx ];
        }
        else
        {
            reals[ idx ] = composite[ size - idx ];   /* reflect the reals */
            imags[ idx ] = composite[ idx ];
        }
    }
}
```
The only constraint, of course, is that the sample size must be a power of two.

Cheers!

2. Fast Fourier transform - Wikipedia, the free encyclopedia
Talks about reduced time and memory.

3. Originally Posted by Salem Okay, so it's a pretty well-known property then. Got it! Captain Obvious strikes again...

4. ## Originally Posted by Sebastiani The Fourier transform (FT), of course, maps a real signal to the complex plane. Unfortunately, this also means that the result is twice as large as the original signal. As it turns out, though, fully half of that information is completely redundant!
Hi, I'm struggling to recall my electrical engineering theory a LONG time ago, but since
you used the phrase "a real signal" I assume your discussion is not purely mathematical
but in the real-time signal domain.

Anyway, the Fourier transformation, if I recall correctly, is just a special case of the Laplace transformation (the Laplace variable s restricted to the imaginary axis); the Fourier integral runs from minus to plus infinity, whereas the one-sided Laplace transformation is evaluated from zero to plus infinity. Thereby the problem, as noted
earlier:

Unfortunately, this also means that the result is twice as large as the original signal.
is automatically taken care of. My buddies in college used to call me "Dr Laplace" since
I seemed to excel in this area and many of them asked me for help in their homework.

Anyway, if you're looking at signal analysis, I would definitely consider using Laplace transformations. A good link here: 2. Definition of the Laplace Transform

I'm starting to recall more now... the reason why I especially enjoyed using Laplace was in my calculations in the time domain. Doing these by hand was quite laborious when, for example, convolutions or differential equations were involved.

However, once the time-domain expressions were converted to the frequency domain using the Laplace transform, the calculation was greatly reduced: a convolution in the time domain is now simply a multiplication in the frequency domain. And I think differentiation in the time domain reduces to a simple multiplication by s in the frequency domain.

Hence a lot of calculations were simplified once they were in the frequency domain, and then the final calculation was converted back into the time domain.

The bottom line is, the Laplace transformations came about to solve "real world" problems.

5. The Fourier transform (FT), of course, maps a real signal to the complex plane.
This isn't 100% correct. Technically, the Fourier Transform can have both complex inputs and complex outputs. In fact, its outputs could be 100% real with nothing on the complex side...though there's almost always at least something on the complex side.

6. This is very well known in DSP. Almost all real-to-complex transforms return a half-plane instead of a full-plane of values.

If the input signal has even symmetry, the result can be further reduced by half because it is completely real. For odd symmetry it is completely imaginary.

7. Originally Posted by Char*Pntr Anyway, if you're looking at signal analysis, I would definitely consider using Laplace transformations. [...] The bottom line is, the Laplace transformations came about to solve "real world" problems.
Interesting! Yep, I'm reading up on it right now...so far, it seems to be a superior method altogether. Thanks so much for the insight.

Originally Posted by DavidP This isn't 100% correct. Technically, the Fourier Transform can have both complex inputs and complex outputs. In fact, its outputs could be 100% real with nothing on the complex side...though there's almost always at least something on the complex side.
So, what sort of signal might generate both real and complex inputs? Or just complex inputs?

Originally Posted by brewbuck This is very well known in DSP. Almost all real-to-complex transforms return a half-plane instead of a full-plane of values.

If the input signal has even symmetry, the result can be further reduced by half because it is completely real. For odd symmetry it is completely imaginary.
Ah, I see. Yes, I suppose I have a tendency to do my research "in vacuo". I really need to work on that!

8. Originally Posted by Sebastiani So, what sort of signal might generate both real and complex inputs? Or just complex inputs?
Given that the Fourier transform and its inverse are different only by a sign in the exponent, and that the choice of which direction is "forward" and which is "inverse" is basically arbitrary, you can think of the spectrum itself as being a realistic example of a complex-valued signal.

Another example is the quantum wavefunction of a particle.

9. Given that the Fourier transform and its inverse are different only by a sign in the exponent, and that the choice of which direction is "forward" and which is "inverse" is basically arbitrary, you can think of the spectrum itself as being a realistic example of a complex-valued signal.
Yes, but in the forward transform there is the normalization step. Without that, the two wouldn't be orthogonal to each other. I see what you're getting at, though; it's essentially a reversible transform, and complex terms can appear in either (or both) of the domains.

Nvm - you're absolutely right; the direction of normalization is completely arbitrary (and can even be unilateral, using a scaling term of 1/sqrt(N)).

Another example is the quantum wavefunction of a particle.
What about the time-resolution limitations of the FT? Wouldn't a short-time FT (or maybe even a multiresolution analysis approach) be more appropriate?

10. Originally Posted by Sebastiani What about the time-resolution limitations of the FT?
Congratulations, you just discovered the mathematical basis of Heisenberg's uncertainty principle!

The momentum and spatial wavefunctions are the Fourier duals of each other. Thus, the transform of a localized pulse in space becomes a wide pulse in momentum space (know where the particle is, don't know as much about its speed), and vice versa.

11. Congratulations, you just discovered the mathematical basis of Heisenberg's uncertainty principle!
Indeed! But even the most advanced technique is hindered (to some degree, at least) by HUP; the FT is just that much more inaccurate because it can only work in terms of a "continuous signal" interpretation. The lower bound of HUP with respect to a time-domain signal is the Nyquist limit, and any other given domain is going to have a respective counterpart, naturally.

The momentum and spatial wavefunctions are the Fourier duals of each other. Thus, the transform of a localized pulse in space becomes a wide pulse in momentum space (know where the particle is, don't know as much about its speed), and vice versa.
So...instead of decimations in frequency/time, we have velocity/momentum - is that correct? Or wait. Maybe it's velocity/field-tensor?

Anyway, I do see the important point there - i.e., that Fourier analysis can be applied to a *wide* variety of "dual domains". I've never really considered that before, actually.

12. Originally Posted by Sebastiani Indeed! But even the most advanced technique is hindered (to some degree, at least) by HUP;
Yeah, the uncertainty principle is more general than just space/momentum and Fourier pairs. It's satisfying that in that case, though, it falls out naturally from the math. There are many observable pairs that adhere to the principle that aren't so easily explained, for instance energy/time -- the more accurately the energy of an event is known, the less accurately you know WHEN it occurred, and vice versa.

13. The only constraint, of course, is that the sample size must be a power of two.
That's not so bad. I believe I've seen code at work where, if it wasn't a power of two, it was worthwhile to either pad or cut the signal (depending on what the original signal came from).

Somebody mentioned the orthogonality of the sine and cosine functions. I'm a bit out of the loop on this stuff, so somebody can correct me if I'm wrong, but must the basis functions be orthogonal to be able to describe any function in a given domain? I don't know if I'm wording that well.

Again, perhaps somebody can chime in on this with something more accurate/eloquent, but something I recently came across (also related to work) was that for certain* complex solutions to potential functions (fluids), the real component of the solution at a given point gives you the potential and the imaginary component gives you the strength of the streamline; they're always perpendicular and represent a function/derivative pair, which seemed a (sort of) similar analogy to the space/wave analogy above.

*I say certain because of the limited number of cases where analytical solutions can be found for the potential equations

Anyway, the Fourier transformation, if I recall correctly, is just a special case of the Laplace transformation
That always confused me. Never got a satisfactory explanation from a professor, but I liked your explanation because I came to the same conclusions.

14. Quote: The only constraint, of course, is that the sample size must be a power of two.
That's not so bad. I believe I've seen code at work where, if it wasn't a power of two, it was worthwhile to either pad or cut the signal (depending on what the original signal came from).
I wouldn't recommend it. At best, the result is a slightly skewed (and sometimes unreliable) interpretation. At the very worst, the artifacts thus introduced can be so severe that the result is utterly invalid. Anyway, most of the time the sampling rate (SR) is readily "tunable", so this is not really a huge issue in practice. However, if for some reason you simply can't modify the SR, then consider breaking the sample block into smaller blocks, each having a length that is a power of two, and then later (after processing each buffer individually) interpolating the results back so that they line up with the original SR, if that makes sense.

I'm a bit out of the loop on this stuff so somebody can correct me if I'm wrong, but the basis functions must be orthogonal to be able to describe any function in a given domain?
Generally desirable, yes, but not always required. I'll have to let a real mathematician explain why that is, though.

Again, perhaps somebody can chime in on this with something more accurate/eloquent, but something I recently came across (also related to work) was that for certain* complex solutions to potential functions (fluids), the real component of the solution at a given point gives you the potential and the imaginary component gives you the strength of the streamline; they're always perpendicular and represent a function/derivative pair, which seemed a (sort of) similar analogy to the space/wave analogy above.

*I say certain because of the limited number of cases where analytical solutions can be found for the potential equations