Thread: An efficient storage scheme for Fourier data

  1. #1
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708

    An efficient storage scheme for Fourier data

    The Fourier transform (FT), of course, maps a real signal to the complex plane. Unfortunately, this also means that the result is twice as large as the original signal. As it turns out, though, fully half of that information is completely redundant! Let's see how this is possible. Suppose we are working with a signal containing 8 samples. After performing the FT, we see a pattern emerge:

    Reals:
    [W][A][B][C][X][C][B][A]
    Imaginaries:
    [Y][D][E][F][Z][-F][-E][-D]

    Notice how all of the coefficients straddling X are "reflections" of each other, and those straddling Z are "negative reflections". Also note that Y and Z are *always* zero (or extremely close to it). In other words, all we really need to reconstruct the two planes is:

    Composite:
    [W][A][B][C][X][-F][-E][-D]

    Translated to C:

    Code:
    int is_power_of_two( int n )
    {
        /* a power of two has exactly one bit set; checking n > 0 also
           rejects zero and negative values */
        return n > 0 && ( n & ( n - 1 ) ) == 0;
    }
    
    void compress_fft_result( const double reals[ ], const double imags[ ], double composite[ ], int size )
    {
        if( !is_power_of_two( size ) )
            return;
        for( int idx = 0, mid = size >> 1; idx < size; ++idx )
        {
            if( idx <= mid )
                composite[ idx ] = reals[ idx ];
            else
                composite[ idx ] = imags[ idx ];
        }
    }
    
    void decompress_fft_result( const double composite[ ], double reals[ ], double imags[ ], int size )
    {
        if( !is_power_of_two( size ) )
            return;
        int 
            mid = size >> 1;
        reals[ 0 ] = composite[ 0 ];
        reals[ mid ] = composite[ mid ];
        imags[ 0 ] = imags[ mid ] = 0; /* Y and Z are always zero */
        /* the loop starts at 1, since idx == 0 would read composite[ size ],
           one past the end of the array */
        for( int idx = 1; idx < size; ++idx )
        {
            if( idx == mid )
                continue;
            if( idx < mid )
            {
                reals[ idx ] = composite[ idx ];
                imags[ idx ] = -composite[ mid + ( mid - idx ) ];
            }
            else
            {
                reals[ idx ] = composite[ mid - ( idx - mid ) ];
                imags[ idx ] = composite[ idx ];
            }
        }
    }
    The only constraint, of course, is that the sample size must be a power of two.

    Cheers!
    Code:
    #include <cmath>
    #include <complex>
    bool euler_flip(bool value)
    {
        return std::pow
        (
            std::complex<float>(std::exp(1.0)), 
            std::complex<float>(0, 1) 
            * std::complex<float>(std::atan(1.0)
            *(1 << (value + 2)))
        ).real() < 0;
    }

  2. #2
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,659
    Fast Fourier transform - Wikipedia, the free encyclopedia
    Talks about reduced time and memory.
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  3. #3
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
    Quote Originally Posted by Salem View Post
    Okay, so it's a pretty well-known property then. Got it! Captain Obvious strikes again...

  4. #4
    Registered User Char*Pntr's Avatar
    Join Date
    Sep 2007
    Location
    Lathrop, CA
    Posts
    198


    Quote Originally Posted by Sebastiani View Post
    The fourier transform (FT), of course, maps a real signal to the complex plane. Unfortunately, this also means that the result is twice as large as the original signal. As it turns out, though, fully half of that information is completely redundant!
    Hi, I'm struggling to recall my electrical engineering theory from a LONG time ago, but since
    you used the phrase "a real signal" I assume your discussion is not purely mathematical
    but in the real-time signal domain.

    Anyway, the Laplace transformation, if I recall correctly, is closely related to the
    Fourier transformation: the Fourier integral runs from minus to plus infinity, whereas
    the Laplace transformation is evaluated from zero to plus infinity. Thereby the problem,
    as noted earlier:

    Unfortunately, this also means that the result is twice as large as the original signal.
    is automatically taken care of. My buddies in college used to call me "Dr Laplace" since
    I seemed to excel in this area and many of them asked me for help in their homework.

    Anyway, if you're looking at signal analysis, I would definitely consider using Laplace transformations. A good link here: 2. Definition of the Laplace Transform

    I'm starting to recall more now... the reason why I especially enjoyed using Laplace was in my calculations in the time domain. Doing this by hand was quite laborious when, for example, two complex numbers are multiplied or divided together in the time domain.

    However, once the complex numbers (time domain) were converted to the frequency domain using the Laplace, the calculation was greatly reduced: a complex multiplication in the time domain is now simply an addition in the frequency domain. And I think a complex division in the time domain reduces to a simple subtraction in the frequency domain.

    Hence a lot of calculations were simplified once they were in the frequency domain, and then the final calculation was converted back into the time domain.

    The bottom line is, the Laplace transformations came about to solve "real world" problems.

  5. #5
    l'Anziano DavidP's Avatar
    Join Date
    Aug 2001
    Location
    Plano, Texas, United States
    Posts
    2,743
    The fourier transform (FT), of course, maps a real signal to the complex plane.
    This isn't 100% correct. Technically, the Fourier transform can have both complex inputs and complex outputs. In fact, its outputs could be 100% real with nothing on the complex side...though there's almost always at least something on the complex side.
    My Website

    "Circular logic is good because it is."

  6. #6
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    This is very well known in DSP. Almost all real-to-complex transforms return a half-plane instead of a full-plane of values.

    If the input signal has even symmetry, the result can be further reduced by half because it is completely real. For odd symmetry it is completely imaginary.
    Code:
    //try
    //{
    	if (a) do { f( b); } while(1);
    	else   do { f(!b); } while(1);
    //}

  7. #7
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
    Quote Originally Posted by Char*Pntr View Post
    Hi, I'm struggling to recall my electrical engineering theory from a LONG time ago, but since
    you used the phrase "a real signal" I assume your discussion is not purely mathematical
    but in the real-time signal domain.

    Anyway, the Laplace transformation, if I recall correctly, is closely related to the
    Fourier transformation: the Fourier integral runs from minus to plus infinity, whereas
    the Laplace transformation is evaluated from zero to plus infinity. Thereby the problem,
    as noted earlier:

    is automatically taken care of. My buddies in college used to call me "Dr Laplace" since
    I seemed to excel in this area and many of them asked me for help in their homework.

    Anyway, if you're looking at signal analysis, I would definitely consider using Laplacian transformations. A good link here: 2. Definition of the Laplace Transform

    I'm starting to recall more now... the reason why I especially enjoyed using Laplace was in my calculations in the time domain. Doing this by hand was quite laborious when, for example, two complex numbers are multiplied or divided together in the time domain.

    However, once the complex numbers (time domain) were converted to the frequency domain using the Laplace, the calculation was greatly reduced: a complex multiplication in the time domain is now simply an addition in the frequency domain. And I think a complex division in the time domain reduces to a simple subtraction in the frequency domain.

    Hence a lot of calculations were simplified once they were in the frequency domain, and then the final calculation was converted back into the time domain.

    The bottom line is, the Laplace transformations came about to solve "real world" problems.
    Interesting! Yep, I'm reading up on it right now...so far, it seems to be a superior method altogether. Thanks so much for the insight.

    Quote Originally Posted by DavidP View Post
    This isn't 100% correct. Technically, the Fourier Transform can have both complex inputs and complex outputs. In fact, it's outputs could be 100% real with nothing on the complex side...though there's almost always at least something on the complex side.
    So, what sort of signal might generate both real and complex inputs? Or just complex inputs?

    Quote Originally Posted by brewbuck View Post
    This is very well known in DSP. Almost all real-to-complex transforms return a half-plane instead of a full-plane of values.

    If the input signal has even symmetry, the result can be further reduced by half because it is completely real. For odd symmetry it is completely imaginary.
    Ah, I see. Yes, I suppose I have a tendency to do my research "in vacuo". I really need to work on that!

  8. #8
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by Sebastiani View Post
    So, what sort of signal might generate both real and complex inputs? Or just complex inputs?
    Given that the Fourier transform and its inverse are different only by a sign in the exponent, and that the choice of which direction is "forward" and which is "inverse" is basically arbitrary, you can think of the spectrum itself as being a realistic example of a complex-valued signal.

    Another example is the quantum wavefunction of a particle.

  9. #9
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
    Given that the Fourier transform and its inverse are different only by a sign in the exponent, and that the choice of which direction is "forward" and which is "inverse" is basically arbitrary, you can think of the spectrum itself as being a realistic example of a complex-valued signal.
    Yes, but in the forward transform there is the normalization step. Without that, the two wouldn't be orthogonal to each other. I see what you're getting at, tho; it's essentially a reversible transform, and complex terms can appear in either (or both) of the domains.

    [edit]
    Nvm - you're absolutely right; the direction of normalization is completely arbitrary (and can even be unilateral, using a scaling term of 1/sqrt(N)).
    [/edit]

    Another example is the quantum wavefunction of a particle.
    What about the time-resolution limitations of the FT? Wouldn't a short-time FT (or maybe even a multiresolution analysis approach) be more appropriate?
    Last edited by Sebastiani; 11-03-2010 at 11:39 AM. Reason: update

  10. #10
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by Sebastiani View Post
    What about the time-resolution limitations of the FT?
    Congratulations, you just discovered the mathematical basis of Heisenberg's uncertainty principle!

    The momentum and spatial wavefunctions are the Fourier duals of each other. Thus, the transform of a localized pulse in space becomes a wide pulse in momentum space (know where the particle is, don't know as much about its speed), and vice versa.

  11. #11
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
    Congratulations, you just discovered the mathematical basis of Heisenberg's uncertainty principle!
    Indeed! But even the most advanced technique is hindered (to some degree, at least) by HUP; the FT is just that much more inaccurate because it can only work in terms of a "continuous signal" interpretation. The lower bound of HUP with respect to a time-domain signal is the Nyquist limit, and any other given domain is going to have a respective counterpart, naturally.

    The momentum and spatial wavefunctions are the Fourier duals of each other. Thus, the transform of a localized pulse in space becomes a wide pulse in momentum space (know where the particle is, don't know as much about its speed), and vice versa.
    So...instead of decimations in frequency/time, we have velocity/momentum - is that correct? Or wait. Maybe it's velocity/field-tensor?

    Anyway, I do see the important point there - ie: that Fourier analysis can be applied to a *wide* variety of "dual domains". I've never really considered that before, actually.
    Last edited by Sebastiani; 11-03-2010 at 01:19 PM. Reason: wording

  12. #12
    Officially An Architect brewbuck's Avatar
    Join Date
    Mar 2007
    Location
    Portland, OR
    Posts
    7,396
    Quote Originally Posted by Sebastiani View Post
    Indeed! But even the most advanced technique is hindered (to some degree, at least) by HUP;
    Yeah, the uncertainty principle is more general than just space/momentum and Fourier pairs. It's satisfying that in that case, though, it falls out naturally from the math. There are many observable pairs that adhere to the principle that aren't so easily explained, for instance energy/time -- the more accurately the energy of an event is known, the less accurately you know WHEN it occurred, and vice versa.

  13. #13

    Join Date
    May 2005
    Posts
    1,042
    The only constraint, of course, is that the sample size must be a power of two.
    That's not so bad. I believe I've seen code at work where, if it wasn't a power of two, it was worthwhile to either pad or cut the signal (depending on what the original signal was from).


    Somebody mentioned the orthogonality of the sine and cosine functions. I'm a bit out of the loop on this stuff, so somebody can correct me if I'm wrong, but don't the basis functions have to be orthogonal to be able to describe any function in a given domain? I don't know if I'm wording that well.


    Again, perhaps somebody can chime in on this with something more accurate/eloquent, but something I recently came across (also related to work) was that for certain* complex solutions to potential functions (fluids), the real component of the solution at a given point gives you the potential and the imaginary component gives you the strength of the streamline; they're always perpendicular and represent a function/derivative pair, which seemed (sort of) similar to the space/wave analogy above.

    *I say certain because of the limited number of cases where analytical solutions can be found for the potential equations

    Anyway, the Laplace transformation, if I recall correctly, is closely related to the
    Fourier transformation
    That always confused me. Never got a satisfactory explanation from a professor, but I liked your explanation because I came to the same conclusions.
    I'm not immature, I'm refined in the opposite direction.

  14. #14
    Guest Sebastiani's Avatar
    Join Date
    Aug 2001
    Location
    Waterloo, Texas
    Posts
    5,708
    Quote: The only constraint, of course, is that the sample size must be a power of two.
    That's not so bad. I believe I've seen code at work where, if it wasn't a power of two, it was worthwhile to either pad or cut the signal (depending on what the original signal was from).
    I wouldn't recommend it. At best, the result is a slightly skewed (and sometimes unreliable) interpretation. At the very worst, the artifacts thus introduced can be so severe that the result is utterly invalid. Anyway, most of the time the sampling rate (SR) is readily "tunable", so this is not really a huge issue in practice. However, if for some reason you simply can't modify the SR, then consider breaking the sample block into smaller blocks, each having a length that is a power of two, and then later (after processing each buffer individually) interpolating the results back such that they are consistent with the original SR, if that makes sense.

    I'm a bit out of the loop on this stuff so somebody can correct me if I'm wrong, but the basis functions must be orthogonal to be able to describe any function in a given domain?
    Generally desirable, yes, but not always required. I'll have to let a real mathematician explain why that is, tho.

    Again, perhaps somebody can chime in on this with something more accurate/eloquent, but something I recently came across (also related to work) was that for certain* complex solutions to potential functions (fluids), the real component of the solution at a given point gives you the potential and the imaginary component gives you the strength of the streamline; they're always perpendicular and represent a function/derivative pair, which seemed (sort of) similar to the space/wave analogy above.

    *I say certain because of the limited number of cases where analytical solutions can be found for the potential equations
    Interesting. Do you have any links to more info on that?

  15. #15

    Join Date
    May 2005
    Posts
    1,042
    Interesting. Do you have any links to more info on that?
    The case I mentioned above I read in a Fluid Mechanics text, which I can scan for you if you want. I'm having a hard time finding a link that states the case above using my wording, but in general I believe this falls under the broader category of 'conformal mapping' (a transform that preserves angles).

    The specific thing I was doing was researching how to take the analytic solution for inviscid flow around a cylinder and map it to an airfoil using a Kutta-Joukowski transform, for the purpose of validating a time-domain computer model.

    I think I didn't quite answer your question. I am willing to scan the section from my fluid mechanics text, I'm just trying as hard as I can to be lazy right now.

    Joukowsky transform - Wikipedia, the free encyclopedia
    Conformal map - Wikipedia, the free encyclopedia
    Potential flow - Wikipedia, the free encyclopedia
    Last edited by BobMcGee123; 11-06-2010 at 02:16 PM. Reason: wording
