Thread: need access to long double (10-byte float) support in MSVC++ 6.0

  1. #1
    jasondoucette.com JasonD's Avatar
    Join Date
    Mar 2003
    Posts
    278

    need access to long double (10-byte float) support in MSVC++ 6.0

    After being shocked at reading this:

    "The long double data type (80-bit, 10-byte precision) is mapped directly to double (64-bit, 8- byte precision) in Windows NT and Windows 95."

    from the MSDN entry for Data Type Ranges, I looked into it further and found this:

    "Previous 16-bit versions of Microsoft C/C++ and Microsoft Visual C++ supported the long double, 80-bit precision data type. In Win32 programming, however, the long double data type maps to the double, 64-bit precision data type. The Microsoft run-time library provides long double versions of the math functions only for backward compatibility. The long double function prototypes are identical to the prototypes for their double counterparts, except that the long double data type replaces the double data type. The long double versions of these functions should not be used in new code."

    This is ridiculous. It must have something to do with ANSI standards. I know that the FPU uses 10-byte floating point numbers internally. Is there any way to use 10-byte floats in MSVC++ without having to code in assembly?

  2. #2
    Registered User Codeplug's Avatar
    Join Date
    Mar 2003
    Posts
    4,981
    Looks like you're SOL on using native types with MSVC - what do you need all that precision for, anyway?

    gg

  3. #3
    jasondoucette.com JasonD's Avatar
    Join Date
    Mar 2003
    Posts
    278
    Graphing programs and fractal zooms. The more precision, the better. Also, memory storage is not a problem, so I'd use 20 bytes per float if I could.

    I cannot believe that MSVC++ dropped support for a basic data type of the FPU. I guess programming assembly is the only answer.

    Does anyone have any relevant information on this topic?

  4. #4
    Registered User Codeplug's Avatar
    Join Date
    Mar 2003
    Posts
    4,981
    You could find a nice fixed point math lib - you'll get speed and crazy precision.

    gg

  5. #5
    jasondoucette.com JasonD's Avatar
    Join Date
    Mar 2003
    Posts
    278
    You know what? I've never considered fixed point. And that's something that would work for a fractal zoom, especially when you know the bounding region.
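
    Just so I'm picturing it right - a bare-bones 16.16 fixed-point sketch (made-up names, not from any real library; a real fractal loop would want far more fraction bits):

    #include <cstdio>

    typedef __int64 int64;              // MSVC spelling of a 64-bit integer
    typedef long fixed;                 // 16 integer bits . 16 fraction bits
    const int FRAC_BITS = 16;

    inline fixed to_fixed(double d)  { return (fixed)(d * (1 << FRAC_BITS)); }
    inline double to_double(fixed f) { return f / (double)(1 << FRAC_BITS); }

    inline fixed fx_mul(fixed a, fixed b)
    {
        // widen to 64 bits so the intermediate product doesn't overflow,
        // then shift back down (relies on arithmetic shift for negatives)
        return (fixed)(((int64)a * b) >> FRAC_BITS);
    }

    int main()
    {
        fixed x = to_fixed(1.25);
        fixed y = to_fixed(-0.75);
        printf("%f\n", to_double(fx_mul(x, y)));   // prints -0.937500
        return 0;
    }

    Addition and subtraction are just the plain integer operators; only multiply and divide need the shift.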

    However, it will not work for my graphing program, since I need access to functions like cos, sin, atan, etc. I would have to program these myself. I even know how to program some of them, but I really, really do not feel like messing with that, and optimizing the code.

    Another problem is that even 64-bit integers are only 8 bytes, and once those bits are split between integer and fraction parts they fall short of the precision of a 10-byte float.

    Thank you for your input.

  6. #6
    Pursuing knowledge confuted's Avatar
    Join Date
    Jun 2002
    Posts
    1,916
    Codeplug has a point - you could find a fixed point library and use that, and you could probably find one with the trig functions you need. However, there is a bit of a problem.

    Back in the day... (such a funny phrase)... integer operations were a lot faster than floating point operations, so the fastest way to do math for 3D engines and such was with fixed point. Our good friends at Intel changed that with the FPU. I believe it may have been the Pentium chip which first had this feature, but regardless, at some point they decided that instead of trying to push the clock a bit higher, they would make floating point faster. They put a ton of work into the FPU and, figuring that integer-heavy math was becoming rarer, didn't put as much work into that side. They got it to the point where floating point operations run as fast as (or faster than) integer operations, which of course increased performance greatly. It's still like that today - I believe most floating point operations take one or two clock cycles, while some integer ops are a few cycles slower. Get the Intel ASM documentation for more info.

    Conclusion: if you're programming a 386, by all means, go fixed point. Otherwise, you're better off with floats, and they're easier too. Perhaps you can find a library which will help you out with greater precision. This is an intriguing problem; perhaps I'll think about it.
    Away.

  7. #7
    Pursuing knowledge confuted's Avatar
    Join Date
    Jun 2002
    Posts
    1,916
    I thought about it a bit. I came to the conclusion that Google would be a good place to start, because it always is. I found this - I hope it's of use... (it's ASM, though)

    http://www.opferman.net/Programming/...ls/txt/fpu.txt
    Away.

  8. #8
    Registered User Codeplug's Avatar
    Join Date
    Mar 2003
    Posts
    4,981
    I found CLN. It's a C++ lib that provides unlimited-precision floats (using GMP under the hood), among other things. You have to build it with g++, and it may take some work to build a DLL or get something MSVC can link with.

    I was going to just post GMP, but it didn't look like it provides trig functions - CLN does.
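
    Rough idea of what using it might look like - I'm going from memory on the exact headers and function names, so double-check against the CLN docs:

    #include <cln/cln.h>     // umbrella header, if I remember right
    #include <iostream>

    int main()
    {
        // ask for roughly 40 decimal digits of working precision
        cln::float_format_t prec = cln::float_format(40);

        cln::cl_F x = cln::cl_float(0.5, prec);

        // CLN supplies its own arbitrary-precision trig functions
        std::cout << cln::cos(x)   << std::endl;
        std::cout << cln::atan(x)  << std::endl;
        std::cout << cln::pi(prec) << std::endl;
        return 0;
    }

    You'd build and link that with g++ (-lcln); whether it can then be wrapped into something MSVC will link against is the part that may take some work.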

    gg

  9. #9
    jasondoucette.com JasonD's Avatar
    Join Date
    Mar 2003
    Posts
    278
    Thanks for the replies, guys.

    blackrat364, yes, I was aware that the FPU had become much faster recently. It was the 486DX that first put the FPU on the chip (the 486SX left it out), and from the Pentium on it was standard. Because the FPU is fast, is just as precise as 64-bit integers (unless you restrict the exponent, which I can for fractals), and already comes with a math library, it would be best to use it. Also, because the FPU already works with a 10-byte float type, I WANT to use it.

    Turbo Pascal has allowed this type since it first appeared - and Microsoft is getting rid of it? This makes no sense. Again, I believe this must be an issue with the ANSI standards... It is really disappointing that you cannot take advantage of what the FPU makes available because the language refuses to expose it - especially since the FPU uses 10 bytes internally REGARDLESS of what type you pass to it. *argh!*
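
    In fact, as far as I can tell, inline assembly can still reach the 80-bit format directly even though the compiler won't name the type - something like this (untested sketch) is what I mean:

    #include <cstdio>

    int main()
    {
        double d = 1.0 / 3.0;
        unsigned char ext[10];   // raw 80-bit extended value lives here
        double back;

        __asm {
            fld   qword ptr [d]     // load the 8-byte double onto the FPU stack
            fstp  tbyte ptr [ext]   // store it out in the 10-byte extended format
            fld   tbyte ptr [ext]   // reload the 10-byte value
            fstp  qword ptr [back]  // round it back down to a double
        }

        printf("%.17g\n%.17g\n", d, back);
        return 0;
    }

    That only round-trips a value, of course - doing all the arithmetic in assembly is exactly the extra work I was hoping to avoid.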

    Incidentally - thanks for the links. I am going to look at them now.

  10. #10
    jasondoucette.com JasonD's Avatar
    Join Date
    Mar 2003
    Posts
    278
    Regarding the libraries: I know that it is possible to get libraries that handle numbers with more precision. I have done this myself, actually (I once held the world record for the longest computation of the 196 Palindrome Quest problem: Palindrome World Records), but that is not really why I want to use the 10-byte float. I want speed more than anything - and if I can get 10-byte precision at the same speed, then why not? After quickly looking at these libraries, it appears they are aimed at arbitrary precision (I would have to look closer to see if they work directly on 10-byte floats), which is not what I am after.

    Also, if they use the long double type under MSVC++, their code will not work, since it is effectively just another name for double, if you can believe that. Imagine programming something specifically for 10 bytes, then compiling it on today's compiler and finding out that the type no longer exists and is only kept around, as a synonym for double, for backwards compatibility!

    I will look into the libraries some more...

  11. #11
    Registered User Codeplug's Avatar
    Join Date
    Mar 2003
    Posts
    4,981
    Maybe g++ has 10-byte long double support - you could compile the "math" module with it and use MSVC for the rest.
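
    A quick way to check what a given compiler actually gives you (under MSVC 6 I'd expect 8 and 15 here; g++ on x86 should report the 80-bit type, usually padded to 12 bytes, with 18 digits):

    #include <cstdio>
    #include <cfloat>

    int main()
    {
        printf("sizeof(long double) = %u\n", (unsigned)sizeof(long double));
        printf("LDBL_DIG            = %d\n", (int)LDBL_DIG);
        printf("LDBL_MANT_DIG       = %d\n", (int)LDBL_MANT_DIG);
        return 0;
    }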

    gg

  12. #12
    jasondoucette.com JasonD's Avatar
    Join Date
    Mar 2003
    Posts
    278
    Yes, I was thinking (hoping) that was the case. I don't have any time to look now, but I'll check into it tomorrow.

  13. #13
    jasondoucette.com JasonD's Avatar
    Join Date
    Mar 2003
    Posts
    278
    I don't believe this. Microsoft actually turns off 80-bit internal precision. Even though it doesn't support 80-bit floats, it could still use the default 80-bit precision for intermediate calculations, so that it arrives at more accurate answers.

    This is quoted from the MSDN:

    "Floating-Point Support
    The Microsoft run-time library sets the default internal precision of the math coprocessor (or emulator) to 64 bits. This default applies only to the internal precision at which all intermediate calculations are performed; it does not apply to the size of arguments, return values, or variables. You can override this default and set the chip (or emulator) back to 80-bit precision by linking your program with LIB/FP10.OBJ. On the linker command line, FP10.OBJ must appear before LIBC.LIB, LIBCMT.LIB, or MSVCRT.LIB.
    "

  14. #14
    Registered User Codeplug's Avatar
    Join Date
    Mar 2003
    Posts
    4,981
    Good find! That's always good to know.

    gg

  15. #15
    Registered User
    Join Date
    Apr 2003
    Posts
    8
    I'm not sure, but I think that you would do:

    long double number = 0.0;

    just like you would do:

    long int number = 0;

    for a long integer.
