Thread: The 64-bit thread

  1. #16
    Hurry Slowly vart's Avatar
    Join Date
    Oct 2006
    Location
    Rishon LeZion, Israel
    Posts
    6,788
    But of course, if you write non-portable code, you will have problems porting it...

    If you assume that int is 32 bits, you will have problems porting to platforms with different int sizes...

    If you use the Windows API, you will have problems porting the code to Unix...

    The question was about running code compiled by a 32-bit compiler on a 64-bit system. Right?
    All problems in computer science can be solved by another level of indirection,
    except for the problem of too many layers of indirection.
    – David J. Wheeler

  2. #17
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    What's at stake considering programming in C++
    - These compiler toolchains, what are they?

    You mean, what is a toolchain?

    - What type of code compiled on a 32-bit machine will not run on a 64-bit one?

    If we're talking specifically about x86_64, all x86 (32-bit) code will run on a 64-bit system if the OS supports it. (Windows does, of course. Linux does, too, but you can choose to disable the support.)
    In general, of course, 64-bit and 32-bit may be completely different.

    - What type of code compiled on a 64-bit machine will run on a 32-bit one?

    None. That's under the assumption that the code was compiled for the 64-bit machine, of course. You can always use a cross-compiler.

    - Have variable types increased in size (capacity)?

    Yes. Pointers are 64 bits now. All other types typically stay the same, except for long and unsigned long, where compilers disagree. Under Linux, long is now 64 bits too. Under Windows, except possibly for GCC (not sure), long stays at 32 bits.
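
    To make that concrete, here's a quick check you can compile yourself (a minimal sketch; the exact figures depend on the compiler's data model - LP64 on 64-bit Linux, LLP64 on 64-bit Windows):

    Code:
    #include <cstdio>
    #include <cstddef>

    int main()
    {
        // Typical 64-bit results: int stays at 4; long is 8 under Linux (LP64)
        // but 4 under 64-bit Windows (LLP64); pointers and size_t are 8 on both.
        std::printf("int       : %u\n", (unsigned)sizeof(int));
        std::printf("long      : %u\n", (unsigned)sizeof(long));
        std::printf("long long : %u\n", (unsigned)sizeof(long long));
        std::printf("void*     : %u\n", (unsigned)sizeof(void*));
        std::printf("size_t    : %u\n", (unsigned)sizeof(std::size_t));
    }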

    - Is there an effort between the major players to agree on a common size?

    Not really. On Windows, everyone (except possibly GCC, as I said) seems to be following Microsoft's lead, as usual, but across platforms there seems to be no such consensus.

    - What side effects are expected from 64-bit compiled applications in terms of size and speed?

    64-bit apps generally need more memory: every pointer variable is twice as large, as are some other types, like size_t, ptrdiff_t, etc. In terms of speed, more memory must be fetched and less data fits into the caches, which makes things a bit slower (though the extra fetching is offset by the wider bus). However, data transfers (memcpy, for example) can work in larger chunks and can thus be considerably faster. In addition, the x86_64 CPUs have more registers, which is a definite win for most code. All in all, expect 64-bit code to be a bit faster.
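
    To illustrate the "larger chunks" point, here's a toy sketch (my illustration only - a real memcpy is far more elaborate) that copies 64-bit words instead of single bytes:

    Code:
    #include <cstdio>
    #include <cstring>
    #include <cstddef>
    #include <stdint.h>

    // Toy example: copy whole 64-bit words, then finish with the leftover bytes.
    // Assumes dst and src don't overlap and are suitably aligned for uint64_t
    // access (true for the arrays below; a real memcpy handles the general case).
    void copy_words(void* dst, const void* src, std::size_t n)
    {
        uint64_t*       d = static_cast<uint64_t*>(dst);
        const uint64_t* s = static_cast<const uint64_t*>(src);
        const std::size_t words = n / sizeof(uint64_t);

        for (std::size_t i = 0; i < words; ++i)
            d[i] = s[i];

        std::memcpy(static_cast<char*>(dst) + words * sizeof(uint64_t),
                    static_cast<const char*>(src) + words * sizeof(uint64_t),
                    n % sizeof(uint64_t));
    }

    int main()
    {
        uint64_t src[4] = { 1, 2, 3, 4 };
        uint64_t dst[4] = { 0, 0, 0, 0 };

        copy_words(dst, src, sizeof(src));
        std::printf("%lu %lu\n", (unsigned long)dst[0], (unsigned long)dst[3]);
    }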

    The 64-bit hype
    - Considering 32-bit systems accompanied much of the software development during some of its most productive years, do you expect 32-bit machines to finally disappear in 5 years? 10? Maybe later? Much sooner?

    Are we still talking about the x86 family only? In that case, 10 to 15 years. 32-bit architectures in general, however, are here to stay.

    - What's in your general opinion the current state in terms of compilers, debuggers, profilers and overall industry support for 64-bit machines?

    Pretty good, but then, I'm working in Linux, which had support for various 64-bit architectures for a long time.

    - Should I consider an upgrade today? In 1 year? 2? 5?

    Given that all new CPUs produced are 64-bit CPUs, just upgrade when you would normally upgrade and you'll get the 64-bit stuff.

    Quote Originally Posted by Mario F. View Post
    If I recall correctly long long is expected for C++0x too, right?
    It's in there, definitely.

    Wouldn't floating point numbers also be expected to take advantage of the wider data bus and see, if not a change to the current types, some form of a "long double"?
    long double already exists. It's just that it's the same size as double in Microsoft's compilers (64 bits). GCC on x86 uses 80 bits, which is the maximum the x87 FPUs deal with.
    The FPU architecture didn't really change. However, SSE and SSE2 already work with 128-bit registers (packing four floats or two doubles), and this is supported by various compilers. (Note, however, that SSE really cares about data alignment, which means this data must be aligned at 16-byte boundaries - and malloc/new only guarantee 8-byte alignment on the various compilers. Oops!)
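
    If you want to see this on your own machine, here's a small sketch (assuming a POSIX system such as Linux; under MSVC the long double figure will be 8 and you'd use _aligned_malloc instead of posix_memalign):

    Code:
    #include <cstdio>
    #include <stdlib.h>   // posix_memalign, free

    int main()
    {
        // GCC on x86/x86_64 typically prints 12 or 16 here (the 80-bit x87
        // format plus padding); Microsoft's compiler prints 8, same as double.
        std::printf("sizeof(long double) = %u\n", (unsigned)sizeof(long double));

        // malloc/new usually guarantee only 8-byte alignment, so data meant
        // for SSE needs an explicitly aligned allocation.
        void* p = 0;
        if (posix_memalign(&p, 16, 64 * sizeof(float)) == 0)
        {
            std::printf("16-byte aligned block at %p\n", p);
            free(p);
        }
    }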
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  3. #18
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Thanks a bunch for the replies so far folks. I trust your opinions more - especially because they may generate debate - than much of what I have been reading.

    I've skimmed through the last posts but unfortunately can't give them much attention until Monday. Something came up, so don't think any less of me for my silence.

    Cheers.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  4. #19
    Massively Single Player AverageSoftware's Avatar
    Join Date
    May 2007
    Location
    Buffalo, NY
    Posts
    141
    Just to introduce another perspective, from the Apple side of the house, we've been running "pseudo 64 bit" for years. The PowerPC G5 chip is 64 bit, and OS X seems to take advantage of some of the 64 bit features, the expanded RAM ceiling for example, but not all. 10.5 is supposed to be fully 64 bit, and I can't wait to see how it performs on my system.

    One thing that I think Microsoft ought to copy from Apple is their application packaging strategy, which deals with the 64 bit issue quite nicely. Apple is using something they call "fat binaries," where your program gets compiled twice, once for 32 and once for 64 bit. The resulting builds are combined into a single executable file, and the correct version is selected on program launch. This of course results in larger executables, but gives you universal compatibility. OS X uses the same strategy for the Intel/PowerPC variance, so if you choose to cover all the bases you end up with a 4-way executable for 32 and 64 bit versions of both chips.
    There is no greater sign that a computing technology is worthless than the association of the word "solution" with it.

  5. #20
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    I think they officially call them "universal binaries" ...
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law

  6. #21
    Massively Single Player AverageSoftware's Avatar
    Join Date
    May 2007
    Location
    Buffalo, NY
    Posts
    141
    Quote Originally Posted by CornedBee View Post
    I think they officially call them "universal binaries" ...
    Universal binaries are the PowerPC/Intel hybrids. Fat binaries were the 32/64 bit hybrids. When you combine the two, I guess you get "universal fat binaries."

    Or something.
    There is no greater sign that a computing technology is worthless than the association of the word "solution" with it.

  7. #22
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by CornedBee
    All other types typically stay the same, except for long and unsigned long, where compilers disagree. Under Linux, long is now 64 bits too. Under Windows, except possibly for GCC (not sure), long stays at 32 bits.
    Yes, I read about this too on the link Sinkula provided but didn't have the time to ask then. I'll ask now:

    Correct me if I'm wrong, but doesn't this introduce a new portability issue that didn't exist between 32-bit Linux and Windows machines where at least C++ integral variable sizes were consistent?

    Quote Originally Posted by CornedBee
    32-bit architectures in general, however, are here to stay.
    I'm curious as to why you say this. I also read about it, but little effort was made to explain why exactly. Do you mean as an architectural solution for other types of systems (pocket PCs, for instance), or still in the shape we are currently so familiar with?

    Quote Originally Posted by AverageSoftware
    The resulting builds are combined into a single executable file, and the correct version is selected on program launch. This of course results in larger executables, but gives you universal compatibility.
    Any chance of a VM that compiles the package at installation time thus resulting in a smaller executable?
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  8. #23
    Cat without Hat CornedBee's Avatar
    Join Date
    Apr 2003
    Posts
    8,895
    Quote Originally Posted by Mario F. View Post
    Correct me if I'm wrong, but doesn't this introduce a new portability issue that didn't exist between 32-bit Linux and Windows machines where at least C++ integral variable sizes were consistent?
    Yup, it does. But generally, if an application is written for Linux, it should be written with architecture-independence in mind from the get-go, because Linux supports so many.
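
    For what it's worth, one common way to sidestep the long difference (a sketch using the C99-style fixed-width typedefs; compilers without <stdint.h>, such as older MSVC versions, can use <boost/cstdint.hpp> instead) is to spell out the width you actually need:

    Code:
    #include <stdint.h>   // or <boost/cstdint.hpp> where this header is missing
    #include <cstdio>

    int main()
    {
        // These keep the same size on 32- and 64-bit Linux and Windows alike,
        // unlike plain long, which is 8 bytes on 64-bit Linux and 4 on Windows.
        int32_t  counter   = 0;
        int64_t  big_value = 0;
        uint64_t file_size = 0;

        std::printf("%u %u %u\n",
                    (unsigned)sizeof(counter),
                    (unsigned)sizeof(big_value),
                    (unsigned)sizeof(file_size));
    }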

    I'm curious as to why you say this. I also read about it, but little effort was made to explain why exactly. Do you mean as an architectural solution for other types of systems (pocket PCs, for instance), or still in the shape we are currently so familiar with?
    The main reason 64 bits were introduced was that we hit the limit of 32-bit addressing. 4 gigabytes of addressable RAM sure sounded like a lot when the 386 was released and typical memory sizes were 4 or 8 MB. Nowadays, it's just a little above average for a desktop computer, and simply insufficient for a big DB server. 64-bit architectures theoretically support up to 2^64 bytes of RAM (even though at least the initial Athlon 64 only supported 2^40).
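
    (For scale, in binary prefixes: 2^32 bytes = 4 GiB, 2^40 bytes = 1 TiB, and 2^64 bytes = 16 EiB.)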

    But embedded systems are not there yet. Nor will some of them ever be there. For small devices, 64 bits are overkill that artificially makes the things more complicated than they need to be. 32 bits suffice. Heck, 16 bits suffice for some devices.

    For example, the ARM CPU (used in most portable music players, various PDAs, handheld game devices and many more - the iPod, Game Boy Advance and Nintendo DS, for instance) is a 32-bit CPU, and that won't change anytime soon. PDAs might upgrade to 64 bits, but music players don't need that.

    The x86 32-bit CPUs will eventually die out, but 32-bit in general won't.
    All the buzzt!
    CornedBee

    "There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
    - Flon's Law
