How difficult is parallel programming in C?


  1. #16
    Registered User Codeplug's Avatar
    Join Date
    Mar 2003
    Posts
    4,681
    >> I guess nothing will prevent that...
    There's ECC memory - Dynamic random access memory - Wikipedia, the free encyclopedia

    gg

  2. #17
    Registered User
    Join Date
    May 2006
    Posts
    57
Yes, I hope to get hardware configured to run ECC memory. It will cause a performance hit, however. Also, ECC memory is not perfect; some soft errors will probably slip through. If cosmic ray particles flip 2 bits in the same word (unlikely but possible), standard SECDED ECC can detect the error but can't correct it, and a 3-bit flip may go unnoticed entirely. That could be enough to throw such huge calculations completely off.

My concern about using a single processor is that these programs are HUGE. One program will easily use up all memory resources (assuming I can find an operating system that allows a large enough process size), so they will be running one program at a time. Also, their initial request to me was for speed, which is out the window if they use 1 out of 4 or 1 out of 8 cores.

The other cores will help a little, because the operating system won't have to interrupt the main program as often, but not a lot. Still, they said that is what they want.

  3. #18
    C++まいる!Cをこわせ!
    Join Date
    Oct 2007
    Posts
    22,829
    Any 64-bit OS will allow you to use a lot of memory.
    Also, have you considered that if you can't run a single calculation on multiple cores, that you can actually run several calculations simultaneously?
    Quote Originally Posted by Adak View Post
    io.h certainly IS included in some modern compilers. It is no longer part of the standard for C, but it is nevertheless, included in the very latest Pelles C versions.
    Quote Originally Posted by Salem View Post
    You mean it's included as a crutch to help ancient programmers limp along without them having to relearn too much.

    Outside of your DOS world, your header file is meaningless.

  4. #19
    Registered User
    Join Date
    Jun 2005
    Posts
    6,458
darsunt, I think you're getting dazzled by one aspect of the problem: the hardware.

The hardware is actually the cheapest factor in this whole setting. If a problem can be split across four cores, it can be split across four computers, and the cost of four computers is probably less than the cost of rearchitecting the software for parallelism. Make no mistake: rearchitecting a working software design so it can run in parallel is a massive rearchitecture. The incidence of systematic errors (i.e. programmer errors) increases substantially with the complexity of the system design and with the size of the code base, and parallelism increases both. One practical rule of thumb: system rearchitecture is an extremely effective way to turn a working system into an expensive and unreliable one.

    The hardware will also be routinely replaced. Parallelism, on the other hand, increases both the initial and maintenance cost of the software - because it increases the complexity of the software, making it harder to maintain. "Bang for buck" of hardware will keep going up (even if the software is not written specifically to exploit multiple cores) but that will not necessarily be true with a new software design.

I consider the concerns raised in this thread about floating point precision to be irrelevant. The mathematics of floating point is not trivial, but nor is it a show stopper for large scale numerical calculations, provided it is accounted for in the software design (specifically the algorithms) and when interpreting the program's outputs. That's true whether or not the software is parallelised.

Similarly, the point about soft errors is irrelevant, even if we ignore the fact that cosmic rays are only one contributor. Soft errors are a probabilistic effect that needs to be detected (if possible) and accounted for, but, if anything, parallel software is more likely to be adversely affected, because it uses additional hardware resources (e.g. memory) for synchronisation. That means more machinery, in both software and hardware overhead, to detect the knock-on impacts of soft errors.

    What I suspect these guys are looking for from you is optimisation of the code they have - i.e. fine tuning the algorithms - rather than a complete system rearchitecture. In terms of return on investment, I suspect they're right.
    Right 98% of the time, and don't care about the other 3%.

  5. #20
    Registered User
    Join Date
    Jul 2009
    Posts
    1
How can I merge two .exe files together so that they both run simultaneously?
Please reply as soon as possible.
Thanks.

  6. #21
    Registered User
    Join Date
    Dec 2006
    Location
    Canada
    Posts
    3,183
    "as soon as possible"


