How do machines share information in distributed computing in a manner that's efficient? Do they have to customize the software for each problem?
Typically, yes, although in some cases (digital filtering, for example) you can write a fully generic algorithm, and each new problem is merely a new set of coefficients for the generic code.
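A minimal sketch of that idea, using a FIR filter as the generic algorithm. The function and coefficient sets below are illustrative only; they aren't taken from any particular project:

```python
# One generic FIR-filter routine serves every filtering problem;
# each new "problem" only supplies a different coefficient list.

def fir_filter(samples, coeffs):
    """Each output sample is a weighted sum of the current
    and previous input samples."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]
        out.append(acc)
    return out

# Two different "problems", same generic code:
moving_average = [0.25, 0.25, 0.25, 0.25]   # smoothing job
difference     = [1.0, -1.0]                # differencing job

signal = [1.0, 2.0, 3.0, 4.0]
print(fir_filter(signal, difference))  # [1.0, 1.0, 1.0, 1.0]
```

The point is that only the data (coefficients and samples) changes between jobs, so the same client binary can be shipped to every machine.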
Generally, the clients download blocks of data, process each block, and then upload a whole batch of results (either immediately or later).
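The download/process/upload cycle might look roughly like this. The dictionary stands in for the project server, and `fetch_block`, `process`, and `submit_results` are hypothetical names, not any real project's API:

```python
# Sketch of a distributed-computing client: pull blocks from the
# server, process each, and upload results in batches.

def fetch_block(server):
    """Download the next block of raw data (here, just a list)."""
    return server['pending'].pop(0) if server['pending'] else None

def process(block):
    """Do the actual computation on one block (here, just sum it)."""
    return sum(block)

def submit_results(server, results):
    """Upload a whole batch of results at once."""
    server['done'].extend(results)

def run_client(server, batch_size=2):
    batch = []
    while True:
        block = fetch_block(server)
        if block is None:
            break
        batch.append(process(block))
        if len(batch) >= batch_size:    # upload in batches, not one by one
            submit_results(server, batch)
            batch = []
    if batch:                           # flush any leftover results
        submit_results(server, batch)

server = {'pending': [[1, 2], [3, 4], [5]], 'done': []}
run_client(server)
print(server['done'])  # [3, 7, 5]
```

Batching the uploads is what keeps the communication overhead small relative to the computation.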
Poor software design can, however, lead to a decrease in efficiency if you distribute the problem over several processors or machines.
Interesting, I would have thought it would increase efficiency :D

Quote:
Originally posted by Zach L.
Poor software design can, however, lead to a decrease in efficiency if you distribute the problem over several processors or machines.
If it's like a normal supercomputer, splitting the work up as much as possible would be better: everything is shared and right next to everything else.
A cluster with a cheap Ethernet line would be fighting the Ethernet, so that isn't always the best way, but it's a lot easier.
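A back-of-the-envelope check of that point: splitting only pays off when the compute time saved exceeds the time spent moving data over the wire. The numbers below are made up purely for illustration:

```python
# Rough model: time per machine = its share of the work
# plus the time to ship its input data over the network.

def distributed_time(total_work_s, machines, bytes_per_machine, net_bytes_per_s):
    """Wall-clock estimate for one machine in the cluster."""
    return total_work_s / machines + bytes_per_machine / net_bytes_per_s

work = 100.0           # seconds of computation on a single machine
fast_net = 1.25e8      # ~1 Gb/s shared interconnect, in bytes/s
slow_net = 1.25e6      # ~10 Mb/s cheap Ethernet, in bytes/s
data = 200e6           # 200 MB each machine must receive

print(distributed_time(work, 10, data, fast_net))  # faster than one machine
print(distributed_time(work, 10, data, slow_net))  # transfer dominates; slower
```

On the fast interconnect the ten-way split wins easily; on the cheap Ethernet the transfer time alone exceeds the original 100 s of computation.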