When creating arrays I can't define any larger than 100 million with double. Is there a way to get even larger arrays or would this be unstable and I would be better off using several arrays rather than one large one?
Um, assuming a double is 64 bits (8 bytes) in size, you're trying to allocate 800,000,000 bytes on the stack. That's about 762.939453125 MB of data. In general, your stack size is very limited, typically from 1 MB to around 8 MB.
I would recommend you use new (or malloc() in C) to allocate the memory dynamically from the heap. However, if you are going to allocate that much memory... well, rethink what you're planning on doing.
"That's about 762.939453125 MB of data."

About. You know, roughly speaking, plus or minus a few kilobytes, in the general ballpark of 762.939453125 MB.
Might've been a copy/paste from calc. (Couldn't resist - probably time I go to sleep.)
long time; /* know C? */
Unprecedented performance: Nothing ever ran this slow before.
Any sufficiently advanced bug is indistinguishable from a feature.
Real Programmers confuse Halloween and Christmas, because dec 25 == oct 31.
The best way to accelerate an IBM is at 9.8 m/s/s.
recursion (re - cur' - zhun) n. 1. (see recursion)
It sounds like Jarwulf is already using dynamic memory and running out of that.
It doesn't matter whether there are several smaller arrays or one larger one; if you need 100 million doubles, then you need that much memory.
You have to figure out if you really need over 100 million at one time and if they really need to be doubles.
In general, when you need more than perhaps about 512 MB of something, you need to change your architecture to store the data primarily on disk, and only use RAM as a cache for the most recently needed part of that data. Like idtech6 does.
My homepage
Advice: Take only as directed - If symptoms persist, please see your debugger
Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"
Okay, maybe if I describe what I'm doing we can come up with a good solution. I'm doing a population simulation of sorts. Say it consists of a billion steps. But I have to do this billion steps 100 times. After doing the billion steps 100 times I average it and come up with one billion step simulation.
Since for every particular step I need all 100 instances before I average, I can't do it piecemeal. The only solution I can think of would be to use file I/O to write down every single value and then, once the first cycle is finished, read in every single value again, do an addition, and then write it down again, repeating this for every single pass. I think this would slow down the process too much.
If you have to do all billion steps in order and can't figure any of them out from the others, then I'm guessing you'll have to do something like what you described. If you use a binary file and fixed-size pieces of data, then it might not be too slow to read and write values in chunks. For example, you could run the first 1000 steps of the simulation, then read in the first 1000 accumulated values, add the steps you just ran to them, and write them back out, then continue with the next 1000 and do the same until you've done all 1 billion. You might experiment with different chunk sizes to see which performs fastest.
Last edited by Daved; 06-28-2008 at 11:37 PM.
It sounds like you'd want file I/O with a worker thread to do the reading for you. That'd be the fastest solution. That is, the worker thread locks the file, reads x values from it, unlocks, and waits for the main thread to be done before using the file handle to read more. Perhaps an auxiliary array would be best, so that your worker thread can supply your main thread with values all the time.
Also, instead of copying all data from array A to B, just switch the pointers like they do in DirectX. Much faster.
"What's up, Doc?"
"'Up' is a relative concept. It has no intrinsic value."
The problem is that the main thread would be lightning fast compared to the reads, so the reading would quickly lag behind, eliminating the whole advantage of a second thread. It may even be slower.
Using caching when reading/writing files, however, will give you a fair speedup. At least on Windows.
And besides, it's easier to keep it simple instead of complex.
Use a map as a sparse vector. Your data is probably full of zeros, so a sparse representation is best suited.