Thread: problem using large amounts of memory

  1. #1
    Registered User
    Join Date
    Dec 2006
    Posts
    14

    problem using large amounts of memory

    I am trying to "benchmark" some sorting algorithms I learned from a book.
    The problem is that I can't reserve more than 16383 long variables in memory.
    That means 64 KB.
    I've read that a segment can only be 64 KB long, so I set the compiling options to "Large".
    That didn't help.

    I'm using Borland C++ 3.1 under MS-DOS.
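
    Roughly what my program boils down to (a trimmed-down sketch of the failing part, not the full code):

    Code:
    // Borland C++ 3.1, Large memory model.
    // 22000 * sizeof(long) = 88000 bytes, more than one 64 KB segment.
    long *myArray = new long[22000];

    for (long i = 0; i < 22000; i++)
        myArray[i] = i;   // it dies somewhere in this loop, not at the new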

  2. #2
    Registered User
    Join Date
    Jan 2005
    Location
    Estonia
    Posts
    131
    MS-DOS might be the problem, although I have no experience with it.

  3. #3
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,660
    > I'm using Borland C++ 3.1 under MS-DOS
    Really - actual DOS?

    And not just that black rectangle in the middle of your XP desktop, say, with a C:\> prompt in it?
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  4. #4
    Registered User
    Join Date
    Dec 2006
    Posts
    14

    Yes, real MS-DOS

    Yes. Although I was using XP, I made a FAT32 partition especially for this problem and booted using a Win 98 boot disk.

  5. #5
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,660
    So why not use one of the many free 32-bit compilers which are native to your 32-bit OS?

    Then the maximum memory you can allocate is 2GB, not the 64K from yester-millennium.
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  6. #6
    Registered User
    Join Date
    Dec 2006
    Posts
    14

    Low-level understanding

    I was planning to calculate the running time and get a result close to the real one. Under Windows that would be much harder.

  7. #7
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,660
    What do you mean by "the real one"?

    Unless you're going to turn off all DOS interrupts as well, there is no absolute "it takes x clock ticks" to sort this specific array.

    Besides, the different sorting algorithms are far more different (like quick sort compared to bubble sort) than the overhead of the OS. So long as the background activity remains about the same, you'll have a meaningful comparison.
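
    If you keep the conditions the same between runs, something along these lines is enough. A sketch only - the insertion sort is just a stand-in for whatever algorithm you're measuring, and refilling from the same seed makes every run sort identical data:

    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Stand-in for the algorithm under test; swap in your own. */
    static void mySort(long *a, int n)
    {
        int i, j;
        for (i = 1; i < n; i++) {              /* insertion sort */
            long v = a[i];
            for (j = i - 1; j >= 0 && a[j] > v; j--)
                a[j + 1] = a[j];
            a[j + 1] = v;
        }
    }

    int main(void)
    {
        enum { N = 10000, RUNS = 5 };
        long *a = (long *) malloc(N * sizeof(long));
        int run, i;

        if (a == NULL)
            return 1;

        for (run = 0; run < RUNS; run++) {
            clock_t start, stop;

            srand(12345);                      /* identical input every run */
            for (i = 0; i < N; i++)
                a[i] = rand();

            start = clock();
            mySort(a, N);
            stop = clock();
            printf("run %d: %.2f seconds\n", run,
                   (double) (stop - start) / CLOCKS_PER_SEC);
        }
        free(a);
        return 0;
    }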
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  8. #8
    Registered User
    Join Date
    Dec 2006
    Posts
    14
    I agree with you.
    But no matter how long it takes, and how much I'll have to study, I have this obsession with knowing what happens on every clock cycle.
    This is a long-term ambition.
    And MAYBE I'll figure out what every clock cycle is used on in DOS, but I'm positive I'll NEVER be able to do that in Windows.
    But for now I would be pleased if I could get the running times of each algorithm so I can compare them.
    I've inserted breakpoints and debugged. What seems strange to me is that it doesn't crash when I allocate (myArray = new long[22000]) but when I try to iterate with a for loop in order to initialize it.

  9. #9
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,660
    Did you check the return result of the new?

    Your old compiler isn't going to throw an exception if memory cannot be allocated; it will just return NULL and expect you to deal with it.

    Also, if you're just comparing, why not use 10000 rather than 22000, or any other number for that matter?
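
    In outline, something like this. It's only a sketch - farmalloc(), farfree() and 'huge' pointers are Borland-specific extensions, so check your compiler's documentation:

    Code:
    #include <stdio.h>
    #include <alloc.h>   /* farmalloc / farfree (Borland-specific) */

    int main(void)
    {
        /* 22000 longs = 88000 bytes, more than one 64 KB segment.
           Far pointer arithmetic only changes the offset, so a plain
           far pointer can't index past 64 KB; a 'huge' pointer is
           normalised on every access and can. */
        long huge *a = (long huge *) farmalloc(22000L * sizeof(long));
        long i;

        if (a == NULL) {             /* old compilers return NULL, no throw */
            printf("allocation failed\n");
            return 1;
        }
        for (i = 0; i < 22000L; i++)
            a[i] = i;                /* no crash at the 64 KB boundary */

        farfree((void far *) a);
        return 0;
    }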
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

  10. #10
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,318
    Quote Originally Posted by TechHigh
    I agree with you.
    And MAYBE I'll figure out what every clock cycle is used on in DOS, but I'm positive I'll NEVER be able to do that in Windows.
    The whole idea of running something under DOS so that you can get precise timings is ridiculous. All the tiny differences between your implementation of an algorithm in some language and someone else's implementation will be far more noticeable. I can practically guarantee you that your implementation of an algorithm will not be the fastest one.

    There is no such thing as xxxxxxsort takes exactly xxx clock cycles to sort xxxxx items.

    The thing you can say for sure with sorting algorithms is an upper and lower bound on how many comparisons and assignments (or swaps, if you prefer) it will take.
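
    For example, you can instrument an implementation and count exactly those operations. A sketch - the counters are my own addition, and bubble sort is just the example subject:

    Code:
    #include <stdio.h>
    #include <stdlib.h>

    /* Bubble sort instrumented to count comparisons and swaps. The
       counts depend only on the input sequence, never on the OS or
       the clock, so they are reproducible anywhere. */
    static unsigned long comparisons = 0, swaps = 0;

    static void bubbleSort(long *a, int n)
    {
        int i, j;
        for (i = 0; i < n - 1; i++)
            for (j = 0; j < n - 1 - i; j++) {
                comparisons++;
                if (a[j] > a[j + 1]) {
                    long t = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = t;
                    swaps++;
                }
            }
    }

    int main(void)
    {
        enum { N = 1000 };
        static long a[N];
        int i;

        for (i = 0; i < N; i++)
            a[i] = rand();
        bubbleSort(a, N);
        printf("%lu comparisons, %lu swaps\n", comparisons, swaps);
        return 0;
    }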

    I suggest you get over your irrational fear of timing things under Windows.

  11. #11
    Registered User
    Join Date
    Dec 2006
    Posts
    14
    Quote Originally Posted by iMalc
    There is no such thing as xxxxxxsort takes exactly xxx clock cycles to sort xxxxx items.
    Yes, there is. If you don't believe me, believe Donald Knuth, the author of The Art of Computer Programming, the most respected book in the field.
    Mathematical equations are shown in that book that calculate the exact number of CPU instructions each algorithm takes, depending on the input.

  12. #12
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,318
    The fact that it is possible isn't the point. Virtually nobody uses DOS nowadays, and writing it specifically for DOS does not give you the advantage you are imagining.

    What is your overall goal? It seems to me that your goal is to be able to say that a certain algorithm takes x clock cycles. That just isn't possible to say.

    If you want to compare them against each other, then you could time your implementations under DOS. Or you can just as easily time them under Windows with very similar results. It doesn't really matter which; they'll be about the same. Other things have far more effect on the time taken than the OS you use:
    1. A small optimisation to the implementation of an algorithm could make one algorithm implementation faster than another. Switching to another compiler or compiler version may well do this too.
    2. Sorting a different data type could produce totally different results.
    3. Running the algorithm on a different PC could yield very different results, with different algorithms being faster depending on things like the size of caches etc.
    4. Every single possible permutation of the input sequence is potentially going to give you a different timing. In particular, some algorithms sort faster when the data is nearly sorted, some do the reverse, and some don't care. This has far more impact than the OS choice! (See the sketch at the end of this post.)

    If you just want to see the specific timings for your implementation, compiled by your compiler, with your input sequences, running on your PC under your choice of OS, then by all means go right ahead. But that won't tell you what the best algorithm is for any other situation (implementation / compiler / input sequence / PC / OS).
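
    To see point 4 for yourself, run one implementation over differently ordered inputs. A sketch - bubble sort with an early-exit pass stands in for whatever you're testing:

    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void bubbleSort(long *a, int n)   /* early exit on a clean pass */
    {
        int i, j, swapped;
        for (i = 0; i < n - 1; i++) {
            swapped = 0;
            for (j = 0; j < n - 1 - i; j++)
                if (a[j] > a[j + 1]) {
                    long t = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = t;
                    swapped = 1;
                }
            if (!swapped)
                break;
        }
    }

    static double timeSort(long *a, int n)
    {
        clock_t start = clock();
        bubbleSort(a, n);
        return (double) (clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        enum { N = 5000 };
        static long a[N];
        int i;

        for (i = 0; i < N; i++) a[i] = i;        /* already sorted */
        printf("sorted:   %.2fs\n", timeSort(a, N));

        for (i = 0; i < N; i++) a[i] = N - i;    /* reverse order  */
        printf("reversed: %.2fs\n", timeSort(a, N));

        for (i = 0; i < N; i++) a[i] = rand();   /* random         */
        printf("random:   %.2fs\n", timeSort(a, N));
        return 0;
    }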

  13. #13
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,318
    Quote Originally Posted by TechHigh
    Yes, there is. If you don't believe me, believe Donald Knuth, the author of The Art of Computer Programming, the most respected book in the field.
    Mathematical equations are shown in that book that calculate the exact number of CPU instructions each algorithm takes, depending on the input.
    Wrong! It does not tell you how many CPU instructions an algorithm takes. An algorithm is completely independent of its implementation. He can only state how many CPU instructions an implementation, or more precisely a compiled implementation, takes. How many CPU instructions it takes is most definitely not determined by timing how long it takes.

    Depending on the input sequence, however, yes, most definitely. You were originally just concerned with the time taken. The time taken depends on a whole lot of things: the exact assembly code, processor, cache, RAM, hard disk, motherboard, etc.
    Last edited by iMalc; 01-01-2007 at 03:36 AM.

  14. #14
    C++ Witch laserlight's Avatar
    Join Date
    Oct 2003
    Location
    Singapore
    Posts
    28,413
    If you intend it to be a benchmark, then it makes sense to do it on the system on which you will actually be using whatever implementation you test.

    If you want to see whether what Knuth says about algorithmic complexity holds in practice, then you do not have to worry about the minor details: as your input data size approaches asymptotic bounds, the patterns will emerge.
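
    A sketch of that idea: time the same routine at doubling input sizes and watch the ratio between successive timings. Here the library qsort() is the guinea pig; swap in whatever implementation you are actually testing:

    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static int cmpLong(const void *p, const void *q)
    {
        const long a = *(const long *) p;
        const long b = *(const long *) q;
        return (a > b) - (a < b);
    }

    int main(void)
    {
        double prev = 0.0;
        long n;

        for (n = 1000; n <= 16000; n *= 2) {
            long *a = (long *) malloc(n * sizeof(long));
            clock_t start;
            double secs;
            long i;

            if (a == NULL)
                break;
            for (i = 0; i < n; i++)
                a[i] = rand();

            start = clock();
            qsort(a, (size_t) n, sizeof(long), cmpLong);
            secs = (double) (clock() - start) / CLOCKS_PER_SEC;

            /* An O(n log n) sort shows a ratio a little above 2 each
               time n doubles; an O(n*n) sort tends towards 4. */
            printf("n=%6ld  %.3fs  ratio=%.2f\n",
                   n, secs, prev > 0.0 ? secs / prev : 0.0);
            prev = secs;
            free(a);
        }
        return 0;
    }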
    Quote Originally Posted by Bjarne Stroustrup (2000-10-14)
    I get maybe two dozen requests for help with some sort of programming or design problem every day. Most have more sense than to send me hundreds of lines of code. If they do, I ask them to find the smallest example that exhibits the problem and send me that. Mostly, they then find the error themselves. "Finding the smallest program that demonstrates the error" is a powerful debugging tool.
    Look up a C++ Reference and learn How To Ask Questions The Smart Way

  15. #15
    and the hat of int overfl Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    39,660
    > that calculate the exact number of CPU instructions each algorithm takes, depending on the input.
    In the idealised language which Knuth uses for his examples. This does not necessarily imply an exact count in any other implementation. You can count the number of 'comparisons', or the number of 'exchanges', but unless you're sorting primitive data types, these operations are not completed in one instruction.

    Algorithms are compared using Big O notation. It is these orders which really tell you how a sort is going to perform on larger and larger data sets. You can plot these graphs based on measurements taken on any OS / platform / language and see that bubble sort is O(n*n) and quicksort is O(n log n).

    All that the totality of different language + different compiler + different compiler switches + different programmer + different OS is going to do is change the constant factor 'k' in front of the "Big O" curve. Once you get into the larger data sets, this constant becomes pretty insignificant.

    Sure you could count how many instructions it takes, but most people go with calculating elapsed time. Even on machines which take variable lengths of time to execute each instruction, this is going to average out over the life of the program.
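
    For instance, a head-to-head of a quadratic sort against the library qsort() at growing sizes makes the two curves diverge visibly. A sketch - bubble sort here is just a sample O(n*n) implementation:

    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void bubbleSort(long *a, int n)            /* O(n*n) */
    {
        int i, j;
        for (i = 0; i < n - 1; i++)
            for (j = 0; j < n - 1 - i; j++)
                if (a[j] > a[j + 1]) {
                    long t = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = t;
                }
    }

    static int cmpLong(const void *p, const void *q)  /* for qsort, O(n log n) */
    {
        const long a = *(const long *) p;
        const long b = *(const long *) q;
        return (a > b) - (a < b);
    }

    int main(void)
    {
        int n, i;

        for (n = 1000; n <= 8000; n *= 2) {
            long *a = (long *) malloc(n * sizeof(long));
            long *b = (long *) malloc(n * sizeof(long));
            clock_t t0, t1, t2;

            if (a == NULL || b == NULL) {
                free(a);
                free(b);
                break;
            }
            for (i = 0; i < n; i++)
                a[i] = b[i] = rand();

            t0 = clock();
            bubbleSort(a, n);
            t1 = clock();
            qsort(b, (size_t) n, sizeof(long), cmpLong);
            t2 = clock();

            printf("n=%5d  bubble %.3fs  qsort %.3fs\n", n,
                   (double) (t1 - t0) / CLOCKS_PER_SEC,
                   (double) (t2 - t1) / CLOCKS_PER_SEC);
            free(a);
            free(b);
        }
        return 0;
    }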
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.

