plzzzzzz... i need a help in natural merge sort

This is a discussion on plzzzzzz... i need a help in natural merge sort within the C++ Programming forums, part of the General Programming Boards category.

  1. #16
    Registered User Kanshu's Avatar
    Join Date
    May 2009
    Posts
    27
    Hmm. Let me restate that. The myfile.read() did not trigger the myfile.eof() condition for the last record.

  2. #17
    Registered User Kanshu's Avatar
    Join Date
    May 2009
    Posts
    27
    I've read about the merge sort algorithm at wikipedia.org:

    Merge sort - Wikipedia, the free encyclopedia

    I wanted to confirm that the algorithm uses recursion. It does. So, now I'm wondering about your implementation using 3 global binary files. Usually, the later recursive calls require private storage for successively smaller bits of data. The contents are discarded once the call terminates. You, on the other hand, are using common or global storage (i.e., binary files) to store data during recursive calls. How is that even possible?

    You may want to use global arrays first to test your implementation. This will eliminate a lot of lines related to reading from and writing to the binary files. If the global arrays work, it should be simple enough to duplicate using files.

    Off the top of my head, I think you should use temporary binary files to write the result of each call to merge/sort. These would play the same role that local arrays do during recursive calls to merge/sort.
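A minimal sketch of that array-first approach (the function and variable names are my own, not from the thread): a recursive merge sort over an in-memory vector, where a local temporary vector stands in for the temporary files.

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <vector>

// Recursive merge sort over the range [lo, hi) of an in-memory vector.
// The local temporary vector plays the role the temporary binary files
// would play in the file-based version.
void mergeSort(std::vector<int>& v, std::size_t lo, std::size_t hi) {
    if (hi - lo < 2) return;              // 0 or 1 elements: already sorted
    std::size_t mid = lo + (hi - lo) / 2;
    mergeSort(v, lo, mid);                // sort the left half
    mergeSort(v, mid, hi);                // sort the right half
    std::vector<int> tmp;                 // "local" scratch storage
    tmp.reserve(hi - lo);
    std::merge(v.begin() + lo, v.begin() + mid,
               v.begin() + mid, v.begin() + hi,
               std::back_inserter(tmp));  // merge the two sorted halves
    std::copy(tmp.begin(), tmp.end(), v.begin() + lo);
}
```

Once this works on a plain array, each vector operation maps onto a read from or write to the corresponding binary file.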

  3. #18
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,292
    The method being used is okay. It's called External Sorting, and it's what we use when the data to be sorted doesn't all fit in RAM at once.
    The files are supposed to contain a decreasing number of runs of ever-increasing length as the sort progresses.

    You shouldn't have to go as high as 90 to cause a problem. Try it with much smaller numbers and run it several times until it fails, then hard-code the exact strings it gave you to make the failure reproducible.
    If there's a bug then it's highly likely that it'll occur with fewer than 10 items.
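To make those runs observable, a small helper like this (hypothetical, not from the thread's code) counts the ascending runs in a sequence. Each natural merge pass should roughly halve the count, so printing it between passes on an input of fewer than 10 items makes a failure easy to spot.

```cpp
#include <cstddef>
#include <vector>

// Counts the ascending runs in a sequence. A new run starts wherever an
// element is smaller than its predecessor.
std::size_t countRuns(const std::vector<int>& v) {
    if (v.empty()) return 0;
    std::size_t runs = 1;
    for (std::size_t i = 1; i < v.size(); ++i)
        if (v[i] < v[i - 1])    // descent: the previous run ended here
            ++runs;
    return runs;
}
```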
    My homepage
    Advice: Take only as directed - If symptoms persist, please see your debugger

    Linus Torvalds: "But it clearly is the only right way. The fact that everybody else does it some other way only means that they are wrong"

  4. #19
    Registered User Kanshu's Avatar
    Join Date
    May 2009
    Posts
    27
    Do you mean to say that one source file and two working files are enough to implement the algorithm? (From the looks of it, he is using the source file as a working file also.)

    I'm skeptical, but interested in the technique.

    --------------------
    By the way, could you comment on/confirm my earlier post about myfile.read()? When I ran it, it did not immediately trigger myfile.eof() for the last record. Since he uses myfile.read() throughout, this may be affecting the solution.

  5. #20
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,292
    Quote Originally Posted by Kanshu View Post
    Do you mean to say that one source file and two working files are enough to implement the algorithm?
    Yes absolutely. The merge process has the extra complication that the streams of data being merged occasionally have items out of order, but you simply check for this every time you advance each data stream. The number of comparisons approximately doubles, but it's still on the order of O(n * log(n))
    By the way, could you comment on/confirm my earlier post about myfile.read()? When I ran it, it did not immediately trigger myfile.eof() for the last record. Since he uses myfile.read() throughout, this may be affecting the solution.
    Yes, that sounds right. eof() isn't true until you've tried reading past the end, IIRC. I actually very seldom read from files directly, surprisingly.
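That per-advance check can be sketched with in-memory vectors standing in for the binary files (all names here are illustrative): merge one ascending run from each of two "tapes", ending a tape's run as soon as its next element is smaller than the last element consumed.

```cpp
#include <cstddef>
#include <vector>

// Merge one ascending run from each of tapes a and b into out, advancing
// the cursors i and j. A tape's run ends when its next element is smaller
// than the element just written -- the "out of order" check made on every
// advance. Using >= means equal elements continue the run.
void mergeOneRun(const std::vector<int>& a, std::size_t& i,
                 const std::vector<int>& b, std::size_t& j,
                 std::vector<int>& out) {
    bool aLive = i < a.size();
    bool bLive = j < b.size();
    while (aLive && bLive) {
        if (a[i] <= b[j]) {
            out.push_back(a[i]);
            ++i;
            aLive = i < a.size() && a[i] >= out.back(); // run continues?
        } else {
            out.push_back(b[j]);
            ++j;
            bLive = j < b.size() && b[j] >= out.back();
        }
    }
    // Drain whichever run is still live.
    while (aLive) { out.push_back(a[i]); ++i; aLive = i < a.size() && a[i] >= out.back(); }
    while (bLive) { out.push_back(b[j]); ++j; bLive = j < b.size() && b[j] >= out.back(); }
}
```

Calling this repeatedly until both tapes are exhausted performs one merge pass; each pass roughly halves the number of runs, giving the O(n * log(n)) bound mentioned above.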

  6. #21
    Registered User sam-j89's Avatar
    Join Date
    May 2009
    Posts
    14
    Hello! Thank you guys a lot for helping me through the past two weeks.
    Finally, my program worked successfully without any problems, even with large values of Nmax. Here is an explanation of my problem:
    In the function copy_sequence:

    Code:
    while ((!f1.eof()) && (w1 >= w2))
    {
        f2.write(w1, 6);   // copy the current word to the output file
        w2 = w1;           // remember it for the next comparison
        f1.read(w1, 6);    // advance to the next word
    }
    In the case of two successive identical words, I consider the preceding word to be less than the succeeding one; that is, "c c c", for example, counts as a sequence under that assumption. But, on the contrary, in this segment:

    Code:
    if ((wrd >= w1) || (f1.eof()))
    {
        wrd = w2;
        myfile.write(w2, 6);                 // start the new sequence with w2
        copy_sequence(f2, myfile, w2, wrd);  // copy the rest of the run
        end_sequence = true;
    }
    When it comes to a sequence of identical words, I took the condition (wrd >= w1) to mean this is not a sequence, so I moved on to the next file to read from.
    That is why dropping the (=) in (wrd >= w1) solved the problem!
    So when Nmax gets bigger, successive identical words become far more likely, and that is what caused the problem.
    Thank you again, iMalc, for all the useful advice. See you later!

  7. #22
    Registered User Kanshu's Avatar
    Join Date
    May 2009
    Posts
    27
    Quote Originally Posted by iMalc View Post
    Yes absolutely. The merge process has the extra complication that the streams of data being merged occasionally have items out of order, but you simply check for this every time you advance each data stream. The number of comparisons approximately doubles, but it's still on the order of O(n * log(n))
    Hmm. I should study this algorithm. Good thing it's finally fixed.

    Quote Originally Posted by iMalc View Post
    Yes that sounds right. Eof isn't true until you're tried reading past it, iirc. I actually very very seldom read from files directly, surprisingly.
    And this one also.

  8. #23
    and the hat of wrongness Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    32,434
    > while (!myfile.eof())
    Why you shouldn't do this (and the observation of what happens if you do) is in the FAQ.
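The pattern the FAQ recommends can be illustrated like this (a sketch, not the OP's code): test the stream state after each read instead of looping on eof(), since eof() only becomes true once a read has already failed past the end.

```cpp
#include <istream>
#include <sstream>

// Count fixed-size 6-byte records, mirroring the thread's myfile.read(w1, 6).
// The read happens first; the stream converts to false once a read fails,
// so the last record is processed exactly once.
int countRecords(std::istream& in) {
    char buf[6];
    int n = 0;
    while (in.read(buf, 6))   // read, then test whether it succeeded
        ++n;
    return n;
}
```

With while (!myfile.eof()) the failed read past the end is noticed only on the next loop test, so the last record gets processed twice, matching the behaviour reported earlier in the thread.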
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.
    I support http://www.ukip.org/ as the first necessary step to a free Europe.

  9. #24
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,292
    Well done solving it yourself!

    See you around


