Yep... there's just no other way to do it. Indexing can be used, but your code will have to be able to deal with multiple identical fields in your indexes... and that means you will have to search the index sequentially. One way or the other, a birthday search requires looking at every record in the file.

Just forget my client's Transaction file for a moment.
These are the records stored in that file:
one name and one date of birth.
There are 200,000 records. OK. Now I want to find the people who were born on a specific date. With your method I have to search all 200,000 records.
Yes. As I said above, extracting a birthday index from the main client records can be done... but that index file then has to be sorted, which isn't much of a chore...

But if the records are in sorted order by date of birth, we can check records only until our date is passed; after that we can skip the remaining records.
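To make the idea concrete, here's a minimal sketch (not your actual file layout -- the record shape and "YYYYMMDD" date format are assumptions) of extracting a sorted birthday index and stopping the scan as soon as we pass the date we want:

```python
# A minimal sketch, assuming each client record is a (name, birthday) tuple
# and birthdays are "YYYYMMDD" strings so they sort chronologically.
import bisect

clients = [
    ("Alice", "19700315"),
    ("Bob",   "19820101"),
    ("Carol", "19700315"),
    ("Dave",  "19911224"),
]

# Extract a (birthday, record_number) index and sort it once.
index = sorted((bday, recno) for recno, (name, bday) in enumerate(clients))

def find_by_birthday(index, bday):
    """Return the record numbers of all clients born on bday.
    bisect jumps straight to the first possible match, and sorted
    order lets us stop as soon as the birthday changes -- we never
    touch the records past the ones we care about."""
    start = bisect.bisect_left(index, (bday, -1))
    hits = []
    for b, recno in index[start:]:
        if b != bday:
            break  # sorted order: nothing beyond here can match
        hits.append(recno)
    return hits
```

The index still has to be rebuilt (or maintained) when records are added, but the search itself no longer looks at all 200,000 entries.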
By not seeing it as a problem... Stop and think about your daily business practices... what information do you look up the most often?

But with random-order access, we have to search every record. Please explain to me how to fix this problem.
I'm betting it's the client's account number... so you have 2 choices here...
1) Record number in account number...
Give your customers sequential account numbers like this...
and so on...
It's a very simple scheme: the first and last groups are randomly generated (i.e. meaningless); the actual account number is in the second group... take the second group, subtract 10000000... and you have the record number in the client file.
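As a hedged sketch of that scheme (the three-group, dash-separated format shown here is an assumed example, not something prescribed above):

```python
import random

BASE = 10_000_000  # offset added to the record number, per the scheme above

def make_account_number(record_number):
    """Build an account number like 1234-10000042-5678 for record 42.
    The first and last groups are random filler; only the middle
    group carries meaning. The exact format is an assumption."""
    pad = lambda: random.randint(1000, 9999)
    return f"{pad()}-{BASE + record_number}-{pad()}"

def record_number_from_account(account):
    """Recover the record number: take the middle group, subtract 10000000."""
    middle = account.split("-")[1]
    return int(middle) - BASE
```

Decoding is pure arithmetic, so looking up a client by account number costs one seek into the main file -- no search at all.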
2) Previously assigned account numbers... (eg. Credit card number)
If you are working with previously assigned numbers you can simply put the account number in the main client file as a field. Extract and sort an index, and use a binary search of the index to locate the record number...
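A binary search over such an index might look like this (a sketch, assuming the index is a list of (account_number, record_number) pairs kept sorted by account number):

```python
def binary_search(index, account):
    """Classic binary search of a sorted (account, record_number) index.
    Each probe halves the remaining range, so even a 200,000-entry
    index is resolved in about 18 comparisons."""
    lo, hi = 0, len(index) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        acct, recno = index[mid]
        if acct == account:
            return recno
        if acct < account:
            lo = mid + 1
        else:
            hi = mid - 1
    return None  # account number not in the index

# Index entries pair an account number with its record number in the main file.
index = sorted([(5551234, 0), (5550001, 1), (5559999, 2)])
```

This only works because account numbers are unique; duplicated fields need the sequential approach below.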
If you have other *unique* bits of information, like a phone number you could build an index for phone numbers.
However, when you have information that can easily be duplicated many times in unknown places throughout the main file, you are relegated to searching sequentially and possibly dealing with multiple return values from the search... FindFirst, FindNext... and yes, it can be a bit slow.
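One way to sketch that FindFirst/FindNext pattern (the dict-based record shape here is an assumption for illustration) is a generator that yields matches one at a time:

```python
def find_all(records, field, value):
    """Sequential FindFirst/FindNext as a generator: yields each record
    number whose `field` matches `value`. Advancing the generator is the
    FindNext; it scans onward from wherever the last match was found."""
    for recno, rec in enumerate(records):
        if rec.get(field) == value:
            yield recno

records = [
    {"name": "Alice", "phone": "555-0001"},
    {"name": "Bob",   "phone": "555-0002"},
    {"name": "Carol", "phone": "555-0001"},
]
matches = list(find_all(records, "phone", "555-0001"))
```

The whole file may still be touched in the worst case, which is exactly the "a bit slow" caveat above.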
This is just so simple... if you add the transaction records in the order they are created (i.e. always append to the end of the file) they will automatically be in chronological order. Since it is most likely that inquiries will be about recent (as opposed to ancient) transactions, just search the file *backwards* from end to beginning.

In my client transaction file I want to search for a record with a specific date. If it is ordered by date, we can search only up to that date. With the random access method we have to search all records, so the processing is wasted on some of them. I hope you understand my problem? Please help me!
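The backward scan is one loop. A sketch (assuming appended-in-order records, each with a "date" field -- the sample data is invented):

```python
def find_latest(transactions, date):
    """Scan from newest to oldest. Because records are appended in
    chronological order, a recent date is found after only a few
    probes instead of a full pass from the top of the file."""
    for recno in range(len(transactions) - 1, -1, -1):
        if transactions[recno]["date"] == date:
            return recno
    return None  # no transaction on that date

transactions = [
    {"date": "20110101", "amount": 10},
    {"date": "20110302", "amount": 25},
    {"date": "20110302", "amount": 40},
]
```

If a date appears more than once, this returns the newest match; keep iterating downward to collect the rest.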
You can optimize this further by breaking the file at specific intervals... yearly or perhaps monthly... simply create a new file on the first day of each year so you have files named 2000.trans, 2001.trans, 2002.trans, etc... If your transaction numbers embed a date code, for example...
and so on, it's a fairly easy matter to open the 2011 file and search there.
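Since the actual date-code format isn't shown above, here's a hedged sketch assuming the year is simply the first dash-separated group of the transaction number (e.g. "2011-000042" -- an invented format):

```python
def transaction_file_for(txn_number):
    """Map a transaction number with an embedded year code to the
    yearly file that holds it. Assumes the (hypothetical) format
    'YYYY-NNNNNN'; adapt the parsing to whatever code you embed."""
    year = txn_number.split("-")[0]
    return f"{year}.trans"
```

With that, a lookup opens one small yearly file instead of scanning a decade's worth of transactions.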
However, to correlate them with clients you will need to add a customer account number field (see above) to each transaction record. In that way the transaction file also becomes an index into the main client file... so in the process of finding a transaction you also gain the means to look up the customer's record as well.
Each transaction record will need fields for...
1) Transaction number (sequential)
2) Customer account number (or record # in main file)
3) Transaction date
As well as any transaction specific information.
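The field list above could be packed as a fixed-length record like this (a sketch using Python's struct module; the field widths and the trailing info field are assumptions):

```python
import struct

# Fixed-length transaction record: transaction number, customer account
# number, date as "YYYYMMDD", plus a 32-byte transaction-specific field.
REC = struct.Struct("<I I 8s 32s")

def pack_txn(txn_no, account_no, date, info):
    """Every record is exactly REC.size bytes, so record N lives at byte
    offset N * REC.size in the file -- true random access."""
    return REC.pack(txn_no, account_no, date.encode(), info.encode())

def unpack_txn(blob):
    txn_no, account_no, date, info = REC.unpack(blob)
    return txn_no, account_no, date.decode(), info.rstrip(b"\0").decode()
```

Because sequential transaction numbers map directly to record offsets, fetching transaction N doesn't even need a search -- just a seek.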
Now you can do binary searches for transaction numbers... which will be very fast. You can do sequential searches based on multiple criteria... Customer, date, transaction type etc...
The thing is... before you start pounding out code you need to do an analysis of the requirements and plan your database layout accordingly... This isn't something you can do "off the cuff"... you HAVE to analyze and plan this kind of data structure or it will eventually fail.