# creating a new record

• 05-28-2009
Aisthesis
Below I'll explain the only strategy I can immediately see for creating a new record in synchronized arrays, and I'm wondering whether that's the standard way to do it, since my strategy feels a little tedious.

I think it's probably easier to explain this without posting the complete code. So, here goes: Let's say you have some number of arrays ordered so that one index corresponds to one record across the various arrays (I'm calling these "synchronized arrays" in the hope that the name describes the problem). Now, in the program, the user has the choice to keep adding one record across the whole group of arrays as many times as she wants to.

For simplicity's sake, I'll just say there are 2 arrays here, which I'll call QtyList and TitleList, which can be defined with pointers as:

int * QtyList;
char * TitleList;

Now, if all of the arrays had the form of QtyList, we could just make them vectors rather than arrays and be done with it. And I suppose it's also possible to do that with TitleList, but let's just say we're doing it with arrays rather than vectors (I'm in Gaddis, chap. 9 at this point, and I feel sure that he wants the problem approached in this way).

Further let's suppose that Num is an integer variable whose current value is the number of rows that QtyList and TitleList have, and that TitleLength is a const int that determines how wide a title is allowed to be.

Here's my strategy for adding a new record to these arrays:

1) Generate 2 duplicate arrays with
int * QtyListOld = new int[Num];
char * TitleListOld = new char[Num*TitleLength];

then fill in the array values to make them the same as the arrays we already have.

2) In order to allow the original pointers to hold both the old values and the new value that the user is getting ready to enter, we now delete the original arrays with
delete [] QtyList;
delete [] TitleList;

3) We now allocate new, one-row-larger arrays through the original pointers with
QtyList = new int[Num + 1];
TitleList = new char[(Num + 1) * TitleLength];

4) Fill in the pointers we just created with the values from QtyListOld and TitleListOld and the new row with values entered by the user.

5) Delete QtyListOld and TitleListOld with delete [].

Is this the standard way to keep adding an arbitrary number of lines to synchronized arrays? Or is there something better? The only other way I can think of would be to give yourself some headroom when you make the original array and then only go through the process of duplication and deletion once you pass a certain number of lines.

Or does one normally just use vectors from the start? I ask because these data organization methods have to be important. It feels to me like Access (with which I'm a lot more familiar than C++) has to be doing something like that whenever a new record is created. btw, is Access written in C++?
• 05-28-2009
laserlight
Quote:

Originally Posted by Aisthesis
Let's say you have some number of arrays ordered so that one index corresponds to 1 record across the various arrays (I'm calling that "synchronized arrays" in the hope that that name describes the problem). And now in the program, the user has a choice to keep adding one record across the whole groups of arrays as many times as she wants to.

The term that I have seen used to describe this is "parallel arrays". Typically, a better solution is to have a single array (or more sophisticated container) of objects.

Quote:

Originally Posted by Aisthesis
Here's my strategy for adding a new record to these arrays:

Unfortunately, this strategy is expensive. Every time you want to add another element, you need to create a new array, copy over, add the element, and then destroy the old array. A better solution, if you do not know the size in advance or do not have a small upper bound on it, is to create an array and then, whenever it fills, expand its capacity by some factor, so that inserting further elements does not require expanding the array again until the new capacity is reached.

Quote:

Originally Posted by Aisthesis
Or does one normally just use vectors from the start?

That would be a good idea.
• 05-28-2009
hk_mp5kpdw
Quote:

Originally Posted by Aisthesis
btw, is Access written in C++?

At least in the early stages of its existence, it looks like it was VB:

Quote:

Originally Posted by http://en.wikipedia.org/wiki/Microsoft_Jet_Database_Engine
Jet originally started in 1992 as an underlying data access technology that came from a Microsoft internal database product development project, code named Cirrus. Cirrus was developed from a pre-release version of Visual Basic code and was used as the database engine of Microsoft Access. Tony Goodhew, who worked for Microsoft at the time, says

• 05-28-2009
Aisthesis
tx for the Access background, hk.

and to laserlight for the response to the problem at hand. I'm glad to hear that you also find this constant creating and destroying of arrays bad. The more I think about it, the less I like it.

Here's what I think I'll do instead for the parallel arrays: Create the array from the start with lots of headroom. It's easy to mark empty values in these arrays, and we can track the number of active rows in any case. The specific problem in the book deals with entries by the cashier at a bookstore, then printing an invoice. I'm thinking that if there are more than 20 titles, well, the cashier just has to go for 2 invoices.

Do you think the vector solution is close to optimal in terms of what actually happens in memory? With the problem at hand, it's probably no big deal, because we're not talking about huge chunks of memory. But when a vector is created, the fact that the system doesn't know its final length can be a blessing in one sense (it makes programming easier) but a curse in another: something has to happen when the memory reserved for the vector fills up--presumably the contents are copied to a larger block elsewhere and the new data is appended there; otherwise the only possible result would be an error.

Anyhow, I'm thinking really that on today's computers with all kinds of RAM always available, probably the best approach is just to ask what the largest normal array is and cap it at that--even if that array turns out to have a lot of overcapacity 99% of the time.

Then if you want to store the new data to file, you don't add the whole big parallel arrays but quit as soon as you hit an empty row.

Does that make sense?