Hi,

I've written some code to decode a file and write the decoded data out to a new file. The problem is that at the moment it's incredibly slow: it processes the file character by character, and the input could be a very large file.

My current decoding process is:

Read a line from the file, process each character individually, write each decoded character out to the file individually, then repeat for all lines.

If I comment out the 'write each char to file' line of the code, then it's a whole lot faster, so I think the best solution is to use a buffer of some kind, but I'm not really sure how to implement one or what size would be most suitable.

I guess I could use a std::string as a buffer and append a certain number of chars to it before writing it out, but I don't know how many chars there will be or what the memory limitations on a string are. I'm also not too confident about how to correctly use a char * buffer: how to allocate the memory for it and how to add a single char to it each loop iteration (strcpy()?). So if anyone could tell me how to optimize this code and kill the slowness, that would be great.
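
Something like this is what I had in mind for the std::string version, just an untested sketch (the 4096 flush threshold is a number I made up):

Code:
string out_buf;
const string::size_type FLUSH_AT = 4096; // guessed flush threshold

// inside the decode loop, instead of write()ing one char at a time:
out_buf += (char)ascii_result;
if (out_buf.size() >= FLUSH_AT)
{
	write(ostr, out_buf.c_str(), out_buf.size());
	out_buf.clear();
}

// once all lines are done, flush whatever is left over:
if (!out_buf.empty())
	write(ostr, out_buf.c_str(), out_buf.size());

Does that look like the right idea, or is there a better size to flush at?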

Here's my current code (the file output uses native Linux calls, but I don't think that's relevant to the problem).

Code:
string tmp;
char *buffer = new char[1024]; // input line read buffer (istr, ostr and header_line are declared elsewhere)

do
{
	istr.getline(buffer, 1024);

	header_line = strstr(buffer, "=yend");

	if (header_line == NULL)
	{
		int ascii_result = 0;
		size_t len = strlen(buffer); // hoisted out of the loop condition

		// Decode line
		for (size_t i = 0; i + 1 < len; i++)
		{
			...
			// Decoding maths on buffer[i]
			...

			// Dump decoded char to output file (the slow part)
			tmp = (char)ascii_result;
			write(ostr, tmp.c_str(), 1);
		}
	}

} while (header_line == NULL);
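
The raw char array version I was wondering about would, I guess, go something like this (again untested, the 4096 size is made up, and keeping a byte count seems to mean no strcpy() is needed at all):

Code:
char out_buf[4096]; // fixed-size output buffer, 4096 is a guess
size_t used = 0;    // how many bytes are filled so far

// inside the decode loop, just index and count:
out_buf[used++] = (char)ascii_result;
if (used == sizeof(out_buf))
{
	write(ostr, out_buf, used); // flush when full
	used = 0;
}

// after all lines are processed, flush the remainder:
if (used > 0)
	write(ostr, out_buf, used);

Is one of these approaches clearly better, and is there a rule of thumb for the buffer size?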

Thanks for any help,

Jack