How do they cram the data into a smaller file without losing any bytes? ...or do they lose some bytes?
Last edited by abh!shek; 05-13-2008 at 12:57 AM.
It depends on the algorithm: some are lossless, others are lossy.
As a simple example: suppose you wanted to compress a string of ten 'a's. You could express it as a pair: the number 10 and the character 'a'. This is lossless, since given this pair you can reconstruct the string of ten 'a's exactly.
Thanks, and sorry for starting this thread without googling properly.
Some are lossy, discarding information of low value. Take a picture: if 3 pixels have very similar red values, e.g. (0,0,128), (0,1,129), (0,0,127), there is very little difference the human eye would perceive, so to improve compression the algorithm may treat them all as the same value, like (0,0,128). You lost information, but it's not noticeable, so the decompressed image would look almost exactly like the original.
A general approach is to split the data into units (like bytes or pixels), sort them by frequency, and then encode the more frequent units with shorter codes while the less frequent units get longer codes. Check out Huffman coding.
MagosX.com
Give a man a fish and you feed him for a day.
Teach a man to fish and you feed him for a lifetime.