I'm not saying that it isn't. But that's just not the job of a Huffman encoder; that would be an RLE/LZW/Huffman encoder, or what have you. You would then have more than one compression model to transmit, so it wouldn't be a strictly Huffman encoder anymore.

Obviously, if you were to build a compression *utility*, you would probably employ a number of compression algorithms internally. Like I said, though, it's a matter of delegation. If my utility first passes the data through an RLE encoder and then on to a Huffman coder, would it make sense for the Huffman coder to also attempt an RLE pass, appending yet another compression model to the output stream? And what if that output then gets passed to an LZW encoder that itself tries to apply a Huffman encoding? Every compression method generates overhead. By separating the algorithms cleanly, we ensure that only one model is generated per compression phase.
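To sketch what I mean by delegation (hypothetical names, Python, and a deliberately toy RLE format of `(count, byte)` pairs): each stage implements exactly one algorithm, and only the pipeline decides the order of stages. No stage knows or cares what came before or after it, so each stage contributes exactly one model's worth of overhead.

```python
def rle_encode(data: bytes) -> bytes:
    """Run-length encode as (count, byte) pairs; runs capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Inverse of rle_encode: expand each (count, byte) pair."""
    out = bytearray()
    for i in range(0, len(data), 2):
        out += bytes([data[i + 1]]) * data[i]
    return bytes(out)

def pipeline(data: bytes, stages) -> bytes:
    """Apply each stage's encoder in order. A Huffman stage would slot in
    here as just another callable; it never re-runs RLE itself."""
    for encode in stages:
        data = encode(data)
    return data
```

A full utility would chain `pipeline(data, [rle_encode, huffman_encode])`, with each encoder prepending only its own model (here, the RLE format itself; a Huffman stage would prepend its code table).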