At work I've written an integration for a third-party device, and customers are frequently having reliability problems with it (which I'd prefer not to go into further at this stage). One issue that really bugs me is that, in both directions, the comms transmits only the upper half (the high 8 bits) of a 16-bit CRC (polynomial x^16 + x^15 + x^2 + 1).
What's more, the packets themselves are a maximum of six bytes long, and typically only two or three bytes (including the 1-byte check)!
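For concreteness, this is roughly how I believe the check byte is generated. It's only a sketch: I know the polynomial for certain, but the MSB-first bit order, init value of 0x0000 and lack of a final XOR are guesses on my part:

```c
#include <stdint.h>
#include <stddef.h>

/* Bit-at-a-time CRC-16 for poly x^16 + x^15 + x^2 + 1 (0x8005).
 * MSB-first, init 0x0000, no final XOR -- assumed parameters. */
uint16_t crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0x0000;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)(data[i] << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8005)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Only this byte -- the high half of the 16-bit CRC -- goes on the wire. */
uint8_t check_byte(const uint8_t *data, size_t len)
{
    return (uint8_t)(crc16(data, len) >> 8);
}
```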
Now my intuition tells me that although in theory a full 16-bit CRC lets an error go undetected with a probability of only about 1 in 65536, it would not necessarily follow that transmitting only half of those CRC bits equates to a 1-in-256 chance of an undetected error. To me this seems especially true when the packet sizes are so small, such as when the whole CRC is bigger than the packet data!
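To sanity-check that intuition, here's a brute-force experiment I'd run: build a small packet, XOR in every possible error pattern, and count how many get past (a) the transmitted high byte of the CRC-16 versus (b) a genuine 8-bit CRC. Everything here is assumption on my part: the sample payload, the CRC-16 parameters (init 0, MSB-first, no final XOR), and the choice of CRC-8 polynomial (x^8 + x^2 + x + 1):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define DATA_LEN 2                    /* typical payload size from above    */
#define PKT_BITS ((DATA_LEN + 1) * 8) /* payload + one check byte = 24 bits */

/* CRC-16, poly 0x8005 (x^16+x^15+x^2+1), MSB-first, init 0 -- assumed. */
static uint16_t crc16(const uint8_t *d, size_t n)
{
    uint16_t c = 0;
    for (size_t i = 0; i < n; i++) {
        c ^= (uint16_t)(d[i] << 8);
        for (int b = 0; b < 8; b++)
            c = (c & 0x8000) ? (uint16_t)((c << 1) ^ 0x8005) : (uint16_t)(c << 1);
    }
    return c;
}

/* Scheme A: the check byte the device actually sends (high half only). */
static uint8_t check_hi(const uint8_t *d, size_t n)
{
    return (uint8_t)(crc16(d, n) >> 8);
}

/* Scheme B: a genuine 8-bit CRC (poly x^8+x^2+x+1, as used for the ATM HEC). */
static uint8_t check_crc8(const uint8_t *d, size_t n)
{
    uint8_t c = 0;
    for (size_t i = 0; i < n; i++) {
        c ^= d[i];
        for (int b = 0; b < 8; b++)
            c = (c & 0x80) ? (uint8_t)((c << 1) ^ 0x07) : (uint8_t)(c << 1);
    }
    return c;
}

/* Build a good packet, XOR in an error pattern, return 1 if it slips through. */
static int missed(uint8_t (*chk)(const uint8_t *, size_t),
                  const uint8_t *data, uint32_t errmask)
{
    uint8_t pkt[DATA_LEN + 1];
    memcpy(pkt, data, DATA_LEN);
    pkt[DATA_LEN] = chk(data, DATA_LEN);
    for (int b = 0; b < PKT_BITS; b++)
        if (errmask & (1u << b))
            pkt[b / 8] ^= (uint8_t)(0x80 >> (b % 8));
    return chk(pkt, DATA_LEN) == pkt[DATA_LEN];
}

int main(void)
{
    const uint8_t data[DATA_LEN] = { 0x12, 0x34 }; /* arbitrary sample payload */
    long count[4] = { 0 }, miss_hi[4] = { 0 }, miss_c8[4] = { 0 };

    /* Enumerate every nonzero error pattern over the whole 24-bit packet;
     * buckets 1..3 hold the low-weight patterns, bucket 0 holds "all".
     * This is 2^24 - 1 patterns, so it takes a few seconds. */
    for (uint32_t e = 1; e < (1u << PKT_BITS); e++) {
        int w = 0;
        for (uint32_t t = e; t; t &= t - 1)
            w++;                       /* popcount: number of flipped bits */
        count[0]++;
        if (w <= 3) count[w]++;
        if (missed(check_hi, data, e))  { miss_hi[0]++; if (w <= 3) miss_hi[w]++; }
        if (missed(check_crc8, data, e)) { miss_c8[0]++; if (w <= 3) miss_c8[w]++; }
    }

    printf("%-8s %10s %14s %10s\n", "weight", "patterns", "hi-byte miss", "crc8 miss");
    for (int w = 1; w <= 3; w++)
        printf("%-8d %10ld %14ld %10ld\n", w, count[w], miss_hi[w], miss_c8[w]);
    printf("%-8s %10ld %14ld %10ld\n", "all", count[0], miss_hi[0], miss_c8[0]);
    return 0;
}
```

The low-weight buckets are the interesting ones: the full CRC-16 with this polynomial factors as (x+1)(x^15+x+1), so it's guaranteed to catch all 1-bit, 2-bit and odd-weight errors at these packet lengths, and what I'd really be measuring is how much of that guarantee survives when only half the bits are compared. For purely random garbage, I'd expect any 8-bit check to let through about 1 in 256 patterns either way.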
Am I right that using a proper 8-bit CRC outright would be far better than sending half of a 16-bit one?
Would even sending the other half of the CRC (the low byte instead of the high byte) be an improvement?