Quote:
but why it happens? 0.02-0.02 is 0 anyway..
Because a computer has to convert decimal numbers to binary format, i.e. a series of 1s and 0s, and some numbers written in decimal cannot be represented exactly in binary. Specifically, a decimal fraction can only be stored exactly in binary if its denominator is a power of two, and 0.02 = 1/50 is not such a fraction. It's just like the fraction 1/3, which can't be represented exactly in decimal: 1/3 = 0.3333... repeating to infinity. If at some point you stop the repeating 3s and use that decimal, say 0.333, as a representation of 1/3, you have only an approximation of 1/3. When you work with an approximation of a number, you can get small differences as soon as you start doing calculations with other numbers.
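Here is a short Python sketch that makes the stored approximation visible (the same behavior appears in any language that uses standard IEEE 754 doubles):

```python
from fractions import Fraction

# 0.02 cannot be stored exactly; printing extra digits
# reveals the binary approximation the double actually holds.
print(format(0.02, ".20f"))   # slightly more than 0.02

# The exact stored value is a fraction whose denominator is a
# power of two: the closest such fraction to 1/50.
print(Fraction(0.02))

# Subtracting a value from itself cancels exactly, so this IS zero:
print(0.02 - 0.02 == 0.0)     # True

# But arithmetic mixing *different* approximations leaves tiny residues:
print(0.1 + 0.2)              # 0.30000000000000004
print(0.1 + 0.2 == 0.3)       # False
```

So 0.02 - 0.02 really is exactly zero, because both operands are the same approximation and cancel bit for bit; the surprises show up when different approximations interact in a calculation.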