Thread: SUBMISSION: How comparing a signed with an unsigned variable resulted in a sneaky bug

  1. #1
    Registered User
    Join Date
    Oct 2021
    Posts
    118

SUBMISSION: How comparing a signed with an unsigned variable resulted in a sneaky bug

    *********
As I was typing this and was about to post it, I kept hitting `undo` in my editor to get back to the original code (the one I'm posting). However, I missed one `undo`, ran the program again and, to my big surprise... IT WORKED! That last `undo` was the one making <end> a signed variable instead of an unsigned one. It turns out the problem was in the condition, because we were comparing values of different signedness. I don't know what the compiler does under the hood, but changing <end> from "u64" to "i64" fixed the program. Again, one more example of unsigned values causing problems...

I want to hear your thoughts on that one! I'm making a compiler and I'm thinking of not supporting unsigned values. I prefer supporting 128 bits using two registers to cover any range needs. As you can see, unsigned variables can lead to very sneaky bugs. I was literally trying to fix this for more than half an hour. Half an hour for something so small... I know they have caused more serious problems to developers far more experienced and professional than me too! My language will focus on safety and on helping the user avoid mistakes and hours of debugging, so the decision to allow or ban unsigned variables is very crucial! Thanks for reading!
    *********

    ORIGINAL POST:
It's late here so maybe I'm blind. If not, then I've tried hard to find what I'm doing wrong but wasn't able to. So, I have the following code snippet:

    Code:
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    
    typedef unsigned long  u64;
    typedef long           i64;
    
    u64 get_final_num(const char*  value, i64 slen) {
      u64 end = 0;
    
      u64 final_val = 0;
      u64 multi = 1;
    
      // NOTE: Another case where overflow caused me problems. Should I remove unsigned values????
      printf("slen = %ld, end = %lu\n", slen, end);
      int i = 0;
      while (--slen >= end) {
        printf("slen: %ld, end: %lu, value: %d\n", slen, end, value[slen]);
        i++;
        // printf("\nslen: %lu\n", slen);
        if (value[slen] >= 48 && value[slen] <= 57) {
          final_val += (value[slen] - 48) * multi;
          multi *= 10;
        }
    
        else { printf("HITS HERE!!! after loop: %d, slen: %ld\n", i, slen); return 2; } // Error, the character is not a digit
      }
    
      return final_val;
    }
    
    u64 str_to_u64(const char* value) {
      i64 slen = strlen(value);
    
      // Error, the value is an empty string
      if (strcmp(value, "") == 0) { return 0; }
    
      if (slen == 1) {
        if (*value == '-') { return 1; } // Error, there is no number after '-'
    
        else { return get_final_num(value, slen); }
      }
    
      return get_final_num(value, slen);
    }
    
    int main() {
      char* s1 = "12345678910";
      printf("libc: %ld | mylib: %lu\n", strtol(s1, NULL, 10), str_to_u64(s1));
      return 0;
    }
The big thing here is the "while" loop. The condition says: "keep looping as long as <slen> is bigger than or equal to <end>", and it decrements <slen> first. Now, the `printf` in the else branch tells us something interesting. When the code hits this branch, <slen> has the value "-1", which is smaller than the value of <end>, which is 0. So why did the condition evaluate to true???

If I change the code slightly by making <end> a signed variable, setting its value to -1 and changing the condition to "while (--slen > end)", then the program seems to work as expected.

Again, sorry if I'm blind and can't see something obvious. I could just leave it as it is and move on, since I found an alternative way that works, but I want to find what I'm doing wrong. I really did try checking it multiple times, but I cannot seem to find what it is...

  2. #2
    Registered User rstanley's Avatar
    Join Date
    Jun 2014
    Location
    New York, NY
    Posts
    947
    rempas:

"I want to hear your thoughts on that one! I'm making a compiler and I'm thinking of not supporting unsigned values."

    So how do you deal with size_t which is an unsigned type, sizeof which evaluates to a size_t value, and all the C Standard Library functions that use or return a size_t value? strlen(), etc...

"My language will focus on safety and on helping the user avoid mistakes and hours of debugging, so the decision to allow or ban unsigned variables is very crucial!"

IMHO, you are wrong! You can't just write unsigned out of a C compiler, or out of any other language. You learn how to use it properly! The programmers using your language, or any other language, would need to learn this as well!

  3. #3
    Registered User
    Join Date
    Feb 2022
    Posts
    29
    I agree that banning unsigned values is a mistake - one which Java made, and which has caused a great deal of trouble when trying to work with fixed-size structures designed with unsigned values in mind (e.g., several network protocols).

    Furthermore, the safety issue isn't caused by using unsigned values, it comes from having overflow and underflow wrap around by default, rather than having a mechanism for handling overflow and underflow in a definable manner. The solution I favor is to have any fixed-width integer or decimal types (which realistically is all of them) have both a defined range, and a specific response for when an overflow or underflow occur. Note that different types may need different solutions; in some cases wrapping is the right solution after all, while in others it should raise an exception, and in still others it should saturate to the max or min value.

    You might even want to take a page from Brendan Trotter's playbook, and not only require all scalar types to be ranged, but also have your compiler report a hard error in any code which could overflow or underflow without it being handled. While I may disagree with his view of runtime errors versus programming errors (he basically sees automatic runtime checks as a language design failure, preferring to force programmers to make manual checks), I do see the reasoning behind it.

  4. #4
    Registered User
    Join Date
    Sep 2020
    Posts
    336
One thing that might be worth adding is that while signed and unsigned addition and subtraction are largely symmetric (i.e. the same hardware can be used for both), along with comparisons, the same isn't true for multiplication, division or "arithmetic right shifts".

    There is most likely something deep to this observation... it is at least important in my line of work.

  5. #5
    Registered User
    Join Date
    Oct 2021
    Posts
    118
    Quote Originally Posted by rstanley View Post
    So how do you deal with size_t which is an unsigned type, sizeof which evaluates to a size_t value, and all the C Standard Library functions that use or return a size_t value? strlen(), etc...
I'm sorry for not making it clear, but I'm not writing a compiler for C; I'm making a new language. I'm just writing the compiler in C at this point. Of course, making a compiler for a language and banning a standard feature of that language doesn't make much sense...

    Quote Originally Posted by rstanley View Post
IMHO, you are wrong! You can't just write unsigned out of a C compiler, or out of any other language. You learn how to use it properly! The programmers using your language, or any other language, would need to learn this as well!
Don't get me wrong, I agree with what you're saying. You need to properly learn to use a tool (which is what a programming language really is). The thing is, humans do make mistakes. Funny enough, I made a post (link at the bottom) asking why a compiler should support "const" at all. All this feature does is stop the compiler from allowing US to modify a variable. But you wrote the code, right? You know that a variable should not be modified, right? So you don't need assistance, or someone to "check" what you can and cannot do... right? Well... WRONG! Countless replies made sure to carve deep into my mind that humans do make mistakes. Sometimes you code late at night, sometimes you code with a headache, sometimes you just lose your focus. This is why some languages (Rust, V etc.) have made their variables immutable by default and require an explicit keyword to make them mutable. This is what I intend to do with my language.

So the idea is similar with this feature too. Users may learn how to use it. But what about when they make this mistake anyway? Or when there are sneaky corner cases? How can we solve this problem?

    Post: Is there any real reason to use "const"? - D Programming Language Discussion Forum

  6. #6
    Registered User
    Join Date
    Oct 2021
    Posts
    118
    Quote Originally Posted by Schol-R-LEA-2 View Post
    I agree that banning unsigned values is a mistake - one which Java made, and which has caused a great deal of trouble when trying to work with fixed-size structures designed with unsigned values in mind (e.g., several network protocols).

    Furthermore, the safety issue isn't caused by using unsigned values, it comes from having overflow and underflow wrap around by default, rather than having a mechanism for handling overflow and underflow in a definable manner. The solution I favor is to have any fixed-width integer or decimal types (which realistically is all of them) have both a defined range, and a specific response for when an overflow or underflow occur. Note that different types may need different solutions; in some cases wrapping is the right solution after all, while in others it should raise an exception, and in still others it should saturate to the max or min value.

    You might even want to take a page from Brendan Trotter's playbook, and not only require all scalar types to be ranged, but also have your compiler report a hard error in any code which could overflow or underflow without it being handled. While I may disagree with his view of runtime errors versus programming errors (he basically sees automatic runtime checks as a language design failure, preferring to force programmers to make manual checks), I do see the reasoning behind it.
Brendan's idea is BRILLIANT! The compiler can predict the possible ranges of a variable at compile time and give a warning. Even if the programmer has created 5 tests (which I doubt), there is always the possibility that you just didn't hit the sneaky case(s) that will make your program crash. And if you don't find it at test time, guess who's going to find it... the end user! And in some cases, this may be catastrophic. All that for a small sneaky case that your tests couldn't predict...

Of course, even when the end user finds the bug and reports it, good luck finding which part of the program caused the error and why if you don't have a good debug mechanism for your software (which, as we all know, is very hard to create and needs tons of your time). Having the compiler make these checks at compile time is a mechanism so simple yet so effective that it can save the world countless hours of debugging and tons of money! And you don't have to spend all those hours designing and implementing an ugly debug system!

Of course, that's not something I thought of myself (and it makes me excited seeing how many ideas exist out there and how much we can improve our tools!), but I wonder why big compilers don't implement a check for something as common and well studied as integer overflow/underflow. At least GCC, Clang, DMD, LDC and GDC don't do that. For example, the following code snippet will not raise any warnings:


    Code:
    #include <stdio.h>
    
    int main() {
      int x;
      scanf("%d", &x);
      short v = x;
      printf("V = %d\n", v);
    }
Unless there are warning flags that I don't know about. Funny enough, LDC is able to catch the possibility of negation overflows and will warn you about that. For example, if the value in an 8-bit variable is -128, then it cannot be negated to 128 as that would overflow. LDC reports that, and if you want to fix it, you either:
• A. Assign the original value to a bigger type and then negate it (so the original value stays unmodified), or:
• B. Check if the value is "-128"; if it is, set a flag, increment the value, negate it, and then increment the final result.

The second way is how I do it! That's something I didn't know, and things like that are why I think compilers should help as much as they can.

When it comes to ranged values, I don't really like the idea. I believe that a specific range for a variable should be something "logical" that is specific to a part of your program. For example, let's say you create a calculator. We have the left-hand side, the right-hand side and the operator. You want to check whether the right-hand side is 0 when the operator is '/', as we cannot divide by 0. When we use variables with fixed ranges, this raises two problems.

1. How will we allow the variable to accept 0 as a value when the operator is not '/' but reject it when it is '/', if the check only looks at ranges? Unless I don't understand what Brendan said...

    2. What happens when the value is out of range? Will it throw an exception and exit? What if we want to allow the user to try again?

The only way I see ranges being useful is when you want to save memory, but that's why we have bit-fields in the first place (and really, we're not in the 70s anymore; do we really have to worry about bits???). Other than that, any range should just be a logical condition, and the programmer should act based on the needs of the program. This makes much more sense IMO.

What makes much more sense to me is to do runtime checks ONLY in debug mode to catch sneaky cases (like the one I ran into), to make the developer's life easier and to prevent bugs from reaching the end user. Languages like C++ and D already do out-of-bounds checks; why can't we do that for all sorts of errors like integer overflow/underflow, division by zero, negation overflow, etc.? What the compiler could do is change the following code:

    Code:
    int main() {
      int x;
      scanf("%d", &x);
      short v = x;
      printf("V = %d\n", v);
    }

    to:

    Code:
    int main() {
      int x;
      scanf("%d", &x);
    
      // Only on debug mode
      if (x > SHRT_MAX || x < SHRT_MIN) throw_exception("Integer overflow");
    
      short v = x;
      printf("V = %d\n", v);
    }
Of course, in this case the compiler should also inform you that the value of 'x' is only known at runtime, so it cannot predict whether it may overflow. Checking only in debug mode means no overhead in release builds. When the programmer finds a bug, it is then in his/her hands to make the needed checks based on the wanted behavior.

Even in the cases where the compiler cannot warn us at compile time and we miss the corner case when testing in debug mode, the end user will find the error and report the exact steps he/she took to get it. Of course, the user runs the release version and won't see any info, as the program will just crash. But after that, the developers can run the debug version and see the exact place that made the program crash and why it happened. This should make our lives much, much easier with as little effort as possible!

I'm now fully promoting the idea that tools should help find bugs as early as possible. Like I said in my previous reply in this thread, the replies from the post I linked made me completely change my view on this topic. I now understand how people code, the problems they face, and the time and effort required to fix those problems. I want to limit these factors and make coding a stress-free experience! At least as stress-free as possible...

I'm inspired by the things humans have made. I don't have a lot of motivation and ideas myself (I hope this changes as soon as possible!), but one thing that motivates me is the fact that other people do! So I want to give other people a tool that they will love and enjoy using!

    Thank you for all the ideas and thoughts you offered!
