Thread: 10 digit computation problem

  1. #1
    Registered User
    Join Date
    Jan 2013
    Posts
    4

    10 digit computation problem

My program is supposed to display a ten-digit number starting with the last digit on top. I keep getting an error when I divide the number entered by 10000000000.

    The error says "integer constant too large for 'long' type".

    I attempted using long long int for the variables and there was no change in the error message. I also tried dividing by 10 and then by 100000000; the program ran, but its output was all garbage when ten digits were entered.


    Code:
    #include <stdio.h>
      
    int main(void)
    {
        int n, d1,d2,d3,d4,d5,d6,d7,d8,d9,d10;
        int a, b, c, d, e, f, g, h, i;
         printf("Enter a number:");
         scanf("%d", &n);
        
        d1 = n%10;
        
        a = n%100;
        d2 = a/10;
        
        b = n%1000;
        d3 = b/100;
        
        c = n%10000;
        d4 = c/1000;
        
        d = n%100000;
        d5 = d/10000;
        
        e = n%1000000;
        d6 = e/100000;
        
        f = n%10000000;
        d7 = f/1000000;
        
        g = n%100000000;
        d8 = g/10000000;
        
        h = n%1000000000;
        d9 = h/100000000;
        
    d10 = n/1000000000;   /* was i/1000000000 — i is never assigned */
        
        printf("%d\n%d\n%d\n%d\n%d\n%d\n%d\n%d\n%d\n%d\n",d1,d2,d3,d4,d5,d6,d7,d8,d9,d10); 
    }
    Any solution ideas would be appreciated.
    Last edited by Josh Genser; 03-01-2013 at 05:43 PM.

  2. #2
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
The error message is self-explanatory. There is no guarantee a long integer type can hold a ten-digit value. In fact, an unsigned long type is only guaranteed to be able to represent a maximum value of 4294967295 (which means it can represent some, but not all, ten-digit values).

A long long type is guaranteed to be able to hold ten-digit values - although the limits are still finite. The thing to remember is that literal values (like 1000000) have type int by default - the compiler doesn't magically decide to make a literal value have type long long unless you tell it to, even if the value is large. If you want a literal value to be of type long long, add the LL suffix, e.g. 10000000000LL. Also, when printing a long long type, a different format specifier than %d is needed (%lld).


You would also be better off using an array and a loop, rather than trying to hardcode multiple variables named d1, d2, d3, .... d10. That way, all you need to do is divide by ten repeatedly in the loop, instead of dividing by hard-coded powers of 10.

  3. #3
    Registered User
    Join Date
    Mar 2011
    Posts
    546
You also need a different format specifier on the scanf to read a long long ("%lld", or "%llu" for unsigned).

  4. #4
    Algorithm Dissector iMalc's Avatar
    Join Date
    Dec 2005
    Location
    New Zealand
    Posts
    6,318
    You aren't showing any code that reproduces the problem. The code there should work fine for 32-bit ints. It only tries to divide by one billion, not ten billion. As it is using signed ints, this will produce a maximum of 2 for the highest digit.

You should be using unsigned types and, yes, as mentioned, just divide by 10 after extracting each digit. You don't need anywhere near that many variables.

  5. #5
    Registered User
    Join Date
    Jan 2013
    Posts
    4
I'm trying to use %u for an unsigned int - is that the correct format specifier? Also, what would be the correct one for long long unsigned int? I rewrote the code into a loop. 9 digits works perfectly, but as soon as I enter 10 I get complete garbage output.

  6. #6
    Registered User
    Join Date
    May 2012
    Posts
    1,066
    Quote Originally Posted by Josh Genser View Post
    9 digits works perfect but as soon as I enter 10 i get complete garbage output.
    Your unsigned ints are probably 32 bits, thus the biggest possible value is 4,294,967,295. Bigger values will overflow and will look like "garbage".
    See also this table.
    Quote Originally Posted by Josh Genser View Post
    I'm trying to use %u for a unsigned int, is that the correct format specifier?
    Yes
    Quote Originally Posted by Josh Genser View Post
    Also what would be the correct one for long long unsigned int?
    "%llu"

    Bye, Andreas

