Thread: Concept of Quantity

  1. #166
    Registered User MacNilly's Avatar
    Join Date
    Oct 2005
    Location
    CA, USA
    Posts
    466
    Quote Originally Posted by C_ntua View Post
    Yes, but you are adding every time a smaller amount. So in the end you are adding something very close to 0. Which in the case of a truly infinite amount of 9s that will imply that you are actually adding 0 thus you will never get to 1. In other words the amount you are adding can get closer and closer to 0 and in the limiting case it will equal to 0.
    Very close, yes, but never equal. When dealing with limits, the difference (epsilon) approaches but never equals 0. If it does, the limit is undefined.

    There is no "limiting case," you simply add increasingly smaller amounts forever.

    This was an interesting thread, but I have to say there is quite a bit of confusion involving limits and their definition and rationale, especially about the "epsilon," or what some have been calling the "infinitesimal." E > 0 always. NOT 0.

    Some would be better served reading a calculus textbook (limits and infinite series) than attempting to disprove the calculus. I guess the problem is one of viewpoint: whether a (finite) limit produces a single, unique real number, and whether that number, for some strange reason, cannot be used like any other real number.

    In other words, whether or not 0.999... = 1 is a debate about the validity of the definition of a (finite) limit.

    Overall, I have to say that reading this thread has not convinced me that the definition of a limit is flawed.
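
    To make the epsilon point concrete, here is a minimal numerical sketch (plain C; the epsilons below are arbitrary choices, and this only illustrates the epsilon-N idea, nothing more): for any epsilon > 0 you name, there is an N past which every partial sum 0.9...9 lies within epsilon of 1, yet epsilon itself is never 0.
    Code:
    #include <stdio.h>

    /* For each (arbitrarily chosen) epsilon > 0, find the first N such that
     * 1 - 0.99...9 (N nines) < epsilon.  No partial sum ever equals 1, but
     * every positive epsilon is eventually beaten.  (Double precision limits
     * how small an epsilon this toy example can handle.) */
    int main(void)
    {
        const double epsilons[] = { 1e-3, 1e-6, 1e-9, 1e-12 };
        for (int k = 0; k < 4; k++) {
            double eps = epsilons[k];
            double sum = 0.0, term = 0.9;
            int n = 0;
            while (1.0 - sum >= eps) {
                sum += term;   /* 0.9, 0.99, 0.999, ... */
                term /= 10.0;
                n++;
            }
            printf("epsilon = %g: N = %d, 1 - sum = %.1e\n", eps, n, 1.0 - sum);
        }
        return 0;
    }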
    Last edited by MacNilly; 03-08-2011 at 02:43 PM.

  2. #167
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    Quote Originally Posted by MacNilly View Post
    Very close, yes, but never equal. When dealing with limits, the difference (epsilon) approaches but never equals 0. If it does, the limit is undefined.

    There is no "limiting case," you simply add increasingly smaller amounts forever.
    That's not true. The limiting case is the value that is continually approached.

    If epsilon continually approaches zero, then the limiting case (for epsilon) is zero.

    Quote Originally Posted by MacNilly View Post
    Some would be better served reading a calculus textbook (limits and infinite series) than attempting to disprove the calculus.
    While I agree with your comment about people attempting to disprove the theory (of limits and calculus), you might want to read such a textbook more closely yourself.

    The notion of a limiting case - whether or not that limiting case can ever be reached - underpins calculus.
    Right 98% of the time, and don't care about the other 3%.

    If I seem grumpy or unhelpful in reply to you, or tell you you need to demonstrate more effort before you can expect help, it is likely you deserve it. Suck it up, Buttercup, and read this, this, and this before posting again.

  3. #168
    Registered User
    Join Date
    Oct 2008
    Posts
    1,262
    Quote Originally Posted by grumpy View Post
    That's not true. The limiting case is the value that is continually approached.

    If epsilon continually approaches zero, then the limiting case (for epsilon) is zero.


    While I agree with your comment about people attempting to disprove the theory (of limits and calculus), you might want to read such a textbook more closely yourself.

    The notion of a limiting case - whether or not that limiting case can ever be reached - underpins calculus.
    Actually, grumpy, I think he was right. He says there is no limit if epsilon ever reaches 0, which is correct. If epsilon grows closer and closer to 0, then there is a limit.

  4. #169
    Password:
    Join Date
    Dec 2009
    Location
    NC
    Posts
    587
    After spending my 3rd period class, today, reading about hyperreals, I think I've got a unifying statement.

    Can we all agree that 1 and .999... are real numbers(and, by inclusion, hyperreal numbers)? And, that 1 - sum(i=0, n).9(10^-i) = 1/10^n?

    Quote Originally Posted by Mario F. View Post
    That's the nature of all "proofs" we have seen here, and likewise we will ever see; they are demonstrations of the requirement, not rigorous, undeniable and unquestionable proofs. "0.999... = 1" is accepted for its consistency with any remaining R axioms.
    How is nested interval theorem "not rigorous?"

    Both .999... and 1 lie within the intersection of the infinitely nested intervals [.9,1]⊃[.99,1]⊃[.999,1]⊃... therefore, by nested interval theorem, they are equal.

    This isn't my proposed unifying statement, just a question.
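
    If it helps, here is a tiny numerical sketch of the shrinking widths (plain C; the endpoints are just the finite truncations, so this only illustrates why the intersection can hold at most one point, it is not the theorem itself):
    Code:
    #include <stdio.h>

    /* The nested intervals [.9,1], [.99,1], [.999,1], ... each contain
     * the next one, and each contains both 1 and every longer truncation
     * of .999...  Their widths 1/10^n shrink toward 0, which is why the
     * nested interval theorem leaves exactly one real number in the
     * intersection. */
    int main(void)
    {
        double lower = 0.0, term = 0.9;
        for (int n = 1; n <= 10; n++) {
            lower += term;            /* 0.9, 0.99, 0.999, ... */
            term /= 10.0;
            printf("n = %2d: [%.*f, 1], width = %.1e\n", n, n, lower, 1.0 - lower);
        }
        return 0;
    }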
    Last edited by User Name:; 03-08-2011 at 04:53 PM.

  5. #170
    Registered User MacNilly's Avatar
    Join Date
    Oct 2005
    Location
    CA, USA
    Posts
    466
    Quote Originally Posted by grumpy View Post
    That's not true. The limiting case is the value that is continually approached.

    If epsilon continually approaches zero, then the limiting case (for epsilon) is zero.


    While I agree with your comment about people attempting to disprove the theory (of limits and calculus), you might want to read such a textbook more closely yourself.

    The notion of a limiting case - whether or not that limiting case can ever be reached - underpins calculus.
    Yes; in that case the "limiting case" is simply the value of the limit itself, L. When I posted that, I wasn't so sure the term was being used in that sense, or in the sense that the lower bound (exclusive) on epsilon is always 0 (that never changes). On the other hand, I feel that the term "limiting case" is rather vague. In my calc I and II classes we never used the term "limiting case," so I wasn't sure what was meant.

    I believe it's the word "case"... it seems to imply that at some point (i.e., a case) we can't decrease epsilon further and we've suddenly reached the limit, which is not how it works.
    Last edited by MacNilly; 03-08-2011 at 05:26 PM.

  6. #171
    (?<!re)tired Mario F.'s Avatar
    Join Date
    May 2006
    Location
    Ireland
    Posts
    8,446
    Quote Originally Posted by MacNilly View Post
    Overall, I have to say that reading this thread has not convinced me that the definition of a limit is flawed.
    Oh, but no one even came close to meaning that. Merely that limits are not enough to establish the identity of a quantity such as a real number with non-terminating decimals.

    Also, I don't remember anyone (on any side of the debate) ever attempting to say that an infinitesimal equals 0, although that's the logical counterpart one must accept if one wants to establish that 0.999... equals 1.

    It's also slightly interesting that you use the term epsilon. Did you know that Cauchy himself called it "error"? Epsilon and Delta = "error" and "distance". Quite the revealing names, when one thinks of using limits as a means to prove the identity of 0.999... Give it a thought.

    Quote Originally Posted by User Name: View Post
    After spending my 3rd period class, today, reading about hyperreals, I think I've got a unifying statement.

    Can we all agree that 1 and .999... are real numbers(and, by inclusion, hyperreal numbers). And, that 1 - sum(i=0, n).9(10^-i) = 1/10^n?
    I'm actually surprised you propose that. I'm not so sure we should be talking about hyperreals, though. I'd rather have preferred approaching it through Internal Set Theory, on account of that option allowing us to remain in R.

    But nonetheless; Yes, I fully agree with that.

    Quote Originally Posted by User Name: View Post
    How is nested interval theorem "not rigorous?"

    Both .999... and 1 lie within the intersection of the infinitely nested intervals [.9,1]⊃[.99,1]⊃[.999,1]⊃... therefore, by nested interval theorem, they are equal.

    This isn't my proposed unifying statement, just a question.
    Pretty rigorous, of course. As limits are, for that matter. They aren't however tools that can be used as means of proof to establish the identity of a real with non-terminating digits. See above answer on this post.

    To be clear, I feel I may have been careless in my wording at times and may have given the impression that I'm somehow taking a jab at established rules and conventions, when I'm actually not (well, I am in a way, certainly, but definitely not attacking calculus as a tool). When I mention the word "rigorous" I'm not attacking limits. I'm merely looking at them in the context of trying to use them to establish the identity of an... actually unknown quantity such as 0.999...

    For that purpose they aren't rigorous at all.
    Last edited by Mario F.; 03-08-2011 at 10:29 PM.
    Originally Posted by brewbuck:
    Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.

  7. #172
    Unregistered User Yarin's Avatar
    Join Date
    Jul 2007
    Posts
    2,158
    Quote Originally Posted by ಠ_ಠ View Post
    please reread that statement and explain to me why you think it's not completely retarded.
    Indeed it is retarded. That would make it 0.(3)4, which not only is impossible, but would sum up to 1.(0)2.

    Until about halfway back in this thread, I was under the impression that 1/infinity = something bigger than 0, specifically 0.(0)1, and likewise that 0.(9) < 1.
    The problem in my thinking stemmed from the mistake of trying to tack numbers onto the end of the infinite string - which was really crazy to even try, considering the very point of infinity is that there's no end onto which anything can be tacked in the first place! If I'm not mistaken, this is what Mario was initially trying to make me aware of.

    A point that I don't think anyone has brought up in this thread yet is the base 3 example. In such a system, 1/3 would actually be represented as 1/10, which would equal 0.1. Consider also that in ternary, 1/2 would actually end up being represented as 0.111... If a society that used ternary told us that one couldn't truly accurately represent one-half, we'd think they were crazy! Likewise, they'd think we're crazy for saying one-third can't be.
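
    To make that concrete, here is a small sketch (plain C, just long division carried digit by digit; the fractions and bases are the ones from the paragraph above):
    Code:
    #include <stdio.h>

    /* Print the first `count` fractional digits of numerator/denominator
     * in the given base, by long division (multiply the remainder by the
     * base at each step).  Only bases up to 10 print correctly here. */
    static void print_digits(int numerator, int denominator, int base, int count)
    {
        int remainder = numerator % denominator;
        printf("%d/%d in base %d: 0.", numerator, denominator, base);
        for (int i = 0; i < count; i++) {
            remainder *= base;
            printf("%d", remainder / denominator);
            remainder %= denominator;
        }
        printf("...\n");
    }

    int main(void)
    {
        print_digits(1, 2, 3, 12);   /* 0.111111111111... in ternary */
        print_digits(1, 3, 10, 12);  /* 0.333333333333... in decimal */
        return 0;
    }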

    I'm glad I started this thread, even with a stupid idea.
    Last edited by Yarin; 03-08-2011 at 10:32 PM.

  8. #173
    Password:
    Join Date
    Dec 2009
    Location
    NC
    Posts
    587
    Quote Originally Posted by Mario F. View Post
    I'm actually surprised you propose that. I'm not so sure we should be talking about hyperreals, though. I'd rather have preferred approaching it through Internal Set Theory, on account of that option allowing us to remain in R.

    But nonetheless; Yes, I fully agree with that.
    Remind me, who was it who first mentioned *R? I'm just playing along.

    Since in *R, it is valid to say 1 - .999... = 1/10^∞, we will start from there.
    1 - .999... = 1/10^∞
    st(1 - .999...) = st(1/10^∞) // st is order preserving, therefore the equality remains valid
    st(1) - st(.999...) = 0 // st(a + b) = st(a) + st(b)
    st(1) = st(.999...)
    st(1) = 1 = st(.999...) = .999... // st(x) = x iff x is a real number
    1 = .999... // transitive property

    So, in short, .999... = 1 in R, but not in *R.

    Quote Originally Posted by Mario F. View Post
    Pretty rigorous, of course. As limits are, for that matter. They aren't however tools that can be used as means of proof to establish the identity of a real with non-terminating digits. See above answer on this post.
    Last I checked, real numbers didn't have to have terminating decimal expansions. So, following this logic, neither sqrt(2) nor 1/7 is real, correct? Nor can we say that lim(x->.142857142857142857...) x = 1/7?

    Quote Originally Posted by Mario F. View Post
    To be clear, I feel I may have been careless in my wording at times and may have given the impression that I'm somehow taking a jab at established rules and conventions, when I'm actually not (well, I am in a way, certainly, but definitely not attacking calculus as a tool). When I mention the word "rigorous" I'm not attacking limits. I'm merely looking at them in the context of trying to use them to establish the identity of an... actually unknown quantity such as 0.999...
    Okay... one more time. This is the very statement of the theorem you have, so far, neglected to read: "The intersection of an infinite sequence of nested, nonempty, closed intervals whose lengths shrink to zero contains exactly one real number." Therefore, if two decimal expansions can be shown to lie within the same infinitely nested intervals, they must be expansions of the same number. Different representations of the same number are no paradox. It's as simple as (n)/(2n) = 1/2; it's no paradox that there are infinitely many ways to write 1/2. You make it seem paradoxical by injecting ideas of infinitesimals that don't exist in the real numbers.

    Quote Originally Posted by Mario F. View Post
    For that purpose they aren't rigorous at all.
    What is the purpose of any rigorous math if you can pick and choose which = means = and which means ~? = is = whether it seems intuitive to you or not.
    Last edited by User Name:; 03-08-2011 at 11:10 PM.

  9. #174
    Registered User
    Join Date
    Jun 2005
    Posts
    6,815
    Quote Originally Posted by MacNilly View Post
    I believe it's the word "case"... it seems to imply that at some point (i.e., a case) we can't decrease epsilon further and we've suddenly reached the limit, which is not how it works.
    There are other meanings of the word "case" than that. One meaning of case is "circumstance".

    A more complete expansion of "limiting case" would be "theoretical limiting circumstance", reflecting a circumstance in which some limiting value may be continually approached but is ultimately unreachable. However, mathematicians are trained to be lazy, so they will not use more words (or longer words) than necessary. That sometimes introduces ambiguity unless you know exactly what is intended.
    Right 98% of the time, and don't care about the other 3%.

    If I seem grumpy or unhelpful in reply to you, or tell you you need to demonstrate more effort before you can expect help, it is likely you deserve it. Suck it up, Buttercup, and read this, this, and this before posting again.

  10. #175
    Registered User
    Join Date
    Oct 2008
    Posts
    1,262
    Quote Originally Posted by User Name: View Post
    After spending my 3rd period class, today, reading about hyperreals, I think I've got a unifying statement.

    Can we all agree that 1 and .999... are real numbers(and, by inclusion, hyperreal numbers)? And, that 1 - sum(i=0, n).9(10^-i) = 1/10^n?
    I disagree with this. For any integer n this is true, indeed (actually, there's a small error in there: it should be "i=1, n"). For infinity this no longer holds.
    That's because "sum(i=1, n) 9(10^-i)" is in itself a limit when n is infinity. It's an implicit limit, but a limit nonetheless. And you can't say that:
    Code:
    1 - sum(i=1, n) 9(10^-i) = 1/10^n
    when n is infinity here, as the implicit limit disappears on the right-hand side of the equation. Also, "1/10^n" is never actually 0, so that would actually disprove 1 = 0.999... (edit: well, not actually disprove, but it would assume 0 = infinitesimal, which Mario wrongfully accused me of doing).

    So it should rather be:

    Code:
    1 - sum(i=1, n) 9(10^-i) = lim(x->n) 1/10^x
    Here, the right hand side isn't "a very small number", but it actually is zero, proving that 1 = 0.9999...
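
    For reference, spelling out the identity behind both equations above (nothing beyond the standard geometric sum; the LaTeX below is just a transcription):

    \[
    \sum_{i=1}^{n} 9\cdot 10^{-i} \;=\; \underbrace{0.99\ldots9}_{n\ \text{nines}} \;=\; 1 - 10^{-n},
    \qquad\text{so}\qquad
    1 - \sum_{i=1}^{n} 9\cdot 10^{-i} \;=\; 10^{-n} \;\xrightarrow[n\to\infty]{}\; 0 .
    \]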
    Last edited by EVOEx; 03-09-2011 at 05:39 AM.

  11. #176
    Password:
    Join Date
    Dec 2009
    Location
    NC
    Posts
    587
    Quote Originally Posted by EVOEx View Post
    I disagree with this. For any integer n this is true, indeed (actually, there's a small error in there: it should be "i=1, n"). For infinity this no longer holds.
    That's because "sum(i=1, n) 9(10^-i)" is in itself a limit when n is infinity. It's an implicit limit, but a limit nonetheless. And you can't say that:
    Code:
    1 - sum(i=1, n) 9(10^-i) = 1/10^n
    when n is infinity here, as the implicit limit disappears on the right-hand side of the equation. Also, "1/10^n" is never actually 0, so that would actually disprove 1 = 0.999... (edit: well, not actually disprove, but it would assume 0 = infinitesimal, which Mario wrongfully accused me of doing).

    So it should rather be:

    Code:
    1 - sum(i=1, n) 9(10^-i) = lim(x->n) 1/10^x
    Here, the right hand side isn't "a very small number", but it actually is zero, proving that 1 = 0.9999...
    I was using the hyperreals, in which the pattern that carries for all real n will also carry for the infinite n.

    In the reals, you are right. But the hyperreals are a discrete space, and thus the limit is wrong.
    Last edited by User Name:; 03-09-2011 at 04:48 PM.

  12. #177
    Registered User MacNilly's Avatar
    Join Date
    Oct 2005
    Location
    CA, USA
    Posts
    466
    Quote Originally Posted by Mario F
    It's also slightly interesting that you use the term epsilon. Did you know that Cauchy himself called it "error"? Epsilon and Delta = "error" and "distance". Quite the revealing names, when one thinks of using limits as a means to prove the identity of 0.999... Give it a thought.

    To be clear, I feel I may have been careless in my wording at times and may have given the impression that I'm somehow taking a jab at established rules and conventions, when I'm actually not (well, I am in a way, certainly, but definitely not attacking calculus as a tool). When I mention the word "rigorous" I'm not attacking limits. I'm merely looking at them in the context of trying to use them to establish the identity of an... actually unknown quantity such as 0.999...

    For that purpose they aren't rigorous at all.
    Yes, I see your point. Because if there is an error margin, then 0.999... can't be exactly equal to 1. It's an interesting theory, and does raise many questions. In school they brainwash you, and usually nobody is smart enough to give a counterargument to the professors. They just accept what they are told.

    However, if the error (epsilon) is _arbitrarily small_, and if the function under consideration (0.999... in this case) has a least upper bound, we can achieve a value _arbitrarily close_ to that upper bound. Thus, the function is effectively equal to that upper bound (limit). That is, there is no number one can name between 0.999... and 1 (no matter how many 9's you choose, I choose one more 9).

    If that isn't proof enough I don't know what could be.

    One instance of establishing an unknown quantity via limits is the number pi. Take a circle of fixed radius and inscribe in it a regular polygon composed of N triangles. Limits show that as N -> inf, the area of the polygon approaches the area of the circle, but never increases beyond the area of the circle. A few simple calculations give the area and circumference of a circle, from which pi can be established as a constant factor.
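
    Here is a rough numerical sketch of that construction (plain C, Archimedes-style side doubling starting from an inscribed square, so pi itself is never assumed; the 12-step cutoff and the unit radius are arbitrary choices):
    Code:
    #include <stdio.h>
    #include <math.h>

    /* Inscribe a regular N-gon (N = 4, 8, 16, ...) in a unit circle.
     * Splitting it into N triangles gives area = (N/2) * side * apothem,
     * and doubling N uses only square roots, so pi is never assumed.
     * The areas increase toward pi, the area of the unit circle. */
    int main(void)
    {
        double side = sqrt(2.0);            /* side of the inscribed square */
        int N = 4;
        for (int step = 0; step < 12; step++) {
            double apothem = sqrt(1.0 - (side / 2.0) * (side / 2.0));
            double area = 0.5 * N * side * apothem;
            printf("N = %6d: area = %.12f\n", N, area);
            side = sqrt(2.0 - sqrt(4.0 - side * side));   /* side of the 2N-gon */
            N *= 2;
        }
        return 0;
    }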

    Now if you think pi as derived from a limit proof is valid to use in "regular algebra," I cannot see why you think limits can't prove that 0.999... = 1.

  13. #178
    Password:
    Join Date
    Dec 2009
    Location
    NC
    Posts
    587
    Quote Originally Posted by MacNilly View Post
    Yes, I see your point. Because if there is an error margin, then 0.999... can't be exactly equal to 1. It's an interesting theory, and does raise many questions. In school they brainwash you, and usually nobody is smart enough to give a counterargument to the professors. They just accept what they are told.

    However, if the error (epsilon) is _arbitrarily small_, and if the function under consideration (0.999... in this case) has a least upper bound, we can achieve a value _arbitrarily close_ to that upper bound. Thus, the function is effectively equal to that upper bound (limit). That is, there is no number one can name between 0.999... and 1 (no matter how many 9's you choose, I choose one more 9).
    Or, you can state it as "There are infinitely many numbers between any 2 distinct numbers [a well-known property; formally, the reals are 'dense']. There are no numbers between .999... and 1, therefore .999... = 1." I'm betting on it working as well for you as it did for me ~1000 pages ago.

    There is a problem I see with the lim and sup proofs. Although correct, the result requires the assumption that 1 = .999... For example, if you try proof by contradiction, assuming .999... < 1, then sup(.9, .99, .999, ...) = .999... < 1. I use sup in this case for the ease of notation, and since, in this context, they have the same underlying concept.

  14. #179
    Registered User
    Join Date
    Oct 2008
    Posts
    1,262
    Quote Originally Posted by User Name: View Post
    There is a problem I see with the lim and sup proofs. Although correct, the result requires the assumption that 1 = .999... For example, if you try proof by contradiction, assuming .999... < 1, then sup(.9, .99, .999, ...) = .999... < 1. I use sup in this case for the ease of notation, and since, in this context, they have the same underlying concept.
    Just when I thought we agreed :P. My proof used limits and I never assumed that "1 = .999...". It only uses the definition of "0.999..." and the limit of "10^-x" as x goes to infinity. If you really think that, can you point out where in my proof I assumed any such thing (just out of curiosity)?

  15. #180
    S Sang-drax's Avatar
    Join Date
    May 2002
    Location
    Göteborg, Sweden
    Posts
    2,072
    I think many here complicate things too much. A correct proof is:

    0.999... = sum(k=1...oo)9 * 10^(-k) = 1.

    The first equality holds by definition, and the second by applying the geometric sum formula to the partial sums.
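
    Spelling that out with the partial sums (the display below is just the geometric sum computation referred to above):

    \[
    \sum_{k=1}^{n} 9\cdot 10^{-k}
    \;=\; 9\cdot\frac{10^{-1}\bigl(1-10^{-n}\bigr)}{1-10^{-1}}
    \;=\; 1 - 10^{-n}
    \;\xrightarrow[n\to\infty]{}\; 1,
    \qquad\text{hence}\qquad
    0.999\ldots \;=\; \sum_{k=1}^{\infty} 9\cdot 10^{-k} \;=\; 1 .
    \]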
    Last edited by Sang-drax : Tomorrow at 02:21 AM. Reason: Time travelling
