1. Originally Posted by C_ntua Yes, but each time you are adding a smaller amount, so in the end you are adding something very close to 0. In the case of a truly infinite number of 9s, that implies you are actually adding 0, so you never get to 1. In other words, the amount you are adding gets closer and closer to 0, and in the limiting case it equals 0.
Very close, yes, but never equal. When dealing with limits, the difference (epsilon) approaches but never equals 0. If it does, the limit is undefined.

There is no "limiting case," you simply add increasingly smaller amounts forever.

This was an interesting thread, but I have to say there is quite a bit of confusion about limits, their definition, and their rationale, especially about the "epsilon," or what some have been calling the "infinitesimal." E > 0 always. NOT 0.
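For reference, this is the standard epsilon-N definition of the limit of a sequence that the thread keeps circling:

Code:
```latex
\lim_{n \to \infty} a_n = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall n \ge N : \; |a_n - L| < \varepsilon
```
Note that epsilon ranges over strictly positive reals only; the definition never asks what happens "at" epsilon = 0, and L itself need not appear anywhere in the sequence.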

Some would be better served reading a calculus textbook (limits and infinite series) than attempting to disprove the calculus. I guess the problem is one of viewpoint: whether or not a (finite) limit produces a single, unique real number, and whether that number can be used like any other real number or not, for some strange reason.

In other words, it's a debate about the validity of the definition of a (finite) limit, i.e., whether or not 0.999... = 1.

Overall, I have to say that reading this thread has not convinced me that the definition of a limit is flawed. 2. Originally Posted by MacNilly Very close, yes, but never equal. When dealing with limits, the difference (epsilon) approaches but never equals 0. If it does, the limit is undefined.

There is no "limiting case," you simply add increasingly smaller amounts forever.
That's not true. The limiting case is the value that is continually approached.

If epsilon continually approaches zero, then the limiting case (for epsilon) is zero. Originally Posted by MacNilly Some would be better served reading a calculus textbook (limits and infinite series) than attempting to disprove the calculus.
While I agree with your comment about people attempting to disprove the theory (of limits and calculus), you might want to read such a textbook more closely yourself.

The notion of a limiting case - whether or not that limiting case can ever be reached - underpins calculus. 3. Originally Posted by grumpy That's not true. The limiting case is the value that is continually approached.

If epsilon continually approaches zero, then the limiting case (for epsilon) is zero.

While I agree with your comment about people attempting to disprove the theory (of limits and calculus), you might want to read such a textbook more closely yourself.

The notion of a limiting case - whether or not that limiting case can ever be reached - underpins calculus.
Actually, grumpy, I think he was right. He says there is no limit IF epsilon ever reaches 0, which is correct. If epsilon grows closer and closer to 0, then there is a limit. 4. After spending my 3rd-period class today reading about hyperreals, I think I've got a unifying statement.

Can we all agree that 1 and .999... are real numbers (and, by inclusion, hyperreal numbers)? And that 1 - sum(i=0, n).9(10^-i) = 1/10^n? Originally Posted by Mario F. That's the nature of all "proofs" we have seen here, and of all we will ever see; they are demonstrations of the requirement, not rigorous, undeniable and unquestionable proofs. "0.999... = 1" is accepted for its consistency with the remaining R axioms.
How is the nested interval theorem "not rigorous?"

Both .999... and 1 lie within the intersection of the infinitely nested intervals [.9,1]⊃[.99,1]⊃[.999,1]⊃...; therefore, by the nested interval theorem, they are equal.
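As an aside, the widths of those nested intervals can be checked exactly with Python's `Fraction` type. This is a quick sketch (the helper `truncation` is my own naming), not part of the theorem itself:

Code:
```python
from fractions import Fraction

def truncation(n):
    """Partial sum 0.9 + 0.09 + ... with n nines, as an exact rational."""
    return sum(Fraction(9, 10**i) for i in range(1, n + 1))

# Each nested interval [truncation(n), 1] has width exactly 10^-n,
# so the widths drop below any positive epsilon as n grows.
for n in range(1, 6):
    print(n, 1 - truncation(n))   # widths: 1/10, 1/100, 1/1000, ...
```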

This isn't my proposed unifying statement, just a question. 5. Originally Posted by grumpy That's not true. The limiting case is the value that is continually approached.

If epsilon continually approaches zero, then the limiting case (for epsilon) is zero.

While I agree with your comment about people attempting to disprove the theory (of limits and calculus), you might want to read such a textbook more closely yourself.

The notion of a limiting case - whether or not that limiting case can ever be reached - underpins calculus.
Yes, in that case the "limiting case" is simply the value of the limit itself, L. When I posted that, I wasn't sure whether the term was being used in that sense, or in the sense that the lower bound (exclusive) on epsilon is always 0 (which never changes). Either way, I find the term "limiting case" rather vague. In my Calc I and II classes we never used the term, so I wasn't sure what was meant.

I believe it's the word "case"... it seems to imply that at some point (i.e., a case), we can't decrease epsilon further and we've suddenly reached the limit, which is not how it works. 6. Originally Posted by MacNilly Overall, I have to say that reading this thread has not convinced me that the definition of a limit is flawed.
Oh, but no one came close to claiming that. Merely that limits are not enough to establish the identity of a quantity such as a real number with a non-terminating decimal expansion.

Also, I don't remember anyone (on any side of the debate) ever claiming that an infinitesimal equals 0, although that's the logical counterpart one must accept if one wants to establish that 0.999... equals 1.

It's also slightly interesting that you use the term epsilon. Did you know that Cauchy himself called it "error"? Epsilon and Delta = "error" and "distance". Quite the revealing names, when one thinks of using limits as a means to prove the identity of 0.999... Give it a thought. Originally Posted by User Name: After spending my 3rd period class, today, reading about hyperreals, I think I've got a unifying statement.

Can we all agree that 1 and .999... are real numbers (and, by inclusion, hyperreal numbers)? And that 1 - sum(i=0, n).9(10^-i) = 1/10^n?
I'm actually surprised you propose that. I'm not so sure we should be talking about hyperreals, though. I'd rather prefer approaching it through Internal Set Theory, since that option allows us to remain in R.

But nonetheless; Yes, I fully agree with that. Originally Posted by User Name: How is nested interval theorem "not rigorous?"

Both .999... and 1 lie within the intersection of the infinitely nested intervals [.9,1]⊃[.99,1]⊃[.999,1]⊃...; therefore, by the nested interval theorem, they are equal.

This isn't my proposed unifying statement, just a question.
Pretty rigorous, of course. As limits are, for that matter. They aren't, however, tools that can be used as a means of proof to establish the identity of a real with non-terminating digits. See the answer above on this post.

To be clear, I feel I may have been careless in my wording sometimes and may have given the impression I'm somehow taking a jab at established rules and conventions, when I'm actually not (well, I am in a way, certainly. But definitely not attacking calculus as a tool). When I mention the word "rigorous" I'm not attacking limits. I'm merely looking at it in the context of trying to use them to establish the identity of an... actually unknown quantity such as 0.999...

For that purpose they aren't rigorous at all. 7. Originally Posted by ಠ_ಠ please reread that statement and explain to me why you think it's not completely retarded.
Indeed it is retarded. That would make it 0.(3)4, which not only is impossible, but would sum up to 1.(0)2.

Until about halfway through this thread, I was under the impression that 1/infinity = something bigger than 0, specifically 0.(0)1, and likewise that 0.(9) < 1.
The problem in my thinking stemmed from trying to tack numbers onto the end of the infinite string - which was crazy to even attempt, considering the very point of infinity is that there's no end to tack anything onto in the first place! If I'm not mistaken, this is what Mario was initially trying to make me aware of.

A point that I don't think anyone has brought up in this thread yet is the base-3 example. In such a system, 1/3 would be written as 1/10, which would equal 0.1. Consider also that in ternary, 1/2 ends up being represented as 0.111... Imagine a society that used ternary telling us that one can't accurately represent one-half; we'd think they were crazy! Likewise, they'd think we're crazy for saying one-third can't be represented.
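To sanity-check the ternary claims, here is a small sketch (the helper `base3_digits` is my own naming, not anything from the thread) that expands a fraction digit by digit:

Code:
```python
from fractions import Fraction

def base3_digits(x, ndigits):
    """First ndigits of the base-3 expansion of a fraction 0 <= x < 1."""
    digits = []
    for _ in range(ndigits):
        x *= 3
        d = int(x)        # integer part is the next ternary digit
        digits.append(d)
        x -= d
    return digits

print(base3_digits(Fraction(1, 3), 6))  # [1, 0, 0, 0, 0, 0] -> 0.1 in base 3
print(base3_digits(Fraction(1, 2), 6))  # [1, 1, 1, 1, 1, 1] -> 0.111... in base 3
```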

I'm glad I started this thread, even with a stupid idea. 8. Originally Posted by Mario F. I'm actually surprised you propose that. I'm not so sure we should be talking about hyperreals, though. I'd rather prefer approaching it through Internal Set Theory, since that option allows us to remain in R.

But nonetheless; Yes, I fully agree with that.
Remind me, who was it who first mentioned *R? I'm just playing along.

Since in *R, it is valid to say 1 - .999... = 1/10^∞, we will start from there.
1 - .999... = 1/10^∞
st(1 - .999...) = st(1/10^∞) // st is order preserving, therefore the equality remains valid
st(1) - st(.999...) = 0 // st(a + b) = st(a) + st(b), and the standard part of an infinitesimal is 0
st(1) = st(.999...)
st(1) = 1 = st(.999...) = .999... // st(x) = x iff x is a real number
1 = .999... // transitive property

So, in short, .999... = 1 in R, but not in *R. Originally Posted by Mario F. Pretty rigorous, of course. As limits are, for that matter. They aren't however tools that can be used as means of proof to establish the identity of a real with non-terminating digits. See above answer on this post.
Last I checked, real numbers didn't have to have terminating decimal expansions. So, following this logic, neither sqrt(2) nor 1/7 is real, correct? Nor can we say that lim(x->.142857142857142857...) x = 1/7? Originally Posted by Mario F. To be clear, I feel I may have been careless in my wording sometimes and may have given the impression I'm somehow taking a jab at established rules and conventions, when I'm actually not (well, I am in a way, certainly. But definitely not attacking calculus as a tool). When I mention the word "rigorous" I'm not attacking limits. I'm merely looking at it in the context of trying to use them to establish the identity of an... actually unknown quantity such as 0.999...
Okay... one more time. This is the very definition of the theorem you have, so far, neglected to read: "Within each infinitely nested nonempty interval there is exactly one real number." Therefore, if two decimal expansions can be shown to exist within the same infinitely nested intervals, they must be expansions of the same number. Different representations of the same number are no paradox. It's as simple as (n)/(2n) = 1/2; it's no paradox that there are infinitely many ways to write 1/2. You make it seem paradoxical by injecting ideas of infinitesimals that don't exist in the real numbers. Originally Posted by Mario F. For that purpose they aren't rigorous at all.
What is the purpose of any rigorous math if you can pick and choose which instances of = mean = and which mean ~? = is = whether it seems intuitive to you or not. 9. Originally Posted by MacNilly I believe it's the word "case"... it seems to imply that at some point (i.e., a case), we can't decrease epsilon further and we've suddenly reached the limit, which is not how it works.
There are other meanings of the word "case" than that. One meaning of case is "circumstance".

A more complete expansion of "limiting case" would be "theoretical limiting circumstance": a circumstance in which a limiting value may be continually approached but never reached. However, mathematicians are trained to be lazy, so they will not use more words (or longer words) than necessary. That sometimes introduces ambiguity unless you know exactly what is intended. 10. Originally Posted by User Name: After spending my 3rd-period class today reading about hyperreals, I think I've got a unifying statement.

Can we all agree that 1 and .999... are real numbers (and, by inclusion, hyperreal numbers)? And that 1 - sum(i=0, n).9(10^-i) = 1/10^n?
I disagree with this. For any integer n this is true, indeed (actually, there's a small error in there: it should be "i=1, n"). For infinity this no longer holds.
That's because "sum(i=1, n) 9(10^-i)" is in itself a limit when n is infinity. It's an implicit limit, but a limit nonetheless. And you can't say that:
Code:
`1 - sum(i=1, n) 9(10^-i) = 1/10^n`
for n = infinity here, as the implicit limit disappears on the right-hand side of the equation. Also, "1/10^n" is never actually 0, so that would actually disprove 1 = 0.999... (edit: well, not actually disprove, but it would assume 0 = infinitesimal, which Mario wrongfully accused me of doing).

So it should rather be:

Code:
`1 - sum(i=1, n) 9(10^-i) = lim(x->n) 1/10^x`
Here, the right-hand side isn't "a very small number"; it actually is zero, proving that 1 = 0.9999... 11. Originally Posted by EVOEx I disagree with this. For any integer n this is true, indeed (actually, there's a small error in there: it should be "i=1, n"). For infinity this no longer holds.
That's because "sum(i=1, n) 9(10^-i)" is in itself a limit for n is infinity. It's an implicit limit, but a limit nonetheless. And you can't say that:
Code:
`1 - sum(i=1, n) 9(10^-i) = 1/10^n`
for n = infinity here, as the implicit limit disappears on the right-hand side of the equation. Also, "1/10^n" is never actually 0, so that would actually disprove 1 = 0.999... (edit: well, not actually disprove, but it would assume 0 = infinitesimal, which Mario wrongfully accused me of doing).

So it should rather be:

Code:
`1 - sum(i=1, n) 9(10^-i) = lim(x->n) 1/10^x`
Here, the right-hand side isn't "a very small number"; it actually is zero, proving that 1 = 0.9999...
I was using the hyperreals, in which a pattern that holds for all real n will also hold for infinite n.

In the reals, you are right. But the hyperreals are a discrete space, and thus the limit is wrong. 12. Originally Posted by Mario F
It's also slightly interesting that you use the term epsilon. Did you know that Cauchy himself called it "error"? Epsilon and Delta = "error" and "distance". Quite the revealing names, when one thinks of using limits as a means to prove the identity of 0.999... Give it a thought.

To be clear, I feel I may have been careless in my wording sometimes and may have given the feeling I'm somehow taking a jab at establish rules and conventions, when I'm actually not (well, I am in a way certainly. But definitely not attacking calculus as a tool). When I mention the word "rigorous" I'm not attacking limits. I'm merely looking at it in the context of trying to use them to establish the identity of an... actually unknown quantity such as 0.999...

For that purpose they aren't rigorous at all.
Yes, I see your point. Because if there is an error margin, then 0.999... can't be exactly equal to 1. It's an interesting theory, and it does raise many questions. In school they brainwash you, and usually nobody is smart enough to give a counterargument to the professors. They just accept what they are told.

However, if the error (epsilon) is _arbitrarily small_, and if the sequence under consideration (the partial sums of 0.999..., in this case) has a least upper bound, we can achieve a value _arbitrarily close_ to that upper bound. Thus, the sequence is effectively equal to that upper bound (the limit). That is, there is no number one can name between 0.999... and 1 (no matter how many 9's you choose, I choose one more 9).
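The "I choose one more 9" move can be made mechanical: for any candidate x below 1, some finite truncation of 0.999... already exceeds it, so no number fits strictly between 0.999... and 1. A sketch (the helper `beat` is my own naming):

Code:
```python
from fractions import Fraction

def beat(x):
    """Least n for which the n-nine truncation of 0.999... exceeds x (requires x < 1)."""
    n, t = 1, Fraction(9, 10)
    while t <= x:
        n += 1
        t += Fraction(9, 10**n)   # append one more 9
    return n

print(beat(Fraction(999, 1000)))  # 4: 0.9999 > 0.999
print(beat(Fraction(1, 2)))       # 1: 0.9 > 0.5
```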

If that isn't proof enough I don't know what could be.

One instance of establishing an unknown quantity via limits is the number pi. Take a circle of fixed radius and inscribe in it a regular polygon composed of N triangles. Limits show that as N -> inf, the area of the polygon approaches the area of the circle, but never increases beyond the area of the circle. A few simple calculations give the area and circumference of a circle, from which pi can be established as a constant factor.
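That polygon construction is essentially Archimedes' method, and it can be sketched using only square roots, so there's no hidden appeal to pi. For a unit circle, the side of an inscribed regular n-gon obeys the doubling rule s_2n = sqrt(2 - sqrt(4 - s_n^2)), and the semi-perimeter n*s_n/2 increases toward pi:

Code:
```python
import math

# Start from the inscribed hexagon (n = 6, side length 1) and
# repeatedly double the number of sides.
s, n = 1.0, 6
for _ in range(10):
    s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))   # s_{2n} from s_n
    n *= 2

semi_perimeter = n * s / 2   # increases toward pi, never exceeding it
print(n, semi_perimeter)     # 6144-gon; agrees with math.pi to about 6 decimals
```
More doublings hit floating-point cancellation in `2 - sqrt(4 - s*s)`, which is why the sketch stops at 10; exact arithmetic would avoid that.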

Now if you think pi as derived from a limit proof is valid to use in "regular algebra," I cannot see why you think limits can't prove that 0.999... = 1. 13. Originally Posted by MacNilly Yes, I see your point. Because if there is an error margin, then 0.999... can't be exactly equal to 1. It's an interesting theory, and it does raise many questions. In school they brainwash you, and usually nobody is smart enough to give a counterargument to the professors. They just accept what they are told.

However, if the error (epsilon) is _arbitrarily small_, and if the sequence under consideration (the partial sums of 0.999..., in this case) has a least upper bound, we can achieve a value _arbitrarily close_ to that upper bound. Thus, the sequence is effectively equal to that upper bound (the limit). That is, there is no number one can name between 0.999... and 1 (no matter how many 9's you choose, I choose one more 9).
Or, you can state it as: "There are infinitely many numbers between any 2 numbers [a well-known property formally referred to as 'dense']. There are no numbers between .999... and 1, therefore .999... = 1." I'm betting on it working as well for you as it did for me ~1000 pages ago.

There is a problem I see with the lim and sup proofs. Although correct, the result requires the assumption that 1 = .999... For example, if you try proof by contradiction, assuming .999... < 1, then sup(.9, .99, .999, ...) = .999... < 1. I use sup in this case for ease of notation, and since, in this context, they share the same underlying concept. 14. Originally Posted by User Name: There is a problem I see with the lim and sup proofs. Although correct, the result requires the assumption that 1 = .999... For example, if you try proof by contradiction, assuming .999... < 1, then sup(.9, .99, .999, ...) = .999... < 1. I use sup in this case for ease of notation, and since, in this context, they share the same underlying concept.
Just when I thought we agreed :P. My proof used limits and I never assumed that "1 = .999...". It only uses the definition of "0.999..." and the limit of "10^-x" as x goes to infinity. If you really think that, can you point out where in my proof I assumed any such thing (just out of curiosity)? 15. I think many here complicate things too much. A correct proof is:

0.999... = sum(k=1...oo)9 * 10^(-k) = 1.

The first equality holds by definition, and the second by applying the geometric sum formula to the partial sums.
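Spelled out, the geometric sum step is the closed form for the finite partial sum, followed by the limit:

Code:
```latex
\sum_{k=1}^{n} 9 \cdot 10^{-k}
= 9 \cdot \frac{10^{-1}\left(1 - 10^{-n}\right)}{1 - 10^{-1}}
= 1 - 10^{-n}
\;\xrightarrow[\;n \to \infty\;]{}\; 1
```
Note that the limit on the right is a plain limit of the real sequence 1 - 10^-n; no infinitesimals are involved.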