Foolish.
NaA (not an adjective).
"Everything I say is a lie."
"This statement is wrong."
I feel that this falls into the same category of self-referential paradoxes. If "barish" were barish, it'd be fooish. If it were fooish, it wouldn't be fooish.
Maybe. This used to be true for CS students at my university (that's why I knew about it). Now, the corresponding lecture is no longer mandatory. It seems to me that the focus has shifted from logic towards algorithms (which is good for me, because these two are the only topics in CS that I appreciate). BTW, my fellow students in theoretical philosophy tended to be much better at Boolean logic than most CS students.
Greets,
Philip
For those who don't see it right away, here's the full proof:
Quote:
If "barish" were barish, it'd be fooish. If it were fooish, it wouldn't be fooish.
Suppose that "barish" is barish. Then it denotes a property that it has itself, hence it is fooish. Contradiction.
Suppose that "barish" is fooish. Then by definition of "fooish", it must be barish. Contradiction.
Hence "barish" is neither barish nor fooish.
Or more correctly, the Grelling-Nelson antinomy. Back in 1908, they hadn't heard of "foo" and "bar" yet, so instead they used autological and heterological respectively.
Quote:
Grelling–Nelson paradox
In 1884, Gottlob Frege published "The Foundations of Arithmetic", in which he tried to deduce mathematics from logic. Frege was way ahead of his time: around the same time, Leopold Kronecker said "God made the integers, all the rest is the work of man". In a letter to Frege, Bertrand Russell pointed out a fundamental flaw in Frege's system, which later became known as "Russell's antinomy". This letter is fun to read, but unfortunately I only have access to the original version, which is in German. Anyway, the Grelling-Nelson antinomy is a reformulation of Russell's antinomy that is a bit easier to understand (the original version isn't too hard either).

In 1931, Kurt Gödel used the same technique in his proof of the First Incompleteness Theorem, which states that "any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete". Read this as "math is either inconsistent or incomplete or both, and so is any of its reformulations". I highly appreciate the proof and consider it to be the most important discovery of the 20th century, so I'll summarize it below:
1) in a given theory T, assign a unique number to every sentence, e.g. sentence 1, sentence 2, sentence 3, ...
2) construct a sentence which claims "the sentence with number X is not provable".
3) choose X to be the number of that very sentence, which yields a claim similar to "This sentence is not provable".
This claim is either true or false. If it is true, then the theory is incomplete (it contains a true claim that lacks a proof). If it is false, then there exists a proof, hence the theory is inconsistent (it has a proof for something that says that there is no proof).
In today's mathematics, there are a lot of claims for which neither a proof nor a disproof can exist within the standard axioms. The first famous one (shown independent by Paul Cohen in 1963, three decades after Gödel's theorem) is the continuum hypothesis, which claims that "there is no set whose cardinality is strictly between that of the integers and that of the real numbers". Being independent means that you cannot prove that such a set exists, but you also cannot prove that it doesn't.
Another funny issue that suffers from the same problem: one can prove that it's possible to split a solid ball in three-dimensional space into finitely many pieces (five suffice) such that, out of these, you can reassemble two balls which both have the same volume as the original ball (the Banach-Tarski paradox). But one can also prove that it's impossible to exhibit a concrete decomposition that works: the pieces are non-measurable, so no explicit construction can be given.
I'm planning to write an article about the whole story to improve my English skills. Is this stuff at all appealing to non-mathematicians?
Greets,
Philip
The natural numbers are usually ordered in the following way:
1 < 2 < 3 < 4 < ...
There is exactly one element (the first) which has no predecessor. Note that we are free to invent our own order for the natural numbers, e.g. 2 < 1 < 3 < 4 < ...
Can you find an order for the natural numbers such that there are two elements that have no predecessor?
Greets,
Philip
Does the order have to be total and strict? If it has to be, then there cannot be more than one. Assume there were e1 and e2 that are both "smallest". By the rules of total, strict ordering, either e1 < e2 or e2 < e1 must be true, otherwise e1 = e2.
Yes, the order has to be total and strict. Note that an element without a specific predecessor is not necessarily a smallest element (although a smallest element certainly doesn't have a predecessor).
Hint: can you find an order such that there is no smallest element?
Greets,
Philip
There's ... < 3 < 2 < 1, where every element has a predecessor, but exactly one has no successor. But I'm not sure if that could really be called a distinct ordering. Predecessor and successor are just a matter of perspective.
An ordering in which there really is no smallest element would be ... < 5 < 3 < 1 < 2 < 4 < ..., i.e. mirror the odd (or even) numbers onto the negative side.
Code:
bool isodd(unsigned i) { return i % 2 != 0; }
int oddneg(unsigned i) { return isodd(i) ? -(int)i : (int)i; }
/* C++ won't let you overload operator< for built-in types, so use a named function */
bool precedes(unsigned l, unsigned r) { return oddneg(l) < oddneg(r); }
As there are infinitely many natural numbers, the two orderings ... < 2 < 1 and 1 < 2 < ... are distinct because they have different properties (there's a clear difference between successor and predecessor). If the natural numbers were finite, these two orderings would be the same (apart from the labeling of the symbols, which is irrelevant).
That's exactly what I had in mind. From here, it's only a small step to an order where two elements don't have a predecessor.
Quote:
An ordering in which there really is no smallest element would be ... < 5 < 3 < 1 < 2 < 4 < ..., i.e. mirror the odd (or even) numbers onto the negative side.
Hint: an element may have an ancestor, even if it doesn't have a predecessor.
Greets,
Philip
So: 1 < 3 < 5 < ... < 2 < 4 < 6 < ...
i.e. order all odd numbers before all even numbers, or vice versa. Given the definition of predecessor as "a is the predecessor of b if a < b and there's no c such that a < c < b", 2 has no predecessor, and neither does 1, which is the smallest element.
The second hint did it.
>I'm planning to write an article about the whole story to improve my English skills. Is this stuff somehow appealing to non-mathematicians?
I'd read it. I was actually planning on taking a class this summer that looks at the foundations of logic, including things like Gödel's theorem. I'll be away on an internship instead, though, so I won't get to take the course.
Since Snafuist posts all of the riddles, he doesn't get the fun of trying to solve them. Here's one you guys may enjoy; the solution is more computer-sciency than you might think ;)
Every year in a village of dwarves, an evil ogre comes to play a deadly game. The dwarves get lined up by height and are not permitted to look in any direction but forward (this means each dwarf can only see the dwarves ahead of him/her). Each dwarf is then given a hat, which he cannot see. The hat is either black or white. This means each dwarf can see all of the hats in front of him, but not his own or any behind him.
Starting from the tallest dwarf, they go in order down the line calling out either "black" or "white". If a dwarf correctly identifies the colour of their hat, they live. If they get the colour wrong, they die.
The dwarves spend all year planning a strategy to save as many dwarves as possible.
Problem: design a strategy to save the most dwarves. You can assume that all dwarves are cooperative since they want to save as many of themselves as possible.
Example:
======
Each dwarf calls out the colour of the final dwarf's hat. When it gets to the final dwarf, he knows the colour of his hat (because he's heard it n-1 times), and he just says his own hat colour. This is guaranteed to save at least one dwarf.
You should easily be able to find a solution that saves half of the dwarves. The optimal solution will always save at least n-1 dwarves.