1. there's no reason to assume that something that quantizes voltage levels must necessarily be considered digital. that sort of thing can be done with a purely analog device.

2. To me, quantized voltage levels are the definition of digital. How do you define digital?

3. the common, de-facto definition of digital is a device that uses binary electrical states to internally represent its numeric system. parts of the dictionary definition seem to favor your point of view, although in common usage in at least the last decade or so, digital is almost always used to refer to a computer in the way we normally think of a computer.

4. Originally Posted by Elkvis
the common, de-facto definition of digital is a device that uses binary electrical states to internally represent its numeric system. parts of the dictionary definition seem to favor your point of view, although in common usage in at least the last decade or so, digital is almost always used to refer to a computer in the way we normally think of a computer.
I don't know how you normally think of a computer, but when I think of it I think of quantized voltages. It's not like there's a "digital life force" that imparts "digitism" to things. It's all in your head (as most things are).

5. Originally Posted by brewbuck
It's not like there's a "digital life force" that imparts "digitism" to things. It's all in your head (as most things are).
Midi-chlorians man, midi-chlorians!

6. Originally Posted by King Mir
You can use a device that's part analog, but to read a value as an integer requires breaking it down into discrete voltage levels, a process that would make the device digital, at least in that part. So what you'd have is a digital device that uses modulo N instead of 1's and 0's for its most basic unit of information.
That is called an "analogue to digital converter" (ADC). The reverse is a DAC.

In general, a digital system is just one that uses a discrete (discontinuous) set of values to represent information. An analogue system is one that uses continuous functions to represent information.

Analogue computers existed a long time before digital ones. Look up "Antikythera mechanism".

Add one switch to any analogue system, and it has a digital capability.

7. Originally Posted by Elkvis
the common, de-facto definition of digital is a device that uses binary electrical states to internally represent its numeric system. parts of the dictionary definition seem to favor your point of view, although in common usage in at least the last decade or so, digital is almost always used to refer to a computer in the way we normally think of a computer.
I would just call what you call a digital device a binary computer, and a device that allows more states than that an n-ary computer. Ternary computers have existed. Higher n-ary devices have not, but are possible.

I would never call a ternary or n-ary computer an analog device (except in the sense that all computers are ultimately analog devices at the physical level) because, although they differ from binary devices, they are also quite different from analog circuits.

8. Originally Posted by grumpy
That is called an "analogue to digital converter" (ADC). The reverse is a DAC.

In general, a digital system is just one that uses a discrete (discontinuous) set of values to represent information. An analogue system is one that uses continuous functions to represent information.

Analogue computers existed a long time before digital ones. Look up "Antikythera mechanism".

Add one switch to any analogue system, and it has a digital capability.
If a device uses ADCs and DACs as part of its operation, it's not really an analog device. It's a device with analog components. I agree with your definition of digital, but C's (and C++'s) type system is necessarily digital. Being able to assert 1+1=2 requires a digital device. If a computer can perform such arithmetic repeatedly, without errors, such a system has to be called digital. It would need a rounding mechanism that makes all values near to the voltage (or other quantity) representing 1 to be equal to 1, like a digital repeater.

9. C++ doesn't strictly have only integers; it also has floating-point types (float/double) that can represent a number even if you lose some precision. Floating-point numbers are a way of approximating continuous values. If you ran C++ on an analog computer you would just get better precision, and the memory allocated for each variable wouldn't be measured in bytes.

C++ has trouble asserting 1.345f + 1.123f == 2.468f because of the way floating-point values are defined in C++. The converse problem doesn't exist: you can assert 1+1=2 in an analog computer (three constants, each held on a voltage line at 1 V, 1 V, and 2 V respectively). So, more or less, if you were to run C++ on a computer with analog memory, the only things C++ would need to do are leave the representation of non-integer types flexible (not necessarily floating point) and generalize the concept of size (not in bytes but in an arbitrary unit type).

10. Originally Posted by King Mir
If a device uses ADCs and DACs as part of its operation, it's not really an analog device. It's a device with analog components.
By that type of logic, it would not be a digital device either.

It would be composed of both analogue and digital devices, with ADC or DAC being part of the interface between devices.

Originally Posted by King Mir
Being able to assert 1+1=2 requires a digital device.
No it doesn't. As N_ntua said, the assertion could be tested in an analogue computer using three voltage lines.

Originally Posted by King Mir
If a computer can perform such arithmetic repeatedly, without errors, such a system has to be called digital.
That has nothing to do with it being digital.

First, it is possible to have repeatability with combinations of continuous functions (i.e. with an analogue system). In fact, it is technically easier to achieve repeatability with a combination of continuous functions than with discontinuous ones.

Second, technically a transistor (the heart of modern digital devices) is a three-terminal analogue device: a voltage applied across one pair of terminals changes the current flowing between another pair. It is just convention that a voltage (or current) below some threshold is deemed "off", and one above that threshold is deemed "on". That is the physical underpinning of representing bits within a digital machine. And, technically, the behaviour is statistical: the threshold is chosen so that "off" is well below it and "on" is well above it, so the odds of an error are small.

Third, because of the physical nature of modern semiconductor devices (including all those transistors in your CPU), there is actually a non-zero probability that any operation attempting to turn a bit on or off in a digital device will not give the expected result. Modern devices are engineered so the probabilities are very low, but they are non-zero. And some physical phenomena (alpha or beta particles, neutron upset events, electric fields, magnetic fields, ...) can raise them.

Originally Posted by King Mir
It would need a rounding mechanism that makes all values near to the voltage (or other quantity) representing 1 to be equal to 1, like a digital repeater.
Not true. All that is needed is a defined tolerance of uncertainty. Say a value of 1 is represented by some voltage in the range [1-delta, 1+delta] volts, with some variation depending on the voltage line used to represent it. Adding 1 + 1 would then be expected to give a voltage in the range [2-2*delta, 2+2*delta], and the assertion 1+1 = 2 tests true if the resultant voltage falls in that range.