Is there an alternative to bits as the smallest unit of data? Something that wouldn't be just 0 or 1, but could actually hold many possible states in between? Wouldn't it be more natural to store floats like that?
12 Answers
Of course it is possible, both theoretically and practically.
Theoretically, there are two classes of alternatives: digital number systems with a base other than 2 (in fact, the decimal system as we know it is one such system); and non-digital number systems. Mathematically speaking, we're talking about discrete vs. continuous domains.
In practice, both options have been explored. Some of the early digital computers (e.g. ENIAC) employed decimal encodings rather than the now ubiquitous binary encoding; other bases, e.g. ternary, should be just as feasible (or infeasible). The esoteric programming language Malbolge is based on a theoretical ternary computer; while mostly satirical, there is no technical reason why it shouldn't work. Continuous-domain storage and processing was historically done on analog computers, where you could encode quantities as frequencies and / or amplitudes of oscillating signals, and you would then perform computations by applying all sorts of modulations to these signals. Today, quantum computing makes the theory behind continuous storage cells interesting again.
Either way, the bit still stands as the theoretical smallest unit of information: any alternative encodes at least as much information as a single yes/no answer, and nobody has yet come up with a smaller theoretical unit (and I don't expect that to happen anytime soon).
You're basically describing an analog signal. Analog signals are used in sensors, but rarely for internal computation. The problems: noise degrades the signal quality; you need a very precisely calibrated reference point, which is difficult to communicate; and transmission is hard because the signal loses strength the farther it travels.
If you're interested in exploring analog computing, most undergrad "intro to electronics" classes have you build things like op-amp integrators. They're easy enough to build even without formal instruction.
You can also store multiple digital states on the same node. For example, instead of 0-2.5 volts being a zero and 2.5-5.0 volts being a one, you can add a third state in between. It adds a lot of complexity, though, and significantly increases your susceptibility to noise.
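That three-level idea can be sketched in code. The thresholds below are hypothetical, just dividing a 0-5 V swing into three equal bands:

```python
# Hypothetical 3-level decoder on a 0-5 V line:
# thresholds at 5/3 V and 10/3 V split the swing into three equal bands.
def decode_3level(voltage):
    """Map a measured voltage to one of three symbol values."""
    if voltage < 5.0 / 3:
        return 0
    elif voltage < 10.0 / 3:
        return 1
    else:
        return 2

# Each symbol now carries log2(3) ~ 1.58 bits instead of 1,
# but the distance between adjacent levels shrinks from 2.5 V to ~1.67 V,
# so the same amount of noise is more likely to flip a symbol.
print(decode_3level(0.4))   # 0
print(decode_3level(2.5))   # 1
print(decode_3level(4.9))   # 2
```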
A matter of accuracy
One reason we use bits is that it helps us store and retrieve information accurately.
The real world is analog, so all the information computers transmit or store is ultimately analog. For example: a voltage of a specific level on a wire, a magnetic field of a specific strength on a disk, or a pit of a specific depth on an optical disc.
The question is: how accurately can you measure that analog information? Imagine that the voltage on a wire could be interpreted as a decimal digit, as follows:
- 1 to 10 volts: 0
- 10 to 20 volts: 1
- 20 to 30 volts: 2
Etc. This system would let us pass a lot of data in a few pulses of current, right? But there's a problem: we have to be very sure what the voltage is. If temperature or magnets or cosmic rays or whatever cause some fluctuation, we may read the wrong number. And the more finely we intend to measure, the greater that risk is. Imagine if a 1-millivolt difference was significant!
Instead, we typically use a digital interpretation. Everything over some threshold is true, and everything under is false. So we can ask questions like "Is there any current at all?" instead of "Exactly how much current is there?"
Each individual bit can be measured with confidence, because we only have to be "in the right ballpark". And by using lots of bits, we can still get a lot of information.
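The trade-off above can be simulated. This sketch (with made-up voltage ranges and Gaussian noise, purely for illustration) quantizes a noisy reading and compares a 2-level code against a 10-level one:

```python
import random

def decode(voltage, levels, v_max=30.0):
    """Quantize a voltage into one of `levels` equal-width bins."""
    step = v_max / levels
    return min(int(voltage / step), levels - 1)

def error_rate(levels, noise_volts, trials=10_000, v_max=30.0):
    """Fraction of symbols misread when Gaussian noise is added."""
    step = v_max / levels
    errors = 0
    for _ in range(trials):
        digit = random.randrange(levels)
        ideal = (digit + 0.5) * step            # transmit the mid-bin voltage
        noisy = ideal + random.gauss(0, noise_volts)
        if decode(noisy, levels, v_max) != digit:
            errors += 1
    return errors / trials

random.seed(0)
# The same +/- 2 V of noise barely touches a 2-level code,
# but garbles a large share of 10-level symbols.
print(error_rate(2, 2.0), error_rate(10, 2.0))
```

Being "in the right ballpark" is exactly what the wide bins of the binary code buy you here.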
There are also ternary computers, not just binary ones: http://en.wikipedia.org/wiki/Ternary_computer
A ternary computer (also called trinary computer) is a computer that uses ternary logic (three possible values) instead of the more common binary logic (two possible values) in its calculations...
It might well be more natural to us, but there are specific reasons why binary was chosen for digital circuitry and, by extension, for programming languages. With two states you only need to distinguish between two voltage levels, say 0 V and 5 V. For each increase in the radix (base) you'd have to subdivide that range further, producing levels that are hard to tell apart. You could widen the voltage range, but that has the nasty habit of melting circuitry.
If you're willing to change the hardware away from digital circuitry, your options are more varied. Decimal digits were used in mechanical computers, since gears tolerate heat far better and their positions are much more distinct than electric charges. Quantum computers, as noted elsewhere, have their own ways of dealing with this. Optical computers might also be able to do things we've not dealt with before, and magnetic computers are a possibility as well.
I think you could nowadays build components that hold any number of states, or even work with analog data directly. But building a whole system, with all the logical components running as a full-featured, programmable architecture, would be a lot of work and a financial risk for any company to undertake.
I think ENIAC was the last architecture to use ten-position ring counters to store digits, though I could be wrong about that, and I'm not sure how much this influenced the other parts of the machine.
Storage can be thought of as transmission into the future, so all the transmission problems of continuous (analogue) media apply.
Storing those states could be trivial (a three-way switch, or some sort of grid), and physically storing them is an issue that many answers cover, much better than I could.
My primary concern is how this stored state would be encoded, and there's a good chance the whole task is a fool's errand: bits are sufficient to represent practical continuous data to whatever accuracy you need; just keep adding more bits.
Truly continuous data is impossible to store this way, but an expression that computes it, e.g. 1/3, can be stored.
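Both halves of that point are easy to illustrate: 1/3 has no finite binary expansion, but each extra bit halves the worst-case error, and storing the rule (numerator and denominator) is exact. A quick sketch in Python:

```python
from fractions import Fraction

# 1/3 has no finite binary expansion, but every extra bit halves the error.
exact = Fraction(1, 3)
for bits in (4, 8, 16, 32):
    approx = round(exact * 2**bits) / Fraction(2**bits)
    err = abs(float(exact - approx))
    print(f"{bits:2d} bits: error ~ {err:.2e}")

# Or store the *rule* instead of the digits:
stored = Fraction(1, 3)   # exact, because we keep numerator and denominator
```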
A clue and an inkling are smaller pieces of information than a bit. Several clues are usually required to establish the definite value of a bit. Inklings are worse: no matter how many you add up, you still can't know the value of the resulting bit for certain.
More seriously, there are multi-valued logics where the fundamental unit can have one of n states, where n > 2. You could consider these units to carry less information than a bit in the sense of the preceding paragraph, but from an information theory point of view you'd have to say they carry more. For example, you'd need two bits to represent the same amount of information that a single value in a four-valued logic could carry.
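That equivalence is easy to demonstrate: two bits map one-to-one onto a single four-valued digit, so each such digit carries exactly log2(4) = 2 bits.

```python
import math

# A single four-valued "digit" carries log2(4) = 2 bits of information,
# so pairs of bits map one-to-one onto quaternary values.
def bits_to_quat(b1, b0):
    return 2 * b1 + b0

def quat_to_bits(q):
    return divmod(q, 2)   # (high bit, low bit)

# Round-tripping every bit pair confirms the bijection.
assert all(quat_to_bits(bits_to_quat(a, b)) == (a, b)
           for a in (0, 1) for b in (0, 1))
print(math.log2(4))   # 2.0 bits per four-valued symbol
```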
The optimal numerical base is e, but since the simplest way to represent a number in digital electronics is with two states (high voltage = 1, low voltage = 0), the binary representation was chosen.
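For context, the "optimal base is e" claim comes from radix economy: the cost of representing numbers up to N in base b is roughly (digits needed) x (states per digit), which is proportional to b / ln b and minimized at b = e. A quick check:

```python
import math

# Radix economy: cost ~ log_b(N) * b, proportional to b / ln(b),
# which is minimized at b = e ~ 2.718.
def relative_cost(base):
    return base / math.log(base)

for b in (2, 3, 4, 10):
    print(f"base {b:2d}: cost factor {relative_cost(b):.3f}")
# Base 3 is slightly cheaper than base 2 by this measure (~2.731 vs ~2.885),
# but two-state circuits are far easier to build reliably.
```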
There is a smaller possible unit of data. I don't know an official name for it; let's call it an un.
Bit is a clever portmanteau of "binary digit", meaning a digit with two possible states. So there must be a kind of digit with only a single possible state.
Let's see what that means. It means you would have only zeros to work with.
How would you count? In any base-x system, you increase the digit's value until you run out of digits, then add another digit to form a longer number. With only one digit, you run out immediately, so:
Zero = 0
One = 00
Two = 000
et cetera
This is definitely more natural: more is more! It maps perfectly onto any discrete number of things. How many potatoes? 00000. That is four potatoes. Wait a minute... that is off by one. If you don't like that, you could redefine the value of 0 as one. Then it is really natural: no zeros is none, one zero is one, two zeros is two, et cetera.
This is impractical for a solid-state machine, though. Digits would have to be physically added and removed, and it doesn't scale well.
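For what it's worth, this unary scheme is easy to sketch in code (using the redefined "no zeros is none" convention from above):

```python
# Unary ("base 1") sketch: a number n is just n marks; arithmetic is trivial.
def to_unary(n):
    return "0" * n          # n zeros represent the number n

def from_unary(s):
    return len(s)

def add(a, b):
    return a + b            # addition is just string concatenation

assert from_unary(add(to_unary(4), to_unary(3))) == 7
print(to_unary(4))          # 0000
```

The scaling problem is visible immediately: the representation grows linearly with the value, whereas any base of 2 or more grows only logarithmically.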
If you define "natural" as close to how mother nature works, the most natural information encoding is DNA-like combinations of adenine, cytosine, guanine, and thymine.
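As an aside, nothing stops us from borrowing that four-letter alphabet for ordinary data: each nucleotide carries two bits. A toy codec follows (the A/C/G/T-to-value assignment is arbitrary, not any biological standard):

```python
# Toy codec: bytes <-> base-4 "DNA" strings, two bits per nucleotide.
# The A=0, C=1, G=2, T=3 assignment is arbitrary, not a biological standard.
NUC = "ACGT"
IDX = {c: i for i, c in enumerate(NUC)}

def encode(data: bytes) -> str:
    """Split each byte into four 2-bit chunks, most significant first."""
    return "".join(NUC[(byte >> shift) & 0b11]
                   for byte in data
                   for shift in (6, 4, 2, 0))

def decode(seq: str) -> bytes:
    """Reassemble groups of four nucleotides back into bytes."""
    out = []
    for i in range(0, len(seq), 4):
        byte = 0
        for c in seq[i:i + 4]:
            byte = (byte << 2) | IDX[c]
        out.append(byte)
    return bytes(out)

msg = b"bit"
assert decode(encode(msg)) == msg
print(encode(msg))
```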