- A BYTE is a unit of digital information comprising a number of BITS. In general terms, 8 BITS make a BYTE. An 8-bit number, for instance 10111011, can be used to represent the decimal numbers 0 to 255, a total of 256 distinct values.
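To see the arithmetic, a short Python snippet (purely illustrative, not tied to any particular system) converts that bit pattern to decimal and shows where the 256 comes from:

    # Convert the 8-bit pattern 10111011 to its decimal value.
    value = int("10111011", 2)
    print(value)    # 187
    # An 8-bit byte can hold 2 to the power of 8 distinct values.
    print(2 ** 8)   # 256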
The fact that such an 8-bit number was used in early computer systems, particularly home computers, to represent alphabetical characters and symbols has meant that the 8-bit byte has become a de facto standard. This is despite the fact that its original meaning was simply a number of BITs, the actual number depending on the system in use. Early computers used 4-bit or 7-bit BYTEs as the smallest chunks of information their processors would address; modern computers process data in 16, 32, 64-bit or greater chunks.
The name BYTE was coined by an IBM scientist, Werner Buchholz. The idea of grouping BITs together into bite-sized chunks that the processor could “bite” into led to the term BYTE, an intentional misspelling of “bite” that could not be accidentally shortened to “BIT” by leaving the “e” off.
Despite the changed spelling, the term still causes much confusion. BITs are usually represented by a small “b” and BYTEs by a large “B” (although this conflicts with the International System of Units, where “B” denotes the bel, as in decibel). This can lead to misunderstandings, especially where DATA RATES are concerned. Perhaps he should have called his unit a dollop or a slug. An “octet” is an alternative term specifically denoting an 8-bit byte, but it is rarely used outside engineering.
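The practical rule is simply to divide by eight when going from bits to bytes. A rough sketch, assuming a nominal 100 megabit-per-second connection, shows how far apart the two readings are:

    # Mixing up "b" and "B": a 100 Mb/s link moves data at only 12.5 MB/s.
    link_megabits_per_second = 100
    link_megabytes_per_second = link_megabits_per_second / 8
    print(link_megabytes_per_second)   # 12.5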
Although these days processors rarely use only 8 bits in their architecture, the 8-bit byte, derived from its use in alphabetical character encoding, is still the standard unit of data storage. If you open a text editor or a word processor on your computer, create a new document and save the following as a text file:
“WE WERE SOMEWHERE AROUND BARSTOW ON THE EDGE OF THE DESERT WHEN THE DRUGS BEGAN TO TAKE HOLD… ”
(Be careful to cut and paste it exactly, including the quotation marks.)
… you should find that the file size is exactly 100 bytes!
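If you want to check without hunting through a file-properties dialog, a minimal Python sketch will report the size in bytes (the file name barstow.txt is just a placeholder for whatever you saved). Note that curly quotes, the ellipsis character or a trailing newline added by the editor can nudge the total by a byte or two, since plain ASCII characters occupy exactly one byte each but those special characters may take more:

    # Report the size of the saved text file in bytes.
    import os
    print(os.path.getsize("barstow.txt"))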