I’m not sure when it happened, but it looks like the C99 standard is now publicly available (for free download) in PDF form. Make sure to get the version that incorporates corrections TC1 and TC2. From the standard:

6.2.6.1 General – Representation of types

paragraph 3:

“Values stored in unsigned bit-fields and objects of type unsigned char shall be represented by a pure binary notation”.

A footnote tells us that a “pure binary notation” is:

“A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral powers of 2, except perhaps the bit with the highest position. … A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to (2^CHAR_BIT) – 1.”

What the… *??*

Let’s go through that footnote bit by bit, no pun intended:

*A positional representation for integers that uses the binary digits 0 and 1*… ok, that’s fairly straightforward.

*… in which the values represented by successive bits are additive*… I guess that means that to get the value as a whole, you add up the values represented by the individual bits, just like in any binary number. Although this is a fairly convoluted way of saying it.

*… begin with 1…* Does this mean that the first bit position has a value of 1? Or that the value for any bit position is 1 before it is multiplied by some power of 2? The latter is mathematically redundant, so I guess it must be the former. Ok.

*… and are multiplied by successive integral powers of 2…* Yes, ok, each bit is worth its face value (0 or 1) multiplied by “successive powers of two”. The *begin with 1* beforehand means that 1 is the first power of 2 (it is 2^0); the next would be 2 (2^1), then 4, 8, 16 and so on. Again, there must be a better way to say this.

*… except perhaps the bit with the highest position.* WTF!!?? So the highest bit position can be worth something completely different. This might make sense for the representation of signed values in 2’s complement, but this was specifically referencing an *unsigned* type. Also:

*A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to (2^CHAR_BIT) – 1.*

Do the math. If there are CHAR_BIT bits, the highest bit position is (CHAR_BIT – 1) if we number them starting from 0. Each bit except for that one is worth 2^position, so the lower bits together can contribute at most 2^(CHAR_BIT – 1) – 1. Since the full range must reach 2^CHAR_BIT – 1, the highest bit must be worth exactly 2^(CHAR_BIT – 1), which is a power of 2 like every other bit. Why then specifically exclude it from this requirement?