I’m not sure when it happened, but it looks like the C99 standard is now publicly available (for free download) in PDF form. Make sure to get the version which incorporates corrections TC1 and TC2. From the standard, §6.2.6.1 (Representation of types – General), paragraph 3:

“Values stored in unsigned bit-fields and objects of type unsigned char shall be represented by a pure binary notation”.

A footnote tells us that a “pure binary notation” is:

“A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral powers of 2, except perhaps the bit with the highest position. … A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to (2^CHAR_BIT) – 1.”

What the… ??

Let’s go through that footnote bit by bit, no pun intended:

A positional representation for integers that uses the binary digits 0 and 1 … ok, that’s fairly straightforward

… in which the values represented by successive bits are additive … I guess that means that to get the value as a whole, you add up the values represented by the individual bits, just like you do in any binary number. Although, this is a fairly roundabout way of saying it.

… begin with 1… Does this mean that the first bit position has a value of 1? Or that the value for any bit position is one before it is multiplied by some power of 2? The latter is mathematically redundant so I guess it must be the former. Ok.

… and are multiplied by successive integral powers of 2 … Yes, ok, each bit is worth its face value (0 or 1) multiplied by “successive powers of two”; the begin with 1 beforehand means that 1 is the first power of 2 (it is 2^0), the next would be 2 (2^1), then 4, 8, 16 and so on. Again, there must be a better way to say this.

… except perhaps the bit with the highest position. WTF!!?? So the highest bit position can be worth something completely different. This might make sense for representation of signed values in 2’s complement, but this was specifically referencing an unsigned type. Also:

A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to (2^CHAR_BIT) – 1.

Do the math. If there are CHAR_BIT bits, the highest bit position is (CHAR_BIT – 1) if we number them starting from 0. Each bit except for that one is worth 2^position, so the lower bits together can contribute at most 2^(CHAR_BIT – 1) – 1. For the overall range to reach 2^CHAR_BIT – 1, the highest bit must be worth exactly 2^(CHAR_BIT – 1) – which is precisely what the “pure binary notation” rule would have required anyway. Why then specifically exclude it from the requirement?


Mac OS X and networking

I have a shiny new 15″ MacBook Pro, my first Mac. It’s a great laptop and I only have one major complaint, hardware-wise: only a single button for the trackpad?!! Oh well, I’m going to be plugging in a proper mouse soon anyway. The real issues I’ve been having are with the OS software.

I have a small network connected via a combined ADSL modem/4-port router. I run the router in a mode called “half-bridge”, which essentially means that the router forwards all traffic coming up the ADSL link onto the LAN directly, without modification other than de-encapsulating the packets from the PPP link. My NAT server (running Linux) therefore listens for incoming traffic on the WAN address as well as its own LAN address. The router must also pick up packets destined for the WAN and forward them on down the ADSL (PPP) link. Naturally, the NAT machine is the only machine that sends packets destined for the WAN: all the other machines on the LAN use the NAT box as a gateway, which allows NATting, traffic regulation and so on.

If you already know what the problem is that I’m about to describe, I think you’re doing pretty well.

The “J” in “AJAX” should have stood for “Java”

Java… more of a standard than a piece of software, I guess, but let’s attack it anyway. The only question, as is often the case, is where to begin… and I’ll constrain myself by not even mentioning Swing (oops, too late).

First, there is the abysmal lack of support for real-world file system handling, such as symbolic links and Unix permissions (is this really coming in Java 7? About time).

Second, the incomplete support for asynchronous operations (any operation which blocks should be interruptible).

Third, and this one will be contentious: “equals” and “hashCode” should not be instance members. Instead, you should pass a functor object which performs those operations to the collection classes that need them (that’s not to say there couldn’t be one or more default implementations of these functors).

Fourth, the collections library is incomplete. Why aren’t there utility classes such as JointSet (provides a set interface, backed by two or more real sets)?

Fifth, every object can be synchronized on, which means that every single object has a mutex hiding in it – a waste.

Sixth, it’s not possible to create a real Java sandbox from within a Java program, which is quite ironic seeing as this is really what Java was developed for. If you want to run some untrusted code, you can set a security manager, sure; but you can’t force a thread to stop running once you have started it (even the deprecated Thread.stop() method won’t always work). There should be a safe way to stop threads (even if it means restricting what those threads can access or do), and you should really be able to intercept file and GUI calls and handle them in any way you want, not just allow or disallow the operation.

Seventh, stupid version numbers. Java 5 – wtf? Java SE 6? Whoever came up with these names & numbers should be reprimanded!

Ah well, I guess that’s enough. For now anyway…

Qt’s qmake

Geeezus. There’s a lot wrong with “make” in general, and there have been way too many attempts to fix it; no-one, as far as I can see, has got it right yet. qmake is no exception, but it does in fact manage to be particularly bad.

Firstly, the makefiles that qmake generates can’t do a “make DESTDIR=&lt;whatever&gt; install”. No, that’s too standardised, we couldn’t have that. You have to do a silly “make INSTALL_ROOT=&lt;whatever&gt; install” instead (DESTDIR has a completely different and rather unhelpful meaning). And that, of course, doesn’t actually work, because the file paths in the generated makefile are written in such a way that it just doesn’t work. You can fix it by editing the “qmake.conf” file in the Qt bundle and adding “no_fixpath” at the end of the options listed for the CONFIG variable, before you run qmake. Why isn’t that the default? Considering that “fixing” the path actually breaks it, I’m not sure who came up with the term or the code.

It’s easy to verify this by trying to compile “qca” (Qt Cryptographic Architecture) and installing it in some alternate location. If no_fixpath isn’t set, the install simply doesn’t work. Hello, Qt developers?

Oh yeah, and qmake has a very limited concept of quoting, but that’s certainly a problem not limited to this particular software package.


Edit: Apparently “no_fixpath” is no longer required in Qt 4.3.1; I’m not sure when it was fixed (it was a problem in 3.3.7). However INSTALL_ROOT is still used instead of (the de facto standard) DESTDIR.


Building OpenOffice

Well, as a product OpenOffice is not actually too bad; it’s just a nightmare to build the friggin’ thing. I mean, would it be too hard to put together some coherent build documentation? The best that I have found so far is the page entitled “building open office under linux”. That page tells me I need the csh shell, but do I really? What is the --with-use-shell=bash option for, then? It also tells me that I need to download and extract the “gpc” library in some directory, which also appears to be a lie, as the build completed just fine without it. I pass the --disable-mozilla option to configure because I don’t want to have to deal with Mozilla and it might reduce the size/duration of the build. I’m not sure what functionality I’ll miss out on; the documentation neglects to tell me that.

Upon running the “configure” script, I am told that I need to either fetch a pre-built DLL file and plonk it in some directory, or install MinGW as a cross-compiler so that the DLL can be built. WTF? This is Linux I’m building on; why is a Windows DLL needed? I can’t be stuffed installing MinGW, so I just grab the pre-built DLL.


Grub vs Lilo

As a bootloader, Lilo wins hands down. Here is why: Lilo is designed for a single job, and it does that job well. Grub, on the other hand, understands (some) filesystems and a bunch of other stuff that a bootloader should know nothing about (like object file formats). Lilo is “generic”: because it doesn’t interpret the filesystem at all, you can use it to boot off pretty much any filesystem. This is a great strength.

Among other things, that means if you use software RAID or LVM or a filesystem that Grub doesn’t understand, you can’t use Grub. Lilo more or less isn’t bothered by that stuff.

Grub’s understanding of the filesystem allows it to keep working after you move partitions around and that sort of thing, which is considered by some to make it superior. But with Lilo, all you have to do is re-run lilo after such changes and it will work just as well. What’s more, Lilo is smaller, and it doesn’t care what filesystem you’re using.

Lilo is no bigger than it needs to be. Grub is. And normally you don’t need any of the extra features that Grub provides – if you do, it’s not doing its job. Lilo is a bootloader – it loads the operating system. Grub practically is an operating system.