Eclipse website

What the hell is the matter with the people who design the Eclipse website? It’s not clear what the latest release is, nor how to find out (3.3.1, I think). When I click “download” the choices I’m presented with are a bunch of language-specific IDEs – why can’t I download the components like I used to be able to? Ok, I can; I have to “browse downloads by project” – which is hidden in a small box at the right of the download page – then choose “Eclipse Platform” (or JDT or PDE, which link to the same destination, though other items under the “Eclipse Project” don’t) – then click on the release, which turns out to be 3.3.1 – and then I can view the readme and a list of components which can be downloaded.

Of course, they’re not strictly speaking components, seeing as some seem to contain others. Maybe. At least, I was able to get Eclipse running without downloading the “RCP Runtime Binary” (RCP = Rich Client Platform), which sounds fairly important; I’m assuming it’s included in the “Platform Runtime Binary”. I could be wrong. In any case, an explanation of what each bit actually is wouldn’t be unwelcome.

The readme, incidentally, does contain a chapter 7 which lists bugs fixed between 3.3 and 3.3.1; however, this chapter doesn’t appear in the table of contents at the beginning of the document, for some unfathomable reason. And there doesn’t seem to be a list of actual changes between the 3.2 and 3.3 series anywhere. I presume there was some reason for calling it 3.3 instead of 3.2.3? (And perhaps a reason also for not displaying the version in the startup splash window anymore?)


Things not to do in an open-source project

After years of building packages for my box, I’ve encountered several very annoying tendencies. In no specific order:

1. From a library, don’t print output to stdout or stderr, or to any other arbitrary file or stream. All error messages should be returned to the application, which can then decide what to do with them. It’s also wrong to require an “error handler” to be established: the way an error is handled often depends on the circumstances of the call to the function in which the error was detected. There’s nothing worse than an application which spews out meaningless dribble from one or more of its libraries.

2. Not allowing “make DESTDIR=/some/directory install” to alter the installation location. Using something other than DESTDIR should be avoided (it’s a de-facto standard), and if it’s not documented, that’s even worse. (INSTALL_PREFIX is sometimes used instead. Anything else is definitely out.)

3. Using a funky build system which doesn’t allow “make DESTDIR=…” or equivalent. Makefiles are ugly but they can do the job. Most “better makes” are actually really crap.
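For points 2 and 3, honouring DESTDIR costs almost nothing. A minimal sketch of an install rule that supports staged installs (file and prefix names are illustrative):

```make
PREFIX ?= /usr/local

install: mylib.so
	install -d $(DESTDIR)$(PREFIX)/lib
	install -m 644 mylib.so $(DESTDIR)$(PREFIX)/lib/
```

Then “make DESTDIR=/tmp/stage install” puts everything under /tmp/stage/usr/local, ready for packaging, while a plain “make install” installs to the real prefix.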


Xorg X11R7

It’s funny: the “/usr/X11R6” directory name (and of course X11R6 itself) is such a mainstay that it’s hard to think of changing it, even if technically the new release is R7. I’ve finally moved to it, mainly because my old system decided to die on me recently and I ended up purchasing a new motherboard (Gigabyte again, despite minor annoyances previously; thankfully the onboard audio is not automatically disabled when I plug a PCI card in – hooray…) and processor (Core Duo). The system is much quieter than the old one (those old P4s ran way too hot, and consequently the fan was always whirring away like a mad thing trying to cool things down a little).

The motherboard has an onboard intel graphics chip; I figured it was either that, or buy a new Nvidia to replace my aging TNT2 (which was AGP anyway and therefore unusable in the new system, which has PCI-Express slots instead). I initially tried the VESA BIOS driver with my old X version (6.8.1) but it just froze the system, and it wouldn’t have been accelerated anyway. So I figured I may as well upgrade to the modular X11R7.2 release.

It all went fairly well (it’s now running, at least) except that the lack of build documentation is appalling. I eventually found a shell script which could be used to compile all the modules and thereby yield a working X11R7.2 – in theory, anyway; I was missing a few dependencies (for one, xcb, which isn’t part of X, apparently) and even when those were resolved the xserver kept failing to build. Firstly, its configure script rejected my attempts to use the most recent versions of Mesa (7.0.1 and 6.5.3); older ones (6.4.2 for instance) caused the build to bomb out in mysterious ways (which didn’t seem Mesa-related and which thus kept me scratching my head for a while). No, it turns out that building the server requires one very specific version of Mesa, and that for some obscure reason this fact is not documented anywhere. Except now, that is, for here. It’s 6.5.2.

Fortunately I was able to download the latest version of the intel video driver and it supported my G33 chipset with no problems, except for lack of 3D acceleration. For that I needed DRI and that meant upgrading my kernel (not such a big deal) and using a git snapshot of Mesa (very annoying, especially as it conflicts with the version used to compile the xserver and thus prevents AIGLX from working).

Also, although it initially worked fine, at some point the intel driver decided that it would ignore all previous convention regarding which resolution to start with and began running in something like 1152 by 768, which is way too short vertically (I use 1280 by 1024 normally) and also, gallingly, not directly supported by my LCD monitor, which then needs to stretch the image itself so that it can display it at its native 1280 by 1024, causing pixel artifacts and looking generally quite shit. I eventually discovered that using the “PreferredMode” option in the Monitor section of my xorg.conf configuration file could solve the problem; the main issue was that this setting isn’t documented in the man page that comes with the server in R7.2.
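For anyone hitting the same thing, the fragment looks roughly like this (the Identifier is whatever your Monitor section already uses; mine is invented here):

```
Section "Monitor"
    Identifier "Monitor0"
    Option     "PreferredMode" "1280x1024"
EndSection
```

With that in place the server starts in the named mode instead of whatever it guesses from the probed mode list.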

Ah well. Now I guess I wait for R7.3 and hopefully I can get AIGLX working…


Bugzilla still doesn’t include “RESOLVED – FIXED” bugs by default when doing a search. Even when you “find a specific bug” and choose “open bugs”, resolved bugs do not appear in the search results, even though such bugs are most definitely NOT closed (and therefore are open).

The real problem here is that people who think they may have found a new bug are going to continue to believe that after doing a search, even if the bug has already been reported and fixed (but not yet released).


I’m not sure when it happened but it looks like the C99 standard is now publicly available (for free download) in PDF form. Make sure to get the version which incorporates corrections TC1 and TC2. From the standard: General – Representation of types

paragraph 3:

“Values stored in unsigned bit-fields and objects of type unsigned char shall be represented by a pure binary notation”.

A footnote tells us that a “pure binary notation” is:

“A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral powers of 2, except perhaps the bit with the highest position. … A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to (2^CHAR_BIT) – 1.”

What the… ??

Let’s go through that footnote bit by bit, no pun intended:

A positional representation for integers that uses the binary digits 0 and 1 … ok, that’s fairly straightforward

… in which the values represented by successive bits are additive … I guess that means that to get the value as a whole, you add the values represented by the bits – just like you do in any binary number. Although this is a fairly convoluted way of saying it.

… begin with 1… Does this mean that the first bit position has a value of 1? Or that the value for any bit position is one before it is multiplied by some power of 2? The latter is mathematically redundant so I guess it must be the former. Ok.

… and are multiplied by successive integral powers of 2 … Yes, ok: each bit is worth its face value (0 or 1) multiplied by “successive powers of two”; the “begin with 1” beforehand means that 1 is the first power of 2 (it is 2^0), the next would be 2 (2^1), then 4, 8, 16 and so on. Again, there must be a better way to say this.

… except perhaps the bit with the highest position. WTF!!?? So the highest bit position can be worth something completely different. This might make sense for representation of signed values in 2’s complement, but this was specifically referencing an unsigned type. Also:

A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to (2^CHAR_BIT) – 1.

Do the math. If there are CHAR_BIT bits, the highest bit position is (CHAR_BIT – 1) if we number them starting from 0. Each bit except for that one is worth 2^position, and the range we can represent using those bits together with the highest bit is 0 to 2^CHAR_BIT – 1 – which is exactly what you get when the highest bit is worth 2^(CHAR_BIT – 1) as well. What then must the highest bit position be worth? Why then specifically exclude it from this requirement?

Mac OS X and networking

I have a shiny new 15″ MacBook Pro, my first Mac. It’s a great laptop and I only have one major complaint, hardware-wise: only a single button for the trackpad?!! Oh well, I’m going to be plugging in a proper mouse soon anyway. The real issues I’ve been having are with the OS software.

I have a small network connected via a combined ADSL modem/4-port router. I run the router in a mode called “half-bridge”, which essentially means that the router forwards all traffic coming up the ADSL link onto the LAN directly, without modification other than de-encapsulating the packets from the PPP link. My NAT server (running linux) therefore listens for incoming traffic on the WAN address as well as its own LAN address. The router must also pick up packets destined for the WAN and forward them on down the ADSL (PPP) link. Naturally the NAT machine is the only machine that sends packets destined for the WAN: all the other machines on the LAN use the NAT box as a gateway; this allows NATting and traffic regulation etc.
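To make the topology concrete, the NAT box ends up configured along these lines. This is only a sketch: the interface name, addresses and router IP are all invented for illustration, and real setups vary.

```shell
# Hypothetical half-bridge NAT box setup; all names/addresses are examples.
WAN_IP=203.0.113.7        # public address handed out over the PPP link
LAN_IF=eth0               # carries both LAN and de-encapsulated WAN traffic

# Listen on the WAN address as well as the LAN one:
ip addr add 192.168.1.1/24 dev $LAN_IF
ip addr add $WAN_IP/32 dev $LAN_IF

# Outbound traffic goes back via the router, which re-encapsulates onto PPP:
ip route add default via 192.168.1.254 dev $LAN_IF

# NAT outgoing LAN traffic to the public address:
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o $LAN_IF -j SNAT --to-source $WAN_IP
```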

If you already know what the problem is that I’m about to describe, I think you’re doing pretty well.

The “J” in “AJAX” should have stood for “Java”

Java… more of a standard than a piece of software, I guess, but let’s attack it anyway. The only question, as is often the case, is where to begin… and I’ll constrain myself by not even mentioning Swing (oops, too late).

First, there is the abysmal lack of support for real-world file system handling – like symbolic links and Unix permissions (is this really coming in Java 7? About time).
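For reference, the file system work slated for Java 7 (the NIO.2 APIs, JSR 203) does cover exactly this. A minimal sketch of what it looks like, assuming a POSIX file system:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

public class FsDemo {
    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("demo", ".txt");

        // Unix permissions, finally readable (POSIX systems only):
        Set<PosixFilePermission> perms = Files.getPosixFilePermissions(f);
        System.out.println(perms);

        // And symbolic links as first-class citizens:
        Path link = f.resolveSibling(f.getFileName() + ".lnk");
        Files.createSymbolicLink(link, f);
        System.out.println(Files.isSymbolicLink(link));

        Files.delete(link);
        Files.delete(f);
    }
}
```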

Second, the incomplete support for asynchronous operations (any operation which blocks should be interruptible).
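The inconsistency is easy to demonstrate: Thread.sleep() (like Object.wait() and a few others) responds to interruption promptly, whereas a plain blocking read() on a stream generally does not. A sketch of the well-behaved case:

```java
public class InterruptDemo {
    public static void main(String[] args) throws Exception {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(60_000);  // sleep() is interruptible...
            } catch (InterruptedException e) {
                System.out.println("interrupted");  // ...so we get here immediately
            }
            // ...but a thread stuck in e.g. InputStream.read() would
            // typically just keep blocking, interrupt or no interrupt.
        });
        t.start();
        t.interrupt();
        t.join();
    }
}
```

Prints “interrupted” almost instantly; swap the sleep for a blocking socket read on a quiet connection and the interrupt is simply ignored.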

Third, and this one will be contentious: “equals” and “hashCode” should not be instance members. You should pass a functor object which performs those operations to the collection classes which need them (that’s not to say there might not be one or more default implementations of these functors).
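A minimal sketch of the functor idea – the interface and collection names here are invented for illustration, not anything in the standard library:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// The "functor": equality and hashing live outside the element type.
interface Equivalence<T> {
    boolean equivalent(T a, T b);
    int hash(T t);
}

// A toy set keyed by whatever equivalence it is given.
class EqHashSet<T> {
    private final Map<Integer, List<T>> buckets = new HashMap<>();
    private final Equivalence<T> eq;

    EqHashSet(Equivalence<T> eq) { this.eq = eq; }

    boolean add(T t) {
        List<T> b = buckets.computeIfAbsent(eq.hash(t), k -> new ArrayList<>());
        for (T x : b)
            if (eq.equivalent(x, t)) return false;  // duplicate under this equivalence
        b.add(t);
        return true;
    }

    int size() {
        int n = 0;
        for (List<T> b : buckets.values()) n += b.size();
        return n;
    }
}

public class EqDemo {
    public static void main(String[] args) {
        // Case-insensitive membership without touching String.equals():
        Equivalence<String> ci = new Equivalence<String>() {
            public boolean equivalent(String a, String b) { return a.equalsIgnoreCase(b); }
            public int hash(String s) { return s.toLowerCase().hashCode(); }
        };
        EqHashSet<String> set = new EqHashSet<>(ci);
        set.add("Hello");
        System.out.println(set.add("HELLO") + " " + set.size());  // false 1
    }
}
```

The same element type can then live in different collections under different notions of equality, which instance equals()/hashCode() can never give you.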

Fourth, the collections library is incomplete. Why aren’t there utility classes such as JointSet (provides a set interface, backed by two or more real sets)?

Fifth, every object can be synchronized() on. Which means that every single object has a mutex hiding in it – which is a waste.

Sixth, it’s not possible to create a real Java sandbox from within a Java program, which is quite ironic seeing as this is really what Java was developed for (If you want to run some untrusted code, you can set a security manager, sure; but you can’t force a thread to stop running once you have started it. Even the deprecated Thread.stop() method won’t always work). There should be a safe way to stop threads (even if it means restricting what those threads can access/do), and you should really be able to intercept file and gui calls and handle them in any way you want (it shouldn’t just be limited to allowing or disallowing the operation).

Seventh, stupid version numbers. Java 5 – wtf? Java SE 6? Whoever came up with these names & numbers should be reprimanded!

Ah well, I guess that’s enough. For now anyway…