Xorg X11R7

It’s funny, the “/usr/X11R6” directory name (and of course X11R6 itself) is such a mainstay that it’s hard to think of changing it, even if technically the new release is R7. I’ve finally moved to it, mainly because my old system decided to die on me recently and I ended up purchasing a new motherboard (Gigabyte again, despite some minor annoyances previously; thankfully the onboard audio isn’t automatically disabled when I plug a PCI card in – hooray…) and processor (Core Duo). The new system is much quieter than the old one (those old P4s ran way too hot, and consequently the fan was always whirring away like a mad thing, trying to cool things down a little).

The motherboard has an onboard Intel graphics chip; I figured it was either that, or buy a new Nvidia card to replace my aging TNT2 (which was AGP anyway and therefore unusable in the new system, which has PCI Express slots instead). I initially tried the VESA BIOS driver with my old X version (6.8.1), but it just froze the system, and it wouldn’t have been accelerated anyway. So I figured I may as well upgrade to the modular X11R7.2 release.

It all went fairly well (it’s now running, at least) except that the lack of build documentation is appalling. I eventually found a shell script which could be used to compile all the modules and therefore yield a working X11R7.2; well, in theory anyway. I was missing a few dependencies (for one, xcb, which isn’t part of X, apparently) and even when those were resolved, the xserver kept failing to build. Firstly, its configure script rejected my attempts to use the most recent versions of Mesa (7.0.1 and 6.5.3); older ones (6.4.2 for instance) caused the build to bomb out in mysterious ways (which didn’t seem Mesa-related and which thus kept me scratching my head for a while). No, it turns out that building the server requires one very specific version of Mesa, and for some obscure reason this fact is not documented anywhere. Except now, here: it’s 6.5.2.

Fortunately I was able to download the latest version of the Intel video driver, and it supported my G33 chipset with no problems, except for the lack of 3D acceleration. For that I needed DRI, and that meant upgrading my kernel (not such a big deal) and using a git snapshot of Mesa (very annoying, especially as it conflicts with the version used to compile the xserver and thus prevents AIGLX from working).

Also, although it initially worked fine, at some point the Intel driver decided to ignore all previous convention regarding which resolution to start with and began running in something like 1152 by 768, which is way too short vertically (I use 1280 by 1024 normally). Gallingly, that mode isn’t even directly supported by my LCD monitor, which then has to stretch the image itself to display it at the native 1280 by 1024, causing pixel artifacts and looking generally quite shit. I eventually discovered that the “PreferredMode” option in the Monitor section of my xorg.conf configuration file could solve the problem; the main issue was that this option isn’t documented in the man page that comes with the server in R7.2.
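
For the record, the relevant snippet looks something like this (the “Monitor0” identifier is just a placeholder; it has to match whatever your Screen section references):

    Section "Monitor"
        Identifier "Monitor0"
        Option     "PreferredMode" "1280x1024"
    EndSection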

Ah well. Now I guess I wait for R7.3 and hopefully I can get AIGLX working…

Bugzilla

Bugzilla still doesn’t include “RESOLVED – FIXED” bugs by default when doing a search. Even when you “find a specific bug” and choose “open bugs”, resolved bugs do not appear in the search results, even though such bugs are most definitely NOT closed (and are therefore open).

The real problem here is that people who think they may have found a new bug will go on believing that after doing a search, even if the bug has already been reported and fixed (but the fix not yet released).

ISO C99

I’m not sure when it happened, but it looks like the C99 standard is now publicly available (for free download) in PDF form. Make sure to get the version which incorporates the Technical Corrigenda TC1 and TC2. From the standard:

6.2.6 Representation of types – 6.2.6.1 General

paragraph 3:

“Values stored in unsigned bit-fields and objects of type unsigned char shall be represented using a pure binary notation”.

A footnote tells us that a “pure binary notation” is:

“A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral powers of 2, except perhaps the bit with the highest position. … A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to (2^CHAR_BIT) – 1.”

What the… ??

Let’s go through that footnote bit by bit, no pun intended:

A positional representation for integers that uses the binary digits 0 and 1 … ok, that’s fairly straightforward

… in which the values represented by successive bits are additive … I guess that means that to get the value as a whole, you add up the values represented by the individual bits, just like you do in any binary number. Although, this is a fairly retarded way of saying it.

… begin with 1… Does this mean that the first bit position has a value of 1? Or that the value for any bit position is one before it is multiplied by some power of 2? The latter is mathematically redundant so I guess it must be the former. Ok.

… and are multiplied by successive integral powers of 2 … Yes, ok, each bit is worth its face value (0 or 1) multiplied by “successive integral powers of 2”. The begin with 1 beforehand means that 1 is the first power of 2 (it is 2^0); the next would be 2 (2^1), then 4, 8, 16 and so on. Again, there must be a better way to say this.

… except perhaps the bit with the highest position. WTF!!?? So the highest bit position can be worth something completely different. This might make sense for the representation of signed values in two’s complement, but this passage was specifically about an unsigned type. Also:

A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to (2^CHAR_BIT) – 1.

Do the math. If there are CHAR_BIT bits, the highest bit position is (CHAR_BIT – 1) if we number them starting from 0. Each bit except that one is worth 2^position, and the range we can represent using those bits together with the highest bit is 0 to 2^CHAR_BIT – 1. What, then, must the highest bit position be worth? And why, then, specifically exclude it from this requirement?
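
To spell the arithmetic out: the low-order bits on their own sum to

    \sum_{i=0}^{\mathrm{CHAR\_BIT}-2} 2^i = 2^{\mathrm{CHAR\_BIT}-1} - 1

so for the full range to reach 2^CHAR_BIT – 1, the highest bit has no choice but to contribute

    (2^{\mathrm{CHAR\_BIT}} - 1) - (2^{\mathrm{CHAR\_BIT}-1} - 1) = 2^{\mathrm{CHAR\_BIT}-1}

With CHAR_BIT = 8 that’s 255 − 127 = 128, i.e. exactly 2^7. In other words, the “except perhaps” escape hatch can never actually apply to unsigned char.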

Mac OS X and networking

I have a shiny new 15″ MacBook Pro, my first Mac. It’s a great laptop, and I have only one major complaint hardware-wise: only a single button for the trackpad?!! Oh well, I’m going to be plugging in a proper mouse soon anyway. The real issues I’ve been having are with the OS software.

I have a small network connected via a combined ADSL modem/4-port router. I run the router in a mode called “half-bridge”, which essentially means that the router forwards all traffic coming up the ADSL link onto the LAN directly, without modification other than de-encapsulating the packets from the PPP link. My NAT server (running Linux) therefore listens for incoming traffic on the WAN address as well as on its own LAN address. The router also picks up packets destined for the WAN and forwards them down the ADSL (PPP) link. Naturally, the NAT machine is the only machine that sends packets destined for the WAN: all the other machines on the LAN use the NAT box as a gateway, which allows NATting, traffic regulation and so on.

If you already know what the problem is that I’m about to describe, I think you’re doing pretty well.

The “J” in “AJAX” should have stood for “Java”

Java… more of a standard than a piece of software, I guess, but let’s attack it anyway. The only question, as is often the case, is where to begin… and I’ll constrain myself by not even mentioning Swing (oops, too late).

First, there is the abysmal lack of support for real-world file system handling, like symbolic links and Unix permissions (is this really coming in Java 7? About time).
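
The kind of API I’m after wouldn’t even be complicated. Here’s a purely hypothetical sketch (all of these names are made up; JSR 203, the proposed Java 7 filesystem API, is supposedly heading in a similar direction):

    import java.io.File;
    import java.io.IOException;

    // Hypothetical: nothing like this exists in the JDK today.
    interface PosixFileOps {
        boolean isSymbolicLink(File f) throws IOException;
        String  readSymbolicLink(File f) throws IOException;  // the link target
        int     getPermissions(File f) throws IOException;    // e.g. 0644
        void    setPermissions(File f, int mode) throws IOException;
    }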

Second, the incomplete support for asynchronous operations (any operation which blocks should be interruptible).

Third, and this one will be contentious: “equals” and “hashCode” should not be instance members. Instead, you should pass a functor object which performs those operations to the collection classes that need them (that’s not to say there couldn’t be one or more default implementations of these functors).
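
To make that concrete, here’s a minimal sketch of the idea: a hypothetical HashStrategy functor, and a set parameterised by one. Nothing like this is in the JDK; third-party libraries such as GNU Trove do something along these lines with their hashing strategies.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical functor: equality and hashing live outside the element type.
    interface HashStrategy<T> {
        boolean equal(T a, T b);
        int hash(T obj);
    }

    // A set parameterised by a HashStrategy. Each element is wrapped so a
    // plain HashMap can do the bucketing via the strategy.
    class StrategySet<T> {
        private final HashStrategy<T> strategy;
        private final Map<Wrapper, T> map = new HashMap<Wrapper, T>();

        StrategySet(HashStrategy<T> strategy) { this.strategy = strategy; }

        private final class Wrapper {
            final T value;
            Wrapper(T value) { this.value = value; }
            public int hashCode() { return strategy.hash(value); }
            @SuppressWarnings("unchecked")
            public boolean equals(Object o) {
                return o instanceof StrategySet.Wrapper
                        && strategy.equal(value, ((Wrapper) o).value);
            }
        }

        public boolean add(T t)      { return map.put(new Wrapper(t), t) == null; }
        public boolean contains(T t) { return map.containsKey(new Wrapper(t)); }
        public boolean remove(T t)   { return map.remove(new Wrapper(t)) != null; }
    }

The payoff: two sets over the same element type can use different notions of equality (case-insensitive strings, say) without touching the element class at all.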

Fourth, the collections library is incomplete. Why aren’t there utility classes such as a JointSet (which would provide a set interface, backed by two or more real sets)?
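
And a JointSet needn’t be hard to write. Here’s a hypothetical, read-only sketch backed by exactly two sets (generalising to n sets, or adding mutability, is the obvious next step):

    import java.util.AbstractSet;
    import java.util.Iterator;
    import java.util.NoSuchElementException;
    import java.util.Set;

    // Hypothetical JointSet: a read-only union view over two backing sets.
    class JointSet<T> extends AbstractSet<T> {
        private final Set<T> a, b;

        JointSet(Set<T> a, Set<T> b) { this.a = a; this.b = b; }

        public boolean contains(Object o) {
            return a.contains(o) || b.contains(o);
        }

        public int size() {
            int n = a.size();
            for (T t : b)
                if (!a.contains(t)) n++;   // don't double-count the overlap
            return n;
        }

        public Iterator<T> iterator() {
            // All of a, then those elements of b not already present in a.
            return new Iterator<T>() {
                private final Iterator<T> ia = a.iterator(), ib = b.iterator();
                private T pending;
                private boolean hasPending;

                { advance(); }

                private void advance() {
                    hasPending = false;
                    if (ia.hasNext()) { pending = ia.next(); hasPending = true; return; }
                    while (ib.hasNext()) {
                        T t = ib.next();
                        if (!a.contains(t)) { pending = t; hasPending = true; return; }
                    }
                }

                public boolean hasNext() { return hasPending; }

                public T next() {
                    if (!hasPending) throw new NoSuchElementException();
                    T t = pending;
                    advance();
                    return t;
                }

                public void remove() { throw new UnsupportedOperationException(); }
            };
        }
    }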

Fifth, every object can be synchronized() on, which means that every single object has a mutex hiding in it – which is a waste.

Sixth, it’s not possible to create a real Java sandbox from within a Java program, which is quite ironic seeing as this is really what Java was developed for. If you want to run some untrusted code, you can set a security manager, sure; but you can’t force a thread to stop running once you have started it (even the deprecated Thread.stop() method won’t always work). There should be a safe way to stop threads (even if it means restricting what those threads can access or do), and you should really be able to intercept file and GUI calls and handle them in any way you want; it shouldn’t just be limited to allowing or disallowing the operation.
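
To illustrate the allow-or-deny limitation, this is roughly all a security manager lets you express (the deny-everything-under-/etc policy and the class name are just my arbitrary examples):

    import java.io.FileInputStream;

    public class VetoDemo {
        public static void main(String[] args) throws Exception {
            System.setSecurityManager(new SecurityManager() {
                public void checkRead(String file) {
                    if (file.startsWith("/etc/"))
                        throw new SecurityException("read denied: " + file);
                    // "Allowing" just means not throwing; there is no way to
                    // hand back a substitute file or otherwise virtualise the
                    // access, which is exactly my complaint.
                }
            });
            new FileInputStream("/etc/passwd");  // throws SecurityException
        }
    }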

Seventh, stupid version numbers. Java 5 – wtf? Java SE 6? Whoever came up with these names & numbers should be reprimanded!

Ah well, I guess that’s enough. For now anyway…

Qt’s qmake

Geeezus. There’s a lot wrong with “make” in general, and there have been way too many attempts to fix it; no-one, as far as I can see, has got it right yet. qmake is no exception, but it does in fact manage to be particularly bad.

Firstly, the makefiles that qmake generates can’t do a “make DESTDIR=<whatever> install”; no, that’s too standardised, we couldn’t have that. You have to do a stupid “make INSTALL_ROOT=<whatever> install” instead (DESTDIR has some completely different and somewhat retarded meaning). And that, of course, doesn’t actually work either, because the file paths in the makefile are written in such a way that it just doesn’t work. You can fix it by editing the “qmake.conf” file in the Qt bundle and adding “no_fixpath” at the end of the options listed for the CONFIG variable, before you run qmake. Why isn’t that the default? Considering that “fixing” the path actually borks it, I’m not sure what sort of numbnut came up with the term or the code.
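
Concretely, the edit looks like this. The rest of the CONFIG line varies between Qt versions and platforms, so treat everything except the trailing no_fixpath as illustrative:

    # mkspecs/<your-platform>/qmake.conf (exact location varies by Qt install)
    CONFIG = qt warn_on release link_prl no_fixpath

After regenerating the makefile with qmake, the staged install is then just (with /tmp/staging standing in for whatever directory you want):

    qmake
    make INSTALL_ROOT=/tmp/staging install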

It’s easy to verify this by trying to compile “qca” (Qt Cryptographic Architecture) and installing it in some alternate location. If no_fixpath isn’t set, it doesn’t work, does it? Hello, Qt developers?

Oh yeah, and qmake has a very limited concept of quoting, but that’s certainly a problem not limited to this particular software package.

Sigh.

Edit: Apparently “no_fixpath” is no longer required in Qt 4.3.1; I’m not sure when it was fixed (it was a problem in 3.3.7). However, INSTALL_ROOT is still used instead of the de facto standard DESTDIR.

OpenOffice

Well, as a product OO is not actually too bad; it’s just a nightmare to build the friggin’ thing. I mean, would it be too hard to put together some coherent build documentation? The best that I have found so far is the page entitled “Building OpenOffice under Linux”. That page tells me I need the csh shell, but do I really? What is the --with-use-shell=bash option for, then? It also tells me that I need to download and extract the “gpc” library in some directory, which also appears to be a lie, as the build completed just fine without it. I pass the --disable-mozilla option to configure because I don’t want to have to deal with Mozilla, and it might reduce the size/duration of the build. I’m not sure what functionality I’ll miss out on; the documentation neglects to tell me that.

Upon running the “configure” script I am told that I need to either fetch a pre-built dll file and plonk it in some directory, or install MinGW as a cross-compiler so that the dll can be built. WTF? This is Linux I’m building on; why is a Windows dll needed? I can’t be stuffed installing MinGW, so I just grab the pre-built dll.
