ATI driver woes

There are no fewer than three driver choices when you have an ATI graphics card, it seems. This would be a good thing if any of them actually worked.

I have an X1250 (integrated into a 690G-chipset motherboard). The “radeonhd” driver works but doesn’t support TV-out, XVideo acceleration or 3D acceleration, and so is really little better than the vesa driver at this stage. The “radeon” (or just “ati”) driver gives me a blank screen (I filed a bug report). The proprietary Catalyst driver (or “fglrx”) works, except that it is a pain to install (on my homebrew system), all OpenGL programs crash (it looks like the OpenGL library provided with the driver is causing some sort of memory corruption or double-free), and XVideo output from MythTV results in a double image (one on top of the other, as if there’s some sort of de-interlacing issue). Of course, XVideo didn’t work at all until I set the “TexturedVideo” option in my xorg.conf file – the only hint to do that came from the X server logs; the option wasn’t documented anywhere. In fact, none of the options for the fglrx driver are documented officially.
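For what it’s worth, the workaround amounted to a single option in the Device section of xorg.conf. This is a sketch of my configuration (the Identifier is arbitrary, and since the option is undocumented I can only vouch that this spelling worked for me):

```
Section "Device"
    Identifier "ATI X1250"
    Driver     "fglrx"
    Option     "TexturedVideo" "on"
EndSection
```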

I’ll also throw in a complaint at this stage about the “/etc/ati/amdpcsdb” file, and the fact that the settings it contains silently override the equivalent settings in xorg.conf, so that you end up wondering why changing the xorg.conf file doesn’t appear to have any effect.

Back to the original story. Basically, three drivers just means I’m screwed three ways. I’m trying to build a Home Theatre PC and my only option for watching TV, at this point, is now to install the MythTV frontend on another machine (with integrated Intel video) and watch it on that.

Edit 14/04/2008: I got the double-image problem sorted – it was actually MythTV itself causing the issue: the standard de-interlacing filter converts the image into a double-height image with the fields one above the other, and this is what I was seeing. It’s then meant to double the frame rate and show the two halves one at a time, but for some reason that wasn’t happening. I’m partly to blame here because I’m using a Subversion pull of MythTV rather than a released version.

Also, I’m pleased to report there’s been some progress with the open-source ATI drivers. The radeonhd driver made another release (1.2.0) which apparently does nifty stuff like 2D acceleration (still no TV-out, 3D, or XVideo). Also, by using the latest drm module (from the latest pre-patch kernel release, 2.6.25-rc9) and switching to the EXA acceleration method, I was able to get XVideo working with the radeon driver, which is pretty good – though it still can’t do TV-out, still blanks the screen if anything is connected to the composite output when I start X, and now also leaves me with a blank screen when I exit X or switch to another VT. Nevertheless, it’s great to see some progress on these drivers.
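For anyone wanting to reproduce this, switching acceleration methods is just another xorg.conf option. A sketch of the relevant Device section from my setup (Identifier is arbitrary):

```
Section "Device"
    Identifier "Radeon X1250"
    Driver     "radeon"
    Option     "AccelMethod" "EXA"
EndSection
```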


The Illegal use of Safari

As this Slashdot story says, Apple’s “Safari for Windows”, according to its EULA, could only be installed on Apple-manufactured hardware. Which is, needless to say, pretty stupid. I understand the issue has been resolved now (i.e. Apple have changed the license), but the fact that it happened in the first place is just hilarious.

Now, as companies go, Apple is a real bastard anyway. I mean, they have attempted to sue people left, right and center for just discussing products that they might (or might not) be going to release; they don’t have a public bug database (and they seem to ignore some bug reports); they don’t let users know what is going on with their development plans (Java 6, anyone?); and they release software as a “beta” which expires and actually refuses to continue working once the final product is released, even though getting that product requires you to upgrade to a whole new operating system version (Boot Camp – and yeah, I know the license for the beta said this would happen all along, but it’s still rubbish. As far as I know, not even Microsoft uses these “what I giveth I also taketh away” tactics). Apple is also a control freak – why can’t I transfer songs from one computer to another via my iPod? Because the software won’t let me, and that’s the only reason. Despite all this, many people pay good money for Apple products; that is, they actually pay money to be shafted by Apple. I myself have paid more for Apple hardware and software than I have for PCs, primarily because I would rather be reamed by Apple than use Windows (and I do, occasionally, still need to use commercial software which isn’t available for Linux or BSD).

But I’m going off on a tangent. The real issue is software licensing, and what a pile of droppings it normally is (what other product imposes conditions on you after you’ve already paid for it and taken it home from the shop?). A lot of it is questionable in terms of legality anyway (as far as I understand, copyright is about the conditions under which you can duplicate or reproduce copyrighted works, not what you do with them afterwards, even though many software licenses try to limit use in various ways). The really annoying thing is that you’ve really got no recourse if you don’t like the terms of the license, other than not to use the software (or to ignore the license, which might be illegal). The “must be used on Apple hardware” term is a perfect example of a potentially very annoying (to the user) condition which doesn’t actually seem to benefit Apple in any real way.

I can see what benefit Apple thinks it’s getting from terms such as this (which I think it’s safe to assume is a boilerplate term used in a lot of Apple software licenses) – they’re trying to increase hardware sales. If you want to use OS X, you have to buy a Mac to legally do so. This sort of license condition is, however, anti-competitive. It makes non-Apple hardware less useful because you’re not allowed to run OS X on it, even if the hardware is perfectly capable of doing so from a technical perspective. I can understand that Apple don’t want to provide technical support in that case but they shouldn’t be trying to make it actually illegal to install their software on any machine you like.

CUPS and unhelpful error messages

I’m currently trying to set up my HP PSC 1410 printer with CUPS. For a bit I was getting this message, when I tried to print a web page via the CUPS web interface:

Error: Unsupported format 'application/postscript'!

Well, it turns out the problem was that I needed to update my ghostscript to a version that included the “pstoraster” filter (specifically, GNU ghostscript 8.60, though I assume GPL ghostscript 8.60/8.61 would be fine as well – let’s not get started on the ridiculous number of ghostscript variants; I’ll save that for another day). I could have been saved quite some time if the error message had made this clearer. How about something like:

Unsupported format 'application/postscript': Could not find "pstoraster" filter specified in /etc/cups/mime.convs file

That would be much more helpful! Yes, I understand it might not be meaningful for a casual PC user but then the original message is not helpful in that case anyway. Oh, and it gets rid of that annoying exclamation mark.
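For anyone hitting the same thing: the rule in question lives in /etc/cups/mime.convs as a plain four-field line (source type, destination type, cost, filter) – roughly like this, going from memory of my setup:

```
# source/type                    destination/type             cost  filter
application/vnd.cups-postscript  application/vnd.cups-raster  100   pstoraster
```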

Still haven’t got the printer working though; now it just starts a whole bunch of processes (foomatic-rip, gs) which all just seem to hang.

Update: OK, several hours later I have got it working. It was permissions: I had to modify the udev rules so that the USB device nodes were created with the right group and permissions. Incidentally, hplip (HP’s software) includes a udev rules file, but it’s outdated (it uses SYSFS instead of ATTR) and insecure (it sets mode 0666 instead of 0660). I don’t know why the driver doesn’t try to open the device before running ghostscript and all that stuff.
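For the record, the corrected rule looked something like this (03f0 is HP’s USB vendor ID; the group is my choice – use whatever group your print daemon runs as. A sketch of my setup, not hplip’s official rule):

```
# hplip shipped roughly: SUBSYSTEM=="usb", SYSFS{idVendor}=="03f0", MODE="0666"
# Updated match-key syntax, tighter permissions:
SUBSYSTEM=="usb", ATTR{idVendor}=="03f0", MODE="0660", GROUP="lp"
```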

Oh, and CUPS has a weird problem when you have only a single remote printer and no local printers: for some reason, when I try to modify a class, the remote printer doesn’t come up in the list. I could add the remote printer to the class via the command line, however.

HTTP specification

In the HTTP spec it says:

3.3 use of multipart/form-data

The definition of multipart/form-data is included in section 7. A
boundary is selected that does not occur in any of the data. (This
selection is sometimes done probabilisticly.)

Probabilisticly? Who wrote this shit? If you choose a boundary probabilistically, then given enough usage you will almost certainly, at some point, pick a boundary which occurs naturally in the data (though the alternative – choosing a boundary deterministically after scanning through the data – is not too appealing either). This sort of thing just shouldn’t be allowed to happen, even if the probability is low, because it can easily be prevented. There are two perfectly suited techniques to avoid the problem:

  1. Mandate a Content-Length header in each part (which obviates the need for a boundary tag anyway), OR
  2. Use a content transfer encoding (escaping) so that the boundary cannot possibly occur in the encoded content.

Neither of these techniques would be particularly difficult to implement, or costly in terms of processing time or bandwidth (considering that the Content-Length for the entire body generally needs to be calculated anyway). The first seems to be allowed by current standards, but it is not recommended anywhere and certainly not mandated (and arguably it doesn’t really solve the problem unless the standards are updated to say that it does, since the boundary tag should arguably be recognized wherever it occurs, even partway through a body part according to that part’s known content length). The second has the problem that the only defined encoding which would be suitable is base64, and that incurs a 33% bandwidth overhead.
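As a sanity check on those claims, here’s a small sketch (Python; the function name and boundary prefix are my own invention) of the scan-and-verify approach to boundary selection, plus a confirmation of the base64 overhead figure:

```python
import base64
import secrets

def choose_boundary(payload: bytes) -> bytes:
    """Pick a random candidate boundary, then *verify* it does not occur
    in the payload, retrying on collision -- so the final choice is
    guaranteed safe rather than merely probably safe."""
    while True:
        candidate = b"----=_" + secrets.token_hex(16).encode("ascii")
        if candidate not in payload:
            return candidate

# Even a payload that deliberately contains the boundary prefix can't
# break the chosen boundary:
payload = b"x" * 100 + b"----=_0123456789abcdef" + b"y" * 100
boundary = choose_boundary(payload)
assert boundary not in payload

# Technique 2: base64 escaping. Every 3 input bytes become 4 output
# bytes, hence the ~33% overhead mentioned above.
data = bytes(range(256)) * 12                # 3072 bytes of arbitrary binary data
encoded = base64.b64encode(data)
assert len(encoded) == len(data) * 4 // 3    # 4096 bytes: exactly +33%
```

The retry loop will essentially never iterate twice in practice, but the point is that after it returns, safety is a certainty rather than a probability.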

It’s really annoying that this sort of stupid blunder can make it into a standard which is so widely used. At least it seems it can’t lead to any security vulnerabilities (I think) but I pity the poor sucker whose file-upload-via-POST is failing due to a shoddy standard which says it’s ok that a boundary tag could possibly occur within the content.

Can somebody hit the Mesa maintainers with the clue-stick, please?

I’ve been waiting a while for the release of Mesa 7.0.2, as 7.0.1 doesn’t include support for the G33 chipset in my motherboard, and as a result I’ve been running with a git snapshot – something which always makes me a little uneasy. It’s finally been released, but alas, when I build (at “make install”, specifically) I get an error:

make[2]: Entering directory `/usr/src/Mesa-7.0.2/src/glw'
make[2]: *** No rule to make target `glw.pc.in', needed by `glw.pc'. Stop.
make[2]: Leaving directory `/usr/src/Mesa-7.0.2/src/glw'
make[1]: *** [install] Error 1
make[1]: Leaving directory `/usr/src/Mesa-7.0.2/src'
make: *** [install] Error 1

The problem is quite basic: the “glw.pc.in” file is MIA because it wasn’t included in the source tarball. This is in itself fairly annoying, but I take issue for two reasons: firstly, this error should have happened during the initial “make”, not during “make install”. Secondly, obviously nobody tested whether Mesa could be built and installed from the tarball before it was officially released. The latter is really stupid.

On the plus side, with the 7.0.2 release Mesa finally has support for “make DESTDIR=… install”.

Edit 19/11/07: This is what “glw.pc.in” is supposed to look like. There doesn’t seem to be a directly downloadable version.

Edit 12/01/08: Entry in the bug database. Fixed for 7.0.3 apparently (not yet released).

Edit 06/04/08: Mesa 7.0.3 has finally been released (after 5 months), and yes, it fixes this problem. It should not have taken 5 months to fix this problem. Not being able to “make install” is serious.

Eclipse cannot “retrieve ‘feature.xml’”, apparently.

I was getting the following error message in Eclipse, when I tried to use the software update (which, for some inexplicable reason, is accessible from the “help” menu):

Error retrieving "feature.xml". [error in opening zip file]

Other than the fact that it probably shouldn’t have been appearing at all, what’s particularly galling about this error message is that it offers no clue as to what the f*ck has gone wrong, nor even why it matters. The only visible effects of this error were that the Eclipse update sites didn’t appear in the list, and that a dialog with the error message would pop up on just about any action I took (other than closing the update dialog completely).

Very, very, stupid.

I did what I have increasingly found to be the quickest and easiest method of solving problems such as these: A Google search. It led me to this web page:

http://www.easywms.com/easywms/?q=en/node/97

…Sure enough, deleting site.xml from inside my Eclipse installation directory made the problem magically go away. It turns out that the Eclipse CDT (C development tools) zip file I had downloaded was actually meant for retrieval via the Eclipse updater, and not meant to be installed by simply unzipping it inside my Eclipse directory as I had done. Clearly the CDT guys are partly to blame for this, because they don’t seem to provide any other downloadable version.

But… I mean, should the presence of some file in the installation directory really cause such an annoying problem? And if it does, shouldn’t the error message at least attempt to explain what the problem actually is?