Hardware that Works with Linux

It’s sometimes surprisingly difficult to find out whether hardware will work with Linux. Partly, this is because a piece of hardware often consists internally of chips made by companies other than the one whose name is on the box, and it’s the chips you need drivers for. For instance, I have a Leadtek TV tuner card (PCI) whose onboard chips are a couple of Conexant chips plus a Philips (tuner) chip. This is one of the few products I have seen which actually lists the chips on the box, however. Finding out what chips a product uses can be difficult in itself. Often the easiest way (for PCI cards) is to plug the card in and use “lspci” or equivalent to find out what’s on it, but by that stage you’ve already laid down your cash for the product.
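For what it’s worth, here’s the sort of thing I mean (the slot address is just an example; yours will differ):

    # List PCI devices with numeric vendor:device IDs, which can be
    # looked up to identify the actual chips on a card:
    lspci -nn

    # More detail on one particular device:
    lspci -v -s 01:00.0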

Even knowing the chips, and knowing that there is a Linux driver for them, doesn’t always tell you how well your device will be supported. The aforementioned TV tuner card required that I add a subsystem ID to the driver and recompile the kernel. Once I’d done that, it worked fine (well, it received digital TV fine; I never tried analog. I also got the remote to work fairly easily).
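As an aside, “adding a subsystem ID” is less scary than it sounds: it usually means adding one entry to a table in the driver source and rebuilding. A rough sketch of the general shape, using the standard kernel PCI ID table – the subsystem IDs below are placeholders, not my card’s actual values, and each driver (cx88 included) keeps its own board table in its own format:

    #include <linux/pci.h>

    /* Illustrative only: a driver recognizes a board by its PCI
     * subsystem vendor/device IDs, listed in a table like this.
     * The subsystem IDs below are placeholders. */
    static const struct pci_device_id tuner_ids[] = {
        { .vendor    = 0x14f1,      /* Conexant */
          .device    = 0x8800,      /* CX2388x video core */
          .subvendor = 0x107d,      /* Leadtek */
          .subdevice = 0x1234 },    /* placeholder board variant */
        { 0, }
    };
    MODULE_DEVICE_TABLE(pci, tuner_ids);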

I bought a new (external) ADSL/2/2+ modem recently, an “OpenNetworks iConnectAccess621”. It has a single ethernet port as well as a USB port. I wondered if I would be able to talk to the modem via the USB port and thereby free up two ethernet ports – one on the PC and one on the modem. As it turned out, the iConnectAccess621 uses a Texas Instruments (TI) chip, and I could talk to it using the “RNDIS” driver in Linux (RNDIS is apparently a badly-documented Microsoft-developed protocol for ethernet-over-USB); however, restarting the computer seemed to lock up the connection (I couldn’t talk to the modem anymore) until I also power-cycled the modem. I partly blame the modem and partly blame Linux’s USB implementation, which seemed to have a lot of trouble dealing with the resulting situation on the USB bus: it kept giving error messages and took ages to boot.
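If you want to try the same thing, the rough procedure is below (the interface name and addresses are assumptions for illustration; substitute your own):

    # See whether the kernel's RNDIS driver bound to the modem:
    dmesg | grep -i rndis

    # If an ethernet-over-USB interface appeared (typically usb0),
    # give it an address on the modem's subnet:
    ifconfig usb0 10.0.0.2 netmask 255.255.255.0 up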

ATI driver woes

There are no fewer than three choices of driver when you have an ATI graphics card, it seems. This would be a good thing if any of them actually worked.

I have an X1250 (which is integrated into a 690G-chipset motherboard). The “radeonhd” driver works but doesn’t support TV-out, XVideo acceleration or 3D acceleration, and so is really little better than the vesa driver at this stage. The “radeon” (or just “ati”) driver gives me a blank screen (I filed a bug report). The proprietary Catalyst driver (or “fglrx”) works, except that it is a pain to install (on my homebrew system), all OpenGL programs crash (it looks like the OpenGL library provided with the driver is causing some sort of memory corruption or double-free?), and XVideo output from MythTV results in a double image (one on top of the other, as if there’s some sort of de-interlacing issue). Of course, XVideo didn’t work at all until I set an option called “TexturedVideo” in my xorg.conf file – the only hint to do that came from the X server logs; the option wasn’t documented anywhere. In fact, none of the options for the fglrx driver are documented officially.
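For anyone else hunting for it, the relevant bit of my xorg.conf looks roughly like this (the identifier is just whatever your Device section is called):

    Section "Device"
        Identifier "ATI X1250"
        Driver     "fglrx"
        # Undocumented; the only hint was a line in the X server log.
        Option     "TexturedVideo" "on"
    EndSection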

I’ll also throw in a complaint at this stage about the “/etc/ati/amdpcsdb” file, and the fact that settings it contains silently override equivalent settings in xorg.conf, so that you end up wondering why changing the xorg.conf file doesn’t appear to have any effect.

Back to the original story. Basically, three drivers just means I’m screwed three ways. I’m trying to build a Home Theatre PC, and my only option for watching TV at this point is to install the MythTV frontend on another machine (with integrated Intel video) and watch it on that.

Edit 14/04/2008: I got the double-image problem sorted – it was actually MythTV itself that was causing the issue: the standard de-interlacing filter converts the image into a double-height image with the fields one above the other, and this is what I was seeing. It’s meant to then double the frame rate and show the two halves one at a time, but for some reason that wasn’t happening. I’m partly to blame here because I’m using a Subversion pull of MythTV rather than a released version.

Also, I’m pleased to report there’s been some progress with the open-source ATI drivers. The radeonhd driver made another release (1.2.0) which apparently does nifty stuff like 2D acceleration (still no TV-out, 3D, or XVideo). Also, by using the latest drm module (from the latest pre-patch kernel release, 2.6.25-rc9) and switching to the EXA acceleration method, I was able to get XVideo working with the radeon driver, which is pretty good, though it still can’t do TV-out, still blanks the screen if anything is connected to the composite output when I start X, and now also leaves me with a blank screen when I exit X or switch to another VT. Nevertheless, it’s great to see some progress on these drivers.
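In case it helps anyone, switching acceleration methods is just an option on the Device section in xorg.conf (again, the identifier is whatever yours is called):

    Section "Device"
        Identifier "ATI X1250"
        Driver     "radeon"
        # Switch from the default XAA acceleration to EXA:
        Option     "AccelMethod" "EXA"
    EndSection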

Xorg X11R7

It’s funny: the “/usr/X11R6” directory name (and of course X11R6 itself) is such a mainstay that it’s hard to think of changing it, even if technically the new release is R7. I’ve finally moved to it, mainly because my old system decided to die on me recently and I ended up purchasing a new motherboard (Gigabyte again, despite minor annoyances previously – thankfully the onboard audio is not automatically disabled when I plug a PCI card in, hooray) and processor (Core Duo). The system is much quieter than the old one (those old P4s ran way too hot, and consequently the fan was always whirring away like a mad thing trying to cool things down a little).

The motherboard has an onboard Intel graphics chip; I figured it was either that, or buy a new Nvidia card to replace my aging TNT2 (which was AGP anyway and therefore unusable in the new system, which has PCI-Express slots instead). I initially tried the VESA BIOS driver with my old X version (6.8.1) but it just froze the system, and it wouldn’t have been accelerated anyway. So I figured I may as well upgrade to the modular X11R7.2 release.

It all went fairly well (it’s now running, at least) except that the lack of build documentation is appalling. I eventually found a shell script which could be used to compile all the modules and therefore yield a working X11R7.2 – in theory, anyway; I was missing a few dependencies (for one, xcb, which isn’t part of X, apparently) and even when those were resolved the xserver kept failing to build. Firstly, its configure script rejected my attempts to use the most recent versions of Mesa (7.0.1 and 6.5.3); older ones (6.4.2 for instance) caused the build to bomb out in mysterious ways (which didn’t seem Mesa-related and which thus kept me scratching my head for a while). No, it turns out that building the server requires one very specific version of Mesa, and that for some obscure reason this fact is not documented anywhere. Except, now, here: it’s 6.5.2.

Fortunately I was able to download the latest version of the Intel video driver and it supported my G33 chipset with no problems, except for the lack of 3D acceleration. For that I needed DRI, and that meant upgrading my kernel (not such a big deal) and using a git snapshot of Mesa (very annoying, especially as it conflicts with the version used to compile the xserver and thus prevents AIGLX from working).

Also, although it initially worked fine, at some point the Intel driver decided to ignore all previous convention regarding which resolution to start with, and began running in something like 1152 by 768, which is way too short vertically (I use 1280 by 1024 normally). Gallingly, that mode is also not directly supported by my LCD monitor, which then needs to stretch the image itself so that it can display it at its native 1280 by 1024, causing pixel artifacts and looking generally quite shit. I eventually discovered that using the “PreferredMode” option in the Monitor section of my xorg.conf configuration file could solve the problem; the main issue was that this setting isn’t documented in the man page that comes with the server in R7.2.
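For the record, this is roughly what fixed it (the identifier is just whatever your Monitor section is called):

    Section "Monitor"
        Identifier "LCD"
        # Undocumented in the R7.2 man pages, but it works:
        Option     "PreferredMode" "1280x1024"
    EndSection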

Ah well. Now I guess I wait for R7.3 and hopefully I can get AIGLX working…

Bugzilla

Bugzilla still doesn’t include “RESOLVED – FIXED” bugs by default when doing a search. Even when you “find a specific bug” and choose “open bugs”, resolved bugs do not appear in the search results, even though such bugs are most definitely NOT closed (and are therefore open).

The real problem here is that people who think they may have found a new bug will continue to believe that after doing a search, even if the bug has already been reported and fixed (but the fix not yet released).

ISO C99

I’m not sure when it happened but it looks like the C99 standard is now publicly available (for free download) in PDF form. Make sure to get the version which incorporates corrections TC1 and TC2. From the standard:

6.2.6.1 General – Representation of types

paragraph 3:

“Values stored in unsigned bit-fields and objects of type unsigned char shall be represented by a pure binary notation”.

A footnote tells us that a “pure binary notation” is:

“A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral powers of 2, except perhaps the bit with the highest position. … A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to (2^CHAR_BIT) – 1.”

What the… ??

Let’s go through that footnote bit by bit, no pun intended:

A positional representation for integers that uses the binary digits 0 and 1 … ok, that’s fairly straightforward.

… in which the values represented by successive bits are additive … I guess that means that to get the value as a whole, you add up the values represented by the individual bits – just like you do in any binary number. It’s a fairly convoluted way of saying it, though.

… begin with 1… Does this mean that the first bit position has a value of 1? Or that the value for any bit position is one before it is multiplied by some power of 2? The latter is mathematically redundant so I guess it must be the former. Ok.

… and are multiplied by successive integral powers of 2 … Yes, ok: each bit is worth its face value (0 or 1) multiplied by “successive powers of two”, and the “begin with 1” beforehand means that 1 is the first power of 2 (it is 2^0); the next would be 2 (2^1), then 4, 8, 16 and so on. Again, there must be a better way to say this.

… except perhaps the bit with the highest position. WTF!!?? So the highest bit position can be worth something completely different? This might make sense for the representation of signed values in 2’s complement, but this footnote was specifically referencing an unsigned type. Also:

A byte contains CHAR_BIT bits, and the values of type unsigned char range from 0 to (2^CHAR_BIT) – 1.

Do the math. If there are CHAR_BIT bits, the highest bit position is (CHAR_BIT – 1) if we number them starting from 0. Each bit except for that one is worth 2^position, so the lower bits alone can sum to at most 2^(CHAR_BIT–1) – 1; for the full range to reach the required 2^CHAR_BIT – 1, the highest bit must be worth exactly 2^(CHAR_BIT–1) – which is precisely the value the “pure binary” rule would have given it anyway. Why then specifically exclude it from the requirement?
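A throwaway C program makes the arithmetic concrete (nothing here beyond standard headers):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long sum = 0;
        int i;

        /* Add up the place values 2^0 + 2^1 + ... + 2^(CHAR_BIT-1). */
        for (i = 0; i < CHAR_BIT; i++)
            sum += 1UL << i;

        /* The footnote says unsigned char ranges from 0 to
         * 2^CHAR_BIT - 1, which is exactly this sum -- so the "exempt"
         * highest bit has no freedom: it must be worth 2^(CHAR_BIT-1). */
        printf("sum of place values: %lu\n", sum);
        printf("UCHAR_MAX:           %u\n", (unsigned)UCHAR_MAX);
        return 0;
    }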