FUSE, and ways in which it sucks.

I’ve just started toying around with Fuse. The possibility of writing comprehensive filesystems in user space interests me, not least because it makes development much easier before you move to kernel space.

I’ll say it right out: Fuse is certainly not the worst-conceived, -designed, or -implemented piece of software by a long shot. The documentation isn’t great, but it’s not totally lacking either (which immediately puts Fuse a cut above a whole swathe of other open source projects). It’s not perfect, however.

My main annoyance is the multi-threading support which, frankly, shouldn’t exist; i.e., it should be handled by the application and not the library. It is, generally, better when software layers are thin; it should be up to the app to do appropriate locking etc., and the library (in this case Fuse) should just specify in the documentation where the locking is needed (or rather, more usefully, where it is not needed). Ok, quibbles, and providing more functionality is hardly a crime; after all, the low-level layer is still available and (mostly) unobfuscated by the higher-level functionality.

The real issue is that linking against libfuse pulls in the pthread shared library as well, which is not so good. The pthread library incurs overhead (beyond taking up memory/address space) because when it is loaded, locking and unlocking mutexes suddenly become real operations – which means fputc() and a whole bunch of other library functions become a weeny bit slower. (Don’t believe me? Run “nm” on /lib/libc.so and /lib/libpthread.so and notice that they both define pthread_mutex_lock.) Fuse should really be split into two libraries, libfuse and libfuse_mt, and only the latter should require pthreads. (Or, as I suggested earlier, just ditch the multi-threading functionality altogether and leave it up to the application to pull in pthreads if it really needs it.)
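To make the cost concrete, here’s a tiny demonstration (my own illustration, nothing from the Fuse docs). On glibc you can build this without -lpthread and it still links, because libc.so carries cheap stub versions of the mutex functions; once libpthread.so is loaded – for instance, dragged in by libfuse – the real, slower versions take over, even in code that never starts a thread:

    /* stub_demo.c - build WITHOUT -lpthread: cc stub_demo.c -o stub_demo */
    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        pthread_mutex_lock(&m);   /* resolves to the cheap stub in libc.so */
        puts("locked and unlocked via libc's stubs");
        pthread_mutex_unlock(&m);
        return 0;
    }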

Another annoyance is the fuse_chan_recv function:

int fuse_chan_recv(struct fuse_chan **ch, char *buf, size_t size);

Notice that first argument? Not just a pointer, it’s a pointer-to-a-pointer. Why, great Zeus, why??! The function is meant to receive data from a channel; in what crazy circumstance would you want it to change the channel pointer itself to point at something else? And in fact, the function delegates to another function via a pointer (embedded in **ch) to a fuse_chan_ops structure; in the library itself, as far as I can see, there are only two implementations (one to receive directly from the kernel, and another called mt_chan_receive) and neither of them actually alters the value of *ch. If you have a look at the implementation of fuse_loop(), though, you’ll notice the author clearly had the possibility in mind, since he copies the channel into a temporary pointer variable (tmpch) before using fuse_chan_recv – so yes, the bad API design has caused increased code complexity.
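To illustrate, here’s roughly the pattern fuse_loop() ends up using (a paraphrase from my reading of the libfuse 2.x source, not a verbatim copy):

    #define FUSE_USE_VERSION 26
    #include <errno.h>
    #include <stdlib.h>
    #include <fuse/fuse_lowlevel.h>

    static int loop_sketch(struct fuse_session *se, struct fuse_chan *ch)
    {
        size_t bufsize = fuse_chan_bufsize(ch);
        char *buf = malloc(bufsize);
        int res = 0;

        if (!buf)
            return -1;
        while (!fuse_session_exited(se)) {
            /* The channel pointer must be copied each time around,
             * purely because fuse_chan_recv() takes a fuse_chan **
             * and could, in theory, repoint it. */
            struct fuse_chan *tmpch = ch;
            res = fuse_chan_recv(&tmpch, buf, bufsize);
            if (res == -EINTR)
                continue;
            if (res <= 0)
                break;
            fuse_session_process(se, buf, res, tmpch);
        }
        free(buf);
        return res < 0 ? -1 : 0;
    }

A plain fuse_chan_recv(struct fuse_chan *ch, …) signature would make the temporary copy unnecessary.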

The comments above the declaration of fuse_chan_recv don’t give any clue as to why it would ever modify the *ch value; incidentally, ch is incorrectly documented as being a “pointer to the channel” when it is of course really a “pointer to a pointer to the channel”.

Oh, and the mt_chan_receive function I mentioned earlier – well, it presumably has something to do with the multi-threading support, but it’s hard to be sure, because the function lacks anything bearing the slightest resemblance to a comment, and there certainly isn’t any other explanation of what it does or how it works.

POSIX write()

Take a look at:

http://www.opengroup.org/onlinepubs/000095399/functions/write.html

What a mess – different requirements for regular files, pipes, and “other devices supporting non-blocking operation”. For pipes there are reasons for this (atomic writes), but I think they should have been abstracted out (why can’t other devices have atomic writes? Why isn’t there a general mechanism to determine the maximum size of an atomic write for a given file descriptor?).
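The nearest thing POSIX does provide is fpathconf() with _PC_PIPE_BUF, but that is only meaningful for pipes and FIFOs – there’s no equivalent query for an arbitrary descriptor, which is exactly the gap I’m complaining about. A quick check:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0)
            return 1;
        /* PIPE_BUF: writes of up to this many bytes to the pipe are
         * atomic - but no such guarantee is queryable for other
         * kinds of descriptor. */
        long lim = fpathconf(fds[1], _PC_PIPE_BUF);
        printf("atomic write limit for this pipe: %ld bytes\n", lim);
        return 0;
    }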

I also notice, and I think this is particularly stupid, that if write() is interrupted by a signal before transferring any data, it returns -1 (with errno set to EINTR) rather than 0. If it transfers some data before being interrupted, it returns the amount of transferred data. Why make a special case out of 0?!! This forces increased complexity in the application, which cannot assume that the return from write() is equal to the number of bytes actually written, and for which -1 is in almost any other case an abortive error.
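In practice every careful caller ends up wrapping write() in a retry loop; here’s a sketch of the usual idiom (my own sketch, not from any particular codebase):

    #include <errno.h>
    #include <unistd.h>

    /* Write all 'count' bytes, retrying on EINTR and partial writes. */
    ssize_t write_all(int fd, const void *buf, size_t count)
    {
        const char *p = buf;
        size_t done = 0;

        while (done < count) {
            ssize_t n = write(fd, p + done, count - done);
            if (n < 0) {
                if (errno == EINTR)
                    continue;   /* the special case: -1, but nothing is wrong */
                return -1;      /* a genuine error */
            }
            done += (size_t)n;  /* partial write: carry on from here */
        }
        return (ssize_t)done;
    }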

Unfortunately there is no discussion of the topic I was most interested in: atomicity/ordering of reads and writes to regular files. Consider:

  1. Process A issues a “read()” call to read a particular part of a file. The block device is currently busy so the request is queued.
  2. Process B issues a “write()” request which writes to the same part of the file as process A requested.

The question is now: can the data written by process B be returned to process A, or must it be the data that was in the file at the time the read() call was issued? Also, is it allowed that process A might see part of the data that process B wrote, and part of what was in the file at the time of the read request?

Intel Xorg drivers in a state of chaos?

My primary desktop machine has Intel’s G33 graphics chipset. It’s fairly low-end but it’s fine for my needs. For ages now I’ve been running on Xorg server 1.6.0, Mesa 7.3rc2, and xf86-video-intel 2.6.3, and suffered only occasional X crashes, although I was certainly limited to running one X session at a time (I used to like running two, essentially using it as an alternative to the desktop-switchers that are common now; it was something I could live without, but it always bothered me that it didn’t work).

Just recently, a gcc 4.4.1 release candidate was tagged, so I decided to upgrade a few of these system components. Firstly, Xorg server 1.6.2, the latest stable release. I also updated my kernel to the latest 2.6.30.2. libdrm went to 2.4.12 (the latest). Mesa went to 7.4.4 (7.5 has been released as “stable”, with a warning that “you might want to wait for 7.5.1, or stick with 7.4 series for now”). And I built the xf86-video-intel driver version 2.8.0 (which supports only the UXA style of acceleration, and “really likes” Kernel Mode Setting, though it is still meant to function without it).

Results:

Without KMS: Ok, but when I run a 3D app (Xscreensaver’s “antmaze”) I just get a blank screen. Running a second X server gives me a corrupted display and promptly crashes, forcing me to reboot via a remote log-in.

With KMS: Ok, but no 3D still. When I switch to a different VT I get an instant freeze, no chance to even start another X server (though I can still do a remote log-in).

Neither of these was really acceptable, so I’ve gone back to the 2.7.1 xf86-video-intel driver. This driver supports the older “EXA” acceleration method, plus Intel’s own UXA method, which they seem to think is the truth, the light, and the way, but which has caused performance regressions and screen corruption in the past. (The even older “XAA” was ditched a while ago, I guess.) I’m not even going to bother trying KMS again, so I’m just testing EXA against UXA:

With UXA: Despite the fact that Intel seem to think UXA is the way of the future, it gave me only problems. Specifically, running antmaze gave a window which flickered constantly between what antmaze should look like, and what appeared to be a copy of my desktop (resulting in one of those crazy infinite recursion effects). I managed to get a screen shot, below.

With EXA: Everything is Ok. Performance sucks a bit, but at least everything looks like it should and I don’t get crashes. Even better, it seems I am now able to run two X servers on the same machine and switch between them. I won’t be surprised if I still get the odd X crash, but that’s really just the status quo.

[Screenshot: screen capture when using xf86-video-intel 2.7.1 with UXA acceleration]

Frankly, it bothers me a lot that the 2.8.0 driver has seriously regressed from 2.7.1. For quite a while now it’s been touch-and-go with the Intel drivers; I’ve seen quite a few serious problems in different releases.

Also, I’m totally unconvinced that UXA is the right solution for now. It may well have technical merits if done right, but it really seems like it would be better to get EXA working properly – and fast – first, and worry about other improvements (like UXA) once the EXA driver is in a stable state and is about as good performance-wise as it can be.

Hopefully, now that EXA has been abandoned (even if it was too early), the focus on UXA will cause it to stabilise and improve.

Update 31/08/09: I’m now running Xorg server 1.6.3 and xf86-video-intel 2.8.1, which has been released since I originally wrote this blog entry. I’m pleased to say that the crashes (non-KMS) are fixed and I can run two X servers side-by-side without problems. OpenGL was horribly slow and caused disk thrashing until I also upgraded to Mesa 7.5. With KMS there are still problems: when I switch away from the X server to a text-mode terminal, no mode-change happens – I just keep seeing the X screen; if I switch back to it (CTRL+ALT+F7) I can continue working in X. Exiting X also fails to restore text mode.

Update 29/09/2009: xf86-video-intel 2.9.0 has the same problem as 2.8.1. Reported to the Xorg bug database. Let’s see what happens.

Update 1/10/2009: Ha, the old “not a bug” response. I thought about fighting this, but didn’t want it to devolve into an “open bug” / “close bug” ping-pong match; anyway, to summarise:

  1. Loading fbcon kind of solves the problem; I can switch away from the X session. However, it turns my text-mode virtual terminals into framebuffer VTs (which I don’t want).
  2. Technically it shouldn’t be impossible to have what I want – that is, to have KMS and still have text-mode terminals. But it’s probably a kernel-side issue rather than anything to do with the xf86-video-intel driver. (I may even look and see if I can fix it myself, at some point; I can’t explain why, I just really like text terminals).
  3. I personally think “NOTABUG” is the wrong resolution: there is a documentation bug. The fbcon requirement should be documented – either in the X driver man page, or the README that comes with the source bundle, or in the linux kernel documentation (the help text for the i915 driver). “NOTOURBUG” may have been appropriate.
  4. For now I’ll stick with non-KMS (can we call it “userspace mode-setting”? I like that). Although I’m worried that UMS will disappear from the driver at some point in the future, just like EXA and DRI1, murdered in the night.

Subversive

Eclipse was the first “real” Java IDE I used and I’ve stuck with it for a while now. We migrated a project from CVS to Subversion a while back and, after a brief and highly unsuccessful fling with Subclipse (I’m sure it’s improved in the meantime, but I’ve never gotten around to trying it out again) I started using Subversive to provide Subversion support within Eclipse. Most of the basic things work but it has the following annoyances:

1. On my home machine, for some reason when I synchronize, it always asks for my username and password. Actually it doesn’t so much ask, seeing as the fields are already filled in and I just have to click “ok”, but it always presents the dialog.

2. On my work machine, on the other hand, it doesn’t do that. What it does instead is ask for the proxy password, despite the fact that I have entered the proxy password in the Eclipse preferences. Annoyingly, it asks for the password again from time to time as I do commits, synchronizes, whatever.

3. A whole lot of corner cases seem to be broken, especially with regard to tagging. For instance, if I choose a folder under trunk and select “New -> tag” (which incidentally is a really awkward menu choice; why can’t “tag” be one of the top-level options?) and then give it a new tag name, it creates the tag and puts the contents of the folder under the tag, but not the folder itself. If I create the folder under the tag first, and supply the combined tag name and folder name as the tag name (i.e. “TAG_NAME/folder_name”), it then DOES put the folder underneath the folder with the same name that I’ve already created (TAG_NAME/folder_name/folder_name)! WTF! To get it to work I have to create the tag without creating the folder, and then specify “TAG_NAME/folder_name” as the tag. It’s borked.

4. Some tag operations don’t automatically refresh the tree in the SVN repository browsing view, even though they should. (I can’t remember exactly which ones, but I did spend a lot of time today creating and removing tags while trying to solve the problem in [3].)

5. What the hell is the “ROOT” folder? It just seems to contain a copy of the repository underneath it. What’s the point?

6. What does “Add revision link” (in the context menu on a folder in the SVN repository browsing view) actually do?

C Compiler Benchmarks

Edit 1/5/09: I’ve kind of re-done this. I wasn’t happy with it and some of the results were incorrect. I’ve also added two new benchmark programs since the first version of this post.

Well, gcc 4.4.0 has been released now (with a few bugs, of course…) and I thought it’d be interesting to do some benchmarks, especially as there are now a few open source compilers out there (gcc, llvm, libfirm/cparser, pcc, ack; I’m not so interested in the latter two just now). I don’t have access to SPEC, so I cooked up a few little programs specifically designed to test a compiler’s ability to perform certain types of optimization. Eventually I might add some more tests and a proper harness; for now, here are the results (smaller numbers are better):

[Chart: benchmark results, smaller is better]

Thankfully, it looks like gcc 4.4.0 performs relatively well (compared to the other compilers) in most cases. Noticeable exceptions are bm5 (where gcc-3.4.6 does better), and bm3 and bm6 (where llvm-2.5 does better, especially in bm6).

llvm-2.5 does pull ahead in a few of the tests, but fails dismally in the 4th, which tests how well the compiler handles a naive memcpy implementation. Code generated by llvm-2.5 is more than four times slower than gcc-generated code in this case, which is ridiculously bad. If it weren’t for this result, llvm would be looking like the better compiler.
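For reference, by “naive memcpy” think of a plain byte-at-a-time loop along these lines (a stand-in I’m sketching here, not the actual benchmark source, which I haven’t posted):

    #include <stddef.h>

    /* The sort of loop a compiler should recognise and turn into
     * something fast (wide moves, or a call to the real memcpy). */
    void *naive_memcpy(void *dest, const void *src, size_t n)
    {
        unsigned char *d = dest;
        const unsigned char *s = src;

        while (n--)
            *d++ = *s++;
        return dest;
    }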

Sadly, none of the compilers I tested did particularly well at optimising bm2. In theory bm2 run times could be almost identical to bm3 times, but no compiler yet performs the necessary optimizations. This is a shame, because the potential speedup in this case is obviously huge.

It’s surprising how badly most compilers do at bm4, also.

So, nothing too exciting in these results, but it’s been an interesting exercise. I’ll post benchmark source code soon (ok, probably I won’t. So just yell if you want it).

MPlayer version 1.0… ever?

The guys who develop MPlayer have some serious hang-up about doing an actual 1.0 release. I mean, they’ve been through 1.0pre1, pre2, pre3, pre4, pre5, -rc1 (2006-10-22, that’s over 2 years ago) and -rc2 (2007-10-7, over 1 year ago), and still we don’t have a 1.0 release.

Now, there are mailing list posts like this one which say “it’ll be ready when it’s ready” and call for the user who dared venture forth with the question (of when the next rc or release might be forthcoming) to be the release manager. I mean, look, I understand that these guys are doing a lot of hard work developing MPlayer, and it is, after all, a pretty good product by open source standards (I mean, it has some actual documentation, for one thing) BUT geez, if you’re not going to actually release 1.0 in some reasonable timeframe then why have “release candidates”? Was -rc2 so bad that it can’t be called 1.0? (I mean, that is the point of a “release candidate”, right? It is the candidate for the release, yes?) And if -rc2 has problems, wouldn’t it be prudent to at least do another release candidate, i.e. rc3?

Sigh. I guess I’m complaining not so much about the absence of MPlayer 1.0 (although I would certainly like it to arrive) as about the fact that these “release candidates” exist and did so for so long without any actual release occurring. I mean, you shouldn’t have a release candidate if you’re not planning (or able) to do an actual release – it kind of gives the wrong impression. (Let’s be clear: the developers aren’t at fault; at least, not most of them. I’m just kind of irked at whoever had the bright idea to bundle a tarball and call it “MPlayer 1.0pre1” in the first place…)