POSIX write()

Take a look at:

http://www.opengroup.org/onlinepubs/000095399/functions/write.html

What a mess – different requirements for regular files, pipes, and “other devices supporting non-blocking operation”.  For pipes, there are reasons for this (atomic writes), but I think they should have been abstracted out (why can’t other devices have atomic writes? Why isn’t there a general mechanism to determine maximum size of an atomic write for a given file descriptor?).
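For pipes and FIFOs, POSIX does at least expose the atomic-write limit, via fpathconf() and _PC_PIPE_BUF; the gripe stands because no equivalent query exists for arbitrary descriptors. A minimal sketch (the wrapper name is mine, not anything standard):

```c
#include <unistd.h>

/* Query the largest write guaranteed to be atomic on a pipe or FIFO
 * descriptor. fpathconf()/_PC_PIPE_BUF is the pipe-only mechanism
 * POSIX provides; nothing similar exists for other file descriptors. */
long pipe_atomic_limit(int fd)
{
    return fpathconf(fd, _PC_PIPE_BUF);
}
```

POSIX guarantees the result is at least _POSIX_PIPE_BUF (512) bytes; on Linux it is typically 4096.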

I also notice, and I think this is particularly stupid, that if write() is interrupted by a signal before transferring any data it returns -1 (with errno set to EINTR) rather than 0. If it transfers some data before being interrupted, it returns the amount of data transferred. Why make a special case out of 0?!! This forces extra complexity into the application, which cannot assume that the return value of write() equals the number of bytes actually written, and which must treat -1, an abortive error in almost every other case, as potentially recoverable.
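The upshot is that every careful caller ends up wrapping write() in a retry loop that handles both EINTR and short writes. A sketch of such a helper (write_all is my name for it, not a POSIX function):

```c
#include <errno.h>
#include <unistd.h>

/* Keep calling write() until all of buf has been transferred,
 * retrying on EINTR and on partial writes. Returns count on
 * success, or -1 on a genuine error. */
ssize_t write_all(int fd, const void *buf, size_t count)
{
    const char *p = buf;
    size_t left = count;
    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR)
                continue;  /* interrupted before any data: retry */
            return -1;     /* real error */
        }
        p += n;            /* partial write: advance and retry */
        left -= n;
    }
    return (ssize_t)count;
}
```

Note how the loop has to distinguish the -1/EINTR case from the short-write case, which is exactly the complexity complained about above.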

Unfortunately there is no discussion of the topic I was most interested in: atomicity/ordering of reads and writes to regular files. Consider:

  1. Process A issues a “read()” call to read a particular part of a file. The block device is currently busy so the request is queued.
  2. Process B issues a “write()” request which writes to the same part of the file as process A requested.

The question is now: can the data written by process B be returned to process A, or must process A receive the data that was in the file at the time its read() call was issued? Also, is it allowed that process A might see part of the data that process B wrote, and part of what was in the file at the time of the read request?
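The scenario can be sketched in code. Here the parent plays process A (reading) while a forked child plays process B (overwriting the same region); the file name and region size are arbitrary choices for illustration:

```c
#include <fcntl.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Process B overwrites a region of a regular file ('A' bytes become
 * 'B' bytes) while process A reads it. POSIX leaves it unclear
 * whether A may observe a mixture of old and new bytes. Returns the
 * number of bytes read; stores the first and last byte seen. */
ssize_t race_demo(char *first, char *last)
{
    char old_data[4096], new_data[4096], buf[4096];
    memset(old_data, 'A', sizeof old_data);
    memset(new_data, 'B', sizeof new_data);

    int fd = open("racefile.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    pwrite(fd, old_data, sizeof old_data, 0);

    pid_t pid = fork();
    if (pid == 0) {                    /* "process B": the writer */
        pwrite(fd, new_data, sizeof new_data, 0);
        _exit(0);
    }
    ssize_t n = pread(fd, buf, sizeof buf, 0);  /* "process A" */
    waitpid(pid, NULL, 0);
    close(fd);
    unlink("racefile.tmp");

    if (n > 0) {
        *first = buf[0];
        *last = buf[n - 1];
    }
    return n;
}
```

Every byte read will be either ‘A’ or ‘B’; whether a single read() is permitted to return a mixture of the two is precisely the question the standard doesn’t answer.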



C Compiler Benchmarks

Edit 1/5/09: I’ve kind of re-done this. I wasn’t happy with it and some of the results were incorrect. I’ve also added two new benchmark programs since the first version of this post.

Well, gcc 4.4.0 has been released now (with a few bugs, of course…) and I thought it’d be interesting to do some benchmarks, especially as there are now a few open source compilers out there (gcc, llvm, libfirm/cparser, pcc, ack; I’m not so interested in the latter two just now). I don’t have access to SPEC, so I cooked up just a few little programs specifically designed to test a compiler’s ability to perform certain types of optimization. Eventually, I might add some more tests and a proper harness; for now, here are the results (smaller numbers are better):

[Chart: benchmark results]

Thankfully, it looks like gcc 4.4.0 performs relatively well (compared to other compilers) in most cases. Noticeable exceptions are bm5 (where gcc-3.4.6 does better), bm3 and bm6 (where llvm-2.5 does better, especially in bm6).

llvm-2.5 does pull ahead in a few of the tests, but fails dismally in the 4th, which tests how well the compiler manages a naive memcpy implementation. llvm-2.5 generated code is more than four times slower than gcc generated code in this case, which is ridiculously bad. If it wasn’t for this result, llvm would be looking like the better compiler.

Sadly, none of the compilers I tested did particularly well at optimising bm2. In theory bm2 run times could be almost identical to bm3 times, but no compiler yet performs the necessary optimizations. This is a shame because the potential speedup in this case is obviously huge.

It’s surprising how badly most compilers do at bm4, also.

So, nothing too exciting in these results, but it’s been an interesting exercise. I’ll post benchmark source code soon (ok, probably I won’t. So just yell if you want it).


MPlayer version 1.0… ever?

The guys who develop MPlayer have some serious hang-up about doing an actual 1.0 release. I mean, they’ve been through 1.0pre1, pre2, pre3, pre4, pre5, -rc1 (2006-10-22, that’s over 2 years ago) and -rc2 (2007-10-7, over 1 year ago) and still we don’t have a 1.0 release.

Now, there are mailing list posts like this one which say “it’ll be ready when it’s ready” and call for the user who dared venture forth with the question (of when the next rc or release might be forthcoming) to become the release manager. I mean, look, I understand that these guys are doing a lot of hard work developing MPlayer and it is, after all, a pretty good product by open source standards (I mean, it has some actual documentation, for one thing) BUT geez, if you’re not going to actually release 1.0 in some reasonable timeframe then why have “release candidates”? Was -rc2 so bad that it can’t be called 1.0? (I mean, that is the point of a “release candidate”, right? It is the candidate for the release, yes?) And if -rc2 has problems, wouldn’t it be prudent to at least do another release candidate, i.e. rc3?

Sigh. I guess I’m complaining not so much about the absence of MPlayer 1.0 (although I would certainly like it to arrive) but about the fact that these “release candidates” exist and did so for so long without any actual release occurring. I mean, you shouldn’t have a release candidate if you’re not planning (or able) to do an actual release – it kind of gives the wrong impression. (Let’s be clear. The developers aren’t at fault, at least, not most of them. I’m just kind of irked at whoever had the bright idea to bundle a tarball and call it “MPlayer 1.0pre1” in the first place…)

Multiple pkg-config incarnations?

Why the heck does ftp.gnome.org host (apparently two strains of) pkg-config?

http://ftp.gnome.org/pub/GNOME/sources/pkg-config/ (version 0.18 and 0.19)

http://ftp.gnome.org/pub/GNOME/sources/pkgconfig/ (version 0.8 through 0.18, though skipping 0.10)

It’s irresponsible to host out-of-date versions like that, as if pkg-config were part of the Gnome project and didn’t exist independently. At least put a README or something which mentions that this is a possibly-out-of-date mirror!

The real pkg-config distribution is here:

http://pkg-config.freedesktop.org/wiki/

with releases (including the current 0.23) here:

http://pkgconfig.freedesktop.org/releases/

Using $* in shell scripts is Nearly Always Wrong

(Bourne and Bash) Shell script writers bandy ‘$*’ about like it’s a stick of celery, but the truth is $* is nearly always the wrong thing to use. (For the clueless, $* expands to all the “positional parameters”, that is, all the parameters to the current script or function; it’s useful in cases where you want to pass the argument values for those parameters on to another program).

So why is $* wrong? Because its expansion is subject to word splitting. In other words, if any argument to your script contains a space or other whitespace, that one argument will become two (or more) words when $* is expanded. The following shell script can be used to verify this:

#!/bin/sh
# Call this script “mytouch”
touch $*

… Now try to “mytouch” a file with a space in its name (yeah, well done, you just created two files with one command).

Does quoting it work? Will “$*” solve your problems? It will fix the above example, right up to the point where you try to “mytouch” two or more files at once, and then you’ll see the fallacy (hint: “$*” always expands to a single “word”, with all the arguments joined together).

So what is the correct solution? It’s simple: use “$@” instead. Note, that’s $@, with quotes around it. Hence “$@”. This specifically expands to one string for each actual argument. It works. Hallelujah. (And while I’m blathering on shell scripts, note that any unquoted variable expansion is also Nearly Always Wrong for pretty much the same reasons).
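Putting that together, the corrected version of the earlier script is one character-pair away from the broken one:

```shell
#!/bin/sh
# Call this script "mytouch" (fixed version).
# "$@" expands to one word per original argument, so a filename
# containing spaces stays a single argument to touch.
touch "$@"
```

Running `mytouch "file with spaces"` now creates exactly one file, and `mytouch a b c` still creates three.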

The Joy of Wireless Networking with Linux

I just bought a cheap D-Link DWL-G120 “AirPlus” wireless adapter which connects to the PC via USB. I was hoping to either set it in AP (“Master”) mode and use it as an access point (so I can connect to my server from my MacBook wirelessly) or, failing that, in Ad-hoc mode; however, it seems that the prism (p54usb) driver in Linux supports neither mode. Not that this is actually documented anywhere or anything, no, that would be too helpful! You can actually set Ad-hoc mode using iwconfig, but then trying to bring up the interface (with “ifconfig wlan0 up”) generates:

SIOCSIFFLAGS: Operation not supported

Needless to say, that’s not very helpful, and there’s no dmesg output which gives any clue as to what the problem is. Your humble narrator spent several hours sifting through kernel source code before he discovered where the problem was (in drivers/net/wireless/p54/p54common.c, the p54_add_interface() function refuses anything but managed mode; no trivial fix).

I found a patch which apparently adds support for both Master and Ad-hoc modes, and though I managed to apply it against my 2.6.26.3 kernel tree, it doesn’t work and caused a kernel panic when I was messing around with it (so, yeah, not good really). It didn’t apply cleanly, so I assume that there might be some kernel tree out there with working Master mode for the p54, but if there is I can’t find it. The patch doesn’t seem to be applied in the wireless-next tree. Looks like I’ll just have to find a wireless router for the time being.

Links:

http://wireless.kernel.org/en/users/Drivers/p54 (yeah, ok, here it says that only Station mode (= Managed mode) works).

Edit 2008-09-01: Hmm. I now have a linksys router running (as an AP) and still can’t connect to it using the D-Link. Can’t be bothered to debug it just now, but I suspect the driver just Does Not Work at all (though it can list various networks it finds on the airwaves, so it works partially at least).

Apple Sues Again

A while ago I wrote an entry saying how Psystar, which appears to be a Florida-based company, had begun producing Mac clones which can run Mac OS X. These clones were never sanctioned by Apple, and there was some speculation at the time that Apple would fire its legal cannons at some point.

It seems that time has now come.

It will be interesting to see how this plays out. From what I can gather, Apple is arguing that trademark infringements have occurred and weakened Apple’s name. Without knowing much detail, this sounds pretty weak. On the other hand, Apple has a lot of money and a lot of lawyers. Also, as I’ve discussed previously, Apple software EULAs often specifically state that you may not use their software on non-Apple hardware – a term which may or may not be enforceable in the courts.

Only time will tell.

Edit 2008-8-7: Psystar has countersued Apple for anticompetitive business practices.