On the vagaries of init systems

When I started working on Dinit I had only a fairly vague idea of the particulars of various other init systems, being familiar mainly with Sys V init and to a lesser extent, Systemd and Upstart (the latter of which has more-or-less vanished off the face of the earth). At that stage it was a purely personal project and I didn’t necessarily count on making it public; as time went on I heard lots of complaints about Systemd, which has become the init system of choice for many distributions; I did a little research on some other systems – enough to satisfy myself that Dinit filled a worthwhile niche – and then made an announcement that I was planning to develop it into a(nother) complete init/service manager that could potentially compete with Systemd.

Around that time, I also wrote a short document trying to summarise the differences between a number of extant systems, or at least between them and Dinit, and included this in the documentation of Dinit (as part of the source tree). However, the time has perhaps come to write a more comprehensive treatment examining the differing design choices of various systems; hence, this post. Hopefully I can give an interesting overview of some design decisions that are made in a service manager, highlight specific features of various particular pieces of service management software, and give some incidental background on why I’ve made the choices I have in the design of Dinit (though I’ll try to keep this from being too Dinit-focused).

Recap: supervision system vs service manager vs system manager

The various terms – supervision, service manager, system manager – sometimes get thrown around a little loosely, but for my purposes here it’s better to have a clear distinction between them. Without further ado:

Supervision system: a process or means for supervising service processes, providing a means to start and terminate individual services and perhaps to automatically restart them if they terminate unexpectedly.

Into the category of supervision system falls the likes of daemon-tools, runit and S6. Note that a supervision system need not be made up of just a single process: it might supervise individual service processes using separate supervisor processes, for example. Also, an active “service” might not necessarily correspond to a running process (for example a “network” service could be made active by executing a script which terminates after the network interfaces are configured).

The next category is that of service manager:

Service manager: a process or means for starting or stopping services which have dependencies on, and dependents among, other services, such that the dependencies of a service must be started before the service itself is started, and the dependents of a service should be stopped before the service itself is stopped.

So, compared to a supervision system, this adds the concept of dependency management. Some might disagree that “service manager” should entail dependency handling, but for our purposes here it’s useful to have a convenient name for such a distinction, so we make the separation – dependency-handling service management versus individual service supervision.

Note that it may be possible to implement a service manager as an additional component on top of a separate supervision system – for example, S6-RC and Anopa both implement service management over the S6 supervision system.

This brings us to the final category:

System manager: a process (or processes) responsible for controlling system startup, shutdown, and other system-level actions.

A system manager typically has to arrange for the bring-up and stopping of services, which it may do by also being – or by delegating to – a supervision system or service manager. A system manager includes an init process which is launched by the kernel as the first userspace process at boot.

It’s worth noting at this point that, while a service manager built on a supervision system typically requires tight coupling with the underlying system – it needs to know the specific details of how to start and stop services, and to observe changes in service state – a system manager can, in comparison, maintain quite a loose coupling; it only needs to tell the supervision system (or service manager) to start and to stop, and can leave the handling of individual services to the supervisor’s care.

I should add that different systems use different terminology for what Systemd calls “units”, the basic concept of a thing that can be started and stopped and can have dependencies on other units. In Systemd terminology, a “service” and a “target” are different types of unit. Other systems just stick with “service” for everything, regardless of whether there’s a process or other functionality attached. The distinction isn’t particularly useful here, so I’ll use the terms unit, target, and service more-or-less as synonyms.

Pure supervision as service management

In my definitions above, I outlined the primary distinction between supervision systems and service managers as being a question of dependency management.

However, a system where services technically have interdependencies can work with a supervision system that doesn’t manage dependencies. In the most basic form, it’s possible to rely on the fact that a service will naturally fail if its dependencies are not satisfied; it should then be restarted (ideally with a gradually increasing delay) by the supervisor, until the dependency itself has become available.
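
To illustrate the idea of a gradually increasing delay: a supervisor might compute its restart back-off along the following lines. This is just a minimal sketch with arbitrary constants, not taken from any particular supervision system.

#include <algorithm>
#include <chrono>

// Sketch: compute the delay before the next restart attempt, doubling it
// after each successive failure but capping it at a maximum.
std::chrono::milliseconds restart_delay(int failure_count)
{
    const std::chrono::milliseconds initial {200};   // first retry fairly quickly
    const std::chrono::milliseconds maximum {10000}; // but never wait more than 10 seconds
    std::chrono::milliseconds delay = initial;
    for (int i = 1; i < failure_count && delay < maximum; ++i) {
        delay *= 2;
    }
    return std::min(delay, maximum);
}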

It may also be possible to explicitly start any dependencies as part of a service’s startup script (and optionally also stop known dependents as part of a stop script). The runit documentation suggests:

  • before providing the service, check if all services it depends on are available. If not, exit with an error, the supervisor will then try again.
  • optionally when the service is told to become down, take down other services that depend on this one after disabling the service.

Certainly this can work. Although in general checking for dependencies being available prior to starting is prone to a race condition (nothing prevents a dependency from stopping just after the check is made), this seems unlikely to be a common problem in practice.  In fact the joint technique outlined above allows a quite simple supervision system to provide much of the functionality associated with a service manager, provided that the dependencies are correctly encoded in the start/stop scripts.

However, that niggling race condition remains. For services which, for whatever reason, won’t behave as we want them to when dependencies are (or become) unavailable, this could potentially be problematic. Is it a stretch to claim that such services may in fact exist? Maybe it is, though I’m not particularly willing to vouch that various web app frameworks won’t lock themselves up if the DBMS becomes unavailable for a little too long, for example.

There’s also the fact that continuously polling to start services will consume system resources (only very little, if the “check for dependencies first” approach advocated by the runit documentation is followed; perhaps a significant amount if it’s not). It may also make noise in log files: service X can’t start, service X still can’t start, …, and so on. And a polling approach means that, when the dependencies of some service do become available, there may be a little delay before the service itself starts: the supervisor has to decide to try and start it again, and has no cue to do this other than the expiry of some timer. These by themselves are minor issues, of course.

One advantage of proper dependency-handling service management is that you can usually query the system for dependency information (“what other services will need to be started in order to start service X?”, “what is the total set of dependencies for service X?”, etc).
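
Answering “what is the total set of dependencies for service X?” amounts to a transitive-closure walk over the dependency graph. A rough sketch (the service type here is hypothetical, not taken from any particular implementation):

#include <set>
#include <stack>
#include <vector>

struct service {
    std::vector<service *> dependencies; // direct dependencies only
};

// Collect the full (transitive) set of dependencies of 'svc'.
std::set<service *> all_dependencies(service *svc)
{
    std::set<service *> result;
    std::stack<service *> pending;
    pending.push(svc);
    while (! pending.empty()) {
        service *cur = pending.top();
        pending.pop();
        for (service *dep : cur->dependencies) {
            // visit each dependency once, even with "diamond" dependency patterns
            if (result.insert(dep).second) {
                pending.push(dep);
            }
        }
    }
    return result;
}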

Laurent Bercot, S6-RC author, gives his own argument for dependency management:

The runit model of separating one-time initialization (stage 1) and daemon management (stage 2) does not always work: some one-time initialization may depend on a daemon being up. Example: udevd on Linux. Such daemons then need to be run in stage 1, unsupervised – which defeats the purpose of having a supervision suite.

This seems a fair point and a good example, though I’m not sure it would be impossible to supervise even udevd in a supervision-only system (even if it might require tweaking the existing systems a little).

I’m certainly in favour of dependency-managing systems (and of course Dinit is such a system), though I’m aware the arguments for it may sound a little wishy-washy, and to some degree it’s a matter of personal preference.

Complexity level of dependency relationships

Different service managers provide different dependency configuration options, with differing levels of complexity.

At the simplest end, S6-RC offers only a single type of dependency: that is, a service can depend on another, and will not start unless the other starts first. However, it appears to be unusual in this regard. Many systems have the concept of a soft dependency – one which should be started with a dependent, but whose failure should not cause the dependent to also fail. The “hard” and “soft” dependencies are termed differently in different systems (needs, requires, depends-on vs wants, waits-for).

The benefit of a soft dependency is essentially that you can enable a service without having its failure prevent your system from booting, via the rollback that would otherwise result (assuming that the system performs such rollback; discussion of the activation model and rollback is yet to come).

OpenRC has both a needs and a uses/wants relationship (“uses” vs “wants” in this case have different semantics depending on whether the dependency has been enabled in the current runlevel; most other service managers have largely done away with the concept of runlevels).

Nosh has requires and wants relationships, and separately supports start ordering relationships (before/after, indicating that another service’s start/stop should be ordered with respect to this service, even if there is no dependency between them). Nosh dependencies can be specified in both directions (this service requires that service, this service is required-by that service). It also has a conflicts relationship: if one service is started it can force another to stop, and vice versa.

Systemd is a law unto itself, with more dependency types than you can count on one hand; consider it as Nosh++ (though I believe Systemd came first, and Nosh borrowed from it, rather than the other way around). It’s not clear how commonly useful most of the dependency types are, though they were presumably implemented with reasons in mind.

For Dinit, I eventually opted for three dependency types: depends-on (requires), waits-for (wants), and depends-ms (depends as a milestone; the dependency must start for the dependent to start, but once started it effectively becomes a waits-for dependency). The latter, depends-ms, is of somewhat dubious value and may be removed if I cannot find a compelling scenario for it. In my eyes three dependency types (or even better, two) is a nice middle ground giving good functionality with relatively low complexity.
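
To summarise the intended semantics, here is a small sketch (hypothetical code, not Dinit’s implementation) of how the three types differ in what a dependency’s failure or stop means for its dependent:

enum class dependency_type {
    DEPENDS_ON, // hard: the dependency must start, and remain started, for the dependent
    WAITS_FOR,  // soft: start-up waits for the dependency, but its failure is ignored
    DEPENDS_MS  // milestone: must start for the dependent to start, then becomes soft
};

// Does failure of the dependency (while the dependent is starting) cause the
// dependent's own start to fail?
bool failure_blocks_start(dependency_type t)
{
    return t == dependency_type::DEPENDS_ON || t == dependency_type::DEPENDS_MS;
}

// Once the dependent has started, does the dependency stopping force the
// dependent to stop as well?
bool stop_forces_stop(dependency_type t)
{
    return t == dependency_type::DEPENDS_ON; // a milestone link has become soft by this point
}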

Systemd documentation mentions the common requirement for a dependent to start only once its dependency has properly started:

It is a common pattern to include a unit name in both the After= and Requires= options, in which case the unit listed will be started before the unit that is configured with these options.

I do not see any compelling reason for having ordering relationships without actual dependency, as both Nosh and Systemd provide for. In comparison, Dinit’s dependencies also imply an ordering, which obviates the need to list a dependency twice in the service description. (edit: a problem caused by separating ordering and dependency is described in this Systemd bug ticket).

Activation model of service managers

Suppose that we have two services – A and B – and that the first depends on the second. When A is started, B will also be started. The question is: what if A is then stopped?

There are two somewhat reasonable answers:

  1. Since the action was to start and stop a single service, the state of all services should return to what it was before either action. B should therefore stop, since it has not been explicitly started (i.e. rollback should occur naturally).
  2. Services should start, or stop, only when required to do so. Since B started when A was started, and has not been required to stop, it should not stop.

I believe that most systems take the 2nd approach, but Dinit takes the first (and tracks which services have been explicitly activated versus which have only started due to being required by a dependent).
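
A rough sketch of that tracking (hypothetical code, not Dinit’s actual implementation): each service records whether it was explicitly activated and how many started dependents currently require it, and it stops once neither reason to stay up remains.

#include <vector>

struct service {
    std::vector<service *> dependencies;
    bool explicitly_activated = false;
    int required_by = 0;  // number of dependents currently requiring this service

    void start(bool explicit_request) {
        // (duplicate starts and error handling omitted for brevity)
        if (explicit_request) explicitly_activated = true;
        for (service *dep : dependencies) {
            dep->required_by++;
            dep->start(false);      // started only because we require it
        }
        do_start();
    }

    void stop(bool explicit_request) {
        if (explicit_request) explicitly_activated = false;
        if (explicitly_activated || required_by > 0) return;  // still needed
        do_stop();
        for (service *dep : dependencies) {
            dep->required_by--;
            dep->stop(false);       // release the dependency; it stops if now unneeded
        }
    }

    void do_start();  // actually start this service
    void do_stop();   // actually stop this service
};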

I am not certain that either approach is definitely better than the other. The first provides a nice consistency for the scenario described (starting and then stopping a service will generally return the system to the original state), and avoids potentially leaving unneeded services running; the second on the other hand reduces overall service transitions.

Advocating for the first approach, one benefit is that it is simple to emulate runlevels. If you set up each runlevel as a service (target, unit) which depends on the services that should run in that runlevel, then you can “switch runlevels” by starting the new runlevel service and stopping the old one. There is no need to explicitly set any services to stop: if they are not required by the current active runlevel, they will stop anyway (although additional services can always be activated via an explicit command).

(Compare to Systemd’s approach to runlevels: it implements a separate command, “isolate”, to deactivate services not belonging to the new runlevel).

Also, with the first approach, boot failure is detectable as all services stopping without having received a shutdown command. That is, “boot” is a service with dependencies; if one of the necessary dependencies fails to start, “boot” will also fail, and at that point it releases all other (successfully started) dependencies, so that they then stop. There is no need to have “special” knowledge of the boot service, or to have a special failure case for that particular service. This is arguably just an implementation detail, though.

Now advocating for the second approach: consider the case of repeatedly attempting to start a service which has several dependencies, but which is failing due to a configuration issue: the administrator tries to start the service, and watches as its dependencies start and then stop again since the service itself failed to start. They then attempt to repair the configuration, but do not succeed, and on attempting to start the service again see the dependencies bounce up and then down a second time (let’s hope they get it right the third time…). This would be avoided with the second approach, since the dependencies would simply remain active when the service failed to start.

The problem described above could probably be avoided, even with the first approach, in various ways, but any solution would no doubt add a little more complexity to the system.

I personally still find the first model more natural and compelling – but again, it’s arguably just personal preference.

Special targets

Some systems have special targets with special semantics. Often certain targets are started to perform, or as part of, particular system actions: a shutdown target can be started when the system is to shut down, for example. Systemd has a large list of special targets, including targets that get created by Systemd when certain hardware is detected, and targets to represent mount points, which Systemd has special handling for.

Systemd also adds dependencies automatically to or from special targets. For the basic target:

systemd automatically adds dependency of the type After= for this target unit to all services (except for those with DefaultDependencies=no).

And for the dbus.socket unit:

A special unit for the D-Bus system bus socket. All units with Type=dbus automatically gain a dependency on this unit.

(The dbus unit is for launching the D-Bus daemon, and causes Systemd to connect to the bus after the unit starts. Systemd and D-Bus are somewhat intertwined; D-Bus has the ability to start service providers by communicating with Systemd, and Systemd exposes various services via D-Bus, as well as being able to determine that a service is ready via a D-Bus name becoming available).

Other service managers don’t tend to have as many special targets. Nosh documents a few in its system-control man page, but not as many as Systemd, and it has no special relationship to D-Bus for example. Dinit uses boot as the default service to start, but otherwise does not treat that service specially in any way; other design choices (such as the activation model) made special treatment unnecessary.

Service description/configuration mechanism

A number of supervision/service managers have gone with a “directory-per-service” approach (which I think perhaps was pioneered by daemon-tools? I’m not sure). In the directory you have a script used to run the service, some files which each contain a parameter setting, and perhaps a subdirectory containing links to dependencies. (That’s a broad stroke; many of the systems have subtle differences. S6-RC dependencies are listed one-per-line in a “dependencies” file for example). The benefit of having one setting per file is that it requires no parsing and makes the system simpler. The downside is that it is a little more awkward to review the whole service configuration at a glance (though tooling can help).

Other systems – including the venerable Sys V init, as well as OpenRC – simply have a script per service. In the case of OpenRC, the script (optionally) has a special interpreter, openrc-run, which offers dependency handling functions. Various metadata is extracted from the scripts (and cached in a separate database).

Dinit, and Systemd, both use a single file per service (“.ini” style). I find this more convenient for editing service descriptions generally; the downside is that parsing is required. In the case of Systemd running as system manager, this means parsing in the PID 1 process, which many would frown upon. I’m not convinced this is really a big problem (*); Dinit’s configuration parser is quite simple and has proved robust (in my own use) – though it’s worth noting that Dinit doesn’t demand that it runs as a system manager (PID 1), whereas Systemd does expect this (“Note that it is not supported booting and maintaining a full system with systemd running in --system mode, but PID not 1”).

(* edit: the “not a big problem” I was referring to here was parsing in general, not the parsing in Systemd, which has historically been problematic at times – though even that has, as best as I can tell, been significantly improved and become better tested).

S6-RC is unusual in that it requires the service descriptions to be compiled into a database. OpenRC, as mentioned, also stores service metadata separately to the service script, but only as a cache. In either case, I suppose it is potentially possible for the compiled data and the source to become inconsistent, though I doubt it is much of a problem in practice.

Monolithic vs modular process design

One question around the design of a supervision/service/system manager is, how many processes should make it up? A number of the smaller and simpler systems have gone for the approach of breaking things up into many processes. Taking S6-RC as a case in point, the service manager (S6-RC) is separate to the main supervision process (s6-svscan of S6), which in turn runs supervisor processes (s6-supervise), each of which finally runs a service process. Typically the service process is launched via an execline script, which allows calling various chain-loading subprograms to set up environment, UID/GID, etc.

The idea behind breaking things up this way is, essentially, that it allows each component to be small, simple, and “obviously correct”. There are those who argue that this approach fits the “unix philosophy” of “do one thing and do it well”. This is not an entirely bogus argument; by limiting the function of an individual program, it’s somewhat easier to make sure that the program is fundamentally correct.

On the other hand, composing multiple small programs into a more complex system still results in, well, a more complex system. If the functions of a system can easily be decomposed into separate processes, they can most likely be decomposed to individual modules within a single-process program as well. (And, having multiple processes comes with its own disadvantages: certain system-level functionality is only going to be possible to implement by communicating between modules; if the modules are separate processes, that means inter-process communication, and in general that’s going to increase complexity significantly. This might not prove to be a problem for a service manager, though, if the need for such communication is really limited).

The main point that I am trying to make is that breaking functionality into separate processes does not make the overall system any simpler. It may offer an advantage in terms of making it possible to use the individual components separately, but it’s not clear to me that this is really useful. Probably the main real benefit is, potentially, an increase in robustness: if one of your various sub-processes does crash, it won’t necessarily bring down the whole system.

Enter Systemd into the discussion. Systemd insists on incorporating not only service management and supervision into a single process, but system management as well: it wants to run the whole thing as PID 1, a process which, if it crashes, causes the kernel to panic (at least on Linux) and thus really does bring the whole system tumbling down. (Edit: to be fair, Systemd tries hard not to actually crash, but to catch eg SIGSEGV and go into a mode of limited operation which allows the system to function enough that you can sync filesystems before shutting down).

For Dinit, in comparison, I felt no concern about having just service management and supervision all in a single process. And in fact, Dinit does support running as a system manager, within the same process – but it does not require this; Dinit’s quite happy to act as a system-level service manager but have another process be the system manager. Additionally, Dinit is just generally far simpler than Systemd (as should be clear by now).

Some people are always going to prefer breaking things up into processes that are essentially as small as possible: I can understand this to an extent, I just don’t agree that it’s always a worthwhile goal, and I don’t think that Dinit suffers from being less modular than many of the alternatives.

Robustness and failure modes

The decision to write important system-level software in non-memory-safe languages such as C and C++ has been criticised. Yet, such software continues to be written in such languages (although certain other options such as Rust and Go have been gaining traction recently).

One of the systems I haven’t mentioned up to this point is GNU Shepherd; mainly, my concern is that it’s written in Guile, an interpreted (or bytecode-interpreted) language with garbage collection – and I see both the “interpreted” and “garbage collection” parts as undesirable for system-level software (especially for a potential init). Interpreted software will be less efficient – if not in actual speed, since I’ll acknowledge that JITs can do amazing things, then at least in memory usage – and garbage collection presents a similar issue. If the software were so complex that we couldn’t make it robust without using a memory-safe language/runtime – and if we weren’t willing to use Rust or another GC-less option for some reason – then perhaps the use of GC would be acceptable, but I don’t believe that’s actually the case; Dinit has so far proven to be robust, and even Systemd, despite early foibles, rarely actually crashes (even if it fails in other ways, as occasional rumbles on the web suggest).

A real concern of GC’d languages generally is, can programs in these languages be made resilient to out-of-memory conditions (are allocations even always explicit)? I haven’t looked closely enough at Shepherd to be able to pass comment, but I would not be surprised if it turned out that memory allocation failure is not something it is designed to handle (I’d be happy to be shown otherwise). Despite the low probability of an out-of-memory situation occurring, I still think it’s something that a service manager – and especially a system manager – needs to be able to deal with.

Conclusion

Well, that ends our tour of concerns. If you got this far – thanks for reading, and I hope it was interesting and informative. There are of course a lot of other aspects of service manager design – and some unique features of particular systems – but this article has gotten quite long already. Please feel free to add constructive comment, correction or discussion.

Escape from System D, episode V

Well, yes, I’m still working on Dinit, my portable and “lightweight” intended-as-an-alternative to Systemd. The first commit was on August 27, 2015 – just under three years ago – and my first announcement about Dinit on this blog was on June 14 last year. In looking up these dates, I’m surprised myself: I was working on Dinit for two years before I wrote the introductory blog post! It didn’t feel like that long, but it goes to show how long these things can take (when you’re working as a one-man development team in your spare time).

I recently issued a new release – 0.2.0, still considered alpha – with some new features (and bugfixes), and am planning a 0.3.0 release soon, but progress certainly has been slow. On the other hand, things really have come a long way, and I’m looking forward to being able to call the software “beta” rather than “alpha” at some point soon (though I suppose it’s an open question whether those terms really mean much anymore). One year in seems like a good time for a retrospective, so here it is; I’ll discuss a number of things that occur to me about the experience of developing some non-trivial software as a lone developer.

On software quality

One thing that’s always bothered me about open-source projects, although it’s not universally true, is that the quality isn’t always that great. There are a huge number of half-done software projects out there on Github (for example), but more importantly there are also a large number of 95% done projects – where they are basically working, but have a number of known bugs which have been sitting in the issue tracker for a year or more, and the documentation is mostly-correct but a bit out-of-date and some of the newer features aren’t mentioned at all. Build documentation is often seen as optional; you can always “just run ./configure --help” though of course it’s not entirely clear what all the options do or how they affect the result, and in my experience the chance that a configure script correctly checks for all the required dependencies is pretty low anyway.

Take the source of any major project, even an established one, and do a search for “TODO” and “XXX”, and the results are often a little disturbing. I try to avoid those in Dinit, though to be fair the count is not zero. There are some in Dasynq (the event-loop library which I’ve also released separately), and some in Dinit’s utility programs (dinitctl and shutdown), but at least there are none in the Dinit core daemon code. But keeping it that way means consistently going back over the code and fixing the things that are marked as needing fixing – or just avoiding creating such holes in the first place. By the time I release version 1.0 I’d like to have no TODO comments in any of the Dinit code.

Documentation is another thing that I’ve been very careful about. Whenever I add any feature, no matter how small, I make sure that the documentation gets updated in the same or the very next commit. I’m glad to say that the documentation is in really good shape; I plan to keep it that way.

Also, tests are important. I don’t enjoy writing them, but they are really the only way I can ensure that I don’t cause regressions when I make changes or add new features, and it is satisfying to see all those “PASSED” lines when I run “make check”. I still need to add more tests, though; some parts of the code, particularly the control protocol handling and much of the service description loading, don’t have tests yet.

On autoconf and feature checks and portability

Dinit doesn’t use autoconf and doesn’t have a “configure” script. Basic build settings like compiler and compiler switches are specified in a configuration file which must be hand-edited, though this process isn’t onerous and will generally take all of a whole minute. I wouldn’t be against having a script which would probe and determine those particular settings but I also don’t see a strong need for such a thing.

In terms of system call features, Dinit largely sticks to POSIX, and in the few cases where it doesn’t it uses an #ifdef (eg `#if defined(__FreeBSD__)`). The latter probably isn’t ideal, but the danger of feature checks for system calls is that they usually can only check for the existence of a function with a particular name, and not that it does what we need it to do. I think I’d rather require the build configuration to explicitly state that such-and-such a call is available with the right semantics than to just check that it exists and then blindly assume that it is what we think it is; simply checking for specific systems seems like a reasonable compromise, at least during development.

As it is now, if you run a current version of Linux, FreeBSD, OpenBSD or MacOS then you can build by editing a single file, uncommenting the appropriate section, and then running GNU make. I’ve also experimented briefly with building it on Sortix but ran into an issue that prevented me from getting it working.

On contributions (and lack thereof)

I’ve had one very minor contribution, from the one person other than myself who I know actually uses Dinit (he also maintains RPM packages of Dinit for Fedora and CentOS). I do sometimes wish that others would take an interest in the development of Dinit, but I’m not sure if there’s any way I can really make that happen, other than by trying to generate interest via blog posts like this one.

What I really should do, I guess, is clean up the presentation a bit – Dinit’s README is plain text, whereas a markdown version would look a lot more professional, and I really should create a web page for it that’s separate to the Github repository. But whatever I do, I know I can’t be certain that other contributors will step forward, nor even that more than a handful of people will ever use the software that I’m writing.

On burnout (and avoiding it)

Keeping the momentum up has been difficult, and there have been some longish periods where I haven’t made any commits. In truth, that’s probably to be expected for a solo, non-funded project, but I’m wary that a month of inactivity can easily become three, then six, and then before you know it you’ve actually stopped working on the project (and probably started on something else). I’m determined not to let that happen – Dinit will be completed. I think the key is to choose the right requirements for “completion” so that it can realistically happen; I’ve laid out some “required for 1.0” items in the TODO file in the repository and intend to implement them, but I do have to restrain myself from adding too much. It’s a balance between producing software that you are fully happy with – software that feels complete and polished – and actually getting it finished.

On C++

I’ve always thought C++ was superior to C and I stand by that, though there are plenty who disagree. Most of the hate for C++ seems to be about its complexity. It’s true that C++ is a complex language, but that doesn’t mean the code you write in it needs to be difficult to understand. A lot of Dinit is basically “C with classes (and generic containers)”, though I have a few templates in the logging subsystem and particularly in Dasynq. I have to be very careful that the code is exception safe – that is, there’s nowhere that I might generate an exception and fail to catch it, since that would cause the process to terminate (disastrously if it is running as “init”) – but this turns out to be easy enough; most I/O uses POSIX/C interfaces rather than C++ streams, and memory allocation is carefully controlled (it needs to be in any case).

I could have written Dinit in C, but the code would be quite a bit uglier in a number of places, and quite frankly I wouldn’t have enjoyed writing it nearly as much.

Of course there are other languages, but most of the “obvious” choices use garbage collection (I’d rather avoid this since it greatly increases memory use for comparable performance, and it often comes paired with a standard library / runtime that doesn’t allow for catching allocation failures). Rust might seem to be a potential alternative which offers memory safety without imposing garbage collection, but its designers made the unfortunate choice of having memory allocation failure cause termination – which is perhaps ok for some applications, but not in general for system programs, and certainly not for init. Even if it weren’t for that, Rust is still a young language and I feel like it has yet to find its feet properly; I’m worried it will mutate (causing maintenance burden) at a rate faster than the more established languages will. It also supports fewer platforms than C++ does, and I feel like non-Linux OSes are always going to be Rust’s second-class citizens. Of course I hope to be proved wrong, but the panic-on-OOM issue still makes Rust a non-starter for this particular project.

On Systemd

Even when I announced Dinit after working on it for some time I struggled to explain exactly why I don’t like Systemd. There have been some issues with its developers’ attitudes towards certain bugs, and their habit of changing defaults in ways which broke established workflows and generally caused problems that many saw as unnecessary (the tmux/screen issue for example), but few specific technical issues that couldn’t be classified as one-off bugs.

I think what really bothers me is just the scope of the thing. Systemd isn’t an init system; it’s a software ecosystem, a whole slew of separate programs which are designed to work together and to manage various different aspects of the system, not simply just manage services. The problem is, despite the claims of modularity, it’s somewhat difficult to separate out the pieces. Right from the start, building Systemd, you have a number of dependencies and a huge set of components that you may or may not be able to disable; if you do disable certain components, it’s not clear what the ramifications might be, whether you need to replace them, and what you might be able to replace them with. I’d be less bothered if I could download a source bundle just for “Systemd, the init daemon” and compile that separately, and pick and choose the other parts on an individual basis in a similar way, but that’s just not possible – and this is telling; sure, it’s “modular” but clearly the modules are all designed to be used together. In theory you may be able to take the core and a few select pieces but none of the distributions are doing that and therefore it’s not clear that it really is possible.

Also, I think it’s worth saying that while Systemd has a lot of documentation, it’s not necessarily good documentation. For example (from here):

Slices do not contain processes themselves, but the services and slices contained in them do

Is it (a) slices do not contain processes or (b) slices do contain processes?

This is just one example of something that’s clearly incorrect, but I have read much of the Systemd documentation a number of times and still struggled to find the exact information I was looking for on any number of occasions. And if you’re ever looking for details of internals / non-public APIs – good luck.

Regardless of whether Systemd’s technical merits and flaws are real, having another option doesn’t seem like a bad thing; after all, if you don’t want to use it, you don’t have to. I’m writing Dinit because I see it as what Systemd could have been: a good and reliable standalone service manager with dependency management that can function as a system init.

On detractors and trolls

I guess you can’t take on something as important as an init system and not raise some eyebrows, at least. Plenty of comments have been made since I announced Dinit that are less than positive:

(for the record, not trolling, not a newbie – if that is even a bad thing. And it is both stable and crossplatform).

Or this one:

(If you say so, though I can see some irony in accusing someone of hubris and then immediately following up with a tweet essentially claiming that you yourself are the only person in the world who understands how to do multi-process supervision).

Maybe I brought the last one on myself to some degree by saying that I was aware I could be accused of NIH and that I didn’t care – I was trying to head off this sort of criticism before it began, but may have inadvertently had the opposite effect.

Then, there’s the ever-pleasant commentary on hacker news:

>I’m making an init system

Awesome, maybe I won’t have to!

>C++

Whelp, nevermind.

(Dear Sir_Cmpwn of hacker news: I am quietly confident that my real init system written in C++ is better than your vapour-ware init system that is written in nothing).

And of course on Reddit:

> It will be both efficient and maintainable. It will be stable. Solid-as-a-rock stable.

Author does not have any tests whatsoever and uses a memory unsafe language. I don’t see how he wants to achieve the above goals.

(I know that it is difficult to believe, but truly, it is possible to write tests after you have written other code).

Anyway, this is the internet; of course people will say bad (and stupid) things. There were plenty of positive comments too, such as this one from hacker news:

I’m not a detractor, but there are many things systemd can still improve, but it feels we’re kind of stuck. I’m quite happy if we have some competition here.

Yes! Thank you. There were also some really good comments on my blog posts, and some good discussion elsewhere including on lobste.rs. Ultimately I’ve had probably as much positive as negative feedback, and that’s really helped to keep the motivation up.

The worst thing is, I’ve been guilty of trash-talking other projects myself in the past. I’ve only done so when I thought there were genuine technical issues, and usually out of frustration from wanting software to be better, but that’s no excuse; it doesn’t feel good when someone says bad things about software (or other work) that you created. If only one good thing comes from writing Dinit, it’s that I’ve learned to rein in my rants and focus on staying objective when discussing technical issues.

I guess that’s about a wrap – thanks for reading, as ever. Hopefully next time I write about Dinit it’ll be to report on all the great progress I’ve made since now!

A quick update on Dinit

I’ve been very busy lately, though have managed to spend quite a bit of time coding on Dinit, and of course I released Dasynq which forms the “backbone” of Dinit, in a sense, by providing a robust event loop library. I don’t want to write a major article right now and in truth probably don’t have the content to do so, so just a quick point-by-point update:

  • As mentioned, Dasynq 1.0 was released, and there have since been (ahem) 4 minor bugfix releases.
  • The basic service management functionality of Dinit is largely complete; it supports the dependency types I determined were needed; it handles process supervision pretty well (most recently, I implemented start and stop timeouts, which are configurable per-service). Running services under different UIDs is not yet supported but should be trivial to add.
  • However, the major thing still missing is the ability to modify services on-the-fly, or even unload/reload service definitions for services that aren’t running. That’s a priority, but it will not be trivial.
  • On the other hand, system boot and shutdown/restart are handled pretty well. Dinit has been the init system of my desktop PC for many months now.
  • One significant milestone was reached: I got my first pull request for Dinit. It was small, but it showed that at least there is someone out there who is following progress, and it came as a pleasant surprise.
  • There are plenty of other rough edges. There’s no way to specify initial environment, either per-service or globally – that shouldn’t be hard to do but I’ve been putting it off (it’d make a good task for a new contributor, hint hint…). I’d like to separate PID 1 (the actual init) from the service manager, or at least make it a supported option to do so. Cgroups, namespaces and jails aren’t supported yet. There is only a “poor man’s” version of socket activation. And so on.
  • Even with all that, we’re a long way from the full functionality of Systemd. That might be a good thing, though. The plan has pretty much always been to delegate parts of that functionality to other packages. The goal is to provide, together with other packages, a replacement that’s capable of running a desktop with all of the important functionality available.
  • The test suite has improved a lot, and I put a lot of effort into mocking system interfaces for the purpose of testing. That’s starting to pay off, and the number of tests is rapidly increasing. Of course the downside is that writing tests takes time away from adding functionality, but in the end it’s certainly a win to have a comprehensive test suite.
  • I spent some time recently looking a bit more closely at both Nosh and S6-RC, two service managers which can function as or cooperate with an init system. Both are pretty decent, and both are in a more complete state than Dinit, although Dinit is catching up reasonably fast and I believe Dinit offers at least some functionality that these lack. One idea I might need to borrow from these is the concept of chaining processes together (so a logging process can be run separately to the service process, but the file descriptors that tie them together can be maintained by Dinit so that you can potentially re-start either process with minimal risk of losing log messages).
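
As a rough sketch of that last idea (hypothetical code, not Dinit’s implementation): the manager creates the pipe itself and holds on to both ends, so either the service or its logger can be restarted and re-attached without the pipe – and any output buffered in it – being lost.

#include <unistd.h>

// Sketch: the manager owns both ends of the pipe connecting a service's
// output to its logging process.
struct log_pipe {
    int write_end = -1; // given to the service as stdout/stderr
    int read_end = -1;  // given to the logging process as stdin

    bool create() {
        int fds[2];
        if (pipe(fds) == -1) return false;
        read_end = fds[0];
        write_end = fds[1];
        return true;  // the manager keeps both descriptors open from now on
    }

    // In the forked child that will become the service process:
    void attach_service_output() {
        dup2(write_end, STDOUT_FILENO);
        dup2(write_end, STDERR_FILENO);
    }

    // In the forked child that will become the logging process:
    void attach_logger_input() {
        dup2(read_end, STDIN_FILENO);
    }
};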

That’s about all I’ve got to say for the moment. Hopefully I can find some time to craft a longer blog post next time, and with some more interesting news to share. Thanks for reading and questions/comments welcome as always!

Introducing Dasynq

For someone looking at the rate of commits being pushed to Dinit, it might appear that development has halted. The good news is that this isn’t really the case; instead of working directly on Dinit, I’ve been working on a sub-project that came out of Dinit’s development. Allow me to introduce: Dasynq, the C++ event-loop library for robust clients!

The Background Story

Dinit, as an init system / service manager, needs to be able to respond to several different types of external event:

  • It needs to know when child processes have terminated, so that it can log the termination and either restart the service or continue to shut down its dependencies, as appropriate
  • It needs to respond to signals which control its operation
  • It needs to receive and respond to requests coming over a socket connection, to allow service control
  • It needs to monitor timeouts so that a process which is taking too long to start or stop can be dealt with appropriately.

These requirements aren’t specific to service managers and in fact many programs, particularly network servers, need to be able to deal with a similar set of events. Typically an event-loop library is used to manage this; such a library allows monitoring a range of event types, and specifying callbacks to run when the events are detected. Most event-loop libraries use modern OS facilities such as kqueue or epoll as a back-end event delivery mechanism; in order to be able to offer some more advanced functionality such as event priorities, an event-loop library typically inserts received events in a queue rather than delivering them to the application immediately as they are detected.
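
As a minimal sketch of that queueing idea (generic code, not any particular library’s implementation): each received event carries a priority, and the loop always dispatches the most urgent pending event first.

#include <functional>
#include <queue>
#include <vector>

// Events received from the back-end (epoll/kqueue/etc) are queued rather than
// dispatched immediately, so that higher-priority events are delivered first.
struct pending_event {
    int priority;                    // smaller value = more urgent
    std::function<void()> callback;  // the watcher's callback
};

struct later_priority {
    bool operator()(const pending_event &a, const pending_event &b) const {
        return a.priority > b.priority;  // order so the smallest priority value is on top
    }
};

using event_queue = std::priority_queue<pending_event, std::vector<pending_event>, later_priority>;

void dispatch_pending(event_queue &queue)
{
    while (! queue.empty()) {
        queue.top().callback();
        queue.pop();
    }
}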

When I started writing Dinit, my initial prototype used Libev, an event-loop library which is cross-platform, efficient and well-documented. It was good enough to get started with, but for an init system it had one glaring deficiency: insufficient support for error handling. In fact, the usual response of libev to encountering an error is to abort() the entire process, and there is no way to make the relevant functions return an error code instead. I began to look for a replacement. There were other event libraries, such as the venerable Libevent and the more recent Libuv, which improved error handling to the point that they could actually return error codes: but I wanted something better. Specifically, I wanted to know that certain operations could not fail, not just that I could meaningfully detect their failure.

Consider the case of a timer. If we have a service running as a process and receive a stop command for the service (perhaps as part of a system shutdown), we can send the process a signal – such as SIGTERM – requesting it to stop. But, we want to give it a reasonable time limit to respond to this signal, in case it has hung; so, we start a timer, and on expiry of the timer we can send SIGKILL in order to finish off the hung process. The issue is that, when using these existing event-loop libraries, the action of starting a timer can fail (for instance, due to resource limitations); this would leave us in the awkward position of not being able to time the process shutdown, and unless we take drastic action such as sending SIGKILL immediately, it potentially hangs the whole shutdown process.

Another example: event loops allow us to monitor the status of child processes, so we can detect when they terminate. However, in other event-loops, adding a watcher for a child process is a function that can fail. Again, this would leave us in an awkward position; we could terminate the child immediately, but it would be much better if we could have the ability to add a child watcher with no failure mode, or at least prevent forking the child if we could detect the current inability to add a watch for it.

The Birth of Dasynq

So, I set about writing Dasynq to address these issues. With Dasynq, you can pre-allocate timers and child process watchers, so that arming a timer or adding a child watch is an operation that simply cannot fail. Enabling and disabling I/O watchers, similarly, cannot fail.
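
The underlying idea is to do all allocation at the point the watcher is registered – where a failure can still be reported and handled cleanly – so that arming it later cannot fail. A simplified sketch of the pattern, using a hypothetical timer type; this is not the actual Dasynq API:

#include <chrono>
#include <signal.h>
#include <sys/types.h>

// Hypothetical timer type: all resources (heap node, queue slot, kernel timer)
// are acquired in add_timer(), which may fail; once added, arm() and disarm()
// cannot fail, so they are safe to call during a stop or shutdown sequence.
class stop_timer {
public:
    bool add_timer();                                 // may fail: allocates everything up-front
    void arm(std::chrono::seconds timeout) noexcept;  // cannot fail
    void disarm() noexcept;                           // cannot fail
};

// Stopping a service process: the SIGKILL deadline can always be set, because
// the timer for this service was pre-allocated when the service was loaded.
void begin_stop(pid_t service_pid, stop_timer &timer)
{
    kill(service_pid, SIGTERM);           // politely ask the process to stop
    timer.arm(std::chrono::seconds(10));  // on expiry, the timer callback sends SIGKILL
}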

At the same time, I addressed what I saw as some shortcomings in some of the other event-loop libraries (note that some of these apply to some libraries; they do not all apply to all libraries):

  • They did not allow setting timers against the system clock (the clock that potentially jumps when it is corrected by the user). This arguably shouldn’t be a common concern in this age of NTP-by-default configurations, but I still consider it a shortcoming.
  • They use bad time representations; Libev for instance uses floating-point values to represent absolute time, which I consider an inherently bad idea. (edit: to be fair, though, a ‘double’ as used by Libev is fine for hundreds of years unless you need better than microsecond precision).
  • They had limited, or no, support for prioritising certain events over others.
  • They had limited support for multi-threaded applications.

Some of these were not a concern for Dinit, but I saw them as general shortcomings which could and should be addressed. And so I created Dasynq, and I’m now using it in Dinit. However, it’s fully documented, and should be usable in a range of other projects, too! As usual, feedback is welcome.

(Edit: I didn’t include boost::asio in any of the discussion above, mainly because it lacks a lot of the functionality that is present in the other event loops – such as POSIX signals, and child process watches – but also because I have concerns about the API it presents; of course it also retains the failure modes that formed my original motivation for creating Dasynq).


For more information about Dasynq, check the website or the Github repository.

Let’s Talk about Service Dependencies

(aka: Escape from System D, part IV).

First: anyone who’s been keeping tabs will have noticed that there hasn’t been a lot of progress on Dinit recently; this has been due to multiple factors, one being the hard disk drive in my laptop dying and this impeding my ability to work on the train to and from work, which is when I usually found time to work on Dinit. However, I’ve by no means abandoned the project, will hopefully have a replacement laptop soon, and expect the commits to resume in due course (there have been a small number made recently, in fact).

In this post I wanted to discuss service dependencies and the pros and cons of managing them in slightly different ways. In an earlier post I touched on the basics of service management with dependencies:

if one service needs another, then starting the first should also start the other, and stopping the second should also require the first to stop.

It’s clear that there are two reasons that a service could be running:

  1. It has been explicitly started, or
  2. It has been started because another service which depends on it has been started.

This is all very well, but in the 2nd case, there’s an open question about what to do when the dependency service stops. There are two choices in this regard:

  1. A started service remains running when its dependencies stop, even if the service has not itself been explicitly started, or
  2. A started service automatically stops when its dependencies stop (unless it has itself been explicitly started).

Which is the better option? The first option is probably simpler to implement (it doesn’t require tracking whether a service was explicitly started, for instance); the second option, though, has the nice properties that (a) it doesn’t keep unneeded services running and (b) explicitly starting and then stopping a service will return the system to the original state (in terms of which services are running). Also, if you want to emulate the concept of run levels (which essentially describe a set of services to run exclusively), you can do so easily enough; switching run level is equivalent to explicitly starting the appropriate run level service and stopping the current one.

(Systemd makes a distinction between service units, which describe a process to run, and target units, which group services. However, I’m not sure there’s a real need for this distinction; services can depend on other services anyway, so the main difference is that one has an individual associated process and the other doesn’t. Indeed Systemd’s systemctl isolate command can accept a service unit, although it expects a target unit by default. Dinit on the other hand makes no real distinction between services and targets at this higher level.)

There are some complications, though, which necessarily add complexity to the service model described above. Mainly, we want some flexibility in how dependency termination is handled. The initial “boot” service, for instance, probably shouldn’t stop (and release all its dependencies as a result) if a single dependency (let’s say the sshd server, for example) terminates unexpectedly; similarly, we wouldn’t necessarily want boot to be considered failed if any of a number of certain dependency services failed to start. On the other hand, for other service/dependency combinations, we might want exactly that: if the dependency fails then the dependent also fails, and if the dependency stops then the dependent also stops.

Other problems we need to solve:

  • It may be convenient to have persistent services that remain started once they have been started (due to a dependent starting), even when the dependent stops. For instance, if we have a service which mounts the filesystem read/write (from read-only) it’s probably convenient to leave it “running” after it starts, since undoing this is complicated and may be error-prone.
  • Boot failure needs a contingency; it should be possible to configure what happens if some service essential for boot fails (whether it be to start a single-user shell, reboot, power off, or simply stop with an error message).

With all the above in mind, I’ve narrowed down the necessary dependency types as follows:

  • regular – the dependency must start before the dependent starts, and if the dependency stops then the dependent stops.
  • soft – the dependency starts (in parallel) with the dependent, but if it fails or stops this does not affect the dependent. It’s not precisely clear that this dependency type is necessary in its own right, but it forms the basis for the following two dependency types.
  • waits-for – as for soft, but the dependent waits until the dependency starts (or fails) before it starts itself.
  • “milestone” – The dependency must start before the dependent starts, but once the dependent has started, the dependency link becomes soft. This is different from “waits-for” in that if the dependency fails, the dependent will not start.

This is what I’m currently implementing (up until now, only “regular” and “waits-for” dependencies have been supported by Dinit).
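
To make the stop-propagation rules concrete, here is a rough sketch (hypothetical code, not Dinit’s actual implementation) of how a stopping service decides which of its dependents must stop with it:

#include <vector>

enum class dep_type { REGULAR, SOFT, WAITS_FOR, MILESTONE };

struct service;

struct dependency {
    service *dependent;    // the service holding the dependency
    service *depended_on;  // the service being depended upon
    dep_type type;
};

struct service {
    std::vector<dependency *> dependents;  // links from services which depend on this one
    bool started = false;
    void stop();  // stops this service (and propagates further in turn)
};

// Decide which dependents must stop when 'stopping' goes down.
void propagate_stop(service *stopping)
{
    for (dependency *link : stopping->dependents) {
        bool forces_stop = false;
        switch (link->type) {
        case dep_type::REGULAR:
            forces_stop = true;  // hard link: the dependent always stops too
            break;
        case dep_type::MILESTONE:
            forces_stop = ! link->dependent->started;  // binding only until the dependent has started
            break;
        case dep_type::SOFT:
        case dep_type::WAITS_FOR:
            forces_stop = false;  // soft links never force the dependent down
            break;
        }
        if (forces_stop) {
            link->dependent->stop();
        }
    }
}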

For the boot failure case, Dinit currently starts the service named “single” (i.e. the single-user service); however, some flexibility / configurability might be added at a later date.

For next time

There are a lot of things that I want write about and implement, and though finding the time has been increasingly difficult lately I’m hoping things will calm down a little over the next few months.

One thing I really need to do is look again, properly, at some of the other supervision/init systems out there. There are two motivations for this: one, determining whether Dinit is really necessary in its own right –  that is, can any of the existing systems do everything that I’m hoping Dinit will be able to, and would it make sense to collaborate with / contribute to one of them? In particular s6 and Nosh are two suites which seem like they are well-designed and capable. (Note that I don’t envisage stopping work on Dinit altogether, and don’t feel like availability of another quality init system is going to be a bad thing).

There’s still a lot more work that needs to be done with Dinit, too. Presently it’s not possible to modify loaded service definitions (including changing dependencies) which is certainly a must-have-for-1.0 feature, but that’s really just the tip of the iceberg. At some point I’d like to create a formal list of what is needed to truly supplant Systemd in the common Linux software ecosystem. Completing the basic Dinit functionality remains a priority for now, however.

Thanks for reading and, as always, constructive comments are welcome.

Safety and Daemons

(aka. Escape from System D, part III).

So Dinit (github) is a service manager and supervisor which can function as an init process. As I’ve previously discussed, an init needs to be exceptionally stable: if it crashes, the whole system will come down with it. A service manager which manages system services, though, also needs to be stable, even if it’s not also running as an init: it’s likely that a service manager failure will cause parts of the system to stop working correctly.

But what do we mean by stable, in this case? Well, obviously, part of what we mean is that it shouldn’t crash, and part of that means we want no bugs. But that’s a narrow interpretation and not a useful one; we don’t really want bugs in any software. A big part of being stable – the kind of stable we want in an init or service manager – is being robust in the face of resource scarcity. One resource we are concerned about is file descriptors, and one of the most obvious is memory. In C, malloc can fail: it returns a null pointer if it cannot allocate a chunk of the requested size – and this possibility is ignored only at some peril. (One class of security vulnerability occurs when a program can be manipulated into attempting allocation of a chunk so large that the allocation will certainly fail, and the program fails to check whether the allocation was successful).

Consider now the xmalloc function, implementations of which abound. One can be found in the GNU project’s libiberty library, for example. xmalloc behaves just like malloc except that it aborts the program when the allocation fails, rather than returning a null pointer. This is “safe” in the sense that it prevents program misbehaviour and potential exploits, although it is sometimes less than desirable from an end-user perspective. In a service manager, it would almost certainly be problematic. In an init, it would be disastrous. (Note that in Dinit, it is planned to separate the init process from the service manager process. Currently, however, they are combined).
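
For illustration, the xmalloc pattern amounts to something like the following – a sketch of the general idea only, not the actual libiberty implementation:

#include <cstdio>
#include <cstdlib>

// like malloc, but aborts the program instead of returning null on failure
// (sketch only; not the libiberty implementation)
void * xmalloc(std::size_t size)
{
    void * r = std::malloc(size);
    if (r == nullptr) {
        std::fputs("out of memory\n", stderr);
        std::abort();
    }
    return r;
}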

So, in Dinit, if a memory allocation fails, we want to be able to handle it. But also, importantly, we want to avoid (as much as possible) making critical allocations during normal operation – that is, if we could not proceed when an allocation failed, it would be better if we avoided the need for allocation altogether.

How Dinit plays safe

In general Dinit tries to avoid dynamic memory allocation when it’s not essential; I’ll discuss some details shortly. However, there’s another memory-related resource which can be limited: the stack. Any sort of unbounded recursion potentially exhausts the stack space, and this form of exhaustion is much harder to detect and deal with than regular heap space exhaustion. The simplest way to deal with this is to avoid unbounded recursion, which Dinit mostly does (there is still one case that I know of remaining – during loading of service descriptions – but I hope to eliminate it in due course).

Consider the process of starting a service. If the service has dependencies, those must be started too, and the dependencies of those dependencies must be started, and so on. This would be expressed very naturally via recursion, something like:

void service::start() {
    // recursively start all dependencies before starting this service:
    for (auto dep : dependencies) {
        dep->start();
    }
    do_start(); // actually start this service
}

(Note this is very simplified code). However, we don’t want recursion (at least, we don’t want recursion which uses our limited stack). So instead, we could use a queue allocated on the heap:

void service::start() {
    // (uses std::queue from <queue> and std::stack from <stack>;
    // either may throw std::bad_alloc on out-of-memory).
    // start with a queue containing this service,
    // and an empty (heap-allocating) stack:
    std::queue<service *> start_queue;
    std::stack<service *> start_stack;
    start_queue.push(this);

    // for each dependency, add to the queue. Build the stack:
    while (! start_queue.empty()) {
        for (auto dep : start_queue.front()->dependencies) {
            start_queue.push(dep);
        }
        start_stack.push(start_queue.front());
        start_queue.pop();
    }

    // start each service in reverse dependency order:
    while (! start_stack.empty()) {
        start_stack.top()->do_start();
        start_stack.pop();
    }
}

This is considerably more complicated code, but it doesn’t implicitly use our limited stack, and it allows us to catch memory space exhaustion (via the std::bad_alloc exception, which is thrown from the queue and stack allocators as appropriate). It’s an improvement (if not in readability), but we’ve really just traded the use of one limited resource for another.

(Also, we need to be careful that we don’t forget to catch the exception somewhere and handle it appropriately! An uncaught exception in C++ will also terminate the program – so we essentially get xmalloc behaviour by default – and because of this, exceptions are arguably a weakness here; however, they can improve code readability and conciseness compared to continually checking for error status returns, especially in conjunction with the RAII paradigm. We just need to be vigilant in checking that we always do catch them!).
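
To make this concrete, a call site might contain something like the following (a sketch only: the_service and log_error are placeholder names for illustration, not Dinit’s actual API; std::bad_alloc is declared in <new>):

// (illustrative only: the_service and log_error are placeholders)
try {
    the_service->start();
}
catch (std::bad_alloc &) {
    // could not grow the start queue/stack; fail this request rather
    // than letting the exception terminate the service manager
    log_error("out of memory while trying to start service");
}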

Edit: incidentally, if you’re thinking that memory allocation failure during service start is a sure sign that we won’t be able to launch the service process anyway, you’re probably right. However, consider service stop. It follows basically the same procedure as start, but in reverse, and not being able to stop services in a low-memory environment would clearly be bad.

We can improve further on the above: note that while the service dependency graph is not necessarily a tree, we only need to start each dependency once (the above code doesn’t take this into account, potentially issuing do_start() to the same service multiple times if it is a dependency of multiple other services). Given that a service need only appear in start_queue and start_stack once, we can actually manage those data structures as linked lists where the node is internal to the service (i.e. the node doesn’t need to be allocated separately).

For example, service might be defined as something like:

class service {
    std::string name;
    std::list<service *> dependencies;
    // (other details)
    // intrusive queue/stack membership: the flags and links live inside
    // the service object itself, so no separate node allocation is needed:
    bool is_in_start_queue = false;
    bool is_in_start_stack = false;
    service * next_in_start_queue = nullptr;
    service * next_in_start_stack = nullptr;
public:
    void start();
    void do_start();
};

Now, although it requires extra code (again) because we can’t use the standard library’s queue or stack, we can manage the two data structures without performing any allocations. This means we can rewrite our example start() in such a way that it cannot fail (though of course in reality starting a service requires various additional steps – such as actually starting a process – for which we can’t absolutely guarantee success; however, we’ve certainly reduced the potential failure cases).

In fact, in Dinit a service can be part of several different lists (technically, order-preserving sets). I wrote some template classes to avoid duplicating code to deal with the different lists, which you can find in the source repository. Using these templates, we can rewrite the example service class and the start() method, as follows:

class service {
    std::string name;
    std::list<service *> dependencies;
    // (other details)
    lld_node<service> start_queue_node;
    lls_node<service> start_stack_node;
public:
    void start();
    void do_start();

    static auto &get_startq_node(service *s) {
        return s->start_queue_node;
    }
    static auto &get_starts_node(service *s) {
        return s->start_stack_node;
    } 
};

void service::start() {
    // start with a queue containing this service, and an empty stack;
    // neither performs any heap allocation, since the list nodes are
    // embedded in the service objects themselves:
    dlist<service, service::get_startq_node> start_queue;
    slist<service, service::get_starts_node> start_stack;
    start_queue.append(this);

    // for each dependency, add to the queue. Build the stack:
    while (! start_queue.is_empty()) {
        auto front = start_queue.pop_front();
        for (auto dep : front->dependencies) {
            if (! start_queue.is_queued(dep))
                start_queue.append(dep);
        }
        if (! start_stack.is_queued(front)) {
            start_stack.insert(front);
        }
    }

    // start each service in reverse dependency order:
    while (! start_stack.is_empty()) {
        start_stack.pop_front()->do_start();
    }
}

(Note that the templates take two arguments: one is the element type in the list, which is service in this case, and the other is a function to extract the list node from the element. The call to this function will normally be inlined by the compiler, so you end up paying no abstraction penalty).

This is a tiny bit more code, but it’s not too bad, and compared to the previous effort it performs no allocations and avoids issuing do_start() to any service more than once. The actual code in Dinit is somewhat more complicated, but works roughly as outlined here. (Note, I snuck some C++14 into the code above; Dinit itself remains C++11 compatible at this stage).
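
If you’re curious what such a template might look like internally, the following is a rough sketch of a minimal intrusive singly-linked list in the same spirit. This is not Dinit’s actual implementation – the real dlist/slist templates in the repository differ in detail – but it shows how the node can live inside the element so that insertion never allocates:

// rough sketch only; not Dinit's actual implementation
// node type embedded in the element itself, so insertion never allocates:
template <typename T> struct snode {
    T * next = nullptr;
    bool queued = false;   // is the element currently on a list?
};

// T is the element type; get_node extracts the embedded node from an element
template <typename T, snode<T> & (*get_node)(T *)>
class intrusive_slist {
    T * head = nullptr;
public:
    bool is_empty() { return head == nullptr; }
    bool is_queued(T * e) { return get_node(e).queued; }

    void insert(T * e) {
        // push onto the front of the list
        get_node(e).next = head;
        get_node(e).queued = true;
        head = e;
    }

    T * pop_front() {
        T * e = head;
        head = get_node(e).next;
        get_node(e).next = nullptr;
        get_node(e).queued = false;
        return e;
    }
};

A list used as a queue (with efficient append at the tail) follows the same pattern, just with an extra link or a tail pointer in addition to the head.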

There’s more to resource safety than memory and stack usage; I may discuss a little bit more in the future. I hope this post has provided some interesting perspective, however. As usual, comments are welcome.

Progress

Since last post, I’ve added a “stop timeout” for services – this allows setting a maximum time for a service to stop. If it takes longer than the allowed time, the service process is issued a SIGKILL which (unless something really whack is going on) should cause it to terminate immediately. I’ve set the default to 10 seconds, which seems reasonable, but it can be configured (and disabled) via the service description file.
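
In a service description, that might look something like the following (a sketch: the setting name shown here is illustrative of the intent rather than a guarantee of the final syntax):

# excerpt from a hypothetical service description (setting name illustrative)
command = /usr/sbin/sshd -D
stop-timeout = 10    # maximum number of seconds to wait for the process to stop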

(I’m not sure if I really want this to be enabled by default, or whether 10 seconds is really enough as a default value – so this decision may be revisited. Opinions welcome).

Other than that, it’s been bugfixes, cleaning up TODOs in the code, and minor robustness improvements. I’m aiming for complete service management functionality soon (and in fact Dinit already works well in this capacity, but is missing one or two features that I consider important).

Escape from System D (2)

Episode II: Init versus the service management daemon

I was pleased that my announcement of another in-development init/service manager met with a mostly positive response. I plan to keep making semi-regular posts containing both general discussion of service management issues and progress updates on my own effort, dubbed Dinit.

In this post I will give a little background on init systems and service management generally. I expect many readers will not learn much, since this material is already well understood, but it is worth laying out some background for reference in future posts/discussion.

What is “init”?

The init process, traditionally started from /sbin/init on the filesystem, is the first userspace process to launch on the system. As such it is the only process with no parent process. Most (if not all) operating systems give it a process ID of 1, making it easy to identify. There are two special things about the init process:

  1. First, it automatically becomes the new parent of otherwise orphaned processes. In particular, processes which “daemonise” themselves by double-forking and letting the intermediate parent die are re-parented to the init process.
  2. If the init process terminates, for any reason, the kernel panics (so the whole system crashes).

The second point is in fact not necessarily true – it just so happens that, at least on Linux, if the init process dies then the system dies with it. I am not sure how the various *BSD systems react, but in general, it is not expected that the init process will terminate. This means that it is very, very important that the init process does not crash. However, the first point above has some implications as well, which we’ll get to shortly.

Notionally, the init process has two jobs: to reap its child processes when they have terminated (this is accomplished using the wait system call or one of its variants; reaping a terminated process ensures that its resources are freed and that it is no longer listed in the system’s process table), and to start up the system, which it can potentially do just by running another process. An init may also be involved in the system shutdown process, though strictly speaking that’s not necessary.

You might be interested in Rich Felker’s example of a minimal init system, which is part of one of his blog posts (where he also discusses Systemd). It’s less than a screenful of text – small enough that it can be “obviously bug free” – a nice attribute to have for an init, for reasons outlined above.
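
For a rough idea of the shape of such a program, a minimal init along similar lines might look like this. This is a simplified sketch only: it ignores every signal other than SIGCHLD, does no error handling, and the /etc/rc path is just a placeholder for whatever actually starts the system.

// simplified, illustrative sketch of a minimal init (not production code)
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static void on_sigchld(int)
{
    // no-op: present only so that SIGCHLD is delivered (and wakes
    // sigsuspend) rather than being discarded by the default disposition
}

int main()
{
    // keep SIGCHLD blocked except while explicitly waiting for it
    sigset_t chld, orig;
    sigemptyset(&chld);
    sigaddset(&chld, SIGCHLD);
    sigprocmask(SIG_BLOCK, &chld, &orig);

    struct sigaction sa = {};
    sigemptyset(&sa.sa_mask);
    sa.sa_handler = on_sigchld;
    sigaction(SIGCHLD, &sa, nullptr);

    if (fork() == 0) {
        // child: hand over to the real system start-up (path is a placeholder)
        sigprocmask(SIG_SETMASK, &orig, nullptr);
        execl("/etc/rc", "rc", (char *) nullptr);
        _exit(1);
    }

    // parent (PID 1): reap terminated children, forever
    for (;;) {
        while (waitpid(-1, nullptr, WNOHANG) > 0) { }  // reap all available
        sigsuspend(&orig);  // sleep until the next SIGCHLD arrives
    }
}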

So what is a “service manager”?

A service manager provides, at the most basic level, a means for stopping and starting individual services. Services quite typically run as a process – consider for example the ssh server daemon, sshd – but sometimes exist in some other form; having the network connection(s) up and operational, for example, could be enacted by means of a service. Typical modern systems have a service manager which is either started from the init process or incorporated in it (Systemd is an example of an init process which incorporates service management functionality, but there are various others which do the same).

Aside from just an interface to starting and stopping services, service managers may provide:

  • process supervision – which normally amounts to the ability to restart a service process if it terminates unexpectedly (in general, this is a mitigation measure against software faults)
  • service dependency management – if one service needs another, then starting the first should also start the other, and stopping the second should also require the first to stop.
  • a logging mechanism for dealing with output from service processes (in general, though, this can be delegated largely to a secondary process).

Since a service manager is naturally somewhat more complex than a standalone init system, it should be obvious that incorporating the two in one process has some inherent risks. If an init system terminates unexpectedly, the whole system will generally crash; not only is this inconvenient for the user, but it also makes analysing the bug that caused the crash more difficult.

Why combine them, then?

The obvious question: if it’s better to keep init as simple as possible, why does it get combined with service management? One reason is so that double-forking processes, which have been re-parented to the init process, can be supervised; normal POSIX functions only allow receiving status notifications for direct child processes. (Various *BSDs support watching arbitrary process status via the kqueue system calls, but the interface has flaws – that I will perhaps discuss another time – and anyway, any mechanism to watch a non-immediate-child process by process ID, without co-ordination with the parent process, is prone to a race condition: at least in theory, a process with a given ID can die, and be reaped, and the process ID can be recycled, in between some other process discovering the process ID and setting up a watch for it, or, even worse, sending a termination signal in an attempt to shut down a service).
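
As a reminder of what that double-forking looks like in practice, here is a simplified sketch (it omits the chdir, umask and file-descriptor housekeeping that a real daemon would also perform, and does no error checking):

#include <unistd.h>

// simplified sketch of the traditional "daemonise" sequence
void daemonise()
{
    if (fork() != 0) {
        _exit(0);    // original process exits
    }
    setsid();        // first child: start a new session, detach from the terminal
    if (fork() != 0) {
        _exit(0);    // first child exits as well...
    }
    // ...leaving the grandchild orphaned: it is re-parented to init (PID 1),
    // which is then the only process able to observe its termination.
}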

Now we could just about argue that no service should double-fork, and this eliminates any need for the service manager to run as the init process (PID 1). However, we can’t actually prevent processes from double-forking; on the other hand, there is a mechanism – at least on Linux – called cgroups, which allows for tracking process origin even through double-fork. Importantly, this can be used to track processes belonging to particular user sessions. One operation that we might naturally want to perform to a cgroup is to terminate it – or rather, terminate all processes in the cgroup – and this, once again, is racy unless we can co-ordinate with the parent process(es) of all processes in the cgroup (and by “co-ordinate” I mean that we want to prevent the parent process from reaping child processes which have terminated, to avoid the race where a process ID is recycled and the wrong process is then terminated, as described above).

(Some other systems might have functionality similar to cgroups – I have FreeBSD jails in mind, though I need to do some research to understand exactly how jails work and their limitations, and in particular if they also suffer the termination race problem described above).

So, for supervising double-forked processes, and for controlling user sessions, having control of the PID 1 (init) process is important for a service manager. However, there’s a hint in what hasn’t been said: while we may need co-operation between the init process and the service manager, it’s not absolutely necessary that they are the same process. One of the ideas I’d like to investigate with Dinit is whether we can keep a very simple init process and a separate, more complex, service manager / supervisor.

Dinit progress

For the most part, reactions to my announcement of Dinit were positive. One comment on Reddit wondered how I was going to be able to achieve a “solid-as-a-rock stable” system using a non-memory-safe language (C++) and without having any tests. Of course, this wasn’t quite correct; I have always had tests for Dinit, but they were not automated. One thing that I’ve done since my initial announcement is implement a small number of automated tests (that you can run using “make check”). I plan to write many more tests, but this feels like a good start. I’ll discuss the reasons for using C++ at some point, but it needs to be borne in mind that while C++ is not memory-safe, it is still perfectly possible to write stable software in such a language; it just takes a little more effort!

I’ve also done a little refactoring, solved one or two minor bugs, and improved the man pages. My TODO list is slowly getting smaller, and I think Dinit is approaching the stage where it can be considered a high-quality service manager, though it is still some way off from being a full replacement for Systemd.

Please feel free to comment below and/or check out the source code on the Github repository.