
AlBlue’s Blog

Macs, Modularity and More

Apple finally kills off ZFS

2009 · ZFS · Mac

Earlier this year, I wrote about ZFS’s demise on Mac OS X and its disappearance from the then up-and-coming Snow Leopard. Since then, there was little public communication on the ZFS mailing lists (as is Apple’s wont for unreleased products), and even once Snow Leopard was released, there was still precious little word.

As of a few hours ago, all content on the old project page has been removed, leaving only the message “The ZFS project has been discontinued. The mailing list and repository will also be removed shortly.” At least there’s a final conclusion to what was once a highly regarded project, even if it’s an abrupt and faceless ending.

The last released version of ZFS was 119, though you may be able to find leaked bits for build 10a286 that were part of the earlier developer builds of OS X (of which I know nothing, so don’t ask me). Allegedly the 119 bits still work on Snow Leopard, though there are still some bugs (similar to those on the Leopard install, one assumes).

In the vain hope of getting ZFS to live on, I’ve created the Mac ZFS project at Google. This serves as a project placeholder; the source, which is available under the CDDL (and APSL) licenses, can’t be hosted as part of the project itself. There’s a zfs-macos mailing list, as well as some data rescued from the old wiki (including the Getting Started page). There’s even a cached copy of the ZFS-119.pkg installer and zfs-119_src.tar on the zfs-macos files page. Fortunately, the contents of the SVN repository were captured on GitHub, cloned from peaceful, who imported the SVN repository before its demise.

There have been many theories as to why ZFS was killed off by Apple; whatever the reason, this seems to be the final curtain for ZFS on OS X. A number of people are looking into both FreeBSD and OpenSolaris for their file-serving needs; after all, it’s almost certainly better to have a lesser operating system that understands the filesystem data than a greater operating system that doesn’t. Which, all in all, is a shame, because the latest Mac Mini Server has all the hallmarks of a system that would work well with ZFS support.

The most likely reason for the abrupt demise is stability. ZFS 119 was known to crash, particularly if multiple snapshots, scrubs, or drops were running concurrently. In addition, if there’s a problem with a ZFS device, the system freezes; not an insurmountable problem, but annoying nonetheless. There were also some reports of problems in the developer builds.

There are a couple of other theories relating to the file system. One is the patent lawsuit with NetApp that Sun was defending, though that started back in 2008. The other, probably more likely, is the (impending) takeover by Oracle; the Big O is not known to play softball when it comes to licensing. But again, the acquisition was announced back in April 2009, and ZFS wasn’t yanked until July 2009. Most likely, it simply wasn’t ready for prime time by Snow Leopard’s golden master, which was already in danger of running late.

Not all hope is lost, though; in fact, Apple is now looking for a new file system engineer. Could it be that Apple is trying to re-invent ZFS on top of the beleaguered HFS+? Or are they just going to bolt on extras, as they’ve done with journaling, case sensitivity and, more recently, compressed data sections and attributes? What of the existing filesystem team? Who knows; but Noël, if you’re reading this, I’m sure I speak for many people on the list when I say thanks for all the hard work you put into ZFS, even if the corporation decided to go in a different direction.

However, it may be that there was one problem too many in making it happen. I’ll conclude by quoting a post from an Apple e-mail address to the list:

Without implying anything about why ZFS was not shipped with Snow Leopard, the discussion that starts at minute 22 [of this video presentation by Jeff Bonwick] exposes some continued misunderstanding by Mr. Bonwick and crew. I have a great deal of respect for Jeff but I am a bit surprised that he’s not aware of these issues.

At around 23:45 or so he says that “laptops never experience problems with flush-track-cache because they have a battery and thus never lose power”. While it is true that laptops have batteries, machines still crash and/or get force rebooted. When a reboot happens the bus is reset and what the drive will do with dirty data sitting in its cache is undefined at that point. Regardless of what the specs say, some drives toss any unwritten data while others will keep it and eventually write it after the reset.

To be fair, we guarantee that all internal drives Apple ships will honor flush track cache so you’re safe on a Mac laptop. However if you’re running Solaris on some other brand of laptop you may or may not be out of luck.

Last, I do not believe that the crash protection scheme used by ZFS can ever work reliably on drives that drop the flush track cache request. The only approach that is guaranteed to work is to keep enough data in a log that when you remount the drive, you can replay more data than the drive could have kept cached. This is effectively what the journal on HFS has done since Leopard.
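The bounded-journal idea in that last paragraph can be sketched in miniature. Here’s a toy model (all names, sizes and structure are my own invention for illustration, not Apple’s or Sun’s actual code): writes land in an append-only journal first, records are only checkpointed out once the journal retains more data than the drive’s cache could have silently dropped, and remounting simply replays the journal.

```python
# Toy model of "keep enough data in a log that on remount you can replay
# more data than the drive could have kept cached". Purely illustrative;
# this is not how HFS+ or ZFS is actually implemented.

DRIVE_CACHE_BYTES = 32  # assume the drive may silently drop up to this much


class JournaledStore:
    def __init__(self):
        self.journal = []       # append-only log of (key, value) records
        self.disk = {}          # checkpointed "on-disk" state (may lag)
        self.journal_bytes = 0  # total payload currently in the journal

    def write(self, key, value):
        # Write-ahead: the record goes to the journal before anything else.
        self.journal.append((key, value))
        self.journal_bytes += len(value)
        # Checkpoint old records, but ONLY while the remaining journal
        # still holds at least DRIVE_CACHE_BYTES of newer data; anything
        # the drive might have dropped is therefore always replayable.
        while (self.journal
               and self.journal_bytes - len(self.journal[0][1]) >= DRIVE_CACHE_BYTES):
            k, v = self.journal.pop(0)
            self.disk[k] = v
            self.journal_bytes -= len(v)

    def remount(self):
        # Crash recovery: replay every journal record. Replay is
        # idempotent, so re-applying already-checkpointed data is safe.
        for k, v in self.journal:
            self.disk[k] = v
        return self.disk


store = JournaledStore()
store.write("a", b"x" * 20)
store.write("b", b"y" * 20)
store.write("c", b"z" * 20)   # pushes "a" out via checkpoint
state = store.remount()       # journal still covers "b" and "c"
```

After the third write, only “a” has been checkpointed; “b” and “c” (40 bytes, more than the 32-byte cache) stay in the journal, so even if the drive dropped its entire cache, replay recovers them.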