Matt Connolly's Blog

my brain dumps here…

Category Archives: Low Power Server

Using ZFS Snapshots on Time Machine backups.

I use Time Machine because it’s an awesome backup program. However, I don’t really trust hard drives that much, and I happen to be a bit of a file system geek, so I back up my laptop and an iMac to another machine that stores the data on ZFS.

I first did this using Netatalk on OpenSolaris, then OpenIndiana, and now on SmartOS. Netatalk is an open source project for running AFP (Apple Filing Protocol) services on Unix operating systems. It has great support for the new features in the protocol required by Time Machine. As far as I’m aware, most embedded NAS devices use this software.

Sometimes, Time Machine “eats itself”. A backup will fail with a message like “Verification failed”, and you’ll need to make a new one. I’ve never managed to recover the disk from this point using Disk Utility.

My setup is a RAIDZ of 3 x 2TB drives, giving a total of 4TB of storage space (and 2TB of redundancy). In the four years I’ve been running this, I have had three drives go bad and replaced them. They’re cheap drives, but I’ve never lost data to a bad disk or to replacing it. I’ve also seen silent data corruption, and know that ZFS has corrected it for me.

Starting a new backup is a pain, so what do I do?

ZFS Snapshots

I have a script, which looks like this:

#!/bin/sh
# Take a timestamped ZFS snapshot of the backup file system on the server.
ZFS=zones/MacBackup/MattBookPro
SERVER=vault.local
SUFFIX=
if [ -n "$1" ]; then
  SUFFIX=_"$1"
fi
SNAPSHOT=`date "+%Y%m%d_%H%M"`$SUFFIX
echo "Creating zfs snapshot: $SNAPSHOT"
ssh -x "$SERVER" zfs snapshot "$ZFS@$SNAPSHOT"

This uses the zfs snapshot command to create a snapshot of the backup. There’s another one for my iMac backup. I run this script manually for the ZFS file system (directory) of each backup. I’m working on an automatic solution that listens to system logs to know when the backup has completed and the volume is unmounted, but it’s not finished yet (like many things). Running the script takes about a second.
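In the meantime, a cron entry can at least take the snapshot on a schedule. This is just a sketch, and the script path is made up for illustration; pick a time comfortably after your backups usually finish:

```shell
# Run the snapshot script every night at 02:30, tagging the snapshot "nightly".
# m  h  dom mon dow  command
30   2  *   *   *    /Users/matt/bin/tm-snapshot.sh nightly
```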

Purging snapshots

My current list of snapshots looks like this:

matt@vault:~$ zfs list -r -t all zones/MacBackup/MattBookPro
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
zones/MacBackup/MattBookPro               574G   435G   349G  /MacBackup/MattBookPro
...snip...
zones/MacBackup/MattBookPro@20131124_1344 627M      -   351G  -
zones/MacBackup/MattBookPro@20131205_0813 251M      -   349G  -
zones/MacBackup/MattBookPro@20131212_0643 0         -   349G  -

The USED value at the top shows the space used by this file system and all of its snapshots. The USED column for each snapshot shows how much space is consumed by that snapshot on its own.

Purging old snapshots is a manual process for now. One day I’ll get around to keeping snapshots on rules like Time Machine’s hourly, daily, and weekly schedule.
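Until then, a small dry-run helper can at least generate the destroy commands. This is a sketch of my own, not part of the original setup: it keeps the newest N snapshots and prints a `zfs destroy` for the rest, relying on the YYYYMMDD_HHMM naming (and `-s creation`) to order them oldest first. Review the output before piping it anywhere.

```shell
# keep_newest N: read snapshot names (oldest first) on stdin and print a
# `zfs destroy` command for every snapshot except the newest N.
keep_newest() {
  awk -v keep="$1" '
    { lines[NR] = $0 }
    END { for (i = 1; i <= NR - keep; i++) print "zfs destroy " lines[i] }
  '
}

# Dry run on the server (pipe the output to sh only once it looks right):
#   zfs list -H -t snapshot -o name -s creation -r zones/MacBackup/MattBookPro \
#     | keep_newest 5
```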

Rolling back

So when Time Machine goes bad, it’s as simple as rolling back to the latest snapshot, which was a known good state.

My steps are:

  1. shut down netatalk service
  2. zfs rollback
  3. delete netatalk inode database files
  4. restart netatalk service
  5. rescan the directory to recreate inode numbers (using netatalk’s “dbd -r” command)

This process is a little more involved, but still much faster than making a whole new backup.
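The steps above can be sketched as a script. This is a dry run that only prints the commands it would run; the service name (`netatalk` under SMF) and the CNID database location (`.AppleDB` inside the volume) are assumptions from my setup, so check yours before executing anything for real.

```shell
# rollback_plan SNAPSHOT: print the rollback procedure for review.
# Pipe the output to sh on the server once you're happy with the names.
rollback_plan() {
  ZFS=zones/MacBackup/MattBookPro
  VOLUME=/MacBackup/MattBookPro
  echo "svcadm disable netatalk"             # 1. stop the netatalk service
  echo "zfs rollback $ZFS@$1"                # 2. roll back to the snapshot
  echo "rm -f $VOLUME/.AppleDB/cnid2.db"     # 3. delete the CNID (inode) database
  echo "svcadm enable netatalk"              # 4. restart the service
  echo "dbd -r $VOLUME"                      # 5. rescan to recreate inode numbers
}

# Example: rollback_plan 20131205_0813
```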

The main reason for this is that HFS uses an “inode” number to uniquely identify each file on a volume. This is one trick that Mac Aliases use to track a file even if it changes name and moves to another directory. This concept doesn’t exist in other file systems, so Netatalk has to maintain a database of which numbers to use for which files. There are some rules: inode numbers can’t be reused, and they must not change for a given file.

Unfortunately, a ZFS rollback, like any other operation on the server that changes files without netatalk knowing, leaves files that have no inode number. The bigger problem seems to be deleted files whose inodes are left in that database. This tends to make Time Machine quite unhappy with that network share. So after a rollback, my rule is to nuke netatalk’s database and recreate it.

This violates the rule that inode numbers shouldn’t change (unless they magically come out the same, which I highly doubt), but this hasn’t caused a problem for me. Imagine plugging a new computer into a Time Machine volume: it has no knowledge of what the inode numbers were, so it just uses them as is. It’s more likely to be an issue for Netatalk scanning a directory and seeing inodes for files that are no longer there.

Recreating the netatalk inode database can take an hour or two, but it’s local to the server and much faster than a complete network backup, which also loses your history.

Conclusion

This used to happen a lot, say once every 3-4 months when I first started doing it. This may have been due to bugs in Time Machine, bugs in Netatalk, or incompatibilities between them. It certainly wasn’t due to data corruption.

Pros:

  • Time Machine, yay!
  • ZFS durability and integrity.
  • ZFS snapshots allow point in time recovery of my backup volume.
  • ZFS on disk compression to save backup space!
  • Netatalk uses the standard AFP protocol, so the Time Machine volume can be accessed from your Recovery partition or a new Mac – no extra software required on the Mac!

Cons:

  • Effort – complexity to manage, install & configure netatalk, etc.
  • Rollback time.
  • Network backups are slow.

As time has gone on, both Time Machine and Netatalk have improved substantially. I’ve also added an SSD cache to the server, and it is swimmingly fast and reliable; and thanks to ZFS, durable and free of corruption. I think this has happened only twice in the last year, both times on Mountain Lion. I haven’t had to do a single rollback since I started using the Mavericks betas back around June.

Where to from here?

I’d still like to see a faster solution, and I have a plan: a network block device.

This would, however, require some software to be installed on the Mac, so it may not be as easy to use in a disaster recovery scenario.

ZFS has a feature called a “volume”. When you create one, it appears to the system running ZFS as another block device, just like a physical hard disk or a file. A file system can be created on this volume, which can then be mounted locally. I use this for the disks in virtual machines, and can snapshot them and roll back just as if they were a file system tree of files.

There’s an existing server module that’s been around for a while: http://nbd.sourceforge.net

If this volume could be mounted across the network on a Mac, it could be formatted as HFS+ and Time Machine could back up to it in local disk mode, skipping all the slow sparse image file system work. And there’s a lot of that work: my Time Machine backup of a Mac with a 256GB disk creates a whopping 57206 files in the bands directory of the sparse image. Traversing those files takes real time, even locally on the server.
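A quick sanity check on that file count: sparsebundle bands are 8 MB each by default (an assumption about the image's band size), so ~57206 band files is consistent with a few hundred GB of allocated image data:

```shell
# 57206 bands x 8 MB per band, expressed in GB.
awk 'BEGIN { printf "%.0f GB\n", 57206 * 8 / 1024 }'
# prints: 447 GB
```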

This is my next best solution to actually running ZFS on the Mac. Whatever “reasons” Apple has for ditching it are not good enough simply because we don’t know what they are. ZFS is a complex beast, and Apple is good at simplifying things. It could be the perfect solution.

Time Machine Backups and silent data corruptions

I’ve recently heard many folks talking about Time Machine backup strategies. To do it well, you really do need to back up your backup, as Time Machine can “eat itself”, especially when doing network backups.

Regardless of whether your Time Machine backup is to a locally attached disk or a network drive, when you make a backup of your backup, you want to make sure it’s valid, otherwise you’re propagating a corrupt backup.

So how do you know if your backup is corrupt? You could read it from beginning to end. But this would only protect you from data corruptions that can be detected by the drive itself. Disk verify, fsck, and others go further and validate the file system structures, but still not your actual data.

There are “silent corruptions”, which is where the data you wrote to the disk comes back corrupted (different data, not a read error). “That never happens”, you might say, but how would you know?

I have two servers running SmartOS using data stored on ZFS. I ran a data scrub on them, and both reported checksum errors. This is exactly the silent data corruption scenario.
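The scrub itself is a single command, run on the server; `zpool status` afterwards reports anything it found and repaired:

```shell
# Start a scrub of the pool, then check its progress and results.
zpool scrub zones
zpool status zones
```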

ZFS features full checksumming of all data when stored, and if your data is in a RAIDZ or mirror configuration, it will also self-heal. This means that instead of returning an error, ZFS will go fetch the data from a good drive and also make another clean copy of that block so that its durability matches your setup.

Here’s the specifics of my corruptions:

On a XEON system with ECC RAM, the affected drive is a Seagate 1TB Barracuda 7200rpm, ST31000524AS, approximately 1 year old.

  pool: zones
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
   
  scan: resilvered 72.4M in 0h48m with 0 errors on Mon Nov 18 13:28:16 2013
config:

        NAME          STATE     READ WRITE CKSUM
        zones         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t0d0s0  ONLINE   2.61K  366k   635
            c1t4d0s1  ONLINE       0     0     0
        logs
          c1t2d0s0    ONLINE       0     0     0
        cache
          c1t2d0s1    ONLINE       0     0     0

errors: No known data errors

On a Celeron system with non-ECC RAM, the affected drive is a Samsung 2TB low power drive, approximately 2 years old.

  pool: zones
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 8K in 12h51m with 0 errors on Thu Nov 21 00:44:25 2013
config:

        NAME          STATE     READ WRITE CKSUM
        zones         ONLINE       0     0     0
          raidz1-0    ONLINE       0     0     0
            c0t1d0    ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0
            c0t2d0p2  ONLINE       0     0     2
        logs
          c0t0d0s0    ONLINE       0     0     0
        cache
          c0t0d0s1    ONLINE       0     0     0

errors: No known data errors

Any errors are scary, but the checksum errors even more so.

I had previously seen thousands of checksum errors on a Western Digital Green drive. I stopped using it and threw it in the bin.

I have other drives that are HFS formatted. I have no way of knowing if they have any corrupted blocks.

So unless your data is being checksummed, you are not protected from data corruption, and making a backup of a backup could easily be propagating corrupt data.

I dream of a day when we can have ZFS natively on Mac. And if it can’t be done for whatever ‘reasons’, at least give us the features from ZFS that we can use to protect our data.

Building netatalk in SmartOS

I’m looking at switching my home backup server from OpenIndiana to SmartOS. (There are a few reasons; that’s another post.)

One of the main functions of my box is to be a Time Machine backup for my Macs (my laptop and my wife’s iMac). I found this excellent post about building netatalk 3.0.1 in SmartOS, but it skipped a few of the dependencies and applied the patch after configure, which means that if you reconfigure netatalk, you need to reapply the patch.

Based on that article, I came up with a patch for netatalk, and here’s a gist of it: https://gist.github.com/mattconnolly/5230461

Prerequisites:

SmartOS already has most of the useful bits installed, but these are the ones I needed to install to allow netatalk to build:

$ sudo pkgin install gcc47 gmake libgcrypt

Build netatalk:

Download the latest stable netatalk. The netatalk home page has a handy link on the left.

$ cd netatalk-3.0.2
$ curl 'https://gist.github.com/mattconnolly/5230461/raw/27c02a276e7c2ec851766025a706b24e8e3db377/netatalk-3.0.2-smartos.patch' > netatalk-smartos.patch
$ patch -p1 < netatalk-smartos.patch
$ ./configure --with-bdb=/opt/local --with-init-style=solaris --with-init-dir=/var/svc/manifest/network/ --prefix=/opt/local
$ make
$ sudo make install

With the prefix of ‘/opt/local’, netatalk’s configuration file will be at ‘/opt/local/etc/afp.conf’.
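With the SMF manifest installed under /var/svc/manifest/network/, the service still needs to be imported and enabled. The manifest filename and service name here are assumptions from my install; check what `make install` actually dropped in that directory:

```shell
# Import the SMF manifest and enable the netatalk service.
svccfg import /var/svc/manifest/network/netatalk.xml
svcadm enable netatalk
svcs netatalk   # should report "online"
```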

Enjoy.

[UPDATE]

There is a very recent commit in the netatalk source for an `init-dir` option to configure which means that in the future this patch won’t be necessary, and adding `--with-init-dir=/var/svc/manifest/network/` will do the job. Thanks HAT!

[UPDATE 2]

Netatalk 3.0.3 was just released, which includes the --init-dir option, so the patch is no longer necessary. Code above is updated.

ZFS = Data integrity

So, for a while now, I’ve been experiencing crappy performance from a Western Digital Green drive (WD15EARS) in a ZFS mirror storing my Time Machine backups (using OpenIndiana and Netatalk).

Yesterday, the drive started reporting errors. Unfortunately, the system hung – that’s not so cool; ZFS is supposed to keep working when a drive fails. Aside from that, when I rebooted, the system automatically started a scrub to verify data integrity, and after about 10 minutes:

  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Thu Mar 10 10:19:42 2011
    1.68G scanned out of 1.14T at 107M/s, 3h5m to go
    146K resilvered, 0.14% done
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror-0    DEGRADED     0     0     0
            c8t1d0s0  DEGRADED     0     0    24  too many errors  (resilvering)
            c8t0d0s0  ONLINE       0     0     0
        cache
          c12d0s0     ONLINE       0     0     0

errors: No known data errors

Check it out: it found 24 checksum errors on the Western Digital drive, but so far no data errors, because the correct copies were on the other drive.

That’s obvious, right? But what other operating system can tell the difference between the right and wrong data when both copies are there? Most RAID systems only detect a total drive failure; they don’t deal with incorrect data coming off the drive!

Sure, backing up to the network (Time Machine’s sparse image stuff) is *way* slower than a directly connected FireWire drive, but in my opinion it’s well worth doing it this way for the data integrity that you don’t get on a single USB or FireWire drive.

Thank you ZFS for keeping my data safe. B*gger off Western Digital for making crappy drives. I’m off to get a replacement today… what will it be? Samsung or Seagate?

ZFS for Mac Coming soon…

A little birdy told me, that there might be a new version of ZFS ported to Mac OS X coming up soon…

It seems the guys at Ten’s Complement are working on a port of ZFS at a much more recent version than what was left behind by Apple and forked as a Google Code project: http://code.google.com/p/maczfs/

On my Mac, I have installed MacZFS from that Google Code project. (I don’t have any ZFS volumes; it’s installed because I wanted to know what version it was up to.)

bash-3.2# uname -prs
Darwin 10.6.0 i386
bash-3.2# zpool upgrade
This system is currently running ZFS pool version 8.

All pools are formatted using this version.

My backup server at home is running OpenIndiana oi-148:

root@vault:~# uname -prs
SunOS 5.11 i386
root@vault:~# zpool upgrade
This system is currently running ZFS pool version 28.

All pools are formatted using this version.

Pretty exciting that we can get the same zpool version as the latest OpenIndiana… think of the backup/restore possibilities sending a snapshot over to a remote machine.
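For example, with matched pool versions, a snapshot taken on the Mac could be replicated with zfs send/receive. The dataset names here are placeholders for illustration, not from my actual setup:

```shell
# Send a full snapshot to the backup server, then later send only the delta.
zfs snapshot tank/Data@monday
zfs send tank/Data@monday | ssh vault.local zfs receive backup/Data

zfs snapshot tank/Data@tuesday
zfs send -i tank/Data@monday tank/Data@tuesday | ssh vault.local zfs receive backup/Data
```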

ZFS – dud hard drive slowing whole system

I have a low-power server running OpenIndiana oi-148. It has 4GB RAM and with three drives in it, like so:

matt@vault:~$ zpool status
  pool: rpool
 state: ONLINE
 scan: resilvered 588M in 0h3m with 0 errors on Fri Jan  7 07:38:06 2011
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c8t1d0s0  ONLINE       0     0     0
            c8t0d0s0  ONLINE       0     0     0
        cache
          c12d0s0     ONLINE       0     0     0

errors: No known data errors

I’m running netatalk file sharing for mac, and using it as a time machine backup server for my mac laptop.

When files are copying to the server, I often see periods of a minute or so where network traffic stops. I’m convinced there’s a bottleneck on the storage side of things, because when this happens I can still ping the machine, and if I have an ssh window open, I can still see output from a `top` command running smoothly. However, if I try to do anything that touches disk (eg `ls`), that command stalls. When it comes good, everything comes good: file copies across the network continue, etc.

If I have a ssh terminal session open and run `iostat -nv 5` I see something like this:

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.2   36.0  153.6 4608.0  1.2  0.3   31.9    9.3  16  18 c12d0
    0.0  113.4    0.0 7446.7  0.8  0.1    7.0    0.5  15   5 c8t0d0
    0.2  106.4    4.1 7427.8  4.0  0.1   37.8    1.4  93  14 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.4   73.2   25.7 9243.0  2.3  0.7   31.6    9.8  34  37 c12d0
    0.0  226.6    0.0 24860.5  1.6  0.2    7.0    0.9  25  19 c8t0d0
    0.2  127.6    3.4 12377.6  3.8  0.3   29.7    2.2  91  27 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   44.2    0.0 5657.6  1.4  0.4   31.7    9.0  19  20 c12d0
    0.2   76.0    4.8 9420.8  1.1  0.1   14.2    1.7  12  13 c8t0d0
    0.0   16.6    0.0 2058.4  9.0  1.0  542.1   60.2 100 100 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.2    0.0   25.6  0.0  0.0    0.3    2.3   0   0 c12d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c8t0d0
    0.0   11.0    0.0 1365.6  9.0  1.0  818.1   90.9 100 100 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.2    0.0    0.1    0.0  0.0  0.0    0.1   25.4   0   1 c12d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c8t0d0
    0.0   17.6    0.0 2182.4  9.0  1.0  511.3   56.8 100 100 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c12d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c8t0d0
    0.0   16.6    0.0 2058.4  9.0  1.0  542.1   60.2 100 100 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c12d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c8t0d0
    0.0   15.8    0.0 1959.2  9.0  1.0  569.6   63.3 100 100 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.2    0.0    0.1    0.0  0.0  0.0    0.1    0.1   0   0 c12d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c8t0d0
    0.0   17.4    0.0 2157.6  9.0  1.0  517.2   57.4 100 100 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c12d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c8t0d0
    0.0   18.2    0.0 2256.8  9.0  1.0  494.5   54.9 100 100 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c12d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c8t0d0
    0.0   14.8    0.0 1835.2  9.0  1.0  608.1   67.5 100 100 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.2    0.0    0.1    0.0  0.0  0.0    0.1    0.1   0   0 c12d0
    0.0    1.4    0.0    0.6  0.0  0.0    0.0    0.2   0   0 c8t0d0
    0.0   49.0    0.0 6049.6  6.7  0.5  137.6   11.2 100  55 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   55.4    0.0 7091.2  1.9  0.6   34.9    9.9  27  28 c12d0
    0.2  126.0    8.6 9347.7  1.4  0.1   11.4    0.6  20   7 c8t0d0
    0.0  120.8    0.0 9340.4  4.9  0.2   40.5    1.5  77  18 c8t1d0
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.2   57.0  153.6 7271.2  1.8  0.5   31.0    9.4  26  28 c12d0
    0.2  108.4   12.8 6498.9  0.3  0.1    2.5    0.6   6   5 c8t0d0
    0.2  104.8    5.2 6506.8  4.0  0.2   38.2    1.4  67  15 c8t1d0

The stall occurs when the drive c8t1d0 is at 100% wait, doing only slow I/O, typically writing about 2MB/s. Meanwhile, the other drive is all zeros: doing nothing.

The drives are:
c8t1d0 – Western Digital Green – SATA_____WDC_WD15EARS-00Z_____WD-WMAVU2582242
c8t0d0 – Samsung Silencer – SATA_____SAMSUNG_HD154UI_______S1XWJDWZ309550

I’ve installed smartmontools and run both the short and long self-tests on both drives, with no errors found.

I suspect that the c8t1d0 WD Green is the lemon here, and for some reason gets stuck in periods where it can write no faster than about 2MB/s. Why? I don’t know…

Secondly, I wonder why the whole file system seems to hang at this time. Surely, if the other drive is doing nothing, a web page could be served by reading from the available drive (c8t0d0) while the slow drive (c8t1d0) is stuck writing. Is this a bug in ZFS?

If anyone has any ideas, please let me know!

ZFS saved my Time Machine backup

For a while now, I’ve been using Time Machine to backup to an AFP share hosted by netatalk on an OpenIndiana low powered home server.

Last night, Time Machine stopped with an error message: “Time Machine completed a verification of your backups. To improve reliability, Time Machine must create a new backup for you.”

Periodically I create ZFS snapshots of the volume containing my Time Machine backup. I haven’t enabled any automatic snapshots yet (like OpenIndiana/Solaris’s Time Slider service), so I just do it manually every now and then.

So I shut down netatalk, rolled back the snapshot, checked the netatalk database, restarted netatalk, and was back in business.

# /etc/init.d/netatalk stop
# zfs rollback rpool/MacBackup/TimeMachine@20100130
# /usr/local/bin/dbd -r /MacBackup/TimeMachine
# /etc/init.d/netatalk start

I lost only a day or two of incremental backups, which was much more palatable than having to do another complete backup of over 250GB.

ZFS is certainly proving to be useful, even in a low powered home backup scenario.

Mac Migration Assistant from Time Machine Backup

I just upgraded to a new MacBook Pro. I spent *hours* trying to get Migration Assistant to do the network transfer from the old one to the new one. All I could deduce was that it wasn’t working because NFS couldn’t connect to the old one – most likely because I’d changed something that prevented it from working. (I was playing with NFS a few months ago).

Next choice: migrate from the latest time machine backup of the old one. I made sure that the old one had just finished a successful backup and then turned Time Machine off (because I didn’t want the old one writing to the backup while the new one was restoring…)

Next, Migration Assistant: select from Time Machine backup. Empty window. What?

My Time Machine drive is on a network machine (actually running OpenIndiana for ZFS mirror redundancy and netatalk for Apple File Sharing). If I try to set up Time Machine on the new machine, it sees my Time Machine drive. Why doesn’t Migration Assistant see it?

The answer is simple, but not obvious. You need to manually mount the backup’s .sparsebundle image file, so that you have both the Time Machine network drive and the Time Machine disk image mounted. Then Migration Assistant will see it.
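Concretely, that means mounting the share and then attaching the image inside it. The server address and image name here are examples from my setup:

```shell
# 1. Mount the AFP share (or use Finder's "Connect to Server").
open 'afp://vault.local/MacBackup'
# 2. Attach the sparsebundle so the backup volume itself mounts.
hdiutil attach '/Volumes/MacBackup/MattBook.sparsebundle'
```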

Select the drive, select what users/settings to migrate. Go. Took a few hours to do the 200ish GB – done by the time I woke the next morning.

At the end of the day, a very smooth transition, with all my settings, icons, etc. ready to roll on the new one – although there were a few hurdles along the way…

Goodbye OpenSolaris, Hello OpenIndiana

After the demise of OpenSolaris (no thanks to Oracle), there’s finally a community fork available: OpenIndiana. I did the upgrade from OpenSolaris following the instructions here, and it all seemed pretty straightforward. There were a few things I’d installed (eg WordPress) which had dependencies on the older OpenSolaris packages, but apart from those, it appears everything’s moved over to the new OpenIndiana package server nicely.

Netatalk (for my Time Machine backup) still runs perfectly.

It certainly will be interesting to see what comes from the community fork!