Matt Connolly's Blog

my brain dumps here…

Tag Archives: opensolaris

Goodbye OpenSolaris, Hello OpenIndiana

After the demise of OpenSolaris, no thanks to Oracle, there's finally a community fork available: OpenIndiana. I did the upgrade from OpenSolaris following the instructions here, and it was all pretty straightforward. A few things I'd installed (e.g. WordPress) had dependencies on the older OpenSolaris packages, but apart from those, everything appears to have moved over to the new OpenIndiana package server nicely.
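The switch essentially boils down to repointing pkg at the OpenIndiana publisher and updating the image. A rough sketch from memory – the publisher name and repository URL here may not match the official instructions exactly, so check those first:

# pkg set-publisher --non-sticky opensolaris.org
# pkg set-publisher -P -O http://pkg.openindiana.org/dev openindiana.org
# pkg image-update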

Netatalk (for my Time Machine backup) still runs perfectly.

It certainly will be interesting to see what comes from the community fork!


My first real Time Machine backup on a ZFS mirror

So following my last post about the impact of compression on ZFS, I’ve created a ZFS file system with Compression ON and am sharing it via Netatalk to my MacBook Pro.

I connected the Mac via gigabit ethernet for the original backup, and it backed up 629,252 items (193.0 GB) in 7 hours, 23 minutes, 4 seconds, according to the backup log. That's an average of about 7.4MB/s (roughly 197,600MB over 26,584 seconds). Nowhere near the maximum transfer rates that I've seen to the ZFS share, but acceptable nonetheless.

`zfs list` reports that the compression ratio is 1.11x. I would have expected more, but oh well.
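If you want to check the same figure on your own pool, it's the compressratio property – the dataset name below is just a placeholder, and the output will look something along these lines:

$ zfs get compressratio tank/timemachine
NAME              PROPERTY       VALUE  SOURCE
tank/timemachine  compressratio  1.11x  -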

And now my incremental backups are also working well over the wireless connection. Excellent.

ZFS performance networked from a Mac

Before I go ahead and do a full Time Machine backup to my OpenSolaris machine with a ZFS mirror, I thought I'd test what performance hit there might be when using compression. I also figured I'd test the impact of changing the recordsize. Tuning this to match the data record size seems to be best practice for databases, and since Time Machine stores data in a Mac disk image (sparse bundle), it probably writes data in 4k chunks matching the allocation size of the HFS filesystem inside the disk image.
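Each variant is just a property change on the test dataset; roughly the following, where the pool and dataset names are placeholders (note that recordsize and compression only apply to data written after the property is set):

# zfs create tank/tm-test                    (defaults: recordsize=128k, compression=off)
# zfs set compression=on tank/tm-test        (lzjb, the default compression algorithm)
# zfs set compression=gzip tank/tm-test      (gzip variant)
# zfs set recordsize=4k tank/tm-test         (4k recordsize variants)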

There were three copy tasks done:

  1. Copy a single large video file (1.57GB) to the Netatalk AFP share,
  2. Copy a single large video file (1.57GB) to a locally (mac) mounted disk image stored on the Netatalk AFP share,
  3. Copy a folder with 2752 files (117.3MB) to a locally (mac) mounted disk image stored on the Netatalk AFP share.

Here’s the results:

| ZFS recordsize, compression | To Netatalk AFP share (1 video file, 1.57GB) | To disk image stored on AFP share (1 video file, 1.57GB) | To disk image stored on AFP share (2752 files, 117.3MB) |
|---|---|---|---|
| 128k, off | 0m29.826s (53.9MB/s) | 2m5.889s (12.7MB/s) | 1m45.809s (1.1MB/s) |
| 128k, on | 0m52.179s (30.9MB/s) | 1m36.084s (16.7MB/s) | 1m34.367s (1.24MB/s) |
| 128k, gzip | 0m31.290s (51.4MB/s) | 2m32.485s (10.5MB/s) | 2m29.141s (0.79MB/s) |
| 4k, off | 0m27.131s (59.3MB/s) | 2m16.138s (11.8MB/s) | 2m47.718s (0.70MB/s) |
| 4k, on | 0m25.651s (62.7MB/s) | 1m59.459s (13.5MB/s) | 1m41.551s (1.2MB/s) |
| 4k, gzip | 0m30.348s (53.0MB/s) | 5m16.195s (5.08MB/s) | 4m48.378s (0.41MB/s) |

I think there was something else happening on the server during the 128k compression=on test, impacting its data rate.

Conclusion:

The clear winner is the default compression (compression=on, i.e. lzjb) with the default record size. It must be that even my low-powered Atom processor machine can compress the data faster than it can be written to disk, reducing the bandwidth to the disk and therefore improving performance at the same time as saving space. Well done ZFS!

Mac File sharing from OpenSolaris

I’ve just played around with 3 different ways of sharing files from OpenSolaris to my Mac:

  1. Using ZFS built in SMB sharing
  2. Using ZFS built in iSCSI sharing (with globalSAN iSCSI initiator for mac)
  3. Using AFP from netatalk 2.1

Using ZFS built in SMB sharing

This is by far the easiest: it requires no special software on either machine beyond the OS itself.
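For reference, a minimal sketch of the server side, assuming the CIFS server packages are already installed and using placeholder dataset and share names:

# svcadm enable -r smb/server
# zfs set sharesmb=name=timemachine tank/backup

Depending on your setup you may also need to join a workgroup (smbadm join -w WORKGROUP) and enable the SMB PAM module so that local account passwords work over CIFS.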

Using ZFS built in iSCSI sharing

Setting up the iSCSI share is just as easy as the SMB one; however, the Mac doesn't have an iSCSI client built in. You need to download and install the globalSAN iSCSI initiator for Mac.

This method should be good for Time Machine because the iSCSI device appears as a real hard drive, which you then format as Mac OS Extended, and Time Machine's funky little linked files and things should all work perfectly. No need to worry about users and accounts on the server, etc. In theory, this should deliver the best results, but it's actually the worst performing of the lot.
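The "built in" sharing here is, as far as I can tell, the old shareiscsi ZFS property rather than COMSTAR. Roughly, with placeholder names and an arbitrary volume size:

# zfs create -V 300G tank/tm-iscsi        (a zvol to export as the backup disk)
# zfs set shareiscsi=on tank/tm-iscsi
# svcadm enable iscsitgt                  (the legacy iSCSI target daemon)

On the Mac side, globalSAN then discovers the target by the server's IP address, and the zvol shows up in Disk Utility as a blank disk ready to be formatted as Mac OS Extended.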

Using AFP from netatalk 2.1

A little bit of work is required to install Netatalk 2.1. See my previous post, and thanks again to the original posters from whom I learned how to do this.
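Once it's built, the share itself is only a line of configuration. A sketch of the relevant AppleVolumes.default entry, with placeholder path and volume name (the tm option in netatalk 2.1 flags the volume as a Time Machine target; the config path depends on how netatalk was configured at build time):

# /usr/local/etc/netatalk/AppleVolumes.default
/tank/timemachine "TimeMachine" options:usedots,upriv,tm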

This one should also be a very good candidate since it appears as a Mac server on the network, and you should be able to access the shared Time Machine volume directly from the OS X install disc – an important consideration if the objective is to use it as a Time Machine backup (which it is for me).

Additionally, this one proved to have the best performance:

Performance

I tested copying a 3GB file to each of the above shares and then reading it back again. Here's the results:

Writing 3GB over gigabit ethernet:

  - iSCSI: 44m01s – 1.185MB/s
  - SMB share: 4m27s – 11.73MB/s
  - AFP share: 2m49s – 18.52MB/s

Reading 3GB over gigabit ethernet:

  - iSCSI: 4m36s – 11.34MB/s
  - SMB share: 1m45s – 29.81MB/s
  - AFP share: 1m16s – 41.19MB/s

The iSCSI was by far the slowest. Ten times slower than SMB – yikes! I have no idea if that’s due to the globalSAN initiator software or something else. Whatever the case, it’s not acceptable for now.

And Netatalk comes up trumps. Congratulations to everyone involved in Netatalk – great work indeed!

OpenSolaris screen sharing with Mac

I found two ways of connecting my Mac to my OpenSolaris box remotely:

1. Running a gnome-session over SSH.

$ ssh -X username@opensolaris.local gnome-session

And up pops an X11 app on the Mac where you can see the desktop. It's slow and clunky, but it works.

2. Using Mac Snow Leopard’s built in Screen Sharing client.

This requires more configuration on the OpenSolaris side – apparently Mac OS will only connect to a VNC server that offers authentication in the way it expects. This article showed me how to do it: Share your OpenSolaris 2008.11 screen to Mac Os X.

The second method is much prettier, doesn’t have windows that disappear under the Gnome application bar at the top, neatly puts everything in a window to the remote machine, and to boot it is actually way faster too.

Oh, and here's a trick. For some reason the Screen Sharing application doesn't give you a nice interface to connect to a remote machine manually (as opposed to clicking the Share button in a Finder window). This also works: open your favourite web browser and type vnc://opensolaris.local/ to launch screen sharing with the machine "opensolaris.local" (it also works with IP addresses).
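The same trick works from the Terminal too:

$ open vnc://opensolaris.local/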

Enjoy.

OpenSolaris + TimeMachine backup + network discovery

I found several tutorials on blogs about how to build "netatalk" on OpenSolaris to enable file sharing using the AFP protocol, for better use by Time Machine backups on the Mac.

As far as making the AFP service discoverable, most people point to the same article.

However, I found a better way to make the service discoverable: using avahi-bridge. You'll need to make sure that you have both multicast DNS and the Avahi bridge running.

1. Install netatalk according to the above instructions.

2. Install multicast DNS and the Avahi bridge (I'm using the snv_134 development branch; note that the package names have changed since the 2009.06 release):
# pkg install system/network/avahi
# pkg install service/network/dns/mdns

3. Enable both services (why is one called mdns and the other multicast?):

# svcadm enable network/dns/multicast:default
# svcadm enable system/avahi-bridge-dsd:default

4. Set up a service XML file for avahi-bridge:

# cat > /etc/avahi/services/afp.service
<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_device-info._tcp</type>
    <port>548</port>
    <txt-record>model=RackMac</txt-record>
  </service>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
</service-group>
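After dropping the file in place, restarting the bridge makes sure it gets picked up:

# svcadm restart system/avahi-bridge-dsd:default

You can then confirm the advertisement from the Mac with dns-sd (it keeps browsing until you hit Ctrl-C):

$ dns-sd -B _afpovertcp._tcp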

If you’re interested in advertising other services so that they are discoverable by the Mac, this is a great article (it’s for linux, but the avahi side of things is the same using avahi-bridge): Benjamin Sherman » Advertising Linux Services via Avahi/Bonjour.

Now I have a server in my Finder sidebar just like any other Mac server.

Installing Open Solaris on the Intel D510 motherboard

So I opted to go for the Intel D510 motherboard. It was passively cooled and, from what I could gather, had slightly better support for the devices attached to the motherboard. I fitted it out with 2GB RAM and, for starters, 2 x SATA drives left over from previous laptop upgrades.

I installed OpenSolaris 2009.06 from a USB key, using the Windows USB key gen method, and that worked a treat – or so it seemed at first.

My initial reasons for going with OpenSolaris were ZFS, and that it sounds like a very mature system to run as a server.

On the other hand, hardware support is still quite young. In particular, a few things caught me by surprise:

1. Microsoft mouse not supported. What? Any other mouse works….

2. The Realtek ethernet driver didn't work with multicast (Bonjour), so I couldn't discover the network very well. Accessing the Internet, however, was just fine.

3. No Intel graphics driver – the system runs with a bog-standard VESA driver. Boring.

So I proceeded to upgrade to the latest development branch (snv_134 at present) – which also seemed like quite a buggy and difficult process.

After a few hours of research (I documented the errors and solutions here), I have #1 and #2 above sorted. I'm not too worried about #3 since I'll mostly be using the machine as a headless server anyway.

Oh, and the power consumption with 2 drives is steady at 37 watts. Not too bad. I'll have Time Machine backing up to this sucka as soon as I pick up a pair (or maybe a trio) of big drives.

Building a low cost, low power, home server: research

So I've decided to go down the path of building a low-power, low-cost home server. We've got several PCs in the house now, as well as an Xbox and an LG hard disk recorder, all of which are network accessible.

My criteria for building the machine, in order of importance, are:

  1. something more powerful than an off-the-shelf NAS, so that it's expandable and I can run a few web apps and things on it too.
  2. low-power: If I’m going to leave this thing turned on, I want minimum power usage.
  3. low-cost: I really don’t want to spend a lot of money on it.
  4. re-use of existing parts where possible: I have an old PC and several hard drives that I’d like to use.

I'm looking at deploying OpenSolaris, mostly for ZFS.

My initial research shows that AMD makes some nice 45W processors, like the Athlon II X2 235e. This seems to be a really good balance between power consumption and processing power, but it may still be total overkill for my application. What I'd like to know is its idle power consumption.

It's become standard in car comparisons these days to quote power as well as efficiency figures. In this day and age, when we are becoming more aware of our energy usage, shouldn't PCs have the same thing?

But I've also been reading about the newer Atom processors, such as the Atom 330 and D510, which have 8W and 13W power usage respectively – the latter including an on-chip graphics controller, which would suit me fine.