ZFS is awesome. Hands down. It has been the best filesystem I've used in the last 20 years.

I could get into some basic commands and tell you all about its features, but Google and the FreeBSD Handbook are better suited for that. This is a short story about how ZFS is going to solve an issue between capped internet connections and large backups.

Over the last 20 years, I've acquired quite a collection of data: 9TB at the moment. I just finished setting up a new server and, with it, a RAIDZ2 with 8x 4TB HGST drives. I believe I'll be okay for the next 5-10 years. Fingers crossed, but I have a problem... offsite backups.

Using parts from the old server, I scraped together enough storage to safely hold about 18TB of data on a RAIDZ1. I pushed a copy of the latest snapshot to the old server:

zfs snapshot -r storage@today
zfs send -R storage@today | pv | ssh 1.2.3.4 zfs recv backup/storage

...and two and a half days later, I have a replica of my data.
(AES-XTS is not handled well by the old Core 2 Duo 7500.)

I am moving this old server offsite and configuring it with multiple OpenVPN connections. Using Quagga's ospfd, I'll be able to connect to it via the shortest path, and the routing should survive a few link interruptions. Here's where the issue arises...
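
The routing side, for the curious, is plain OSPF across the tunnel interfaces. A minimal ospfd.conf sketch, with made-up interface names and subnets:

! /usr/local/etc/quagga/ospfd.conf (tun0/tun1 and the 10.8.x.0/24 subnets are hypothetical)
interface tun0
 ip ospf cost 10
interface tun1
 ip ospf cost 20
router ospf
 network 10.8.0.0/24 area 0.0.0.0
 network 10.8.1.0/24 area 0.0.0.0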

The internet connections at each site are limited to about 30GB of upload per day. With ZFS, I can send incremental changes to blocks between snapshots. For the average user, this is PLENTY, but when I can easily take 30GB+ of photos on a single outing, it might be a problem.
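
The day-to-day plan is an incremental send, rate-limited through pv so a heavy day doesn't saturate the link (30GB per day works out to roughly 350KB/s; the snapshot names here are just for illustration):

zfs snapshot -r storage@2016-01-02
zfs send -R -i storage@2016-01-01 storage@2016-01-02 | pv -L 350k | ssh 1.2.3.4 zfs recv backup/storage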

Looking around, I have a few 4TB USB HDDs in eSATA enclosures that I used to use to back up my data and store offsite. Since my user directory has grown past 4TB, it has become increasingly cumbersome to split my data across multiple drives. I toyed with the idea of creating a RAID0 (stripe) across multiple 4TB disks, but the increased risk of losing the whole set to a single failure made me uneasy. I even considered creating a RAIDZ1 with three of them, but then I'd have to attach them all via eSATA, and I only have 2 ports left on my LSI HBA. Crap.
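
For reference, the two layouts I talked myself out of would have looked something like this (device names are hypothetical):

# plain stripe: lose any one disk and the whole pool is gone
zpool create usb /dev/da1 /dev/da2 /dev/da3
# RAIDZ1: survives a single disk failure, but needs three ports I don't have
zpool create usb raidz /dev/da1 /dev/da2 /dev/da3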

In a last-ditch effort, I started exploring the zfs send|recv options. First, I tried sending only the incremental changes between snapshots to a zpool on the USB drive:

zfs send -i snap0 storage/users@snap1 | pv | zfs recv usb/storage
cannot receive incremental stream: destination 'usb/storage' does not exist
warning: cannot send 'storage/users@snap1': signal received
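
The problem: an incremental stream can only be received into a dataset that already holds the base snapshot, and this fresh USB pool had nothing on it. Priming it with a full copy first would have worked, something like:

zfs send storage/users@snap0 | pv | zfs recv usb/storage
zfs send -i snap0 storage/users@snap1 | pv | zfs recv usb/storage

...but with my user directory past 4TB, the full base stream won't fit on a single 4TB drive, which is the whole problem.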

I had the right idea, just the wrong execution. I didn't need a pool on the USB drive at all; I could write the raw stream to a file instead:

zfs send -i snap0 storage/users@snap1 | pv | gzip > usb0.gz

Holy crap, it worked! I now have a file on my USB HDD that I can hand-carry to the offsite location, connect, and apply the changes!
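
Before hand-carrying it anywhere, it's worth sanity-checking the file. gzip -t verifies the archive's integrity, and on FreeBSD zstreamdump can parse the send stream's records without applying anything (both steps are optional paranoia):

gzip -t usb0.gz
gzcat usb0.gz | zstreamdump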

gzip -c -d usb0.gz | zfs receive backup/storage/users
gzcat usb0.gz | zfs receive backup/storage/users
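
A quick listing on the backup pool confirms the new snapshot arrived:

zfs list -t snapshot -r backup/storage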

Win.