How To Install FreeBSD within a ZFS Boot Environment

Creating the initial ZFS Data Sets to be Used During Installation

ZFS gives us great flexibility in creating datasets, which act as individual file systems. Dividing the system up this way enables easy upgrades (or roll-backs if needed) of the operating system and applications while preserving data. Initially we really only need one, but I like to create a couple more to keep temp and log data off the disk before other changes later on. These won't be mounted until we reboot into the newly installed system, though. We will save the rest until after the system is booted; there's no point doing too much before we can connect over SSH instead of the console, since at that point we can copy and paste to save typing.

This setup creates the datasets using the layout that the beadm utility expects, so we can use it later on to manage boot environments. This isn't strictly necessary, as boot environments can be handled manually, but it's a handy utility that makes life easier. The only FreeBSD-on-ZFS system where I don't use it is my laptop, simply because it doesn't play nicely with the GELI encryption and boot-from-USB-stick setup I have on that system. The trick is to first create a ROOT dataset within your boot ZFS pool, and then create the operating system dataset that will contain your / file system within that. Like the pool's root dataset, ROOT will not be mounted; we don't have to tell it that, though, as the setting is inherited from the root dataset. I am using install as the name for my initial installation; you can use another name if you want. We use legacy as the mount point, a special value telling ZFS not to auto-mount the dataset and to leave mounting to another method. It is necessary to set the zpool bootfs property to this dataset; this has a similar effect to setting the active partition on an older MBR disk.

Then I create two temp datasets, /tmp and /var/tmp. These will mount successfully, as the live CD environment backs these paths with memory disks. Once they are mounted we can set their permissions. I then create a third dataset for /var/log so that logs are written to it rather than to the installation's dataset; this way your logs carry over across updates.

zfs create zroot/ROOT
zfs create -o mountpoint=legacy zroot/ROOT/install
zpool set bootfs=zroot/ROOT/install zroot
zfs create -o mountpoint=/tmp -o setuid=off -o exec=on zroot/tmp
chmod 1777 /tmp
zfs create -o mountpoint=/var/tmp -o setuid=off -o exec=on zroot/var-tmp
chmod 1777 /var/tmp
zfs create -o mountpoint=/var/log zroot/var-log
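At this point the layout can be verified before moving on. A quick sanity check might look like this (the exact output will vary with your pool):

```shell
# List the new datasets with their mount points; ROOT and ROOT/install
# should show none/legacy, while tmp, var-tmp, and var-log show the
# paths set above.
zfs list -o name,mountpoint -r zroot

# Confirm the pool will boot from the new install dataset.
zpool get bootfs zroot
```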

Note:  -o indicates that a ZFS property=value pair follows; it can be repeated for each property you want to set. I have used mountpoint, setuid, and exec above. mountpoint is self-explanatory. setuid=off tells the file system to ignore the setuid bit on files within it, and exec=on allows files on the file system to be executed.
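If you ever want to check which properties were set explicitly and which were inherited, zfs get shows the source of each. A quick sketch, using the zroot/tmp dataset created above:

```shell
# SOURCE will read "local" for properties set with -o at creation,
# and "inherited from ..." or "default" otherwise.
zfs get -o name,property,value,source mountpoint,setuid,exec zroot/tmp
```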

Note:  chmod 1777 sets the permissions on the /tmp and /var/tmp file systems to read/write/execute for all users and groups, but also sets the sticky bit, so that within those directories only a file's owner (or root) can delete or rename that file after creation.
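The effect of mode 1777 can be seen on any POSIX system; the directory name below is just a scratch example, not part of the install:

```shell
# Create a scratch directory and give it tmp-style permissions:
# read/write/execute for everyone, plus the sticky bit (the leading 1).
mkdir -p /tmp/sticky-demo
chmod 1777 /tmp/sticky-demo

# The mode now shows as drwxrwxrwt; the trailing 't' in place of
# the usual 'x' is the sticky bit.
ls -ld /tmp/sticky-demo
```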

Note:  the last argument is the dataset name. Dataset names work much like file system paths, using / to separate levels: zroot is the root, followed by its child datasets, so install is inside ROOT, which is inside zroot. Datasets inherit the properties of their parent unless overridden by new options, just as a directory inherits the permissions of its parent by default. This includes the mount point, so a new dataset within another will mount as a directory of the same name under its parent's mount point, if the parent has one set.
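Mount point inheritance is easy to see with a nested dataset. A sketch, using a hypothetical zroot/usr dataset that is not part of the install above:

```shell
# zroot/usr gets an explicit mount point...
zfs create -o mountpoint=/usr zroot/usr

# ...and zroot/usr/home, created with no mountpoint option of its own,
# inherits it and mounts at /usr/home.
zfs create zroot/usr/home
zfs get -o name,value,source mountpoint zroot/usr/home
```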