I had to create some containers for developers to do their work. Developers always seem to want root access to a machine. Containers work very well in this scenario: if a developer messes up his container, I can just clone a new one off a “gold” container. ZFS can be very handy here as well: by installing a container on a ZFS filesystem and assigning a ZFS quota, you can limit how big the container can grow.
So, first I created a ZFS pool out of two slices on two disks. This is not really the recommended way to create a ZFS pool; you should really be using two whole disks. And ignore the fact that both slices reside on disks attached to the same controller. Right after that, I created the dev1 filesystem within the zonepool:
bash-3.00# zpool create -m /export/home/zones zonepool mirror c0t0d0s3 c0t1d0s3
bash-3.00# zfs create zonepool/dev1
bash-3.00# zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
zonepool        122K  55.6G  25.5K  /export/home/zones
zonepool/dev1  24.5K  8.00G  24.5K  /export/home/zones/dev1
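Before going further, it is worth confirming that the mirror came up healthy. These are standard zpool subcommands, though this check was not part of the original session:

```shell
# Show the state of each device in the mirror (all should be ONLINE)
zpool status zonepool

# One-line summary of pool size, usage, and health
zpool list zonepool
```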
Next, I set a ZFS quota of 8GB on the filesystem:
bash-3.00# zfs set quota=8G zonepool/dev1
bash-3.00# zfs get all zonepool/dev1
NAME           PROPERTY       VALUE                     SOURCE
zonepool/dev1  type           filesystem                -
zonepool/dev1  creation       Fri Jun  4  9:17 2010     -
zonepool/dev1  used           24.5K                     -
zonepool/dev1  available      8.00G                     -
zonepool/dev1  referenced     24.5K                     -
zonepool/dev1  compressratio  1.00x                     -
zonepool/dev1  mounted        yes                       -
zonepool/dev1  quota          8G                        local
zonepool/dev1  reservation    none                      default
zonepool/dev1  recordsize     128K                      default
zonepool/dev1  mountpoint     /export/home/zones/dev1   inherited from zonepool
zonepool/dev1  sharenfs       off                       default
zonepool/dev1  checksum       on                        default
zonepool/dev1  compression    off                       default
zonepool/dev1  atime          on                        default
zonepool/dev1  devices        on                        default
zonepool/dev1  exec           on                        default
zonepool/dev1  setuid         on                        default
zonepool/dev1  readonly       off                       default
zonepool/dev1  zoned          off                       default
zonepool/dev1  snapdir        hidden                    default
zonepool/dev1  aclmode        groupmask                 default
zonepool/dev1  aclinherit     secure                    default
zonepool/dev1  canmount       on                        default
zonepool/dev1  shareiscsi     off                       default
zonepool/dev1  xattr          on                        default
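If you only care about the space accounting, you do not have to wade through `zfs get all`; the standard property shorthand works too (this query is illustrative, not from the original session):

```shell
# Just the quota-related properties for the container's filesystem
zfs get quota,used,available zonepool/dev1
```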
Now, I should mention that prior to configuring /export/home/zones to reside on ZFS, I uninstalled the dev1 container that was there previously. So the container itself was gone, but the system still had knowledge of the container’s configuration. I wrote a post on configuring containers here.
bash-3.00# zoneadm list -cv
  ID NAME    STATUS      PATH                      BRAND   IP
   0 global  running     /                         native  shared
   - dev1    configured  /export/home/zones/dev1   native  shared
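For completeness, a zone configuration like dev1’s can be recreated along these lines; the zonepath matches the ZFS filesystem above, but the exact settings are a sketch, not the original configuration:

```shell
# Create a minimal sparse-root zone configuration non-interactively
zonecfg -z dev1 <<'EOF'
create
set zonepath=/export/home/zones/dev1
set autoboot=true
commit
EOF

# The zone should now appear as "configured"
zoneadm list -cv
```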
Since the container was already configured, I went ahead and started installing it:
bash-3.00# zoneadm -z dev1 install
/export/home/zones/dev1 must not be group readable.
/export/home/zones/dev1 must not be group executable.
/export/home/zones/dev1 must not be world readable.
/export/home/zones/dev1 must not be world executable.
could not verify zonepath /export/home/zones/dev1 because of the above errors.
zoneadm: zone dev1 failed to verify
Whoops, looks like the container directory permissions need some fixing:
bash-3.00# cd /export/home/zones/
bash-3.00# ls -l
drwxr-xr-x 2 root sys 2 Jun 3 09:48 dev1
bash-3.00# chmod 700 dev1
bash-3.00# chown root:root dev1
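A quick check that the zonepath now satisfies zoneadm’s requirements (root-owned, mode 700):

```shell
# Expect permissions of drwx------ owned by root
ls -ld /export/home/zones/dev1
```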
One more try to install the container:
bash-3.00# zoneadm -z dev1 install
Preparing to install zone <dev1>.
Creating list of files to copy from the global zone.
Copying <2561> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1086> packages on the zone.
Initialized <1086> packages on zone.
Zone <dev1> is initialized.
The file contains a log of the zone installation.
That’s it. After the container install completed, and before booting dev1, I stuck the following sysidcfg file into the /etc directory of the dev1 container:
bash-3.00# more sysidcfg
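The contents of the file did not make it into this post; for reference, a minimal sysidcfg for a shared-IP zone looks something like the sketch below (the hostname, timezone, and name-service values are illustrative placeholders). Note there is deliberately no root_password line, which is why the first boot still asks for one:

```
system_locale=C
terminal=xterm
network_interface=primary {
    hostname=dev1
}
security_policy=NONE
name_service=NONE
timezone=US/Eastern
```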
That way I would not be asked any container configuration questions during first container boot. Except for the root password, of course.
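From there, booting the container and watching it come up is the usual two commands (standard zoneadm/zlogin usage, not from the original session):

```shell
# Boot the freshly installed container
zoneadm -z dev1 boot

# Attach to the zone console to answer the root password prompt;
# disconnect later with ~.
zlogin -C dev1
```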