Solaris Containers

Solaris Containers allow partitioning of a physical server into virtual servers. Containers do not provide virtualization in the sense of Xen or VMware; they are more similar to BSD jails. A zone is a container without resource control.

A Solaris instance running in a non-global zone shares parts of its filesystem with the global zone. There is only one kernel running, in the global zone; it handles the physical machine on behalf of the non-global zones running on the system. As far as the non-global zones are concerned, they appear to be separate machines with their own services running. Non-global zones cannot see each other, and they cannot see what is going on in the global zone. The global zone, however, can see what is going on inside the non-global zones.

Setting up a zone is fairly trivial. There are two types: full and sparse zones. Here is a quick rundown of a sparse zone setup on Solaris 10 08/07:

  1. Change system-wide scheduler to FSS
  2. Create a non-global zone
  3. Install the zone
  4. Boot the created zone

First, set the default scheduler on the system to the Fair Share Scheduler (FSS). This allows you to assign CPU shares to individual zones and prevents any one zone from monopolizing the CPU:

bash-3.00# dispadmin -d FSS
bash-3.00# dispadmin -d
FSS     (Fair Share)

You will need to reboot for the system to start using FSS (the default scheduler in Solaris 10 is TS, the timesharing class). If you want to change the scheduler without a reboot, then in addition to running dispadmin you can do something like this:

bash-3.00# priocntl -s -c FSS -i class TS
bash-3.00# priocntl -s -c FSS -i pid 1

The first command moves all processes in the TS class to the FSS class, and the second moves the init process (PID 1) to the FSS class.

To see whether any zones are installed on the system, use the zoneadm command inside the global zone:

bash-3.00# zoneadm list -v

To start setting up a zone, use the zonecfg command:

bash-3.00# zonecfg -z ns1
ns1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:ns1>

Continuing with the interactive zonecfg session:

zonecfg:ns1> create
zonecfg:ns1> set zonepath=/export/home/zones/ns1
zonecfg:ns1> set autoboot=true
zonecfg:ns1> set bootargs="-m verbose"
zonecfg:ns1> set scheduling-class=FSS
zonecfg:ns1> set cpu-shares=5
zonecfg:ns1> add attr
zonecfg:ns1:attr> set name=comment
zonecfg:ns1:attr> set type=string
zonecfg:ns1:attr> set value="DNS Server"
zonecfg:ns1:attr> end

Remember, nothing is set in stone until you issue commit at the end of the zonecfg session. The session above starts creating a new zone called ns1. The zone will be installed in /export/home/zones/ns1 and will be booted automatically whenever the global zone boots. Next come the boot arguments and the scheduling class for the zone; if the scheduling class is not defined, it is inherited from the global zone, so since you already set the system scheduling class to FSS, this entry is optional. Finally, a name attribute records what the zone is for.
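
The interactive session above can also be captured in a command file and replayed non-interactively with zonecfg -z ns1 -f, which is handy when building several similar zones (zonecfg -z ns1 export generates such a file from an existing zone). A minimal sketch covering the same settings:

```
# ns1.cfg -- replay with: zonecfg -z ns1 -f ns1.cfg
create
set zonepath=/export/home/zones/ns1
set autoboot=true
set scheduling-class=FSS
set cpu-shares=5
commit
```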

Next, you can cap the memory usage for the zone:

zonecfg:ns1> add capped-memory
zonecfg:ns1:capped-memory> set physical=512M
zonecfg:ns1:capped-memory> set swap=1024M
zonecfg:ns1:capped-memory> end

The physical keyword specifies how much physical memory the zone is allowed to consume. The swap attribute caps the total swap consumed by user processes and tmpfs mounts inside the non-global zone.
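
Both caps end up stored internally as byte values, which is why the 1024M swap cap reappears later in the info output as an rctl limit of 1073741824. A quick sanity check of that conversion:

```shell
# 1024 MB in bytes: 1024 (MB) * 1024 (KB per MB) * 1024 (B per KB)
echo $(( 1024 * 1024 * 1024 ))
# 1073741824
```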

Finally add a network interface:

zonecfg:ns1> add net
zonecfg:ns1:net> set physical=aggr1
zonecfg:ns1:net> set address=10.1.1.1
zonecfg:ns1:net> end

In this case, the physical interface for the zone will be aggr1 (an aggregate interface in the global zone) with an IP address of 10.1.1.1.
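
With the default shared IP type used here, the zone's address lives on a logical interface managed by the global zone. Solaris 10 8/07 also supports exclusive-IP zones, which give the zone its own full IP stack (its own routing table and interface configuration). A sketch, assuming a spare dedicated NIC named e1000g1 (a hypothetical name) is available; note that no address is set in zonecfg, since the zone configures the interface itself:

```
zonecfg:ns1> set ip-type=exclusive
zonecfg:ns1> add net
zonecfg:ns1:net> set physical=e1000g1
zonecfg:ns1:net> end
```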

Now you can review the settings configured and then commit them:

zonecfg:ns1> info
zonename: ns1
zonepath: /export/home/zones/ns1
brand: native
autoboot: true
bootargs: "-m verbose"
pool:
limitpriv:
scheduling-class: FSS
ip-type: shared
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr
net:
        address: 10.1.1.1
        physical: aggr1
capped-memory:
        physical: 512M
        [swap: 1G]
attr:
        name: comment
        type: string
        value: "DNS Server"
rctl:
        name: zone.cpu-shares
        value: (priv=privileged,limit=5,action=none)
rctl:
        name: zone.max-swap
        value: (priv=privileged,limit=1073741824,action=deny)
zonecfg:ns1> commit
zonecfg:ns1> exit

At this point, listing the zones on the system shows something similar to this:

bash-3.00# zoneadm list -vc
ID NAME             STATUS     PATH                           BRAND    IP
0 global           running    /                              native   shared
- ns1              configured /export/home/zones/ns1         native   shared

Now you can proceed with installing the zone. The install might take a little while depending on the type of the zone you are installing and the size of the global zone:

bash-3.00# zoneadm -z ns1 install
Preparing to install zone <ns1>.
Creating list of files to copy from the global zone.
Copying  files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize  packages on the zone.
Initialized  packages on zone.
Zone <ns1> is initialized.
The file  contains a log of the zone installation.
bash-3.00#

It might be a good idea to set cpu-shares on the global zone as well, so it is not starved by a runaway non-global zone. This change requires a reboot:

bash-3.00# zonecfg -z global
zonecfg:global> set cpu-shares=10
zonecfg:global> commit
zonecfg:global> exit

Alternatively, in addition to the above, you can use the prctl command to apply the value immediately and avoid the reboot (prctl changes do not persist across reboots, so keep the zonecfg setting as well):

bash-3.00# prctl -n zone.cpu-shares -v 10 -r -i zone global
bash-3.00# prctl -n zone.cpu-shares -i zone global
zone: 0: global
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
zone.cpu-shares
        privileged         10       -   none                            -
        system          65.5K     max   none
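
FSS shares are relative weights, not percentages: under full CPU contention, each zone receives its shares divided by the total shares of all active zones. With the global zone holding 10 shares and ns1 holding 5, ns1 is guaranteed roughly a third of the CPU when both are busy:

```shell
# ns1's guaranteed CPU under contention, as an integer percentage:
# 5 shares out of (10 + 5) total
echo $(( 5 * 100 / (10 + 5) ))
# 33
```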

Now you can boot the zone and log into it:

bash-3.00# zoneadm -z ns1 boot
bash-3.00# zlogin -C ns1
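
On its first boot the zone walks through interactive system identification (locale, terminal, name service, root password) in the zlogin -C console session. As a sketch, you can pre-answer those questions by placing a sysidcfg file under the zone's root before booting; all values below are placeholders, not recommendations:

```
# /export/home/zones/ns1/root/etc/sysidcfg
system_locale=C
terminal=vt100
network_interface=PRIMARY {hostname=ns1}
security_policy=NONE
name_service=NONE
timezone=US/Eastern
root_password=<encrypted password hash, e.g. copied from /etc/shadow>
```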

That’s pretty much it as far as a simple setup is concerned. A few links of interest: