OviOS Linux 3.00 Admin Guide

OviOS Linux is a Linux-based storage OS with out-of-the-box support for iSCSI, SMB and NFS.
The ovios-shell is the storage management shell for OviOS Linux. It is designed to assist in setting up the server and requires very little Linux/UNIX/storage knowledge. Most commands are interactive; if an error occurs, the messages can be found using 'ovilogs', or by manually reading the sys.log file.
Do not create pools, volumes, LUNs or targets from the Linux bash shell: such objects will not work well in the ovios-shell. Objects created and managed with the ovios-shell have specific configurations and specific options set.
The ovios_restore command can assist in setting the OviOS-specific options on pools that were not created with the ovios-shell.

There are 4 main steps to create a storage server using the ovios-shell, and 2 additional steps for replication and DR.

1. Configure the network
    1.1. bondadm
    1.2. netsetup

2. Create RAID sets / storage pools
    2.1. Create Pools
    2.2. Assign spares and log devices

3. Set up SAN (iSCSI) 
    3.1. Create iSCSI targets
    3.2. Create LUNs

4. Set up NAS (NFS and / or SMB)
    4.1. NFS server
        4.1.1. Export volumes via NFS
    4.2. SMB server
        4.2.1. Local authentication
        4.2.2. Remote authentication
        4.2.3. Export volumes via SMB

5. Set up replication
    5.1. Initial replication
    5.2. Incremental replication
    5.3. Break replication 

6. Disaster recovery (DR)

1. Configure the network

There are 3 main concerns when configuring the network: IPs, aggregated links, and DNS. OviOS provides two tools for the network:
'bondadm' to create aggregated links, and 'netsetup' to set up the network interfaces.
OviOS does not provide an automated method to configure DNS. Edit the resolv.conf file manually if DNS must be configured.
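Since DNS is not managed by the shell, a minimal /etc/resolv.conf can be written by hand. The addresses and domain below are placeholders; replace them with your site's values:

```shell
# /etc/resolv.conf -- example only; use your site's DNS servers
nameserver 192.168.1.10    # primary DNS server (placeholder)
nameserver 192.168.1.11    # secondary DNS server (placeholder)
search example.lan         # optional default search domain (placeholder)
```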

1.1. bondadm

The syntax is:
bondadm -n name -i interface1 -i interface2 -m mode

You can use 'bondadm' from the ovios-shell, or from the Linux shell. (Type 'linuxcmd' to drop to the Linux CLI.)

If used from the ovios-shell, bondadm opens its own CLI menu, which requires only the arguments.
For example, to create a new bonded interface called bond0 using the physical interfaces eth0, eth1 and eth2, and mode 0:

ovios-shell> bondadm  

Type only the command arguments, like: -l | -h | -n etc.
To exit the bondadm CLI enter q or quit.

bondadm > -i eth0 -i eth1 -i eth2 -n bond0 -m 0

Type -h to see all options:

bondadm > -h

Where: eth0, eth1 and eth2 are the physical interfaces,
bond0 is the name given to the new aggregated link, and
0 is the mode chosen.
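Once the bond is up, its state can be checked from the Linux shell via the kernel's standard bonding interface (this is generic Linux, not OviOS-specific):

```shell
# Show the bonding mode, slave interfaces and link state for bond0
cat /proc/net/bonding/bond0
# List the bond's slave interfaces at the link level
ip link show master bond0
```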

1.2. netsetup

Type netsetup in the ovios-shell or the Linux shell.

ovios-shell> netsetup  
OviOS Linux netsetup utility
Select an interface to configure
Interface : eth0
Enter interface name to configure: eth0
Selected interface: eth0
Select a service:
1. DHCP
2. Static IP

Enter 1 or 2: 1
Selected dhcp service.

If you choose DHCP, you can also set the MTU size (jumbo frames), after which netsetup will configure the interface and run a DHCP client to acquire an IP.
If you choose the static service, the tool will ask for an IP and the following optional values: gateway, broadcast, subnet, MTU size, and speed and duplex settings, after which it will configure the interface and enable it.
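For reference, a static configuration like the one netsetup applies can be reproduced from the Linux shell with standard iproute2 commands. The interface name, address, gateway and MTU below are example values only:

```shell
# Configure eth0 with a static address, jumbo frames and a default gateway (example values)
ip link set dev eth0 mtu 9000
ip addr add 192.168.1.50/24 broadcast 192.168.1.255 dev eth0
ip link set dev eth0 up
ip route add default via 192.168.1.1
```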

2. Create RAID sets / storage pools

There are multiple options available:
    1. Add Raid0 (striped devices) pool
    2. Add Raid1 (mirrored devices) pool
    3. Add Raid5 (raidz) pool
    4. Add Raid6 (raidz2) pool
    5. Add Raid10 (striped mirrors) pool

2.1. Create Pools

Type 'storage' to find available drives:

ovios-shell> storage  
=        Your root disk is /dev/sda2
=        Do not use /dev/sda2 to create pools
=        It will destroy your OviOS installation
=        Use either Path, ID or Dev names to create Pools

pci-0000:00:10.0-scsi-0:0:0:0   ->      ../../sda
pci-0000:00:10.0-scsi-0:0:1:0   ->      ../../sdb


Type 'pool create' to create a storage pool.

ovios-shell> pool create 
= Create a storage pool for volumes and LUNs

= Do not use /dev/sda2 as it will wipe your OviOS installation

= Make sure you don't use disks which are part of exported pools either

1. Add Raid0 (Striped devices.) pool

2. Add Raid1 (Mirrored devices. mirror) pool

3. Add Raid5 (raidz) pool

4. Add Raid6 (raidz2) pool

5. Add Raid10 (mirrored striped) pool

Choose a raid level for the new pool (Default: 1):

Enter the name for the new pool: newpool

Enter devices to add to the pool: sdb
Created pool newpool successfully
Enable compression? (Default: yes) [ y|n ] y
Enabling compression on newpool
You can list all storage pools using the 'pool list' command

In this example, a storage pool of RAID level 0 called newpool was created using the device 'sdb'.

The ovios-shell accepts device names as found in /dev/, such as sda, sdc, sdd etc., or full paths, such as /dev/disk/by-path/disk1.
For simplicity, the admin should use only dev names (sdb, sdc etc.).
When the pools are imported during boot, the zfs-admin script will import the devices by path.
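Since OviOS pools are ZFS pools, the by-path import performed at boot corresponds to the standard zpool operation sketched below (shown for reference; the pool name is an example):

```shell
# Import the pool, resolving member disks via their persistent by-path names
zpool import -d /dev/disk/by-path newpool
# Confirm the pool members are now listed by path
zpool status newpool
```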

2.2. Assign spares and log devices

During pool creation, the admin can specify spare and log devices at the end of the disk list, for example:

Enter devices to add to the pool: sdb sdc sdd spare sde sdf log sdm  
In a mirrored pool (Raid10) you can add spares at the end of the last mirror created, or at the end of each mirror.
Enter devices to add to the pool: sdb sdc log sdd spare sde
Enter devices to create the mirror: sdf sdg spare sdh log sdi

The preferred method is to use 'pool modify' to add or remove spares and logs.

ovios-shell> pool modify  
       1. Add spare device(s)
       2. Add read cache device(s)
       3. Add write cache (log) device(s)
       4. Remove spare device(s)
       5. Remove read cache device(s)
       6. Remove write cache (log) device(s)
       q. Exit this menu
Enter your choice: 3

Enter pool to modify: newpool
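For reference, the 'pool modify' choices above correspond to standard ZFS operations on the underlying pool, roughly as sketched below; the pool and device names are examples:

```shell
# Add sdm as a dedicated write cache (ZIL log) device to newpool
zpool add newpool log sdm
# Add sde as a hot spare
zpool add newpool spare sde
# Add sdk as a read cache (L2ARC) device
zpool add newpool cache sdk
# Remove a spare or cache device again
zpool remove newpool sde
```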

3. Set up SAN (iSCSI)

OviOS can be used as an iSCSI server to easily provide block storage devices to iSCSI initiators. OviOS LUNs can be used in UNIX/Linux environments, Windows or VMware.
To set up an iSCSI server, the admin must create iSCSI targets, create LUNs, and map the LUNs to targets.
Start the iSCSI server with 'iscsi start'.
Enable the server to start automatically at boot with 'options iscsi.enable on'.
Type 'options iscsi' for all iscsi related options.

ovios-shell> options iscsi 
option iscsi.debug off
option iscsi.port
option iscsi.enable off
option iscsi.address

By default, the iSCSI port is 3260, and all IP addresses are enabled.

3.1. Create iSCSI targets

Run 'target create' and enter a name for the target when prompted to do so.
Enter only a name for the target; the IQN and server identifier will be added automatically by the ovios-shell.

ovios-shell> target create 
= This command creates an iSCSI target. You can create multiple targets
= and control what initiators can connect to them.
= You can map LUNs to specific targets, thus controlling
= which client has access to each LUN.
The IQN will be appended by the OS. Enter a custom target name: ovios-tg01

Choose whether to set up ACLs by adding initiator IPs or IQNs to the target. The target will then allow only specific initiators to connect.

If no initiator IP or IQN is assigned to the target, all initiators will have access and will be able to connect.
Would you like to define an initiator that can access this target? (Default: no) [y|n] y
Enter IP or IQN of the initiator to access this target:
Add another initiator? (Default: no) [y | n]: y
Enter IP or IQN: iqn.2012-04.org.ovios:9bfhvndth004-tg01
Add another initiator? (Default: no) [y | n]: n

In this example, only the initiator with the specified IP and the initiator with the IQN iqn.2012-04.org.ovios:9bfhvndth004-tg01 will be allowed to connect to the target.

3.2. Create LUNs

Run 'lun create' and create a LUN. A LUN must be created in a storage pool. Enter the storage pool name when prompted. Enter a LUN name when prompted.
A LUN can be thin or thick provisioned. Thin provisioning is disabled by default in OviOS because the target does not yet support the SCSI DISCARD command; however, thin LUNs can still be created.
It is strongly recommended to create only thick LUNs, to keep a better grip on the available space.
If targets already exist, run 'lun_setup' to create and map the LUN in a single command.

Map the LUNs to targets using 'lun map'. A unique LUN ID will be assigned automatically and will not change afterwards.

After LUNs have been mapped, run 'iscsi reload' to make them available. A LUN can only be mapped to one target.
Best practices:
    Use meaningful target names.
    Use only thick LUNs.
    Use target ACLs (when creating targets, the command will ask for initiator IPs or IQNs) to control the iSCSI sessions.
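From a Linux client running open-iscsi, the exported targets can be verified with a standard discovery and login. The server address and the target IQN below are examples:

```shell
# Discover targets exported by the OviOS server on the default port 3260
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
# Log in to a discovered target (example IQN)
iscsiadm -m node -T iqn.2012-04.org.ovios:ovios-tg01 -p 192.168.1.50:3260 --login
# The mapped LUNs now appear as local block devices; list them
lsblk
```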

4. Set up NAS (NFS and / or SMB)

OviOS can be used to share volumes via NFS and SMB. FTP is supported, but not out-of-the-box: the FTP service must be configured manually.
Create volumes in a pool with 'vol create' to start.

4.1. NFS server

OviOS supports all NFS versions.  The following options can be used to manage the NFS server.

ovios-shell> options nfs 
option nfs.threads
option nfs.enable off
option nfs.udp.disable off
option nfs.tcp.disable off
option nfs.idmap off
option nfs4.domain
option nfs.sys.log off
option nfs.disable.vers
option nfs.port
option nfs.debug off

Start the NFS server with 'nfs start'. Enable the server to start at boot with:

ovios-shell> options nfs.enable on 
Changing option: nfs.enable ==> on

4.1.1. Export volumes via NFS

Run 'nfs_export' to export volumes to NFS clients. This tool will walk you through the steps to export a volume (an entire pool can also be exported). One can use the default options or specify custom options.
If IPs must be specified, use a colon to separate them, with no spaces in between.

Default options are: 

Use 'nfs ss' to see what volumes are exported and what options have been used. To change, or remove exports, use 'nfs_export' as well.
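From a client, an exported volume can be mounted with the usual mount command. The server name, export path and mount point below are examples:

```shell
# Mount the NFS export on the client
mount -t nfs ovios-server:/newpool/vol0 /mnt/vol0
# Verify the mount and the negotiated options
mount | grep vol0
```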

4.2. SMB server

OviOS supports SMB versions 1, 2 and 3. 
To start the OviOS SMB server, type 'smb start'. Enable it at boot with 'options smb.enable on'.
The following options manage SMB, WINBIND and NETBIOS.

ovios-shell> options smb  
option smb.enable off
option smb.debug.level
option smb.port
ovios-shell> options winbind 
option winbind.debug.level
ovios-shell> options netbios  
option netbios.enable off
option netbios.debug.level

By default NETBIOS is disabled. Enable if needed with: 

ovios-shell> options netbios.enable on 
Changing option: netbios.enable ==> on

4.2.1. Local authentication

In OviOS Linux, SMB requires users in order to access shares. These users may then configure guest access via Windows ACLs; however, guest access cannot be configured directly in OviOS.
To use local authentication (that is, without a Domain Controller), one must create local SMB users.

ovios-shell> smbuseradd  
Enter share path [Ex: /pool/volume ] to create a homedirectory for this user: retpool/vol0-received
Enter username: smbuser
new password:
retype new password:

'smbuserlist' and 'smbuserdel' are also useful to list or delete local SMB users.
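A local SMB user can be verified from any Linux client with the standard smbclient tool; the server and share names below are examples:

```shell
# List the shares visible to smbuser (prompts for the password)
smbclient -L //ovios-server -U smbuser
# Connect to a share interactively
smbclient //ovios-server/vol0 -U smbuser
```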

4.2.2. Remote authentication

OviOS SMB Server can join a Windows Domain Controller and use AD users for authentication.

Type 'smbovios --join' to start the tool to join a DC. 

This tool will guide you through joining a DC. It will require:
- a domain username with joining privileges
- the domain name
- the workgroup as defined in your DC
- the AD computer name (hostname, in most cases the FQDN)
- a volume path to the root share (/pool/volume)
- the IP of the DC
- the IP of the DNS server and the IP of the NTP server (in most cases the same as the DC)

Type 'smbovios --info' or 'smbovios --unjoin' to get details about the DC or to leave it.

During the joining process the resolv.conf and nsswitch.conf files are modified. If you require modifications to these files, redo them after joining the DC.

When the --unjoin option is used and the OviOS SMB server leaves a DC, the config files are restored to their state prior to joining the DC.

Type 'smb ad-users' for a list of AD users. 

4.2.3. Export volumes via SMB

Type 'smb_export' to export volumes via SMB. This tool will require a user (a local SMB user or an AD user) which will be granted admin rights to the share. This user will be used to manage the share permissions.

NOTE: 'ovios' is not an SMB user, therefore this user won't be accepted. Only users listed by 'smb ad-users' and 'smbuserlist' can manage shares.

'smb_export' can also be used to remove shares. 

5. Set up replication

Replication can be set up between 2 OviOS nodes, or between one OviOS node and another Linux system running ZFS on Linux. If the destination is not an OviOS system, it must have the 'zfs' and 'zpool' binaries in /usr/sbin.
Before setting up replication, make sure that: 

            Passwordless SSH authentication must be set up for user root
            between the nodes. Make sure the source node can SSH into the
            destination node as root without a password.

            When running a full replication with 'retadm initiate', the
            source pool and dataset MUST exist. On the destination, only
            the pool must exist; the destination dataset will be created
            by retadm. If the destination dataset already exists, the
            command will print an error and exit.
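Passwordless root SSH between the nodes can be set up with standard OpenSSH tooling. The destination hostname below is a placeholder:

```shell
# On the source node, as root: generate a key pair if one does not exist
ssh-keygen -t ed25519 -N "" -f /root/.ssh/id_ed25519
# Copy the public key to the destination node (prompts for the password once)
ssh-copy-id root@dest-node
# Verify: this must log in without prompting for a password
ssh root@dest-node true
```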

The replication tool is called 'retadm'.

ovios-shell> retadm  

       retadm initiate - initiates a new full replication
       retadm inc <poolname/dataset> - runs an incremental replication for poolname/dataset
       retadm modify <poolname/dataset> - allows to modify the destination hostname or IP
       retadm reset <poolname/dataset> - resets all replication properties for poolname/dataset
       retadm status - displays a current status for all datasets in the system, on which
                       replication is enabled. The ones without replication enabled are ignored
       retadm help - prints usage and help menu.


5.1. Initial replication

Run 'retadm initiate'. The interactive tool will ask for the destination hostname or IP, then check that the destination is reachable and that passwordless authentication works.
The following example sets up replication for lun1 in retpool: it will be sent to the remote host, and a replicated LUN, lun1-replicated, will be created in the remote pool retpool.

ovios-shell> retadm initiate 
Initiates a new replication for a vol or LUN.
Requires ssh authentication to already be configured.
Enter the destination host or IP:
Checking is available found and is accessible
Checking if passwordless authentication works to
Passwordless authentication tested successfully to
Enter the source poolname: retpool

Enter the volume name or LUN name: lun1

Enter the destination poolname: retpool

Enter the destination dataset to be created: lun1-replicated
Checking is available found and is accessible
Checking if passwordless authentication works to
Passwordless authentication tested successfully to
Created a new snapshot for retpool/lun1 named retpool/lun1@ovios_repl_full-2019-01-08_13:59:07
Started full replication for retpool/lun1 to retpool/lun1-replicated

5.2. Incremental replication

Once the initial replication completes, the admin can setup a cron job to schedule replication for multiple volumes and LUNs. 'retadm' logs activity and errors in /var/log/ret.
To schedule replication, use 'edit_cron' from ovios-shell, and add a cron job. For ex:

ovios-shell> edit_cron vim 
2019-01-08 14:07:13  INFO listing root's fcrontab 
0 3 * * *   /usr/sbin/logrotate -f /etc/logrotate.conf
*/30 * * * * /sbin/retadm inc retpool/lun0

This line (*/30 * * * * /sbin/retadm inc retpool/lun0) means retpool/lun0 will be replicated every 30 minutes.
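When several datasets are replicated, staggering the cron entries avoids overlapping replication streams. The schedule below is only an illustration, using example dataset names:

```shell
# fcrontab entries: replicate two datasets on offset half-hour schedules
0,30 * * * *  /sbin/retadm inc retpool/lun0
15,45 * * * * /sbin/retadm inc retpool/vol0
```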

To modify the destination for a dataset, use 'retadm modify'. 

5.3. Break replication 

Breaking a replication will remove the replication relationship between source and destination, as well as the base snapshots on both sides.
The source and destination volumes or LUNs will not be removed.

ovios-shell> retadm reset retpool/vol0 
This will break replication for retpool/vol0 and delete the base snapshot.
It will not remove the destination dataset, but will remove the destination base snapshot
Enter YES to continue: YES
Finished resetting replication for retpool/vol0

6. Disaster recovery (DR)

DR refers to the ability to quickly and easily make production data accessible to clients while the production system is unavailable.
To make sure this is possible, always use a remote OviOS system to which you replicate your critical volumes and LUNs.

The replicated shares and LUNs are always available on the destination system. The destination system's configuration can be synced with the source automatically, so that in a DR scenario all shares, LUNs and iSCSI Targets are available immediately.

For this to work, the pool name on the source and destination must be identical, and all shares and LUNs on the destination must have the same name as the source.

If these requirements are met, run 'sync-config dest <destination host or IP>'. This will sync all users, groups, hosts, SMB settings and iSCSI settings to the destination.
The network settings are, for obvious reasons, not synced, as the destination has its own hostname and IP addresses.

Therefore, a scheduled cron job should be set up to periodically sync the config, for example:
*/30 * * * * /sbin/sync-config dest-hostname

Once a DR scenario occurs and the remote system is required to serve production data via iSCSI, SMB and NFS, and sync-config is up to date (all users, iSCSI settings, NFS shares, SMB users and shares), simply point the clients to the remote system's IP or hostname.


In cases where this is not a true DR system and the destination host requires different local settings, the admin must prepare the config manually: create local users, create SMB users, join an AD domain and so on.
The ovios_restore tool can be used on the destination to automatically create the targets and map the replicated LUNs to them, without removing any targets which already exist on the system. However, the LUNs will be mapped with their original LUN IDs, so the admin must be careful not to end up with duplicate LUN IDs, and correct those manually if needed.