This section provides advanced instructions for manually configuring a storage array. Normally, you do not need to perform any of these tasks, as they are all automatically performed by the XR Configuration Utility. This utility is installed with the DKU and automates the LUN creation process.
Note: Make sure you do not re-partition the system drive or any other disks that you want to preserve as they are. Partitioning destroys all data on the disks.
Creating Hardware LUNs
LUNs, also referred to as Logical Units or Logical Drives, are groups of disk drives that are striped together to provide optimal performance and RAID protection. Once configured, LUNs are seen by the Linux operating system as if they were single disk drives.
For systems with two sets of enclosures, you have to configure one set at a time with the XR Configuration Utility. Connect the first set of enclosures, and use the utility to configure it. When done, disconnect the first set and connect the second set. When the second set of enclosures is configured, re-connect both sets.
To configure LUNs on XR-series storage:
Open a terminal, log in as root, and run /usr/discreet/DKU/current/Utils/Storage/current/XR_config.pl. The utility detects whether a LUN configuration already exists on the storage attached to the workstation.
If a LUN configuration already exists on the storage, you are prompted for confirmation to overwrite that configuration.
Warning: LUN configuration is destructive. Make sure you want to overwrite an existing configuration before you confirm.
After the script detects the number of enclosures and drives, it prompts you to indicate the filesystem your storage uses. Type 2.
When asked if you have a 2-loop or a 4-loop configuration, select the option that applies to your storage. The XR Configuration Utility configures your storage.
Type x to exit the XR Configuration Utility.
Reboot your workstation so that the newly created LUNs are rescanned by the operating system.
The XR Configuration Utility exits without configuring your storage if any of the following is detected:
An incorrect number of disks. The total number of disks must be a multiple of 12.
One or more of the enclosures do not have the correct firmware.
In a dual RAID enclosure environment, the number of expansion chassis on each RAID enclosure is not the same.
An odd number of enclosures in a 4-loop configuration. Only even numbers of enclosures are supported.
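Before running the utility, you can sanity-check the disk count yourself. The following is a minimal sketch: the helper function and the example counts are illustrative, and how you obtain the actual count (for example, from the output of fdisk -l or lsscsi) depends on your system.

```shell
#!/bin/sh
# Check that the total number of disks is a non-zero multiple of 12,
# as required by the XR Configuration Utility.
disk_count_ok() {
    count="$1"
    if [ "$count" -gt 0 ] && [ $((count % 12)) -eq 0 ]; then
        echo "OK: $count disks (multiple of 12)"
        return 0
    else
        echo "ERROR: $count disks -- total must be a non-zero multiple of 12"
        return 1
    fi
}

# Example with a hypothetical count of 24 disks (two full enclosures):
disk_count_ok 24
```

In practice you would pass in the number of storage disks detected on your workstation rather than a literal value.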
Partitioning Disks or LUN devices as Primary Partitions
To achieve optimal performance, each disk or LUN in the array should be partitioned as a single primary partition.
On storage arrays with 450 GB or larger drives, use the parted utility to create GPT (GUID Partition Table) partitions. On arrays with smaller drives, use the fdisk utility to create Linux LVM partitions.
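The rule above can be summarized in a small helper. This is only a sketch: the 450 GB threshold comes from the text, and the drive size must be supplied manually.

```shell
#!/bin/sh
# Choose the partitioning tool by drive size:
#   450 GB or larger -> parted with a GPT label
#   smaller          -> fdisk with a Linux LVM partition type
partition_tool() {
    size_gb="$1"
    if [ "$size_gb" -ge 450 ]; then
        echo "parted (GPT)"
    else
        echo "fdisk (Linux LVM)"
    fi
}

partition_tool 450     # prints: parted (GPT)
partition_tool 300     # prints: fdisk (Linux LVM)
```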
To partition disk or LUN devices with 450 GB drives or larger:
Reboot your system to reload the fibre channel adapter drivers.
Open a terminal, log in as root, and list the disks or LUN devices detected by the operating system, using the following command: fdisk -l | grep dev. Identify the disk or LUN devices that are part of the storage array to be configured with a standard filesystem. These are the devices you will re-partition.
Use the parted command to re-partition each disk device identified in the previous step: /sbin/parted -s -- <disk name> mklabel gpt mkpart primary 0 -1 where <disk name> is the name of a disk device, without a partition number, such as /dev/sdb.
Repeat for each disk.
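The parted step can be wrapped in a small loop. This is a sketch, not part of the documented procedure: the DEVICES list is an assumption that must be replaced with the devices you identified, and the loop only echoes each command (a dry run), since parted is destructive. Remove the leading echo only after verifying the device list.

```shell
#!/bin/sh
# Dry run: print the parted command for each storage device.
# DEVICES is a placeholder -- substitute the devices identified with
# 'fdisk -l | grep dev'. Do NOT include the system drive.
DEVICES="/dev/sdb /dev/sdc /dev/sdd /dev/sde"

partition_all() {
    for disk in "$@"; do
        # Remove 'echo' to actually re-partition (destroys data on the disk).
        echo /sbin/parted -s -- "$disk" mklabel gpt mkpart primary 0 -1
    done
}

partition_all $DEVICES
```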
To partition disk or LUN devices with drives smaller than 450 GB:
Reboot your system to reload the fibre channel adapter drivers.
Open a terminal, log in as root, and list the disks or LUN devices detected by the operating system: fdisk -l | grep dev. Identify the disk or LUN devices that are part of the storage array to be configured with a standard filesystem. These are the devices you will re-partition.
If you plan to configure a standard filesystem on a former Stone FS storage array, delete the volume label and volume table on each LUN device that is part of the storage array. Type the following command for each LUN device: dd if=/dev/zero of=<LUN device> count=4096 where <LUN device> is the device name of a LUN in your storage array, such as /dev/sdc.
Warning: When using the dd command, be very careful not to delete your system drive (usually /dev/sda) or any drive other than the LUNs in your storage array.
Use fdisk to re-partition each disk device identified in the previous step: fdisk <disk name> where <disk name> is a disk device name without a partition number, such as /dev/sdf. The fdisk utility starts, checks the disk device, and then displays its prompt.
Note: When fdisk starts, a warning about the number of disk cylinders may appear. You can disregard this warning.
Type n to display the New partition creation menu. fdisk displays the type of partitions you can create (primary or extended).
Create a primary partition on the disk device by typing p at the prompt.
When prompted to enter a partition number, type 1 to make the primary partition the first one on the LUN.
Note: You may have to delete pre-existing partitions by entering d when prompted, and repeating step 3.
When prompted for the starting and ending cylinder numbers, press Enter twice to accept the defaults, which are the first and last cylinders on the device. The fdisk prompt reappears.
Type t to set the partition type. You are prompted to enter the hexadecimal code of the partition type to be created on the LUN.
Type 8e to set the partition type to Linux LVM. fdisk sets the partition as Linux LVM and the following output appears: Changed system type of partition 1 to 8e (Linux LVM)
Type w to save the new partition table.
Repeat steps 2 through 9 for each disk or LUN device identified in step 1.
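The interactive fdisk answers above can also be fed from a script when many devices must be partitioned. A sketch under the assumption that each device has no pre-existing partitions (otherwise delete them with d first); the helper below is a dry run that only prints the command it would execute.

```shell
#!/bin/sh
# The answer sequence mirrors the interactive steps: n (new partition),
# p (primary), 1 (first partition), two empty lines (default first and
# last cylinders), t (set type), 8e (Linux LVM), w (write and exit).
fdisk_answers() {
    printf 'n\np\n1\n\n\nt\n8e\nw\n'
}

partition_disk() {
    # Dry run: show what would be executed. To run for real (destructive!),
    # replace the echo with:  fdisk_answers | fdisk "$1"
    echo "fdisk_answers | fdisk $1"
}

partition_disk /dev/sdf
```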
Assembling the Disk or LUN Devices into a Logical Volume
After you have formatted each disk or LUN device as a partition, you must assemble the LUNs into a single logical volume on which you create the XFS filesystem. This procedure does not cover creating fault-tolerance and assumes that the LUNs are RAID-protected, as is the case with Stone Direct XR-series arrays.
To assemble a logical volume:
Verify that the disk or LUN devices are detected by the operating system: fdisk -l | grep dev. All devices appear in a list similar to the following example (your values may vary):
/dev/sdb1 1 180482 1449713663+ ee EFI GPT
/dev/sdc1 1 180482 1449713663+ ee EFI GPT
/dev/sdd1 1 180482 1449713663+ ee EFI GPT
/dev/sde1 1 180482 1449713663+ ee EFI GPT
Partitions created with the parted command for arrays with 450 GB disks are marked “EFI GPT”. Partitions created in fdisk for arrays with smaller capacity disks are marked “Linux LVM”. Other devices of different types may be listed before and after the GPT or LVM devices.
Create a physical volume on each of the devices: pvcreate <list of devices> where <list of devices> is a list of all the devices in the storage array. For example, if you have four devices, ranging from /dev/sdb1 to /dev/sde1, you would type: pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1. The physical volumes are created.
Tip: You can use the pvremove command to delete any erroneously entered devices.
Verify that the physical volumes were initialized correctly: pvscan -v. A list of all the physical volumes you created appears; for the previous example, this would be the four physical volumes created on devices /dev/sdb1 through /dev/sde1.
Create the volume group “vg00” from the physical volumes you created in the preceding step: vgcreate vg00 <list of volumes> where <list of volumes> is the list of physical volumes you created in the preceding step.
Tip: You can use the vgremove command to delete any erroneously entered volume group.
Verify that the volume group was created and obtain the value of the “Free PE / Size” field: vgdisplay -v. In the output, find the line that contains the “Free PE / Size” field and write down the “Free PE” value. For example, in the following output the “Free PE” value is 2124556: Free PE / Size 2124556 / 8.10 TB
Create a new logical volume on “vg00”: lvcreate -l <Free_PE_value> -i <#_of_physical_volumes> -I 32 -n lvol1 vg00 where <Free_PE_value> is the “Free PE” value you noted in the preceding step and <#_of_physical_volumes> is the number of physical volumes. If we continue with the example used in the previous steps, you would type: lvcreate -l 2124556 -i 4 -I 32 -n lvol1 vg00. The output confirms the creation of the logical volume: Logical volume “lvol1” created
Note: If the command outputs several lines about a file descriptor leaked on lvdisplay invocation, ignore them.
Check if the adsk_lvm startup script has been installed by the DKU to enable automatic logical volume reassembly upon reboot: chkconfig --list | grep adsk_lvm. If the script is properly configured, the command output is: adsk_lvm 0:off 1:off 2:on 3:on 4:on 5:on 6:off. If the command output is different, enable the script with:
chkconfig --add adsk_lvm
chkconfig adsk_lvm on
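The pvcreate, vgcreate, and lvcreate steps above can be collected into a single script. This is a dry-run sketch: PARTS and FREE_PE are assumptions taken from the example in the text, and must be replaced with your own partition list and the Free PE value reported by vgdisplay -v. The commands are echoed rather than executed.

```shell
#!/bin/sh
# Dry run of the logical-volume assembly for the striped volume vg00/lvol1.
# PARTS: the storage partitions identified earlier (placeholder values).
# FREE_PE: the "Free PE" value from 'vgdisplay -v' (example value from text).
PARTS="/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1"
FREE_PE=2124556

assemble() {
    nparts=$#    # stripe across all physical volumes
    # Remove the 'echo's to actually create the volumes.
    echo pvcreate "$@"
    echo vgcreate vg00 "$@"
    echo lvcreate -l "$FREE_PE" -i "$nparts" -I 32 -n lvol1 vg00
}

assemble $PARTS
```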
Creating the XFS Filesystem on the Logical Volume
After having created the logical volume, you are now ready to create and mount the XFS filesystem.
To create and mount an XFS filesystem:
Identify the optimal agsize value for your array by running the mkfs.xfs command: mkfs.xfs -d agcount=128 -f /dev/vg00/lvol1. This command displays diagnostics information similar to the following (your values may differ):
From the diagnostic information printed in the previous step, note: agsize on the first line, sunit and swidth on the fourth line.
Depending on the values of sunit and swidth, calculate a new agsize value using one of the following three methods:
If the values of sunit and swidth are both equal to 0, multiply the agsize value by 4096. For example (your values will differ): 1066667 * 4096 = 4369068032. Proceed using the value calculated above as the new agsize value.
If the command displays a warning message about the agsize being a multiple of the stripe width, multiply the agsize value by 4096, and subtract the sunit value multiplied by 4096. For example (your values will differ):
Continue using the value calculated above as the new agsize value.
If the values of sunit and swidth are not equal to 0, and no warning message appears, proceed to step 4 using the agsize value displayed by the mkfs.xfs command in step 1.
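The three cases above can be expressed as a small calculation. A sketch: whether the stripe-width warning appeared must be supplied by you as a flag, since it is read from the mkfs.xfs output, and the example values come from the text.

```shell
#!/bin/sh
# Compute the agsize to pass to mkfs.xfs, following the three cases:
#   1. sunit == 0 and swidth == 0         -> agsize * 4096
#   2. stripe-width warning was displayed -> agsize * 4096 - sunit * 4096
#   3. otherwise                          -> agsize unchanged
new_agsize() {
    agsize="$1"; sunit="$2"; swidth="$3"; warned="$4"
    if [ "$sunit" -eq 0 ] && [ "$swidth" -eq 0 ]; then
        echo $((agsize * 4096))
    elif [ "$warned" = "yes" ]; then
        echo $((agsize * 4096 - sunit * 4096))
    else
        echo "$agsize"
    fi
}

# Case 1, using the example values from the text:
new_agsize 1066667 0 0 no      # prints: 4369068032
```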
Run mkfs.xfs again to create the XFS filesystem on the device /dev/vg00/lvol1, using the agsize value determined in the previous steps: mkfs.xfs -d agsize=<new agsize> -f /dev/vg00/lvol1. The filesystem is created on the storage array.
Note: If the command fails, redo your calculations starting from step 1.
Verify that the storage can be mounted by running one of the following commands:
For HP Z800 systems: mount /mnt/StorageMedia
For older systems: mount /mnt/stoneMedia
The storage should mount, as the DKU installation script should have created the mount point directory for your storage (/mnt/StorageMedia on HP Z800 workstations, or /mnt/stoneMedia on older workstations), as well as the corresponding entry in the /etc/fstab file. If you receive an error message and the storage does not mount, follow the instructions in the next section to manually mount the storage.
Manually Creating a Mount Point and Mounting the Storage
If the mount point directory for your storage was not created automatically by the DKU, or if the storage does not mount, create the mount point and mount the storage manually:
Create the directory that will serve as the mount point for the filesystem, if it does not exist. For example: mkdir /mnt/StorageMedia
Warning: Do not use the word “stonefs” as the name for your mount point directory. “stonefs” is a reserved word, and can cause issues if used as the mount point directory name.
Mount the XFS filesystem from the logical volume /dev/vg00/lvol1 on the directory you created in the previous step. For example: mount -v -t xfs -o rw,noatime,inode64 /dev/vg00/lvol1 /mnt/StorageMedia. The filesystem is mounted as /mnt/StorageMedia.
Confirm that the storage is now mounted: df -h. The output should list /dev/mapper/vg00-lvol1 mounted on your mount point directory.
Append a line to /etc/fstab so the filesystem is mounted at startup, for example: /dev/vg00/lvol1 /mnt/StorageMedia xfs rw,noatime,inode64
Optional: Confirm that the filesystem can mount automatically by rebooting the workstation and using the command df -h again.
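A quick way to confirm the fstab entry without rebooting is to grep for the device in the fstab file. The helper below is a sketch; it is demonstrated here against a temporary file rather than the live /etc/fstab, and the mount point in the sample entry is an assumption (use /mnt/stoneMedia on older systems).

```shell
#!/bin/sh
# Report whether a given fstab file contains a non-commented entry
# for the given device.
check_fstab() {
    device="$1"; fstab="$2"
    if grep -q "^[^#]*$device[[:space:]]" "$fstab" 2>/dev/null; then
        echo "fstab entry found for $device"
    else
        echo "no fstab entry for $device"
    fi
}

# Demonstration against a temporary copy rather than the live /etc/fstab:
tmp=$(mktemp)
echo "/dev/vg00/lvol1 /mnt/StorageMedia xfs rw,noatime,inode64 0 0" > "$tmp"
check_fstab /dev/vg00/lvol1 "$tmp"    # prints: fstab entry found for /dev/vg00/lvol1
rm -f "$tmp"
```

To check your real configuration, pass /etc/fstab as the second argument.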