GlusterFS is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace. GlusterFS features high scalability, high availability, and high performance, and it avoids single points of failure because it does not use metadata servers. ARM hosts do not support GlusterFS.
GlusterFS is applicable to school scenarios that require high system performance.
Add a vSwitch
For a cluster to use GlusterFS storage, you must add a storage vSwitch with the same name on each host that has GlusterFS configured. For information about vSwitches, see "Manage virtual switches for a host."
If the management node in the cluster will not be used as a service node, you must still add a storage vSwitch on the management node for GlusterFS storage to be usable, and make sure the storage vSwitch has the same name on each node. To do so, add the management node to the cluster, add the storage vSwitch, and then remove the management node from the cluster.
Teaching storage initialization
This task allows you to format an idle disk on a host and mount it to the local storage path /vms/learningspace (for more information, see "Create a local storage volume"), or to mount the shared storage in the cluster to the local storage path /vms/learningspace (for more information, see "Create a shared storage volume"). The newly created GlusterFS mount path and storage blocks will reside in the /vms/learningspace/glusterfs path on that disk or shared storage.
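After initialization, you can verify the result from the host CLI. This is a minimal check, assuming a standard Linux shell on the host:

# Verify that the disk or shared storage is mounted at the teaching storage path.
df -h /vms/learningspace
# Verify that the GlusterFS path has been created under it.
ls /vms/learningspace/glusterfs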
Create a GlusterFS volume
To create a volume of the custom or stateful failover type, follow these restrictions and guidelines:
· For a storage block to use the space on a drive of the host, make sure the storage block directory starts with the directory where the drive is mounted on the host. In addition, the mount point and the storage block directory cannot start with /vms/. For example, if the drive is mounted on /test/brick, use /test/brick/XX as the storage block directory and /test/brick/gluster as the directory for mounting the GFS file system as a best practice.
· Make sure the directory where the GFS file system will be mounted is not a subdirectory of the directory where the storage block is stored, and vice versa. For example, if the directory for mounting the GFS file system is /test/brick/gluster, the directory for storing the storage block cannot be /test, /test/brick, /test/brick/gluster, or /test/brick/gluster/XX.
· To avoid losing mounting information after a server reboot when you use an additional disk, format and mount the disk in the back end, and then write the mounting information into the /etc/fstab file, as shown in the sketch after this list.
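The third guideline can be illustrated with a minimal back-end sketch. The device name /dev/sdb is an assumption for illustration, and the paths follow the example above:

# Format the additional disk (assumed to be /dev/sdb) with XFS.
mkfs.xfs /dev/sdb
# Create the drive mount point and mount the disk.
mkdir -p /test/brick
mount /dev/sdb /test/brick
# Write the mounting information into /etc/fstab so that the mount
# survives a reboot. Using the UUID reported by blkid is more robust
# than using the device name.
blkid /dev/sdb
echo 'UUID=<uuid-from-blkid> /test/brick xfs defaults 0 0' >> /etc/fstab
# Verify that the /etc/fstab entry is valid.
mount -a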
From the left navigation pane, select Data Center > Teaching Storage > GlusterFS Settings.
Click Create Volume.
Configure the parameters as required, and then click OK.
Type: Select a storage type. If you select Teaching Image Storage, the mount point is /vms/learningspace/glusterfs/courseImages and the storage block path is /vms/learningspace/glusterfs/brick. If you select Custom Storage, you can define a mount point and storage block path as needed. If you select Stateful Failover Storage, GFS storage is used as the shared storage for the stateful failover system, and you can define a mount point and storage block path as needed, but neither can begin with /vms/. A GFS volume of the Custom Storage or Stateful Failover Storage type cannot be used as the course storage path.
Host Pool: Select a host pool.
Storage Cluster: Select a cluster.
Host: Select a host for storing course images. The number of hosts must be a multiple of the number of backups in the topology type.
vSwitch: Select the vSwitch to which the host is attached.
Topology Type: Select the number of GlusterFS storage backups. You can select 2 to 10 backups depending on the cluster scale and backup requirements.
GFS Mount Point: Directory where the GFS storage will be mounted.
Storage Block Path: Directory where the GlusterFS physical storage block resides.
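For reference, creating a GFS volume through this page corresponds roughly to the following native GlusterFS commands. This is a minimal sketch under assumptions: the volume name course_vol, host names host1 and host2, and the example paths from this section are illustrative, and the commands actually issued by the system might differ:

# On each host, create the storage block (brick) directory under the drive mount point.
mkdir -p /test/brick/brick1
# Create a volume with two backups (replica 2) across two hosts, and then start it.
gluster volume create course_vol replica 2 host1:/test/brick/brick1 host2:/test/brick/brick1
gluster volume start course_vol
# Mount the GFS file system at the GFS mount point.
mkdir -p /test/brick/gluster
mount -t glusterfs host1:/course_vol /test/brick/gluster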
· If you use a host that has acted as a GlusterFS node for storage expansion, make sure the bricks of the original GlusterFS volume have been deleted.
· Before expansion, create and mount a storage block directory on each newly added host, and make sure it is the same as the existing storage block path. For a storage block to use the space on a drive of the host, make sure the storage block directory starts with the directory where the drive is mounted on the host. For more information, see "Create a GlusterFS volume."
Perform this task to add storage backups or add hosts that act as storage nodes to expand a GlusterFS volume.
From the left navigation pane, select Data Center > Teaching Storage > GlusterFS Settings.
Click Expand in the Actions column for a GlusterFS volume.
In the dialog box that opens, select whether to change the topology type. If you choose not to change the topology type, make sure the number of available hosts in the cluster is a multiple of the number of backups in the topology type.
Select target hosts, and then click OK.
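For reference, expanding a volume corresponds roughly to the following native GlusterFS commands. This sketch reuses the assumed names from "Create a GlusterFS volume" and adds hypothetical hosts host3 and host4:

# Add a new replica set to the volume. The brick directory must already
# be created and mounted on the new hosts, as described above.
gluster volume add-brick course_vol replica 2 host3:/test/brick/brick1 host4:/test/brick/brick1
# Redistribute existing data across the new bricks.
gluster volume rebalance course_vol start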
This task might have severe or undesirable impacts. Back up the data in the target GlusterFS volume before you proceed.
As a best practice, use two replicas.
For better storage performance, do not use the full replica policy when more than three hosts exist.
From the left navigation pane, select Data Center > Teaching Storage > GlusterFS Settings.
Click Shrink in the Actions column for a GlusterFS volume.
In the dialog box that opens, select a host, select whether to delete the storage pool, select I am fully aware of the impacts of this operation, enter the password of the current administrator, and then click OK.
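For reference, shrinking a volume corresponds roughly to the following native GlusterFS commands, shown here as a sketch with the same assumed names. Data on the removed bricks is migrated before the removal is committed:

# Start removing one replica set from the volume.
gluster volume remove-brick course_vol replica 2 host3:/test/brick/brick1 host4:/test/brick/brick1 start
# Check the data migration progress.
gluster volume remove-brick course_vol replica 2 host3:/test/brick/brick1 host4:/test/brick/brick1 status
# Commit the removal after the migration completes.
gluster volume remove-brick course_vol replica 2 host3:/test/brick/brick1 host4:/test/brick/brick1 commit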
From the left navigation pane, select Data Center > Teaching Storage > GlusterFS Settings.
Click Delete in the Actions column for a GlusterFS volume.
In the dialog box that opens, click OK.
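For reference, deleting a volume corresponds roughly to the following native GlusterFS commands, again using the assumed names from this section. Note that deleting a volume does not erase the data left in the brick directories:

# Unmount the GFS file system on each host.
umount /test/brick/gluster
# Stop and then delete the volume.
gluster volume stop course_vol
gluster volume delete course_vol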
From the left navigation pane, select Data Center > Teaching Storage > GlusterFS Settings. On the Overview tab, you can view the topology, storage node states, and partition states.
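You can also view the same information from the CLI of any storage node. These are standard GlusterFS commands, with course_vol as the assumed volume name:

# View the state of the other storage nodes.
gluster peer status
# View the topology and configuration of the volume.
gluster volume info course_vol
# View the state of each brick (partition) in the volume.
gluster volume status course_vol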
From the left navigation pane, select Data Center > Teaching Storage > GlusterFS Settings.
Click the Node Management tab.
Click Repair Node in the Actions column for a node that has mounting or partition connection errors.
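The exact back-end actions of Repair Node are not described here. As a hedged CLI alternative for partition connection errors, the standard GlusterFS self-heal commands can be used, with course_vol as the assumed volume name:

# Trigger self-heal on files that require healing.
gluster volume heal course_vol
# View the files that still require healing.
gluster volume heal course_vol info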
If the GlusterFS storage pool is inactive and cannot be manually started, perform the following steps to resolve the issue:
1. Verify that the storage network is reachable.
2. Mount the GlusterFS volume to a temporary directory to check for duplicate files:
a. Execute the mkdir -p /vms/tmp command to create temporary directory /vms/tmp.
b. Execute the mount -t glusterfs IP:/VolumeName /vms/tmp command to mount the GlusterFS volume. IP represents the IP address of a node on the GlusterFS volume, and VolumeName represents the name of the volume.
c. Execute the df -h command to verify that the volume has been mounted successfully.
d. Access the /vms/tmp directory and execute the ls -a command to check for duplicate files. Delete the duplicate files, if any.
3. Execute the gluster volume heal VolumeName info command to check for files in the "Is in split-brain" state. Delete such a file, if any. To retain the file, save it in another directory, and then copy it back to its original directory after the storage pool recovers.
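As a supplementary sketch for step 3, newer GlusterFS releases can list and resolve split-brain entries directly. VolumeName is as defined above, and the latest-mtime policy is an assumption to verify against your GlusterFS version:

# List only the entries in the "Is in split-brain" state.
gluster volume heal VolumeName info split-brain
# Resolve a split-brain file by keeping the replica with the
# latest modification time.
gluster volume heal VolumeName split-brain latest-mtime <path-to-file>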