- Table of Contents
- H3C G6 Servers Storage Controller User Guide-6W100
- 00-Preface
- 01-Storage controller overview
- 02-Storage controller features
- 03-Configuring a VROC SATA RAID controller
- 04-Configuring an NVMe VROC module
- 05-Configuring an LSI-9540 or 9560 storage controller
- 06-Configuring a P460, P2404 or P4408 storage controller
- 07-Configuring an LSI 9500 series storage controller
- 08-Configuring a RAID-MARVELL-SANTACRUZ-LP-2i storage controller
- 09-Configuring a RAID-MARVELL-M.2 storage controller
- 10-Appendix A Troubleshooting storage controllers
- 11-Appendix B RAID arrays and fault tolerance
Configuring the VROC SATA RAID controller
NOTE: The BIOS screens might vary by BIOS version. The screenshots in this chapter are for illustration only.
About the VROC SATA RAID controller
Intel Virtual RAID on CPU (VROC) is an embedded virtual RAID technology from Intel that provides RAID capability for systems that use SATA drives. If multiple drives are attached to the RAID controller, you can configure RAID arrays to protect data and improve read/write performance.
Features
Operating modes
The VROC SATA RAID controller supports the following operating modes:
· AHCI mode—This is the default mode. In this mode, all physical drives are exposed to the operating system (OS). The drives can be directly used for data storage. Native Command Queuing (NCQ) is supported, which improves the I/O performance of SATA drives.
· RAID mode—In this mode, RAID functions are enabled and RAID arrays can be created on physical drives to improve I/O performance and data security.
NOTE:
· For the new mode to take effect, restart the server after changing the operating mode.
· The OS might fail to start up after the operating mode of the storage controller is changed. To resolve this issue, reinstall the OS. If the issue persists, contact Technical Support.
RAID levels
The supported RAID levels vary by storage controller model. For more information about the supported RAID levels of each storage controller, see H3C Servers Storage Controllers Technical Specifications.
Table 1 shows the minimum number of drives required, the maximum number of drives supported, and the maximum number of failed drives tolerated by each RAID level. For more information about RAID levels, see "Appendix B RAID arrays and fault tolerance."
Table 1 RAID levels and the numbers of drives for each RAID level
RAID level | Min. drives required | Max. drives | Max. failed drives
---|---|---|---
RAID 0 | 2 | Maximum number of drives supported by the drive backplane | 0
RAID 1 | 2 | 2 | 1
RAID 5 | 3 | Maximum number of drives supported by the drive backplane | 1
RAID 10 | 4 | 4 | 2
Restrictions and guidelines for RAID configuration
· As a best practice, install drives that do not contain RAID information.
· To avoid degraded RAID performance or RAID creation failures, make sure all drives in the RAID are the same type (HDDs or SSDs) and have the same connector type (SAS or SATA).
· For efficient use of storage, use drives that have the same capacity to build a RAID. If the drives have different capacities, the lowest capacity is used across all drives in the RAID.
· If one drive is used by several logical drives, RAID performance might be affected and maintenance complexities will increase.
Configuring RAID arrays in UEFI mode
This section describes how to configure RAID arrays through a VROC SATA RAID controller in UEFI mode. For more information about how to enter the BIOS and set the boot mode to UEFI, see the BIOS user guide for the server.
Setting the VROC SATA operating mode
1. During server POST, press Delete, Esc, or F2 as prompted to open the BIOS setup screen as shown in Figure 1.
For how to navigate screens and modify settings, see the operation instructions at the lower right corner.
Figure 1 BIOS setup screen
2. On the screen as shown in Figure 2, select Advanced > Platform Configuration > PCH-IO Configuration, and press Enter.
Figure 2 Platform Configuration screen
3. On the screen as shown in Figure 3, select SATA And RST Configuration, and then press Enter.
Figure 3 PCH-IO Configuration screen
4. On the screen as shown in Figure 4, select Controller 1 SATA And RST Configuration, Controller 2 SATA And RST Configuration or Controller 3 SATA And RST Configuration based on the drive position, and then press Enter.
Figure 4 Modifying the storage controller operating mode
5. Select an operating mode. For more information about setting the operating mode for a VROC SATA RAID controller, see "Features."
Figure 5 Selecting an operating mode
6. After the configuration is complete, press F4. On the dialog box that opens, select Yes to save the current configuration and reboot the system to complete the storage controller operating mode configuration.
Accessing the VROC SATA configuration screen
1. Access the BIOS setup screen.
2. On the screen as shown in Figure 6, select Advanced > Intel(R) VROC SATA Controller or Intel(R) VROC sSATA Controller, and press Enter.
NOTE: For the Intel(R) VROC SATA Controller or Intel(R) VROC sSATA Controller option to appear on the screen shown in Figure 6, make sure the SATA or sSATA controller has been configured to operate in RAID mode. For how to set the operating mode, see "Setting the VROC SATA operating mode."
Figure 6 Advanced screen
The VROC SATA configuration screen as shown in Figure 7 opens.
Figure 7 VROC SATA configuration screen
Configuring RAID arrays
1. On the VROC SATA configuration screen as shown in Figure 8, select Create RAID Volume, and press Enter.
The Create RAID Volume option is available only when two or more drives are attached to the SATA or sSATA controller.
Figure 8 Selecting Create RAID Volume
2. On the screen as shown in Figure 9, set the Name, RAID Level, Select Disks, Strip Size, and Capacity parameters, select Create Volume, and press Enter to create the RAID volume.
Table 2 describes the configuration parameters for creating a RAID volume.
Figure 9 Setting RAID parameters
Table 2 Configuration parameters
Parameter | Description
---|---
Name | RAID array name.
RAID Level | RAID level, which determines the performance, fault tolerance capability, and capacity of the logical drive.
Select Disks | Select member drives for the RAID array. Available drives are displayed under Select Disks. Press Enter to select drives. [X] indicates that the corresponding drive has been selected.
Strip Size | Stripe size, which determines the size of the data block written to a stripe on each drive.
Capacity | Logical drive capacity.
3. Verify the creation result on the screen as shown in Figure 10.
After the RAID volume is created, it is displayed under RAID Volumes. To view details about a RAID volume, such as the RAID array name, RAID level, and member drives, select the RAID volume, and press Enter.
Figure 10 Viewing the created RAID volume
Configuring a hot spare drive
This task configures a global hot spare drive. The hot spare drive is effective for all created RAID volumes. Multiple hot spare drives can be created to protect multiple RAID volumes.
CAUTION: With hot spare drives configured, if a RAID array is degraded because of drive failures, a hot spare drive automatically replaces the failed drive. The system starts RAID rebuilding automatically after you access the OS.
To configure a hot spare drive:
1. On the VROC SATA configuration screen as shown in Figure 11, select a drive to configure as a hot spare drive, and press Enter.
Figure 11 Selecting a drive for hot spare
2. Select Mark as Spare, and press Enter.
Figure 12 Configuring a spare drive
3. On the confirmation screen that opens, select Yes, and press Enter.
Figure 13 Confirming the hot spare configuration
Deleting RAID arrays
1. On the VROC SATA configuration screen as shown in Figure 14, select the RAID volume to be deleted under RAID Volumes, and press Enter.
Figure 14 Selecting the RAID volume to be deleted
2. On the screen as shown in Figure 15, select Delete and press Enter to delete the selected RAID volume.
Figure 15 RAID volume information screen
Locating a drive
1. On the VROC SATA configuration screen as shown in Figure 16, select the drive to locate, and press Enter.
Figure 16 Selecting the drive to locate
2. Set the Locate LED On option to On, and then press Enter.
For information about drive LEDs, see the user guide for the server.
Figure 17 Locating a logical drive
Initializing a logical drive
The embedded RAID controller automatically initializes a RAID array after it is created. After initialization, the logical drive can be used by the OS, and a redundant RAID array provides the fault tolerance of its RAID level.
Configuring RAID arrays in legacy mode
The VROC SATA RAID controller does not support RAID configuration in legacy mode.
Configuring RAID in the operating system
Viewing physical drive letters
NOTE:
· To configure the VROC SATA RAID controller, you must set the PCH SATA Configuration (or PCH sSATA Configuration) field in PCH Configuration to RAID and restart the server for the configuration to take effect.
· After you change the operating mode of the storage controller, the system disk installed in the original mode might encounter errors that prevent the OS from starting. In this case, reinstall the OS. If the issue persists, contact Technical Support.
Perform this task to view physical drive letters used for RAID creation.
Syntax
lsscsi
Examples
# View physical drive letters used for RAID creation.
[root@localhost ~]# lsscsi
[0:0:0:0] disk ATA INTEL SSDSCKKB48 1132 /dev/sda
[1:0:0:0] disk ATA Micron_5300_MTFD U001 /dev/sdb
[N:0:0:1] disk INTEL SSDPE2KX010T8__1 /dev/nvme0n1
In this example, the system disk is /dev/nvme0n1, and the drive letters used for RAID creation are /dev/sda and /dev/sdb.
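If lsscsi is not available on the system, you can also identify candidate drives with lsblk, which ships with most Linux distributions. The following command is a minimal sketch based on the drive layout in the preceding example; drives whose TRAN value is sata are candidates for RAID creation, and nvme indicates the NVMe system disk.
# List physical drives with their size, transport type, and model.
[root@localhost ~]# lsblk -d -o NAME,SIZE,TRAN,MODEL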
Creating a container
Perform this task to create a container for logical drive creation.
Syntax
mdadm -C container_id phy_id -n num -e imsm
Parameters
container_id: Specifies the container name.
phy_id: Specifies the physical drive letters.
num: Specifies the number of physical drives.
Examples
# Create a container named /dev/md0.
[root@localhost ~]# mdadm -C /dev/md0 /dev/sda /dev/sdb -n 2 -e imsm
mdadm: container /dev/md0 prepared.
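Optionally, you can verify the container before adding RAID arrays to it. The following commands are a sketch that assumes the container and member drives from the preceding example; mdadm reports the RAID level of a container as container and shows the IMSM metadata written to each member drive.
# View container details.
[root@localhost ~]# mdadm -D /dev/md0
# View the IMSM metadata on a member drive.
[root@localhost ~]# mdadm -E /dev/sda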
Adding RAID arrays to a container
Perform this task to add RAID arrays to a container.
Syntax
mdadm -C RAID_id container_id -n num -l RAID_level
Parameters
RAID_id: Specifies the name of a RAID array.
container_id: Specifies the container name.
num: Specifies the number of physical drives.
RAID_level: Specifies the RAID level.
Examples
# Add RAID 1 array named /dev/md/R1 to container /dev/md0.
[root@localhost ~]# mdadm -C /dev/md/R1 /dev/md0 -n 2 -l 1
mdadm: array /dev/md/R1 started.
# Add RAID 0 array named /dev/md/R1 to container /dev/md0.
[root@localhost ~]# mdadm -C /dev/md/R1 /dev/md0 -n 2 -l 0
mdadm: array /dev/md/R1 started.
# Use two drives to create a RAID 0 array in container /dev/md0.
[root@localhost ~]# mdadm -C /dev/md0 /dev/sdc /dev/sdd -n 2 -e imsm
mdadm: /dev/sdc appears to contain an ext2fs file system
size=976762584K mtime=Wed Dec 31 19:00:00 1969
mdadm: /dev/sdd appears to contain an ext2fs file system
size=976762584K mtime=Wed Dec 31 19:00:00 1969
Continue creating array? y
mdadm: container /dev/md0 prepared.
[root@localhost ~]# mdadm -C /dev/md/R0 /dev/md0 -n 2 -l 0
mdadm: array /dev/md/R0 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0]
md125 : active raid0 sdc[1] sdd[0]
1953513472 blocks super external:/md0/0 128k chunks
md0 : inactive sdd[1](S) sdc[0](S)
2210 blocks super external:imsm
md126 : active raid0 sda[1] sdb[0]
890817536 blocks super external:/md127/0 128k chunks
md127 : inactive sdb[1](S) sda[0](S)
10402 blocks super external:imsm
unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md125
/dev/md125:
Container : /dev/md0, member 0
Raid Level : raid0
Array Size : 1953513472 (1863.02 GiB 2000.40 GB)
Raid Devices : 2
Total Devices : 2
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 128K
Consistency Policy : none
UUID : cdba8f2d:aad3144b:8a4c2d03:b16742d0
Number Major Minor RaidDevice State
0 8 48 0 active sync /dev/sdd
1 8 32 1 active sync /dev/sdc
# Use three drives to create a RAID 5 array in container /dev/md0.
[root@localhost ~]# mdadm -C /dev/md0 /dev/sdc /dev/sdd /dev/sde -n 3 -e imsm
mdadm: /dev/sdc appears to contain an ext2fs file system
size=976762584K mtime=Wed Dec 31 19:00:00 1969
mdadm: /dev/sdd appears to contain an ext2fs file system
size=976762584K mtime=Wed Dec 31 19:00:00 1969
mdadm: /dev/sde appears to contain an ext2fs file system
size=468851544K mtime=Wed Dec 31 19:00:00 1969
Continue creating array? y
mdadm: container /dev/md0 prepared.
[root@localhost ~]# mdadm -C /dev/md/R1 /dev/md0 -n 3 -l 5
mdadm: array /dev/md/R1 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4]
md125 : active raid5 sde[2] sdd[1] sdc[0]
937691136 blocks super external:/md0/0 level 5, 128k chunk, algorithm 0 [3/3] [UUU]
[>....................] resync = 0.3% (1692948/468845568) finish=59.7min speed=130226K/sec
md0 : inactive sde[2](S) sdd[1](S) sdc[0](S)
3315 blocks super external:imsm
md126 : active raid0 sda[1] sdb[0]
890817536 blocks super external:/md127/0 128k chunks
md127 : inactive sdb[1](S) sda[0](S)
10402 blocks super external:imsm
unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md125
/dev/md125:
Container : /dev/md0, member 0
Raid Level : raid5
Array Size : 937691136 (894.25 GiB 960.20 GB)
Used Dev Size : 468845568 (447.13 GiB 480.10 GB)
Raid Devices : 3
Total Devices : 3
State : clean, resyncing
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
Resync Status : 4% complete
UUID : 62e76373:ba6fb28a:4ae39dba:bdf52d6c
Number Major Minor RaidDevice State
0 8 32 0 active sync /dev/sdc
1 8 48 1 active sync /dev/sdd
2 8 64 2 active sync /dev/sde
# Use four drives to create a RAID 10 array in container /dev/md0.
[root@localhost ~]# mdadm -C /dev/md0 /dev/sdc /dev/sdd /dev/sde /dev/sdf -n 4 -e imsm
mdadm: /dev/sdc appears to contain an ext2fs file system
size=976762584K mtime=Wed Dec 31 19:00:00 1969
mdadm: /dev/sdd appears to contain an ext2fs file system
size=976762584K mtime=Wed Dec 31 19:00:00 1969
mdadm: /dev/sdf appears to contain an ext2fs file system
size=3750738264K mtime=Wed Dec 31 19:00:00 1969
Continue creating array? y
mdadm: container /dev/md0 prepared.
[root@localhost ~]# mdadm -C /dev/md/R1 /dev/md0 -n 4 -l 10
mdadm: array /dev/md/R1 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4] [raid10]
md125 : active raid10 sde[3] sdd[2] sdf[1] sdc[0]
937691136 blocks super external:/md0/0 128K chunks 2 near-copies [4/4] [UUUU]
[>....................] resync = 0.1% (1635136/937691136) finish=76.3min speed=204392K/sec
md0 : inactive sdf[3](S) sde[2](S) sdd[1](S) sdc[0](S)
4420 blocks super external:imsm
md126 : active raid0 sda[1] sdb[0]
890817536 blocks super external:/md127/0 128k chunks
md127 : inactive sdb[1](S) sda[0](S)
10402 blocks super external:imsm
unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md125
/dev/md125:
Container : /dev/md0, member 0
Raid Level : raid10
Array Size : 937691136 (894.25 GiB 960.20 GB)
Used Dev Size : 468845568 (447.13 GiB 480.10 GB)
Raid Devices : 4
Total Devices : 4
State : clean, resyncing
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 128K
Consistency Policy : resync
Resync Status : 0% complete
UUID : bd08e6b3:90b4c83c:498c7d20:2b33be77
Number Major Minor RaidDevice State
0 8 32 0 active sync set-A /dev/sdc
1 8 80 1 active sync set-B /dev/sdf
2 8 48 2 active sync set-A /dev/sdd
3 8 64 3 active sync set-B /dev/sde
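To make sure the container and RAID arrays are assembled consistently at startup, you can record them in the mdadm configuration file. The following command is a sketch; the configuration file location (/etc/mdadm.conf or /etc/mdadm/mdadm.conf) and whether you must rebuild the initramfs afterward depend on the Linux distribution, so check the distribution documentation before applying it.
# Append the detected container and RAID array definitions to the mdadm configuration file.
[root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf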
Viewing the states of all RAID arrays
Perform this task to view the state of all RAID arrays.
Syntax
cat /proc/mdstat
Examples
# View the states of all RAID arrays.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md127 : active raid1 sda[1] sdb[0]
468845568 blocks super external:/md0/0 [2/2] [UU]
[>....................] resync = 0.5% (2401792/468845568) finish=35.6min speed=218344K/sec
md0 : inactive sdb[1](S) sda[0](S)
2210 blocks super external:imsm
unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md127
/dev/md127:
Container : /dev/md0, member 0
Raid Level : raid1
Array Size : 468845568 (447.13 GiB 480.10 GB)
Used Dev Size : 468845568 (447.13 GiB 480.10 GB)
Raid Devices : 2
Total Devices : 2
State : clean, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Resync Status : 1% complete
UUID : af26b232:b11886cf:24caa39f:e04f675a
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 0 1 active sync /dev/sda
Wait for the resync process to be complete.
[root@localhost ~]# mdadm -D /dev/md127
/dev/md127:
Container : /dev/md0, member 0
Raid Level : raid1
Array Size : 468845568 (447.13 GiB 480.10 GB)
Used Dev Size : 468845568 (447.13 GiB 480.10 GB)
Raid Devices : 2
Total Devices : 2
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
UUID : af26b232:b11886cf:24caa39f:e04f675a
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 0 1 active sync /dev/sda
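To track the resync progress without re-running the command manually, you can refresh /proc/mdstat periodically or block until synchronization finishes. The following commands are a sketch that uses the array name from the preceding example.
# Refresh the RAID array states every 5 seconds. Press Ctrl+C to exit.
[root@localhost ~]# watch -n 5 cat /proc/mdstat
# Alternatively, wait for the resync operation on the array to complete.
[root@localhost ~]# mdadm --wait /dev/md127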
Creating RAID arrays in Windows
Installing the VROC driver
When you install the VROC driver, related software programs are also installed.
To install the VROC driver:
1. Access the H3C official website and download the VROC driver.
2. Upload the SetupVROC.exe file to the system, and double-click the file to open the installation window.
Figure 18 Installing the VROC tool
3. Select the installation location.
Figure 19 Selecting the installation location
Figure 20 Installation completed
Creating a RAID array
1. Open the Intel Virtual RAID on CPU utility, which is installed together with the VROC driver.
Figure 21 Creating a RAID array
2. Click the + icon.
Figure 22 Selecting to create a volume
3. Select the controller and RAID level and then click Next. In this example, two M.2 drives are used to create a RAID 1 array.
Figure 23 Selecting the controller and RAID level
4. Select drives, and then click Next.
Figure 24 Selecting drives
5. Enter the RAID volume name, select the volume size, and then click Next to create the volume.
Figure 25 Configuring the RAID volume parameters
After the RAID volume is created, the page as shown in Figure 26 opens.
Figure 26 Volume creation completed
Deleting a RAID array
1. Select the RAID array to be deleted, and then click Delete Volume.
Figure 27 Selecting the RAID array to be deleted
2. In the dialog box that opens, click Yes.
Figure 28 Confirming the deletion
3. Verify that the RAID array has been deleted on the Volumes page.
Figure 29 Verifying the deletion result
Troubleshooting
For detailed information about collecting storage controller fault information, diagnosing and locating faults, and troubleshooting servers, see H3C Servers Troubleshooting Guide.
Compatibility
For information about storage controller and server compatibility, access http://www.h3c.com/en/home/qr/default.htm?id=66.
Downloading and installing drivers
Access https://www.h3c.com/en/Support/Resource_Center/EN/Severs/Catalog/Optional_Parts/Storage_Controller/?tbox=Software to download the storage controller drivers. For more information about installing drivers, see the release notes for the driver program.