H3C S3610[5510] Series Ethernet Switches Operation Manual-Release 0001-(V1.02)
21-Cluster Management Operation
Chapter 1 Cluster Management Configuration
1.1 Cluster Management Overview
1.2 Configuring a Cluster
1.3 Configuring the Management Device
1.3.1 Enabling NDP Globally and for Specific Ports
1.3.2 Configuring NDP Parameters
1.3.3 Enabling NTDP Globally and for Specific Ports
1.3.4 Configuring NTDP Parameters
1.3.5 Enabling the Cluster Function
1.3.6 Establishing a Cluster
1.3.7 Configuring Member Management
1.3.8 Configuring Topology Management
1.3.9 Configuring Cluster Parameters
1.3.10 Configuring Interaction for a Cluster
1.4 Configuring the Member Devices
1.4.1 Enabling NDP Globally and for Specific Ports
1.4.2 Enabling NTDP Globally and for Specific Ports
1.4.3 Enabling the Cluster Function
1.4.4 Adding a Candidate Device to a Cluster
1.5 Displaying and Maintaining Cluster Management
1.6 Cluster Management Configuration Example
Chapter 1 Cluster Management Configuration
1.1 Cluster Management Overview
1.1.1 Defining Cluster
A cluster is an aggregation of a group of communication devices. Cluster management provides a way to manage large numbers of distributed Layer 2 devices centrally.
Cluster management is implemented through the Huawei Group Management Protocol version 2 (HGMPv2). With HGMPv2, a network administrator can manage multiple switches through the public IP address of a single switch, known as the management device. Figure 1-1 illustrates a typical cluster implementation.
Figure 1-1 Network diagram for a cluster
Each cluster has one (and only one) management device. You need to designate the management device when creating a cluster. The management device of a cluster is the portal of the cluster. That is, operations performed outside a cluster and intended for the member devices of a cluster, such as accessing, configuring, managing, and monitoring a cluster member, are all implemented through the management device.
l The management device of a cluster identifies and controls all the member devices in the cluster, regardless of their position and the way they are connected.
l The management device collects the topology information about all the cluster member devices and candidate devices for users to retrieve.
l A management device manages and monitors the cluster member devices based on the neighbor information collected, which contains network topology information.
Cluster management offers the following advantages:
l The procedures for configuring multiple switches are remarkably simplified. By configuring a public IP address on the management device, you can configure and manage a group of member devices from the management device without logging in to each device separately.
l Topology discovery and display functions are provided, which are useful for network monitoring and debugging.
l Software can be upgraded and parameters can be configured on multiple switches simultaneously.
l Cluster management is free of topology and distance limitations.
l Public IP address resources are saved.
1.1.2 Roles in a Cluster
The switches in a cluster play different roles. They function differently and are thus in different status. You can specify the role a switch plays. A switch can switch to other roles according to specific rules.
The following three roles exist in a cluster: management device, member device, and candidate device.
Table 1-1 Roles in a cluster

Role | Configuration | Function
---|---|---
Management device | Configured with the public IP address of the cluster; management commands intended for the cluster are processed by it | Provides the interface for managing all the devices in the cluster; manages the member devices through NAT; collects neighbor information and the topology information of the network; manages and maintains the entire cluster; supports proxy
Member device | Not configured with the public IP address of the cluster | A member of the cluster, managed by the management device; discovers neighbor information, carries out the commands forwarded by proxy, and reports faults and logs
Candidate device | Not configured with the public IP address of the cluster | A switch that does not belong to any cluster but has the cluster capability
As shown in the following figure, a device in a cluster changes its role according to specific rules:
Figure 1-2 Role change in a cluster
l Each cluster has one (and only one) management device. A management device collects NDP/NTDP information to discover and determine candidate devices.
l A candidate device becomes a member device after being added to a cluster.
l A member device becomes a candidate device after being removed from the cluster.
l When a user removes the management device from a cluster, the management device becomes a candidate device.
1.1.3 How a Cluster Works
Cluster management involves topology information collection and the establishment and maintenance of the cluster. Topology information collection and cluster maintenance are independent from each other, with the former starting before the cluster is created:
l All devices use neighbor discovery protocol (NDP) to collect the information of the immediately connected neighbors, including their software version, host name, MAC address and port number.
l The management device uses the neighbor topology discovery protocol (NTDP) to collect the information of the devices within the specified hop count and the topology information of all devices, and uses that information to determine the candidate devices of the cluster.
l The management device adds or deletes a member device and modifies cluster management configuration.
I. Introduction to NDP
NDP is used to discover information about directly connected neighbors, including the device name, software version, and connecting port of each adjacent device. NDP works as follows:
l An NDP-enabled device periodically sends NDP packets carrying its own information to its neighbors. An NDP packet carries a holdtime field, which indicates how long the receiving devices should keep the NDP data. Each NDP-enabled device maintains an NDP information table, in which each entry ages out over time. You can also clear the current NDP information manually to have neighbor information collected again.
l An NDP-enabled device also receives NDP packets, storing and maintaining an NDP table. When the information in a received packet differs from the existing entry, the entry is updated; otherwise, only the holdtime of the entry is refreshed.
II. Introduction to NTDP
NTDP is a protocol for network topology information collection in a specific network range, providing the information about the devices that can be added to clusters for a management device. The management device collects the topology information periodically or manually:
l The management device periodically sends NTDP topology collection requests out of its NTDP-enabled ports.
l Upon receiving a request, a device sends an NTDP topology collection response to the management device, copies the request, and forwards it out of its NTDP-enabled ports to its directly connected neighbors.
l Each directly connected device performs the same operation until the request has reached all the devices within the specified hop count.
When an NTDP topology collection request is advertised in the network, large numbers of network devices receive it and send NTDP topology collection responses at about the same time, which may cause network congestion and overburden the management device. To avoid this, the following methods are used to control the speed at which the request is advertised:
l Upon receiving an NTDP topology collection request, a device does not forward it immediately. Instead, it waits for a period of time (the device delay) and then forwards the request out of its first NTDP-enabled port.
l On the same device, each NTDP-enabled port other than the first waits for a further period of time (the port delay) after the preceding port has forwarded the request before forwarding it itself.
III. Introduction to handshake packets
Handshake packets are used to maintain status of the members in a cluster.
l The member devices and the management device in a cluster periodically send handshake packets to each other. On receiving a handshake packet, the management device or a member device simply keeps the peer in the active state without sending any acknowledgement.
l If the management device receives no handshake packet from a member device for three consecutive intervals, it changes the state of that member device from active to connect. Likewise, if a member device receives no handshake packet from the management device for three consecutive intervals, its state also changes from active to connect. You can set the interval for sending handshake packets with the timer command in cluster view on the management device; the default is 10 seconds.
l A member device in the connect state changes to the disconnect state if it receives no handshake or management packet within the holdtime; the management device then considers the member device disconnected. You can set the holdtime with the holdtime command in cluster view on the management device; the default is 60 seconds. Both the active state and the connect state indicate that a member device is still connected.
1.2 Configuring a Cluster
Table 1-2 HGMPv2 configuration tasks
Device | Configuration task | Description | Related section
---|---|---|---
Management device | Enabling NDP globally and for specific ports | Required | 1.3.1
Management device | Configuring NDP parameters | Optional | 1.3.2
Management device | Enabling NTDP globally and for specific ports | Required | 1.3.3
Management device | Configuring NTDP parameters | Optional | 1.3.4
Management device | Enabling the cluster function | Required | 1.3.5
Management device | Establishing a cluster | Required | 1.3.6
Management device | Configuring member management | Required | 1.3.7
Management device | Configuring topology management | Optional | 1.3.8
Management device | Configuring cluster parameters | Optional | 1.3.9
Management device | Configuring interaction for a cluster | Optional | 1.3.10
Member devices | Enabling NDP globally and for specific ports | Required | 1.4.1
Member devices | Enabling NTDP globally and for specific ports | Required | 1.4.2
Member devices | Enabling the cluster function | Required | 1.4.3
Member devices | Adding a candidate device to a cluster | Optional | 1.4.4
Caution:
Disabling the NDP and NTDP functions on the management device and member devices after a cluster is created does not dismiss the cluster, but it does affect the normal operation of the cluster.
1.3 Configuring the Management Device
1.3.1 Enabling NDP Globally and for Specific Ports
Follow these steps to enable NDP globally and for specific ports:
To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Enable NDP globally | ndp enable | Optional. Enabled by default.
Enable the NDP feature for the port(s), in system view | ndp enable interface interface-list | Use either approach. By default, NDP is enabled globally and on all ports.
Enable the NDP feature for the port(s), in Ethernet port view | interface interface-type interface-number, then ndp enable | Use either approach. By default, NDP is enabled globally and on all ports.
Caution:
For NDP to work normally, you must enable NDP both globally and on the specified port.
1.3.2 Configuring NDP Parameters
Follow these steps to configure NDP parameters:
To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Configure the holdtime of NDP information | ndp timer aging aging-time | Optional. By default, the NDP information holdtime is 180 seconds.
Configure the interval to send NDP packets | ndp timer hello hello-time | Optional. By default, the interval for sending NDP packets is 60 seconds.
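Following the style of the configuration example in section 1.6, a minimal sketch of tuning both NDP timers (the values are illustrative only, not recommendations):

```
<Sysname> system-view
# Set the holdtime of NDP information to 240 seconds.
[Sysname] ndp timer aging 240
# Set the interval for sending NDP packets to 80 seconds.
[Sysname] ndp timer hello 80
```

The holdtime should normally be longer than the hello interval; otherwise NDP entries may age out between two successive NDP packets.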
1.3.3 Enabling NTDP Globally and for Specific Ports
Follow these steps to enable NTDP globally and for specific ports:

To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Enable NTDP globally | ntdp enable | Optional. Enabled by default.
Enable NTDP for the port | interface interface-type interface-number, then ntdp enable | Optional. Enabled by default.
Caution:
l For NTDP to work normally, you must enable NTDP both globally and on the specified ports.
l The NTDP function is not compatible with the BPDU Tunnel function on a port, so do not enable both on the same port at the same time. For information about BPDU Tunnel, refer to QinQ-BPDU Tunnel Configuration of this manual.
1.3.4 Configuring NTDP Parameters
Follow these steps to configure NTDP parameters:
To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Configure the range within which topology information is to be collected | ntdp hop hop-value | Optional. By default, the hop range for topology collection is 3 hops.
Configure the interval to collect topology information | ntdp timer interval-time | Optional. By default, the interval to collect topology information is 1 minute.
Configure the device delay to forward topology collection request packets | ntdp timer hop-delay time | Optional. By default, the device delay is 200 ms.
Configure the port delay to forward topology collection request packets | ntdp timer port-delay time | Optional. By default, the port delay is 20 ms.
Exit system view | quit | —
Enable topology information collection | ntdp explore | Optional
Note:
The ntdp command is not compatible with the bpdu-tunnel enable command. So, do not configure them at the same time. For information about BPDU Tunnel, refer to QinQ-BPDU Tunnel Configuration of this manual.
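Putting the table above together, a minimal sketch of tuning the NTDP parameters and triggering a manual collection (the values are illustrative only):

```
<Sysname> system-view
# Collect topology information within 2 hops.
[Sysname] ntdp hop 2
# Collect topology information every 3 minutes.
[Sysname] ntdp timer 3
# Set the device delay to 150 ms and the port delay to 15 ms.
[Sysname] ntdp timer hop-delay 150
[Sysname] ntdp timer port-delay 15
[Sysname] quit
# Start a topology collection manually.
<Sysname> ntdp explore
```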
1.3.5 Enabling the Cluster Function
Follow these steps to enable the cluster function:
To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Enable the cluster function globally | cluster enable | Optional. By default, the cluster function is enabled.
1.3.6 Establishing a Cluster
Before establishing a cluster, you need to configure a private IP address pool for the devices to be added to the cluster. When a candidate device is added to a cluster, the management device assigns a private IP address to it for the candidate device to communicate with other devices in the cluster. This enables you to manage and maintain member devices in a cluster through the management device.
Caution:
When you configure a cluster, if the number of routing table entries on the management device or a candidate device reaches the maximum allowed, no more routing entries can be created, which makes handshake packets unreachable and causes devices to be added to and removed from the cluster repeatedly:
l If the number of routing table entries of the management device reaches the maximum number allowed when a cluster is created, all candidate devices will be added to and removed from the cluster repeatedly.
l If the number of routing table entries of a candidate device reaches the maximum number allowed when the candidate device is added to a cluster, it will be added to and removed from the cluster repeatedly.
I. Establishing a cluster manually
Follow these steps to establish a cluster manually:
To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Specify the management VLAN | management-vlan vlan-id | Optional. By default, VLAN 1 is the management VLAN.
Enter cluster view | cluster | —
Configure the private IP address range for member devices on the device to be configured as the management device | ip-pool administrator-ip-address { mask \| mask-length } | Required. For a cluster to work normally, the IP addresses of the VLAN interfaces of the management device and member devices must not be in the same network segment as the cluster address pool.
Configure the current device as the management device and assign a name to it | build name | Required. By default, the device is not a management device.
Caution:
l If the management VLAN is not VLAN 1 and the port connecting the management device to the member devices is a trunk or hybrid port, you must configure the port to permit tagged packets of the management VLAN for cluster management to work. If that port is an access port, you must change it to a hybrid or trunk port and likewise configure it to permit tagged packets of the management VLAN. For detailed configurations, refer to “VLAN configuration” in Access Volume.
l If the management VLAN is VLAN 1 and the port connecting the management device to the member devices is configured to permit tagged packets of the management VLAN, refer to the description above for the configuration on the management device side. If the port is configured to permit untagged packets of the management VLAN, perform one of the following configurations for cluster management to work: configure the port as an access port; configure it as a trunk port with VLAN 1 as the default VLAN; or configure it as a hybrid port with VLAN 1 as the default VLAN and permitting untagged packets of the management VLAN. For detailed configurations, refer to “VLAN configuration” in Access Volume.
l You must configure the IP address range before establishing a cluster, and only on the management device. Once a cluster has been established, the IP address range cannot be changed.
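A minimal sketch of establishing a cluster manually on the device to become the management device (the management VLAN ID, address pool, and cluster name below are hypothetical):

```
<Sysname> system-view
# Specify VLAN 10 as the management VLAN.
[Sysname] management-vlan 10
[Sysname] cluster
# Configure the private IP address pool for member devices.
[Sysname-cluster] ip-pool 172.16.0.1 255.255.255.248
# Configure this device as the management device and assign the cluster a name.
[Sysname-cluster] build Example
```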
II. Establishing a cluster automatically
In addition to building a cluster manually, you can also have the system establish a cluster automatically. With only a few commands (as shown in the table below) issued on the management device, the system builds the cluster for you.
During the process, you are first asked to enter a name for the cluster to be established; the system then lists all the candidate devices within the predefined hop count and starts adding them to the cluster automatically.
You can press <Ctrl+C> at any time during the process to exit cluster auto-building. This only stops new devices from being added; devices already added to the cluster are not removed.
Follow these steps to automatically build a cluster:
To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Specify the management VLAN | management-vlan vlan-id | Optional. By default, VLAN 1 is the management VLAN.
Enter cluster view | cluster | —
Configure the private IP address range for member devices on the device to be configured as the management device | ip-pool administrator-ip-address { mask \| mask-length } | Required. For a cluster to work normally, the IP addresses of the VLAN interfaces of the management device and member devices must not be in the same network segment as the cluster address pool.
Set up a cluster automatically | auto-build [ recover ] | Required
For a cluster to operate properly, the IP addresses of the VLAN interfaces of the management device and the member devices must not be in the same segment as the addresses in the cluster address pool.
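A minimal sketch of the auto-build procedure (the address pool is hypothetical; the system prompts for the cluster name and hop count interactively):

```
<Sysname> system-view
[Sysname] cluster
# Configure the private IP address pool for member devices.
[Sysname-cluster] ip-pool 172.16.0.1 255.255.255.248
# Let the system discover candidate devices and build the cluster automatically.
[Sysname-cluster] auto-build
```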
1.3.7 Configuring Member Management
Member management includes the following aspects:
l Manually adding a candidate device to a cluster or deleting a member device from a cluster. These operations must be performed on the management device; otherwise an error message is displayed.
l Remotely controlling member devices from the management device when communication between the management device and a member device is interrupted by configuration errors. For example, you can reboot a member device that operates improperly and have it delete its startup configuration file at reboot, thus restoring normal communication between the management device and the member device.
l Switching between the management device view and a member device view for configuration and management.
Follow these steps to configure member management:
To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Enter cluster view | cluster | —
Add a candidate device to the cluster | add-member [ member-number ] mac-address mac-address [ password password ] | Optional. Candidate devices are numbered sequentially.
Remove a member device from the cluster | delete-member member-number [ to-black-list ] | Optional
Reboot a specified member device | reboot member { member-number \| mac-address mac-address } [ eraseflash ] | Optional
Exit cluster view | quit | —
Exit system view | quit | —
Switch between the management device view and a member device view | cluster switch-to { member-number \| mac-address mac-address \| administrator \| sysname member-sysname } | Optional. Before executing this command, enable the Telnet server function on the remote device. Ring switching is not allowed.
Note:
Normally, member devices are numbered sequentially. The numbers assigned are tracked by the management device. When a member device that left the cluster is added again, the management device assigns it its original number if that number has not been assigned to another device.
You can use the cluster switch-to command to switch between the management device and a member device through Telnet. Note the following when performing this operation:
l Before performing this operation, configure the peer device as a Telnet server by using the telnet server command.
l Authentication is performed when you switch between the management device and a member device. The system authenticates the super password of user level 3. If the super password of the current device is not consistent with that of the device to be switched to, the switching fails. After a candidate device joins a cluster, its super password is automatically synchronized to that of the management device. After a cluster is created, it is recommended not to modify the super passwords of the cluster members or the management device, to avoid switching failures.
l After you switch from the management device to a member device, your user level on the member device is inherited from the management device.
l When you switch from a member device to the management device, your user level is determined by the predefined setting on the management device.
l Switching to a device fails if the number of Telnet users on that device has reached the upper limit.
l Avoid ring switching, which degrades performance. For example, if you have switched from the management device to a member device and want to return to the management device, execute the quit command instead of the cluster switch-to administrator command.
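Combining the steps above, a minimal sketch of common member management operations on the management device (the member numbers and MAC address are hypothetical):

```
<Sysname> system-view
[Sysname] cluster
# Add a candidate device to the cluster by its MAC address.
[Sysname-cluster] add-member mac-address 00e0-fc01-0011
# Reboot member 2 if it operates improperly.
[Sysname-cluster] reboot member 2
[Sysname-cluster] quit
[Sysname] quit
# Switch to the view of member 1; return later with quit.
<Sysname> cluster switch-to 1
```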
1.3.8 Configuring Topology Management
The concepts of whitelist and blacklist are used for topology management.
l A whitelist is a list of devices that the administrator has confirmed as belonging to the topology. You can get the information of a node and its neighbors from the current topology and, based on that information, manage and maintain the whitelist by adding, deleting, or modifying nodes.
l A blacklist is a list of devices that are not allowed to join a cluster unless the administrator manually removes them from the list. The blacklist contains the MAC addresses of the devices. If a blacklisted device connects to the network through another device that is not on the blacklist, the MAC address and access port of the latter are also recorded in the blacklist.
A whitelist member cannot be a blacklist member, and vice versa. However, a topology node can belong to neither list; such nodes are usually newly added nodes whose identities are yet to be confirmed by the administrator.
The whitelist and blacklist are not lost when a power failure occurs on the management device. They are stored on the FTP server or in the flash of the management device and can be restored when necessary. When the management device restarts or the cluster is rebuilt, the whitelist and blacklist are automatically restored from the flash.
Follow these steps to configure topology management:
To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Enter cluster view | cluster | —
Add a device to the blacklist | black-list add-mac mac-address | Optional
Remove a device from the blacklist | black-list delete-mac { all \| mac-address } | Optional
Confirm the current topology and save it as the standard topology | topology accept { all [ save-to { ftp-server \| local-flash } ] \| mac-address mac-address \| member-id member-number } | Optional
Save the standard topology to the FTP server or the local flash | topology save-to { ftp-server \| local-flash } | Optional
Restore the standard topology information from the FTP server or the local flash | topology restore-from { ftp-server \| local-flash } | Optional. You must ensure that the topology is correct before restoring it, as the device cannot verify its correctness.
Caution:
The topology save-to ftp-server command needs to be coupled with the FTP server software. However, some FTP server software (such as 3CDAEMON) cannot save topology information properly.
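A minimal sketch of topology management on the management device (the MAC address is hypothetical):

```
<Sysname> system-view
[Sysname] cluster
# Prevent a specific device from joining the cluster.
[Sysname-cluster] black-list add-mac 000f-e20f-0013
# Confirm the current topology as the standard topology and save it to the local flash.
[Sysname-cluster] topology accept all save-to local-flash
```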
1.3.9 Configuring Cluster Parameters
Cluster parameters mainly include the holdtime and the handshake interval:
l If the management device receives no information from a member device within the holdtime, it sets the state of that member device to Disconnect. When communication between the two resumes, the member device is re-added to the cluster automatically. If the downtime does not exceed the holdtime, the member device stays in the normal state and does not need to be re-added.
l Inside a cluster, communication between the member devices and the management device is maintained through handshake packet exchange. Handshake packets enable the management device to learn the states of the members and of the links.
Follow these steps to configure cluster parameters:
To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Enter cluster view | cluster | —
Configure the holdtime | holdtime seconds | Optional. 60 seconds by default.
Configure the interval to send handshake packets | timer interval-time | Optional. 10 seconds by default.
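A minimal sketch of tuning both parameters in cluster view (the values are illustrative only):

```
<Sysname> system-view
[Sysname] cluster
# Set the holdtime to 100 seconds.
[Sysname-cluster] holdtime 100
# Send handshake packets every 20 seconds.
[Sysname-cluster] timer 20
```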
1.3.10 Configuring Interaction for a Cluster
After establishing a cluster, you can configure an FTP server, a TFTP server, an NM host, and a logging host for the cluster on the management device. The members of the cluster access these servers through the management device.
All log information of the cluster members is sent to the configured logging host in the following way: the member devices first send their log information to the management device; the management device then translates the addresses in the log information and forwards it to the logging host. In the same way, the member devices send their trap packets to the NM host configured for the cluster.
Follow these steps to configure the interaction for a cluster:
To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Enter cluster view | cluster | —
Configure the FTP server by setting an IP address, username, and password | ftp-server ip-address [ user-name username password { simple \| cipher } password ] | Optional. By default, no FTP server is configured.
Configure the TFTP server for the member devices | tftp-server ip-address | Optional. By default, no TFTP server is configured.
Configure the logging host | logging-host ip-address | Optional. By default, no logging host is configured.
Configure the SNMP host | snmp-host ip-address [ community-string read string1 write string2 ] | Optional. By default, no SNMP host is configured.
Configure the NM interface of the management device | nm-interface vlan-interface vlan-id | Optional
Caution:
For the configured logging host to take effect, you must execute the info-center loghost command in system view first. For more information about the info-center loghost command, refer to “Information Center Commands” in System Volume.
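A minimal sketch of configuring the shared servers and hosts for a cluster, reusing the addresses from the configuration example in section 1.6 (they are illustrative only):

```
<Sysname> system-view
[Sysname] cluster
# The FTP server and TFTP server share the same address.
[Sysname-cluster] ftp-server 63.172.55.1
[Sysname-cluster] tftp-server 63.172.55.1
# The logging host and SNMP host share the same address.
[Sysname-cluster] logging-host 69.172.55.4
[Sysname-cluster] snmp-host 69.172.55.4
```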
1.4 Configuring the Member Devices
1.4.1 Enabling NDP Globally and for Specific Ports
Refer to 1.3.1 Enabling NDP Globally and for Specific Ports.
1.4.2 Enabling NTDP Globally and for Specific Ports
Refer to 1.3.3 Enabling NTDP Globally and for Specific Ports.
1.4.3 Enabling the Cluster Function
Refer to 1.3.5 Enabling the Cluster Function.
1.4.4 Adding a Candidate Device to a Cluster
Follow these steps to add a candidate device to a cluster:
To do... | Use the command... | Remarks
---|---|---
Enter system view | system-view | —
Enter cluster view | cluster | —
Add the candidate device to the cluster | administrator-address mac-address name name | Optional. By default, a device is not a member of any cluster.
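A minimal sketch, executed on the candidate device itself (the management device MAC address and cluster name are hypothetical):

```
<Sysname> system-view
[Sysname] cluster
# Join the cluster managed by the device with this MAC address.
[Sysname-cluster] administrator-address 00e0-fc01-0001 name Example
```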
1.5 Displaying and Maintaining Cluster Management
To do... | Use the command... | Remarks
---|---|---
Display NDP configuration information | display ndp [ interface interface-list ] | Available in any view
Display the global NTDP information | display ntdp | Available in any view
Display the device information collected through NTDP | display ntdp device-list [ verbose ] | Available in any view
Display cluster state and statistics | display cluster | Available in any view
Display the standard topology map | display cluster base-topology [ mac-address mac-address \| member-id member-number ] | Available in any view
Display the current blacklist of the cluster | display cluster black-list | Available in any view
Display the information of candidate devices | display cluster candidates [ mac-address mac-address \| verbose ] | Available in any view
Display the current topology map or the topology path between two devices | display cluster current-topology [ mac-address mac-address [ to-mac-address mac-address ] \| member-id member-number [ to-member-id member-number ] ] | Available in any view
Display members in a cluster | display cluster members [ member-number \| verbose ] | Available in any view
Clear NDP statistics | reset ndp statistics [ interface interface-list ] | Available in user view
1.6 Cluster Management Configuration Example
Three switches form a cluster, among which
l The management device is an S3610 switch.
l Other switches serve as member devices.
The detailed configuration is as follows:
l The two member devices are connected to ports Ethernet 1/0/2 and Ethernet 1/0/3 of the management device.
l The management device is connected to the external network through Ethernet 1/0/1.
l Ethernet 1/0/1 on the management device belongs to VLAN 2, whose VLAN interface has the IP address 163.172.55.1. The network management interface of the management device belongs to VLAN 2, and the network management device accesses the management device through VLAN 2.
l The entire cluster uses the same FTP server and TFTP server.
l The FTP server and TFTP server share the IP address 63.172.55.1.
l The network management workstation and the logging host share the IP address 69.172.55.4.
l The management device collects topology information every three minutes.
l The device with the MAC address 000f-e20f-0013 is added to the blacklist.
I. Networking diagram
Figure 1-3 Networking diagram for cluster management
II. Configuration procedure
1) Configure the member devices. (Perform the following configuration on each member device.)
# Enable NDP globally and on Ethernet 1/0/1.
<Sysname> system-view
[Sysname] ndp enable
[Sysname] interface Ethernet 1/0/1
[Sysname-Ethernet1/0/1] ndp enable
[Sysname-Ethernet1/0/1] quit
# Enable NTDP globally and on Ethernet 1/0/1.
[Sysname] ntdp enable
[Sysname] interface Ethernet 1/0/1
[Sysname-Ethernet1/0/1] ntdp enable
[Sysname-Ethernet1/0/1] quit
# Enable the cluster function.
[Sysname] cluster enable
2) Configure the management device.
# Enable NDP globally and on Ethernet 1/0/2 and Ethernet 1/0/3.
<Sysname> system-view
[Sysname] ndp enable
[Sysname] interface Ethernet 1/0/2
[Sysname-Ethernet1/0/2] ndp enable
[Sysname-Ethernet1/0/2] quit
[Sysname] interface Ethernet 1/0/3
[Sysname-Ethernet1/0/3] ndp enable
[Sysname-Ethernet1/0/3] quit
# Configure the holdtime of the NDP information as 200 seconds.
[Sysname] ndp timer aging 200
# Configure the interval for sending NDP packets as 70 seconds.
[Sysname] ndp timer hello 70
# Enable NTDP globally and on Ethernet 1/0/2 and Ethernet 1/0/3.
[Sysname] ntdp enable
[Sysname] interface Ethernet 1/0/2
[Sysname-Ethernet1/0/2] ntdp enable
[Sysname-Ethernet1/0/2] quit
[Sysname] interface Ethernet 1/0/3
[Sysname-Ethernet1/0/3] ntdp enable
[Sysname-Ethernet1/0/3] quit
# Configure the topology collection range as two hops.
[Sysname] ntdp hop 2
# Configure the delay for devices to forward topology-collection requests as 150 ms.
[Sysname] ntdp timer hop-delay 150
# Configure the delay for ports to forward topology-collection requests as 15 ms.
[Sysname] ntdp timer port-delay 15
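These two delays stagger topology-collection traffic so that devices at the same hop do not all reply at once. As a rough illustration only (this is a simplified model, not the manual's specification): assume a device waits the hop delay before forwarding the request on its first port, then the port delay before each subsequent port.

```python
# Illustrative estimate of per-device NTDP request forwarding delay.
# Assumption (simplified model, not taken from the manual): a device waits
# the hop delay before forwarding on its first port, then the port delay
# before each additional port.
HOP_DELAY_MS = 150    # ntdp timer hop-delay 150
PORT_DELAY_MS = 15    # ntdp timer port-delay 15

def device_forwarding_delay_ms(ports: int) -> int:
    """Delay until the request has been forwarded on all `ports` downstream ports."""
    if ports <= 0:
        return 0
    return HOP_DELAY_MS + (ports - 1) * PORT_DELAY_MS

# Under this model, a device with 4 downstream ports finishes forwarding
# after 150 + 3 * 15 = 195 ms.
print(device_forwarding_delay_ms(4))  # → 195
```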
# Configure the interval for collecting topology information as three minutes.
[Sysname] ntdp timer 3
# Enable the cluster function.
[Sysname] cluster enable
# Enter cluster view.
[Sysname] cluster
# Configure the IP address pool used by the cluster, which contains six addresses, starting from 172.16.0.1.
[Sysname-cluster] ip-pool 172.16.0.1 255.255.255.248
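The mask 255.255.255.248 defines a /29 block of eight addresses, six of which are usable host addresses, which is why the pool starting at 172.16.0.1 contains six addresses. This can be verified with Python's standard ipaddress module:

```python
import ipaddress

# 172.16.0.1 with mask 255.255.255.248 is a /29 subnet (8 addresses total).
pool = ipaddress.ip_network("172.16.0.1/29", strict=False)

# hosts() excludes the network and broadcast addresses, leaving 6 usable ones.
usable = list(pool.hosts())
print(len(usable))            # → 6
print(usable[0], usable[-1])  # 172.16.0.1 through 172.16.0.6
```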
# Configure a cluster name and establish the cluster.
[Sysname-cluster] build aabbcc
Restore topology from local flash file,for there is no base topology.
(Please confirm in 30 seconds, default No). (Y/N)
n
#Apr 29 23:33:35:991 2000 H3C CLST/4/Cluster_Trap:
OID:1.3.6.1.4.1.2011.6.7.1.0.3: member 00.00.00.00.00.0f.e2.00.22.00 role change, NTDPIndex:0.00.00.00.00.00.0f.e2.00.22.00, Role:1
[aabbcc_0.Sysname-cluster]
%Apr 29 23:34:12:686 2000 H3C CLST/4/LOG:
Member 00e0-fc01-0011 is joined in cluster aabbcc.
%Apr 29 23:34:12:691 2000 H3C CLST/4/LOG:
Member 00e0-fc01-0012 is joined in cluster aabbcc.
# Configure the holdtime as 100 seconds.
[aabbcc_0.Sysname-cluster] holdtime 100
# Configure the interval for sending handshake packets as 10 seconds.
[aabbcc_0.Sysname-cluster] timer 10
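With these values, the 100-second holdtime spans ten 10-second handshake intervals, so roughly ten consecutive handshakes must go unanswered before the holdtime expires. A quick arithmetic sanity check (a sketch only; the exact failure-detection rule is device-defined):

```python
# Sanity check: how many handshake intervals fit within the holdtime.
# (Arithmetic sketch only; the precise disconnect rule is defined by the device.)
HOLDTIME_S = 100           # holdtime 100
HANDSHAKE_INTERVAL_S = 10  # timer 10

intervals_in_holdtime = HOLDTIME_S // HANDSHAKE_INTERVAL_S
print(intervals_in_holdtime)  # → 10

# The holdtime should comfortably exceed the handshake interval, or members
# would be declared down after a single lost packet.
assert HOLDTIME_S > HANDSHAKE_INTERVAL_S
```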
# Configure the FTP server, TFTP server, logging host, and SNMP workstation for the cluster.
[aabbcc_0.Sysname-cluster] ftp-server 63.172.55.1
[aabbcc_0.Sysname-cluster] tftp-server 63.172.55.1
[aabbcc_0.Sysname-cluster] logging-host 69.172.55.4
[aabbcc_0.Sysname-cluster] snmp-host 69.172.55.4
# Add the device with the MAC address 000f-e20f-0013 to the blacklist.
[aabbcc_0.Sysname-cluster] black-list add-mac 000f-e20f-0013
# Configure a network management interface.
[aabbcc_0.Sysname] vlan 2
[aabbcc_0.Sysname-vlan2] port Ethernet 1/0/1
[aabbcc_0.Sysname-vlan2] quit
[aabbcc_0.Sysname] interface Vlan-interface 2
[aabbcc_0.Sysname-Vlan-interface2] ip address 163.172.55.1 24
[aabbcc_0.Sysname-Vlan-interface2] quit
[aabbcc_0.Sysname] cluster
[aabbcc_0.Sysname-cluster] nm-interface Vlan-interface 2
Note:
- After the above configuration, you can use the cluster switch-to { member-number | mac-address mac-address } command on the management device to switch to a member device and manage it. Use the quit command to switch back to the management device.
- On a member device, you can use the cluster switch-to administrator command to switch to the management device, and the quit command to switch back to the member device. Before switching, make sure the Telnet server is enabled on the peer device.
- On the management device, you can use the reboot member { member-number | mac-address mac-address } [ eraseflash ] command to restart a member device. For details, see the related sections of this chapter.
- After the above configuration, you can check the logs and SNMP trap messages of all member devices on the SNMP workstation.