- H3C S5500-SI Series Ethernet Switches Operation Manual(V1.01)
Table of Contents
Chapter 1 Cluster Management Configuration
1.1 Cluster Management Overview
1.1.1 Cluster Management Definition
1.1.2 Roles in a Cluster
1.1.3 How a Cluster Works
1.2 Cluster Configuration Task List
1.3 Configuring the Management Device
1.3.1 Enabling NDP Globally and for Specific Ports
1.3.2 Configuring NDP Parameters
1.3.3 Enabling NTDP Globally and for Specific Ports
1.3.4 Configuring NTDP Parameters
1.3.5 Manually Collecting NTDP Information
1.3.6 Enabling the Cluster Function
1.3.7 Establishing a Cluster
1.3.8 Configuring Communication Between the Management Device and the Member Devices Within a Cluster
1.3.9 Configuring Cluster Member Management
1.4 Configuring the Member Devices
1.4.1 Enabling NDP Globally and for Specific Ports
1.4.2 Enabling NTDP Globally and for Specific Ports
1.4.3 Manually Collecting NTDP Information
1.4.4 Enabling the Cluster Function
1.4.5 Deleting a Member Device from a Cluster
1.5 Configuring Access Between the Management Device and Its Member Devices
1.6 Adding a Candidate Device to a Cluster
1.7 Configuring Advanced Cluster Functions
1.7.1 Configuring Topology Management
1.7.2 Configuring Interaction for a Cluster
1.8 Displaying and Maintaining Cluster Management
1.9 Cluster Management Configuration Examples
1.9.1 Cluster Management Configuration Example One
Chapter 1 Cluster Management Configuration
When configuring cluster management, go to these sections for information you are interested in:
l Cluster Configuration Task List
l Configuring the Management Device
l Configuring the Member Devices
l Configuring Access Between the Management Device and Its Member Devices
l Adding a Candidate Device to a Cluster
l Configuring Advanced Cluster Functions
l Displaying and Maintaining Cluster Management
l Cluster Management Configuration Examples
1.1 Cluster Management Overview
1.1.1 Cluster Management Definition
A cluster is an aggregation of a group of communication devices. Cluster management implements centralized management of large numbers of distributed network devices.
Cluster management is implemented through Huawei Group Management Protocol version 2 (HGMPv2). With HGMPv2, a network administrator can manage multiple devices through the public IP address of a single device in a cluster. The device that is configured with the public IP address and performs the management function is known as the management device; the other managed devices are called member devices. Together they form a cluster. Figure 1-1 illustrates a typical cluster implementation.
Figure 1-1 Network diagram for a cluster
Cluster management offers the following advantages:
l Saving public IP address resources
l Simplifying configuration and management tasks. By configuring a public IP address on the management device, you can configure and manage a group of member devices on the management device without the trouble of logging onto each device separately.
l Providing topology discovery and display function, which is useful for network monitoring and debugging
l Allowing simultaneous software upgrading and parameter configuring on multiple devices, free of topology and distance limitations
1.1.2 Roles in a Cluster
The devices in a cluster play different roles according to their different functions and status. You can specify the role a device plays. The following three roles exist in a cluster: management device, member device, and candidate device.
l Management device: The device that provides the management interface for all devices in the cluster and the only device configured with a public IP address. Configuration, management, and monitoring of the member devices in a cluster can be implemented only through the management device. When a device is specified as the management device, it collects Neighbor Discovery Protocol (NDP) and Neighbor Topology Discovery Protocol (NTDP) information to discover and determine candidate devices.
l Member device: The device being managed by the management device in a cluster.
l Candidate device: A device that does not belong to any cluster but can be added to a cluster. Different from a member device, its topology information has been collected by the management device but it has not been added to the cluster.
Figure 1-2 Role change in a cluster
A device in a cluster changes its role according to the following rules:
l A candidate device becomes a management device when you create a cluster on it. Note that a cluster must have one (and only one) management device. On becoming a management device, the device collects network topology information and tries to discover and determine candidate devices, which can then be added to the cluster through configuration.
l A candidate device becomes a member device after being added to a cluster.
l A member device becomes a candidate device after it is removed from the cluster.
l A management device becomes a candidate device only after the cluster is removed.
1.1.3 How a Cluster Works
HGMPv2 consists of the following three protocols:
l Neighbor Discovery Protocol (NDP)
l Neighbor Topology Discovery Protocol (NTDP)
l Cluster
A cluster configures and manages the devices in it through the above three protocols.
Cluster management involves topology information collection and the establishment and maintenance of a cluster. Topology information collection and cluster maintenance are independent from each other, with the former starting before the cluster is created:
l All devices use NDP to collect the information of the directly connected neighbors, including their software version, host name, MAC address and port number.
l The management device uses NTDP to collect information about the devices within a user-specified hop count as well as the topology information of all devices, and then determines the candidate devices of the cluster.
l The management device adds or deletes a member device and modifies cluster management configuration according to the candidate device information collected through NTDP.
I. Introduction to NDP
NDP is used to discover the information about directly connected neighbors, including the device name, software version, and connecting port of the adjacent devices. NDP works in the following ways:
l A device running NDP periodically sends NDP packets to its neighbors. An NDP packet carries NDP information (including the device name, software version, and connecting port, etc.) and the holdtime, which indicates how long the receiving devices will keep the NDP information. At the same time, the device also receives but does not forward the NDP packets from its neighbors.
l A device running NDP stores and maintains an NDP table. The device creates an entry in the NDP table for each neighbor. If a new neighbor is found, meaning the device receives an NDP packet sent by the neighbor for the first time, the device adds an entry in the NDP table. When another NDP packet is received, if the NDP information carried in the NDP packet is different from the stored information, the corresponding entry in the NDP table is updated; otherwise, only the holdtime of the entry is updated. If no NDP information from the neighbor is received within the holdtime, the corresponding entry is removed from the NDP table.
NDP runs on the data link layer, and therefore supports different network layer protocols.
II. Introduction to NTDP
NTDP is a protocol used to collect network topology information. NTDP provides information required for cluster management: it collects topology information about the devices within the specified hop count, to identify candidate devices for a cluster.
Based on the neighbor information stored in the neighbor table maintained by NDP, NTDP on the management device advertises NTDP topology collection requests to collect the NDP information of each device in a specific network range as well as the connection information of all its neighbors. The information collected will be used by the management device or the network management software to implement required functions.
When a member device detects a change in its neighbors through its NDP table, it informs the management device through handshake packets. The management device then triggers NTDP to perform a specific topology collection, so that topology changes are discovered in a timely manner.
The management device collects topology information periodically. You can also administratively launch a topology information collection with commands. The process of topology information collection is as follows:
l The management device periodically sends an NTDP topology collection request out of its NTDP-enabled ports.
l Upon receiving the request, each device sends an NTDP topology collection response to the management device and forwards a copy of the request through its NTDP-enabled ports to its adjacent devices. The topology collection response includes the basic information of the NDP-enabled device and the NDP information of all its adjacent devices.
l Each adjacent device performs the same operation until the topology collection request has reached all devices within the specified hop count.
When an NTDP topology collection request is advertised in the network, large numbers of devices receive it and send their responses at almost the same time, which may cause network congestion and overload the management device. To avoid this, the following methods are used to control the speed at which the request is advertised:
l Upon receiving an NTDP topology collection request, a device does not forward it immediately; instead, it waits for a period of time and then forwards the request on its first NTDP-enabled port.
l On the same device, each NTDP-enabled port other than the first waits for an additional delay after the previous port has forwarded the request before forwarding it itself.
III. Cluster management maintenance
1) Adding a candidate device to a cluster
You should specify the management device before creating a cluster. The management device discovers and determines candidate devices through NDP and NTDP. A candidate device can be added to the cluster automatically or manually.
After the candidate device is added to the cluster, it can obtain the member number assigned by the management device and the private IP address used for cluster management.
2) Communication within a cluster
In a cluster the management device communicates with its member devices by sending handshake packets to maintain connection between them. The management/member device state change is shown in Figure 1-3.
Figure 1-3 Management/member device state change
l After a cluster is created and a candidate device is added to the cluster as a member device, the management device saves the state information of the member device and marks it as Active. The member device likewise saves its own state information and marks itself as Active.
l After a cluster is created, its member devices begin to send handshake packets, and the management device sends handshake packets to the member devices at the same interval. Upon receiving a handshake packet from the other side, the management device or member device keeps (or changes) its state as Active, without sending a response.
l If the management device does not receive a handshake packet from a member device within three times the handshake interval, it changes the state of that member device from Active to Connect. Likewise, if a member device receives no handshake packet from the management device within three times the handshake interval, its state also changes from Active to Connect.
l If the management device receives a handshake or management packet from a member device in Connect state within the information holdtime, it changes the state of that member device back to Active; otherwise, it changes the state to Disconnect, in which case it considers the member device disconnected. Similarly, if a member device in Connect state receives a handshake or management packet from the management device within the information holdtime, it changes its state to Active; otherwise, it changes its state to Disconnect.
l When the communication between the management device and a member device in Disconnect state recovers, the member device rejoins the cluster. After that, the state of the member device, both locally and on the management device, changes to Active.
IV. Management VLAN
The management VLAN limits the cluster management range. Through configuration of the management VLAN, the following functions can be implemented:
l Management packets (including NDP, NTDP and handshake packets) are restricted within the management VLAN, therefore isolated from other packets, which enhances security.
l The management device and the member devices communicate with each other through the management VLAN.
For a cluster to work normally, you must configure the subtending ports and the ports connecting the management device to the member/candidate devices to permit packets from the management VLAN. (If a candidate device is connected to the management device through another candidate device, the ports connecting these two candidate devices are called subtending ports.) Therefore:
l If packets from the management VLAN cannot pass a port, the device connected to that port cannot be added to the cluster. If the ports (including the subtending ports) connecting the management device and the member/candidate devices deny packets from the management VLAN, you can use the management VLAN auto-negotiation function to configure the ports on the candidate devices to permit these packets.
l Untagged packets from the management VLAN can pass a port only when the default VLAN ID of the subtending ports and of the ports connecting the management device to the member/candidate devices is that of the management VLAN; otherwise, only tagged packets from the management VLAN can pass these ports.
Refer to VLAN Configuration for the introduction to VLAN.
1.2 Cluster Configuration Task List
Before configuring a cluster, you need to determine the roles and functions the devices play. You also need to configure the related functions, preparing for the communication between devices within the cluster.
Complete these tasks to configure a cluster:
Task | Remarks |
---|---|
Configuring the Management Device: | |
Enabling NDP Globally and for Specific Ports | Optional |
Configuring NDP Parameters | Optional |
Enabling NTDP Globally and for Specific Ports | Optional |
Configuring NTDP Parameters | Optional |
Manually Collecting NTDP Information | Optional |
Enabling the Cluster Function | Optional |
Establishing a Cluster | Required |
Configuring Communication Between the Management Device and the Member Devices Within a Cluster | Optional |
Configuring Cluster Member Management | Optional |
Configuring the Member Devices: | |
Enabling NDP Globally and for Specific Ports | Optional |
Enabling NTDP Globally and for Specific Ports | Optional |
Manually Collecting NTDP Information | Optional |
Enabling the Cluster Function | Optional |
Deleting a Member Device from a Cluster | Optional |
Configuring Access Between the Management Device and Its Member Devices | Optional |
Adding a Candidate Device to a Cluster | Optional |
Configuring Topology Management | Optional |
Configuring Interaction for a Cluster | Optional |
Caution:
Disabling the NDP and NTDP functions on the management device and member devices after a cluster is created will not cause the cluster to be dismissed, but will influence the normal operation of the cluster.
1.3 Configuring the Management Device
1.3.1 Enabling NDP Globally and for Specific Ports
Follow these steps to enable NDP globally and for specific ports:
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Enable NDP globally | ndp enable | Optional. Enabled by default. |
Enable the NDP feature for the port(s) (in system view) | ndp enable interface interface-list | Use either command. By default, NDP is enabled globally and on all ports. |
Enable the NDP feature for the port (in Ethernet port view) | interface interface-type interface-number, then ndp enable | Use either command. |
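For example, the following sketch enables NDP globally and then on a single port (the device name and interface number are illustrative values, not taken from this manual):

```
<Sysname> system-view
[Sysname] ndp enable
[Sysname] ndp enable interface GigabitEthernet 1/0/1
```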
Caution:
l For NDP to work normally, you must enable NDP both globally and on the specified port.
l If the subtending port or the port connecting the management device to a member/candidate device is a port of a member in an aggregation group, you must enable NDP on all member ports of the aggregation group at the same time. Otherwise, NDP will work abnormally.
l You are recommended to disable NDP on ports that connect to devices that do not need to join the cluster. This prevents the management device from adding such devices to the cluster and from collecting their topology information.
1.3.2 Configuring NDP Parameters
Follow these steps to configure NDP parameters:
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Configure the period for the receiving devices to keep the NDP packets | ndp timer aging aging-time | Optional. 180 seconds by default. |
Configure the interval to send NDP packets | ndp timer hello hello-time | Optional. 60 seconds by default. |
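As an illustration, the following sketch sets the NDP aging time to 240 seconds and the hello interval to 80 seconds, keeping the aging time longer than the hello interval (the values are examples only):

```
[Sysname] ndp timer aging 240
[Sysname] ndp timer hello 80
```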
Caution:
The time for the receiving device to hold NDP packets cannot be shorter than the interval for sending NDP packets; otherwise, the NDP table may become unstable.
1.3.3 Enabling NTDP Globally and for Specific Ports
Follow these steps to enable NTDP globally and for specific ports:
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Enable NTDP globally | ntdp enable | Optional. Enabled by default. |
Enable NTDP for the port | interface interface-type interface-number, then ntdp enable | Optional. NTDP is enabled on all ports by default. |
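A minimal sketch of enabling NTDP globally and on one port follows (the interface number is an example):

```
[Sysname] ntdp enable
[Sysname] interface GigabitEthernet 1/0/1
[Sysname-GigabitEthernet1/0/1] ntdp enable
```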
Caution:
l For NTDP to work normally, you must enable NTDP both globally and on the specified port.
l The NTDP function is mutually exclusive with the BPDU TUNNEL function on a port; you cannot enable both at the same time. For a detailed description of the BPDU TUNNEL function, refer to BPDU TUNNEL Configuration.
l If the subtending port or the port connecting the management device to a member/candidate device is a member port of an aggregation group, you must enable NTDP on all member ports of the aggregation group at the same time. Otherwise, NTDP will work abnormally.
l You are recommended to disable NTDP on ports that connect to devices that do not need to join the cluster. This prevents the management device from adding such devices to the cluster and from collecting their topology information.
1.3.4 Configuring NTDP Parameters
Follow these steps to configure NTDP parameters:
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Configure the range within which topology information is to be collected | ntdp hop hop-value | Optional. 3 hops by default. |
Configure the interval to collect topology information | ntdp timer interval-time | Optional. 1 minute by default. |
Configure the delay to forward topology collection request packets on the first port | ntdp timer hop-delay time | Optional. 200 ms by default. |
Configure the delay for other ports to forward topology collection request packets | ntdp timer port-delay time | Optional. 20 ms by default. |
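For example, the following sketch widens the collection range to 4 hops and adjusts the timers (values are illustrative; the collection interval is assumed to be in minutes, matching the 1-minute default above):

```
[Sysname] ntdp hop 4
[Sysname] ntdp timer 3
[Sysname] ntdp timer hop-delay 300
[Sysname] ntdp timer port-delay 30
```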
1.3.5 Manually Collecting NTDP Information
The management device collects topology information periodically after a cluster is created. In addition, you can manually initiate NTDP information collection at any time, regardless of whether a cluster has been created, so as to manage and monitor devices in real time.
Follow these steps to manually collect NTDP information:
To do… | Use the command… | Remarks |
---|---|---|
Manually collect NTDP information | ntdp explore | Required |
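A one-line sketch of triggering a manual collection (the table above lists no view-entering step, so the command is assumed to be issued from user view; the prompt is illustrative):

```
<Sysname> ntdp explore
```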
1.3.6 Enabling the Cluster Function
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Enable the cluster function globally | cluster enable | Optional. Enabled by default. |
1.3.7 Establishing a Cluster
Before establishing a cluster, you need to configure a private IP address pool for the devices to be added to the cluster. When a candidate device is added to a cluster, the management device assigns a private IP address to it for the candidate device to communicate with other devices in the cluster. This enables you to manage and maintain member devices in a cluster through the management device.
Caution:
l If the routing table of the management device is full when a cluster is created, that is, entries with the destination address as a candidate device cannot be added to the routing table, all candidate devices will be added to and removed from the cluster repeatedly.
l If the routing table of a candidate device is full when the candidate device is added to the cluster, that is, entries with the destination address as the management device cannot be added to the routing table, the candidate device will be added to and removed from the cluster repeatedly.
Caution:
l You can only specify a management VLAN before establishing a cluster. After a device has been added to the cluster, you cannot modify the management VLAN. To change the management VLAN after the cluster is established, you should remove the cluster on the management device, re-specify the management VLAN and reestablish a cluster.
l For the purpose of security, you are not recommended to configure the VLAN ID of the management VLAN as the default VLAN ID of the port connecting the management device to its member devices.
l Only when the default VLAN ID of all subtending ports and the port connecting the management device to its member device is that of the management VLAN, can the packets without a tag from the management VLAN pass the ports. Otherwise, you must configure the packets from the management VLAN to pass these ports. For the configuration procedure, refer to VLAN Configuration in the Access Volume.
l You must configure the IP address pool before establishing a cluster and configure it on the management device only. If a cluster has already been established, you are not allowed to change the IP address pool.
I. Manually establishing a cluster
Follow these steps to manually establish a cluster:
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Specify the management VLAN | management-vlan vlan-id | Optional. By default, VLAN 1 is the management VLAN. |
Enter cluster view | cluster | — |
Configure the private IP address range for member devices on the device to be configured as the management device | ip-pool administrator-ip-address { mask \| mask-length } | Required. For a cluster to work normally, the IP addresses of the VLAN interfaces of the management device and member devices must not be in the same network segment as the cluster address pool. |
Configure the current device as the management device and assign a name to it | build name | Required. By default, the device is not the management device. |
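Putting the steps above together, a minimal sketch of manually establishing a cluster might look as follows (the VLAN ID, address pool, and cluster name "aaa" are example values; the prompts are simplified):

```
<Sysname> system-view
[Sysname] management-vlan 10
[Sysname] cluster
[Sysname-cluster] ip-pool 172.16.0.1 255.255.255.248
[Sysname-cluster] build aaa
```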
II. Automatically establishing a cluster
In addition to establishing a cluster manually, you can have the system establish a cluster automatically. With only a few commands (as shown in the table below) executed on the management device, the system automatically builds the cluster.
During the process, you are first asked to enter a name for the cluster to be established; the system then lists all the candidate devices within the predefined hop count and starts adding them to the cluster automatically.
You can press <Ctrl+C> at any time during the process to stop cluster auto-building. This only stops the adding of new devices; devices already added to the cluster are not removed.
Follow these steps to automatically establish a cluster:
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Specify the management VLAN | management-vlan vlan-id | Optional. By default, VLAN 1 is the management VLAN. |
Enter cluster view | cluster | — |
Configure the private IP address range for member devices on the device to be configured as the management device | ip-pool administrator-ip-address { mask \| mask-length } | Required. For a cluster to work normally, the IP addresses of the VLAN interfaces of the management device and member devices must not be in the same network segment as the cluster address pool. |
Establish a cluster automatically | auto-build [ recover ] | Required |
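A sketch of the automatic variant, assuming the address pool shown is only an example:

```
[Sysname] cluster
[Sysname-cluster] ip-pool 172.16.0.1 255.255.255.248
[Sysname-cluster] auto-build
```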
1.3.8 Configuring Communication Between the Management Device and the Member Devices Within a Cluster
In a cluster, the management device and the member devices communicate by sending handshake packets to maintain their connection. On the management device, you can configure the interval for sending handshake packets and the holdtime of a device. This configuration applies to all member devices within the cluster.
Follow these steps to configure communication between the management device and the member devices within a cluster:
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Configure the interval to send handshake packets | timer interval-time | Optional. 10 seconds by default. |
Configure the holdtime of a device | holdtime seconds | Optional. 60 seconds by default. |
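For example, the following sketch sets a 30-second handshake interval and a 90-second holdtime (values are illustrative; on an established cluster the prompt may also carry the cluster name, which is omitted here for simplicity):

```
[Sysname-cluster] timer 30
[Sysname-cluster] holdtime 90
```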
1.3.9 Configuring Cluster Member Management
I. Adding/Removing a member device
You can manually add a candidate device to a cluster, or remove a member device from a cluster. These operations must be done through the management device, otherwise you will be prompted with an error message.
Follow these steps to add/remove a member device:
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Enter cluster view | cluster | — |
Add a candidate device to the cluster | add-member [ member-number ] mac-address mac-address [ password password ] | Optional |
Remove a member device from the cluster | delete-member member-number [ to-black-list ] | Required |
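A sketch of adding and removing members on the management device (the member number, MAC address, and password are hypothetical example values):

```
[Sysname-cluster] add-member 6 mac-address 000f-e200-0011 password abc
[Sysname-cluster] delete-member 2
```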
II. Rebooting a member device
Communication between the management device and a member device may be interrupted due to configuration errors. With the remote control function, you can operate member devices remotely from the management device. For example, you can reboot a member device that is operating improperly, optionally deleting its booting configuration file at reboot, to restore normal communication between the management device and the member device.
Follow these steps to reboot a member device:
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Enter cluster view | cluster | — |
Reboot a specified member device | reboot member { member-number \| mac-address mac-address } [ eraseflash ] | Required |
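For example, to reboot member 2 and delete its booting configuration file at reboot (the member number is an example):

```
[Sysname-cluster] reboot member 2 eraseflash
```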
1.4 Configuring the Member Devices
1.4.1 Enabling NDP Globally and for Specific Ports
Refer to Enabling NDP Globally and for Specific Ports.
1.4.2 Enabling NTDP Globally and for Specific Ports
Refer to Enabling NTDP Globally and for Specific Ports.
1.4.3 Manually Collecting NTDP Information
Refer to Manually Collecting NTDP Information.
1.4.4 Enabling the Cluster Function
Refer to Enabling the Cluster Function.
1.4.5 Deleting a Member Device from a Cluster
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Enter cluster view | cluster | — |
Delete a member device from the cluster | undo administrator-address | Required |
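A sketch of removing the local device from its cluster, executed on the member device itself:

```
[Sysname] cluster
[Sysname-cluster] undo administrator-address
```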
1.5 Configuring Access Between the Management Device and Its Member Devices
After NDP, NTDP, and the cluster function are configured successfully, you can configure, manage, and monitor the member devices through the management device. You can manage a member device by switching from the operation interface of the management device to that of the member device, or configure the management device by switching from the operation interface of a member device to that of the management device.
Follow these steps to configure access between member devices of a cluster:
To do… | Use the command… | Remarks |
---|---|---|
Switch from the operation interface of the management device to that of a member device | cluster switch-to { member-number \| mac-address mac-address \| sysname member-sysname } | Required |
Switch from the operation interface of a member device to that of the management device | cluster switch-to administrator | Required |
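For example, the following sketch switches from the management device to member 2 and then returns with quit, as recommended in the caution below (the member number and prompts are illustrative):

```
<Sysname> cluster switch-to 2
<Member> quit
```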
Caution:
Telnet is used for switching between the management device and a member device. Note the following when switching between them:
l Before switching, execute the telnet server enable command on the target device to enable the Telnet service. Otherwise, the switch fails.
l Authentication is required when you switch from a member device to the management device. The switch fails if the authentication is not passed. If the authentication is passed, your user level is assigned according to the level predefined on the management device.
l When a candidate device is added to a cluster and becomes a member device, its super password is automatically synchronized to the management device. Therefore, after a cluster is established, you are not recommended to modify the super password of any device in the cluster (including the management device and member devices); otherwise, the switch may fail because of authentication failure.
l When you switch from the management device to a member device, the system displays an error message if the specified member does not exist. If the switch succeeds, your user level on the management device is retained.
l If the number of Telnet users on the device to be logged in to has reached the maximum, the switch fails.
l To prevent resource waste, avoid cyclic switching between cluster members. For example, if you have switched from the operation interface of the management device to that of a member device and need to return, use the quit command to end the switch rather than the cluster switch-to administrator command.
1.6 Adding a Candidate Device to a Cluster
Follow these steps to add a candidate device to a cluster:
To do… | Use the command… | Remarks |
---|---|---|
Enter system view | system-view | — |
Enter cluster view | cluster | — |
Add a candidate device to the cluster | administrator-address mac-address name name | Required |
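A sketch of the command follows. Based on its parameters, the MAC address is assumed to identify the management device and the name to be the cluster name; both values here are hypothetical examples:

```
[Sysname-cluster] administrator-address 00e0-fc00-0001 name aaa
```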
1.7 Configuring Advanced Cluster Functions
This section covers these topics:
l Configuring Topology Management
l Configuring Interaction for a Cluster
1.7.1 Configuring Topology Management
The concepts of blacklist and whitelist are used for topology management. An administrator can diagnose the network by comparing the current topology and the standard topology.
l Current topology: The information of a node and its neighbors of the cluster.
l Topology management whitelist (standard topology): A whitelist is a list of topology information that has been confirmed by the administrator as correct. You can get the information of a node and its neighbors from the current topology. Based on the information, you can manage and maintain the whitelist by adding, deleting or modifying a node.
l Topology management blacklist: A blacklist is a list of devices that are not allowed to join a cluster unless the administrator manually removes them from the list. A blacklist contains the MAC addresses of devices. If a blacklisted device is connected to the network through another device that is not in the blacklist, the MAC address and access port of the latter are also included in the blacklist.
A whitelist member cannot be a blacklist member, and vice versa. However, a topology node can belong to neither the whitelist nor the blacklist. Nodes of this type are usually newly added nodes, whose identities are to be confirmed by the administrator.
You can back up the whitelist and blacklist to prevent them from being lost when a power failure occurs on the management device. The following two backup and restore mechanisms are available:
l Backing them up on the FTP server shared by the cluster. You can manually restore the whitelist and blacklist from the FTP server.
l Backing them up in the Flash of the management device. When the management device restarts, the whitelist and blacklist will be automatically restored from the Flash. When a cluster is reestablished, you can choose whether to restore the whitelist and blacklist from the Flash automatically, or you can manually restore them from the Flash of the management device.
Follow these steps to configure cluster topology management:
| To do… | Use the command… | Remarks |
| --- | --- | --- |
| Enter system view | system-view | — |
| Enter cluster view | cluster | — |
| Add a device to the blacklist | black-list add-mac mac-address | Optional |
| Remove a device from the blacklist | black-list delete-mac { all \| mac-address } | Optional |
| Confirm the current topology and save it as the standard topology | topology accept { all [ save-to { ftp-server \| local-flash } ] \| mac-address mac-address \| member-id member-number } | Optional |
| Save the standard topology to the FTP server or the local Flash | topology save-to { ftp-server \| local-flash } | Optional |
| Restore the standard topology information from the FTP server or the local Flash | topology restore-from { ftp-server \| local-flash } | Optional. You must ensure that the topology is correct before restoring it, as the device itself cannot judge the correctness of the topology. |
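As a short sketch of the topology management commands above (the MAC address 00e0-fc01-0013 is taken from the configuration example later in this chapter; saving to the local Flash rather than the FTP server is an illustrative choice):

```
<Switch> system-view
[Switch] cluster
# Add a device to the topology management blacklist.
[Switch-cluster] black-list add-mac 00e0-fc01-0013
# Confirm the whole current topology as the standard topology and save it to the local Flash.
[Switch-cluster] topology accept all save-to local-flash
```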
1.7.2 Configuring Interaction for a Cluster
After establishing a cluster, you can configure an FTP/TFTP server, an NM host, and a log host for the cluster on the management device.
l After you configure an FTP/TFTP server for a cluster, the members in the cluster access the configured FTP/TFTP server through the management device.
l After you configure a log host for a cluster, all log information of the cluster members is output to the configured log host as follows: the member devices first send their log information to the management device, which converts the addresses in the log information and then sends it to the log host.
l After you configure an NM host for a cluster, the member devices in the cluster send their Trap messages to the shared SNMP NM host through the management device.
If the port connecting an NM device (FTP/TFTP server, NM host, or log host) does not permit packets from the management VLAN to pass, the NM device cannot manage the devices in the cluster through the management device. In this case, you need to configure the VLAN interface connecting the NM device as the NM interface on the management device.
Follow these steps to configure the interaction for a cluster:
| To do… | Use the command… | Remarks |
| --- | --- | --- |
| Enter system view | system-view | — |
| Enter cluster view | cluster | — |
| Configure the FTP server shared by the cluster, with an IP address, username, and password | ftp-server ip-address [ user-name username password { simple \| cipher } password ] | Required. By default, no FTP server is configured for a cluster. |
| Configure the TFTP server shared by the member devices in the cluster | tftp-server ip-address | Required. By default, no TFTP server is configured for a cluster. |
| Configure the log host shared by the member devices in the cluster | logging-host ip-address | Required. By default, no log host is configured for a cluster. |
| Configure the SNMP NM host shared by the member devices in the cluster | snmp-host ip-address [ community-string read string1 write string2 ] | Required. By default, no SNMP host is configured. |
| Configure the NM interface of the management device | nm-interface vlan-interface vlan-interface-id | Optional |
Caution:
l For the configured log host to take effect, you must execute the info-center loghost command in system view first. For more information about the info-center loghost command, refer to Information Center Commands in the System Volume.
l To isolate the management protocol packets of a cluster from packets outside the cluster, it is recommended that you prohibit packets from the management VLAN from passing through the ports that connect the management device to devices outside the cluster, and that you configure the NM interface on the management device.
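A minimal sketch of the interaction configuration, including the optional FTP username and password (the IP addresses, username, password, and VLAN-interface number shown are hypothetical):

```
<Switch> system-view
[Switch] cluster
# Configure the shared FTP server with a username and a plain-text password.
[Switch-cluster] ftp-server 10.1.1.1 user-name admin password simple ftppwd
[Switch-cluster] tftp-server 10.1.1.1
[Switch-cluster] logging-host 10.1.1.2
[Switch-cluster] snmp-host 10.1.1.2
# Configure the VLAN interface connecting the NM devices as the NM interface.
[Switch-cluster] nm-interface vlan-interface 2
```

Remember that the configured log host takes effect only after the info-center loghost command has been executed in system view.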
1.8 Displaying and Maintaining Cluster Management
| To do… | Use the command… | Remarks |
| --- | --- | --- |
| Display NDP configuration information | display ndp [ interface interface-list ] | Available in any view |
| Display the global NTDP information | display ntdp | Available in any view |
| Display the device information collected through NTDP | display ntdp device-list [ verbose ] | Available in any view |
| Display the detailed NTDP information of a specified device | display ntdp single-device mac-address mac-address | Available in any view |
| Display cluster state and statistics | display cluster | Available in any view |
| Display the standard topology information | display cluster base-topology [ mac-address mac-address \| member-id member-number ] | Available in any view |
| Display the current blacklist of the cluster | display cluster black-list | Available in any view |
| Display the information of candidate devices | display cluster candidates [ mac-address mac-address \| verbose ] | Available in any view |
| Display the current topology information or the topology path between two devices | display cluster current-topology [ mac-address mac-address [ to-mac-address mac-address ] \| member-id member-number [ to-member-id member-number ] ] | Available in any view |
| Display members in a cluster | display cluster members [ member-number \| verbose ] | Available in any view |
| Clear NDP statistics | reset ndp statistics [ interface interface-list ] | Available in user view |
& Note:
Support for the display ntdp single-device mac-address command varies with devices.
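After a cluster is set up, the display commands above can be combined into a quick health check on the management device (the prompt shown assumes the cluster name aabbcc used in the configuration example below; output is omitted because it depends on the actual cluster):

```
# Check the cluster state and statistics.
<aabbcc_0.Switch> display cluster
# List the member devices and the candidate devices.
<aabbcc_0.Switch> display cluster members
<aabbcc_0.Switch> display cluster candidates
# Compare the current topology against the standard topology.
<aabbcc_0.Switch> display cluster current-topology
<aabbcc_0.Switch> display cluster base-topology
```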
1.9 Cluster Management Configuration Examples
1.9.1 Cluster Management Configuration Example One
I. Network requirements
Three switches form a cluster, in which:
l One device serves as the management device.
l The other two are the member devices.
The specific requirements are as follows:
l The management device is connected to the external network through its GigabitEthernet1/0/1 port. The two member devices are connected to GigabitEthernet1/0/2 and GigabitEthernet1/0/3 ports of the management device.
l GigabitEthernet1/0/1 port of the management device belongs to VLAN 2, whose interface IP address is 163.172.55.1/24. VLAN-interface 2 is the network management (NM) interface of the management device.
l All the devices in the cluster use the same FTP server and TFTP server, which share one IP address: 63.172.55.1/24.
l The SNMP NMS and log host share one IP address: 69.172.55.4/24.
l The management VLAN of the cluster is VLAN 10.
l Add the device whose MAC address is 00E0-FC01-0013 to the blacklist.
II. Network diagram
Figure 1-4 Network diagram for cluster management
III. Configuration procedure
1) Configuring the member devices (all member devices use the same configuration; one member is taken as an example)
# Enable NDP globally and for the Ethernet1/0/1 port.
<Switch> system-view
[Switch] ndp enable
[Switch] interface Ethernet1/0/1
[Switch-Ethernet1/0/1] ndp enable
[Switch-Ethernet1/0/1] quit
# Enable NTDP globally and for the Ethernet1/0/1 port.
[Switch] ntdp enable
[Switch] interface Ethernet1/0/1
[Switch-Ethernet1/0/1] ntdp enable
[Switch-Ethernet1/0/1] quit
# Enable the cluster function.
[Switch] cluster enable
2) Configuring the management device
# Enable NDP globally and for the GigabitEthernet1/0/2 and GigabitEthernet1/0/3 ports.
<Switch> system-view
[Switch] ndp enable
[Switch] interface GigabitEthernet1/0/2
[Switch-GigabitEthernet1/0/2] ndp enable
[Switch-GigabitEthernet1/0/2] quit
[Switch] interface GigabitEthernet1/0/3
[Switch-GigabitEthernet1/0/3] ndp enable
[Switch-GigabitEthernet1/0/3] quit
# Configure the period for the receiving device to keep NDP packets as 200 seconds.
[Switch] ndp timer aging 200
# Configure the interval to send NDP packets as 70 seconds.
[Switch] ndp timer hello 70
# Enable NTDP globally and for the GigabitEthernet1/0/2 and GigabitEthernet1/0/3 ports.
[Switch] ntdp enable
[Switch] interface GigabitEthernet1/0/2
[Switch-GigabitEthernet1/0/2] ntdp enable
[Switch-GigabitEthernet1/0/2] quit
[Switch] interface GigabitEthernet1/0/3
[Switch-GigabitEthernet1/0/3] ntdp enable
[Switch-GigabitEthernet1/0/3] quit
# Configure the hop count to collect topology as 2.
[Switch] ntdp hop 2
# Configure the delay time for topology-collection request packets to be forwarded on member devices as 150 ms.
[Switch] ntdp timer hop-delay 150
# Configure the delay time for topology-collection request packets to be forwarded through the ports of member devices as 15 ms.
[Switch] ntdp timer port-delay 15
# Configure the interval to collect topology information as 3 minutes.
[Switch] ntdp timer 3
# Configure the management VLAN of the cluster as VLAN 10.
[Switch] vlan 10
[Switch-vlan10] quit
[Switch] management-vlan 10
# Configure the port connecting the management device to candidate devices as a Trunk port and allow packets from the management VLAN to pass.
[Switch] interface GigabitEthernet 1/0/2
[Switch-GigabitEthernet 1/0/2] port link-type trunk
[Switch-GigabitEthernet 1/0/2] port trunk permit vlan 10
[Switch-GigabitEthernet 1/0/2] quit
[Switch] interface GigabitEthernet 1/0/3
[Switch-GigabitEthernet 1/0/3] port link-type trunk
[Switch-GigabitEthernet 1/0/3] port trunk permit vlan 10
[Switch-GigabitEthernet 1/0/3] quit
# Enable the cluster function.
[Switch] cluster enable
# Enter cluster view.
[Switch] cluster
# Configure an IP address pool for the cluster. The IP address pool contains six IP addresses, starting from 172.16.0.1.
[Switch-cluster] ip-pool 172.16.0.1 255.255.255.248
# Specify a name for the cluster and create the cluster.
[Switch-cluster] build aabbcc
Restore topology from local flash file,for there is no base topology.
(Please confirm in 30 seconds, default No). (Y/N)
N
# Enable the management VLAN synchronization function.
[aabbcc_0.Switch-cluster] management-vlan synchronization enable
# Configure the holdtime of the member device information as 100 seconds.
[aabbcc_0.Switch-cluster] holdtime 100
# Configure the interval to send handshake packets as 10 seconds.
[aabbcc_0.Switch-cluster] timer 10
# Configure the FTP Server, TFTP Server, Log host and SNMP host for the cluster.
[aabbcc_0.Switch-cluster] ftp-server 63.172.55.1
[aabbcc_0.Switch-cluster] tftp-server 63.172.55.1
[aabbcc_0.Switch-cluster] logging-host 69.172.55.4
[aabbcc_0.Switch-cluster] snmp-host 69.172.55.4
# Add the device whose MAC address is 00E0-FC01-0013 to the blacklist.
[aabbcc_0.Switch-cluster] black-list add-mac 00e0-fc01-0013
[aabbcc_0.Switch-cluster] quit
# Configure the network management interface.
[aabbcc_0.Switch] vlan 2
[aabbcc_0.Switch-vlan2] port GigabitEthernet 1/0/1
[aabbcc_0.Switch-vlan2] quit
[aabbcc_0.Switch] interface vlan-interface 2
[aabbcc_0.Switch-Vlan-interface2] ip address 163.172.55.1 24
[aabbcc_0.Switch-Vlan-interface2] quit
[aabbcc_0.Switch] cluster
[aabbcc_0.Switch-cluster] nm-interface vlan-interface 2
& Note:
l Upon completion of the above configurations, you can execute the cluster switch-to { member-number | mac-address mac-address } command on the management device to switch to the operation interface of a member device to maintain and manage it. You can then execute the quit command to return to the operation interface of the management device.
l You can also reboot a member device by executing the reboot member command on the management device.
l You can execute the cluster switch-to administrator command to switch to the operation interface of the management device.
l For detailed information about these configurations, refer to the preceding description in this chapter.