Configuring LLDP
About LLDP
The Link Layer Discovery Protocol (LLDP) is a standard link layer protocol that allows network devices from different vendors to discover neighbors and exchange system and configuration information.
In an LLDP-enabled network, a device advertises local device information in LLDP Data Units (LLDPDUs) to the directly connected devices. The information distributed through LLDP is stored by its recipients in standard MIBs, making it possible for the information to be accessed by a Network Management System (NMS) through SNMP.
Information that can be distributed through LLDP includes (but is not limited to):
· Major capabilities of the system.
· Management IP address of the system.
· Device ID.
· Port ID.
LLDP agents and bridge modes
An LLDP agent is an instance of the protocol entity that implements LLDP. Multiple LLDP agents can run on the same interface.
LLDP agents are classified into the following types:
· Nearest bridge agent.
· Nearest customer bridge agent.
· Nearest non-TPMR bridge agent.
A Two-port MAC Relay (TPMR) is a type of bridge that has only two externally-accessible bridge ports. It supports a subset of the features of a MAC bridge. A TPMR is transparent to all frame-based media-independent protocols except for the following protocols:
¡ Protocols destined for the TPMR.
¡ Protocols destined for reserved MAC addresses that the relay feature of the TPMR is configured not to forward.
LLDP exchanges packets between neighbor agents and creates and maintains neighbor information for them. Figure 1 shows the neighbor relationships for these LLDP agents.
Figure 1 LLDP neighbor relationships
The types of supported LLDP agents vary with the bridge mode in which LLDP operates. LLDP supports the following bridge modes: customer bridge (CB) and service bridge (SB).
· Customer bridge mode—LLDP supports nearest bridge agent, nearest non-TPMR bridge agent, and nearest customer bridge agent. LLDP processes the LLDP frames with destination MAC addresses for these agents and transparently transmits the LLDP frames with other destination MAC addresses in VLANs.
· Service bridge mode—LLDP supports nearest bridge agent and nearest non-TPMR bridge agent. LLDP processes the LLDP frames with destination MAC addresses for these agents and transparently transmits the LLDP frames with other destination MAC addresses in VLANs.
LLDP frame formats
LLDP sends device information in LLDP frames. LLDP frames are encapsulated in Ethernet II or Subnetwork Access Protocol (SNAP) format.
LLDP frame encapsulated in Ethernet II
Figure 2 Ethernet II-encapsulated LLDP frame
Table 1 Fields in an Ethernet II-encapsulated LLDP frame
| Field | Description |
| --- | --- |
| Destination MAC address | MAC address to which the LLDP frame is advertised. LLDP specifies different multicast MAC addresses as destination MAC addresses for LLDP frames destined for agents of different types. This helps distinguish between LLDP frames sent and received by different agent types on the same interface. The destination MAC address is fixed to one of the following multicast MAC addresses: 0x0180-c200-000E for LLDP frames destined for nearest bridge agents, 0x0180-c200-0000 for LLDP frames destined for nearest customer bridge agents, and 0x0180-c200-0003 for LLDP frames destined for nearest non-TPMR bridge agents. |
| Source MAC address | MAC address of the sending port. |
| Type | Ethernet type for the upper-layer protocol. This field is 0x88CC for LLDP. |
| Data | LLDPDU. |
| FCS | Frame check sequence, a 32-bit CRC value used to determine the validity of the received Ethernet frame. |
LLDP frame encapsulated in SNAP
Figure 3 SNAP-encapsulated LLDP frame
Table 2 Fields in a SNAP-encapsulated LLDP frame
| Field | Description |
| --- | --- |
| Destination MAC address | MAC address to which the LLDP frame is advertised. It is the same as that for Ethernet II-encapsulated LLDP frames. |
| Source MAC address | MAC address of the sending port. |
| Type | SNAP type for the upper-layer protocol. This field is 0xAAAA-0300-0000-88CC for LLDP. |
| Data | LLDPDU. |
| FCS | Frame check sequence, a 32-bit CRC value used to determine the validity of the received Ethernet frame. |
LLDPDUs
Each LLDP frame contains one LLDPDU. Each LLDPDU is a sequence of type-length-value (TLV) structures.
Figure 4 LLDPDU encapsulation format
As shown in Figure 4, each LLDPDU starts with the following mandatory TLVs: Chassis ID TLV, Port ID TLV, and Time to Live TLV. The mandatory TLVs are followed by a maximum of 29 optional TLVs.
TLVs
A TLV is an information element that contains the type, length, and value fields.
LLDPDU TLVs include the following categories:
· Basic management TLVs.
· Organizationally specific TLVs (IEEE 802.1 and IEEE 802.3).
· LLDP-MED (media endpoint discovery) TLVs.
Basic management TLVs are essential to device management.
Organizationally specific TLVs and LLDP-MED TLVs are used for enhanced device management. They are defined by standardization or other organizations and are optional for LLDPDUs.
Basic management TLVs
Table 3 lists the basic management TLV types. Some of them are mandatory for LLDPDUs.
Table 3 Basic management TLVs

| Type | Description | Remarks |
| --- | --- | --- |
| Chassis ID | Specifies the bridge MAC address of the sending device. | Mandatory. |
| Port ID | Specifies the ID of the sending port. If the LLDPDU carries LLDP-MED TLVs, the port ID TLV carries the MAC address of the sending port. Otherwise, the port ID TLV carries the port name. | Mandatory. |
| Time to Live | Specifies the life of the transmitted information on the receiving device. | Mandatory. |
| End of LLDPDU | Marks the end of the TLV sequence in the LLDPDU. | Mandatory. |
| Port Description | Specifies the description for the sending port. | Optional. |
| System Name | Specifies the assigned name of the sending device. | Optional. |
| System Description | Specifies the description for the sending device. | Optional. |
| System Capabilities | Identifies the primary features of the sending device and the enabled primary features. | Optional. |
| Management Address | Specifies the management address of the local device, and the interface number and object identifier (OID) associated with the address. | Optional. |
IEEE 802.1 organizationally specific TLVs
Table 4 lists the IEEE 802.1 organizationally specific TLVs.
The device can receive protocol identity TLVs and VID usage digest TLVs, but it cannot send these TLVs.
Table 4 IEEE 802.1 organizationally specific TLVs
| Type | Description |
| --- | --- |
| Port VLAN ID (PVID) | Specifies the port PVID. |
| Port And Protocol VLAN ID (PPVID) | Indicates whether the device supports protocol VLANs and, if so, what VLAN IDs these protocols will be associated with. |
| VLAN Name | Specifies the textual name of any VLAN to which the port belongs. |
| Protocol Identity | Indicates protocols supported on the port. |
| DCBX | Data center bridging exchange protocol. |
| EVB module | Edge Virtual Bridging module, including EVB TLV and CDCP TLV. EVB module TLVs are not supported in the current software version. |
| Link Aggregation | Indicates whether the port supports link aggregation, and if yes, whether link aggregation is enabled. |
| Management VID | Management VLAN ID. |
| VID Usage Digest | VLAN ID usage digest. |
| ETS Configuration | Enhanced Transmission Selection configuration. |
| ETS Recommendation | ETS recommendation. |
| PFC | Priority-based Flow Control. |
| APP | Application protocol. |
| QCN | Quantized Congestion Notification. QCN TLVs are not supported in the current software version. |
IEEE 802.3 organizationally specific TLVs
Table 5 shows the IEEE 802.3 organizationally specific TLVs.
The Power Stateful Control TLV is defined in IEEE P802.3at D1.0 and is not supported in later versions of the standard. The device sends this type of TLV only after receiving it.
Table 5 IEEE 802.3 organizationally specific TLVs
| Type | Description |
| --- | --- |
| MAC/PHY Configuration/Status | Contains the bit-rate and duplex capabilities of the port, support for autonegotiation, enabling status of autonegotiation, and the current rate and duplex mode. |
| Link Aggregation | Indicates whether the port supports link aggregation, and if yes, whether link aggregation is enabled. |
| Power Via MDI | Contains the power supply capabilities of the port: port class (PSE or PD), power supply mode, whether PSE power supply is supported, whether PSE power supply is enabled, whether pair selection can be controlled, power supply type, power source, power priority, PD requested power, and PSE allocated power. |
| Maximum Frame Size | Indicates the supported maximum frame size. |
| Power Stateful Control | Indicates the power state control configured on the sending port, including the power supply mode of the PSE/PD, PSE/PD priority, and PSE/PD power. |
| Energy-Efficient Ethernet | Indicates Energy Efficient Ethernet (EEE). Energy-Efficient Ethernet TLVs are not supported in the current software version. |
LLDP-MED TLVs
LLDP-MED TLVs provide multiple advanced applications for voice over IP (VoIP), such as basic configuration, network policy configuration, and address and directory management. LLDP-MED TLVs provide a cost-effective and easy-to-use solution for deploying voice devices in Ethernet. LLDP-MED TLVs are shown in Table 6.
If the MAC/PHY configuration/status TLV is not advertisable, none of the LLDP-MED TLVs will be advertised even if they are advertisable.
If the LLDP-MED capabilities TLV is not advertisable, the other LLDP-MED TLVs will not be advertised even if they are advertisable.
Table 6 LLDP-MED TLVs

| Type | Description |
| --- | --- |
| LLDP-MED Capabilities | Allows a network device to advertise the LLDP-MED TLVs that it supports. |
| Network Policy | Allows a network device or terminal device to advertise the VLAN ID of a port, the VLAN type, and the Layer 2 and Layer 3 priorities for specific applications. |
| Extended Power-via-MDI | Allows a network device or terminal device to advertise power supply capability. This TLV is an extension of the Power Via MDI TLV. |
| Hardware Revision | Allows a terminal device to advertise its hardware version. |
| Firmware Revision | Allows a terminal device to advertise its firmware version. |
| Software Revision | Allows a terminal device to advertise its software version. |
| Serial Number | Allows a terminal device to advertise its serial number. |
| Manufacturer Name | Allows a terminal device to advertise its vendor name. |
| Model Name | Allows a terminal device to advertise its model name. |
| Asset ID | Allows a terminal device to advertise its asset ID. Typically, the user specifies the asset ID for the endpoint to facilitate directory management and asset tracking. |
| Location Identification | Allows a network device to advertise the appropriate location identifier information for a terminal device to use in the context of location-based applications. |
Management address
The network management system uses the management address of a device to identify and manage the device for topology maintenance and network management. The management address is encapsulated in the management address TLV.
LLDP operating modes
An LLDP agent can operate in one of the following modes:
· TxRx mode—An LLDP agent in this mode can send and receive LLDP frames.
· Tx mode—An LLDP agent in this mode can only send LLDP frames.
· Rx mode—An LLDP agent in this mode can only receive LLDP frames.
· Disable mode—An LLDP agent in this mode cannot send or receive LLDP frames.
Each time the operating mode of an LLDP agent changes, its LLDP protocol state machine reinitializes. A configurable reinitialization delay prevents frequent initializations caused by frequent changes to the operating mode. If you configure the reinitialization delay, an LLDP agent must wait the specified amount of time to initialize LLDP after the LLDP operating mode changes.
Transmitting and receiving LLDP frames
Transmitting LLDP frames
An LLDP agent operating in TxRx mode or Tx mode sends LLDP frames to its directly connected devices both periodically and when the local configuration changes. To prevent LLDP frames from overwhelming the network during times of frequent changes to local device information, LLDP uses the token bucket mechanism to rate limit LLDP frames. For more information about the token bucket mechanism, see ACL and QoS Configuration Guide.
LLDP automatically enables the fast LLDP frame transmission mechanism in either of the following cases:
· A new LLDP frame is received and carries device information new to the local device.
· The LLDP operating mode of the LLDP agent changes from Disable or Rx to TxRx or Tx.
The fast LLDP frame transmission mechanism successively sends the specified number of LLDP frames at a configurable fast LLDP frame transmission interval. The mechanism helps LLDP neighbors discover the local device as soon as possible. Then, the normal LLDP frame transmission interval resumes.
Receiving LLDP frames
An LLDP agent operating in TxRx mode or Rx mode confirms the validity of TLVs carried in every received LLDP frame. If the TLVs are valid, the LLDP agent saves the information and starts an aging timer. The initial value of the aging timer is equal to the TTL value in the Time To Live TLV carried in the LLDP frame. When the LLDP agent receives a new LLDP frame, the aging timer restarts. When the aging timer decreases to zero, all saved information ages out.
Collaboration with Track
You can configure a track entry and associate it with an LLDP interface. The LLDP module checks the neighbor availability of the LLDP interface and reports the check result to the Track module. The Track module changes the track entry status accordingly so the associated application module can take correct actions.
The Track module changes the track entry status based on the neighbor availability of a monitored LLDP interface as follows:
· If the neighbor of the LLDP interface is available, the Track module sets the track entry to Positive state.
· If the neighbor of the LLDP interface is unavailable, the Track module sets the track entry to Negative state.
For more information about collaboration between Track and LLDP, see the track configuration in High Availability Configuration Guide.
Protocols and standards
· IEEE 802.1AB-2005, Station and Media Access Control Connectivity Discovery
· IEEE 802.1AB-2009, Station and Media Access Control Connectivity Discovery
· ANSI/TIA-1057, Link Layer Discovery Protocol for Media Endpoint Devices
· IEEE Std 802.1Qaz-2011, Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks-Amendment 18: Enhanced Transmission Selection for Bandwidth Sharing Between Traffic Classes
Restrictions and guidelines: LLDP configuration
When you configure LLDP, follow these restrictions and guidelines:
· Some LLDP configuration tasks are available in multiple interface views: Layer 2 Ethernet interface view, management Ethernet interface view, and Layer 2 aggregate interface view. For example, configuring LLDP trapping is supported in all three views, but configuring the port ID subtype of port ID TLVs advertised by LLDP is not supported in Layer 2 aggregate interface view.
LLDP tasks at a glance
To configure LLDP, perform the following tasks:
1. Enabling LLDP
2. Setting the LLDP bridge mode
3. Setting the LLDP operating mode
4. (Optional.) Setting the LLDP reinitialization delay
5. (Optional.) Configuring LLDP packet-related settings
¡ Configuring the advertisable TLVs
¡ Configuring advertisement of the management address TLV
¡ Setting the encapsulation format for LLDP frames
¡ Setting LLDP frame transmission parameters
¡ Setting the timeout for receiving LLDP frames
¡ Configuring port ID subtype of port ID TLVs advertised by LLDP
6. (Optional.) Enabling LLDP polling
7. (Optional.) Disabling LLDP PVID inconsistency check
8. (Optional.) Configuring LLDP trapping and LLDP-MED trapping
9. (Optional.) Configuring LLDP neighbor validation and aging
¡ Configuring LLDP neighbor validation on an interface
¡ Configuring LLDP neighbor aging on an interface
10. (Optional.) Configuring MAC address learning for DCN
¡ Setting the source MAC address of LLDP frames
¡ Enabling generation of ARP or ND entries for received management address TLVs
Enabling LLDP
Restrictions and guidelines
For LLDP to take effect on specific ports, you must enable LLDP both globally and on these ports.
Procedure
1. Enter system view.
system-view
2. Enable LLDP globally.
lldp global enable
The default LLDP global state varies by switch configuration.
¡ If the switch is started with the software default settings, LLDP is disabled globally.
¡ If the switch is started with the factory default settings, LLDP is enabled globally.
For more information about device startup with software or factory default settings, see Fundamentals Configuration Guide.
3. Enter interface view.
interface interface-type interface-number
4. Enable LLDP.
lldp enable
By default, LLDP is enabled on a port.
Setting the LLDP bridge mode
1. Enter system view.
system-view
2. Set the LLDP bridge mode.
¡ Set the LLDP bridge mode to service bridge.
lldp mode service-bridge
By default, LLDP operates in customer bridge mode.
¡ Set the LLDP bridge mode to customer bridge.
undo lldp mode
By default, LLDP operates in customer bridge mode.
Setting the LLDP operating mode
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Set the LLDP operating mode.
¡ In Layer 2 Ethernet interface view or management Ethernet interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] admin-status { disable | rx | tx | txrx }
In Ethernet interface view, if you do not specify an agent type, the command sets the operating mode for the nearest bridge agent.
¡ In Layer 2 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } admin-status { disable | rx | tx | txrx }
In aggregate interface view, you can set the operating mode only for the nearest customer bridge agent and nearest non-TPMR bridge agent.
By default:
¡ The nearest bridge agent operates in TxRx mode.
¡ The nearest customer bridge agent and nearest non-TPMR bridge agent operate in Disable mode.
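For example, the following commands (the interface name is illustrative) set the LLDP operating mode to Rx for the nearest bridge agent on a Layer 2 Ethernet interface, so that the agent only receives LLDP frames:
<Sysname> system-view
[Sysname] interface twenty-fivegige 1/0/1
[Sysname-Twenty-FiveGigE1/0/1] lldp admin-status rx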
Setting the LLDP reinitialization delay
About this task
When the LLDP operating mode changes on a port, the port initializes the protocol state machines after an LLDP reinitialization delay. By adjusting the delay, you can avoid frequent initializations caused by frequent changes to the LLDP operating mode on a port.
Procedure
1. Enter system view.
system-view
2. Set the LLDP reinitialization delay.
lldp timer reinit-delay delay
The default LLDP reinitialization delay is 2 seconds.
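For example, the following command sets the LLDP reinitialization delay to 5 seconds (an illustrative value):
<Sysname> system-view
[Sysname] lldp timer reinit-delay 5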
Configuring the advertisable TLVs
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Configure the advertisable TLVs.
¡ In Layer 2 Ethernet interface view:
lldp tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ipv6 ] [ ip-address | interface loopback interface-number ] } | dot1-tlv { all | port-vlan-id | link-aggregation | dcbx | protocol-vlan-id [ vlan-id ] | vlan-name [ vlan-id-list ] | management-vid [ mvlan-id ] } | dot3-tlv { all | link-aggregation | mac-physic | max-frame-size | power } | med-tlv { all | capability | inventory | network-policy [ vlan-id ] | power-over-ethernet | location-id { civic-address device-type country-code { ca-type ca-value }&<1-10> | elin-address tel-number } } }
By default, the nearest bridge agent advertises all supported TLVs except the following TLVs:
- DCBX TLVs.
- Location identification TLVs.
- Port and protocol VLAN ID TLVs.
- VLAN name TLVs.
- Management VLAN ID TLVs.
lldp agent nearest-nontpmr tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ipv6 ] [ ip-address ] } | dot1-tlv { all | port-vlan-id | link-aggregation } | dot3-tlv { all | link-aggregation } }
lldp tlv-enable dot1-tlv { protocol-vlan-id [ vlan-id ] | vlan-name [ vlan-id-list ] | management-vid [ mvlan-id ] }
By default, the nearest non-TPMR bridge agent does not advertise any TLVs.
lldp agent nearest-customer tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ipv6 ] [ ip-address ] } | dot1-tlv { all | port-vlan-id | link-aggregation } | dot3-tlv { all | link-aggregation } }
lldp tlv-enable dot1-tlv { protocol-vlan-id [ vlan-id ] | vlan-name [ vlan-id-list ] | management-vid [ mvlan-id ] }
By default, the nearest customer bridge agent advertises all the supported basic management TLVs, port VLAN ID TLVs, and link aggregation TLVs.
¡ In management Ethernet interface view:
lldp tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ipv6 ] [ ip-address ] } | dot1-tlv { all | link-aggregation } | dot3-tlv { all | link-aggregation | mac-physic | max-frame-size | power } | med-tlv { all | capability | inventory | power-over-ethernet | location-id { civic-address device-type country-code { ca-type ca-value }&<1-10> | elin-address tel-number } } }
By default, the nearest bridge agent advertises all types of LLDP TLVs except network policy TLVs and location identification TLVs. Among IEEE 802.1 organizationally specific TLVs, only link aggregation TLVs are supported.
lldp agent { nearest-nontpmr | nearest-customer } tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ipv6 ] [ ip-address ] } | dot1-tlv { all | link-aggregation } | dot3-tlv { all | link-aggregation } }
By default:
- The nearest non-TPMR bridge agent does not advertise any TLVs.
- The nearest customer bridge agent advertises all supported basic management TLVs and link aggregation TLVs in the IEEE 802.1 organizationally specific TLV set.
¡ In Layer 2 aggregate interface view:
lldp tlv-enable dot1-tlv { protocol-vlan-id [ vlan-id ] | vlan-name [ vlan-id-list ] | management-vid [ mvlan-id ] }
lldp agent nearest-nontpmr tlv-enable { basic-tlv { all | management-address-tlv [ ipv6 ] [ ip-address | interface loopback interface-number ] | port-description | system-capability | system-description | system-name } | dot1-tlv { all | port-vlan-id } }
By default, the nearest non-TPMR bridge agent does not advertise any TLVs.
lldp agent nearest-customer tlv-enable { basic-tlv { all | management-address-tlv [ ipv6 ] [ ip-address ] | port-description | system-capability | system-description | system-name } | dot1-tlv { all | port-vlan-id } }
By default, the nearest customer bridge agent advertises all supported basic management TLVs and port VLAN ID TLVs in the IEEE 802.1 organizationally specific TLV set.
The nearest bridge agent is not supported in Layer 2 aggregate interface view.
Configuring advertisement of the management address TLV
About this task
LLDP encodes management addresses in numeric or string format in management address TLVs.
If a neighbor encodes its management address in string format, set the encoding format of the management address to string on the connecting port. This guarantees normal communication with the neighbor.
You can configure advertisement of the management address TLV globally or on a per-interface basis. The device selects the management address TLV advertisement setting for an interface in the following order:
1. Interface-based setting, configured by using the lldp tlv-enable command with the management-address-tlv keyword.
2. Global setting, configured by using the lldp global tlv-enable basic-tlv management-address-tlv command.
3. Default setting for the interface.
By default:
¡ The nearest bridge agent and nearest customer bridge agent advertise the management address TLV.
¡ The nearest non-TPMR bridge agent does not advertise the management address TLV.
Procedure
1. Enter system view.
system-view
2. Enable advertisement of the management address TLV globally and set the management address to be advertised.
lldp [ agent { nearest-customer | nearest-nontpmr } ] global tlv-enable basic-tlv management-address-tlv [ ipv6 ] { ip-address | interface loopback interface-number | interface m-gigabitethernet interface-number | interface vlan-interface interface-number }
By default, advertisement of the management address TLV is disabled globally.
3. Enter interface view.
interface interface-type interface-number
4. Enable advertisement of the management address TLV on the interface and set the management address to be advertised.
¡ In Layer 2 Ethernet interface view:
lldp tlv-enable basic-tlv management-address-tlv [ ipv6 ] [ ip-address | interface loopback interface-number ]
lldp agent { nearest-customer | nearest-nontpmr } tlv-enable basic-tlv management-address-tlv [ ipv6 ] [ ip-address ]
¡ In management interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] tlv-enable basic-tlv management-address-tlv [ ipv6 ] [ ip-address ]
¡ In Layer 2 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } tlv-enable basic-tlv management-address-tlv [ ipv6 ] [ ip-address ]
By default:
¡ The nearest bridge agent and nearest customer bridge agent advertise the management address TLVs.
¡ The nearest non-TPMR bridge agent does not advertise the management address TLV.
5. Set the encoding format of the management address to string.
¡ In Layer 2 Ethernet interface view or management Ethernet interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] management-address-format string
¡ In Layer 2 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } management-address-format string
The default management address encoding format is numeric.
The device supports only the numeric encoding format for IPv6 management addresses.
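For example, the following commands (the interface and VLAN-interface numbers are illustrative) enable advertisement of the management address TLV globally with the address of VLAN-interface 1 as the management address, and set the management address encoding format to string for the nearest bridge agent on a Layer 2 Ethernet interface:
<Sysname> system-view
[Sysname] lldp global tlv-enable basic-tlv management-address-tlv interface vlan-interface 1
[Sysname] interface twenty-fivegige 1/0/1
[Sysname-Twenty-FiveGigE1/0/1] lldp management-address-format string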
Setting the encapsulation format for LLDP frames
About this task
Earlier versions of LLDP require the same encapsulation format on both ends to process LLDP frames. To successfully communicate with a neighboring device running an earlier version of LLDP, the local device must be set with the same encapsulation format.
Procedure
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Set the encapsulation format for LLDP frames to SNAP.
¡ In Layer 2 Ethernet interface view or management Ethernet interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] encapsulation snap
¡ In Layer 2 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } encapsulation snap
By default, the Ethernet II encapsulation format is used.
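For example, the following commands (the interface name is illustrative) set the encapsulation format to SNAP for the nearest bridge agent on a Layer 2 Ethernet interface:
<Sysname> system-view
[Sysname] interface twenty-fivegige 1/0/1
[Sysname-Twenty-FiveGigE1/0/1] lldp encapsulation snap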
Setting LLDP frame transmission parameters
About this task
The Time to Live TLV carried in an LLDPDU determines how long the device information carried in the LLDPDU can be saved on a recipient device.
By setting the TTL multiplier, you can configure the TTL of locally sent LLDPDUs. The TTL is expressed by using the following formula:
TTL = Min (65535, (TTL multiplier × LLDP frame transmission interval + 1))
As the expression shows, the TTL can be up to 65535 seconds. TTLs greater than 65535 will be rounded down to 65535 seconds.
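For example, with the default TTL multiplier of 4 and the default LLDP frame transmission interval of 30 seconds, TTL = Min (65535, (4 × 30 + 1)) = 121 seconds.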
Procedure
1. Enter system view.
system-view
2. Set the TTL multiplier.
lldp hold-multiplier value
The default setting is 4.
3. Set the LLDP frame transmission interval.
lldp timer tx-interval interval
The default setting is 30 seconds.
4. Set the token bucket size for sending LLDP frames.
lldp max-credit credit-value
The default setting is 5.
5. Set the number of LLDP frames sent each time fast LLDP frame transmission is triggered.
lldp fast-count count
The default setting is 4.
6. Set the fast LLDP frame transmission interval.
lldp timer fast-interval interval
The default setting is 1 second.
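For example, the following commands (the values are illustrative) set the TTL multiplier to 6 and the LLDP frame transmission interval to 20 seconds, which yields a TTL of Min (65535, (6 × 20 + 1)) = 121 seconds for locally sent LLDPDUs:
<Sysname> system-view
[Sysname] lldp hold-multiplier 6
[Sysname] lldp timer tx-interval 20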
Setting the timeout for receiving LLDP frames
About this task
This feature allows the device to detect the presence of directly connected neighbors by setting the timeout timer for receiving LLDP frames. If an interface has not received any frames when the timeout timer expires, the device reports a no LLDP neighbor event to the NETCONF module.
Restrictions and guidelines
To avoid misdetection, make sure the timeout for receiving LLDP frames is greater than the LLDP frame transmission interval.
Procedure
1. Enter system view.
system-view
2. Set the timeout for receiving LLDP frames.
lldp timer rx-timeout timeout
By default, no timeout is set for receiving LLDP frames, and the device does not report no LLDP neighbor events.
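For example, the following command sets the timeout for receiving LLDP frames to 100 seconds (an illustrative value greater than the default 30-second LLDP frame transmission interval):
<Sysname> system-view
[Sysname] lldp timer rx-timeout 100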
Configuring port ID subtype of port ID TLVs advertised by LLDP
About this task
This task enables an H3C device to advertise only port ID TLVs that contain interface names. By default, an H3C device advertises port ID TLVs that contain interface MAC addresses or interface names. The media devices from some vendors can obtain interface information from H3C devices only through LLDP. For the media devices to obtain interface names, you must configure H3C devices to generate port ID TLVs that use interface names as the port ID subtype.
Restrictions and guidelines
Perform this task only when LLDP neighbors must obtain interface names from LLDPDUs. Do not perform this task in any other scenarios.
Procedure
1. Enter system view.
system-view
2. Enter Layer 2 Ethernet interface view or management Ethernet interface view.
interface interface-type interface-number
3. Configure the port ID subtype of port ID TLVs advertised by LLDP.
lldp tlv-config basic-tlv port-id type-id
By default, an interface advertises port ID TLVs that contain interface MAC addresses if it receives LLDP-MED TLVs, and advertises port ID TLVs that contain interface names if no LLDP-MED TLVs are received.
Enabling LLDP polling
About this task
With LLDP polling enabled, a device periodically searches for local configuration changes. When the device detects a configuration change, it sends LLDP frames to inform neighboring devices of the change.
Procedure
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Enable LLDP polling and set the polling interval.
¡ In Layer 2 Ethernet interface view or management Ethernet interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] check-change-interval interval
¡ In Layer 2 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } check-change-interval interval
By default, LLDP polling is disabled.
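For example, the following commands (the interface name and interval are illustrative) enable LLDP polling with a 30-second polling interval for the nearest bridge agent on a Layer 2 Ethernet interface:
<Sysname> system-view
[Sysname] interface twenty-fivegige 1/0/1
[Sysname-Twenty-FiveGigE1/0/1] lldp check-change-interval 30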
Disabling LLDP PVID inconsistency check
About this task
By default, when the system receives an LLDP packet, it compares the PVID value contained in the packet with the PVID configured on the receiving interface. If the two PVIDs do not match, a log message will be printed to notify the user.
You can disable PVID inconsistency check if different PVIDs are required on a link.
Procedure
1. Enter system view.
system-view
2. Disable LLDP PVID inconsistency check.
lldp ignore-pvid-inconsistency
By default, LLDP PVID inconsistency check is enabled.
Configuring LLDP trapping and LLDP-MED trapping
About this task
LLDP trapping or LLDP-MED trapping notifies the network management system of events such as newly detected neighboring devices and link failures.
Before you set a trap transmission interval for LLDP, make sure you understand the following influences of the LLDP trap and LLDP-MED trap transmission interval:
· If the interval value is large, traps are sent less frequently, which consumes fewer system resources. This is suitable for scenarios where the network topology on the device changes frequently. However, do not set the interval too large. If the interval is too large, the network management system cannot identify neighbor changes in time, and the network topology it maintains for the device is not updated promptly.
· If the interval value is small, traps are sent more frequently. This is suitable for topology-sensitive scenarios. However, do not set the interval too small. If the interval is too small, the network management system frequently refreshes neighbor status information, which not only consumes excessive system resources but also causes network topology flapping on the device.
Restrictions and guidelines
In the early stages of network deployment, the network topology changes frequently. If LLDP trapping or LLDP-MED trapping is enabled, the device frequently sends traps to the network management system, which increases the system load and wastes resources. As a best practice, do not enable LLDP trapping or LLDP-MED trapping in the early stages of network deployment.
Procedure
1. Enter system view.
system-view
2. Enter interface view.
interface interface-type interface-number
3. Enable LLDP trapping.
¡ In Layer 2 Ethernet interface view or management Ethernet interface view:
lldp [ agent { nearest-customer | nearest-nontpmr } ] notification remote-change enable
¡ In Layer 2 aggregate interface view:
lldp agent { nearest-customer | nearest-nontpmr } notification remote-change enable
By default, LLDP trapping is disabled.
4. (In Layer 2 Ethernet interface view or management Ethernet interface view.) Enable LLDP-MED trapping.
lldp notification med-topology-change enable
By default, LLDP-MED trapping is disabled.
5. Return to system view.
quit
6. (Optional.) Set the LLDP trap transmission interval.
lldp timer notification-interval interval
The default setting is 30 seconds.
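For example, the following commands (the interface name and interval are illustrative) enable LLDP trapping and LLDP-MED trapping on a Layer 2 Ethernet interface and set the trap transmission interval to 60 seconds:
<Sysname> system-view
[Sysname] interface twenty-fivegige 1/0/1
[Sysname-Twenty-FiveGigE1/0/1] lldp notification remote-change enable
[Sysname-Twenty-FiveGigE1/0/1] lldp notification med-topology-change enable
[Sysname-Twenty-FiveGigE1/0/1] quit
[Sysname] lldp timer notification-interval 60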
Configuring LLDP neighbor validation and aging
Configuring LLDP neighbor validation on an interface
About this task
LLDP neighbor validation enables an interface to validate the identity of the neighbor based on the neighbor validation criteria configured on the interface. The neighbor validation criteria can be the chassis ID TLV, port ID TLV, or both. Each incoming LLDP packet must match all the validation criteria configured on the interface. If the neighbor information in an incoming LLDP packet does not match the criteria, the system shuts down the data link layer and disables data transmission on the interface.
Procedure
1. Enter system view.
system-view
2. Enter Layer 2 Ethernet interface view.
interface interface-type interface-number
3. Configure the neighbor validation criteria. Choose the following tasks as needed:
¡ Configure the chassis ID TLV criterion.
lldp neighbor-identity chassis-id chassis-id-subtype chassis-id
¡ Configure the port ID TLV criterion.
lldp neighbor-identity port-id port-id-subtype port-id
By default, no neighbor validation criteria exist on an interface.
4. Enable LLDP neighbor validation on the interface.
lldp neighbor-protection validation
By default, LLDP neighbor validation is disabled on an interface.
Configuring LLDP neighbor aging on an interface
About this task
An LLDP neighbor aging-enabled interface ages out a neighbor if it does not receive an LLDP packet from the neighbor within the aging time.
LLDP takes either of the following actions when neighbor aging occurs on an interface:
· Block—Blocks the interface. The block action places the data link layer protocol of the interface in DOWN state. In this state, the interface cannot transfer data packets. The data transfer capability automatically recovers when the interface receives an LLDP packet.
· Shutdown—Shuts down the interface. The shutdown action places the interface in LLDP DOWN state. In this state, the interface can neither transfer data packets nor LLDP packets. You must manually execute the undo lldp neighbor-protection aging or undo shutdown command to bring up the interface.
Procedure
1. Enter system view.
system-view
2. Enter Layer 2 Ethernet interface view.
interface interface-type interface-number
3. Enable LLDP neighbor aging on the interface.
lldp neighbor-protection aging { block | shutdown }
By default, neighbor aging is disabled on an interface.
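For example, the following commands (the interface name is illustrative) enable LLDP neighbor aging with the block action on a Layer 2 Ethernet interface:
<Sysname> system-view
[Sysname] interface twenty-fivegige 1/0/1
[Sysname-Twenty-FiveGigE1/0/1] lldp neighbor-protection aging block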
Configuring MAC address learning for DCN
Setting the source MAC address of LLDP frames
About this task
Use this feature together with generation of ARP or ND entries for received management address TLVs to meet the following requirements:
· The source MAC address of outgoing LLDP frames is the MAC address of a VLAN interface instead of the MAC address of the egress interface.
· The neighbor device can generate correct ARP or ND entries for the local device.
In Layer 2 Ethernet interface view or Layer 2 aggregate interface view, this feature sets the source MAC address of outgoing LLDP frames to the MAC address of the VLAN interface of the specified VLAN. The source MAC address of outgoing LLDP frames falls back to the MAC address of the Layer 2 interface in the following situations:
· The specified VLAN or the corresponding VLAN interface does not exist.
· The VLAN interface to which the VLAN ID belongs is physically down.
Procedure
1. Enter system view.
system-view
2. Enter Layer 2 Ethernet interface view or Layer 2 aggregate interface view.
interface interface-type interface-number
3. Set the source MAC address of LLDP frames to the MAC address of a VLAN interface.
lldp source-mac vlan vlan-id
By default, the source MAC address of LLDP frames is the MAC address of the egress interface.
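For example, the following commands (the interface name and VLAN ID are illustrative) set the source MAC address of LLDP frames sent out of a Layer 2 Ethernet interface to the MAC address of VLAN-interface 10:
<Sysname> system-view
[Sysname] interface twenty-fivegige 1/0/1
[Sysname-Twenty-FiveGigE1/0/1] lldp source-mac vlan 10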
Enabling generation of ARP or ND entries for received management address TLVs
About this task
This feature enables the device to generate an ARP or ND entry after receiving an LLDP frame containing a management address TLV on an interface. The ARP or ND entry maps the advertised management address to the source MAC address of the frame.
You can enable generation of both ARP and ND entries on an interface. If the management address TLV contains an IPv4 address, the device generates an ARP entry. If the management address TLV contains an IPv6 address, the device generates an ND entry.
In Layer 2 Ethernet interface view or Layer 2 aggregate interface view, this feature uses the Layer 2 interface as the output interface in the generated entries. The generated entries belong to the VLAN specified by this feature. The device cannot generate ARP or ND entries in the following situations:
· The specified VLAN or the corresponding VLAN interface does not exist.
· The VLAN interface to which the VLAN ID belongs is physically down.
Restrictions and guidelines
Use this feature together with setting the source MAC address of LLDP frames to meet the following requirements:
· The device uses the MAC address of a VLAN interface instead of the egress interface's MAC address as the source MAC address of LLDP frames.
· The neighbor device can generate correct ARP or ND entries.
Procedure
1. Enter system view.
system-view
2. Enter Layer 2 Ethernet interface view or Layer 2 aggregate interface view.
interface interface-type interface-number
3. Enable generation of ARP or ND entries for management address TLVs received on the interface.
lldp management-address { arp-learning | nd-learning } vlan vlan-id
By default, generation of ARP or ND entries for received management address TLVs is disabled on an interface.
In Layer 2 Ethernet interface view, the vlan vlan-id option specifies the ID of the VLAN to which the generated ARP or ND entry belongs. To prevent the ARP or ND entries from overwriting each other, do not specify the same VLAN ID for different Layer 2 Ethernet interfaces.
You can enable generation of both ARP and ND entries on an interface.
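For example, the following commands (the interface name and VLAN ID are illustrative) enable generation of ARP entries in VLAN 10 for management address TLVs received on a Layer 2 Ethernet interface:
<Sysname> system-view
[Sysname] interface twenty-fivegige 1/0/1
[Sysname-Twenty-FiveGigE1/0/1] lldp management-address arp-learning vlan 10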
Display and maintenance commands for LLDP
Execute display commands in any view.
· Display local LLDP information.
display lldp local-information [ global | interface interface-type interface-number ]
· Display the information contained in the LLDP TLVs sent from neighboring devices.
display lldp neighbor-information [ [ [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ] [ verbose ] ] | list [ system-name system-name ] ]
· Display LLDP statistics.
display lldp statistics [ global | [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ] ]
· Display LLDP status of a port.
display lldp status [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ]
· Display types of advertisable optional LLDP TLVs.
display lldp tlv-config [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ]
· Clear LLDP statistics on ports.
reset lldp statistics [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ]
LLDP configuration examples
Example: Configuring basic LLDP functions
Network configuration
As shown in Figure 5, enable LLDP globally on Switch A and Switch B to perform the following tasks:
· Monitor the link between Switch A and Switch B on the NMS.
· Monitor the link between Switch A and the MED device on the NMS.
Procedure
1. Configure Switch A:
# Enable LLDP globally.
<SwitchA> system-view
[SwitchA] lldp global enable
# Enable LLDP on Twenty-FiveGigE 1/0/1. By default, LLDP is enabled on ports.
[SwitchA] interface twenty-fivegige 1/0/1
[SwitchA-Twenty-FiveGigE1/0/1] lldp enable
# Set the LLDP operating mode to Rx on Twenty-FiveGigE 1/0/1.
[SwitchA-Twenty-FiveGigE1/0/1] lldp admin-status rx
[SwitchA-Twenty-FiveGigE1/0/1] quit
# Enable LLDP on Twenty-FiveGigE 1/0/2. By default, LLDP is enabled on ports.
[SwitchA] interface twenty-fivegige 1/0/2
[SwitchA-Twenty-FiveGigE1/0/2] lldp enable
# Set the LLDP operating mode to Rx on Twenty-FiveGigE 1/0/2.
[SwitchA-Twenty-FiveGigE1/0/2] lldp admin-status rx
[SwitchA-Twenty-FiveGigE1/0/2] quit
2. Configure Switch B:
# Enable LLDP globally.
<SwitchB> system-view
[SwitchB] lldp global enable
# Enable LLDP on Twenty-FiveGigE 1/0/1. By default, LLDP is enabled on ports.
[SwitchB] interface twenty-fivegige 1/0/1
[SwitchB-Twenty-FiveGigE1/0/1] lldp enable
# Set the LLDP operating mode to Tx on Twenty-FiveGigE 1/0/1.
[SwitchB-Twenty-FiveGigE1/0/1] lldp admin-status tx
[SwitchB-Twenty-FiveGigE1/0/1] quit
Verifying the configuration
# Verify the following items:
· Twenty-FiveGigE 1/0/1 of Switch A connects to a MED device.
· Twenty-FiveGigE 1/0/2 of Switch A connects to a non-MED device.
· Both ports operate in Rx mode, and they can receive LLDP frames but cannot send LLDP frames.
[SwitchA] display lldp status
Global status of LLDP: Enable
Bridge mode of LLDP: customer-bridge
The current number of LLDP neighbors: 2
The current number of CDP neighbors: 0
LLDP neighbor information last changed time: 0 days, 0 hours, 4 minutes, 40 seconds
Transmit interval : 30s
Fast transmit interval : 1s
Transmit credit max : 5
Hold multiplier : 4
Reinit delay : 2s
Trap interval : 30s
Fast start times : 4
LLDP status information of port 1 [Twenty-FiveGigE1/0/1]:
LLDP agent nearest-bridge:
Port status of LLDP : Enable
Admin status : Rx_Only
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 1
Number of MED neighbors : 1
Number of CDP neighbors : 0
Number of sent optional TLV : 21
Number of received unknown TLV : 0
LLDP agent nearest-customer:
Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 16
Number of received unknown TLV : 0
LLDP status information of port 2 [Twenty-FiveGigE1/0/2]:
LLDP agent nearest-bridge:
Port status of LLDP : Enable
Admin status : Rx_Only
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 1
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 21
Number of received unknown TLV : 3
LLDP agent nearest-nontpmr:
Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 1
Number of received unknown TLV : 0
LLDP agent nearest-customer:
Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 16
Number of received unknown TLV : 0
# Remove the link between Switch A and Switch B.
# Verify that Twenty-FiveGigE 1/0/2 of Switch A does not connect to any neighboring devices.
[SwitchA] display lldp status
Global status of LLDP: Enable
The current number of LLDP neighbors: 1
The current number of CDP neighbors: 0
LLDP neighbor information last changed time: 0 days, 0 hours, 5 minutes, 20 seconds
Transmit interval : 30s
Fast transmit interval : 1s
Transmit credit max : 5
Hold multiplier : 4
Reinit delay : 2s
Trap interval : 30s
Fast start times : 4
LLDP status information of port 1 [Twenty-FiveGigE1/0/1]:
LLDP agent nearest-bridge:
Port status of LLDP : Enable
Admin status : Rx_Only
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 1
Number of MED neighbors : 1
Number of CDP neighbors : 0
Number of sent optional TLV : 0
Number of received unknown TLV : 5
LLDP agent nearest-nontpmr:
Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 1
Number of received unknown TLV : 0
LLDP status information of port 2 [Twenty-FiveGigE1/0/2]:
LLDP agent nearest-bridge:
Port status of LLDP : Enable
Admin status : Rx_Only
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 0
Number of received unknown TLV : 0
LLDP agent nearest-nontpmr:
Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 1
Number of received unknown TLV : 0
LLDP agent nearest-customer:
Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 16
Number of received unknown TLV : 0
Configuring DCBX
About DCBX
Data Center Ethernet (DCE), also known as Converged Enhanced Ethernet (CEE), is an enhancement and expansion of traditional Ethernet local area networks for use in data centers. DCE uses the Data Center Bridging Exchange Protocol (DCBX) to negotiate and remotely configure the bridge capability of network elements.
DCBX versions
DCBX has the following self-adaptable versions:
· DCB Capability Exchange Protocol Specification Rev 1.00.
· DCB Capability Exchange Protocol Base Specification Rev 1.01.
· IEEE Std 802.1Qaz-2011 (Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks-Amendment 18: Enhanced Transmission Selection for Bandwidth Sharing Between Traffic Classes).
DCBX functions
DCBX offers the following functions:
· Discovers the peer devices' capabilities and determines whether devices at both ends support these capabilities.
· Detects configuration errors on peer devices.
· Remotely configures the peer device if the peer device accepts the configuration.
NOTE: H3C devices support only the remote configuration feature.
DCBX application scenario
Figure 6 DCBX application scenario
DCBX enables lossless packet transmission on DCE networks.
As shown in Figure 6, DCBX applies to an RoCE-based data center network, and operates on an access switch. DCBX enables the switch to control the server or storage adapter, and simplifies the configuration and guarantees configuration consistency. DCBX extends LLDP by using the IEEE 802.1 organizationally specific TLVs (DCBX TLVs) to transmit DCBX data, including:
· In DCBX Rev 1.00 and DCBX Rev 1.01:
¡ Application Protocol (APP).
¡ Enhanced Transmission Selection (ETS).
¡ Priority-based Flow Control (PFC).
· In IEEE Std 802.1Qaz-2011:
¡ ETS Configuration.
¡ ETS Recommendation.
¡ PFC.
¡ APP.
H3C devices can send these types of DCBX information to a server or storage adapter. However, H3C devices cannot accept these types of DCBX information.
Protocols and standards
· DCB Capability Exchange Protocol Specification Rev 1.00
· DCB Capability Exchange Protocol Base Specification Rev 1.01
DCBX tasks at a glance
To configure DCBX, perform the following tasks:
1. Enabling LLDP and DCBX TLV advertising
2. Setting the DCBX version
3. Configuring APP parameters
4. Configuring ETS parameters:
a. Configuring the 802.1p-to-local priority mapping
b. Configuring group-based WRR queuing
5. Configuring PFC parameters
Enabling LLDP and DCBX TLV advertising
Restrictions and guidelines
To enable the device to advertise APP, ETS, and PFC data through an interface, perform the following tasks:
· Enable LLDP globally.
· Enable LLDP and DCBX TLV advertising on the interface.
Procedure
1. Enter system view.
system-view
2. Enable LLDP globally.
lldp global enable
By default:
¡ If the device is started with the software default settings, LLDP is disabled globally.
¡ If the device is started with the factory default settings, LLDP is enabled globally.
For more information about device startup with software or factory default settings, see configuration file management in Fundamentals Configuration Guide.
3. Enter Layer 2 Ethernet interface view.
interface interface-type interface-number
4. Enable LLDP on the interface.
lldp enable
By default, LLDP is enabled on an interface.
5. Enable the interface to advertise DCBX TLVs.
lldp tlv-enable dot1-tlv dcbx
By default, DCBX TLV advertisement is disabled on an interface.
Setting the DCBX version
Restrictions and guidelines
When you set the DCBX version, follow these restrictions and guidelines:
· For DCBX to work correctly, configure the same DCBX version on the local port and peer port. As a best practice, configure the highest version supported on both ends. The versions, from highest to lowest, are IEEE Std 802.1Qaz-2011, DCBX Rev 1.01, and DCBX Rev 1.00.
· After the configuration, LLDP frames sent by the local port carry information about the configured DCBX version. The local port and peer port do not negotiate the DCBX version.
· When the DCBX version is autonegotiated, IEEE Std 802.1Qaz-2011 is preferred.
Procedure
1. Enter system view.
system-view
2. Enter Layer 2 Ethernet interface view.
interface interface-type interface-number
3. Set the DCBX version.
dcbx version { rev100 | rev101 | standard }
By default, the DCBX version is not configured. It is autonegotiated by the local port and peer port.
Configuring APP parameters
About this task
The device negotiates with the server adapter by using the APP parameters to achieve the following purposes:
· Control the 802.1p priority values of the protocol packets that the server adapter sends.
· Identify traffic based on the 802.1p priority values.
Restrictions and guidelines
When you configure APP parameters, follow these restrictions and guidelines:
· A Layer 2 ACL identifies application protocol packets by frame type.
· An IPv4 advanced ACL identifies application protocol packets by TCP/UDP port number.
· DCBX Rev 1.00 identifies application protocol packets only by frame type and advertises only TLVs with frame type 0x8906 (FCoE). The S6826 and S9826 switch series do not support FCoE.
· DCBX Rev 1.01 has the following attributes:
¡ Supports identifying application protocol packets by both frame type and TCP/UDP port number.
¡ Does not restrict the frame type or TCP/UDP port number for advertising TLVs.
¡ Can advertise up to 77 TLVs according to the remaining length of the current packet.
· In a QoS policy, you can configure multiple class-behavior associations. A packet might be configured with multiple 802.1p priority marking or mapping actions, and the one configured first takes effect.
Procedure
1. Enter system view.
system-view
2. Create an ACL and enter its view.
¡ Create a Layer 2 ACL and configure a rule for the ACL.
acl mac { acl-number | name acl-name } [ match-order { auto | config } ]
rule [ rule-id ] permit type protocol-type ffff
¡ Create an IPv4 advanced ACL and configure a rule for the ACL.
acl advanced { acl-number | name acl-name } [ match-order { auto | config } ]
rule [ rule-id ] permit { tcp | udp } destination-port eq port
DCBX Rev 1.00 supports only Layer 2 ACLs. DCBX Rev 1.01 and IEEE Std 802.1Qaz-2011 support both Layer 2 ACLs and IPv4 advanced ACLs.
3. Return to system view.
quit
4. Configure a class:
a. Create a class, specify the operator of the class as OR, and enter class view.
traffic classifier classifier-name operator or
b. Use the previously configured ACL as the match criterion of the class.
if-match acl acl-number
c. Return to system view.
quit
5. Configure a traffic behavior:
a. Create a traffic behavior and enter its view.
traffic behavior behavior-name
b. Configure the behavior to mark packets with an 802.1p priority.
remark dot1p 8021p
c. Return to system view.
quit
6. Configure a QoS policy:
a. Create a QoS policy and enter its view.
qos policy policy-name
b. Associate the class with the traffic behavior in the QoS policy, and apply the association to DCBX.
classifier classifier-name behavior behavior-name mode dcbx
c. Return to system view.
quit
7. Apply the QoS policy.
Choose one option as needed:
¡ Apply the QoS policy to the outgoing traffic of all interfaces.
qos apply policy policy-name global outbound
¡ Apply the QoS policy to the outgoing traffic of a Layer 2 Ethernet interface.
interface interface-type interface-number
qos apply policy policy-name outbound
The configuration in system view applies to all interfaces. The configuration in Layer 2 Ethernet interface view applies only to the Layer 2 Ethernet interface.
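The following is a minimal sketch of an APP configuration, assuming DCBX Rev 1.01 or IEEE Std 802.1Qaz-2011 and RoCEv2 traffic identified by UDP destination port 4791 (the ACL number, class, behavior, policy, and interface names are illustrative). The QoS policy marks the matching packets with 802.1p priority 3 and applies the class-behavior association to DCBX on the interface connected to the server adapter:
<Sysname> system-view
[Sysname] acl advanced 3000
[Sysname-acl-ipv4-adv-3000] rule permit udp destination-port eq 4791
[Sysname-acl-ipv4-adv-3000] quit
[Sysname] traffic classifier roce operator or
[Sysname-classifier-roce] if-match acl 3000
[Sysname-classifier-roce] quit
[Sysname] traffic behavior roce
[Sysname-behavior-roce] remark dot1p 3
[Sysname-behavior-roce] quit
[Sysname] qos policy roce
[Sysname-qospolicy-roce] classifier roce behavior roce mode dcbx
[Sysname-qospolicy-roce] quit
[Sysname] interface hundredgige 1/0/29
[Sysname-HundredGigE1/0/29] qos apply policy roce outbound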
Configuring ETS parameters
About ETS parameters
ETS provides committed bandwidth. To avoid packet loss caused by congestion, the device performs the following operations:
· Uses ETS parameters to negotiate with the server adapter.
· Controls the server adapter's transmission speed of the specified type of traffic.
· Guarantees that the transmission speed is within the committed bandwidth of the interface.
Restrictions and guidelines
To configure ETS parameters, perform the following tasks:
1. Configure the 802.1p-to-local priority mapping by using either of the following methods:
¡ MQC method.
¡ Priority mapping table method.
If you configure the 802.1p-to-local priority mapping in both methods, the configuration made in the MQC method applies. For information about the QoS commands for configuring 802.1p-to-local priority mapping, see ACL and QoS Command Reference.
2. Configure group-based WRR queuing to allocate bandwidth.
For information about the WRR queuing configuration commands, see ACL and QoS Command Reference.
Configuring the 802.1p-to-local priority mapping
Configuring the 802.1p-to-local priority mapping in the MQC method
1. Enter system view.
system-view
2. Create a traffic class, specify the operator of the class as OR, and enter class view.
traffic classifier classifier-name operator or
3. Configure the class to match packets with the specified service provider network 802.1p priority values.
if-match service-dot1p 8021p-list
By default, no match criterion is configured for the class to match packets.
4. Return to system view.
quit
5. Create a traffic behavior and enter traffic behavior view.
traffic behavior behavior-name
6. Configure the behavior to mark packets with the specified local precedence value.
remark local-precedence local-precedence
By default, no local precedence marking action is configured.
7. Return to system view.
quit
8. Create a QoS policy and enter its view.
qos policy policy-name
9. Associate the class with the traffic behavior in the QoS policy, and apply the association to DCBX.
classifier classifier-name behavior behavior-name mode dcbx
By default, no class-behavior associations exist.
10. Return to system view.
quit
11. Apply the QoS policy.
Choose one option as needed:
¡ Apply the QoS policy to the outgoing traffic of all interfaces.
qos apply policy policy-name global outbound
¡ Apply the QoS policy to the outgoing traffic of a Layer 2 Ethernet interface.
interface interface-type interface-number
qos apply policy policy-name outbound
The configuration in system view applies to all interfaces. The configuration in Layer 2 Ethernet interface view applies only to the Layer 2 Ethernet interface.
Configuring the 802.1p-to-local priority mapping in the priority mapping table method
1. Enter system view.
system-view
2. Enter 802.1p-to-local priority mapping table view for the outgoing traffic.
qos map-table dot1p-lp
3. Configure the priority mapping table to map the specified 802.1p priority values to a local precedence value.
import import-value-list export export-value
For information about the default priority mapping tables, see ACL and QoS Configuration Guide.
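For example, the following commands (the priority values are illustrative) map 802.1p priority 3 to local precedence 3:
<Sysname> system-view
[Sysname] qos map-table dot1p-lp
[Sysname-maptbl-dot1p-lp] import 3 export 3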
Configuring group-based WRR queuing
1. Enter system view.
system-view
2. Enter Layer 2 Ethernet interface view.
interface interface-type interface-number
3. Enable WRR queuing.
qos wrr byte-count
By default, an interface uses the WRR queue scheduling algorithm.
4. Configure a queue.
Choose one option as needed:
¡ Add a queue to WRR priority group 1 and configure the scheduling weight for the queue.
qos wrr queue-id group 1 byte-count schedule-value
¡ Configure a queue to use SP queuing.
qos wrr queue-id group sp
Configuring PFC parameters
About this task
To prevent packets with an 802.1p priority value from being dropped, enable PFC for the 802.1p priority value. This feature reduces the sending rate of packets carrying this priority when network congestion occurs.
The device uses PFC parameters to negotiate with the server adapter and to enable PFC for the specified 802.1p priorities on the server adapter.
For more information about PFC commands, see Interface Command Reference.
Procedure
1. Enter system view.
system-view
2. Enter Layer 2 Ethernet interface view.
interface interface-type interface-number
3. Enable PFC in auto mode on the Ethernet interface.
priority-flow-control auto
By default, PFC is disabled.
To advertise the PFC data, you must enable PFC in auto mode.
4. Enable PFC for the specified 802.1p priorities.
priority-flow-control no-drop dot1p dot1p-list
By default, PFC is disabled for all 802.1p priorities.
DCBX configuration examples
Example: Configuring DCBX
Network configuration
As shown in Figure 7, HundredGigE 1/0/29 of the access switch (Switch A) connects to the RDMA adapter of the data center server (DC server).
Configure Switch A to implement lossless RoCE frame transmission to DC server.
NOTE: In this example, both Switch A and the DC server support DCBX Rev 1.01.
Procedure
1. Enable LLDP and DCBX TLV advertising:
# Enable LLDP globally.
<SwitchA> system-view
[SwitchA] lldp global enable
# Enable LLDP and DCBX TLV advertising on HundredGigE 1/0/29.
[SwitchA] interface hundredgige 1/0/29
[SwitchA-HundredGigE1/0/29] lldp enable
[SwitchA-HundredGigE1/0/29] lldp tlv-enable dot1-tlv dcbx
2. Set the DCBX version to Rev. 1.01 on HundredGigE 1/0/29.
[SwitchA-HundredGigE1/0/29] dcbx version rev101
[SwitchA-HundredGigE1/0/29] quit
3. Enable byte-count WRR queuing on HundredGigE 1/0/29, and configure queue 3 on the interface to use SP queuing.
[SwitchA] interface hundredgige 1/0/29
[SwitchA-HundredGigE1/0/29] qos wrr byte-count
[SwitchA-HundredGigE1/0/29] qos wrr 3 group sp
4. Configure PFC:
# Enable PFC in auto mode on HundredGigE 1/0/29.
[SwitchA-HundredGigE1/0/29] priority-flow-control auto
# Enable PFC for 802.1p priority 3.
[SwitchA-HundredGigE1/0/29] priority-flow-control no-drop dot1p 3
Verifying the configuration
# Verify that the DC server does not detect any packet loss during RoCE frame transmission.