- Table of Contents
- Related Documents
Title | Size | Download
--- | --- | ---
01-MPLS Basics Configuration | 447.79 KB |
Chapter 1 MPLS Basics Configuration
1.1.3 MPLS and Routing Protocols
1.2.1 Label Advertisement and Management
1.3.3 Fundamental Operation of LDP
1.4 Configuring MPLS Basic Capability
1.4.1 Configuration Prerequisites
1.5.1 Configuration Prerequisites
1.6.1 Configuration Prerequisites
1.7.1 Configuration Prerequisites
1.7.2 MPLS LDP Configuration Tasks
1.7.3 Configuring MPLS LDP Capability
1.7.4 Configuring Local LDP Session Parameters
1.7.5 Configuring Remote LDP Session Parameters
1.7.6 Configuring the Policy for Triggering LSP Establishment
1.7.7 Specifying the Label Processing Modes
1.7.8 Configuring LDP Loop Detection
1.7.9 Configuring LDP MD5 Authentication
1.8.1 Configuration Prerequisites
1.9 Configuring MPLS IP TTL Processing
1.9.1 Configuration Prerequisites
1.9.2 Configuring MPLS IP TTL Propagation
1.9.3 Specifying the Type of Path for ICMP Responses
1.10 Setting the Interval for Reporting Statistics
1.13 Displaying and Maintaining MPLS
1.13.2 Displaying MPLS Operation
1.13.3 Displaying MPLS LDP Operation
1.13.4 Clearing MPLS Statistics
1.14 MPLS Configuration Examples
1.14.1 LDP Session Configuration Example
1.14.2 Configuring LDP to Establish LSPs
Chapter 1 MPLS Basics Configuration
When performing MPLS basics configuration, go to these sections for information you are interested in:
- Configuring MPLS Basic Capability
- Configuring MPLS IP TTL Processing
- Setting the Interval for Reporting Statistics
- Displaying and Maintaining MPLS
& Note:
- A routing switch can also function as a router while running MPLS. The term “router” in this document refers to a router in the generic sense or a Layer 3 Ethernet switch running MPLS.
- For the S9500 Series Routing Switches, only the line processor units (LPUs) and VPLS service processor cards (SPCs) whose names contain a suffix such as C, CA, or CB support MPLS. To enable the MPLS VPN function on an S9500 switch, you need to configure an MPLS-capable LPU or VPLS SPC. You can identify a card name suffix by the silkscreen at the upper right corner of the front panel of the card. For example, the silkscreen of the LSB1P4G8CA0 card is P4G8CA and its suffix is CA.
1.1 MPLS Overview
Multiprotocol label switching (MPLS), originating in Internet Protocol version 4 (IPv4), was initially proposed to improve forwarding speed. Its core technology can be extended to multiple network protocols, such as Internet Protocol version 6 (IPv6), Internet packet exchange (IPX), and connectionless network protocol (CLNP). That is what the term multiprotocol means.
MPLS integrates both Layer 2 fast switching and Layer 3 routing and forwarding, satisfying the requirements of various new applications for network performance.
& Note:
For details about MPLS architecture, refer to RFC 3031 “Multiprotocol Label Switching Architecture”.
1.1.1 Basic Concepts of MPLS
I. FEC
As a forwarding technology based on classification, MPLS groups packets to be forwarded in the same manner into a class called the forwarding equivalence class (FEC). That is, packets of the same FEC are handled in the same way.
The classification of FECs is very flexible. It can be based on any combination of source address, destination address, source port, destination port, protocol type, and VPN. For example, in traditional IP forwarding using longest match, all packets to the same destination belong to the same FEC.
II. Label
A label is a short, fixed-length identifier for identifying a FEC. A FEC may correspond to multiple labels in scenarios where, for example, load sharing is required, but a label can represent only a single FEC.
A label is carried in the header of a packet. It does not contain any topology information and has only local significance.
A label is four octets, or 32 bits, in length. Figure 1-1 illustrates its format.
A label consists of four fields:
- Label: Label value, 20 bits in length. Used as the pointer for forwarding.
- Exp: For QoS, three bits in length.
- S: Flag indicating whether the label is at the bottom of the label stack, one bit in length. 1 indicates that the label is at the bottom of the label stack. This field is very useful when there are multiple levels of MPLS labels.
- TTL: Time to live (TTL) for the label, eight bits in length. This field has the same meaning as the TTL field of an IP packet.
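As a sketch of this layout, the following Python snippet packs and unpacks the four fields of a 32-bit label (the sample field values are arbitrary, chosen only for illustration):

```python
def pack_label(label, exp, s, ttl):
    """Pack the four MPLS label fields into one 32-bit word.
    Layout: Label (20 bits) | Exp (3 bits) | S (1 bit) | TTL (8 bits)."""
    assert 0 <= label < (1 << 20) and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def unpack_label(word):
    """Split a 32-bit word back into (label, exp, s, ttl)."""
    return (word >> 12) & 0xFFFFF, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF

# Example: label value 1000, Exp 0, bottom of stack, TTL 64
word = pack_label(1000, 0, 1, 64)
print(unpack_label(word))   # (1000, 0, 1, 64)
```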
Similar to the VPI/VCI in ATM and the DLCI in frame relay, an MPLS label functions as a connection identifier. If the link layer protocol has a label field like VPI/VCI in ATM or DLCI in frame relay, the MPLS label is encapsulated in that field. Otherwise, it is inserted between the data link layer header and the network layer header as a shim. As such, an MPLS label can be supported by any link layer protocol.
Figure 1-2 shows the place of a label in a packet.
Figure 1-2 Place of a label in a packet
& Note:
Currently, the S9500 series does not support the cell mode.
III. LSR
A label switching router (LSR) is a fundamental component of an MPLS network. All LSRs support MPLS.
IV. LSP
A label switched path (LSP) is the path along which packets of a FEC travel through an MPLS network. Along an LSP, two neighboring LSRs are called the upstream LSR and downstream LSR respectively. In Figure 1-3, R2 is the downstream LSR of R1, while R1 is the upstream LSR of R2.
An LSP is a unidirectional path from the ingress of the MPLS network to the egress. It functions like a virtual circuit in ATM or frame relay. Each node of an LSP is an LSR.
V. LDP
A label distribution protocol is the protocol that MPLS uses for control. It functions like a signaling protocol on a traditional network: it classifies FECs, distributes labels, and establishes and maintains LSPs.
MPLS supports multiple label distribution protocols of either of the following two types:
- Protocols dedicated to label distribution, such as the label distribution protocol (LDP) and constraint-based routing using LDP (CR-LDP).
- Existing protocols extended to support label distribution, such as the border gateway protocol (BGP) and the resource reservation protocol (RSVP).
In addition, you can configure static LSPs.
& Note:
- For information about CR-LDP and RSVP, refer to MPLS TE Configuration in the MPLS VPN Volume.
- For information about BGP, refer to BGP Configuration in the IP Routing Volume.
- Currently, the S9500 series does not support CR-LDP.
VI. LSP tunneling
MPLS supports LSP tunneling.
An LSR of an LSP and its downstream LSR are not necessarily on a path provided by the routing protocol. That is, MPLS allows an LSP to be established between two LSRs that are not on a path established by the routing protocol. In this case, the two LSRs are respectively the start point and end point of the LSP, and the LSP is an LSP tunnel, which does not use the traditional network layer encapsulation tunneling technology. For example, the LSP <R2→R21→R22→R3> in Figure 1-3 is a tunnel between R2 and R3.
If the path that a tunnel traverses is exactly the hop-by-hop route established by the routing protocol, the tunnel is called a hop-by-hop routed tunnel. Otherwise, the tunnel is called an explicitly routed tunnel.
VII. Multi-level label stack
MPLS allows a packet to carry a number of labels organized as a last-in first-out (LIFO) stack, which is called a label stack. A packet with a label stack can travel along more than one level of LSP tunnel. At the ingress and egress of each tunnel, these operations can be performed on the top of a stack: PUSH and POP.
MPLS has no limit to the depth of a label stack. For a label stack with a depth of m, the label at the bottom is of level 1, while the label at the top has a level of m. An unlabeled packet can be considered as a packet with an empty label stack, that is, a label stack whose depth is 0.
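The LIFO behavior of a label stack can be sketched in Python (the label values are invented; a real stack entry would be a full 32-bit word with the S bit set only on the bottom label):

```python
class LabelStack:
    """Minimal LIFO label stack: PUSH at a tunnel ingress, POP at a tunnel egress."""
    def __init__(self):
        self.labels = []          # bottom of the stack is index 0

    def push(self, label):        # entering a new level of LSP tunnel
        self.labels.append(label)

    def pop(self):                # leaving the current tunnel level
        return self.labels.pop()

    @property
    def depth(self):
        return len(self.labels)   # 0 means an unlabeled packet

stack = LabelStack()
stack.push(100)      # level-1 LSP label
stack.push(200)      # level-2 label pushed at a tunnel ingress
print(stack.depth)   # 2
print(stack.pop())   # 200 -- the tunnel label comes off first
```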
1.1.2 Architecture of MPLS
I. Structure of the MPLS network
As shown in Figure 1-4, the basic element of an MPLS network is the LSR. LSRs in the same routing or administrative domain form an MPLS domain.
In an MPLS domain, LSRs residing at the domain border to connect with other networks are label edge routers (LERs), while those within the MPLS domain are core LSRs. All core LSRs, which can be routers running MPLS or ATM-LSRs upgraded from ATM switches, use MPLS to communicate, while LERs interact with devices outside the domain that use traditional IP technologies.
Each packet entering an MPLS network is labeled on the ingress LER and then forwarded along an LSP to the egress LER. All the intermediate LSRs are called transit LSRs.
Figure 1-4 Structure of the MPLS network
The following describes how MPLS operates:
1) First, LDP and a traditional routing protocol (such as OSPF or IS-IS) work together on each LSR to establish the routing table and the label information base (LIB) for the intended FECs.
2) Upon receiving a packet, the ingress LER completes the Layer 3 functions, determines the FEC to which the packet belongs, labels the packet, and forwards the labeled packet to the next hop along the LSP.
3) After receiving a packet, each transit LSR looks up its label forwarding table for the next hop according to the label of the packet and forwards the packet to the next hop. None of the transit LSRs performs Layer 3 processing.
4) When the egress LER receives the packet, it removes the label from the packet and performs IP forwarding.
Obviously, MPLS is not a service or application, but actually a tunneling technology and a routing and switching technology platform combining label switching with Layer 3 routing. This platform supports multiple upper layer protocols and services, as well as secure transmission of information to a certain degree.
II. Structure of an LSR
Figure 1-5 Structure of an LSR
As shown in Figure 1-5, an LSR consists of two components:
- Control plane: Implements label distribution and routing, establishes the LFIB, and establishes and tears down LSPs.
- Forwarding plane: Forwards packets according to the LFIB.
An LER forwards both labeled packets and IP packets on the forwarding plane and therefore uses both the LFIB and the FIB. An ordinary LSR only needs to forward labeled packets and therefore uses only the LFIB.
1.1.3 MPLS and Routing Protocols
When establishing an LSP hop by hop, LDP uses the information in the routing tables of the LSRs along the path to determine the next hop. The information in the routing tables is provided by routing protocols such as IGPs and BGP. LDP only uses the routing information indirectly; it has no direct association with routing protocols.
On the other hand, existing protocols such as BGP and RSVP can be extended to support label distribution.
In MPLS applications, it may be necessary to extend some routing protocols. For example, MPLS-based VPN applications require that BGP be extended to propagate VPN routing information, and MPLS-based traffic engineering (TE) requires that OSPF or IS-IS be extended to carry link state information.
1.1.4 Applications of MPLS
By integrating both Layer 2 fast switching and Layer 3 routing and forwarding, MPLS features improved route lookup speed. However, with the development of application specific integrated circuit (ASIC) technology, route lookup speed is no longer the bottleneck hindering network development, which makes the advantage of MPLS in forwarding speed less prominent.
Nonetheless, MPLS can easily implement the seamless integration between IP networks and Layer 2 networks of ATM, frame relay, and the like, and offer better solutions to quality of service (QoS), traffic engineering (TE), and virtual private network (VPN) applications thanks to the following advantages.
I. MPLS-based VPN
Traditional VPNs depend on tunneling protocols such as GRE, L2TP, and PPTP to transport data between private networks across public networks, while an LSP is itself a tunnel over public networks. Therefore, implementing VPNs with MPLS has natural advantages.
MPLS-based VPN connects geographically different branches of a private network to form a united network by using LSPs. MPLS-based VPN also supports the interconnection between VPNs.
Figure 1-6 shows the basic structure of an MPLS-based VPN. Two of the fundamental components are customer edge device (CE) and service provider edge router (PE). A CE can be a router, switch, or host. All PEs are on the backbone network.
PE is responsible for managing VPN users, establishing LSP connections between PEs, and allocating routes among different branches of the same VPN. Route allocation among PEs is usually implemented by LDP or extended BGP.
MPLS-based VPN supports IP address multiplexing between branches and interconnection between VPNs. Compared with a traditional route, a VPN route requires the branch and VPN identification information. Therefore, it is necessary to extend BGP to carry VPN routing information.
II. MPLS-based TE
MPLS-based TE and the Diff-Serv feature allow not only high network utilization, but different levels of services based on traffic precedence, providing voice and video streams with services of low delay, low packet loss, and stable bandwidth guarantee.
Since TE is difficult to implement across an entire network, the Diff-Serv model is often adopted in practical network deployments.
The Diff-Serv model maps a service to a certain service class at the network edge according to the QoS requirements of the service. The DS field (derived from the ToS field) in the IP packet header identifies the service class. Each node in the backbone network then applies the preset service policies according to this field to ensure the corresponding QoS.
The QoS classification in Diff-Serv is similar to the MPLS label distribution. In fact, the MPLS-based Diff-Serv is implemented by integrating the DS distribution into the MPLS label distribution.
1.2 MPLS Configuration Basics
& Note:
Currently, the device supports the graceful restart (GR) feature of LDP. For details about GR, refer to GR Configuration in System Volume.
1.2.1 Label Advertisement and Management
In MPLS, the decision to assign a particular label to a particular FEC is made by the downstream LSR. The downstream LSR informs the upstream LSR of the assignment. That is, labels are advertised in the upstream direction.
I. Label advertisement mode
Two label advertisement modes are available:
- Downstream on demand (DoD): In this mode, a downstream LSR binds a label to a particular FEC and advertises the binding only when it receives a label request from its upstream LSR.
- Downstream unsolicited (DU): In this mode, a downstream LSR binds a label to a particular FEC and advertises the binding without waiting for any label request from an upstream LSR.
An upstream LSR and its downstream LSR must use the same label advertisement mode; otherwise, no LSP can be established normally. For more information, refer to LDP Label Distribution.
II. Label distribution control mode
There are two label distribution control modes:
- Independent: In this mode, an LSR can advertise label bindings upstream at any time. The drawback of this mode is that an LSR may have advertised a label binding for a FEC to its upstream LSR before it receives the corresponding binding from its downstream LSR.
- Ordered: In this mode, an LSR can send a label binding message for a FEC upstream only when it has received a label binding message for the FEC from its next hop, or when the LSR itself is the egress node of the FEC.
III. Label retention mode
Label retention mode dictates how to process a label to FEC binding that is received by an LSR but not useful at the moment.
There are two label retention modes:
- Liberal: In this mode, an LSR keeps any received label to FEC binding, regardless of whether the binding is from its next hop for the FEC or not.
- Conservative: In this mode, an LSR keeps only the label to FEC bindings that are from its next hops for the FECs.
In liberal mode, an LSR can adapt to route changes quickly; in conservative mode, an LSR has fewer label to FEC bindings to advertise and keep.
The conservative label retention mode is usually used together with the DoD mode on LSRs with limited label space.
IV. Basic concepts for label switching
- Next hop label forwarding entry (NHLFE): Describes the next hop and the operation to be performed on the label, which can be Push or Swap.
- FEC to NHLFE map (FTN): Mapping of a FEC to an NHLFE at the ingress node.
- Incoming label map (ILM): Mapping of each incoming label to a set of NHLFEs. The operations performed for an incoming label include Null and Pop.
V. Label switching process
Each packet is classified into a certain FEC at the ingress LER. Packets of the same FEC travel along the same path in the MPLS domain, that is, the same LSP. For each incoming packet, an LSR examines the label, uses the ILM to map the label to an NHLFE, replaces the old label with a new label, and then forwards the labeled packet to the next hop.
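As an illustration of this process, the sketch below uses invented FTN and ILM tables; the label values, the FEC, and the router names R2 to R4 are all hypothetical:

```python
# Ingress: the FTN maps a FEC to an NHLFE (label operation, out label, next hop).
ftn = {"10.1.0.0/16": ("push", 100, "R2")}

# Transit: the ILM maps an incoming label to an NHLFE.
ilm = {100: ("swap", 200, "R3"),
       200: ("pop", None, "R4")}   # label popped before the last hop

def forward(packet):
    """Carry a packet from the ingress across the transit LSRs by label operations."""
    op, label, nhop = ftn[packet["fec"]]
    packet["label"] = label                  # ingress pushes the first label
    path = [nhop]
    while packet["label"] is not None:
        op, new_label, nhop = ilm[packet["label"]]
        packet["label"] = new_label          # swap to a new label, or pop to None
        path.append(nhop)
    return path

print(forward({"fec": "10.1.0.0/16"}))   # ['R2', 'R3', 'R4']
```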
1.2.2 PHP
As described in Architecture of MPLS, each transit LSR on an MPLS network forwards an incoming packet based on the label of the packet, while the egress LER removes the label from the packet and forwards the packet based on the network layer destination address.
In fact, on a relatively simple MPLS application network, the label of a packet is useless for the egress, which only needs to forward the packet based on the network layer destination address. In this case, the penultimate hop popping (PHP) feature can pop the label at the penultimate node, relieving the egress of the label operation burden and improving the packet processing capability of the MPLS network.
1.2.3 TTL Processing in MPLS
MPLS TTL processing involves two aspects: TTL propagation and ICMP response path.
I. IP TTL propagation
An MPLS label contains an 8-bit long TTL field, which has the same meaning as that of an IP packet.
According to RFC 3031 “Multiprotocol Label Switching Architecture”, when an LSR labels a packet, it copies the TTL value of the original IP packet or the upper level label to the TTL field of the newly added label. When an LSR forwards a labeled packet, it decrements the TTL value of the label at the stack top by 1. When an LSR pops a label, it copies the TTL value of the label at the stack top back to the TTL field of the IP packet or lower level label.
TTL can be used not only to prevent routing loops, but to implement the tracert function:
- With IP TTL propagation enabled at the ingress, whenever a packet passes a hop along the LSP, its IP TTL is decremented by 1. Therefore, the result of tracert reflects the path along which the packet has traveled.
- With IP TTL propagation disabled at the ingress, the IP TTL of a packet is not decremented when the packet passes a hop, and the result of tracert does not show the hops within the MPLS backbone, as if the ingress and egress were directly connected.
Caution:
- Within an MPLS domain, TTL propagation always occurs between the labels of a multi-level label stack.
- The TTL value of a locally originated packet is always copied, regardless of whether IP TTL propagation is enabled. This ensures that the local administrator can use tracert for network testing.
- For network security, the structure of the MPLS backbone may need to be hidden in an MPLS VPN application. In this case, TTL propagation is not allowed for private network packets at the ingress.
II. ICMP response
On an MPLS VPN, P routers cannot route VPN packets carried by MPLS. When the TTL of an MPLS packet expires, an ICMP response will be generated and transported along the LSP until it reaches the destination router of the LSP, where it is forwarded by IP routing. Such processing increases the network traffic and the packet forwarding delay.
& Note:
For description and configuration of P routers, refer to MPLS L3VPN Configuration and MPLS L2VPN Configuration in the MPLS VPN Volume.
For an MPLS packet with only one level of label, the ICMP response message travels along the IP route when the TTL expires.
1.2.4 Inspecting an MPLS LSP
In MPLS, the control plane is responsible for establishing LSPs. However, it cannot detect whether an LSP fails to forward data, which makes network maintenance difficult.
MPLS LSP ping and traceroute provide a mechanism for detecting errors in LSP and locating nodes with failure in time. Similar to IP ping and traceroute, MPLS LSP ping and traceroute use MPLS echo requests and MPLS echo replies to check the availability of LSPs. The MPLS echo request message carries FEC information to be detected, and is sent along the LSP like other data packets of the same FEC. Thus, the LSP can be checked.
- MPLS LSP ping is a tool for checking the validity and availability of an LSP. It uses messages called MPLS echo requests. In a ping operation, an MPLS echo request is forwarded along the LSP to the egress, where the control plane determines whether the LSR itself is the egress of the FEC and responds with an MPLS echo reply. When the ping initiator receives the reply, the LSP is considered available for data forwarding.
- MPLS LSP traceroute is a tool for locating LSP errors. By sending MPLS echo requests to the control plane of each transit LSR, it can determine whether each LSR is really a transit node on the LSP.
& Note:
When an MPLS echo request reaches the egress, the destination address in the IP header is set to an address on 127.0.0.0/8 (loopback address of the LSR) and the TTL is set to 1, so as to prevent further forwarding of the request.
1.3 LDP Overview
1.3.1 LDP Basic Concepts
LDP defines the messages used in label distribution and the related procedures.
Using LDP, LSRs can map network layer routing information directly to data link layer switched paths and thereby establish LSPs. LSPs can be established between both neighboring LSRs and LSRs that are not directly connected, making label switching possible at all transit nodes on the network.
& Note:
For a detailed description of LDP, refer to RFC 3036 “LDP Specification”.
I. LDP peer
Two LSRs with an LDP session established between them and using LDP to exchange label to FEC bindings are called LDP peers, each of which obtains the label to FEC bindings of its peer over the LDP session between them.
II. LDP session
LDP sessions are used to exchange messages for label binding and releasing.
LDP sessions come in two categories:
- Local LDP session: Established between two directly connected LSRs.
- Remote LDP session: Established between two indirectly connected LSRs.
III. LDP message type
There are four types of LDP messages:
- Discovery message: Used to declare and maintain the presence of an LSR on a network.
- Session message: Used to establish, maintain, and terminate sessions between LDP peers.
- Advertisement message: Used to create, alter, or remove label to FEC bindings.
- Notification message: Used to provide advisory information and signal errors.
For reliable transport of LDP messages, TCP is used for LDP session messages, advertisement messages, and notification messages, while UDP is used only for discovery messages.
IV. Label space and LDP identifier
A scope of labels that can be assigned to LDP peers is called a label space. A label space can be per interface or per platform. A per interface label space is interface-specific, while a per platform label space is for an entire LSR.
An LDP identifier identifies a label space of an LSR. It is a six-byte value in the format of <LSR ID>:<label space ID>, where the LSR ID is four bytes long and the label space ID is two bytes long. A label space ID of 0 indicates a per platform label space, while a non-zero label space ID indicates a per interface label space.
& Note:
Currently, only per platform label space is supported.
1.3.2 LDP Label Distribution
Figure 1-7 illustrates how LDP distributes labels.
In Figure 1-7, B is the upstream LSR of C on LSP 1.
As described previously, there are two label advertisement modes. The main difference between them is whether the downstream advertises the bindings unsolicitedly or on demand.
The following details the advertisement process for each of the two modes.
I. DoD mode
In DoD mode, an upstream LSR sends a label request message containing the description of a FEC to its downstream LSR. The downstream LSR assigns a label to the FEC, encapsulates the binding in a label mapping message, and sends the message back to the upstream LSR.
When the downstream LSR responds with the label binding depends on the label distribution control mode it uses:
- In ordered mode, an LSR responds to its upstream LSR with a label binding only after it receives one from its downstream LSR.
- In independent mode, an LSR responds to its upstream LSR with a label binding immediately, regardless of whether it has received one from its downstream LSR.
Usually, an upstream LSR selects its downstream LSR based on the information in its routing table. In Figure 1-7, all LSRs on LSP 1 work in ordered mode, while LSR F on LSP 2 works in independent mode.
II. DU mode
In DU mode, a downstream LSR advertises label binding information to its upstream LSR unsolicitedly after the LDP session is established, while the upstream LSR keeps the label binding information and processes the information based on its routing table information.
1.3.3 Fundamental Operation of LDP
LDP goes through four phases in operation: Discovery, Session establishment and maintenance, LSP establishment and maintenance, and Session termination.
I. Discovery
In this phase, an LSR that wants to establish a session sends Hello messages to its neighboring LSRs periodically, announcing its presence. In this way, LSRs can automatically discover their peers without manual configuration.
LDP provides two discovery mechanisms:
- Basic discovery mechanism
The basic discovery mechanism is used to discover local LDP peers, that is, LSRs directly connected at the link layer, and to further establish local LDP sessions.
Using this mechanism, an LSR periodically sends LDP link Hellos as UDP packets out of an interface to the multicast address 224.0.0.2, known as “all routers on this subnet”. An LDP link Hello carries the LDP identifier for the interface and some other information. Receipt of an LDP link Hello on an interface indicates that a potential LDP peer is connected to the interface at the link layer.
- Extended discovery mechanism
The extended discovery mechanism is used to discover remote LDP peers, that is, LSRs not directly connected at link layer, and to further establish remote LDP sessions.
Using this mechanism, an LSR periodically sends LDP targeted Hellos as UDP packets to a given IP address.
An LDP targeted Hello message carries information about the LDP identifier of a given LSR and some other information. Receipt of an LDP targeted Hello message on an LSR indicates that a potential LDP peer is connected to the LSR at network layer.
At the end of the discovery phase, Hello adjacency is established between LSRs, and LDP is ready to initiate session establishment.
II. Session establishment and maintenance
In this phase, LSRs go through two steps to establish sessions between them:
1) Establishing transport layer connections (that is, TCP connections) between them.
2) Initializing sessions and negotiating session parameters such as the LDP version, label distribution mode, timers, and label spaces.
After establishing sessions between them, LSRs send Hello messages and Keepalive messages to maintain those sessions.
III. LSP establishment and maintenance
Establishing an LSP involves binding labels to FECs and notifying adjacent LSRs of the bindings. This is implemented through LDP. The following are the primary steps when LDP works in DU mode and ordered mode:
1) When the network topology changes and an LER finds in its routing table a new destination address that does not belong to any existing FEC, the LER creates a new FEC for the destination address.
2) If the LER has upstream LSRs and has at least one free label, it assigns a label to the FEC and sends the label binding information to the upstream LSRs.
3) Upon receiving the label binding information, an upstream LSR records the binding. Then, it checks whether the source LSR of the binding information is the next hop of the FEC. If yes, it adds an entry to its LFIB, assigns a label to the FEC, and sends the new label binding information to its own upstream LSRs.
4) When the ingress LER receives the label binding information, it adds an entry in its LFIB. Thus, an LSP is established for the FEC, and packets of the FEC can be label switched along the LSP.
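In DU plus ordered mode, bindings thus propagate from the egress back toward the ingress. The steps above can be sketched over a hypothetical linear path (the label values, starting at 100, are invented):

```python
def distribute_labels(lsrs):
    """Ordered, DU-mode label distribution along a linear path ingress..egress.

    The egress assigns a label first; each upstream LSR records the binding
    from its next hop, installs an LFIB entry, and only then advertises its
    own binding further upstream. Returns lsr -> (in_label, out_label)."""
    lfib = {}
    next_label = iter(range(100, 100 + len(lsrs)))
    downstream_label = None          # the egress has no outgoing label
    for lsr in reversed(lsrs):       # advertisement travels upstream
        in_label = None if lsr == lsrs[0] else next(next_label)
        lfib[lsr] = (in_label, downstream_label)
        downstream_label = in_label  # advertised to the next upstream LSR
    return lfib

lfib = distribute_labels(["ingress", "transit", "egress"])
print(lfib["ingress"])   # (None, 101) -- the ingress only pushes a label
```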
IV. Session termination
LDP checks Hello messages to determine adjacency and checks Keepalive messages to determine the integrity of sessions.
LDP uses different timers for adjacency and session maintenance:
- Hello timer: LDP peers periodically send Hello messages to indicate that they intend to keep the Hello adjacency. If the timer expires but an LSR still does not receive any new Hello message from its peer, it removes the Hello adjacency.
- Keepalive timer: LDP peers keep LDP sessions by periodically sending Keepalive messages over LDP session connections. If the timer expires but an LSR still does not receive any new Keepalive message, it closes the connection and terminates the LDP session.
1.3.4 LDP Loop Detection
LSPs established in MPLS may loop. The LDP loop detection mechanism can detect looping LSPs and prevent LDP messages from looping forever.
For loop detection to work, it must be configured on all LSRs. However, for an LDP session to be established, the LDP loop detection configurations on the two LDP peers do not need to be the same.
LDP loop detection can be in either of the following two modes:
I. Maximum hop count
A label request message or label mapping message can include information about its hop count, which increments by 1 for each hop traversed. When this value reaches the specified limit, LDP considers that a loop is present and the attempt to establish the LSP fails.
II. Path vector
A label request message or label mapping message can include path information in the form of a path vector list. When such a message reaches an LSR, the LSR checks the path vector list of the message to see whether its MPLS LSR ID is in the list. The attempt to establish an LSP fails if either of the following occurs:
- The MPLS LSR ID of the LSR is already in the path vector list.
- The hop count of the path reaches the specified limit.
If the MPLS LSR ID of the LSR is not in the path vector list, the LSR adds it to the list.
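The path vector checks can be sketched in Python (the LSR IDs and the hop limit of 32 are invented for illustration):

```python
def check_path_vector(path_vector, lsr_id, max_hops):
    """Apply the two loop tests, then add this LSR to the vector.

    Returns the updated path vector list, or raises ValueError
    if a loop is detected."""
    if lsr_id in path_vector:
        raise ValueError("loop: own LSR ID already in path vector")
    if len(path_vector) >= max_hops:
        raise ValueError("loop: path vector reached the hop limit")
    return path_vector + [lsr_id]

pv = check_path_vector(["1.1.1.9", "2.2.2.9"], "3.3.3.9", max_hops=32)
print(pv)   # ['1.1.1.9', '2.2.2.9', '3.3.3.9']
```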
1.4 Configuring MPLS Basic Capability
1.4.1 Configuration Prerequisites
Before configuring MPLS basic capability, be sure to complete these tasks:
- Configuring physical parameters on related interfaces
- Configuring link layer attributes on related interfaces
- Configuring IP addresses for related interfaces
- Configuring static routes or an IGP, ensuring that LSRs can reach each other
& Note:
MPLS basic capability can be configured on LSRs even when LSRs cannot reach each other. However, you must configure the mpls ldp transport-address command in this case.
1.4.2 Configuration Procedure
Follow these steps to configure MPLS basic capability:
To do… | Use the command… | Remarks
--- | --- | ---
Enter system view | system-view | —
Configure the MPLS LSR ID | mpls lsr-id lsr-id | Required. By default, no LSR ID is configured.
Enable MPLS for the current node and enter MPLS view | mpls | Required. Not enabled by default.
Exit to system view | quit | —
Enter interface view | interface interface-type interface-number | —
Enable MPLS for the interface | mpls | Required. Not enabled by default.
& Note:
An LSR ID uses the format of an IP address and is unique within an MPLS domain. You are recommended to use the IP address of a loopback interface on an LSR as the LSR ID.
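Putting the steps above together, a minimal configuration sketch (the LSR ID 1.1.1.9 and interface Vlan-interface 10 are assumed values for illustration):

```
<Sysname> system-view
[Sysname] mpls lsr-id 1.1.1.9
[Sysname] mpls
[Sysname-mpls] quit
[Sysname] interface Vlan-interface 10
[Sysname-Vlan-interface10] mpls
```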
1.5 Configuring PHP
You configure PHP on the egress and select the type of labels for the egress to distribute based on whether the penultimate hop supports PHP.
1.5.1 Configuration Prerequisites
Before configuring PHP, be sure to complete configuring MPLS basic capability on all LSRs.
1.5.2 Configuration Procedure
According to RFC 3032 “MPLS Label Stack Encoding”:
l A label value of 0 represents an IPv4 explicit null label and is valid only when it appears at the bottom of the label stack. It indicates that the label of the packet must be popped out on the node, and that the next node will perform IP forwarding.
l A label value of 3 represents an implicit null label and never appears in the label stack. When an LSR finds that it is assigned an implicit null label, it directly performs a pop operation, rather than substitutes the value for the original label at the stack top.
Follow these steps to configure PHP:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enter MPLS view | mpls | —
Specify the egress to support PHP and set the type of the label to be distributed to the penultimate hop | label advertise { explicit-null \| implicit-null \| non-null } | Optional. By default, an egress supports PHP and distributes an implicit null label to the penultimate hop. You must reset LDP sessions for the configuration to take effect.
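For example, a sketch that makes the egress distribute an explicit null label to the penultimate hop (remember to reset the LDP sessions afterwards for the change to take effect):
<Sysname> system-view
[Sysname] mpls
[Sysname-mpls] label advertise explicit-null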
1.6 Configuring a Static LSP
An LSP can be static or dynamic. A static LSP is manually configured, while a dynamic LSP is established by MPLS LDP.
For a static LSP to work, all LSRs along the LSP must be configured properly.
Static LSPs can be used in MPLS L2VPN.
& Note:
For configuration of MPLS L2VPN, refer to MPLS L2VPN Configuration in the MPLS VPN Volume.
1.6.1 Configuration Prerequisites
Before configuring a static LSP, be sure to complete these tasks:
l Determining the ingress, transit LSRs, and egress for the static LSP
l Configuring MPLS basic capability on all the LSRs
1.6.2 Configuration Procedure
Follow these steps to configure a static LSP:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Configure a static LSP taking the current LSR as the ingress | static-lsp ingress lsp-name destination dest-addr { mask \| mask-length } { nexthop next-hop-addr \| outgoing-interface interface-type interface-number } out-label out-label | Optional
Configure a static LSP taking the current LSR as a transit LSR | static-lsp transit lsp-name incoming-interface interface-type interface-number in-label in-label { nexthop next-hop-addr \| outgoing-interface interface-type interface-number } out-label out-label | Optional
Configure a static LSP taking the current LSR as the egress | static-lsp egress lsp-name incoming-interface interface-type interface-number in-label in-label | Optional
& Note:
l If you specify the next hop when configuring a static LSP, and the address of the next hop is in the routing table, you must also specify that next hop when configuring the corresponding static IP route.
l If you specify the outgoing interface for a static LSP, you must also specify the outgoing interface when configuring the static IP route.
l When configuring an ingress or transit LSR, the local public network address cannot be specified as the next hop.
l For information about configuring static IP route, refer to Static Routing Configuration in IP Routing Volume.
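As an illustrative sketch, the following configures the ingress of a static LSP; the LSP name lsp1, destination 3.3.3.9/32, next hop 10.1.1.2, and label value 100 are all hypothetical values chosen for the example:
<Sysname> system-view
[Sysname] static-lsp ingress lsp1 destination 3.3.3.9 32 nexthop 10.1.1.2 out-label 100
Matching static-lsp transit and static-lsp egress commands with consistent in/out labels would be needed on the downstream LSRs.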
1.7 Configuring MPLS LDP
1.7.1 Configuration Prerequisites
Before configuring LDP, be sure to complete the following tasks:
l Configuring MPLS basic capability
l Configuring a route between an LSR and the opposite LSR
1.7.2 MPLS LDP Configuration Tasks
Complete these tasks to configure LDP:
Task | Remarks
---|---
Configuring MPLS LDP Capability | Required
Configuring Local LDP Session Parameters | Optional
Configuring Remote LDP Session Parameters | Optional
Configuring the Policy for Triggering LSP Establishment | Optional
Specifying the Label Processing Modes | Optional
Configuring LDP Loop Detection | Optional
Configuring LDP MD5 Authentication | Optional
Enabling MTU Signaling | Optional
1.7.3 Configuring MPLS LDP Capability
Follow these steps to enable MPLS LDP capability:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enable LDP capability for the current node and enter MPLS LDP view | mpls ldp | Required. Not enabled by default.
Configure the LDP LSR ID | lsr-id lsr-id | Optional. MPLS LSR ID of the LSR by default.
Exit to system view | quit | —
Enter interface view | interface interface-type interface-number | —
Enable LDP capability on the interface | mpls ldp | Required. Not enabled by default.
& Note:
l Disabling LDP on an interface terminates all LDP sessions on the interface, causing all LSPs using the sessions to be deleted.
l Usually, the default value of the LDP LSR ID, namely the MPLS LSR ID, meets the requirement. In some networking schemes where VPN instances are deployed, such as MPLS L3VPN networking schemes, if the address space of a VPN and that of the public network overlap, you must configure an LDP LSR ID that is different from the MPLS LSR ID for the TCP connection to be established normally.
1.7.4 Configuring Local LDP Session Parameters
You can configure the transport address for local LDP sessions to be the IP address of the current interface or that of another specified interface.
Follow these steps to configure local LDP session parameters:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enter interface view | interface interface-type interface-number | —
Set the link Hello timer | mpls ldp timer hello-hold value | Optional. 15 seconds by default.
Set the link Keepalive timer | mpls ldp timer keepalive-hold value | Optional. 45 seconds by default.
Configure the LDP transport address | mpls ldp transport-address { interface-type interface-number \| interface } | Optional. MPLS LSR ID of the LSR by default.
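For instance, a sketch that shortens the link Hello timer and uses a loopback interface as the transport address; the timer value and the interface numbers are illustrative:
<Sysname> system-view
[Sysname] interface Vlan-interface 1
[Sysname-Vlan-interface1] mpls ldp timer hello-hold 10
[Sysname-Vlan-interface1] mpls ldp transport-address loopback 0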
1.7.5 Configuring Remote LDP Session Parameters
Configure the remote session transport address to be the IP address of a specified interface.
Follow these steps to configure remote LDP session parameters:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Create a remote peer entity and enter MPLS LDP remote peer view | mpls ldp remote-peer remote-peer-name | Required
Specify the LDP remote peer IP address | remote-ip ip-address | Required
Set the targeted Hello timer | mpls ldp timer hello-hold value | Optional. 45 seconds by default.
Set the targeted Keepalive timer | mpls ldp timer keepalive-hold value | Optional. 45 seconds by default.
Configure the transport address | mpls ldp transport-address interface-type interface-number | Optional. MPLS LSR ID of the LSR by default.
& Note:
In the current implementation, LDP itself does not send any label information through remote sessions; remote sessions are used only to transfer messages for L2VPNs. For applications of remote sessions, refer to MPLS L2VPN Configuration in the MPLS VPN Volume.
Caution:
l If Hello adjacency exists between two peers, no remote adjacency can be established between them. If remote adjacency exists between two peers, and local adjacency is then created for the remote peer, the remote adjacency is removed. That is, only one remote session or local session can exist between two LSRs, and the local session takes precedence over the remote session.
l The remote peer IP address to be configured must be different from all existing remote peer IP addresses. Otherwise, the configuration fails.
1.7.6 Configuring the Policy for Triggering LSP Establishment
You can specify the routes that are allowed to trigger the establishment of LSPs:
l All static and IGP routes.
l IGP routes that pass the filtering of an IP address prefix list.
An IP address prefix list affects only static routes and IGP routes.
Follow these steps to configure the policy for triggering LSP establishment:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enter MPLS view | mpls | —
Configure the LSP establishment triggering policy | lsp-trigger { all \| ip-prefix prefix-name } | Optional. By default, only local loopback addresses with 32-bit masks can trigger LDP to establish LSPs.
& Note:
l If you do not specify prefix-name (that is, all FECs are specified) while executing the lsp-trigger command, all static routes and IGP routing entries can trigger LDP to establish LSPs.
l For an LSP to be established, an exactly matched routing entry must exist on the LSR. With loopback addresses using 32-bit masks, only exactly matched host routing entries can trigger LDP to establish LSPs.
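As an illustrative sketch, the following lets only routes permitted by a prefix list trigger LSP establishment; the list name lsp-list, the index, and the prefix 3.3.3.9/32 are hypothetical, and the ip ip-prefix command is assumed to be available on the device:
<Sysname> system-view
[Sysname] ip ip-prefix lsp-list index 10 permit 3.3.3.9 32
[Sysname] mpls
[Sysname-mpls] lsp-trigger ip-prefix lsp-list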
1.7.7 Specifying the Label Processing Modes
Follow these steps to specify the LDP label advertisement mode, distribution control mode, and retention mode:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enter interface view | interface interface-type interface-number | —
Specify the label advertisement mode | mpls ldp advertisement { dod \| du } | Optional. DU by default.
Exit to system view | quit | —
Enable LDP capability and enter MPLS LDP view | mpls ldp | Required
Specify the label distribution control mode | label-distribution { independent \| ordered } | Optional. Ordered by default. You must reset LDP sessions for the configuration to take effect.
Specify the label retention mode | label-retention { liberal \| conservative } | Optional. Liberal by default. You must reset LDP sessions for the configuration to take effect.
Enable label readvertisement for DU mode | du-readvertise | Optional. Enabled by default.
Set the interval for label readvertisement in DU mode | du-readvertise timer value | Optional. 30 seconds by default.
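A sketch combining two of these settings, switching an interface to DoD advertisement and the LSR to independent distribution control (interface number illustrative; reset the LDP sessions afterwards for the mode changes to take effect):
<Sysname> system-view
[Sysname] interface Vlan-interface 1
[Sysname-Vlan-interface1] mpls ldp advertisement dod
[Sysname-Vlan-interface1] quit
[Sysname] mpls ldp
[Sysname-mpls-ldp] label-distribution independent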
1.7.8 Configuring LDP Loop Detection
Follow these steps to configure LDP loop detection:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enable LDP capability and enter MPLS LDP view | mpls ldp | Required
Enable loop detection | loop-detect | Required. Disabled by default.
Set the maximum hop count for loop detection | hops-count hop-number | Optional. 32 by default.
Set the path vector maximum hop count | path-vectors pv-number | Optional. 32 by default.
& Note:
Changing of the loop detection configuration does not affect the existing LSPs.
Caution:
You must enable loop detection before enabling LDP capability on any interface.
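For example, a sketch that enables loop detection before LDP is enabled on any interface, lowering the maximum hop count to an illustrative value of 16:
<Sysname> system-view
[Sysname] mpls ldp
[Sysname-mpls-ldp] loop-detect
[Sysname-mpls-ldp] hops-count 16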
1.7.9 Configuring LDP MD5 Authentication
To improve the security of LDP sessions, you can configure MD5 authentication for the underlying TCP connections.
Follow these steps to configure LDP MD5 authentication:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enable LDP capability and enter MPLS LDP view | mpls ldp | Required
Configure LDP MD5 authentication | md5-password { plain \| cipher } peer-lsr-id password | Required. Disabled by default.
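A sketch assuming the peer LSR ID is 2.2.2.9 and a plain-text password of your choosing; the same password must be configured on the peer for the session to come up:
<Sysname> system-view
[Sysname] mpls ldp
[Sysname-mpls-ldp] md5-password plain 2.2.2.9 mypassword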
1.7.10 Enabling MTU Signaling
For correct path MTU detection, an IP router needs to know the MTU of each connected link.
LDP can automatically calculate the minimum MTU of all interfaces on an LSP. At ingress, MPLS uses the calculated minimum MTU to determine the size of the MPLS forwarding packets, preventing a packet of a bigger size from being dropped by a transit LSR.
Follow these steps to enable MTU signaling:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enable LDP capability and enter MPLS LDP view | mpls ldp | Required
Enable MTU signaling | mtu-signalling | Optional. Enabled by default.
& Note:
Enabling/disabling MTU signaling will cause existing sessions to be reestablished and all LSPs of the sessions to be reestablished accordingly.
1.8 Configuring LDP Instances
LDP instances are for carrier’s carrier networking applications of MPLS L3VPN. You need to configure LDP capability for existing VPN instances.
Except for the command for the LDP GR feature, all commands available in MPLS LDP view can be configured in MPLS LDP VPN instance view.
1.8.1 Configuration Prerequisites
Before configuring LDP instances, be sure to complete these tasks:
l Configuring VPN instances
l Configuring MPLS basic capability
l Configuring MPLS LDP capability
1.8.2 Configuration Procedure
Usually, the default value of the LDP LSR ID, namely the MPLS LSR ID, meets the requirement. In some networking schemes where VPN instances are deployed, such as MPLS L3VPN networking schemes, if VPN address spaces and the public network address space overlap, you must configure an LDP LSR ID that is different from the MPLS LSR ID for TCP connections to be established normally.
Follow these steps to configure LDP instances:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enable LDP capability for a VPN instance and enter MPLS LDP VPN instance view | mpls ldp vpn-instance vpn-instance-name | Required
Configure the LSR ID for the VPN instance | lsr-id lsr-id | Optional. MPLS LSR ID of the LSR by default.
Caution:
l For LDP-enabled interfaces, configurations in MPLS LDP VPN instance view affect only interfaces bound to VPN instances, while configurations in MPLS LDP view do not affect any interfaces bound to VPN instances. When configuring the transport address of an LDP instance, you must use the IP address of the interface bound to the VPN instance.
l By default, LDP adjacencies on the private network are established using addresses of the LDP-enabled interfaces, while those on the public network are established using the LDP LSR ID.
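A sketch assuming a previously created VPN instance named vpn1 and an illustrative LSR ID of 4.4.4.9 that differs from the MPLS LSR ID (view prompts are illustrative):
<Sysname> system-view
[Sysname] mpls ldp vpn-instance vpn1
[Sysname-mpls-ldp-vpn-instance-vpn1] lsr-id 4.4.4.9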
1.9 Configuring MPLS IP TTL Processing
1.9.1 Configuration Prerequisites
Before configuring MPLS IP TTL propagation, be sure to complete this task:
l Configuring MPLS basic capability
1.9.2 Configuring MPLS IP TTL Propagation
Follow these steps to configure IP TTL propagation of MPLS:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enter MPLS view | mpls | —
Enable MPLS IP TTL propagation for either public network packets or VPN packets | ttl propagate { public \| vpn } | Optional. Enabled for only public network packets by default.
Caution:
l The ttl propagate command affects only the propagation of the IP TTL to the MPLS label. At the egress, the system uses the smaller one between the IP TTL and MPLS TTL as the TTL of the IP packet and decrements the value by 1.
l If you want to enable MPLS IP TTL propagation for VPN packets on one LSR, you are recommended to do so on all related PEs, so that tracert from any of those PEs gives the same result.
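For example, to also propagate the IP TTL into MPLS labels for VPN packets (apply this on all related PEs):
<Sysname> system-view
[Sysname] mpls
[Sysname-mpls] ttl propagate vpn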
1.9.3 Specifying the Type of Path for ICMP Responses
ICMP responses can use two kinds of paths: IP route and LSP.
For MPLS packets with one-level labels, you can configure MPLS to send back ICMP responses along IP routes instead of LSPs when the TTL expires.
In MPLS, an IP router generally maintains only public network routes, and MPLS packets with one-level labels carry public network payload. Therefore, you can configure this function.
In MPLS VPN, on ASBRs and SPEs in HoVPN applications, MPLS packets that carry VPN packets may have one-level labels. To view the forwarding path of VPN packets on public networks through tracert, you must:
l Configure the ttl propagate vpn command on all the related PEs to allow IP TTL propagation of VPN packets.
l Configure the undo ttl expiration pop command on the ASBRs and SPEs to assure that ICMP responses can be transported back through the original LSPs.
& Note:
l SPE refers to the superstratum PE or service provider-end PE.
l For details about HoVPN, refer to MPLS L3VPN Configuration in the MPLS VPN Volume.
Follow these steps to configure the path for ICMP responses:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enter MPLS view | mpls | —
Specify that the ICMP response be transported back along the IP route when the TTL of an MPLS packet expires | ttl expiration pop | Optional. Use either command. By default, the ICMP response message of an MPLS packet with a one-level label is transported back along the IP route.
Specify that the ICMP response be transported back along the LSP when the TTL of an MPLS packet expires | undo ttl expiration pop | Optional. Use either command.
& Note:
Execute the ttl expiration pop or the undo ttl expiration pop command according to the path along which you want ICMP responses to be returned.
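For instance, on an ASBR or SPE where ICMP responses should travel back along the original LSP:
<Sysname> system-view
[Sysname] mpls
[Sysname-mpls] undo ttl expiration pop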
1.10 Setting the Interval for Reporting Statistics
To view LSP statistics, you must first set the interval for reporting statistics.
Follow these steps to set the interval for reporting statistics:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enter MPLS view | mpls | —
Set the interval for reporting statistics | statistics interval interval-time | Required. 0 seconds by default, meaning that the system does not report any statistics.
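A sketch setting an illustrative reporting interval of 30 seconds so that LSP statistics become available for display:
<Sysname> system-view
[Sysname] mpls
[Sysname-mpls] statistics interval 30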
1.11 Inspecting an MPLS LSP
To do… | Use the command… | Remarks
---|---|---
Inspect the validity and reachability of an MPLS LSP | ping lsp … | Available in any view
Locate an MPLS LSP error | tracert lsp [ -a source-ip \| -exp exp-value \| -h ttl-value \| -r reply-mode \| -t time-out ] * { ipv4 dest-addr mask-length [ destination-ip-addr-header ] \| te interface-type interface-number } | Available in any view
1.12 Enabling MPLS Trap
Follow these steps to enable the MPLS trap function:
To do… | Use the command… | Remarks
---|---|---
Enter system view | system-view | —
Enable the MPLS trap function | snmp-agent trap enable mpls | Required. Disabled by default.
1.13 Displaying and Maintaining MPLS
1.13.1 Resetting LDP Sessions
If you change any parameter of an LDP session that is in the Up state, the session cannot continue to operate normally. In this case, to allow the involved LSRs to renegotiate parameters and establish a new session, use one of the following commands to reset the LDP session:
To do… | Use the command… | Remarks
---|---|---
Reset LDP sessions | reset mpls ldp [ all \| [ vpn-instance vpn-instance-name ] [ peer peer-id ] ] | Optional. Available in user view.
Reset LDP at the protocol level | graceful-restart mpls ldp | Optional
1.13.2 Displaying MPLS Operation
To do… | Use the command… | Remarks
---|---|---
Display information about a specified or all interfaces with MPLS enabled | display mpls interface [ interface-type interface-number ] [ verbose ] | Available in any view
Display the status of the specified or all labels | display mpls label { label-value1 [ to label-value2 ] \| all } | Available in any view
Display information about LSPs | display mpls lsp [ { incoming-interface \| outgoing-interface } interface-type interface-number ] [ in-label in-label-value ] [ out-label out-label-value ] [ { exclude \| include } dest-addr mask-length ] [ vpn-instance vpn-instance-name ] [ asbr \| protocol { bgp \| bgp-ipv6 \| crldp \| ldp \| rsvp-te \| static \| static-cr } ] [ egress \| ingress \| transit ] [ verbose ] | Available in any view
Display LSP statistics | display mpls lsp statistics | Available in any view
Display information about static LSPs | display mpls static-lsp [ lsp-name lsp-name ] [ { include \| exclude } dest-addr mask-length ] [ verbose ] | Available in any view
Display information about LSPs over specified or all routes for the public or a specified private network | display mpls route-state [ vpn-instance vpn-instance-name ] [ dest-addr mask-length ] | Available in any view
Display MPLS statistics for a specified or all LSPs | display mpls statistics lsp { all \| index \| lsp-name } | Available in any view
Display MPLS statistics for a specified or all interfaces | display mpls statistics interface { interface-type interface-number \| all } | Available in any view
1.13.3 Displaying MPLS LDP Operation
To do… | Use the command… | Remarks
---|---|---
Display information about LDP | display mpls ldp [ all [ verbose ] [ \| { begin \| exclude \| include } regular-expression ] ] | Available in any view
Display information about LDP interfaces | display mpls ldp interface [ [ vpn-instance vpn-instance-name ] [ interface-type interface-number ] \| all ] [ verbose ] [ \| { begin \| exclude \| include } regular-expression ] | Available in any view
Display information about LDP session peers | display mpls ldp peer [ [ vpn-instance vpn-instance-name [ verbose ] ] [ peer-id ] \| all [ verbose ] ] [ \| { begin \| exclude \| include } regular-expression ] | Available in any view
Display information about LDP remote peers | display mpls ldp remote-peer [ remote-name remote-peer-name ] [ \| { begin \| exclude \| include } regular-expression ] | Available in any view
Display information about specified or all LDP sessions | display mpls ldp session [ [ vpn-instance vpn-instance-name [ verbose ] ] [ peer-id ] \| all [ verbose ] ] [ \| { begin \| exclude \| include } regular-expression ] | Available in any view
Display information about LSPs established by LDP | display mpls ldp lsp [ all \| [ vpn-instance vpn-instance-name [ dest-addr mask-length ] ] ] [ \| { begin \| exclude \| include } regular-expression ] | Available in any view
Display information about CR-LSPs established by LDP | display mpls ldp cr-lsp [ vpn-instance vpn-instance-name [ lspid lsr-id lsp-id ] ] [ \| { begin \| exclude \| include } regular-expression ] | Available in any view
Display information about a specified LDP instance | display mpls ldp vpn-instance vpn-instance-name [ \| { begin \| exclude \| include } regular-expression ] | Available in any view
1.13.4 Clearing MPLS Statistics
To do… | Use the command… | Remarks
---|---|---
Clear MPLS statistics for a specified or all MPLS interfaces | reset mpls statistics interface { interface-type interface-number \| all } | Available in user view
Clear MPLS statistics for a specified or all LSPs | reset mpls statistics lsp { index \| all \| name lsp-name } | Available in user view
1.14 MPLS Configuration Examples
1.14.1 LDP Session Configuration Example
I. Network requirements
l Switch A, Switch B, and Switch C support MPLS and use OSPF as the IGP for the MPLS backbone.
l A local LDP session is required between Switch A and Switch B, and a second local LDP session is required between Switch B and Switch C.
l A remote LDP session is required between Switch A and Switch C.
II. Network diagram
Figure 1-8 Network diagram for LDP session configuration
III. Configuration procedure
1) Configure the IP addresses of the interfaces
Configure the IP addresses and masks of the interfaces including the VLAN and loopback interfaces as required in Figure 1-8.
2) Configure OSPF to advertise host routes of LSR ID
# Configure Switch A.
<Sysname> system-view
[Sysname] sysname SwitchA
[SwitchA] ospf
[SwitchA-ospf-1] area 0
[SwitchA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[SwitchA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[SwitchA-ospf-1-area-0.0.0.0] quit
[SwitchA-ospf-1] quit
# Configure Switch B.
<Sysname> system-view
[Sysname] sysname SwitchB
[SwitchB] ospf
[SwitchB-ospf-1] area 0
[SwitchB-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[SwitchB-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[SwitchB-ospf-1-area-0.0.0.0] network 20.1.1.0 0.0.0.255
[SwitchB-ospf-1-area-0.0.0.0] quit
[SwitchB-ospf-1] quit
# Configure Switch C.
<Sysname> system-view
[Sysname] sysname SwitchC
[SwitchC] ospf
[SwitchC-ospf-1] area 0
[SwitchC-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[SwitchC-ospf-1-area-0.0.0.0] network 20.1.1.0 0.0.0.255
[SwitchC-ospf-1-area-0.0.0.0] quit
[SwitchC-ospf-1] quit
After completing the above configurations, you will see that every switch has learned the route to the LSR ID of its peer when you execute the display ip routing-table command. The following takes Switch A as an example:
[SwitchA] display ip routing-table
Routing Tables: Public
Destinations : 9 Routes : 9
Destination/Mask Proto Pre Cost NextHop Interface
1.1.1.9/32 Direct 0 0 127.0.0.1 InLoop0
2.2.2.9/32 OSPF 10 1563 10.1.1.2 Vlan1
3.3.3.9/32 OSPF 10 3125 10.1.1.2 Vlan1
10.1.1.0/24 Direct 0 0 10.1.1.1 Vlan1
10.1.1.1/32 Direct 0 0 127.0.0.1 InLoop0
10.1.1.2/32 Direct 0 0 10.1.1.2 Vlan1
20.1.1.0/24 OSPF 10 3124 10.1.1.2 Vlan1
127.0.0.0/8 Direct 0 0 127.0.0.1 InLoop0
127.0.0.1/32 Direct 0 0 127.0.0.1 InLoop0
Now, OSPF adjacency should have been established between Switch A and Switch B and between Switch B and Switch C respectively. If you execute the display ospf peer verbose command, you will find that the interfaces are at the state of Full. The following takes Switch A as an example:
[SwitchA] display ospf peer verbose
OSPF Process 1 with Switch ID 1.1.1.9
Neighbors
Area 0.0.0.0 interface 10.1.1.1(Vlan-interface1)'s neighbors
Router ID: 2.2.2.9 Address: 10.1.1.2 GR State: Normal
State: Full Mode:Nbr is Master Priority: 1
DR: None BDR: None MTU: 1500
Dead timer due in 39 sec
Neighbor is up for 00:02:13
Authentication Sequence: [ 0 ]
3) Configure MPLS basic capability and enable LDP
# Configure Switch A.
[SwitchA] mpls lsr-id 1.1.1.9
[SwitchA] mpls
[SwitchA-mpls] quit
[SwitchA] mpls ldp
[SwitchA-mpls-ldp] quit
[SwitchA] interface Vlan-interface 1
[SwitchA-Vlan-interface1] mpls
[SwitchA-Vlan-interface1] mpls ldp
[SwitchA-Vlan-interface1] quit
# Configure Switch B.
[SwitchB] mpls lsr-id 2.2.2.9
[SwitchB] mpls
[SwitchB-mpls] quit
[SwitchB] mpls ldp
[SwitchB-mpls-ldp] quit
[SwitchB] interface Vlan-interface 1
[SwitchB-Vlan-interface1] mpls
[SwitchB-Vlan-interface1] mpls ldp
[SwitchB-Vlan-interface1] quit
[SwitchB] interface Vlan-interface 2
[SwitchB-Vlan-interface2] mpls
[SwitchB-Vlan-interface2] mpls ldp
[SwitchB-Vlan-interface2] quit
# Configure Switch C.
[SwitchC] mpls lsr-id 3.3.3.9
[SwitchC] mpls
[SwitchC-mpls] quit
[SwitchC] mpls ldp
[SwitchC-mpls-ldp] quit
[SwitchC] interface Vlan-interface 1
[SwitchC-Vlan-interface1] mpls
[SwitchC-Vlan-interface1] mpls ldp
[SwitchC-Vlan-interface1] quit
After completing the above configurations, the local session between Switch A and Switch B and that between Switch B and Switch C should be established successfully. You can execute the display mpls ldp session command to check whether the local sessions have been established, or the display mpls ldp peer command to check the peers. The following takes Switch A as an example:
[SwitchA] display mpls ldp session
LDP Session(s) in Public Network
Total number of sessions: 1
----------------------------------------------------------------
Peer-ID Status LAM SsnRole FT MD5 KA-Sent/Rcv
----------------------------------------------------------------
2.2.2.9:0 Operational DU Passive Off Off 5/5
----------------------------------------------------------------
LAM : Label Advertisement Mode FT : Fault Tolerance
[SwitchA] display mpls ldp peer
LDP Peer Information in Public network
Total number of peers: 1
-----------------------------------------------------------------
Peer-ID Transport-Address Discovery-Source
----------------------------------------------------------------
2.2.2.9:0 2.2.2.9 Vlan-interface1
----------------------------------------------------------------
4) Configure the remote LDP session
# Configure Switch A.
[SwitchA] mpls ldp remote-peer peerc
[SwitchA-mpls-ldp-remote-peerc] remote-ip 3.3.3.9
[SwitchA-mpls-ldp-remote-peerc] quit
# Configure Switch C.
[SwitchC] mpls ldp remote-peer peera
[SwitchC-mpls-ldp-remote-peera] remote-ip 1.1.1.9
[SwitchC-mpls-ldp-remote-peera] quit
After completing the above configurations, you will find by issuing the following commands on Switch A that the remote LDP session with Switch C is already established:
[SwitchA] display mpls ldp session
LDP Session(s) in Public Network
Total number of sessions: 2
----------------------------------------------------------------
Peer-ID Status LAM SsnRole FT MD5 KA-Sent/Rcv
----------------------------------------------------------------
2.2.2.9:0 Operational DU Passive Off Off 35/35
3.3.3.9:0 Operational DU Passive Off Off 8/8
----------------------------------------------------------------
LAM : Label Advertisement Mode FT : Fault Tolerance
[SwitchA] display mpls ldp peer
LDP Peer Information in Public network
Total number of peers: 2
-----------------------------------------------------------------
Peer-ID Transport-Address Discovery-Source
-----------------------------------------------------------------
2.2.2.9:0 2.2.2.9 Vlan-interface1
3.3.3.9:0 3.3.3.9 Remote Peer : peerc
-----------------------------------------------------------------
1.14.2 Configuring LDP to Establish LSPs
I. Network requirements
On the network in Figure 1-8, an LSP is required to be established between Switch A and Switch C. Check the validity and reachability of the LSP.
II. Network diagram
See Figure 1-8.
III. Configuration procedure
1) Configure LDP sessions. Refer to LDP Session Configuration Example.
2) Configure the LSP establishment triggering policy for LDP to establish LSPs.
& Note:
For LDP to establish an LSP, LDP sessions are required between switches along the LSP. In Figure 1-8, an LDP LSP can be established provided that local LDP sessions exist between Switch A and Switch B, and between Switch B and Switch C; no remote LDP session is required between Switch A and Switch C.
# Configure Switch A.
[SwitchA] mpls
[SwitchA-mpls] lsp-trigger all
[SwitchA-mpls] quit
# Configure Switch B.
[SwitchB] mpls
[SwitchB-mpls] lsp-trigger all
[SwitchB-mpls] quit
# Configure Switch C.
[SwitchC] mpls
[SwitchC-mpls] lsp-trigger all
[SwitchC-mpls] quit
After completing the above configurations, you will see the LSPs established when you execute the display mpls ldp lsp command. The following takes Switch A as an example:
[SwitchA] display mpls ldp lsp
LDP LSP Information
-------------------------------------------------------------------
SN DestAddress/Mask In/OutLabel Next-Hop In/Out-Interface
------------------------------------------------------------------
1 1.1.1.9/32 3/NULL 127.0.0.1 Vlan1/InLoop0
2 2.2.2.9/32 NULL/3 10.1.1.2 -------/Vlan1
3 3.3.3.9/32 NULL/1025 10.1.1.2 -------/Vlan1
4 20.1.1.0/24 NULL/3 10.1.1.2 -------/Vlan1
-------------------------------------------------------------------
A '*' before an LSP means the LSP is not established
A '*' before a Label means the USCB or DSCB is stale
# Check the validity and reachability of the LSP.
<SwitchA> ping lsp ipv4 3.3.3.9 32
LSP PING FEC: LDP IPV4 PREFIX 3.3.3.9/32 : 100 data bytes, press CTRL_C to break
Reply from 20.1.1.2: bytes=100 Sequence=1 time = 1 ms
Reply from 20.1.1.2: bytes=100 Sequence=2 time = 1 ms
Reply from 20.1.1.2: bytes=100 Sequence=3 time = 1 ms
Reply from 20.1.1.2: bytes=100 Sequence=4 time = 1 ms
Reply from 20.1.1.2: bytes=100 Sequence=5 time = 1 ms
--- FEC: LDP IPV4 PREFIX 3.3.3.9/32 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 1/1/1 ms
1.15 Troubleshooting MPLS
Symptom:
An interface with LDP enabled cannot establish an LDP session with its peer.
Analysis:
An LDP session is established in two steps: establishing the TCP connection; initializing the session and negotiating the session parameters. Failure in either of the steps will lead to the failure of LDP session establishment.
By default, the TCP connection uses the LDP LSR ID as the transport address. Therefore, if you do not configure the mpls ldp transport-address command, the route to the LSR ID address must be advertised to the peer.
Solution:
l Check whether the current LSR has obtained the route to the LSR ID of the peer by issuing the display ip routing-table command.
l Check whether the label advertisement mode configured locally is the same as that configured for the peer using the display mpls ldp interface command.
l If not, use the mpls ldp advertisement command to correct the configuration.