H3C S3600 Series Ethernet Switches Operation Manual-Release 1510(V1.04)

18-Multicast Operation

Table of Contents

Chapter 1 Multicast Overview
1.1 Multicast Overview
1.1.1 Information Transmission in the Unicast Mode
1.1.2 Information Transmission in the Broadcast Mode
1.1.3 Information Transmission in the Multicast Mode
1.1.4 Advantages and Applications of Multicast
1.2 Multicast Architecture
1.2.1 Multicast Address
1.2.2 IP Multicast Protocols
1.3 Forwarding Mechanism of Multicast Packets
Chapter 2 IGMP Snooping Configuration
2.1 Overview
2.1.1 IGMP Snooping Fundamentals
2.1.2 IGMP Snooping Implementation
2.2 IGMP Snooping Configuration
2.2.1 Enabling IGMP Snooping
2.2.2 Configuring Timers
2.2.3 Enabling IGMP Fast Leave
2.2.4 Configuring IGMP Snooping Filtering ACL
2.2.5 Configuring to Limit Number of Multicast Groups on a Port
2.2.6 Configuring IGMP Querier
2.2.7 Configuring Multicast VLAN
2.3 Displaying and Maintaining IGMP Snooping
2.4 IGMP Snooping Configuration Example
2.4.1 Example 1
2.4.2 Example 2
2.5 Troubleshooting IGMP Snooping
Chapter 3 Common Multicast Configuration
3.1 Overview
3.2 Common Multicast Configuration
3.2.1 Enabling Multicast and Configuring Limit on the Number of Route Entries
3.2.2 Configuring Suppression on the Multicast Source Port
3.2.3 Clearing the Related Multicast Entries
3.3 Displaying Common Multicast Configuration
Chapter 4 Multicast MAC Address Entry Configuration
4.1 Overview
4.2 Configuring a Multicast MAC Address Entry
4.3 Displaying and Maintaining Multicast MAC Address
Chapter 5 Unknown Multicast Packet Drop Configuration
5.1 Overview
5.2 Unknown Multicast Packet Drop Configuration
Chapter 6 IGMP Configuration
6.1 Overview
6.1.1 Introduction to IGMP
6.1.2 IGMP Version
6.1.3 Work Mechanism of IGMPv1
6.1.4 Enhancements Provided by IGMPv2
6.1.5 IGMP Proxy
6.2 IGMP Configuration
6.2.1 Configuring IGMP Version
6.2.2 Configuring IGMP Query Packets
6.2.3 Configuring IGMP Multicast Groups on the Interface
6.2.4 Configuring Router Ports to Join the Specified Multicast Group
6.2.5 Configuring IGMP Proxy
6.2.6 Removing the Joined IGMP Groups from the Interface
6.3 Displaying IGMP
Chapter 7 PIM Configuration
7.1 PIM Overview
7.1.1 Introduction to PIM-DM
7.1.2 Work Mechanism of PIM-DM
7.1.3 Introduction to PIM-SM
7.1.4 Work Mechanism of PIM-SM
7.2 Common PIM Configuration
7.2.1 Enabling PIM-DM (PIM-SM) on the Interface
7.2.2 Configuring the Interval of Sending Hello Packets
7.2.3 Configuring PIM Neighbors
7.2.4 Clearing the Related PIM Entries
7.3 PIM-DM Configuration
7.3.1 Configuring Filtering Policies for Multicast Source/Group
7.4 PIM-SM Configuration
7.4.1 Configuring Filtering Policies for Multicast Source/Group
7.4.2 Configuring BSR/RP
7.4.3 Configuring PIM-SM Domain Boundary
7.4.4 Filtering the Registration Packets from RP to DR
7.4.5 Configuring the Threshold for Switching from RPT to SPT
7.5 Displaying and Debugging PIM
7.6 PIM Configuration Example
7.6.1 PIM-DM Configuration Example
7.6.2 PIM-SM Configuration Example
7.7 Troubleshooting PIM
Chapter 8 MSDP Configuration
8.1 Overview
8.1.1 MSDP Working Mechanism
8.2 Configuring MSDP Basic Functions
8.2.1 Configuration Prerequisites
8.2.2 Configuring MSDP Basic Functions
8.3 Configuring Connection between MSDP Peers
8.3.1 Configuration Prerequisites
8.3.2 Configuring Description Information for MSDP Peers
8.3.3 Configuring Anycast RP Application
8.3.4 Configuring an MSDP Mesh Group
8.3.5 Configuring MSDP Peer Connection Control
8.4 Configuring SA Message Transmission
8.4.1 Configuration Prerequisites
8.4.2 Configuring the Transmission and Filtering of SA Request Messages
8.4.3 Configuring a Rule for Filtering the Multicast Sources of SA Messages
8.4.4 Configuring a Rule for Filtering Received and Forwarded SA Messages
8.4.5 Configuring SA Message Cache
8.5 Displaying and Maintaining MSDP Configuration
8.6 MSDP Configuration Example
8.6.1 Configuration Example of Anycast RP Application
8.7 Troubleshooting MSDP Configuration
8.7.1 MSDP Peer Always in the Down State
8.7.2 No SA Entry in the SA Cache of the Router

 


Chapter 1  Multicast Overview

 

Note:

l      Among S3600 series Ethernet switches, S3600-EI series support all the multicast protocols listed in this manual, while S3600-SI series only support IGMP Snooping.

l      When running IP multicast protocols, Ethernet switches also provide the functions of routers. In this manual, the term router refers not only to common routers but also to Layer 3 Ethernet switches running IP multicast protocols.

 

1.1  Multicast Overview

With the development of the Internet, more and more interactive services, such as data, voice, and video services, are running on networks. In addition, services that depend heavily on bandwidth and real-time data interaction, such as e-commerce, web conferencing, online auctions, video on demand (VoD), and tele-education, have emerged. These services place higher demands on information security, the legitimate use of paid services, and network bandwidth.

In the network, packets are sent in three modes: unicast, broadcast and multicast. The following sections describe and compare data interaction processes in unicast, broadcast, and multicast.

1.1.1  Information Transmission in the Unicast Mode

In unicast, the system establishes a separate data transmission channel for each user requiring the information and sends a separate copy of the information to each of them, as shown in Figure 1-1:

Figure 1-1 Information transmission in the unicast mode

Assume that users B, D, and E need this information. The source server establishes a transmission channel to the device of each of these users. Because the traffic transmitted over the network is in direct proportion to the number of users receiving the information, when a large number of users need the information, the server must send many copies of the same content. The limited bandwidth therefore becomes the bottleneck in information transmission, which shows that unicast is not suitable for the bulk transmission of information.

1.1.2  Information Transmission in the Broadcast Mode

In broadcast, the system transmits information to all users on a network. Any user on the network receives the information, regardless of whether the user needs it or not. Figure 1-2 shows information transmission in the broadcast mode.

Figure 1-2 Information transmission in the broadcast mode

Assume that users B, D, and E need the information. The source server broadcasts this information through routers, so users A and C on the network also receive it even though they do not need it.

As we can see from this transmission process, neither information security nor the legitimate use of paid services can be guaranteed. In addition, when only a small number of users on the network need the information, the utilization of network resources is very low and bandwidth is greatly wasted.

Therefore, broadcast is not suitable for transmitting data to specific users; moreover, it occupies a large amount of bandwidth.

1.1.3  Information Transmission in the Multicast Mode

As described in the previous sections, unicast is suitable for networks with sparsely distributed users, whereas broadcast is suitable for networks with densely distributed users. When the number of users requiring the information is uncertain, both unicast and broadcast are inefficient.

Multicast solves this problem. When some users on a network require specified information, the multicast information sender (namely, the multicast source) sends the information only once. With tree-type routes established for multicast data packets through a multicast routing protocol, the packets are duplicated and distributed at the nearest nodes, as shown in Figure 1-3:

 

Figure 1-3 Information transmission in the multicast mode

Assume that users B, D and E need the information. To transmit the information to the right users, it is necessary to group users B, D and E into a receiver set. The routers on the network duplicate and distribute the information based on the distribution of the receivers in this set. Finally, the information is correctly delivered to users B, D, and E.

The advantages of multicast over unicast are as follows:

l           No matter how many receivers exist, there is only one copy of the same multicast data flow on each link.

l           With the multicast mode used to transmit information, an increase in the number of users does not add noticeably to the network load.

The advantages of multicast over broadcast are as follows:

l           A multicast data flow is sent only to receivers that require the data.

l           Multicast does not waste network resources and makes rational use of bandwidth.

In the multicast mode, network components can be divided into the following roles:

l           An information sender is referred to as a multicast source.

l           Multiple receivers of the same information form a multicast group. A multicast group is not limited by physical location.

l           Each receiver receiving multicast information is a multicast group member.

l           A router providing multicast routing is a multicast router. The multicast router can be a member of one or multiple multicast groups, and it can also manage members of the multicast groups.

For a better understanding of the multicast concept, you can compare a multicast group to a TV channel. A TV station is a multicast source: it sends data to the channel. The audiences are the receivers. After turning on a TV set (a computer), they can select a channel to receive a program (namely, join a group) and then watch the program. Therefore, a multicast group is an agreement between the sender and the receivers, like the frequency of a channel.

 

  Caution:

A multicast source does not necessarily belong to a multicast group. A multicast source sends data to a multicast group, and it is not necessarily a receiver. Multiple multicast sources can send packets to the same multicast group at the same time.

 

There may be routers on the network that do not support multicast. In that case, a multicast router encapsulates multicast packets in unicast IP packets in tunnel mode and sends them through the non-multicast routers to the neighboring multicast routers. The neighboring multicast routers remove the unicast IP header and then continue to multicast the packets, thus avoiding major changes to the network structure.

1.1.4  Advantages and Applications of Multicast

I. Advantages of multicast

Advantages of multicast include:

l           Enhanced efficiency: Multicast decreases network traffic and reduces server load and CPU load.

l           Optimal performance: Multicast reduces redundant traffic.

l           Distributive application: Multicast makes multiple-point application possible.

II. Application of multicast

The multicast technology effectively addresses the issue of point-to-multipoint data transmission. By enabling high-efficiency point-to-multipoint data transmission over an IP network, multicast greatly saves network bandwidth and reduces network load.

Multicast provides the following applications:

l           Multimedia and streaming media applications, such as Web TV, Web radio, and real-time video/audio conferencing.

l           Communication for training and cooperative operations, such as remote education.

l           Database and financial applications (stock), and so on.

l           Any point-to-multipoint data application.

1.2  Multicast Architecture

The purpose of IP multicast is to transmit information from a multicast source to receivers in the multicast mode and to satisfy information requirements of receivers. You should be concerned about:

l           Host registration: What receivers reside on the network?

l           Technologies of discovering a multicast source: Which multicast source should the receivers receive information from?

l           Multicast addressing mechanism: Where should the multicast source send the information?

l           Multicast routing: How is information transported?

IP multicast is a kind of peer-to-peer service. Based on the protocol layer sequence from bottom to top, the multicast mechanism contains addressing mechanism, host registration, multicast routing, and multicast application, as shown in Figure 1-4:

Figure 1-4 Architecture of the multicast mechanism

The multicast addressing mechanism involves the planning of multicast addresses. Host registration and multicast routing are implemented based on the IP multicast protocol. Multicast application software is not described in this chapter.

l           Addressing mechanism: Information is sent from a multicast source to a group of receivers through multicast addresses.

l           Host registration: A receiving host joins and leaves a multicast group dynamically to implement membership registration.

l           Multicast routing: A router or switch establishes a packet distribution tree and transports packets from a multicast source to receivers.

l           Multicast application: A multicast source must support multicast applications, such as video conferencing. The TCP/IP protocol suite must support the function of sending and receiving multicast information.

1.2.1  Multicast Address

As receivers are multiple hosts in a multicast group, you should be concerned about the following questions:

l           What destination should the information source send the information to in the multicast mode?

l           How to select the destination address, that is, how does the information source know who the user is?

These questions are about multicast addressing. To enable the communication between the information source and members of a multicast group (a group of information receivers), network-layer multicast addresses, namely, IP multicast addresses must be provided. In addition, a technology must be available to map IP multicast addresses to link-layer MAC multicast addresses. The following sections describe these two types of multicast addresses:

I. IP multicast address

Internet Assigned Numbers Authority (IANA) categorizes IP addresses into five classes: A, B, C, D, and E. Unicast packets use Class A, B, and C IP addresses, depending on network scale. Class D IP addresses are used as destination addresses of multicast packets; a Class D address must never appear in the source IP address field of an IP packet. Class E IP addresses are reserved for future use.

In unicast data transport, a data packet is transported hop by hop from the source address to the destination address. In an IP multicast environment, the destination address of a packet is a multicast address identifying a multicast group. All the receivers join this group, and once they have joined, the data sent to the group address is transported to them. All the members in the group can receive the data packets. This group is a multicast group.

A multicast group has the following characteristics:

l           The membership of a group is dynamic. A host can join and leave a multicast group at any time.

l           A multicast group can be either permanent or temporary.

l           A multicast group whose addresses are assigned by IANA is a permanent multicast group. It is also called reserved multicast group.

Note that:

l           The IP addresses of a permanent multicast group keep unchanged, while the members of the group can be changed.

l           There can be any number of, or even zero, members in a permanent multicast group.

l           Those IP multicast addresses not assigned to permanent multicast groups can be used by temporary multicast groups.

Class D IP addresses range from 224.0.0.0 to 239.255.255.255. For details, see Table 1-1.

Table 1-1 Range and description of Class D IP addresses

224.0.0.0 to 224.0.0.255: Reserved multicast addresses (IP addresses for permanent multicast groups). The IP address 224.0.0.0 is reserved. The other IP addresses can be used by routing protocols.

224.0.1.0 to 231.255.255.255 and 233.0.0.0 to 238.255.255.255: Available any-source multicast (ASM) group addresses (IP addresses for temporary groups). They are valid for the entire network.

232.0.0.0 to 232.255.255.255: Available source-specific multicast (SSM) group addresses.

239.0.0.0 to 239.255.255.255: Administratively scoped (local management) multicast addresses, which are for specific local use only.

 

As specified by IANA, the IP addresses ranging from 224.0.0.0 to 224.0.0.255 are reserved for network protocols on local networks. The following table lists commonly used reserved IP multicast addresses:

Table 1-2 Reserved IP multicast addresses

Class D address range

Description

224.0.0.1

Address of all hosts

224.0.0.2

Address of all multicast routers

224.0.0.3

Unassigned

224.0.0.4

Distance vector multicast routing protocol (DVMRP) routers

224.0.0.5

Open shortest path first (OSPF) routers

224.0.0.6

Open shortest path first designated routers (OSPF DR)

224.0.0.7

Shared tree routers

224.0.0.8

Shared tree hosts

224.0.0.9

RIP-2 routers

224.0.0.11

Mobile agents

224.0.0.12

DHCP server/relay agent

224.0.0.13

All protocol independent multicast (PIM) routers

224.0.0.14

Resource reservation protocol (RSVP) encapsulation

224.0.0.15

All core-based tree (CBT) routers

224.0.0.16

Designated subnetwork bandwidth management (SBM)

224.0.0.17

All SBMs

224.0.0.18

Virtual router redundancy protocol (VRRP)

224.0.0.19 to 224.0.0.255

Other protocols

 

Note:

Just as the private address segment 10.0.0.0/8 is reserved for unicast, IANA has reserved the address range 239.0.0.0 to 239.255.255.255 for multicast. These are administratively scoped addresses. With administratively scoped addresses, you can flexibly define the scope of multicast domains and isolate IP addresses between different multicast domains, so that the same multicast address can be used in different multicast domains without causing collisions.

 

II. Ethernet multicast MAC address

When a unicast IP packet is transported in an Ethernet network, the destination MAC address is the MAC address of the receiver. When a multicast packet is transported in an Ethernet network, a multicast MAC address is used as the destination address because the destination is a group with an uncertain number of members.

As stipulated by IANA, the high-order 24 bits of a multicast MAC address are 0x01005e, while the low-order 23 bits of a MAC address are the low-order 23 bits of the multicast IP address. Figure 1-5 describes the mapping relationship:

Figure 1-5 Mapping relationship between multicast IP address and multicast MAC address

The high-order four bits of an IP multicast address are 1110, identifying it as a multicast address. Of the remaining 28 bits, only 23 are mapped to the MAC address, so five bits of the multicast IP address are lost. As a result, 32 different IP multicast addresses map to the same multicast MAC address.
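For example, the IP multicast address 224.0.1.1 maps to the MAC address 0100-5e00-0101: the low-order 23 bits of 224.0.1.1 are appended to the 0x01005e prefix. The address 225.0.1.1, which differs from 224.0.1.1 only in one of the five unmapped bits, maps to the same MAC address.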

1.2.2  IP Multicast Protocols

IP multicast protocols include the multicast group management protocol and the multicast routing protocol. Figure 1-6 describes the positions of the protocols related to multicast in the network.

Figure 1-6 Positions of protocols related to multicast

I. Multicast group management protocol

The Internet group management protocol (IGMP) is adopted between hosts and multicast routers. This protocol defines the mechanism for establishing and maintaining group membership between hosts and multicast routers.

II. Multicast routing protocols

A multicast routing protocol operates between multicast routers to establish and maintain multicast routes and to forward multicast packets accurately and effectively. A multicast route establishes a loop-free data transport path from a data source to multiple receivers. The task of a multicast routing protocol is to establish such a distribution tree structure. Multicast routers can establish the data transmission path (namely, the distribution tree) in many ways.

Like unicast routes, multicast routes come in intra-domain routes and inter-domain routes. Intra-domain multicast routes are quite mature now. Protocol independent multicast (PIM) is the most commonly used protocol currently. It can cooperate with any unicast routing protocol.

1.3  Forwarding Mechanism of Multicast Packets

In a multicast model, a multicast source host transports information to the host group, which is identified by the multicast group address in the destination address field of an IP data packet. Unlike a unicast model, a multicast model must forward data packets to multiple external interfaces so that all receiver sites can receive the packets. Therefore the forwarding process of multicast is more complicated than that of unicast.

To guarantee the transmission of multicast packets in the network, multicast packets must be forwarded based on unicast routing tables or on routing tables specially provided for multicast (such as an MBGP multicast routing table). In addition, to prevent duplicate delivery of the same information from different peers, routers must check the interfaces on which packets arrive. This check mechanism is the reverse path forwarding (RPF) check, which is the basis of multicast forwarding for most multicast routing protocols.

Based on the source address, a multicast router judges whether a multicast packet arrived on the correct interface; that is, the RPF check compares the interface on which the packet actually arrived with the interface on which it should arrive. If the router resides on a shortest path tree (SPT), the interface on which multicast packets should arrive points toward the multicast source. If the router resides on a rendezvous point tree (RPT), that interface points toward the rendezvous point (RP). When a multicast data packet reaches the router, the router forwards it based on the multicast forwarding entries if the RPF check succeeds; otherwise, the packet is dropped.

 


Chapter 2  IGMP Snooping Configuration

2.1  Overview

2.1.1  IGMP Snooping Fundamentals

Internet group management protocol snooping (IGMP Snooping) is a multicast control mechanism running on Layer 2 switches. It is used to manage and control multicast groups.

When the IGMP messages transferred from the hosts to the router pass through the Layer 2 switch, the switch uses IGMP Snooping to analyze and process the IGMP messages, as shown in Table 2-1.

Table 2-1 IGMP message processing on the switch

Received message type

Sender

Receiver

Switch processing

IGMP host report message

Host

Switch

Add the host to the corresponding multicast group.

IGMP leave message

Host

Switch

Remove the host from the multicast group.

 

By listening to IGMP messages, a switch establishes and maintains IP multicast address tables, according to which it forwards the multicast packets delivered from the router.

As shown in Figure 2-1, multicast packets are broadcast at Layer 2 when IGMP Snooping is disabled and multicast (not broadcast) at Layer 2 when IGMP Snooping is enabled.

Figure 2-1 Multicast packet transmission with or without IGMP Snooping enabled

2.1.2  IGMP Snooping Implementation

I. IGMP Snooping terminologies

Before going on, we first describe the following terms involved in IGMP Snooping:

l           Router port: the switch port directly connected to the multicast router.

l           Multicast member port: a switch port connected to a multicast group member (a host in a multicast group).

l           MAC multicast group: a multicast group identified by a MAC multicast address and maintained by the switch.

The following three timers are closely associated with IGMP snooping.

Table 2-2 IGMP Snooping timers

Timer

Setting

Packet normally received before timeout

Timeout action on the switch

Router port aging timer

Aging time of the router port

IGMP general query message

Consider that this port is not a router port any more.

Multicast member port aging timer

Aging time of the multicast member ports

IGMP message/PIM message/DVMRP probe message

Send an IGMP group-specific query message to the multicast member port.

Query response timer

Query response timeout time

IGMP report message

Remove the port from the member port list of the multicast group.

 

II. Layer 2 multicast with IGMP Snooping

The switch runs IGMP Snooping to listen to IGMP messages, based on which the multicast forwarding table is established.

Figure 2-2 IGMP Snooping implementation

To implement Layer 2 multicast, the switch processes the four types of IGMP messages it receives, as shown in Table 2-3.

Table 2-3 IGMP Snooping messages

Message

Sender

Receiver

Purpose

Switch action

IGMP general query message

Multicast router and multicast switch

Multicast member switch and host

Query if the multicast groups contain any member

Check if the message comes from the original router port

If yes, reset the aging timer of the router port

If not, notify the multicast router that a member is in a multicast group and start the aging timer for the router port

IGMP group-specific query message

Multicast router and multicast switch

Multicast member switch and host

Query if a specific IGMP multicast group contains any member

Send an IGMP group-specific query message to the IP multicast group being queried.

IGMP host report message

Host

Multicast router and multicast switch

Apply for joining a multicast group, or respond to an IGMP query message

Check if the IP multicast group has a corresponding MAC multicast group

If yes, check if the port exists in the MAC multicast group

If yes, add the IP multicast group address to the MAC multicast group table.

If not, add the port to the MAC multicast group, reset the aging timer of the port and check if the corresponding IP multicast group exists.

If yes, add the port to the IP multicast group.

If not, create an IP multicast group and add the port to it.

If not:

Create a MAC multicast group and notify the multicast router that a member is ready to join the multicast group.

Add the port to the MAC multicast group and start the aging timer of the port.

Add all ports in the VLAN owning this port to the forward port list of the MAC multicast group.

Add the port to the IP multicast group.

IGMP leave message

Host

Multicast router and multicast switch

Notify the multicast router and multicast switch that the host is leaving its multicast group.

The multicast router and multicast switch send IGMP group-specific query packets to the multicast group that the leaving host belongs to, to check whether the group still has any members, and start the corresponding query timer.

If no response is received from the port before the timer times out, the switch will check whether the port corresponds to a single MAC multicast group.

l      If yes, remove the corresponding MAC multicast group and IP multicast group

l      If no, remove only those entries that correspond to this port in the MAC multicast group, and remove the corresponding IP multicast group entries

If no response is received from the multicast group before the timer times out, notify the router to remove this multicast group node from the multicast tree

 

  Caution:

An IGMP-Snooping-enabled S3600 Ethernet switch judges whether the multicast group exists when it receives an IGMP leave packet sent by a host in a multicast group. If this multicast group does not exist, the switch will drop the IGMP leave packet instead of forwarding it.

 

2.2  IGMP Snooping Configuration

The following table lists all the IGMP Snooping configuration tasks:

Table 2-4 IGMP Snooping configuration tasks

Operation

Description

Related section

Enable IGMP Snooping

Required

Section 2.2.1  "Enabling IGMP Snooping"

Configure timers

Optional

Section 2.2.2  "Configuring Timers"

Enable IGMP fast leave

Optional

Section 2.2.3  "Enabling IGMP Fast Leave"

Configure IGMP Snooping filter

Optional

Section 2.2.4  "Configuring IGMP Snooping Filtering ACL"

Configure the number of the multicast groups a port can be added to

Optional

Section 2.2.5  "Configuring to Limit Number of Multicast Groups on a Port"

Configure IGMP Snooping queriers

Optional

Section 2.2.6  "Configuring IGMP Querier"

Configure multicast VLAN

Optional

Section 2.2.7  "Configuring Multicast VLAN"

 

2.2.1  Enabling IGMP Snooping

You can use the command here to enable IGMP Snooping so that it can establish and maintain MAC multicast group forwarding tables at Layer 2.

Table 2-5 Enable IGMP Snooping

Operation

Command

Description

Enter system view

system-view

Enable IGMP Snooping globally

igmp-snooping enable

Required

By default, IGMP Snooping is disabled globally.

Enter VLAN view

vlan vlan-id

Enable IGMP Snooping on the VLAN

igmp-snooping enable

Required

By default, IGMP Snooping is disabled on the VLAN.

 

  Caution:

l      Although both Layer 2 and Layer 3 multicast protocols can run on the same switch simultaneously, they cannot run simultaneously on a VLAN or its corresponding VLAN interface.

l      Before configuring IGMP Snooping in VLAN view, you must enable IGMP Snooping globally in system view. Otherwise, the IGMP Snooping feature cannot be enabled in VLAN view.

 

2.2.2  Configuring Timers

This configuration task is to manually configure the aging timer of the router port, the aging timer of the multicast member ports, and the query response timer.

l           If the switch receives no general IGMP query message from a router within the aging time of the router port, the switch removes the router port from the port member lists of all the multicast groups.

l           If the switch receives no IGMP host report message within the aging time of the member port, it sends an IGMP group-specific query message to the port.

Table 2-6 Configure timers

Operation

Command

Description

Enter system view

system-view

Configure the aging timer of the router port

igmp-snooping router-aging-time seconds

Optional

By default, the aging time of the router port is 105 seconds.

Configure the query response timer

igmp-snooping max-response-time seconds

Optional

By default, the query response timeout time is 10 seconds.

Configure the aging timer of the multicast member port

igmp-snooping host-aging-time seconds

Optional

By default, the aging time of multicast member ports is 260 seconds
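The following is a minimal configuration sketch based on the commands above; the timer values (300 seconds for the aging timers and 15 seconds for the query response timer) are illustrative only and assume that IGMP Snooping is already enabled.

# Configure the router port aging timer, the query response timer, and the multicast member port aging timer.

<H3C> system-view

[H3C] igmp-snooping router-aging-time 300

[H3C] igmp-snooping max-response-time 15

[H3C] igmp-snooping host-aging-time 300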

 

2.2.3  Enabling IGMP Fast Leave

Normally, when receiving an IGMP leave message, the switch does not immediately remove the port from the multicast group; instead, it sends an IGMP group-specific query message. If no response is received within a given period, it then removes the port from the multicast group.

If IGMP fast leave processing is enabled, the switch immediately removes the port from the multicast group when it receives an IGMP leave message. When a port has only one user, enabling IGMP fast leave processing on the port can save bandwidth.

I. Enable the IGMP fast leave in system view

Table 2-7 Enable the IGMP fast leave in system view

Operation

Command

Description

Enter system view

system-view

Enable the fast leave from the multicast groups of specific VLANs

igmp-snooping fast-leave [ vlan vlan-list ]

Required

By default, the fast leave from the multicast group for a port is disabled.

 

II. Enable the IGMP fast leave in Ethernet port view

Table 2-8 Enable the IGMP fast leave in Ethernet port view

Operation

Command

Description

Enter system view

system-view

Enter Ethernet port view

interface interface-type interface-number

Enable the fast leave from the multicast groups of specific VLANs for a port

igmp-snooping fast-leave [ vlan vlan-list ]

Required

By default, the fast leave from the multicast group for a port is disabled.

 

Note:

The configuration performed in system view applies to all ports in the specified VLANs, while the configuration performed in Ethernet port view applies only to that port in the specified VLANs.
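For illustration, the following sketch enables IGMP fast leave for VLAN 10 on port Ethernet 1/0/1 (the VLAN ID and port are assumed values):

# Enable IGMP fast leave in Ethernet port view for VLAN 10.

<H3C> system-view

[H3C] interface Ethernet 1/0/1

[H3C-Ethernet1/0/1] igmp-snooping fast-leave vlan 10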

 

2.2.4  Configuring IGMP Snooping Filtering ACL

You can configure multicast filtering ACLs on the switch ports connected to user ends so as to use the IGMP Snooping filter function to limit the multicast streams that the users can access. With this function, you can treat different VoD users in different ways by allowing them to access the multicast streams in different multicast groups.

In practice, when a user orders a multicast program, an IGMP report message is generated. When the message arrives at the switch, the switch examines the multicast filtering ACL configured on the access port to determine if the port can join the corresponding multicast group or not. If yes, it adds the port to the forward port list of the multicast group. If not, it drops the IGMP report message and does not forward the corresponding data stream to the port. In this way, you can control the multicast streams that users can access.

Make sure that ACL rules have been configured before configuring this feature.

Table 2-9 Configure IGMP Snooping filtering ACL

Operation

Command

Description

Enter system view

system-view

Enable IGMP Snooping filter

igmp-snooping group-policy acl-number [ vlan vlan-list ]

Required

l      You can configure the ACL to filter the IP addresses of corresponding multicast group.

l      By default, the multicast filtering feature is disabled.

Enter Ethernet port view

interface interface-type interface-number

Configure the multicast filtering feature on the port

igmp-snooping group-policy acl-number [ vlan vlan-list ]

Optional

l      You can configure the ACL to filter the IP addresses of corresponding multicast group.

l      By default, the multicast filtering feature is disabled.
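For illustration, assuming that basic ACL 2000 has already been configured to permit the allowed multicast group addresses (see the prerequisite above), the following sketch applies the filter to port Ethernet 1/0/1 in VLAN 10; the ACL number, port, and VLAN are assumed values:

# Apply the IGMP Snooping filtering ACL on the user-side port.

<H3C> system-view

[H3C] interface Ethernet 1/0/1

[H3C-Ethernet1/0/1] igmp-snooping group-policy 2000 vlan 10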

 

2.2.5  Configuring to Limit Number of Multicast Groups on a Port

By limiting the number of multicast groups that a switch port can join, you can prevent users from ordering an arbitrary number of multicast programs at the same time, thus controlling the bandwidth on the port.

Table 2-10 Configure to limit the number of multicast groups on a port

Operation

Command

Description

Enter system view

system-view

Enter Ethernet port view

interface interface-type interface-number

Limit the number of multicast groups on a port

igmp-snooping group-limit limit [ vlan vlan-list [ overflow-replace ] ]

Required

The number of multicast groups on a port is not limited by default.
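For illustration, the following sketch limits port Ethernet 1/0/1 to 10 multicast groups in VLAN 10 (the limit, port, and VLAN are assumed values):

# Limit the number of multicast groups the port can join.

<H3C> system-view

[H3C] interface Ethernet 1/0/1

[H3C-Ethernet1/0/1] igmp-snooping group-limit 10 vlan 10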

 

2.2.6  Configuring IGMP Querier

In an IGMP-enabled multicast network, a querier (a multicast router or Layer 3 multicast switch) is responsible for sending IGMP query packets.

However, a Layer 2 multicast switch does not support IGMP and therefore cannot act as a querier or send general query packets. By configuring the IGMP Snooping querier feature, you can enable the Layer 2 multicast switch to actively send general query packets at the data link layer, and thereby establish and maintain the multicast forwarding entries.

Additionally, you can configure the source address, maximum query response time, and query interval of the general query packets sent by the Layer 2 switch.

Table 2-11 Configure IGMP Snooping querier

Operation

Command

Description

Enter system view

system-view

Enable the IGMP Snooping feature in system view

igmp-snooping enable

Required

The IGMP Snooping feature is disabled by default.

Enter VLAN view

vlan vlan-id

Enable the IGMP Snooping feature in VLAN view

igmp-snooping enable

Required

By default, the IGMP Snooping feature is disabled.

Configure the IGMP Snooping querier feature

igmp-snooping querier

Required

The IGMP Snooping querier feature is disabled by default.

Configure the interval of sending general query packets

igmp-snooping query-interval seconds

Optional

By default, the interval of sending general query packets is 60 seconds.

Configure the source IP address to send general query packets

igmp-snooping general-query source-ip { current-interface | ip-address }

Optional

By default, the source IP address to send general query packets is 0.0.0.0.
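The following sketch, based on the commands above, configures the switch as an IGMP Snooping querier in VLAN 10; the VLAN ID, the 120-second query interval, and the use of the current interface address as the source IP address are illustrative assumptions.

# Enable IGMP Snooping globally and in VLAN 10, then configure the querier.

<H3C> system-view

[H3C] igmp-snooping enable

[H3C] vlan 10

[H3C-vlan10] igmp-snooping enable

[H3C-vlan10] igmp-snooping querier

[H3C-vlan10] igmp-snooping query-interval 120

[H3C-vlan10] igmp-snooping general-query source-ip current-interface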

 

2.2.7  Configuring Multicast VLAN

In the traditional multicast-on-demand mode, when users in different VLANs order the same multicast group, a separate copy of the multicast stream is sent to each of the VLANs. This mode wastes a lot of bandwidth.

By configuring a multicast VLAN, adding switch ports to the multicast VLAN and enabling IGMP Snooping, you can make users in different VLANs share the same multicast VLAN. This saves bandwidth since multicast streams are transmitted only within the multicast VLAN, and also guarantees security because the multicast VLAN is isolated from user VLANs.

Multicast VLAN is mainly used in Layer 2 switching, but you must make corresponding configuration on the Layer 3 switch.

Perform the following configuration to configure multicast VLAN.

Table 2-12 Configure multicast VLAN on Layer 3 switch

Operation

Command

Description

Enter system view

system-view

Create a multicast VLAN and enter VLAN view

vlan vlan-id

Create the multicast VLAN to be configured.

Exit VLAN view

quit

Create a multicast VLAN interface and enter VLAN interface view

interface Vlan-interface vlan-id

Enable IGMP

igmp enable

Required

By default, the IGMP feature is disabled.

Exit VLAN interface view

quit

Enter the view of the Ethernet port connected to the Layer 2 switch

interface interface-type interface-number

Define the port as a trunk or hybrid port

port link-type { trunk | hybrid }

Required

Specify the VLANs allowed to pass through the Ethernet port

port hybrid vlan vlan-id-list { tagged | untagged }

Required

The multicast VLAN defined on the Layer 2 switch must be included and set as tagged.

port trunk permit vlan vlan-list

 

Table 2-13 Configure multicast VLAN on Layer 2 switch

Operation

Command

Description

Enter system view

system-view

Enable IGMP Snooping globally

igmp-snooping enable

Required

Enter VLAN view

vlan vlan-id

vlan-id is a VLAN ID.

Enable IGMP Snooping on the VLAN

igmp-snooping enable

Required

By default, the IGMP Snooping feature is disabled

Enable multicast VLAN

service-type multicast

Required

Exit VLAN view

quit

Enter the view of the Ethernet port connected to the Layer 3 switch

interface interface-type interface-number

Define the port as a trunk or hybrid port

port link-type { trunk | hybrid }

Specify the VLANs allowed to pass through the Ethernet port

port hybrid vlan vlan-list { tagged | untagged }

The multicast VLAN must be included and set as tagged.

port trunk permit vlan vlan-list

Enter the view of the Ethernet port connected to a user device

interface interface-type interface-number

Define the port as a hybrid port

port link-type hybrid

Required

Specify the VLANs to be allowed to pass the port

port hybrid vlan vlan-id-list { tagged | untagged }

Required

The multicast VLAN must be included and set as untagged.

 

Note:

l      An isolate-user-VLAN cannot be configured as a multicast VLAN.

l      One port can belong to only one multicast VLAN.

l      The port connected to a user end can only be a hybrid port.

l      The multicast member port must be in the same VLAN with the router port. Otherwise, the multicast member port cannot receive multicast packets.

l      When a router port is added to a multicast VLAN, the router port must be set as a Trunk port or tagged Hybrid port. Otherwise, all the multicast member ports in this multicast VLAN cannot receive multicast packets.

l      When the multicast VLAN is set up, all IGMP host report messages are broadcast in the multicast VLAN only. For a multicast member port of a non-multicast VLAN, its VLAN interface cannot establish the corresponding Layer 2 multicast entry. Therefore, it is recommended that you delete the port from the multicast VLAN.

 

2.3  Displaying and Maintaining IGMP Snooping

After the configuration above, you can execute the display command in any view to verify the configuration by checking the displayed information.

You can execute the reset command in user view to clear the statistics information about IGMP Snooping.

Table 2-14 Display information about IGMP Snooping

Operation

Command

Description

Display the current IGMP Snooping configuration

display igmp-snooping configuration

You can execute the display commands in any view.

Display IGMP Snooping message statistics

display igmp-snooping statistics

Display IP and MAC multicast groups in one or all VLANs

display igmp-snooping group [ vlan vlanid ]

Clear IGMP Snooping statistics

reset igmp-snooping statistics

You can execute the reset command in user view.

 

2.4  IGMP Snooping Configuration Example

2.4.1  Example 1

Configure IGMP Snooping on a switch.

I. Network requirements

Connect the router port on the switch to the router, and connect non-router ports that belong to VLAN 10 to user PCs. Enable IGMP Snooping on the switch.

II. Network diagram

Figure 2-3 Network diagram for IGMP Snooping configuration

III. Configuration procedure

# Enable IGMP Snooping in system view.

<H3C> system-view

[H3C] igmp-snooping enable

# Enable IGMP Snooping on VLAN 10 where no Layer 3 multicast protocol is enabled.

[H3C] vlan 10

[H3C-vlan10] igmp-snooping enable

2.4.2  Example 2

Configure multicast VLAN on Layer 2 and Layer 3 switches.

I. Network requirements

The multicast source is Workstation. Switch A forwards the multicast data flows that the multicast source sends. The multicast data flows are forwarded by the Layer 2 switch Switch B to the end user PC1 and PC2.

Table 2-15 describes the network devices involved in this example and the configurations you should make on them.

Table 2-15 Network devices and their configurations

Device

Description

Switch A

Layer 3 switch

The interface IP address of VLAN 20 is 168.10.1.1. Ethernet1/0/1 is connected to the workstation and belongs to VLAN 20.

VLAN 10 is the multicast VLAN.

Ethernet1/0/5 belongs to VLAN 2, Ethernet1/0/6 belongs to VLAN 3, and Ethernet1/0/10 is connected to Switch B.

Switch B

Layer 2 switch

VLAN 2 contains Ethernet1/0/1 and VLAN 3 contains Ethernet1/0/2. The two ports are connected to PC1 and PC2, respectively.

Ethernet1/0/10 is connected to Switch A.

PC 1

User 1

PC1 is connected to Ethernet1/0/1 on Switch B.

PC 2

User 2

PC2 is connected to Ethernet1/0/2 on Switch B.

 

Configure a multicast VLAN, so that the users in VLAN 2 and VLAN 3 can receive multicast streams through the multicast VLAN.

II. Network diagram

Figure 2-4 Network diagram for multicast VLAN configuration

III. Configuration procedure

The following configuration is based on the prerequisite that the devices are properly connected and all the required IP addresses are already configured.

1)         Configure Switch A:

# Set the interface IP address of VLAN 20 to 168.10.1.1 and enable the PIM DM protocol on the VLAN interface.

<SwitchA> system-view

[SwitchA] multicast routing-enable

[SwitchA] vlan 20

[SwitchA-vlan20] interface Vlan-interface 20

[SwitchA-Vlan-interface20] ip address 168.10.1.1 255.255.255.0

[SwitchA-Vlan-interface20] pim dm

[SwitchA-Vlan-interface20] quit

# Configure multicast VLAN 10.

[SwitchA] vlan 10

[SwitchA-vlan10] quit

# Configure VLAN 2.

[SwitchA] vlan 2

[SwitchA-vlan2] quit

[SwitchA] interface Ethernet 1/0/5

[SwitchA-Ethernet1/0/5] port hybrid vlan 2

# Configure VLAN 3.

[SwitchA] vlan 3

[SwitchA-vlan3] quit

[SwitchA] interface Ethernet 1/0/6

[SwitchA-Ethernet1/0/6] port hybrid vlan 3

# Define Ethernet1/0/10 as a hybrid port, add the port to VLAN 2, VLAN 3, and VLAN 10, and configure the port to include VLAN tags in its outbound packets of VLAN 2, VLAN 3, and VLAN 10.

[SwitchA] interface Ethernet 1/0/10

[SwitchA-Ethernet1/0/10] port link-type hybrid

[SwitchA-Ethernet1/0/10] port hybrid vlan 2 3 10 tagged

[SwitchA-Ethernet1/0/10] quit

# Enable PIM DM and IGMP on VLAN 10.

[SwitchA] interface Vlan-interface 10

[SwitchA-Vlan-interface10] pim dm

[SwitchA-Vlan-interface10] igmp enable

2)         Configure Switch B:

# Enable the IGMP Snooping feature on Switch B.

<SwitchB> system-view

[SwitchB] igmp-snooping enable

# Configure VLAN 10 as a multicast VLAN and enable the IGMP Snooping feature on it.

[SwitchB] vlan 10

[SwitchB-vlan10] service-type multicast

[SwitchB-vlan10] igmp-snooping enable

[SwitchB-vlan10] quit

# Define Ethernet1/0/10 as a hybrid port, add the port to VLAN 2, VLAN 3, and VLAN 10, and configure the port to include VLAN tags in its outbound packets of VLAN 2, VLAN 3, and VLAN 10.

[SwitchB] interface Ethernet 1/0/10

[SwitchB-Ethernet1/0/10] port link-type hybrid

[SwitchB-Ethernet1/0/10] port hybrid vlan 2 3 10 tagged

[SwitchB-Ethernet1/0/10] quit

# Define Ethernet1/0/1 as a hybrid port, add the port to VLAN 2 and VLAN 10, and configure the port to exclude VLAN tags from its outbound packets of VLAN 2 and VLAN 10 and set VLAN 2 as the default VLAN of the port.

[SwitchB] interface Ethernet 1/0/1

[SwitchB-Ethernet1/0/1] port link-type hybrid

[SwitchB-Ethernet1/0/1] port hybrid vlan 2 10 untagged

[SwitchB-Ethernet1/0/1] port hybrid pvid vlan 2

[SwitchB-Ethernet1/0/1] quit

# Define Ethernet1/0/2 as a hybrid port, add the port to VLAN 3 and VLAN 10, and configure the port to exclude VLAN tags in its outbound packets of VLAN 3 and VLAN 10, and set VLAN 3 as the default VLAN of the port.

[SwitchB] interface Ethernet 1/0/2

[SwitchB-Ethernet1/0/2] port link-type hybrid

[SwitchB-Ethernet1/0/2] port hybrid vlan 3 10 untagged

[SwitchB-Ethernet1/0/2] port hybrid pvid vlan 3

[SwitchB-Ethernet1/0/2] quit

2.5  Troubleshooting IGMP Snooping

Symptom: Multicast function does not work on the switch.

Solution:

The reason may be:

1)         IGMP Snooping is not enabled.

l           Use the display current-configuration command to check the status of IGMP Snooping.

l           If IGMP Snooping is disabled, check whether it is disabled globally or only on the corresponding VLAN. If it is disabled globally, use the igmp-snooping enable command in both system view and VLAN view to enable it globally and on the corresponding VLAN. If it is disabled only on the corresponding VLAN, use the igmp-snooping enable command in VLAN view to enable it on that VLAN.

2)         The multicast forwarding table set up by IGMP Snooping is incorrect.

l           Use the display igmp-snooping group command to check if the multicast groups are expected ones.

l           If the multicast group set up by IGMP Snooping is not correct, contact your technical support personnel.

 


Chapter 3  Common Multicast Configuration

3.1  Overview

Common multicast configuration tasks are common to both the multicast group management protocol and the multicast routing protocol. You must perform the common multicast configuration on the switch before enabling either of the two protocols.

Common multicast configuration includes:

l           Configuring a limit on the number of route entries: When the multicast routing protocol is configured on the switch, a large number of multicast route entries may be sent to upstream Layer 3 switches or routers. To prevent excessive multicast route entries from exhausting the memory of the Layer 3 switches or routers, you can configure a limit on the number of route entries.

l           Configuring suppression on the multicast source port: Some users may set up multicast servers without authorization, which wastes multicast network resources and affects multicast bandwidth and the transmission of valid information in the network. You can configure suppression on the multicast source port to filter multicast packets on unauthorized multicast source ports, preventing the users connected to those ports from setting up private multicast servers.

l           Clearing the related multicast entries: By clearing the related multicast entries, you can remove the multicast route entries saved in the memory of the Layer 3 switches or routers to release system memory.

3.2  Common Multicast Configuration

Common multicast configuration tasks:

Table 3-1 Common multicast configuration tasks

Operation

Description

Related section

Enable multicast and configure limit on the number of route entries

Required

Section 3.2.1  "Enabling Multicast and Configuring Limit on the Number of Route Entries"

Configure suppression on the multicast source port

Optional

Section 3.2.2  "Configuring Suppression on the Multicast Source Port"

Clear the related multicast entries

Optional

Section 3.2.3  "Clearing the Related Multicast Entries"

 

3.2.1  Enabling Multicast and Configuring Limit on the Number of Route Entries

Table 3-2 Enable multicast and configure limit on the number of route entries

Operation

Command

Description

Enter system view

system-view

Enable multicast

multicast routing-enable

Required

Multicast must be enabled before the multicast group management protocol and the multicast routing protocol are configured.

Configure limit on the number of multicast route entries

multicast route-limit limit

Optional

By default, the limit on the number of multicast route entries is 256

 

Note:

To guard against attacks on sockets not in use, the S3600 series provides the following functions to achieve enhanced security:

l      The system opens RAW Socket used by multicast routing only if multicast routing is enabled.

l      If you disable multicast routing, RAW Socket used by multicast routing will also be closed.

Perform the following steps to implement the above-mentioned functions:

l      Use the multicast routing-enable command to enable multicast routing and to open RAW Socket used by multicast routing.

l      Use the undo multicast routing-enable command to disable multicast routing as well as close RAW Socket.

 

  Caution:

The other multicast configurations do not take effect until multicast is enabled.
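For illustration, the following sketch enables multicast routing and raises the route entry limit to 512 (an assumed value):

# Enable multicast and set the limit on the number of multicast route entries.

<H3C> system-view

[H3C] multicast routing-enable

[H3C] multicast route-limit 512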

 

3.2.2  Configuring Suppression on the Multicast Source Port

I. Configure suppression on the multicast source port in system view

Table 3-3 Configure suppression on the multicast source port in system view

Operation

Command

Description

Enter system view

system-view

Configure suppression on the multicast source port

multicast-source-deny [ interface interface-list ]

Required

The suppression on the multicast source port feature is disabled by default.

 

II. Configure suppression on the multicast source port in Ethernet port view

Table 3-4 Configure suppression on the multicast source port in Ethernet port view

Operation

Command

Description

Enter system view

system-view

Enter Ethernet port view

interface interface-type interface-number

Configure suppression on the multicast source port in Ethernet port view

multicast-source-deny

Optional

The suppression on the multicast source port feature is disabled on all ports of the switch by default.
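For illustration, the following sketch enables suppression on multicast source port Ethernet 1/0/2 in Ethernet port view (the port is an assumed value):

# Enable suppression on the multicast source port.

<H3C> system-view

[H3C] interface Ethernet 1/0/2

[H3C-Ethernet1/0/2] multicast-source-deny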

 

3.2.3  Clearing the Related Multicast Entries

Use the reset command in user view to clear the related statistics information about the common multicast configuration.

Table 3-5 Clear the related multicast entries

Operation

Command

Description

Clear the multicast forwarding case (MFC) forwarding entries or statistics information about the forwarding entries

reset multicast forwarding-table [ statistics ] { all | { group-address [ mask { group-mask | group-mask-length } ] | source-address [ mask { source-mask | source-mask-length } ] | incoming-interface interface-type interface-number } * }

Clear the related MFC forwarding entries

Clear the route entries in the core multicast routing table

reset multicast routing-table { all | { group-address [ mask { group-mask | group-mask-length } ] | source-address [ mask { source-mask | source-mask-length } ] | { incoming-interface interface-type interface-number } } * }

Clear the route entries in the core multicast routing table
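For illustration, the following sketch clears all MFC forwarding entries and all route entries in the core multicast routing table in user view:

# Clear all multicast forwarding entries and routing entries.

<H3C> reset multicast forwarding-table all

<H3C> reset multicast routing-table all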

 

3.3  Displaying Common Multicast Configuration

After the configuration above, you can execute the display command in any view to verify the configuration by checking the displayed information.

The multicast forwarding table is mainly used for debugging. Generally, you can get the required information by checking the core multicast routing table.

Table 3-6 Display common multicast configuration

Operation

Command

Description

Display the statistics information about the suppression on the multicast source port

display multicast-source-deny [ interface interface-type [ interface-number ] ]

This command can be executed in any view.

l      If neither the port type nor the port number is specified, the statistics information about the suppression on all the multicast source ports on the switch is displayed.

l      If only the port type is specified, the statistics information about the suppression on the multicast source ports of the type is displayed.

l      If both the port type and the port number are specified, the statistics information about the suppression on the specified multicast source port is displayed.

Display the information about the multicast routing table

display multicast routing-table [ group-address [ mask { group-mask | mask-length } ] | source-address [ mask { group-mask | mask-length } ] | incoming-interface { interface-type interface-number | register } ]*

These commands can be executed in any view.

Display the information about the multicast forwarding table

display multicast forwarding-table [ group-address [ mask { group-mask | mask-length } ] | source-address [ mask { group-mask | mask-length } ] | incoming-interface { interface-type interface-number | register } ]*

Display the information about a multicast forward table containing port information

display mpm forwarding-table [ group-address ]

Display the information about the IP multicast groups and MAC multicast groups contained in a VLAN (or all the VLANs) configured on a switch

display mpm group [ vlan vlan-id ]

 

Three kinds of tables affect data transmission. Their correlations are as follows:

l           Each multicast routing protocol has its own multicast routing table.

l           The multicast routing information of all multicast routing protocols is integrated to form the core multicast routing table.

l           The core multicast routing table is consistent with the multicast forwarding table, which is actually in charge of multicast packet forwarding.

 


Chapter 4  Multicast MAC Address Entry Configuration

4.1  Overview

In Layer 2 multicast, the system can add multicast forwarding entries dynamically through a Layer 2 multicast protocol. Alternatively, you can statically bind a port to a multicast address entry by configuring a multicast MAC address entry manually.

Generally, when receiving a multicast packet whose multicast address has not yet been registered on the switch, the switch will broadcast the packet in the VLAN to which the port belongs. You can configure a static multicast MAC address entry to avoid this.

4.2  Configuring a Multicast MAC Address Entry

You can configure multicast MAC address entries in system view or Ethernet port view.

Table 4-1 Configure a multicast MAC address entry in system view

Operation

Command

Description

Enter system view

system-view

Create a multicast MAC address entry

mac-address multicast mac-address interface interface-list vlan vlan-id

Required

The mac-address argument must be a multicast MAC address

The vlan-id argument is the ID of the VLAN to which the port belongs

 

Table 4-2 Configure a multicast MAC address entry in Ethernet port view

Operation

Command

Description

Enter system view

system-view

Enter Ethernet port view

interface interface-type interface-number

Create a multicast MAC address entry.

mac-address multicast mac-address vlan vlan-id

Required

The mac-address argument must be a multicast MAC address

The vlan-id argument is the ID of the VLAN to which the port belongs.

 

&  Note:

l      If the multicast MAC address entry to be created already exists, the system gives you a prompt.

l      If you want to add a port to a multicast MAC address entry created through the mac-address multicast command, you need to remove the entry first, create this entry again, and then add the specified port to the forwarding ports of this entry.

l      The system does not support adding multicast MAC addresses to IRF ports. If a port is already an IRF port, the system will prompt that you cannot add multicast MAC addresses to this port.

l      You cannot enable link aggregation on a port on which you have configured a multicast MAC address; and you cannot configure a multicast MAC address on an aggregation port.
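As an illustration, the following minimal sketch combines Table 4-1 and Table 4-2; the MAC addresses 0100-5e01-0101 and 0100-5e01-0102, ports Ethernet 1/0/1 and Ethernet 1/0/3, and VLAN 2 are hypothetical values.

# Create a multicast MAC address entry in system view, binding the hypothetical port Ethernet 1/0/1 in VLAN 2.

<H3C> system-view

[H3C] mac-address multicast 0100-5e01-0101 interface Ethernet 1/0/1 vlan 2

# Create another entry in Ethernet port view.

[H3C] interface Ethernet 1/0/3

[H3C-Ethernet1/0/3] mac-address multicast 0100-5e01-0102 vlan 2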

 

4.3  Displaying and Maintaining Multicast MAC Address

After the configuration above, you can execute the display command in any view to verify the configuration effect by checking the displayed information.

Table 4-3 Display and maintain multicast MAC address

Operation

Command

Description

Display the multicast MAC address entry/entries manually configured

display mac-address multicast [ static { { { mac-address vlan vlan-id | vlan vlan-id } [ count ] } | count } ]

You can use the display command in any view.

 


Chapter 5  Unknown Multicast Packet Drop Configuration

5.1  Overview

Generally, if the multicast address of the multicast packet received on the switch is not registered on the local switch, the packet will be broadcast in the VLAN. When the unknown multicast packet drop feature is enabled, the switch will drop the received multicast packet whose multicast address is not registered. Thus, the bandwidth is saved and the processing efficiency of the system is improved.

5.2  Unknown Multicast Packet Drop Configuration

Table 5-1 Configure unknown multicast packet drop

Operation

Command

Description

Enter system view

system-view

Configure the unknown multicast packet drop feature

unknown-multicast drop enable

Required

By default, the unknown multicast packet drop feature is disabled.
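As an illustration, a minimal sketch of the configuration in Table 5-1:

# Enable the unknown multicast packet drop feature globally.

<H3C> system-view

[H3C] unknown-multicast drop enable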

 


Chapter 6  IGMP Configuration

6.1  Overview

6.1.1  Introduction to IGMP

Internet group management protocol (IGMP) is responsible for the management of IP multicast members. It is used to establish and maintain membership between IP hosts and their directly connected neighboring routers.

However, the IGMP feature does not transmit or maintain the membership information among multicast routers. This task is completed by multicast routing protocols. All the hosts participating in multicast must support the IGMP feature.

IGMP is divided into two function parts:

l           Host side: hosts participating in IP multicast can join or leave a multicast group at any time and from anywhere, without any restriction on the total number of group members.

l           Router side: through the IGMP protocol, a multicast router checks the network segment connected to each interface to see whether there are receivers of a multicast group, namely, group members.

A multicast router does not need to, and cannot, save the membership information of all hosts, whereas each host must record which multicast groups it has joined.

IGMP is asymmetric between hosts and routers. A host responds to the IGMP query packets of the multicast routers by sending IGMP report packets. A multicast router sends IGMP general query packets periodically and, based on the received report packets, determines whether any host of a specific group exists on its subnet. When the router receives IGMP leave packets, it sends IGMPv2 group-specific query packets to find out whether the specified group still has any member.

6.1.2  IGMP Version

So far, IGMP has three versions: IGMP Version 1 (defined in RFC 1112), IGMP Version 2 (defined in RFC 2236), and IGMP Version 3.

6.1.3  Work Mechanism of IGMPv1

IGMPv1 manages multicast groups mainly based on the query and response mechanism.

Of multiple multicast routers on the same subnet, only one router is needed for sending IGMP queries because all the routers can receive IGMP reports from hosts. So, a querier election mechanism is required to determine which router will act as the IGMP querier on the subnet.

In IGMPv1, the designated router (DR) elected by the Layer 3 multicast routing protocol (such as PIM) serves as the IGMP querier.

Figure 6-1 Work mechanism of IGMPv1

Assume that Host B and Host C expect to receive the multicast data addressed to multicast group G1, while Host A expects to receive the multicast data addressed to G2, as shown in Figure 6-1. The hosts join the multicast groups through the following process:

1)         The IGMP querier (DR in the figure) periodically sends IGMP queries (with the destination address of 224.0.0.1) to all hosts and routers on the same subnet.

2)         Upon receiving a query message, whichever of Host B and Host C (both interested in G1) whose delay timer expires first sends an IGMP report, with the group address of G1 as the destination address, to announce that it is joining G1. Assume it is Host B that sends the report message.

3)         Because Host C is also interested in G1, it also receives the report that Host B sends to G1. Upon receiving the report, Host C will suppress itself from sending the same G1-specific message, because the IGMP routers already know that a host on the subnet is interested in G1. This IGMP report suppression mechanism helps reduce traffic over the local subnet.

4)         Meanwhile, because Host A is interested in G2, it sends a report (with the group address of G2 as the destination address) to announce that it will join G2.

5)         Through the query/response process, the IGMP routers learn about the receivers corresponding to G1 and G2 on the local subnet, and generate (*, G1) and (*, G2) multicast forwarding entries as the basis for forwarding the multicast information, where * represents any multicast source.

6)         When the multicast data addressed to G1 or G2 reaches an IGMP router, because the (*, G1) and (*, G2) multicast forwarding entries exist on the IGMP router, the router forwards the data to the local subnet so that the receivers on the subnet can receive the data.

As IGMPv1 does not specifically define a Leave Group message, upon leaving a multicast group, an IGMPv1 host simply stops sending reports with the destination address being the address of that multicast group. If no member of a multicast group exists on the subnet, the IGMP routers will not receive any report addressed to that multicast group, so the routers will delete the forwarding entries corresponding to that multicast group.

6.1.4  Enhancements Provided by IGMPv2

Compared with IGMPv1, IGMPv2 provides the querier election mechanism and Leave Group mechanism.

I. Querier election mechanism

In IGMPv1, the DR elected by the Layer 3 multicast routing protocol (such as PIM) serves as the querier.

In IGMPv2, an independent querier election mechanism is introduced. The querier election process is as follows:

1)         Initially, every IGMPv2 router assumes itself as the querier and sends IGMP general queries (with the destination address of 224.0.0.1) to all hosts and routers on the local subnet.

2)         Then, every IGMPv2 router compares the source IP address of the received message with its own interface address. After comparison, the IGMPv2 router with the lowest IP address wins the querier election and all other IGMPv2 routers are non-queriers.

3)         All the IGMP routers that have lost the querier election start a timer, namely the “other querier present interval”. If a router receives an IGMP query from the querier before the timer expires, it resets its timer; otherwise, it will assume the querier to have timed out and initiate a new querier election process.

II. “Leave group” mechanism

In IGMPv1, when a host leaves a multicast group, it does not send any notification to any multicast router. As a result, a multicast router relies on the response timeout to know that a member has left a group.

In IGMPv2, on the other hand, when a host leaves a multicast group:

1)         This host sends a leave message to the all-routers group (224.0.0.2) on the local subnet.

2)         Upon receiving the leave message, the querier sends a group-specific query to the group that the host announced to leave.

3)         Upon receiving this group-specific query, each of the other members of that group, if any, will send a membership report within the maximum response time specified in the query.

4)         If the querier receives a membership report sent by any member of the group within the maximum response time, it will maintain the memberships of that group; otherwise, the querier will assume that there is no longer any member of that group on the subnet and will stop maintaining the memberships of the group.

6.1.5  IGMP Proxy

When a multicast routing protocol (PIM-DM, for example) is applied over a large-scale network, a large number of leaf networks (leaf domains) are involved, and configuring and managing these leaf networks is laborious.

To reduce the workload of configuration and management without affecting the multicast connection of leaf networks, you can configure an IGMP Proxy on a Layer 3 switch in the leaf network (Switch B shown in Figure 6-2). The Layer 3 switch will then forward IGMP join or IGMP leave messages sent by the connected hosts. After IGMP Proxy is configured, the leaf switch is no longer a PIM neighbor but a host for the external network. The Layer 3 switch receives the multicast data of corresponding groups only when it has directly connected members.

Figure 6-2 Diagram for IGMP Proxy

Figure 6-2 shows an IGMP Proxy diagram for a leaf network.

Configure Switch B as follows:

l           Enable multicast routing, configure the PIM protocol on both VLAN-interface1 and VLAN-interface2, and configure the IGMP protocol on VLAN-interface1.

l           On VLAN-interface2, configure VLAN-interface1 as the outbound IGMP Proxy interface to external networks. You must enable the IGMP protocol on the interface first, and then configure the igmp proxy command.

Configure Switch A as follows:

l           Enable multicast routing and configure the IGMP protocol on VLAN-interface1.

l           Configure the pim neighbor-policy command to filter PIM neighbors in the network segment 33.33.33.0/24. That is, Switch A does not consider Switch B as its PIM neighbor.

In this case, when Switch B of leaf network receives from VLAN-interface2 an IGMP join or IGMP leave message sent by the host, it will change the source address of the IGMP information to the address of VLAN-interface1: 33.33.33.2 and send the information to VLAN-interface1 of Switch A. For Switch A, this works as if there is a host directly connected to VLAN-interface1.

Similarly, when Switch B receives the IGMP general group or group-specific query message from the Layer 3 Switch A, it will also change the source address of the query message to the IP address of VLAN-interface2: 22.22.22.1 and send the message from VLAN-interface2.

In Figure 6-2, VLAN-interface2 of Switch B is called the client and VLAN-interface1 of Switch B is called the proxy.

6.2  IGMP Configuration

You must enable multicast globally and then enable the IGMP protocol before you can perform any other IGMP configuration task.

IGMP configuration tasks include:

Table 6-1 Configuration task overview

Operation

Description

Related section

Configure IGMP version

Optional

Section 6.2.1  "Configuring IGMP Version"

Configure IGMP query packets

Optional

Section 6.2.2  "Configuring IGMP Query Packets"

Configure IGMP multicast groups on the interface

Optional

Section 6.2.3  "Configuring IGMP Multicast Groups on the Interface"

Configure router ports to join the specified multicast group

Optional

Section 6.2.4  "Configuring Router Ports to Join the Specified Multicast Group"

Configure IGMP Proxy

Optional

Section 6.2.5  "Configuring IGMP Proxy"

Remove the joined IGMP groups from the interface

Optional

Section 6.2.6  "Removing the Joined IGMP Groups from the Interface"

 

6.2.1  Configuring IGMP Version

Table 6-2 Configure IGMP version

Operation

Command

Description

Enter system view

system-view

Enable the multicast routing protocol

multicast routing-enable

Required

Enter VLAN interface view

interface Vlan-interface interface-number

Enable IGMP on the current interface

igmp enable

Required

IGMP is disabled on the interface by default.

Configure the IGMP version of the Layer 3 switch (router)

igmp version { 1 | 2 }

Optional

IGMP version 2 is used by default.

 

  Caution:

IGMP versions cannot be switched to one another automatically. Therefore, all the Layer 3 switches on a subnet must be configured to use the same IGMP version.
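As an illustration, the following minimal sketch follows Table 6-2; VLAN-interface 10 is a hypothetical interface.

# Enable multicast routing globally, then enable IGMP and select IGMPv2 on the hypothetical VLAN interface.

<H3C> system-view

[H3C] multicast routing-enable

[H3C] interface Vlan-interface 10

[H3C-Vlan-interface10] igmp enable

[H3C-Vlan-interface10] igmp version 2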

 

6.2.2  Configuring IGMP Query Packets

I. IGMP general query packets

The Layer 3 switch periodically sends IGMP general query packets to the connected network segment and learns, from the returned IGMP report packets, which multicast groups in the network segment have members. The multicast router also sends query packets periodically; when it receives the IGMP join packets of a group member, it refreshes the membership information of the network segment.

II. IGMP group-specific packets

The query router (querier for short) maintains, on its interface on the shared network, the group membership learned from IGMP join packets. After the related features are configured, the IGMP querier sends IGMP group-specific query packets at the user-defined interval for the user-defined number of times when it receives IGMP leave packets from hosts.

Suppose a host in a multicast group decides to leave the multicast group. The related procedure is as follows:

l           The host sends an IGMP leave packet.

l           When the IGMP querier receives the packet, it will send IGMP group-specific query packets at the interval configured by the igmp lastmember-queryinterval command (the interval is 1 second by default) for the robust-value times (the robust-value argument is configured by the igmp robust-count command and it is 2 by default).

l           If other hosts are interested in the group after receiving the IGMP group-specific query packet from the querier, they will send IGMP join packets in the maximum response time specified in the packet.

l           If the IGMP querier receives IGMP join packets from other hosts within the period of robust-value x lastmember-queryinterval, it will maintain the membership of the group.

l           If the IGMP querier does not receive IGMP join packets from other hosts after the period of robust-value x lastmember-queryinterval, it considers that the group has timed out and will not maintain the membership of the group.

 

&  Note:

You can use the igmp max-response-time command to set the maximum response time for general IGMP query packets, while that of an IGMP group-specific query packet is determined by the following expression: robust-value x lastmember-queryinterval.

 

This procedure applies only when the IGMP querier runs IGMP version 2.

If a host runs IGMP version 1, it does not send IGMP leave messages when leaving a group, so the procedure above does not apply to it.

III. IGMP querier substitution rules

In a network segment containing multiple IGMP-enabled interfaces, the one with the lowest IP address becomes the IGMP querier. If no query message is received within the period specified by the igmp timer other-querier-present command, the current IGMP querier is considered invalid, and the interface with the second lowest IP address becomes the IGMP querier instead.

IV. The maximum query time of IGMP packets

When the host receives a query message, it will set a timer for each of its multicast groups. The timer value is selected from 0 to the maximum response time at random. When the value of a timer decreases to 0, the host will send the membership information of the multicast group.

By configuring reasonable maximum response time, you can enable the host to respond to the query information quickly and enable the Layer 3 switch to understand the membership information of multicast groups quickly.

Table 6-3 Configure IGMP query packets

Operation

Command

Description

Enter system view

system-view

Enter VLAN interface view

interface Vlan-interface interface-number

Enable IGMP on the current interface

igmp enable

Required

IGMP is disabled on the interface by default.

Configure the query interval

igmp timer query seconds

Optional

The query interval is 60 seconds by default.

Configure the interval of sending IGMP group-specific query packets

igmp lastmember-queryinterval seconds

Optional

By default, the interval of sending IGMP group-specific query packets is one second.

Configure the times of sending IGMP group-specific query packets

igmp robust-count robust-value

Optional

By default, the number of times of sending IGMP group-specific query packets is 2.

Configure the maximum lifetime of an IGMP querier

igmp timer other-querier-present seconds

Optional

The system default is 120 seconds, twice that specified by the igmp timer query command.

Configure the maximum IGMP query response time

igmp max-response-time seconds

Optional

The maximum IGMP query response time is 10 seconds.
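As an illustration, the following minimal sketch follows Table 6-3; VLAN-interface 10 and the timer values are hypothetical, and multicast routing is assumed to be enabled globally already.

<H3C> system-view

[H3C] interface Vlan-interface 10

[H3C-Vlan-interface10] igmp enable

# Send general queries every 90 seconds (hypothetical value; the default is 60 seconds).

[H3C-Vlan-interface10] igmp timer query 90

# After a leave message, send group-specific queries every 2 seconds, 3 times (hypothetical values).

[H3C-Vlan-interface10] igmp lastmember-queryinterval 2

[H3C-Vlan-interface10] igmp robust-count 3

# Consider the current querier failed if no query is received within 180 seconds.

[H3C-Vlan-interface10] igmp timer other-querier-present 180

# Allow hosts at most 15 seconds to respond to a general query.

[H3C-Vlan-interface10] igmp max-response-time 15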

 


 

6.2.3  Configuring IGMP Multicast Groups on the Interface

You can perform the following configurations on the interface for the IGMP multicast groups:

l           Limit the number of joined multicast groups

l           Limit the range of multicast groups that the interface serves

I. Limit the number of joined multicast groups

If the number of IGMP groups joined on a multicast routing interface of the switch is not limited, a large number of joined multicast groups may exhaust the memory of the switch and cause the routing interface to fail.

You can configure a limit on the number of joined IGMP multicast groups on the interface of the switch. Thus, when users order the programs of multicast groups, the network bandwidth can be controlled because the number of multicast groups is limited.

II. Limit the range of multicast groups that the interface serves

The Layer 3 switch determines the membership of the network segment by analyzing the received IGMP join packets. You can configure a filter for each interface to limit the range of multicast groups that the interface serves.

Table 6-4 Configure IGMP multicast groups on the interface

Operation

Command

Description

Enter system view

system-view

Enter VLAN interface view

interface Vlan-interface interface-number

Enable IGMP on the current interface

igmp enable

Required

IGMP is disabled on the interface by default.

Configure a limit on the number of joined IGMP groups on the interface

igmp group-limit limit

Required

By default, the number of multicast groups passing a port is not limited.

Limit the range of multicast groups that the interface serves

igmp group-policy acl-number [ 1 | 2 | port interface-type interface-number [ to interface-type interface-number ] ]

Optional

l      By default, the filter is not configured, that is, any multicast group is permitted on a port.

l      If the port keyword is specified, the specified port must belong to the VLAN of the VLAN interface.

l      You can use the ACL to filter the IP addresses of the multicast groups that the interface serves.

l      1 and 2 are the IGMP version numbers. IGMPv2 is used by default.

Quit interface view.

quit

Enter Ethernet port view

interface interface-type interface-number

Limit the range of multicast groups that the interface serves

igmp group-policy acl-number vlan vlan-id

Optional

l      By default, the filter is not configured, that is, any multicast group is permitted on the port.

l      The port must belong to the IGMP-enabled VLAN specified in the command. Otherwise, the command does not take effect.

 

  Caution:

l      If the number of joined multicast groups on the interface exceeds the user-defined limit, new groups are not allowed to join.

l      If you set the limit on the number of IGMP groups on the interface to 1, the new group takes precedence. That is, when a new group joins the interface, the former multicast group is automatically replaced and removed from the interface.

l      If the number of existing IGMP multicast groups already exceeds the limit configured on the interface, the system automatically deletes some existing multicast groups until the number of multicast groups on the interface conforms to the configured limit.
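As an illustration, the following minimal sketch follows Table 6-4; ACL 2001, the group range 225.1.0.0/16, VLAN-interface 10, and the limit of 50 groups are hypothetical values.

<H3C> system-view

# Define a hypothetical basic ACL permitting the multicast groups that the interface may serve.

[H3C] acl number 2001

[H3C-acl-basic-2001] rule permit source 225.1.0.0 0.0.255.255

[H3C-acl-basic-2001] quit

[H3C] interface Vlan-interface 10

[H3C-Vlan-interface10] igmp enable

# Allow at most 50 IGMP groups on this interface (hypothetical limit).

[H3C-Vlan-interface10] igmp group-limit 50

# Restrict the interface to the groups permitted by ACL 2001.

[H3C-Vlan-interface10] igmp group-policy 2001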

 

6.2.4  Configuring Router Ports to Join the Specified Multicast Group

Generally, the host running IGMP will respond to the IGMP query packets of the multicast switch. If the host cannot respond for some reason, the multicast switch may consider that there is no member of the multicast group in this network segment and then cancel the corresponding paths.

To avoid such cases, you can configure a port of a VLAN interface of the switch as a router port and add it to the multicast group. When the port receives IGMP query packets, the multicast switch responds to them. As a result, the network segment where the Layer 3 interface resides can continue to receive multicast packets.

Table 6-5 Configure router ports to join the specified multicast group

Operation

Command

Description

Enter system view

system-view

Enable the multicast routing protocol

multicast routing-enable

Required

Enter VLAN interface view

interface Vlan-interface interface-number

Enable IGMP on the current interface

igmp enable

Required

IGMP is disabled on the interface by default.

Configure router ports to join a multicast group

igmp host-join group-address port interface-list

Optional

By default, the router port does not join any multicast group.

Quit VLAN interface view

quit

Enter Ethernet port view

interface interface-type interface-number

Configure router ports to join a multicast group

igmp host-join group-address vlan vlan-id

Optional

By default, the router port does not join in any multicast group.
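As an illustration, the following minimal sketch follows Table 6-5; the group 225.1.1.1, VLAN-interface 10, and port Ethernet 1/0/1 are hypothetical values.

<H3C> system-view

[H3C] multicast routing-enable

[H3C] interface Vlan-interface 10

[H3C-Vlan-interface10] igmp enable

# Have the hypothetical port Ethernet 1/0/1 join group 225.1.1.1 so that the segment keeps receiving multicast packets.

[H3C-Vlan-interface10] igmp host-join 225.1.1.1 port Ethernet 1/0/1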

 

6.2.5  Configuring IGMP Proxy

I. Configure IGMP Proxy

You can configure IGMP proxy to reduce the workload of configuration and management of leaf networks without affecting the multicast connections of the leaf network.

After IGMP Proxy is configured on the Layer 3 switch of the leaf network, the leaf Layer 3 switch is just a host for the external network. The Layer 3 switch receives the multicast data of corresponding groups only when it has directly connected members.

Table 6-6 Configure IGMP Proxy

Operation

Command

Description

Enter system view

system-view

Enable the multicast routing protocol

multicast routing-enable

Required

Enter VLAN interface (connected to the external network) view

interface Vlan-interface interface-number

Enable the IGMP protocol

igmp enable

Required

Configure IGMP Proxy

igmp proxy Vlan-interface interface-number

Required

The IGMP Proxy feature is disabled by default.

 

  Caution:

l      Both the multicast routing protocol and the IGMP protocol must be enabled on the proxy interface.

l      You must enable the PIM protocol on the interface before configuring the igmp proxy command. Otherwise, the IGMP Proxy feature does not take effect.

l      One interface cannot serve as the proxy interface of two or more interfaces.

 

6.2.6  Removing the Joined IGMP Groups from the Interface

You can remove all the joined IGMP groups on all ports of the router or all the joined IGMP groups on the specified interfaces, or remove the specified IGMP group address or group address network segment on the specified interface.

Perform the following configuration in user view.

Table 6-7 Remove the joined IGMP groups from the interface

Operation

Command

Description

Remove the joined IGMP groups from the interface

reset igmp group { all | interface interface-type interface-number { all | group-address [ group-mask ] } }

Optional

 

  Caution:

After an IGMP group is removed from an interface, hosts can join the group again through that interface.
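As an illustration, the reset command in Table 6-7 might be used as follows in user view; the group address and interface are hypothetical values.

# Remove all joined IGMP groups on all interfaces.

<H3C> reset igmp group all

# Remove only the hypothetical group 225.1.1.1 from VLAN-interface 10.

<H3C> reset igmp group interface Vlan-interface 10 225.1.1.1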

 

6.3  Displaying IGMP

After completing the above-mentioned configurations, you can execute the display command in any view to verify the configuration by checking the displayed information.

Table 6-8 Display IGMP

Operation

Command

Description

Display the membership information of the IGMP multicast group

display igmp group [ group-address | interface interface-type interface-number ]

You can execute the display command in any view.

Display the IGMP configuration and running information of the interface

display igmp interface [ interface-type interface-number ]

 


Chapter 7  PIM Configuration

7.1  PIM Overview

Protocol independent multicast (PIM) means that the routes used for multicast can be provided by any unicast routing protocol, such as static routes, RIP, OSPF, IS-IS, or BGP. The multicast routing protocol is independent of the unicast routing protocol in use, as long as that protocol can generate route entries.

With the help of reverse path forwarding, PIM can transmit multicast information in the network. For the convenience of description, a network consisting of PIM-enabled multicast routers is called a PIM multicast domain.

7.1.1  Introduction to PIM-DM

Protocol independent multicast dense mode (PIM-DM) is a dense mode multicast protocol. It is suitable for small networks.

The features of such networks are:

l           Members in a multicast group are dense.

l           PIM-DM assumes that in each subnet of the network there is at least one receiver interested in the multicast source.

l           Multicast packets are flooded to all the nodes in the network, and the related resources (such as bandwidth and the CPU of the router) are consumed at the same time.

To reduce network resource consumption, PIM-DM prunes the branches that do not forward multicast data and keeps only the branches containing receivers. So that a pruned branch can receive multicast data flows again when it needs to forward multicast data, pruned branches are periodically restored to the forwarding state.

To reduce the delay before a pruned branch is restored to the forwarding state, PIM-DM uses the graft mechanism to resume multicast packet forwarding automatically. Such periodic flooding and pruning is characteristic of PIM-DM, which is therefore suitable for small LANs only; the "flood-prune" technology adopted in PIM-DM is unacceptable in a WAN.

Generally, the packet forwarding path in PIM-DM is a shortest path tree (SPT) with the multicast source as the root and multicast members as the leaves. The SPT uses the shortest path from the multicast source to the receiver.

7.1.2  Work Mechanism of PIM-DM

The working procedure of PIM-DM is summarized as follows:

l           Neighbor discovery

l           SPT establishing

l           Graft

l           RPF check

l           Assert mechanism

I. Neighbor discovery

In a PIM-DM network, a multicast router needs to use Hello messages to perform neighbor discovery and maintain the neighbor relation when it is started. All routers keep in touch with each other by sending Hello messages periodically, and thus SPT is established and maintained.

II. SPT establishment

The procedure of establishing SPT is also called Flooding & Prune.

The procedure is as follows:

l           PIM-DM assumes that all hosts on the network are ready to receive multicast data.

l           When a multicast router receives a multicast packet sent from a multicast source "S" to a multicast group "G", it begins with an RPF check according to the unicast routing table.

l           If the RPF check passes, the router will create an entry (S, G) and forward the packet to all the downstream PIM-DM nodes. This process is known as flooding.

l           If the RPF check fails, the router considers that the multicast packets travel into the router through incorrect interfaces and just discards the packets.

After this process is complete, an (S, G) entry is created on every router in the PIM-DM domain.

If there is no multicast group member in downstream nodes, the router sends a prune message to upstream nodes to inform them not to forward data any more. The upstream nodes, as informed, remove the related interface from the outgoing interface list corresponding to the multicast forwarding entry (S, G). The pruning process continues until there are only necessary branches in PIM-DM. In this way, an SPT (Shortest Path Tree) rooted at source S is established.

The pruning process is initiated by leaf routers. As shown in Figure 7-1, the routers without receivers (such as the router connected to User A) initiate the pruning process automatically.

Figure 7-1 Diagram for SPT establishment in PIM-DM

The above-mentioned process is called "Flooding and Pruning". Every pruned node also provides a timeout mechanism. When pruning times out, the router initiates another flooding and pruning process. This process is performed periodically for PIM-DM.

III. Graft

When a pruned downstream node needs to be restored to the forwarding state, it may send a graft packet to inform the upstream node. As shown in Figure 7-1, user A receives multicast data again. Graft messages will be sent hop by hop to the multicast source S. The intermediate nodes return acknowledgements upon receiving Graft messages. Thus, the pruned branches are restored to the information transmission state.

IV. RPF check

PIM-DM adopts the RPF check mechanism to establish a multicast forwarding tree from the data source S based on the existing unicast routing table, static multicast routing table, and MBGP routing table.

The procedure is as follows:

l           When a multicast packet arrives, the router first checks the path.

l           If the interface this packet reaches is the one along the unicast route towards the multicast source, the path is considered correct.

l           Otherwise, the multicast packet will be discarded as a redundant one.

The unicast routing information on which path judgment is based can be of any unicast routing protocol such as RIP or OSPF. It is independent of the specified unicast routing protocol.

V. Assert mechanism

In a shared network such as Ethernet, the same packets may be sent repeatedly. For example, the LAN network segment contains multiple multicast routers, A, B, C, and D. They each have their own receiving path to the multicast source S, as shown in Figure 7-2:

Figure 7-2 Diagram for assert mechanism

When Router A, Router B, and Router C receive a multicast packet sent from the multicast source S, they will all forward the multicast packet to the Ethernet. In this case, the downstream node Router D will receive three copies of the same multicast packet.

In order to avoid such cases, the Assert mechanism is needed to select one forwarder. Routers in the network select the best path by sending Assert packets. If two or more paths have the same priority and metric to the multicast source, the router with the highest IP address will be the upstream neighbor of the (S, G) entry, which is responsible for forwarding the (S, G) multicast packets. The unselected routers will prune the corresponding interfaces to disable the information forwarding.

7.1.3  Introduction to PIM-SM

Protocol independent multicast sparse mode (PIM-SM) is a sparse mode multicast protocol. It is generally used in networks where:

l           Group members are sparsely distributed

l           The network covers a wide range

l           The network is large in scale

In PIM-SM, no host receives multicast packets by default. Multicast packets are forwarded to the hosts that need multicast packets explicitly.

So that receivers can receive the multicast data streams of a specific multicast group, PIM-SM uses rendezvous points (RPs) to forward multicast information to all PIM-SM routers with receivers. Using an RP for multicast forwarding reduces the network bandwidth occupied by data packets and control packets, as well as the processing overhead of the routers.

At the receiving end, the router connected to the information receiver sends Join messages to the RP corresponding to the multicast group. The Join message reaches the root (namely, RP) after passing each router. The passed paths become the branches of the rendezvous point tree (RPT).

If the sending end wants to send data to a multicast group, the first hop router will send registration information to RP. When the registration information reaches RP, the source tree establishment is triggered. Then, the multicast source sends the data to RP. When the data reaches RP, the multicast packets are replicated and sent to the receiver along the RPT. Replication happens only where the tree branches. The procedure is repeated automatically until the multicast packets reach the receiver.

PIM-SM does not rely on any specific unicast routing protocol. Instead, it performs the RPF check based on the existing unicast routing table.

7.1.4  Work Mechanism of PIM-SM

The working procedure of PIM-SM is:

l           Neighbor discovery

l           DR election

l           RP discovery

l           RPT shared tree building

l           Multicast source registration

l           Switching RPT to SPT

I. Neighbor discovery

The neighbor discovery mechanism is the same as described in PIM-DM. It is also implemented through Hello messages sent between each router.

II. DR election

With the help of Hello messages, DR can be elected for the shared network, such as Ethernet. DR will be the unique multicast information forwarder in the network. In either the network connected to the multicast source S or the network connected to the receiver, DR must be elected as long as the network is a shared network. The DR at the receiving end sends Join messages to RP and the DR at the multicast source side sends Register messages to RP, as shown in Figure 7-3:

Figure 7-3 Diagram for DR election

Each router on the shared network sends Hello messages with the DR priority option to each other. The router with the highest DR priority is elected as the DR in the network. If the priority is the same, the router with the highest IP address is elected as the DR. When DR fails, the received Hello messages will time out. A new DR election procedure will be triggered among neighboring routers.

 

&  Note:

l      S3600 Series Ethernet Switches do not support DR priority. In a network containing S3600 Series Ethernet Switches, the DR is elected by IP addresses.

l      In a PIM-SM network, DR mainly serves as the querier of IGMPv1.

 

III. RP discovery

RP is the core router in a PIM-SM domain. The shared tree established based on the multicast routing information is rooted in RP. There is a mapping relationship between the multicast group and RP. One multicast group is mapped to one RP, and multiple multicast groups can be mapped to the same RP.

In a small and simple network, there is only a small amount of multicast information, and one RP is enough for forwarding it. In this case, you can statically specify the RP on each router in the PIM-SM domain.

However, a PIM-SM network is normally of very large scale and RP forwards a lot of multicast information. In order to reduce the workload of RP and optimize the topology of the shared tree, different multicast groups must have different RPs. In this case, RP must be elected dynamically through the auto-election mechanism and BootStrap router (BSR) must be configured.

BSR is the core management device in a PIM-SM network. It is responsible for:

l           Collecting the Advertisement messages sent by the Candidate-RP (C-RP) in the network.

l           Selecting part of the C-RP information to form the RP-set, namely, the mapping database between the multicast group and RP.

l           Advertising the RP-set to the whole network so that all the routers (including DR) in the network know the position of RP.

One or more candidate BSRs must be configured in a PIM domain. Through auto-election, the candidate BSRs elect a BSR that is responsible for collecting and advertising RP information. The auto-election among candidate BSRs works as follows:

l           Specify a PIM-SM-enabled interface when configuring a router as a candidate BSR.

l           Initially, each candidate BSR considers itself as the BSR of the PIM-SM and uses the IP address of the specified interface as the BSR address to send Bootstrap messages.

l           When a candidate BSR receives a Bootstrap message from another router, it compares the BSR address in the received Bootstrap message with its own BSR address by priority and IP address. If the priorities are the same, the candidate BSR with the higher IP address is considered better. If the received BSR address is better, the candidate BSR replaces its own BSR address with the new BSR address and no longer considers itself the BSR. Otherwise, the candidate BSR keeps its own BSR address and continues to consider itself the BSR.

Figure 7-4 shows the positions of RPs and BSRs in the network:

Figure 7-4 Diagram for the communication between RPs and BSRs

Only one BSR can be elected in a network or management domain, while multiple candidate BSRs (C-BSRs) can be configured. In this case, once the BSR fails, other C-BSRs can elect a new BSR through auto-election. Thus, service interruption is avoided.

In the same way, multiple C-RPs can be configured in a PIM-SM domain, and the RP corresponding to each multicast group is worked out through the BSR mechanism.

IV. RPT building

Assume the receiver hosts are User B, User D, and User E. When a receiver host joins a multicast group G, it sends IGMP packets to inform the leaf router directly connected to the host. Thus, the leaf router acquires the receiver information of the multicast group G, and then the leaf router sends Join messages to the upper-layer nodes in the direction of RP, as shown in Figure 7-5:

Figure 7-5 Diagram for RPT building in PIM-SM

Each router on the path from the leaf router to RP generates a (*, G) entry in its forwarding table, so the routers on the path form a branch of the RPT. A (*, G) entry represents the information sent from any source to the multicast group G. RP is the root of the RPT and the receivers are its leaves.

When the packet from the multicast source S to the multicast group G passes by RP, the packet reaches the leaf router and receiver host along the established path in RPT.

When the receiver is no longer interested in the multicast information, the multicast router nearest the receiver sends Prune messages hop by hop toward RP, in the direction reverse to the RPT. When the first upstream router receives the Prune message, it deletes the link to the downstream router from its outgoing interface list and checks whether it still has any receiver interested in the multicast information. If not, the upstream router continues to forward the Prune message to its upstream router.

V. Multicast source registration

In order to inform RP about the existence of multicast source S, when multicast source S sends a multicast packet to the multicast group G, the router directly connected to S will encapsulate the received packet into a Register packet and send it to the corresponding RP through unicast, as shown in Figure 7-6:

Figure 7-6 Diagram for SPT building in PIM-SM

When RP receives the registration information from S, it decapsulates the Register message and forwards the multicast information to the receivers along the RPT; meanwhile, it sends (S, G) Join messages toward S hop by hop. The routers along this path form a branch of the SPT. The multicast source S is the root of this SPT and RP is its destination.

The multicast information sent by the multicast source S reaches RP along the built SPT, and then RP forwards the multicast information along the built RPT.

VI. Switching from RPT to SPT

When the multicast router nearest the receiver detects that the rate of the multicast packet from RP to the multicast group G exceeds the threshold value, it sends (S, G) Join messages to the upper-layer router of the multicast source S. The Join message reaches the router nearest the multicast source (namely, the first hop router) hop by hop and all the passed routers have the (S, G) entry. As a result, a branch of SPT is built.

Then, the last hop router sends a Prune message with the RP bit to RP hop by hop. When RP receives the message, it reversely forwards the Prune message to the multicast source. Thus, the multicast information stream is switched from RPT to SPT.

After the switching from RPT to SPT, the multicast information is sent from the multicast source S to the receiver directly. Through the switching from RPT to SPT, PIM-SM can build SPT in a more economical way than PIM-DM.

7.2  Common PIM Configuration

You can configure the PIM feature of the switch in interface view. The configuration includes:

Table 7-1 Configuration tasks

Operation

Description

Related section

Enable PIM-DM (PIM-SM) on the interface

Required

Section 7.2.1  "Enabling PIM-DM (PIM-SM) on the Interface"

Configure the interval of sending Hello packets

Optional

Section 7.2.2  "Configuring the Interval of Sending Hello Packets"

Configure PIM neighbors

Optional

Section 7.2.3  "Configuring PIM Neighbors"

Clear the related PIM entries

Optional

Section 7.2.4  "Clearing the Related PIM Entries"

 

7.2.1  Enabling PIM-DM (PIM-SM) on the Interface

Table 7-2 Enable PIM-DM (PIM-SM) on the interface

Operation

Command

Description

Enter system view

system-view

Enable the multicast routing protocol

multicast routing-enable

Required

Enter VLAN interface view

interface Vlan-interface interface-number

Enable PIM-DM/PIM-SM on the current interface

pim dm / pim sm

Required

Configure the PIM protocol type on the interface

 

7.2.2  Configuring the Interval of Sending Hello Packets

PIM-DM (or PIM-SM) must first be enabled on the interface. After that, the interface sends PIM Hello packets periodically and processes the protocol packets sent by PIM neighbors.

Table 7-3 Configure the interval of sending Hello packets

Operation

Command

Description

Enter system view

system-view

Enable the multicast routing protocol

multicast routing-enable

Required

Enter VLAN interface view

interface Vlan-interface interface-number

Enable PIM-DM/PIM-SM on the current interface

pim dm / pim sm

Required

Configure the PIM protocol type on the interface.

Configure the interval of sending Hello packets on the interface

pim timer hello seconds

Optional

By default, the interval of sending Hello packets is 30 seconds.

 

  Caution:

l      When PIM-DM is enabled on an interface, PIM-SM cannot be enabled on the interface any more, and vice versa.

l      When PIM-DM is enabled on an interface of the switch, only PIM-DM can be enabled on the other interfaces of the switch, and vice versa.
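As an illustration, the following minimal sketch follows Table 7-2 and Table 7-3; VLAN-interface 10 and the 40-second interval are hypothetical values.

<H3C> system-view

[H3C] multicast routing-enable

[H3C] interface Vlan-interface 10

# Enable PIM-DM on the interface (use pim sm instead for a sparse mode domain).

[H3C-Vlan-interface10] pim dm

# Send PIM Hello packets every 40 seconds (hypothetical value; the default is 30 seconds).

[H3C-Vlan-interface10] pim timer hello 40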

 

7.2.3  Configuring PIM Neighbors

To prevent a large number of PIM neighbors from exhausting the memory of the router and causing router failure, you can limit the number of PIM neighbors on a router interface. However, the total number of PIM neighbors allowed on a router is defined by the system and cannot be modified through commands.

You can configure a basic ACL numbered 2000 to 2999 (refer to the ACL part of this manual). Only the Layer 3 switches (routers) permitted by the ACL can serve as PIM neighbors of the current interface.

Table 7-4 Configure PIM neighbors

Operation

Command

Description

Enter system view

system-view

Enable the multicast routing protocol

multicast routing-enable

Required

Enter VLAN interface view

interface Vlan-interface interface-number

Enable PIM-DM/PIM-SM on the current interface

pim dm / pim sm

Required

Configure the PIM protocol type on the interface

Configure a limit on the number of PIM neighbors on the interface

pim neighbor-limit limit

Optional

By default, the upper limit on the number of PIM neighbors on an interface is 128.

Configure the filtering policy for PIM neighbors

pim neighbor-policy acl-number

Optional

l      You can use the ACL to specify the routers that are allowed to become PIM neighbors of the interface.

l      By default, no neighbor filtering policy is configured on an interface.

 

  Caution:

If the number of existing PIM neighbors exceeds the user-defined limit, the existing PIM neighbors will not be deleted.
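As an illustration, the following minimal sketch follows Table 7-4; ACL 2002, the neighbor subnet 10.1.1.0/24, and the limit of 64 neighbors are hypothetical values.

<H3C> system-view

# Define a hypothetical basic ACL permitting the routers that are allowed to become PIM neighbors.

[H3C] acl number 2002

[H3C-acl-basic-2002] rule permit source 10.1.1.0 0.0.0.255

[H3C-acl-basic-2002] quit

[H3C] multicast routing-enable

[H3C] interface Vlan-interface 10

[H3C-Vlan-interface10] pim dm

# Accept at most 64 PIM neighbors on this interface (hypothetical limit; the default is 128).

[H3C-Vlan-interface10] pim neighbor-limit 64

# Accept PIM neighbors only from the addresses permitted by ACL 2002.

[H3C-Vlan-interface10] pim neighbor-policy 2002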

 

7.2.4  Clearing the Related PIM Entries

You can execute the reset command in user view to clear the related statistics about multicast PIM.

Table 7-5 Clear the related PIM entries

Operation

Command

Description

Clear PIM route entries

reset pim routing-table { all | { group-address [ mask group-mask | mask-length group-mask-length ] | source-address [ mask source-mask | mask-length source-mask-length ] | { incoming-interface { interface-type interface-number | null } } } * }

Perform the configuration in user view.

Clear PIM neighbors

reset pim neighbor { all | { neighbor-address | interface interface-type interface-number } * }

Perform the configuration in user view.

 

7.3  PIM-DM Configuration

Perform the following configuration to configure PIM-DM. When the router runs in a PIM-DM domain, it is recommended to enable PIM-DM on all interfaces of non-border routers.

7.3.1  Configuring Filtering Policies for Multicast Source/Group

Table 7-6 Configure filtering policies for multicast source/group

Operation

Command

Description

Enter system view

system-view

Enable the multicast routing protocol

multicast routing-enable

Required

Enter PIM view

pim

Perform source/group filter on the received multicast packets

source-policy acl-number

Optional

You can use a basic or advanced ACL to filter received multicast packets by source address, or by source and group address.

 

  Caution:

l      If you configure basic ACLs, the source address match is performed on all the received multicast packets. The packets failing to match are discarded.

l      If you configure advanced ACLs, the source address and group address match is performed on all the received multicast packets. The packets failing to match are discarded.
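As an illustration, the following minimal sketch follows Table 7-6; basic ACL 2003 and the source subnet 10.2.2.0/24 are hypothetical values.

<H3C> system-view

[H3C] multicast routing-enable

# Define a hypothetical basic ACL permitting the allowed multicast sources.

[H3C] acl number 2003

[H3C-acl-basic-2003] rule permit source 10.2.2.0 0.0.0.255

[H3C-acl-basic-2003] quit

[H3C] pim

# Discard received multicast packets whose source addresses are not permitted by ACL 2003.

[H3C-pim] source-policy 2003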

 

7.4  PIM-SM Configuration

PIM-SM configuration includes:

Table 7-7 Configuration tasks

Operation

Description

Section

Configure filtering policies for multicast sources/groups

Optional

Section 7.4.1  "Configuring Filtering Policies for Multicast Source/Group"

Configure BSR/RP

Optional

Section 7.4.2  "Configuring BSR/RP"

Configure PIM-SM domain boundary

Optional

Section 7.4.3  "Configuring PIM-SM Domain Boundary"

Filter the registration packets from RP to DR

Optional

Section 7.4.4  "Filtering the Registration Packets from RP to DR"

 

7.4.1  Configuring Filtering Policies for Multicast Source/Group

For the configuration of filtering policies for multicast source/group, refer to section 7.3.1  "Configuring Filtering Policies for Multicast Source/Group".

7.4.2  Configuring BSR/RP

Table 7-8 Configure BSR/RP

Operation

Command

Description

Enter system view

system-view

Enable the multicast routing protocol

multicast routing-enable

Required

Enter PIM view

pim

Configure candidate BSRs

c-bsr interface-type interface-number hash-mask-len [ priority ]

Optional

By default, candidate BSRs are not set for the switch and the value of priority is 0.

Configure candidate RPs

c-rp interface-type interface-number [ group-policy acl-number | priority priority ]*

Optional

l      You can use the ACL to specify the range of multicast groups that the candidate RP serves.

l      By default, candidate RPs are not set for the switch and the value of priority is 0.

Configure static RPs

static-rp rp-address [ acl-number ]

Optional

l      You can use the ACL to specify the range of multicast groups that the static RP serves.

l      By default, static RPs are not set for the switch.

Limit the range of valid BSRs

bsr-policy acl-number

Optional

l      You can use the ACL to specify the range of valid BSR addresses.

l      By default, the range of valid BSRs is not set for the switch.

Limit the range of valid C-RPs

crp-policy acl-number

Optional

l      You can use the ACL to specify the range of valid C-RP addresses and the range of multicast groups that each C-RP serves.

l      By default, the range of valid C-RPs is not set for the switch.

 

  Caution:

l      Only one candidate BSR can be configured on a Layer 3 switch. The BSR configuration on another interface will replace the former configuration.

l      It is recommended to configure both the candidate BSR and the candidate RP on a Layer 3 switch in the backbone.

l      If the range of multicast groups that RP serves is not specified when RP is configured, the RP serves all multicast groups. Otherwise, the RP serves the multicast groups within the specified range.

l      You can configure basic ACLs to filter related multicast IP addresses and control the range of multicast groups that RP serves.

l      If you use static RPs, all routers in the PIM domain must adopt the same configuration.

l      If the configured static RP address is the address of an interface in the up state on the local switch, the switch serves as the RP.

l      A static RP does not take effect while an RP elected through the BSR mechanism is in effect.

l      The PIM protocol does not need to be enabled on the interface of static RPs.

l      The limit on the range of valid BSRs is to prevent the valid BSRs in the network from being replaced maliciously. The other BSR information except the range will not be received by the Layer 3 switch, and thus the security of BSRs in the network is protected.

l      The limit on the range of C-RPs is to avoid C-RP cheating. You can limit the range of valid C-RPs and limit the range of multicast groups that each C-RP serves.
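As a complement to the dynamic BSR/RP configuration shown in section 7.6.2, the following minimal sketch shows a static RP configuration following Table 7-8; the RP address 10.1.1.1, ACL 2004, and the group range 225.1.0.0/16 are hypothetical values.

<H3C> system-view

[H3C] multicast routing-enable

# Define a hypothetical basic ACL describing the groups that the static RP serves.

[H3C] acl number 2004

[H3C-acl-basic-2004] rule permit source 225.1.0.0 0.0.255.255

[H3C-acl-basic-2004] quit

[H3C] pim

# Use 10.1.1.1 as the static RP for the groups permitted by ACL 2004; all routers in the PIM domain need the same configuration.

[H3C-pim] static-rp 10.1.1.1 2004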

 

7.4.3  Configuring PIM-SM Domain Boundary

Table 7-9 Configure PIM-SM domain boundary

Operation

Command

Description

Enter system view

system-view

Enable the multicast routing protocol

multicast routing-enable

Required

Enter VLAN interface view

interface Vlan-interface interface-number

Enable PIM-SM on the current interface

pim sm

Required

Configure the PIM protocol type on the interface.

Configure PIM-SM domain boundary

pim bsr-boundary

Required

By default, domain boundary is not set for the switch.

 

  Caution:

l      After the PIM-SM domain boundary is set, Bootstrap messages cannot pass the boundary in any direction. In this way, PIM-SM domains are divided.

l      After this feature is configured, Bootstrap messages cannot pass the boundary. However, the other PIM messages can pass the domain boundary. The network can be effectively divided into domains that use different BSRs.

 

7.4.4  Filtering the Registration Packets from RP to DR

Through the registration packet filtering mechanism in a PIM-SM network, you can control which sources are allowed to send packets to which groups on the RP; that is, the RP can filter the registration packets sent by the DR and accept only the specified packets.

Table 7-10 Filter the registration packets from RP to DR

Operation

Command

Description

Enter system view

system-view

Enable the multicast routing protocol

multicast routing-enable

Required


Enter VLAN interface view

interface Vlan-interface interface-number

Enable PIM-SM on the current interface

pim sm

Required

Configure the PIM protocol type on the interface

Quit VLAN view

quit

Enter PIM view

pim

Configure to filter the registration packets from RP to DR

register-policy acl-number

Required

l      You can use the ACL to specify the source and group addresses whose registration packets are accepted.

l      By default, the switch does not filter the registration packets from DR.

 

  Caution:

l      If a source/group entry (S, G) is denied by the ACL, if the ACL defines no action for the entry, or if no ACL is defined, the RP sends RegisterStop messages to the DR to stop the registration process of the multicast data flow.

l      Only the registration packets matching a permit rule in the ACL are accepted. If an invalid ACL is specified, the RP rejects all registration packets.
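As an illustration, the following minimal sketch follows Table 7-10 and would be configured on the RP; advanced ACL 3000, source 10.3.3.3, and group range 225.1.1.0/24 are hypothetical values.

<H3C> system-view

[H3C] multicast routing-enable

[H3C] interface Vlan-interface 10

[H3C-Vlan-interface10] pim sm

[H3C-Vlan-interface10] quit

# Define a hypothetical advanced ACL permitting registration only for the specified source/group range.

[H3C] acl number 3000

[H3C-acl-adv-3000] rule permit ip source 10.3.3.3 0 destination 225.1.1.0 0.0.0.255

[H3C-acl-adv-3000] quit

[H3C] pim

# Accept only the registration packets that match ACL 3000.

[H3C-pim] register-policy 3000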

 

7.4.5  Configuring the Threshold for Switching from RPT to SPT

PIM-SM routers initially use the RPT to forward multicast packets. If the threshold is specified as 0, the last hop switch that the packets pass initiates the switching from the RPT to the SPT.

Table 7-11 Configure the threshold for switching from RPT to SPT

Operation

Command

Description

Enter system view

system-view

Enter PIM view

pim

Configure the threshold for switching from RPT to SPT

spt-switch-threshold { traffic-rate | infinity } [ group-policy acl-number ]

Required

By default, an RPT-to-SPT switch occurs once the device receives the first multicast packet from the RPT.

 

&  Note:

When you execute the spt-switch-threshold command on an S3600 Ethernet switch, the traffic-rate argument can only be set to 0. That is, the threshold can be set to 0 or infinity.

l      With the threshold set to 0, the last hop switch switches to SPT once it receives the first multicast packet.

l      With the threshold set to infinity, the last hop switch never switches to SPT.
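As an illustration, a minimal sketch following Table 7-11:

<H3C> system-view

[H3C] pim

# Never switch from the RPT to the SPT (on the S3600, the only other choice is a threshold of 0).

[H3C-pim] spt-switch-threshold infinity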

 

7.5  Displaying and Debugging PIM

After completing the above-mentioned configurations, you can execute the display command in any view to verify the configuration by checking the displayed information.

Table 7-12 Display and maintain PIM

Configuration

Command

Description

Display PIM multicast routing tables

display pim routing-table [ { { *g [ group-address [ mask { mask-length | mask } ] ] | **rp [ rp-address [ mask { mask-length | mask } ] ] } | { group-address [ mask { mask-length | mask } ] | source-address [ mask { mask-length | mask } ] } * } | incoming-interface { interface-type interface-number | null } | { dense-mode | sparse-mode } ] *

You can execute the display command in any view.

 

Display the information about PIM interfaces

display pim interface [ interface-type interface-number ]

Display the information about PIM neighbor routers

display pim neighbor [ interface interface-type interface-number ]

Display BSR information

display pim bsr-info

Display RP information

display pim rp-info [ group-address ]

 

7.6  PIM Configuration Example

7.6.1  PIM-DM Configuration Example

I. Network requirements

Lanswitch1 is connected to Multicast Source through VLAN-interface10, to Lanswitch2 through VLAN-interface11 and to Lanswitch3 through VLAN-interface12. Through PIM-DM, multicast is implemented among Receiver 1, Receiver 2, and Multicast Source.

II. Network diagram

Figure 7-7 Network diagram for PIM-DM configuration

III. Configuration procedure

1)          Configure Lanswitch1.

# Enable multicast routing protocol.

<H3C> system-view

[H3C] multicast routing-enable

# Configure the VLAN interfaces and enable PIM-DM on them.

[H3C] vlan 10

[H3C-vlan10] port Ethernet 1/0/2 to Ethernet 1/0/3

[H3C-vlan10] quit

[H3C] vlan 11

[H3C-vlan11] port Ethernet 1/0/4 to Ethernet 1/0/5

[H3C-vlan11] quit

[H3C] vlan 12

[H3C-vlan12] port Ethernet 1/0/6 to Ethernet 1/0/7

[H3C-vlan12] quit

[H3C] interface Vlan-interface 10

[H3C-Vlan-interface10] ip address 1.1.1.1 255.255.0.0

[H3C-Vlan-interface10] pim dm

[H3C-Vlan-interface10] quit

[H3C] interface Vlan-interface 11

[H3C-Vlan-interface11] ip address 2.2.2.2 255.255.0.0

[H3C-Vlan-interface11] pim dm

[H3C-Vlan-interface11] quit

[H3C] interface Vlan-interface 12

[H3C-Vlan-interface12] ip address 3.3.3.3 255.255.0.0

[H3C-Vlan-interface12] pim dm

2)         Configure Lanswitch2.

# Enable multicast routing protocol.

<H3C> system-view

[H3C] multicast routing-enable

# Enable IGMP and PIM-DM on the ports.

[H3C] vlan 20

[H3C-vlan20] port Ethernet 1/0/2 to Ethernet 1/0/3

[H3C-vlan20] quit

[H3C] vlan 11

[H3C-vlan11] port Ethernet 1/0/4 to Ethernet 1/0/5

[H3C-vlan11] quit

[H3C] interface Vlan-interface 20

[H3C-Vlan-interface20] ip address 6.6.6.6 255.255.0.0

[H3C-Vlan-interface20] igmp enable

[H3C-Vlan-interface20] pim dm

[H3C-Vlan-interface20] quit

[H3C] interface Vlan-interface 11

[H3C-Vlan-interface11] ip address 4.4.4.4 255.255.0.0

[H3C-Vlan-interface11] pim dm

3)         The configuration for Lanswitch3 is similar to that of Lanswitch2 and is thus omitted.

7.6.2  PIM-SM Configuration Example

I. Network requirements

In the actual network, all Ethernet switches are reachable to one another.

l           LS_A is connected to LS_B through VLAN-interface10, to Host A through VLAN-interface11 and to LS_C through VLAN-interface12.

l           LS_B is connected to LS_A through VLAN-interface10, to LS_C through VLAN-interface11 and to LS_D through VLAN-interface12.

l           LS_C is connected to Host B through VLAN-interface10, to LS_B through VLAN-interface11 and to LS_A through VLAN-interface12.

Host A is the receiver of the multicast group whose multicast IP address is 225.0.0.1. Host B begins to send data to the destination 225.0.0.1 and LS_A receives the multicast data from Host B through LS_B.

II. Network diagram

Figure 7-8 Network diagram for PIM-SM configuration

III. Configuration procedure

1)         Configure LS_A.

# Enable PIM-SM.

<H3C> system-view

[H3C] multicast routing-enable

[H3C] vlan 10

[H3C-vlan10] port Ethernet 1/0/2 to Ethernet 1/0/3

[H3C-vlan10] quit

[H3C] interface Vlan-interface 10

[H3C-Vlan-interface10] pim sm

[H3C-Vlan-interface10] quit

[H3C] vlan 11

[H3C-vlan11] port Ethernet 1/0/4 to Ethernet 1/0/5

[H3C-vlan11] quit

[H3C] interface Vlan-interface 11

[H3C-Vlan-interface11] igmp enable

[H3C-Vlan-interface11] pim sm

[H3C-Vlan-interface11] quit

[H3C] vlan 12

[H3C-vlan12] port Ethernet 1/0/6 to Ethernet 1/0/7

[H3C-vlan12] quit

[H3C] interface Vlan-interface 12

[H3C-Vlan-interface12] pim sm

[H3C-Vlan-interface12] quit

2)         Configure LS_B.

# Enable PIM-SM.

<H3C> system-view

[H3C] multicast routing-enable

[H3C] vlan 10

[H3C-vlan10] port Ethernet 1/0/2 to Ethernet 1/0/3

[H3C-vlan10] quit

[H3C] interface Vlan-interface 10

[H3C-Vlan-interface10] pim sm

[H3C-Vlan-interface10] quit

[H3C] vlan 11

[H3C-vlan11] port Ethernet 1/0/4 to Ethernet 1/0/5

[H3C-vlan11] quit

[H3C] interface Vlan-interface 11

[H3C-Vlan-interface11] igmp enable

[H3C-Vlan-interface11] pim sm

[H3C-Vlan-interface11] quit

[H3C] vlan 12

[H3C-vlan12] port Ethernet 1/0/6 to Ethernet 1/0/7

[H3C-vlan12] quit

[H3C] interface Vlan-interface 12

[H3C-Vlan-interface12] pim sm

[H3C-Vlan-interface12] quit

# Configure candidate BSRs.

[H3C] pim

[H3C-pim] c-bsr Vlan-interface 10 30 2

# Configure candidate RPs.

[H3C] acl number 2000

[H3C-acl-basic-2000] rule permit source 225.0.0.0 0.255.255.255

[H3C] pim

[H3C-pim] c-rp Vlan-interface 10 group-policy 2000

[H3C-pim] quit

# Configure the PIM domain boundary.

[H3C] interface Vlan-interface 12

[H3C-Vlan-interface12] pim bsr-boundary

After VLAN-interface 12 is configured as the PIM domain boundary, LS_D cannot receive BSR information from LS_B any more; that is, LS_D is excluded from the PIM domain.

3)         Configure LS_C.

# Enable PIM-SM.

<H3C> system-view

[H3C] multicast routing-enable

[H3C] vlan 10

[H3C-vlan10] port Ethernet 1/0/2 to Ethernet 1/0/3

[H3C-vlan10] quit

[H3C] interface Vlan-interface 10

[H3C-Vlan-interface10] pim sm

[H3C-Vlan-interface10] quit

[H3C] vlan 11

[H3C-vlan11] port Ethernet 1/0/4 to Ethernet 1/0/5

[H3C-vlan11] quit

[H3C] interface Vlan-interface 11

[H3C-Vlan-interface11] pim sm

[H3C-Vlan-interface11] quit

[H3C] vlan 12

[H3C-vlan12] port Ethernet 1/0/6 to Ethernet 1/0/7

[H3C-vlan12] quit

[H3C] interface Vlan-interface 12

[H3C-Vlan-interface12] pim sm

[H3C-Vlan-interface12] quit

7.7  Troubleshooting PIM

Symptom: The router cannot set up multicast routing tables correctly.

Solution: You can troubleshoot PIM according to the following procedure.

Make sure that the unicast routing is correct before troubleshooting PIM.

l           Because PIM-SM needs the support of RP and BSR, you must execute the display pim bsr-info command to see whether BSR information exists. If not, you must check whether there is any unicast route to the BSR. Then, use the display pim rp-info command to check whether the RP information is correct. If RP information does not exist, you must check whether there is any unicast route to RP.

l           Use the display pim neighbor command to check whether the neighboring relationship is correctly established.

 


Chapter 8  MSDP Configuration

 

&  Note:

l      Because the multicast source discovery protocol (MSDP) does not support the IRF feature, MSDP cannot be configured in a fabric.

l      Routers and router icons in this chapter represent both routers in the common sense and Ethernet switches running routing protocols.

 

8.1  Overview

Internet service providers (ISPs) are unwilling to rely on their competitors' devices to forward multicast traffic. On the other hand, ISPs want to obtain information from multicast sources wherever those sources reside and forward the information to their own members. MSDP is designed to address this issue: it is used to discover multicast sources in other protocol independent multicast sparse mode (PIM-SM) domains. MSDP is valid only for the any-source multicast (ASM) model.

MSDP describes a mechanism for interconnecting multiple PIM-SM domains. It requires that the intra-domain multicast routing protocol be PIM-SM, and it allows the rendezvous points (RPs) of different domains to share multicast source information.

I. MSDP peers

The RP in a PIM-SM domain can sense the existence of an active multicast source S, if any, in this domain through multicast source register messages. If a PIM-SM domain managed by another ISP wants to obtain information from this multicast source, the routers in both PIM-SM domains must establish an MSDP peering relationship with each other, as shown in Figure 8-1:

Figure 8-1 MSDP peering relationship

 

&  Note:

MSDP peers are interconnected over TCP connections (through port 639). A TCP connection can be established between RPs in different PIM-SM domains, between RPs in the same PIM-SM domain, between an RP and a common router, or between common routers. Figure 8-1 shows the MSDP peering relationship between RPs. Unless otherwise specified, examples in the following descriptions are based on MSDP peering relationship between RPs.

 

An active multicast source S exists in the PIM-SM1 domain. RP1 in this domain learns the specific location of the multicast source S through multicast source register messages, and then sends source active (SA) messages periodically to MSDP peers (RP nodes) in other PIM-SM domains. An SA message contains the IP address of the multicast source S, the multicast group address G, the address of the RP that has generated the SA message, and the first multicast data received by the RP in the PIM-SM1 domain. The SA message is forwarded by peers. Finally, the SA message reaches all the MSDP peers. In this way, the information of multicast source S in the PIM-SM domain is delivered to all PIM-SM domains.

By performing reverse path forwarding (RPF) check, MSDP peers accept SA messages only from the correct paths and forward the SA messages, thus avoiding SA message loop. In addition, you can configure a mesh group among MSDP peers to avoid SA flooding among MSDP peers.

Assume that RP4 in the PIM-SM4 domain receives the SA message. RP4 checks whether receivers exist in the corresponding multicast group. If yes, RP4 sends a (S, G) Join message hop by hop to the multicast source S, thus creating a shortest path tree (SPT) based on the multicast source S. However, a rendezvous point tree (RPT) exists between RP4 and receivers in the PIM-SM4 domain.

 

&  Note:

Through MSDP, a PIM-SM domain receiving information from the multicast source S does not rely on RPs in other PIM-SM domains; that is, receivers can directly join the SPT rooted at the multicast source without passing through RPs in other PIM-SM domains.

 

II. MSDP application

You can also implement Anycast RP through MSDP. Anycast RP refers to an application in which an MSDP peering relationship is established between two RPs with the same IP address in the same PIM-SM domain, to enable load balancing and redundancy backup between the two RPs. The candidate RP (C-RP) function is enabled on an interface (typically a loopback interface) of each of multiple routers in the same PIM-SM domain, and these interfaces are assigned the same IP address. An MSDP peering relationship is then established among these routers, as shown in Figure 8-2.

Figure 8-2 Typical networking of Anycast RP

Typically, a multicast source S registers with the nearest RP to create an SPT, and receivers also send Join messages to the nearest RP to construct an RPT. Therefore, it is likely that the RP with which the multicast source has registered is not the RP that receivers join. To ensure information consistency between RPs, the RPs, serving as MSDP peers of one another, learn information of the peer multicast source by sending SA messages to one another. As a result, each RP can know all the multicast sources in the PIM-SM domain. In this way, the receivers connected to each RP can receive multicast data sent by all the multicast sources in the entire PIM-SM domain.

As described above, RPs exchange information with one another through MSDP, a multicast source registers with the nearest RP, and receivers join the nearest RPT. In this way, RP load balancing is achieved. When an RP fails, the multicast sources and receivers that previously registered with or joined it automatically register with or join another nearest RP, thus implementing RP redundancy backup.

8.1.1  MSDP Working Mechanism

I. Identifying a multicast source and receiving multicast data

A network contains four PIM-SM domains, PIM-SM1, PIM-SM2, PIM-SM3, and PIM-SM4. An MSDP peering relationship is established between RPs in different domains. Multicast group members exist in the PIM-SM1 and PIM-SM4 domains. See Figure 8-3.

Figure 8-3 Identifying the multicast source and receiving multicast data

The complete interoperation process between a multicast source S in the PIM-SM1 domain and receivers in the PIM-SM1 and PIM-SM4 domains is as follows:

1)         The multicast source S in the PIM-SM1 domain begins to send data packets.

2)         The designated router (DR) connected to the multicast source S encapsulates the received data in a Register message, and then forwards the message to RP1 in the PIM-SM1 domain.

3)         RP1 in the PIM-SM1 domain decapsulates the Register message and forwards the multicast data to all the members in the domain along the RPT. The members in the domain can then choose whether to switch to the SPT.

4)         At the same time, RP1 in the PIM-SM1 domain generates an SA message and sends the message to the corresponding MSDP peers (RPs in the PIM-SM2 and PIM-SM3 domains). Finally, the SA message is forwarded to the RP in the PIM-SM4 domain. The SA message contains the IP address of the multicast source, the multicast group address, the address of the RP that has generated the SA message, and the first multicast data received by the RP in the PIM-SM1 domain.

5)         If group members (namely, receivers) exist in the PIM-SM domains where MSDP peers of RP1 reside (for example, if group members exist in the PIM-SM4 domain), RP4 decapsulates the multicast data in the SA message and distributes the multicast data to receivers along the RPT. RP4 also sends a Join message to the multicast source S at the same time.

6)         To avoid SA loop, MSDP peers perform RPF check on the received SA message. After the RPF path is established, the data from the multicast source S is directly sent to RP4 in the PIM-SM4 domain. Then, RP4 forwards the data along the RPT within the domain. Now, the last-hop router connected to group members in the PIM-SM4 domain selects whether to switch to the SPT.

II. Forwarding messages between MSDP peers and performing RPF check

To establish an MSDP peering relationship between routers, you have to create routes between routers for SA messages to travel.

Assume that three autonomous systems (ASs) exist. They are AS1, AS2, and AS3. Each AS has a PIM-SM domain associated with it. Each PIM-SM domain contains at least one RP. See Figure 8-4.

Figure 8-4 Forwarding SA messages between MSDP peers

As shown above, RP1 belongs to AS1. RP2, RP3 and RP4 belong to AS2. RP5 and RP6 belong to AS3. An MSDP peering relationship exists among these RPs. RP2, RP3, and RP4 form a mesh group. These MSDP peers perform RPF check and process SA messages forwarded to one another according to the following rules:

l           If an MSDP peer sending an SA message is an RP in the PIM-SM domain where the multicast source resides (for example, when RP1 sends an SA message to RP2), the receiver accepts the SA message and forwards the message to other peers.

l           If an RP has only one MSDP peer (for example, when RP2 sends an SA message to RP1), the receiver accepts the SA message from the peer.

l           If an SA message comes from a static RPF peer (for example, when RP4 sends an SA message to RP5), the receiver accepts the SA message and forwards it to other peers.

l           If an SA message comes from a peer that belongs to the same MSDP mesh group with the receiver, the receiver accepts the SA message and forwards it to peers out of the mesh group. For example, when RP2 sends an SA message to RP4, RP4 accepts the message and forwards it to RP5 and RP6.

l           If an SA message comes from an MSDP peer in the same AS, and this peer is the next hop on the optimal path to the RP in the PIM-SM domain where the multicast source resides, the receiver accepts the SA message and forwards it to other peers. For example, when RP4 sends an SA message to RP5, RP5 receives the message and forwards it to RP6.

l           If an SA message comes from an MSDP peer in a different AS, and this AS is the next AS of the RP optimal path in the PIM-SM domain where the multicast source resides (for example, when RP4 sends an SA message to RP6), the receiver accepts the SA message and forwards it to other peers.

l           The receiver does not accept or forward other SA messages.

 

&  Note:

S3600 series switches do not support inter-domain routing (BGP protocol); therefore, the fifth rule described above is adopted in RPF check.

 

8.2  Configuring MSDP Basic Functions

To enable exchange of information from the multicast source S between two PIM-SM domains, you need to establish MSDP peering relationships between RPs in these PIM-SM domains. In this way, the information from the multicast source can be sent through SA messages between the MSDP peers, and the receivers in other PIM-SM domains can finally receive the multicast source information.

A route is required between two routers that are MSDP peers to each other. Through this route, the two routers can transfer SA messages between PIM-SM domains. For an area containing only one MSDP peer, known as a stub area, the route is not compulsory. SA messages are transferred in a stub area through the configuration of static RPF peers. In addition, the use of static RPF peers can avoid RPF check on the received SA messages, thus saving resources.

Before configuring static RPF peers, you must create an MSDP peering connection. If you configure only one MSDP peer on a router, that peer acts as a static RPF peer. If you configure multiple static RPF peers, they are handled according to different rules, depending on whether the rp-policy keyword is configured, as described below.

When configuring multiple static RPF peers on the same router, use one of the following two configuration methods:

l           If all the peers use the rp-policy keyword: multiple static RPF peers take effect at the same time. The RPs in SA messages are filtered based on the configured prefix list, and only the SA messages whose RP addresses pass the filtering are accepted. If multiple static RPF peers are configured with the same rp-policy keyword, when any of the peers receives an SA message, it forwards the SA message to the other peers.

l           If none of the peers uses the rp-policy keyword: based on the configuration sequence, only the first static RPF peer whose connection is in the UP state is active. All the SA messages from this peer are accepted, while the SA messages from other static RPF peers are discarded. Once the active static RPF peer fails (for example, its configuration is removed or its connection is terminated), the next static RPF peer in the configuration sequence whose connection is in the UP state becomes the active static RPF peer.

8.2.1  Configuration Prerequisites

Before configuring basic MSDP functions, you need to configure:

l           A unicast routing protocol

l           PIM-SM basic functions

8.2.2  Configuring MSDP Basic Functions

Table 8-1 Configure MSDP basic functions

Operation

Command

Description

Enter system view

system-view

Enable IP multicast routing

multicast routing-enable

Required

Enable MSDP function and enter MSDP view

msdp

Required

Create an MSDP peer connection

peer peer-address connect-interface interface-type interface-number

Required

To establish an MSDP peer connection, you must configure the parameters on both peers. The peers are identified by an address pair (the address of the interface on the local router and the IP address of the remote MSDP peer).

Configure a static RPF peer

static-rpf-peer peer-address [ rp-policy ip-prefix-name ]

Optional

For an area containing only one MSDP peer, if BGP or MBGP does not run in this area, you need to configure a static RPF peer.
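The following is a minimal sketch of establishing an MSDP peering relationship; the peer address 10.1.1.2, the local interface VLAN-interface 10, and the use of a static RPF peer are illustrative assumptions, and the mirror-image configuration is required on the remote peer:

<H3C> system-view

[H3C] multicast routing-enable

[H3C] msdp

[H3C-msdp] peer 10.1.1.2 connect-interface Vlan-interface 10

[H3C-msdp] static-rpf-peer 10.1.1.2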

 

8.3  Configuring Connection between MSDP Peers

An AS may contain multiple MSDP peers. To avoid SA message flooding among these MSDP peers, you can use the MSDP mesh group mechanism. When multiple MSDP peers are fully connected with one another, they can form a mesh group. When an MSDP peer in the mesh group receives an SA message from outside the mesh group, it forwards the message to the other members of the group. A mesh group member, however, neither performs RPF check on SA messages coming from within the mesh group nor forwards them to other members of the group. This avoids SA message flooding and removes the need to run BGP or MBGP between the MSDP peers, thus simplifying the RPF check mechanism.

The sessions between MSDP peers can be terminated and reactivated as required. When a session between MSDP peers is terminated, the TCP connection is closed and no reconnection attempts are made, but the configuration information is kept.

8.3.1  Configuration Prerequisites

Before configuring an MSDP peer connection, you need to configure:

l           A unicast routing protocol

l           Basic functions of IP multicast

l           PIM-SM basic functions

l           MSDP basic functions

Table 8-2 Configuration tasks

Operation

Description

Related section

Configure description information for MSDP peers

Required

Section 8.3.2  "Configuring Description Information for MSDP Peers"

Configure Anycast RP application

Optional

Section 8.3.3  "Configuring Anycast RP Application"

Configure an MSDP mesh group

Optional

Section 8.3.4  "Configuring an MSDP Mesh Group"

Configure MSDP peer connection control

Optional

Section 8.3.5  "Configuring MSDP Peer Connection Control"

 

8.3.2  Configuring Description Information for MSDP Peers

You can configure description information for each MSDP peer to manage and memorize the MSDP peers.

Table 8-3 Configure description information for an MSDP peer

Operation

Command

Description

Enter system view

system-view

Enter MSDP view

msdp

Configure description information for an MSDP peer

peer peer-address description text

Optional

The peer-address argument is the address of the peer. You can execute this command multiple times to configure descriptions for multiple peers.

By default, an MSDP peer has no description text.
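For example, the following sketch adds a description to an MSDP peer; the peer address and the description text are illustrative:

<H3C> system-view

[H3C] msdp

[H3C-msdp] peer 10.1.1.2 description peer-to-ISP-A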

 

8.3.3  Configuring Anycast RP Application

If you configure the same interface address (usually a loopback interface address) on two RPs in the same PIM-SM domain and establish an MSDP peering relationship between them, the two RPs form an Anycast RP pair. To prevent the RPF check on SA messages between the MSDP peers from failing, you must configure the RP address to be carried in the SA messages, typically the address of an interface that is unique to each RP.

Table 8-4 Configure Anycast RP application

Operation

Command

Description

Enter system view

system-view

Enter MSDP view

msdp

Create an MSDP peer connection

peer peer-address connect-interface interface-type interface-number

Required

Configure the RP address to be carried in SA messages

originating-rp interface-type interface-number

Required

By default, the RP address in SA messages is the RP address configured by PIM.

 

&  Note:

In Anycast RP application, C-BSR and C-RP must be configured on different devices or ports.

 

8.3.4  Configuring an MSDP Mesh Group

Configure a mesh group name on all the peers that will become members of the MSDP mesh group so that the peers are fully connected with one another in the mesh group.

Table 8-5 Configure an MSDP mesh group

Operation

Command

Description

Enter system view

system-view

Enter MSDP view

msdp

Add an MSDP peer to a mesh group

peer peer-address mesh-group name

Required

This command must be configured on all the peers in the mesh group; therefore, you need to execute this command multiple times, once for each peer.

 

&  Note:

l      Before you configure an MSDP mesh group, make sure that the routers are fully connected with one another.

l      The same group name must be configured on all the peers.

l      If you add the same MSDP peer to multiple mesh groups, only the latest configuration takes effect.
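For example, the following sketch adds two MSDP peers to a mesh group named mesh-a on the local router; the peer addresses and the group name are illustrative, and the same group name must be configured on every member:

<H3C> system-view

[H3C] msdp

[H3C-msdp] peer 10.1.2.2 mesh-group mesh-a

[H3C-msdp] peer 10.1.3.3 mesh-group mesh-a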

 

8.3.5  Configuring MSDP Peer Connection Control

The connections between MSDP peers can be flexibly controlled. You can temporarily disable an MSDP peering relationship by shutting down the MSDP peer; as a result, SA messages can no longer be transmitted between the two peers. In addition, when resetting the peering relationship between faulty MSDP peers or bringing faulty MSDP peers back to work, you can adjust the retry interval for establishing the peering relationship through the following configuration.

Table 8-6 Configure MSDP peer connection control

Operation

Command

Description

Enter system view

system-view

Enter MSDP view

msdp

Shut down an MSDP peer

shutdown peer-address

Optional

Configure retry interval of setting up an MSDP peer connection

timer retry seconds

Optional

The default value is 30 seconds.
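For example, the following sketch temporarily shuts down a peering session and lengthens the connection retry interval; the peer address and the 60-second interval are illustrative:

<H3C> system-view

[H3C] msdp

[H3C-msdp] shutdown 10.1.1.2

[H3C-msdp] timer retry 60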

 

8.4  Configuring SA Message Transmission

An SA message contains the IP address of the multicast source S, the multicast group address G, and the RP address. In addition, it contains the first multicast data received by the RP in the domain where the multicast source resides. For bursty multicast data, if the interval between data bursts exceeds the SA message hold time, the multicast data must be encapsulated in SA messages; otherwise, the receivers will never receive the multicast source information.

By default, when a new receiver joins, a router does not send any SA request message to its MSDP peer but has to wait for the next SA message. This defers the reception of the multicast information by the receiver. In order for the new receiver to know about the currently active multicast source as quickly as possible, the router needs to send SA request messages to the MSDP peer.

Generally, a router accepts all SA messages sent by all MSDP peers and sends all SA messages to all MSDP peers. By configuring the rules for filtering SA messages to receive/send, you can effectively control the transmission of SA messages among MSDP peers. For forwarded SA messages, you can also configure a Time-to-Live (TTL) threshold to control the range where SA messages carrying encapsulated data are transmitted.

To reduce the delay in obtaining the multicast source information, you can cache SA messages on the router. The number of SA messages cached must not exceed the system limit. The more messages are cached, the more router memory is occupied.

8.4.1  Configuration Prerequisites

Before you configure SA message transmission, perform the following tasks:

l           Configuring a unicast routing protocol.

l           Configuring basic IP multicast functions.

l           Configuring basic PIM-SM functions.

l           Configuring basic MSDP functions.

Table 8-7 Configuration tasks

Operation

Description

Related section

Configure the transmission and filtering of SA request messages

Optional

Section 8.4.2  "Configuring the Transmission and Filtering of SA Request Messages"

Configure a rule for filtering the multicast source of SA messages

Optional

Section 8.4.3  "Configuring a Rule for Filtering the Multicast Sources of SA Messages"

Configure a rule for filtering received and forwarded SA messages

Optional

Section 8.4.4  "Configuring a Rule for Filtering Received and Forwarded SA Messages"

Configure SA message cache

Optional

Section 8.4.5  "Configuring SA Message Cache"

 

8.4.2  Configuring the Transmission and Filtering of SA Request Messages

After you enable the sending of SA request messages, when a router receives a Join message, it sends an SA request message to the specified remote MSDP peer, which responds with the SA messages it has cached. In this way, the router immediately learns about all the active multicast sources. By default, the router does not send any SA request message to its MSDP peers upon receipt of a Join message; instead, it waits for the next SA message.

The SA messages that the remote MSDP peer responds with come from its SA cache; therefore, the SA message caching mechanism must be enabled on that peer in advance. Typically, only routers that cache SA messages can respond to SA request messages.

After you configure a rule for filtering received SA request messages, if no ACL is specified, all SA request messages sent by the corresponding MSDP peer are ignored; if an ACL is specified, only the SA request messages that match the ACL rule are accepted, and the others are ignored.

Table 8-8 Configure the transmission and filtering of SA request messages

Operation

Command

Description

Enter system view

system-view

Enter MSDP view

msdp

Enable SA message caching mechanism

cache-sa-enable

Optional

By default, the router caches the SA state upon receipt of an SA message.

Enable MSDP peers to send SA request messages

peer peer-address request-sa-enable

Optional

By default, upon receipt of a Join message, the router sends no SA request message to its MSDP peer but waits for the next SA message.

Configure a rule for filtering the SA request messages received from an MSDP peer

peer peer-address sa-request-policy [ acl acl-number ]

Optional

You can use an ACL to define the rule for filtering the related multicast group IP addresses.

By default, a router receives all SA request messages from the MSDP peer.
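For example, the following sketch enables the local router to send SA request messages to a peer upon receiving Join messages, and filters the SA request messages received from that peer; the peer address and basic ACL 2001, which is assumed to permit the multicast groups of interest, are illustrative:

<H3C> system-view

[H3C] msdp

[H3C-msdp] peer 10.1.1.2 request-sa-enable

[H3C-msdp] peer 10.1.1.2 sa-request-policy acl 2001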

 

8.4.3  Configuring a Rule for Filtering the Multicast Sources of SA Messages

An RP filters each registered source to control the information about active sources advertised in SA messages. An MSDP peer can be configured so that, when MSDP creates an SA message, only the (S, G) entries in the multicast routing table that satisfy the filtering rule are advertised; that is, you can control which (S, G) entries in the local multicast routing table are advertised to other PIM-SM domains. If the import-source command is executed without the acl keyword, no source is advertised in SA messages.

Table 8-9 Configure a rule for filtering the multicast sources of SA messages

Operation

Command

Description

Enter system view

system-view

Enter MSDP view

msdp

Configure a rule for filtering the multicast sources advertised in SA messages

import-source [ acl acl-number ]

Optional

You can use an ACL to define the rule for filtering the related multicast group IP addresses.

By default, all the (S, G) entries in the domain are advertised in the SA message.
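For example, the following sketch advertises only the (S, G) entries permitted by basic ACL 2001; the ACL number and the address range are illustrative, and the ACL is assumed here to match the multicast group addresses of interest, as described in the table above:

<H3C> system-view

[H3C] acl number 2001

[H3C-acl-basic-2001] rule permit source 225.1.0.0 0.0.255.255

[H3C-acl-basic-2001] quit

[H3C] msdp

[H3C-msdp] import-source acl 2001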

 

8.4.4  Configuring a Rule for Filtering Received and Forwarded SA Messages

Besides controlling the creation of source information, you can also control the forwarding and reception of source information. You can control the reception of SA messages by using the MSDP inbound filter (corresponding to the import keyword), and control the forwarding of SA messages by using either the MSDP outbound filter (corresponding to the export keyword) or the TTL threshold. By default, an MSDP peer receives and forwards all SA messages.

MSDP inbound/outbound filter implements the following functions:

l           Filtering out all (S, G) entries

l           Receiving/forwarding only the SA messages permitted by advanced ACL rules (You can configure ACL rules for filtering source IP addresses and group IP addresses.)

An SA message carrying encapsulated data can reach the specified MSDP peer outside the domain only when the TTL in its IP header exceeds the threshold; therefore, you can control the forwarding of SA messages that carry encapsulated data by configuring the TTL threshold.

Table 8-10 Configure a rule for filtering received and forwarded SA messages

Operation

Command

Description

Enter system view

system-view

Enter MSDP view

msdp

Configure a rule for filtering received (import) and forwarded (export) SA messages

peer peer-address sa-policy { import | export } [ acl acl-number ]

Optional

By default, no filtering is imposed on SA messages to be received or forwarded; that is, all SA messages from MSDP peers are received and forwarded.

Configure the minimum TTL for the multicast packets sent to the specified MSDP peer

peer peer-address minimum-ttl ttl-value

Optional

By default, the value of TTL threshold is 0.
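For example, the following sketch applies an inbound SA filter and a TTL threshold to a peer; the peer address, advanced ACL 3001 (assumed to be configured to permit the desired source and group addresses), and the TTL value of 10 are illustrative:

<H3C> system-view

[H3C] msdp

[H3C-msdp] peer 10.1.1.2 sa-policy import acl 3001

[H3C-msdp] peer 10.1.1.2 minimum-ttl 10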

 

8.4.5  Configuring SA Message Cache

With the SA message caching mechanism enabled on the router, a new member that joins a group can obtain all the active sources for that group directly from the SA cache and join the corresponding SPT, instead of waiting for the next SA message.

You can configure the number of SA entries cached for each MSDP peer on the router by executing the following command, but the number must be within the system limit. To protect the router against denial of service (DoS) attacks, you can manually configure the maximum number of SA messages cached on the router. Generally, the configured number should be less than the system limit.

Table 8-11 Configure SA message cache

Operation

Command

Description

Enter system view

system-view

Enter MSDP view

msdp

Enable SA message caching mechanism

cache-sa-enable

Optional

By default, the SA message caching mechanism is enabled.

Configure the maximum number of SA messages cached

peer peer-address sa-cache-maximum sa-limit

Optional

By default, the maximum number of SA messages cached on a router is 2,048.
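For example, the following sketch keeps SA caching enabled and lowers the cache limit for a peer; the peer address and the limit of 1024 are illustrative:

<H3C> system-view

[H3C] msdp

[H3C-msdp] cache-sa-enable

[H3C-msdp] peer 10.1.1.2 sa-cache-maximum 1024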

 

8.5  Displaying and Maintaining MSDP Configuration

I. Displaying and debugging MSDP configuration

After completing the above-mentioned configuration, you can use the display commands in any view to display the MSDP running information and verify the configuration result.

In user view, you can execute the reset commands to clear MSDP connections, caches, and statistics.

Table 8-12 Display and debug MSDP configuration

Operation

Command

Description

Display the brief information of MSDP peer state

display msdp brief

You can execute the display command in any view.

Display the detailed information of MSDP peer status

display msdp peer-status [ peer-address ]

Display the (S, G) state learned from MSDP peers

display msdp sa-cache [ group-address | [ source-address ] ] [ autonomous-system-number ]

Display the number of sources and groups in the MSDP cache

display msdp sa-count [ autonomous-system-number ]

Reset the TCP connection with the specified MSDP peer

reset msdp peer peer-address

Clear the cached SA messages

reset msdp sa-cache [ group-address ]

Clear the statistics information of the specified MSDP peer without resetting the MSDP peer

reset msdp statistics [ peer-address ]

 

II. Tracing the transmission path of an SA message over the network

You can use the msdp-tracert command in any view to trace the path along which the multicast data travels from the multicast source to the destination receiver over the network, so as to locate errors, if any.

Table 8-13 Trace the transmission path of an SA message over the network

Operation

Command

Description

Trace the transmission path of an SA message over the network

msdp-tracert source-address group-address rp-address [ max-hops max-hops ] [ next-hop-info | sa-info | peer-info ]* [ skip-hops skip-hops ]

You can execute the msdp-tracert command in any view.

Trace the transmission path of messages sent by the multicast source over the network

mtracert source-address [ group-address | last-hop-router-address group-address ]

You can execute the mtracert command in any view.

 

You can locate message loss and configuration errors by tracing the network path of the specified (S, G, RP) entries. Once the transmission path of SA messages is determined, correct configuration can prevent the flooding of SA messages.
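For example, the following sketch traces the SA transmission path for a given (S, G, RP) triplet; the source, group, and RP addresses are illustrative:

<H3C> msdp-tracert 10.10.1.2 225.1.1.1 10.1.1.1 max-hops 10 sa-info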

8.6  MSDP Configuration Example

8.6.1  Configuration Example of Anycast RP Application

I. Network requirements

The PIM-SM network is a single-BSR administrative domain with multiple multicast sources and receivers. With Anycast RP configured in the PIM-SM domain, when a new member joins a multicast group, the switch directly connected to the receiver can send a Join message to the topologically nearest RP.

OSPF runs in the PIM-SM network to provide unicast routes, and an MSDP peering relationship is established between SwitchC and SwitchD. Meanwhile, the same IP address is configured on the Loopback10 interfaces of SwitchC and SwitchD, C-BSR and C-RP are configured on loopback interfaces of both switches, and the MSDP peering connection is set up over the Loopback0 interfaces.

II. Network diagram

Figure 8-5 Network diagram for Anycast RP configuration

III. Configuration procedure

1)         Configure interface IP addresses and unicast routing protocol on the switches.

In the PIM-SM domain, configure the interface IP addresses on the switches and interconnect the switches through OSPF. Configure the IP address and mask of each interface according to Figure 8-5. The details are omitted here.

2)         Enable multicast and configure PIM-SM.

# Enable multicast on SwitchC and enable PIM-SM on all interfaces. The configuration procedure on the other switches is similar to that on SwitchC. The details are omitted here.

<SwitchC> system-view

[SwitchC] multicast routing-enable

[SwitchC] interface Vlan-interface 100

[SwitchC-Vlan-interface100] pim sm

[SwitchC-Vlan-interface100] quit

[SwitchC] interface Vlan-interface 200

[SwitchC-Vlan-interface200] pim sm

[SwitchC-Vlan-interface200] quit

[SwitchC] interface Vlan-interface 110

[SwitchC-Vlan-interface110] pim sm

[SwitchC-Vlan-interface110] quit

# Configure the same Loopback10 interface address on SwitchC and SwitchD and configure the locations of C-BSR and C-RP. The configuration procedure on SwitchD is similar to that on SwitchC. The details are omitted here.

[SwitchC] interface loopback 10

[SwitchC-LoopBack10] ip address 10.1.1.1 255.255.255.255

[SwitchC-LoopBack10] pim sm

[SwitchC-LoopBack10] quit

[SwitchC] pim

[SwitchC-pim] c-bsr loopback 10

[SwitchC-pim] c-rp loopback 0

[SwitchC-pim] quit

3)         Configure an MSDP peer.

# Configure an MSDP peer on Loopback0 on SwitchC.

[SwitchC] msdp

[SwitchC-msdp] originating-rp loopback0

[SwitchC-msdp] peer 2.2.2.2 connect-interface loopback0

[SwitchC-msdp] quit

# Configure an MSDP peer on Loopback0 on SwitchD.

[SwitchD] msdp

[SwitchD-msdp] originating-rp loopback0

[SwitchD-msdp] peer 1.1.1.1 connect-interface loopback0

[SwitchD-msdp] quit

8.7  Troubleshooting MSDP Configuration

8.7.1  MSDP Peer Always in the Down State

I. Symptom

An MSDP peer is configured, but it is always in the down state.

II. Analysis

An MSDP peering relationship is based on a TCP connection between the locally configured connect-interface address and the configured peer address. If the local connect-interface address is inconsistent with the peer address configured on the remote router, or if there is no route between the two peers, no TCP connection can be established.

III. Solution

1)         Check the connectivity of the route between the routers. Use the display ip routing-table command to check that the unicast route between the routers is correct.

2)         Further check that a unicast route exists between two routers that will become MSDP peers and that the route leads to the two peers.

3)         Check that the interface addresses of the MSDP peers are consistent. Use the display current-configuration command to check that the address of the local connect-interface interface is consistent with the address of the corresponding MSDP peer.

8.7.2  No SA Entry in the SA Cache of the Router

I. Symptom

MSDP fails to send the (S, G) forwarding entries of the local domain through SA messages.

II. Analysis

You can use the import-source command to advertise the (S, G) entries of the local multicast domain to the neighboring MSDP peers through SA messages. The acl keyword is optional; if you do not use this keyword, all (S, G) entries are filtered out, that is, none of the (S, G) entries in the local multicast domain is advertised. If the import-source command is not executed at all, the system advertises all (S, G) entries in the local multicast domain. Therefore, if MSDP fails to send the (S, G) entries of the local multicast domain through SA messages, verify that the import-source command is configured correctly.

III. Solution

1)         Check the connectivity of the route between the routers. Use the display ip routing-table command to check that the unicast route between the routers is correct.

2)         Further check that a unicast route exists between two routers that will become MSDP peers and that the route leads to the two peers.

3)         Verify the configuration of the import-source command and the corresponding ACL to ensure that the ACL rule filters the right (S, G) entries.

 
