Table of Contents
Configuring server load balancing
NAT-mode server load balancing
Indirect-mode server load balancing
Server load balancing tasks at a glance
Relationship between configuration items
Adding and configuring a server farm member
Configuring scheduling algorithms for a server farm
Setting the availability criteria
Enabling the slow online feature
Configuring intelligent monitoring
Configuring the action to take when a server farm is busy
Specifying a fault processing method
Creating a real server and specifying a server farm
Specifying an IP address and port number
Configuring the bandwidth and connection parameters
Enabling the slow offline feature
Setting the bandwidth ratio and maximum expected bandwidth
Disabling VPN instance inheritance
Virtual server tasks at a glance for Layer 4 server load balancing
Virtual server tasks at a glance for Layer 7 server load balancing
Configuring a TCP virtual server to operate at Layer 7
Specifying the VSIP and port number
Configuring the bandwidth and connection parameters
Enabling per-packet load balancing for UDP traffic
Configuring the HTTP redirection feature
Specifying a parameter profile
Applying an LB connection limit policy
Enabling IP address advertisement for a virtual server
Specifying an interface for sending gratuitous ARP packets and ND packets
Creating a match rule that references an LB class
Creating a source IP address match rule
Creating an interface match rule
Creating a user group match rule
Creating a TCP payload match rule
Creating an HTTP content match rule
Creating an HTTP cookie match rule
Creating an HTTP header match rule
Creating an HTTP URL match rule
Creating an HTTP method match rule
Creating a RADIUS attribute match rule
Configuring a forwarding LB action
Configuring a modification LB action
Specifying a response file for matching HTTP requests
Specifying a response file used upon load balancing failure
About configuring an LB policy
Specifying the default LB action
Sticky group tasks at a glance for Layer 4 server load balancing
Sticky group tasks at a glance for Layer 7 server load balancing
Configuring the IP sticky method
Configuring the HTTP content sticky method
Configuring the HTTP cookie sticky method
Configuring the HTTP header sticky method
Configuring the HTTP or UDP payload sticky method
Configuring the RADIUS attribute sticky method
Configuring the SIP call ID sticky method
Configuring the SSL sticky method
Configuring the timeout timer for sticky entries
Ignoring the limits for sessions that match sticky entries
Enabling stickiness-over-busyness
Configuring a parameter profile
Parameter profile tasks at a glance
Configuring the ToS field in IP packets sent to the client
Configuring the maximum local window size for TCP connections
Configuring the idle timeout for TCP connections
Configuring the TIME_WAIT state timeout time for TCP connections
Configuring the retransmission timeout time for SYN packets
Configuring the TCP keepalive parameters
Configuring the FIN-WAIT-1 state timeout time for TCP connections
Configuring the FIN-WAIT-2 state timeout time for TCP connections
Setting the MSS for the LB device
Configuring the TCP payload match parameters
Enabling load balancing for each HTTP request
Configuring connection reuse between the LB device and the server
Modifying the header in each HTTP request or response
Disabling case sensitivity matching for HTTP
Configuring the maximum length to parse the HTTP content
Configuring secondary cookie parameters
Specifying the action to take when the header of an HTTP packet exceeds the maximum length
Configuring the HTTP compression feature
Configuring the HTTP statistics feature
Configuring an LB probe template
About configuring an LB probe template
Configuring a TCP-RST LB probe template
Configuring a TCP zero-window LB probe template
Configuring an LB connection limit policy
Performing a load balancing test
About performing a load balancing test
Performing an IPv4 load balancing test
Performing an IPv6 load balancing test
Enabling load balancing logging
Enabling load balancing basic logging
Enabling load balancing NAT logging
Displaying and maintaining server load balancing
Server load balancing configuration examples
Example: Configuring basic Layer 4 server load balancing
Example: Configuring Layer 4 server load balancing hot backup
Example: Configuring basic Layer 7 server load balancing
Example: Configuring Layer 7 server load balancing SSL termination
Configuring outbound link load balancing
About outbound link load balancing
Outbound link load balancing tasks at a glance
Relationship between configuration items
Adding and configuring a link group member
Configuring a scheduling algorithm for a link group
Setting the availability criteria
Enabling the slow online feature
Specifying a fault processing method
Configuring the proximity feature
Creating a link and specifying a link group
Specifying an outbound next hop for a link
Specifying an outgoing interface for a link
Configuring the bandwidth and connection parameters
Enabling the slow offline feature
Setting the link cost for proximity calculation
Setting the bandwidth ratio and maximum expected bandwidth
Disabling VPN instance inheritance for a link
Virtual server tasks at a glance
Specifying the VSIP and port number
Specifying a parameter profile
Configuring the bandwidth and connection parameters
Enabling the link protection feature
Enabling bandwidth statistics collection by interfaces
Creating a match rule that references an LB class
Creating a source IP address match rule
Creating a destination IP address match rule
Creating an input interface match rule
Creating a user group match rule
Creating a domain name match rule
Creating an application group match rule
Configuring a forwarding LB action
Configuring the ToS field in IP packets sent to the server
Specifying the default LB action
Sticky group tasks at a glance
Configuring the IP sticky method
Configuring the timeout time for sticky entries
Ignoring the limits for sessions that match sticky entries
Configuring a parameter profile
About configuring a parameter profile
Configuring the ToS field in IP packets sent to the client
About configuring ISP information
Configuring ISP information manually
Setting the aging time for DNS cache entries
Performing a load balancing test
About performing a load balancing test
Performing an IPv4 load balancing test
Performing an IPv6 load balancing test
Enabling load balancing logging
Enabling load balancing basic logging
Enabling load balancing NAT logging
Enabling load balancing link busy state logging
Displaying and maintaining outbound link load balancing
Outbound link load balancing configuration examples
Example: Configuring outbound link load balancing
Configuring transparent DNS proxies
Transparent DNS proxy on the LB device
Transparent DNS proxy tasks at a glance
Configuring a transparent DNS proxy
Transparent DNS proxy tasks at a glance
Creating a transparent DNS proxy
Specifying an IP address and port number
Specifying the default DNS server pool
Enabling the link protection feature
Enabling the transparent DNS proxy
Adding and configuring a DNS server pool member
Configuring a scheduling algorithm for a DNS server pool
Creating a DNS server and specifying a DNS server pool
Specifying an IP address and port number
Enabling the device to automatically obtain the IP address of a DNS server
Associating a link with a DNS server
Specifying an outbound next hop for a link
Specifying an outgoing interface for a link
Configuring the maximum bandwidth
Setting the bandwidth ratio and maximum expected bandwidth
Creating a match rule that references an LB class
Creating a source IP address match rule
Creating a destination IP address match rule
Creating a domain name match rule
Configuring a forwarding LB action
Configuring the ToS field in IP packets sent to the DNS server
Specifying the default LB action
Sticky group tasks at a glance
Configuring the IP sticky method
Configuring the timeout time for sticky entries
Enabling load balancing logging
Enabling load balancing NAT logging
Enabling load balancing link busy state logging
Displaying and maintaining transparent DNS proxy
Transparent DNS proxy configuration examples
Example: Configuring transparent DNS proxy
Load balancing overview
Load balancing (LB) is a cluster technology that distributes services among multiple network devices or links.
Advantages of load balancing
Load balancing has the following advantages:
· High performance—Improves overall system performance by distributing services to multiple devices or links.
· Scalability—Meets increasing service requirements without compromising service quality by easily adding devices or links.
· High availability—Improves overall availability by using backup devices or links.
· Manageability—Simplifies configuration and maintenance by centralizing management on the load balancing device.
· Transparency—Preserves the transparency of the network topology for end users. Adding or removing devices or links does not affect services.
Load balancing types
LB includes the following types:
· Server load balancing—Data centers generally use server load balancing for networking. Network services are distributed to multiple servers or firewalls to enhance the processing capabilities of the servers or firewalls.
· Link load balancing—Link load balancing applies to a network environment where there are multiple carrier links to implement dynamic link selection. This enhances link utilization. Link load balancing supports IPv4 and IPv6, but does not support IPv4-to-IPv6 packet translation. Link load balancing is classified into the following types based on the direction of connection requests:
¡ Outbound link load balancing—Load balances traffic among the links from the internal network to the external network.
¡ Transparent DNS proxy—Load balances DNS requests among the links from the internal network to the external network.
Configuring server load balancing
About server load balancing
Server load balancing types
Server load balancing is classified into Layer 4 server load balancing and Layer 7 server load balancing.
· Layer 4 server load balancing—Identifies network layer and transport layer information, and is implemented based on streams. It distributes packets in the same stream to the same server. Layer 4 server load balancing cannot distribute Layer 7 services based on contents.
· Layer 7 server load balancing—Identifies network layer, transport layer, and application layer information, and is implemented based on contents. It analyzes packet contents, distributes packets one by one based on the contents, and distributes connections to the specified servers according to predefined policies. Layer 7 server load balancing supports a wider range of load balancing services.
Server load balancing supports IPv4 and IPv6, but Layer 4 server load balancing does not support IPv4-to-IPv6 or IPv6-to-IPv4 translation.
Deployment modes
Server load balancing uses the NAT and indirect deployment modes.
NAT-mode server load balancing
NAT-mode network diagram
As shown in Figure 1, NAT-mode server load balancing contains the following elements:
· LB device—Distributes different service requests to multiple servers.
· Server—Responds to and processes different service requests.
· VSIP—Virtual service IP address of the cluster, used for users to request services.
· Server IP—IP address of a server, used by the LB device to distribute requests.
NAT-mode implementation modes
NAT-mode server load balancing is implemented through the following modes:
· Destination NAT (DNAT).
· Source NAT (SNAT).
· DNAT + SNAT.
DNAT mode
DNAT-mode server load balancing requires you to change the gateway or configure a static route on the server, so that packets from the server destined for the host are sent through the LB device.
Figure 2 DNAT-mode server load balancing workflow
Table 1 Workflow description
Description | Source IP address | Destination IP address
---|---|---
1. The host sends a request. | Host IP | VSIP
2. When the LB device receives the request, it uses a scheduling algorithm to select the server to which to distribute the request. | N/A | N/A
3. The LB device uses DNAT to distribute the request, using the Server IP as the destination IP. | Host IP | Server IP
4. The server receives and processes the request, and then sends a response. | Server IP | Host IP
5. The LB device receives the response, translates the source IP to the VSIP, and forwards the response to the requesting host. | VSIP | Host IP
SNAT mode
SNAT-mode server load balancing requires the following configurations:
· Configure VSIP for the loopback interface on each server.
· Configure a route to the IP address assigned by the SNAT address pool.
As a best practice, do not use SNAT-mode server load balancing because its application scope is limited. This chapter does not provide detailed information about SNAT-mode server load balancing.
DNAT + SNAT mode
Figure 3 DNAT + SNAT-mode server load balancing workflow
Table 2 Workflow description
Description | Source IP address | Destination IP address
---|---|---
1. The host sends a request. | Host IP | VSIP
2. When the LB device receives the request, it uses a scheduling algorithm to select the server to which to distribute the request. | N/A | N/A
3. The LB device uses DNAT + SNAT to distribute the request, using an IP address in the SNAT address pool as the source IP and the Server IP as the destination IP. | SNAT IP | Server IP
4. The server receives and processes the request, and then sends a response. | Server IP | SNAT IP
5. The LB device receives the response, translates the source IP to the VSIP and the destination IP to the Host IP, and forwards the response to the requesting host. | VSIP | Host IP
Indirect-mode server load balancing
Indirect-mode network diagram
As shown in Figure 4, indirect-mode server load balancing contains the following elements:
· LB device—Distributes different service requests to multiple servers.
· General device—Forwards data according to general forwarding rules.
· Server—Responds to and processes different service requests.
· VSIP—Virtual service IP address of the cluster, used for users to request services.
· Server IP—IP address of a server, used by the LB device to distribute requests.
Indirect-mode workflow
Indirect-mode server load balancing requires configuring the VSIP on both the LB device and the servers. Because the servers must not answer ARP requests for the VSIP, configure the VSIP on a loopback interface of each server.
Figure 5 Indirect-mode server load balancing workflow
Table 3 Workflow description
Description | Source IP address | Destination IP address
---|---|---
1. The host sends a request. | Host IP | VSIP
2. When the general device receives the request, it forwards the request to the LB device. Because the servers do not answer ARP requests for the VSIP, the request can reach only the LB device. | Host IP | VSIP
3. When the LB device receives the request, it uses a scheduling algorithm to select the server to which to distribute the request. | N/A | N/A
4. The LB device distributes the request. The source and destination IP addresses in the request are not changed. | Host IP | VSIP
5. The server receives and processes the request, and then sends a response. | VSIP | Host IP
6. The general device receives the response and forwards it to the requesting host. | VSIP | Host IP
In indirect mode, the LB device does not forward packets returned by the server.
Server load balancing tasks at a glance
Relationship between configuration items
Figure 6 shows the relationship between the following configuration items:
· Server farm—A collection of real servers that contain similar content. A server farm can be referenced by a virtual server or an LB action.
· Real server—An entity on the LB device to process user services.
· Virtual server—A virtual service provided by the LB device to determine whether to perform load balancing for packets received on the LB device. Only the packets that match a virtual server are load balanced.
· LB class—Classifies packets to implement load balancing based on packet type.
· LB action—Drops, forwards, or modifies packets.
· LB policy—Associates an LB class with an LB action. An LB policy can be referenced by a virtual server.
· Sticky group—Uses a sticky method to distribute similar sessions to the same real server. A sticky group can be referenced by a virtual server or an LB action.
· Parameter profile—Defines advanced parameters to process packets. A parameter profile can be referenced by a virtual server.
Figure 6 Relationship between the main configuration items
Tasks at a glance
To configure server load balancing, perform the following tasks:
1. Configuring a server farm
2. Configuring a real server
3. Configuring a virtual server
4. (Optional.) Configuring an LB policy
5. (Optional.) Configuring a sticky group
6. (Optional.) Configuring templates
¡ Configuring a parameter profile
¡ Configuring an LB probe template
7. (Optional.) Configuring an LB connection limit policy
8. (Optional.) Configuring the ALG feature
9. (Optional.) Reloading a response file
10. (Optional.) Performing a load balancing test
11. (Optional.) Configuring SNMP notifications and logging for load balancing
¡ Enabling load balancing logging
Configuring a server farm
You can add real servers that contain similar content to a server farm to facilitate management.
Server farm tasks at a glance
The server farm configuration tasks for Layer 4 and Layer 7 server load balancing are the same.
To configure a server farm, perform the following tasks:
1. Creating a server farm
2. (Optional.) Adding and configuring a server farm member
3. Configuring scheduling algorithms for a server farm
4. Configuring NAT
Choose the following tasks as needed:
¡ Configuring indirect-mode NAT
¡ Configuring NAT-mode NAT
5. Setting the availability criteria
6. (Optional.) Enabling the slow online feature
7. (Optional.) Configuring health monitoring
8. (Optional.) Configuring intelligent monitoring
9. (Optional.) Configuring the action to take when a server farm is busy
10. (Optional.) Specifying a fault processing method
Creating a server farm
1. Enter system view.
system-view
2. Create a server farm and enter server farm view.
server-farm server-farm-name
3. (Optional.) Configure a description for the server farm.
description text
By default, no description is configured for the server farm.
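For example, the following hedged command sequence creates a server farm and adds a description. The server farm name sf_web and the description text are placeholders, not values from this guide.
# Create a server farm named sf_web and enter server farm view.
system-view
server-farm sf_web
# Describe the purpose of the server farm.
description web-front-end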
Adding and configuring a server farm member
About adding and configuring a server farm member
Perform this task to create a server farm member or add an existing real server as a server farm member in server farm view. You can also specify a server farm for a real server in real server view to achieve the same purpose (see "Creating a real server and specifying a server farm").
After adding a server farm member, you can configure the following parameters and features for the real server in the server farm:
· Weight.
· Priority.
· Connection limits.
· Health monitoring.
· Slow offline.
The member-based scheduling algorithm selects the best real server based on these configurations.
Adding a server farm member
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Create and add a server farm member and enter server farm member view.
real-server real-server-name port port-number
If the real server already exists, the command adds the existing real server as a server farm member.
4. (Optional.) Configure a description for the server farm member.
description text
By default, no description is configured for the server farm member.
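As a minimal sketch (the server farm name, real server name, and port number are placeholders), the following commands add real server rs1 on port 8080 as a member of server farm sf_web:
system-view
server-farm sf_web
# Add rs1 with service port 8080 as a farm member; rs1 is created if it does not exist.
real-server rs1 port 8080
description rs1-http-8080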
Setting the weight and priority of the server farm member
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Enter server farm member view.
real-server real-server-name port port-number
4. Set the weight of the server farm member.
weight weight-value
The default setting is 100.
5. Set the priority of the server farm member.
priority priority
The default setting is 4.
Setting the connection limits of the server farm member
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Enter server farm member view.
real-server real-server-name port port-number
4. Set the connection rate of the server farm member.
rate-limit connection connection-number
The default setting is 0 (the connection rate is not limited).
5. Set the maximum number of connections allowed for the server farm member.
connection-limit max max-number
The default setting is 0 (the maximum number of connections is not limited).
Configuring health monitoring for the server farm member
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Enter server farm member view.
real-server real-server-name port port-number
4. Specify a health monitoring method for the server farm member.
probe template-name [ nqa-template-port ]
By default, no health monitoring method is specified for the server farm member.
You can specify an NQA template or load balancing template for health monitoring. For information about NQA templates, see NQA configuration in Network Management and Monitoring Configuration Guide.
5. Specify the health monitoring success criteria for the server farm member.
success-criteria { all | at-least min-number }
By default, health monitoring succeeds only when all the specified health monitoring methods succeed.
Enabling the slow offline feature for the server farm member
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Enter server farm member view.
real-server real-server-name port port-number
4. Enable the slow offline feature for the server farm member.
slow-shutdown enable
By default, the slow offline feature is disabled.
5. Shut down the server farm member.
shutdown
By default, the server farm member is activated.
Configuring scheduling algorithms for a server farm
About scheduling algorithms for server farms
Perform this task to specify a scheduling algorithm for a server farm and specify the number of real servers to participate in scheduling. The LB device calculates the real servers to process user requests based on the specified scheduling algorithm.
The device provides the following scheduling algorithms for a server farm:
· Source IP address hash algorithm—Hashes the source IP address of user requests and distributes user requests to different real servers according to the hash values.
· Source IP address and port hash algorithm—Hashes the source IP address and port number of user requests and distributes user requests to different real servers according to the hash values.
· Destination IP address hash algorithm—Hashes the destination IP address of user requests and distributes user requests to different real servers according to the hash values.
· HTTP hash algorithm—Hashes the content of user requests and distributes user requests to different real servers according to the hash values.
· Cache Array Routing Protocol hash algorithm—The CARP hash algorithm is an enhancement to the hash algorithms. When the number of available real servers changes, this algorithm minimizes the load changes across the available real servers. This algorithm supports hashing based on the source IP address, source IP address and port number, destination IP address, and HTTP content.
· Dynamic round robin—Assigns new connections to real servers based on load weight values calculated by using the memory usage, CPU usage, and disk usage of the real servers. The smaller the load, the greater the weight value. A real server with a greater weight value is assigned more connections. This algorithm can take effect only if you specify an SNMP-DCA NQA template. If no SNMP-DCA NQA template is specified, the non-weighted round robin algorithm is used. For more information about NQA templates, see NQA configuration in Network Management and Monitoring Configuration Guide.
· Weighted least connection algorithm (real server-based)—Always assigns user requests to the real server with the fewest number of weighted active connections (the total number of active connections in all server farms divided by weight). The weight value used in this algorithm is configured in real server view.
· Weighted least connection algorithm (server farm member-based)—Always assigns user requests to the real server with the fewest number of weighted active connections (the total number of active connections in the specified server farm divided by weight). The weight value used in this algorithm is configured in server farm member view.
· Random algorithm—Randomly assigns user requests to real servers.
· Least time algorithm—Assigns new connections to real servers based on load weight values calculated by using the response time of the real servers. The shorter the response time, the greater the weight value. A real server with a greater weight value is assigned more connections.
· Round robin algorithm—Assigns user requests to real servers based on the weights of real servers. A higher weight indicates more user requests will be assigned.
· Bandwidth algorithm—Distributes user requests to real servers according to the weights and remaining bandwidth of real servers.
· Maximum bandwidth algorithm—Distributes user requests always to an idle real server that has the largest remaining bandwidth.
Procedure
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Specify a scheduling algorithm for the server farm.
¡ Specify a real server-based scheduling algorithm.
predictor { dync-roundrobin | least-connection | least-time | { bandwidth | max-bandwidth } [ inbound | outbound ] }
¡ Specify a server farm member-based scheduling algorithm.
predictor hash [ carp ] address { destination | source | source-ip-port } [ mask mask-length ] [ prefix prefix-length ]
predictor hash [ carp ] http [ offset offset ] [ start start-string ] [ [ end end-string ] | [ length length ] ]
predictor { least-connection member | random | round-robin }
By default, the scheduling algorithm for the server farm is weighted round robin.
4. Specify the number of real servers to participate in scheduling.
selected-server min min-number max max-number
By default, the real servers with the highest priority participate in scheduling.
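For example, the following hedged commands (the server farm name and scheduling values are placeholders) select real servers by hashing the source IP address and allow a minimum of 2 and a maximum of 4 real servers to participate in scheduling:
system-view
server-farm sf_web
# Hash the client source IP address to select a real server.
predictor hash address source
# Let 2 to 4 real servers participate in scheduling.
selected-server min 2 max 4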
Configuring indirect-mode NAT
Restrictions and guidelines
Indirect-mode NAT configuration requires disabling NAT for the server farm.
Procedure
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Disable NAT for the server farm.
transparent enable
By default, NAT is enabled for a server farm.
If the server farm is referenced by a virtual server of the HTTP type, the NAT feature takes effect even if it is disabled.
Configuring NAT-mode NAT
About NAT-mode NAT
The NAT-mode NAT configuration varies by NAT mode.
· For DNAT mode, you only need to enable NAT for the server farm.
· For SNAT mode and DNAT + SNAT mode, you must create an SNAT address pool to be referenced by the server farm.
After the server farm references the SNAT address pool, the LB device replaces the source address of the packets it receives with an SNAT address before forwarding the packets.
Restrictions and guidelines
An SNAT address pool can have a maximum of 256 IPv4 addresses and 65536 IPv6 addresses. No overlapping IPv4 or IPv6 addresses are allowed in different SNAT address pools.
Configuring DNAT
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Enable NAT for the server farm.
undo transparent enable
By default, NAT is enabled for a server farm.
If the server farm is referenced by a virtual server of the HTTP type, the NAT feature takes effect even if it is disabled.
4. (Optional.) Specify an interface for sending gratuitous ARP packets and ND packets.
arp-nd interface interface-type interface-number
By default, no interface is specified for sending gratuitous ARP packets and ND packets.
Configuring SNAT and DNAT+SNAT
1. Enter system view.
system-view
2. Create an SNAT address pool and enter SNAT address pool view.
loadbalance snat-pool pool-name
3. (Optional.) Configure a description for the SNAT address pool.
description text
By default, no description is configured for an SNAT address pool.
4. Specify an address range for the SNAT address pool.
IPv4:
ip range start start-ipv4-address end end-ipv4-address
IPv6:
ipv6 range start start-ipv6-address end end-ipv6-address
By default, no address range is specified for an SNAT address pool.
5. Return to system view.
quit
6. Enter server farm view.
server-farm server-farm-name
7. Enable NAT for the server farm.
undo transparent enable
By default, NAT is enabled for a server farm.
If a server farm is referenced by a virtual server of the HTTP type, the NAT feature takes effect even when it is disabled.
8. Specify the SNAT address pool to be referenced by the server farm.
snat-pool pool-name
By default, no SNAT address pool is referenced by a server farm.
9. (Optional.) Specify an interface for sending gratuitous ARP packets and ND packets.
arp-nd interface interface-type interface-number
By default, no interface is specified for sending gratuitous ARP packets and ND packets.
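The following hedged sequence (the pool name, address range, and server farm name are placeholders) creates an SNAT address pool and references it from a server farm for SNAT or DNAT + SNAT mode:
system-view
# Create SNAT address pool snat1 and define its IPv4 address range.
loadbalance snat-pool snat1
ip range start 10.1.1.10 end 10.1.1.20
quit
# Keep NAT enabled for the server farm and reference the SNAT address pool.
server-farm sf_web
undo transparent enable
snat-pool snat1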
Setting the availability criteria
About setting the availability criteria
Perform this task to set the criteria (lower percentage and upper percentage) to determine whether a server farm is available. This helps implement traffic switchover between the master and backup server farms.
· When the ratio of available real servers to all real servers in the master server farm drops below the lower percentage, traffic is switched to the backup server farm.
· When the ratio of available real servers to all real servers in the master server farm exceeds the upper percentage, traffic is switched back to the master server farm.
Procedure
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Set the criteria to determine whether the server farm is available.
activate lower lower-percentage upper upper-percentage
By default, when a minimum of one real server is available, the server farm is available.
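For example, with the following hedged settings (the percentages and server farm name are placeholders), traffic switches to the backup server farm when fewer than 30 percent of the real servers in sf_web are available, and switches back when more than 70 percent are available:
system-view
server-farm sf_web
# Lower threshold 30 percent, upper threshold 70 percent.
activate lower 30 upper 70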
Enabling the slow online feature
About the slow online feature
The real servers newly added to a server farm might not be able to immediately process large numbers of services assigned by the LB device. To resolve this issue, enable the slow online feature for the server farm. The feature uses the standby timer and ramp-up timer. When the real servers are brought online, the LB device does not assign any services to the real servers until the standby timer expires.
When the standby timer expires, the ramp-up timer starts. During the ramp-up time, the LB device increases the service amount according to the processing capability of the real servers, until the ramp-up timer expires.
Procedure
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Enable the slow online feature for the server farm.
slow-online [ standby-time standby-time ramp-up-time ramp-up-time ]
By default, the slow online feature is disabled for the server farm.
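As a hedged sketch (the timer values are placeholders; see the command reference for units and value ranges), the following commands enable slow online with a standby time of 60 and a ramp-up time of 300 for server farm sf_web:
system-view
server-farm sf_web
# Hold new members idle for the standby time, then ramp traffic up over the ramp-up time.
slow-online standby-time 60 ramp-up-time 300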
Configuring health monitoring
About configuring health monitoring
Perform this task to enable health monitoring to detect the availability of real servers.
Restrictions and guidelines
The health monitoring configuration in real server view takes precedence over the configuration in server farm view.
You can specify an NQA template or load balancing template for health monitoring. For information about NQA templates, see NQA configuration in Network Management and Monitoring Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Specify a health monitoring method for the server farm.
probe template-name [ nqa-template-port ]
By default, no health monitoring method is specified for the server farm.
4. Specify the health monitoring success criteria for the server farm.
success-criteria { all | at-least min-number }
By default, health monitoring succeeds only when all the specified health monitoring methods succeed.
Configuring intelligent monitoring
About intelligent monitoring
Intelligent monitoring identifies the health of server farm members by counting the number of RST packets or zero-window packets sent by each server farm member. Upon packet threshold violation, a protection action is taken. This feature is implemented by referencing a TCP-RST or TCP zero-window probe template in server farm view.
You can use the following methods to recover a server farm member placed in Auto shutdown state by this feature:
· Set the automatic recovery time in server farm view for the server farm member to automatically recover.
· Manually recover the server farm member.
Restrictions and guidelines
A real server that is shut down or placed in busy state due to packet threshold violation will be restored to the normal state immediately when the referenced probe template is deleted.
Prerequisites
Before configuring this feature, configure an LB probe template (see "Configuring an LB probe template").
Specifying an LB probe template for a server farm
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Specify an LB probe template for the server farm.
probe-template { tcp-rst | tcp-zero-window } template-name
By default, no LB probe template is specified for a server farm.
4. (Optional.) Set the automatic recovery time.
auto-shutdown recovery-time recovery-time
By default, the automatic recovery time is 0 seconds, which means that a server farm member placed in Auto shutdown state does not automatically recover.
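For example, the following hedged commands (the probe template name and recovery time are placeholders) reference a TCP-RST LB probe template and let members placed in Auto shutdown state recover automatically after 300 seconds:
system-view
server-farm sf_web
# Count RST packets by using TCP-RST probe template rst-probe.
probe-template tcp-rst rst-probe
# Recover auto-shutdown members after 300 seconds.
auto-shutdown recovery-time 300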
Manually recovering a real server in Auto shutdown state
1. Enter system view.
system-view
2. Enter real server view.
real-server real-server-name
3. Manually recover the real server.
recover-from-auto-shutdown
Manually recovering a server farm member in Auto shutdown state
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Enter server farm member view.
real-server real-server-name port port-number
4. Manually recover the server farm member.
recover-from-auto-shutdown
Configuring the action to take when a server farm is busy
About configuring the action to take when a server farm is busy
A server farm is considered busy when all its real servers are busy. You can configure one of the following actions:
· drop—Stops assigning client requests to a server farm. If the LB policy for the server farm contains the action of matching the next rule, the device compares client requests with the next rule. Otherwise, the device drops the client requests.
· enqueue—Stops assigning client requests to a server farm and assigns new client requests to a wait queue. New client requests will be dropped when the queue length exceeds the configured length. Client requests already in the queue will be aged out when the configured timeout time expires.
· force—Forcibly assigns client requests to all real servers in the server farm.
The device determines whether a real server is busy based on the following factors:
· Maximum number of connections.
· Maximum number of connections per second.
· Maximum number of HTTP requests per second.
· Maximum bandwidth.
· SNMP-DCA probe result.
Procedure
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Configure the action to take when the server farm is busy.
busy-action { drop | enqueue length length timeout timeout-value | force }
The default action is drop.
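For example, the following hedged commands (the queue length and timeout values are placeholders; see the command reference for units and ranges) queue new client requests when server farm sf_web is busy:
system-view
server-farm sf_web
# Queue up to 1000 new requests; age out queued requests when the timeout expires.
busy-action enqueue length 1000 timeout 100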
Specifying a fault processing method
About fault processing methods
Perform this task to specify one of the following fault processing methods for a server farm:
· Keep—Does not actively terminate the connection with the failed real server. Keeping or terminating the connection depends on the timeout mechanism of the protocol.
· Reschedule—Redirects the connection to another available real server in the server farm.
· Reset—Terminates the connection with the failed real server by sending RST packets (for TCP packets) or ICMP unreachable packets (for other types of packets).
Procedure
1. Enter system view.
system-view
2. Enter server farm view.
server-farm server-farm-name
3. Specify a fault processing method for the server farm.
fail-action { keep | reschedule | reset }
By default, the fault processing method is keep. All available connections are kept.
Configuring a real server
A real server is an entity on the LB device to process user services. A real server can belong to multiple server farms. A server farm can have multiple real servers.
Real server tasks at a glance
The real server configuration tasks for Layer 4 and Layer 7 server load balancing are the same.
To configure a real server, perform the following tasks:
1. Creating a real server and specifying a server farm
2. Specifying an IP address and port number
3. Setting a weight and priority
4. (Optional.) Configuring the bandwidth and connection parameters
5. (Optional.) Configuring health monitoring
6. (Optional.) Enabling the slow offline feature
7. (Optional.) Setting the bandwidth ratio and maximum expected bandwidth
8. (Optional.) Configuring a VPN instance for a real server
¡ Specifying a VPN instance
¡ Disabling VPN instance inheritance
Creating a real server and specifying a server farm
1. Enter system view.
system-view
2. Create a real server and enter real server view.
real-server real-server-name
3. (Optional.) Configure a description for the real server.
description text
By default, no description is configured for the real server.
4. Specify a server farm for the real server.
server-farm server-farm-name
By default, the real server does not belong to any server farms.
Specifying an IP address and port number
1. Enter system view.
system-view
2. Enter real server view.
real-server real-server-name
3. Specify an IP address for the real server.
IPv4:
ip address ipv4-address
IPv6:
ipv6 address ipv6-address
By default, no IP address is specified for the real server.
4. Specify the port number for the real server.
port port-number
By default, the port number of the real server is 0. Packets use their respective port numbers.
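A minimal hedged sketch (the names, address, and port are placeholders) that creates real server rs1, assigns it to server farm sf_web, and specifies its IPv4 address and port:
system-view
real-server rs1
# Assign the real server to server farm sf_web.
server-farm sf_web
# Address and service port of the physical server.
ip address 192.168.1.11
port 8080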
Setting a weight and priority
About setting a weight and priority
Perform this task to set the weight used by the weighted round robin and weighted least connection algorithms for a real server, and to set the real server's scheduling priority in its server farm.
Procedure
1. Enter system view.
system-view
2. Enter real server view.
real-server real-server-name
3. Set a weight for the real server.
weight weight-value
By default, the weight of the real server is 100.
4. Set a priority for the real server.
priority priority
By default, the priority of the real server is 4.
Configuring the bandwidth and connection parameters
About configuring the bandwidth and connection parameters
This task allows you to configure the following parameters:
· Maximum bandwidth.
· Maximum number of connections.
· Maximum number of connections per second.
· Maximum number of HTTP requests per second.
If any of the preceding thresholds is exceeded, the real server is placed in busy state.
Procedure
1. Enter system view.
system-view
2. Enter real server view.
real-server real-server-name
3. Set the maximum bandwidth for the real server.
rate-limit bandwidth [ inbound | outbound ] bandwidth-value
By default, the maximum bandwidth, inbound bandwidth, and outbound bandwidth are 0 KBps for the real server. The bandwidths are not limited.
4. Set the maximum number of connections for the real server.
connection-limit max max-number
By default, the maximum number of connections is 0 for the real server. The number is not limited.
5. Set the maximum number of connections per second for the real server.
rate-limit connection connection-number
By default, the maximum number of connections per second is 0 for the real server. The number is not limited.
6. Set the maximum number of HTTP requests per second for the real server.
rate-limit http-request request-number
By default, the maximum number of HTTP requests per second is 0 for the real server. The number is not limited.
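For example, the following hedged settings (all values are placeholders) limit real server rs1 to 10000 concurrent connections, 500 new connections per second, and 200 HTTP requests per second:
system-view
real-server rs1
# Exceeding any of these thresholds places rs1 in busy state.
connection-limit max 10000
rate-limit connection 500
rate-limit http-request 200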
Configuring health monitoring
About configuring health monitoring
Perform this task to enable health monitoring to detect the availability of a real server.
Restrictions and guidelines
The health monitoring configuration in real server view takes precedence over the configuration in server farm view.
Procedure
1. Enter system view.
system-view
2. Enter real server view.
real-server real-server-name
3. Specify a health monitoring method for the real server.
probe template-name [ nqa-template-port ]
By default, no health monitoring method is specified for the real server.
4. Specify the health monitoring success criteria for the real server.
success-criteria { all | at-least min-number }
By default, the health monitoring succeeds only when all the specified health monitoring methods succeed.
Enabling the slow offline feature
About the slow offline feature
The shutdown command immediately terminates the existing connections of a real server. The slow offline feature lets existing connections age out and prevents new connections from being established.
Restrictions and guidelines
To enable the slow offline feature for a real server, you must execute the slow-shutdown enable command and then the shutdown command. If you execute the shutdown command and then the slow-shutdown enable command, the slow offline feature does not take effect and the real server is shut down.
Procedure
1. Enter system view.
system-view
2. Enter real server view.
real-server real-server-name
3. Enable the slow offline feature for the real server.
slow-shutdown enable
By default, the slow offline feature is disabled.
4. Shut down the real server.
shutdown
By default, the real server is activated.
Setting the bandwidth ratio and maximum expected bandwidth
About setting the bandwidth ratio and maximum expected bandwidth
When the traffic exceeds the maximum expected bandwidth multiplied by the bandwidth ratio of a real server, new traffic is not distributed to the real server. When the traffic drops below the maximum expected bandwidth multiplied by the bandwidth recovery ratio of the real server, the real server participates in scheduling again.
In addition to being used for link protection, the maximum expected bandwidth is used for remaining bandwidth calculation in the bandwidth algorithm and maximum bandwidth algorithm.
Procedure
1. Enter system view.
system-view
2. Enter real server view.
real-server real-server-name
3. Set the bandwidth ratio.
bandwidth [ inbound | outbound ] busy-rate busy-rate-number [ recovery recovery-rate-number ]
By default, the total bandwidth ratio is 70.
4. Set the maximum expected bandwidth.
max-bandwidth [ inbound | outbound ] bandwidth-value
By default, the maximum expected bandwidth, maximum uplink expected bandwidth, and maximum downlink expected bandwidth are 0 KBps. The bandwidths are not limited.
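For example, with the hedged settings below (all values are placeholders), the maximum expected bandwidth of rs1 is 10000 KBps, so new traffic stops being distributed to rs1 when its traffic exceeds 8000 KBps (10000 × 80%) and resumes when it drops below 6000 KBps (10000 × 60%):
system-view
real-server rs1
# Busy ratio 80, recovery ratio 60.
bandwidth busy-rate 80 recovery 60
# Maximum expected bandwidth of 10000 KBps.
max-bandwidth 10000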
Specifying a VPN instance
1. Enter system view.
system-view
2. Enter real server view.
real-server real-server-name
3. Specify a VPN instance for the real server.
vpn-instance vpn-instance-name
By default:
¡ A real server belongs to the public network if VPN instance inheritance is disabled.
¡ A real server belongs to the VPN instance of its virtual server if VPN instance inheritance is enabled.
Disabling VPN instance inheritance
About VPN instance inheritance
When VPN instance inheritance is enabled, a real server inherits the VPN instance of its virtual server if no VPN instance is specified for the real server. When VPN instance inheritance is disabled, a real server belongs to the public network if no VPN instance is specified for the real server.
Procedure
1. Enter system view.
system-view
2. Enter real server view.
real-server real-server-name
3. Disable VPN instance inheritance for the real server.
inherit vpn-instance disable
By default, VPN instance inheritance is enabled for a real server.
Configuring a virtual server
A virtual server is a virtual service provided by the LB device to determine whether to perform load balancing for packets received on the LB device. Only the packets that match a virtual server are load balanced.
Restrictions and guidelines
If both the "Specifying server farms" and "Specifying an LB policy" tasks are configured, packets are processed by the LB policy first. If the processing fails, the packets are processed by the specified server farms.
Virtual server tasks at a glance for Layer 4 server load balancing
1. Creating a virtual server
2. Configuring a TCP virtual server to operate at Layer 7
3. Specifying the VSIP and port number
4. (Optional.) Specifying a VPN instance
5. Configure a packet processing policy
Choose one of the following tasks:
¡ Specifying server farms
¡ Specifying an LB policy
6. (Optional.) Configuring the bandwidth and connection parameters
7. (Optional.) Enabling per-packet load balancing for UDP traffic
8. (Optional.) Specifying a parameter profile or LB connection limit policy
¡ Specifying a parameter profile
¡ Applying an LB connection limit policy
9. (Optional.) Configuring hot backup
10. (Optional.) Enabling IP address advertisement for a virtual server
11. (Optional.) Specifying an interface for sending gratuitous ARP packets and ND packets
Virtual server tasks at a glance for Layer 7 server load balancing
1. Creating a virtual server
2. Specifying the VSIP and port number
3. (Optional.) Specifying a VPN instance
4. Configure a packet processing policy
Choose one of the following tasks:
¡ Specifying server farms
¡ Specifying an LB policy
5. (Optional.) Configuring the bandwidth and connection parameters
6. (Optional.) Configuring the HTTP redirection feature
7. (Optional.) Specifying a parameter profile or policy
¡ Specifying a parameter profile
¡ Applying an LB connection limit policy
8. (Optional.) Configuring hot backup
9. (Optional.) Enabling IP address advertisement for a virtual server
10. (Optional.) Specifying an interface for sending gratuitous ARP packets and ND packets
Creating a virtual server
About virtual server types
The virtual server types of Layer 4 server load balancing include IP, TCP, and UDP.
The virtual server types of Layer 7 server load balancing include fast HTTP, HTTP, RADIUS, TCP-based SIP, and UDP-based SIP. For information about SIP, see Voice Configuration Guide.
Restrictions and guidelines
Do not use fast HTTP virtual servers together with the TCP client verification feature. For more information about the TCP client verification feature, see attack detection and prevention configuration in Security Configuration Guide.
Creating a virtual server for Layer 4 server load balancing
1. Enter system view.
system-view
2. Create an IP, TCP, or UDP virtual server and enter virtual server view.
virtual-server virtual-server-name type { ip | tcp | udp }
When you create a virtual server, you must specify the virtual server type. You can enter an existing virtual server view without specifying the virtual server type. If you specify the virtual server type when entering an existing virtual server view, it must be the type specified when the virtual server was created.
3. (Optional.) Configure a description for the virtual server.
description text
By default, no description is configured for the virtual server.
Creating a virtual server for Layer 7 server load balancing
1. Enter system view.
system-view
2. Create a fast HTTP, RADIUS, HTTP, TCP-based SIP, or UDP-based SIP virtual server and enter virtual server view.
virtual-server virtual-server-name type { fast-http | http | radius | sip-tcp | sip-udp }
When you create a virtual server, you must specify the virtual server type. You can enter an existing virtual server view without specifying the virtual server type. If you specify the virtual server type when entering an existing virtual server view, it must be the type specified when the virtual server was created.
3. (Optional.) Configure a description for the virtual server.
description text
By default, no description is configured for the virtual server.
Configuring a TCP virtual server to operate at Layer 7
1. Enter system view.
system-view
2. Enter TCP virtual server view.
virtual-server virtual-server-name
3. Configure the TCP virtual server to operate at Layer 7.
application-mode enable
By default, a TCP virtual server operates at Layer 4.
Specifying the VSIP and port number
Restrictions and guidelines
Do not specify the same VSIP and port number for virtual servers of the fast HTTP, HTTP, IP, RADIUS, TCP-based SIP, and TCP types.
Do not specify the same VSIP and port number for virtual servers of the UDP and UDP-based SIP types.
Specifying the VSIP and port number for Layer 4 server load balancing
1. Enter system view.
system-view
2. Enter IP, TCP, or UDP virtual server view.
virtual-server virtual-server-name
3. Specify the VSIP for the virtual server.
IPv4:
virtual ip address ipv4-address [ mask-length | mask ]
IPv6:
virtual ipv6 address ipv6-address [ prefix-length ]
By default, no IP address is specified for the virtual server.
4. Specify the port number for the virtual server.
port { port-number [ to port-number ] } &<1-n>
By default, the port number is 0 (meaning any port number) for the virtual server of the IP, TCP, or UDP type.
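For example, the following hedged commands (the virtual server name, VSIP, and port are placeholders) create a TCP virtual server that clients reach at 10.1.1.100 on port 80:
system-view
# Create a TCP virtual server named vs_web.
virtual-server vs_web type tcp
# Clients request the service at this VSIP and port.
virtual ip address 10.1.1.100
port 80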
Specifying the VSIP and port number for Layer 7 server load balancing
1. Enter system view.
system-view
2. Enter fast HTTP, HTTP, RADIUS, TCP-based SIP, or UDP-based SIP virtual server view.
virtual-server virtual-server-name
3. Specify the VSIP for the virtual server.
IPv4:
virtual ip address ipv4-address [ mask-length | mask ]
IPv6:
virtual ipv6 address ipv6-address [ prefix-length ]
By default, no IP address is specified for the virtual server.
4. Specify the port number for the virtual server.
port { port-number [ to port-number ] } &<1-n>
By default:
¡ The port number is 80 for the virtual server of the fast HTTP or HTTP type.
¡ The port number is 0 (meaning any port number) for the virtual server of the RADIUS type.
¡ The port number is 5060 for the virtual server of the SIP type.
If the virtual server has referenced an SSL policy, you must specify a non-default port number (typically 443) for the virtual server.
Specifying a VPN instance
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Specify a VPN instance for the virtual server.
vpn-instance vpn-instance-name
By default, a virtual server belongs to the public network.
Specifying server farms
About specifying server farms
When the primary server farm is available (contains available real servers), the virtual server forwards packets through the primary server farm. When the primary server farm is not available, the virtual server forwards packets through the backup server farm.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Specify server farms.
default server-farm server-farm-name [ backup backup-server-farm-name ] [ sticky sticky-name ]
By default, no server farm is specified for the virtual server.
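A hedged example (all names are placeholders) that uses sf_web as the primary server farm, sf_backup as the backup server farm, and sticky group sg1 for virtual server vs_web:
system-view
virtual-server vs_web
# sf_backup takes over when sf_web has no available real servers.
default server-farm sf_web backup sf_backup sticky sg1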
Specifying an LB policy
About specifying an LB policy
By referencing an LB policy, the virtual server load balances matching packets based on the packet contents.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Specify an LB policy for the virtual server.
lb-policy policy-name
By default, the virtual server does not reference any LB policies.
A virtual server can reference only an LB policy of a matching type. For example, a virtual server of the fast HTTP or HTTP type can reference an LB policy of the generic or HTTP type. A virtual server of the IP, SIP, TCP, or UDP type can reference only an LB policy of the generic type. A virtual server of the RADIUS type can reference an LB policy of the generic or RADIUS type.
Configuring the bandwidth and connection parameters
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Set the maximum bandwidth for the virtual server.
rate-limit bandwidth [ inbound | outbound ] bandwidth-value
By default, the maximum bandwidth, inbound bandwidth, and outbound bandwidth are 0 KBps for the virtual server. The bandwidths are not limited.
4. Set the maximum number of connections for the virtual server.
connection-limit max max-number
By default, the maximum number of connections is 0 for the virtual server. The number is not limited.
5. Set the maximum number of connections per second for the virtual server.
rate-limit connection connection-number
By default, the maximum number of connections per second is 0 for the virtual server. The number is not limited.
Enabling per-packet load balancing for UDP traffic
About per-packet load balancing for UDP traffic
By default, the LB device distributes traffic matching the virtual server according to application type. Traffic of the same application type is distributed to one real server. Perform this task to enable the LB device to distribute traffic matching the virtual server on a per-packet basis.
Procedure
1. Enter system view.
system-view
2. Enter UDP-based SIP or UDP virtual server view.
virtual-server virtual-server-name
3. Enable per-packet load balancing for UDP traffic for the virtual server.
udp per-packet
By default, per-packet load balancing for UDP traffic is disabled for the virtual server.
Configuring the HTTP redirection feature
About the HTTP redirection feature
This feature redirects all HTTP request packets matching a virtual server to the specified URL.
Procedure
1. Enter system view.
system-view
2. Enter HTTP virtual server view.
virtual-server virtual-server-name
3. Enable the redirection feature and specify a redirection URL for the virtual server.
redirect relocation relocation
By default, the redirection feature is disabled for the virtual server.
4. Specify the redirection status code that the LB device returns to a client.
redirect return-code { 301 | 302 }
By default, the redirection status code that the LB device returns to a client is 302.
This command takes effect only when the redirection feature is enabled for the virtual server.
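For example, the following hedged commands (the HTTP virtual server name and URL are placeholders) redirect all HTTP requests matching virtual server vs_http to https://www.example.com and return status code 301 instead of the default 302:
system-view
virtual-server vs_http
# Redirect matching HTTP requests to the specified URL.
redirect relocation https://www.example.com
# Return 301 (moved permanently) to the client.
redirect return-code 301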
Specifying a parameter profile
About specifying a parameter profile
You can configure advanced parameters through a parameter profile. The virtual server references the parameter profile to analyze, process, and optimize service traffic.
Specifying a parameter profile for Layer 4 server load balancing
1. Enter system view.
system-view
2. Enter IP, TCP, or UDP virtual server view.
virtual-server virtual-server-name
3. Specify a parameter profile for the virtual server.
parameter { ip | tcp } profile-name [ client-side | server-side ]
By default, the virtual server does not reference any parameter profiles.
TCP virtual servers can only use a TCP parameter profile. IP virtual servers and UDP virtual servers can only use an IP parameter profile. Only TCP parameter profiles support the client-side and server-side keywords.
Specifying a parameter profile for Layer 7 server load balancing
1. Enter system view.
system-view
2. Enter fast HTTP, HTTP, RADIUS, TCP-based SIP, or UDP-based SIP virtual server view.
virtual-server virtual-server-name
3. Specify a parameter profile for the virtual server.
parameter { http | http-compression | http-statistics | ip | oneconnect | tcp | tcp-application } profile-name [ client-side | server-side ]
By default, the virtual server does not reference any parameter profiles.
Only fast HTTP and HTTP virtual servers support HTTP and TCP parameter profiles. Only HTTP virtual servers support HTTP-compression, HTTP-statistics, and OneConnect parameter profiles. Only TCP virtual servers operating at Layer 7 support TCP-application parameter profiles.
Only TCP parameter profiles support the client-side and server-side keywords.
Applying an LB connection limit policy
About applying an LB connection limit policy
Perform this task to limit the number of connections accessing the virtual server.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Apply an LB connection limit policy to the virtual server.
lb-limit-policy policy-name
By default, no LB connection limit policies are applied to the virtual server.
Specifying an SSL policy
About specifying an SSL policy
Specifying an SSL client policy enables the LB device (SSL client) to send encrypted traffic to an SSL server.
Specifying an SSL server policy enables the LB device (SSL server) to send encrypted traffic to an SSL client.
Restrictions and guidelines
You must disable and then enable a virtual server for a modified SSL policy to take effect.
Procedure
1. Enter system view.
system-view
2. Enter TCP or HTTP virtual server view.
virtual-server virtual-server-name
3. Specify an SSL client policy for the virtual server.
ssl-client-policy policy-name
By default, the virtual server does not reference any SSL client policies.
The virtual servers of the TCP type and fast HTTP type do not support this command.
4. Specify an SSL server policy for the virtual server.
ssl-server-policy policy-name
By default, the virtual server does not reference any SSL server policies.
The virtual servers of the fast HTTP type do not support this command.
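For example, the following sketch references an SSL server policy for decrypting client traffic and an SSL client policy for re-encrypting traffic toward the servers on an HTTP virtual server. The virtual server and policy names are illustrative, and the SSL policies are assumed to be configured already:
system-view
virtual-server vs_https
ssl-server-policy ssl_decrypt
ssl-client-policy ssl_encrypt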
Configuring hot backup
About configuring hot backup
To implement hot backup for two LB devices, you must enable synchronization for session extension information and sticky entries to avoid service interruption.
Restrictions and guidelines
For sticky entry synchronization to succeed, enable sticky entry synchronization on both LB devices before you specify a sticky group. You specify a sticky group by using the sticky sticky-name option when you specify server farms.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Enable session extension information synchronization.
connection-sync enable
By default, session extension information synchronization is disabled.
The virtual servers of the HTTP type do not support this command.
4. Enable sticky entry synchronization.
sticky-sync enable
By default, sticky entry synchronization is disabled.
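For example, the following sketch enables both synchronization features on a TCP virtual server. The virtual server name vs_tcp is illustrative, and the same configuration is assumed to be performed on both LB devices:
system-view
virtual-server vs_tcp
connection-sync enable
sticky-sync enable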
Enabling IP address advertisement for a virtual server
About enabling IP address advertisement for a virtual server
This feature can implement load balancing among data centers in a disaster recovery network. You must enable IP address advertisement for a virtual server on the LB device in each data center.
After this feature is configured, the device advertises the IP address of the virtual server to OSPF for route calculation. When the service of a data center switches to another data center, the traffic to the virtual server can also be switched to that data center. For information about OSPF, see Layer 3—IP Routing Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Enable IP address advertisement for the virtual server.
route-advertisement enable
By default, IP address advertisement is disabled for a virtual server.
Specifying an interface for sending gratuitous ARP packets and ND packets
About specifying an interface for sending gratuitous ARP packets and ND packets
Perform this task to specify an interface from which gratuitous ARP packets and ND packets are sent out. For information about gratuitous ARP, see ARP configuration in Layer 3—IP Services Configuration Guide. For information about ND, see IPv6 basics configuration in Layer 3—IP Services Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Specify an interface for sending gratuitous ARP packets and ND packets.
arp-nd interface interface-type interface-number
By default, no interface is specified for sending gratuitous ARP packets and ND packets.
Enabling a virtual server
About enabling a virtual server
After you configure a virtual server, you must enable the virtual server for it to work.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Enable the virtual server.
service enable
By default, the virtual server is disabled.
Configuring an LB class
An LB class classifies packets by comparing packets against specific rules. Matching packets are further processed by LB actions. You can create a maximum of 65535 rules for an LB class.
LB class tasks at a glance
To configure an LB class, perform the following tasks:
1. Configure a generic LB class
a. Create an LB class
b. Create a match rule
Choose the following tasks as needed:
- Creating a match rule that references an LB class
- Creating a source IP address match rule
- Creating an interface match rule
- Creating a user group match rule
- Creating a TCP payload match rule
2. Configure an HTTP LB class
a. Create an LB class
b. Create a match rule
Choose the following tasks as needed:
- Creating a match rule that references an LB class
- Creating a source IP address match rule
- Creating an interface match rule
- Creating a user group match rule
- Creating an HTTP content match rule
- Creating an HTTP cookie match rule
- Creating an HTTP header match rule
- Creating an HTTP URL match rule
- Creating an HTTP method match rule
- Creating a RADIUS attribute match rule
3. Configure a RADIUS LB class
a. Create an LB class
b. Create a match rule
Choose the following tasks as needed:
- Creating a match rule that references an LB class
- Creating a source IP address match rule
- Creating a RADIUS attribute match rule
Creating an LB class
Creating an LB class for Layer 4 server load balancing
1. Enter system view.
system-view
2. Create a generic LB class, and enter LB class view.
loadbalance class class-name type generic [ match-all | match-any ]
When you create an LB class, you must specify the class type. You can enter an existing LB class view without specifying the class type. If you specify the class type when entering an existing LB class view, the class type must be the one specified when you create the LB class.
3. (Optional.) Configure a description for the LB class.
description text
By default, no description is configured for the LB class.
Creating an LB class for Layer 7 server load balancing
1. Enter system view.
system-view
2. Create an HTTP or RADIUS LB class, and enter LB class view.
loadbalance class class-name type { http | radius } [ match-all | match-any ]
When you create an LB class, you must specify a class type. You can enter an existing LB class view without specifying the class type. If you specify the class type when entering an existing LB class view, the class type must be the one specified when you create the LB class.
3. (Optional.) Configure a description for the LB class.
description text
By default, no description is configured for the LB class.
Creating a match rule that references an LB class
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a match rule that references an LB class.
match [ match-id ] class class-name
Creating a source IP address match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a source IP address match rule.
match [ match-id ] source { ip address ipv4-address [ mask-length | mask ] | ipv6 address ipv6-address [ prefix-length ] }
Creating an ACL match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create an ACL match rule.
match [ match-id ] acl [ ipv6 ] { acl-number | name acl-name }
Creating an interface match rule
1. Enter system view.
system-view
2. Enter generic or HTTP LB class view.
loadbalance class class-name
3. Create an interface match rule.
match [ match-id ] interface interface-type interface-number
Creating a user match rule
1. Enter system view.
system-view
2. Enter generic or HTTP LB class view.
loadbalance class class-name
3. Create a user match rule.
match [ match-id ] [ identity-domain domain-name ] user user-name
Creating a user group match rule
1. Enter system view.
system-view
2. Enter generic or HTTP LB class view.
loadbalance class class-name
3. Create a user group match rule.
match [ match-id ] [ identity-domain domain-name ] user-group user-group-name
Creating a TCP payload match rule
About the TCP payload match rule
The device takes the corresponding LB action on TCP packets matching a TCP payload match rule. If you specify the not keyword for a TCP payload match rule, the device takes the corresponding LB action on TCP packets not matching the TCP payload match rule.
Procedure
1. Enter system view.
system-view
2. Enter generic LB class view.
loadbalance class class-name
3. Create a TCP payload match rule.
match [ match-id ] payload payload [ case-insensitive ] [ not ]
Creating an HTTP content match rule
1. Enter system view.
system-view
2. Enter HTTP LB class view.
loadbalance class class-name
3. Create an HTTP content match rule.
match [ match-id ] content content [ offset offset ]
This command is not supported by virtual servers of the fast HTTP type.
Creating an HTTP cookie match rule
1. Enter system view.
system-view
2. Enter HTTP LB class view.
loadbalance class class-name
3. Create an HTTP cookie match rule.
match [ match-id ] cookie cookie-name value value
Creating an HTTP header match rule
1. Enter system view.
system-view
2. Enter HTTP LB class view.
loadbalance class class-name
3. Create an HTTP header match rule.
match [ match-id ] header header-name value value
Creating an HTTP URL match rule
1. Enter system view.
system-view
2. Enter HTTP LB class view.
loadbalance class class-name
3. Create an HTTP URL match rule.
match [ match-id ] url url
Creating an HTTP method match rule
1. Enter system view.
system-view
2. Enter HTTP LB class view.
loadbalance class class-name
3. Create an HTTP method match rule.
match [ match-id ] method { ext ext-type | rfc rfc-type }
Creating a RADIUS attribute match rule
1. Enter system view.
system-view
2. Enter RADIUS LB class view.
loadbalance class class-name
3. Create a RADIUS attribute match rule.
match [ match-id ] radius-attribute { code attribute-code | user-name } value attribute-value
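As a wrap-up example, the following sketch creates an HTTP LB class that matches requests either from subnet 10.1.1.0/24 or for URLs under the /video path. The class name, subnet, and URL pattern are illustrative only:
system-view
loadbalance class class_video type http match-any
match 1 source ip address 10.1.1.0 24
match 2 url /video/.*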
Configuring an LB action
About LB action modes
LB actions include the following modes:
· Forwarding mode—Determines whether and how to forward packets. If no forwarding action is specified, packets are dropped.
· Modification mode—Modifies packets. To prevent the LB device from dropping the modified packets, the modification action must be used together with a forwarding action.
· Response mode—Responds to client requests by using a file.
If you create an LB action without specifying any of the previous action modes, packets are dropped.
Restrictions and guidelines
For Layer 4 server load balancing, the following configurations are mutually exclusive:
· Configure the forwarding mode.
· Specify server farms.
For Layer 7 server load balancing, any two of the following configurations are mutually exclusive:
· Specify server farms.
· Configure the redirection feature.
· Specify a response file for matching HTTP requests.
For Layer 7 server load balancing, the following configurations are also mutually exclusive:
· Match the next rule upon failure to find a real server.
· Specify a response file used upon load balancing failure.
LB action tasks at a glance
To configure an LB action, perform the following tasks:
1. Configuring a generic LB action
a. Creating an LB action
b. (Optional.) Configuring a forwarding LB action
- Configuring the forwarding mode
You can configure only one of the "Configuring the forwarding mode", "Specifying server farms", and "Closing TCP connections" tasks.
- (Optional.) Matching the next rule upon failure to find a real server
- (Optional.) Closing TCP connections upon failure to find a real server
c. (Optional.) Configuring a modification LB action
- Configuring the ToS field in IP packets sent to the server
2. Configuring an HTTP LB action.
a. Creating an LB action
b. (Optional.) Configuring a forwarding LB action
You can configure only one of the "Specifying server farms" and "Closing TCP connections" tasks.
- (Optional.) Matching the next rule upon failure to find a real server
- (Optional.) Closing TCP connections upon failure to find a real server
c. (Optional.) Configuring a modification LB action
Choose the following tasks as needed:
- Configuring the ToS field in IP packets sent to the server
- Rewriting the URL in the Location header of HTTP responses from the server
- Specifying an SSL client policy
- Rewriting the content of HTTP responses
- Configuring the HTTP redirection feature
d. (Optional.) Configuring a response LB action
- Specifying a response file for matching HTTP requests
- Specifying a response file used upon load balancing failure
3. Configuring a RADIUS LB action.
a. Creating an LB action
b. (Optional.) Configuring a forwarding LB action
- Matching the next rule upon failure to find a real server
c. (Optional.) Configuring a modification LB action
- Configuring the ToS field in IP packets sent to the server
Creating an LB action
Creating an LB action for Layer 4 server load balancing
1. Enter system view.
system-view
2. Create a generic LB action, and enter LB action view.
loadbalance action action-name type generic
When you create an LB action, you must specify the action type. You can enter an existing LB action view without specifying the action type. If you specify the action type when entering an existing LB action view, the action type must be the one specified when you create the LB action.
3. (Optional.) Configure a description for the LB action.
description text
By default, no description is configured for the LB action.
Creating an LB action for Layer 7 server load balancing
1. Enter system view.
system-view
2. Create an HTTP or RADIUS LB action, and enter LB action view.
loadbalance action action-name type { http | radius }
When you create an LB action, you must specify the action type. You can enter an existing LB action view without specifying the action type. If you specify the action type when entering an existing LB action view, the action type must be the one specified when you create the LB action.
3. (Optional.) Configure a description for the LB action.
description text
By default, no description is configured for the LB action.
Configuring a forwarding LB action
About forwarding LB actions
The following forwarding LB action types are available:
· Forward—Forwards matching packets.
· Specify server farms—When the primary server farm is available (contains available real servers), the primary server farm is used to guide packet forwarding. When the primary server farm is not available, the backup server farm is used to guide packet forwarding.
· Close TCP connections—Closes TCP connections matching the LB policy by sending FIN or RST packets.
· Match the next rule upon failure to find a real server—If the device fails to find a real server according to the LB action, it matches the packet with the next rule in the LB policy.
· Close TCP connections upon failure to find a real server—Closes TCP connections matching the LB policy by sending FIN or RST packets if the device fails to find a real server according to the LB action.
Configuring the forwarding mode
1. Enter system view.
system-view
2. Enter generic LB action view.
loadbalance action action-name
3. Configure the forwarding mode.
forward all
By default, the forwarding mode is to discard packets.
Specifying server farms
1. Enter system view.
system-view
2. Enter LB action view.
loadbalance action action-name
3. Specify server farms.
server-farm server-farm-name [ backup backup-server-farm-name ] [ sticky sticky-name ]
By default, no server farm is specified.
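For example, the following sketch creates an HTTP LB action that forwards matching requests to a primary server farm, fails over to a backup server farm, and applies a sticky group. The action, server farm, and sticky group names are illustrative and assumed to exist:
system-view
loadbalance action action_video type http
server-farm sf_video backup sf_video_bak sticky sg_cookie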
Closing TCP connections
1. Enter system view.
system-view
2. Enter generic or HTTP LB action view.
loadbalance action action-name
3. Configure the method of closing TCP connections.
tcp-close { fin | rst }
By default, FIN packets are sent to close TCP connections.
Matching the next rule upon failure to find a real server
1. Enter system view.
system-view
2. Enter LB action view.
loadbalance action action-name
3. Match the next rule upon failure to find a real server.
fallback-action continue
By default, packets are discarded when no real servers are available for the current LB action.
Closing TCP connections upon failure to find a real server
1. Enter system view.
system-view
2. Enter generic or HTTP LB action view.
loadbalance action action-name
3. Configure the method of closing TCP connections upon failure to find a real server.
fallback-action close { fin | rst }
By default, packets are dropped when no real servers are available for the current LB action.
Configuring a modification LB action
Configuring the ToS field in IP packets sent to the server
1. Enter system view.
system-view
2. Enter LB action view.
loadbalance action action-name
3. Configure the ToS field in IP packets sent to the server.
set ip tos tos-number
By default, the ToS field in IP packets sent to the server is not changed.
Handling the HTTP header
1. Enter system view.
system-view
2. Enter HTTP LB action view.
loadbalance action action-name
3. Delete an HTTP header.
header delete { both | request | response } name header-name
By default, the HTTP header is not deleted.
The device deletes the specified header from HTTP packets.
4. Insert an HTTP header.
header insert { both | request | response } name header-name value value [ encode { base64 | url } ]
By default, the HTTP header is not inserted.
The device inserts the specified header into HTTP packets.
5. Rewrite an HTTP header.
header rewrite { both | request | response } name header-name value value replace replace [ encode { base64 | url } ]
By default, the HTTP header is not rewritten.
The device rewrites the specified content of the matching header in HTTP packets as new content.
6. Rewrite the URL in HTTP requests.
header rewrite request url value value replace replace [ encode { base64 | url } ]
By default, the URL in HTTP requests is not rewritten.
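For example, the following sketch inserts a custom header into HTTP requests and deletes the Server header from HTTP responses. A server farm is also specified so that the modification action is paired with a forwarding action and the modified packets are not dropped. The action, header, and server farm names and the header value are illustrative:
system-view
loadbalance action action_hdr type http
header insert request name X-LB-Node value lb1
header delete response name Server
server-farm sf_web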
Rewriting the URL in the Location header of HTTP responses from the server
1. Enter system view.
system-view
2. Enter HTTP LB action view.
loadbalance action action-name
3. Rewrite the URL in the Location header of HTTP responses from the server.
ssl url rewrite location location [ clearport clear-port ] [ sslport ssl-port ]
By default, the URL in the Location header of HTTP responses from the server is not rewritten.
If the Location header of an HTTP response packet contains the specified URL and HTTP port number, the system rewrites HTTP in the URL to HTTPS and rewrites the HTTP port number to an SSL port number.
Specifying an SSL client policy
1. Enter system view.
system-view
2. Enter HTTP LB action view.
loadbalance action action-name
3. Specify an SSL client policy for the LB action.
ssl-client-policy policy-name
By default, the LB action does not reference any SSL client policies.
Specifying an SSL client policy enables the LB device (SSL client) to send encrypted traffic to an SSL server.
You must disable and then enable a virtual server for a modified SSL policy to take effect.
Rewriting the content of HTTP responses
1. Enter system view.
system-view
2. Enter HTTP LB action view.
loadbalance action action-name
3. Rewrite the content of HTTP responses.
content rewrite value value replace replace
By default, the content of HTTP responses is not rewritten.
Configuring the HTTP redirection feature
1. Enter system view.
system-view
2. Enter HTTP LB action view.
loadbalance action action-name
3. Enable the HTTP redirection feature and specify a redirection URL for the LB action.
redirect relocation relocation
By default, the HTTP redirection feature is disabled for an LB action.
This feature redirects all HTTP request packets matching an LB action to the specified URL.
4. Specify the redirection status code that the LB device returns to a client.
redirect return-code { 301 | 302 }
By default, the redirection status code that the LB device returns to a client is 302.
This command takes effect only when the HTTP redirection feature is enabled for an LB action.
Specifying a response file for matching HTTP requests
About specifying a response file for matching HTTP requests
If the URL path in a client request matches the specified URL path, the device responds to the request by using an uncompressed file.
If the URL path in a client request matches the specified working path plus a relative path in the specified zip file, the device responds to the request by using the file in the zip file.
If you configure both an uncompressed file and a compressed file for the same URL path, the uncompressed file is used to respond to matching HTTP requests.
Procedure
1. Enter system view.
system-view
2. Enter HTTP LB action view.
loadbalance action action-name
3. Specify a response file for matching HTTP requests.
response { url url file filename | workpath workpath zip-file zip-filename }
By default, no response file is specified for HTTP requests.
Specifying a response file used upon load balancing failure
About specifying a response file used upon load balancing failure
This feature enables the device to respond to client requests when the device fails to find an available real server or fails to find the response file specified in the response command.
Restrictions and guidelines
The response file specified in this feature must contain a complete HTTP packet and cannot contain only the HTTP content.
Procedure
1. Enter system view.
system-view
2. Enter HTTP LB action view.
loadbalance action action-name
3. Specify a response file used upon load balancing failure.
fallback-action response raw-file raw-filename
By default, no response file is specified for use upon load balancing failure.
Configuring an LB policy
About configuring an LB policy
An LB policy associates an LB class with an LB action to guide packet forwarding. In an LB policy, you can configure an LB action for packets matching the specified LB class, and configure the default action for packets matching no LB class.
You can specify multiple LB classes for an LB policy. Packets match the LB classes in the order the LB classes are configured. If an LB class is matched, the specified LB action is performed. If no LB class is matched, the default LB action is performed.
LB policy tasks at a glance
The LB policy configuration tasks for Layer 4 and Layer 7 server load balancing are the same.
To configure an LB policy, perform the following tasks:
1. Creating an LB policy
2. Specifying an LB action
3. Specifying the default LB action
Creating an LB policy
Creating an LB policy for Layer 4 server load balancing
1. Enter system view.
system-view
2. Create a generic LB policy, and enter LB policy view.
loadbalance policy policy-name type generic
When you create an LB policy, you must specify the policy type. You can enter an existing LB policy view without specifying the policy type. If you specify the policy type when entering an existing LB policy view, the policy type must be the one specified when you create the LB policy.
3. (Optional.) Configure a description for the LB policy.
description text
By default, no description is configured for the LB policy.
Creating an LB policy for Layer 7 server load balancing
1. Enter system view.
system-view
2. Create an HTTP or RADIUS LB policy, and enter LB policy view.
loadbalance policy policy-name type { http | radius }
When you create an LB policy, you must specify the policy type. You can enter an existing LB policy view without specifying the policy type. If you specify the policy type when entering an existing LB policy view, the policy type must be the one specified when you create the LB policy.
3. (Optional.) Configure a description for the LB policy.
description text
By default, no description is configured for the LB policy.
Specifying an LB action
Restrictions and guidelines
A generic LB policy can reference only generic LB classes and generic LB actions. An HTTP LB policy can reference HTTP or generic LB classes and LB actions. A RADIUS LB policy can reference RADIUS or generic LB classes and LB actions.
Procedure
1. Enter system view.
system-view
2. Enter LB policy view.
loadbalance policy policy-name
3. Specify an LB action for the specified LB class.
class class-name [ insert-before before-class-name ] action action-name
By default, no LB action is specified for any LB classes.
You can specify the same LB action for different LB classes.
Specifying the default LB action
Restrictions and guidelines
A generic LB policy can only reference generic LB actions. This rule does not apply to HTTP LB policies.
Procedure
1. Enter system view.
system-view
2. Enter LB policy view.
loadbalance policy policy-name
3. Specify the default LB action.
default-class action action-name
By default, no default LB action is specified.
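As a wrap-up example, the following sketch creates an HTTP LB policy that applies action_video to packets matching class_video and applies action_default to all other packets. The policy, class, and action names are illustrative, and the classes and actions are assumed to exist:
system-view
loadbalance policy policy_web type http
class class_video action action_video
default-class action action_default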
Configuring a sticky group
A sticky group uses a sticky method to distribute similar sessions to the same real server according to sticky entries. The sticky method applies to the first packet of a session. Other packets of the session are distributed to the same real server.
Sticky group tasks at a glance for Layer 4 server load balancing
1. Creating a sticky group
2. Configuring the IP sticky method
3. (Optional.) Configuring the timeout timer for sticky entries
4. (Optional.) Ignoring the limits for sessions that match sticky entries
5. (Optional.) Enabling stickiness-over-busyness
Sticky group tasks at a glance for Layer 7 server load balancing
1. Creating a sticky group
2. Configuring a sticky method
Choose the following tasks as needed:
¡ Configuring the IP sticky method
¡ Configuring the HTTP content sticky method
¡ Configuring the HTTP cookie sticky method
¡ Configuring the HTTP header sticky method
¡ Configuring the HTTP or UDP payload sticky method
¡ Configuring the RADIUS attribute sticky method
¡ Configuring the SIP call ID sticky method
¡ Configuring the SSL sticky method
3. (Optional.) Configuring the timeout timer for sticky entries
4. (Optional.) Ignoring the limits for sessions that match sticky entries
5. (Optional.) Enabling stickiness-over-busyness
Creating a sticky group
Creating a sticky group for Layer 4 server load balancing
1. Enter system view.
system-view
2. Create an address- and port-type sticky group and enter sticky group view.
sticky-group group-name type address-port
When you create a sticky group, you must specify the group type. You can enter an existing sticky group view without specifying the group type. If you specify the group type when entering an existing sticky group view, the group type must be the one specified when you create the sticky group.
3. (Optional.) Configure a description for the sticky group.
description text
By default, no description is configured for the sticky group.
Creating a sticky group for Layer 7 server load balancing
1. Enter system view.
system-view
2. Create a sticky group of the HTTP content, HTTP cookie, HTTP header, HTTP/UDP payload, RADIUS, SIP, or SSL type and enter sticky group view.
sticky-group group-name type { http-content | http-cookie | http-header | payload | radius | sip | ssl }
When you create a sticky group, you must specify the group type. You can enter an existing sticky group view without specifying the group type. If you specify the group type when entering an existing sticky group view, the group type must be the one specified when you create the sticky group.
3. (Optional.) Configure a description for the sticky group.
description text
By default, no description is configured for the sticky group.
Configuring the IP sticky method
1. Enter system view.
system-view
2. Enter sticky group view.
sticky-group group-name
3. Configure the IP sticky method.
IPv4:
ip [ port ] { both | destination | source } [ mask mask-length ]
IPv6:
ipv6 [ port ] { both | destination | source } [ prefix prefix-length ]
By default, no IP sticky method is configured.
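For example, the following sketch creates an address- and port-type sticky group that generates sticky entries based on the source IPv4 address with a 24-bit mask and sets a 300-second timeout for the entries. The group name and values are illustrative:
system-view
sticky-group sg_ip type address-port
ip source mask 24
timeout 300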
Configuring the HTTP content sticky method
1. Enter system view.
system-view
2. Enter HTTP content sticky group view.
sticky-group group-name
3. Configure the HTTP content sticky method.
content [ offset offset ] [ start start-string ] [ end end-string | length length ]
By default, no HTTP content sticky method is configured.
This command is not supported by the virtual servers of the fast HTTP type.
Configuring the HTTP cookie sticky method
1. Enter system view.
system-view
2. Enter HTTP cookie sticky group view.
sticky-group group-name
3. Configure the HTTP cookie sticky method.
cookie { get name cookie-name [ offset offset ] [ start start-string ] [ end end-string | length length ] | { insert | rewrite } [ name cookie-name ] }
By default, no HTTP cookie sticky method is configured.
4. (Optional.) Specify the name of the secondary cookie that appears in the URI.
cookie secondary name value
By default, the name of the secondary cookie to be searched in the URI is not specified.
5. (Optional.) Enable checking for all packets.
check all-packet
By default, the checking for all packets is disabled.
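For example, the following sketch creates an HTTP cookie sticky group that inserts a cookie into responses so that subsequent requests carrying the cookie are assigned to the same real server. The group name and cookie name are illustrative:
system-view
sticky-group sg_cookie type http-cookie
cookie insert name LB_SESSION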
Configuring the HTTP header sticky method
1. Enter system view.
system-view
2. Enter HTTP header sticky group view.
sticky-group group-name
3. Configure the HTTP header sticky method.
header { { { host | name header-name | url } [ offset offset ] [ start start-string ] [ end end-string | length length ] } | request-method | version }
By default, no HTTP header sticky method is configured.
Configuring the HTTP or UDP payload sticky method
1. Enter system view.
system-view
2. Enter HTTP or UDP payload sticky group view.
sticky-group group-name
3. Configure the HTTP or UDP payload sticky method.
payload [ offset offset ] [ start start-string ] [ end end-string | length length ]
By default, no HTTP or UDP payload sticky method is configured.
This command is not supported by the virtual servers of the fast HTTP type.
Configuring the RADIUS attribute sticky method
1. Enter system view.
system-view
2. Enter RADIUS attribute sticky group view.
sticky-group group-name
3. Configure the RADIUS attribute sticky method.
radius-attribute { code attribute-code | framed-ip-address | user-name }
By default, no RADIUS attribute sticky method is configured.
Configuring the SIP call ID sticky method
About configuring the SIP call ID sticky method
The SIP call ID sticky method allows the device to generate sticky entries based on the Call-ID header field in SIP messages. Packets with the same call ID are assigned to the same real server.
Procedure
1. Enter system view.
system-view
2. Enter SIP sticky group view.
sticky-group group-name
3. Configure the SIP call ID sticky method.
header call-id
By default, no SIP call ID sticky method is configured.
Configuring the SSL sticky method
1. Enter system view.
system-view
2. Enter SSL sticky group view.
sticky-group group-name
3. Configure the SSL sticky method based on SSL session ID.
ssl session-id
By default, no SSL sticky method is configured.
Configuring the timeout timer for sticky entries
1. Enter system view.
system-view
2. Enter sticky group view.
sticky-group group-name
3. Configure the timeout timer for sticky entries.
timeout timeout-value
By default, the timeout timer for sticky entries is 86400 seconds for sticky groups of the HTTP cookie type and 60 seconds for sticky groups of other types.
Ignoring the limits for sessions that match sticky entries
About ignoring the limits for sessions that match sticky entries
Perform this task to ignore the following limits for sessions that match sticky entries:
· Bandwidth and connection parameters on real servers.
· LB connection limit policies on virtual servers.
Procedure
1. Enter system view.
system-view
2. Enter sticky group view.
sticky-group group-name
3. Ignore the limits for sessions that match sticky entries.
override-limit enable
By default, the session limits apply to sessions that match sticky entries.
Enabling stickiness-over-busyness
About stickiness-over-busyness
This feature enables the device to assign client requests to real servers based on sticky entries, regardless of whether the real servers are busy.
When this feature is disabled, the device assigns client requests to only real servers in normal state.
Procedure
1. Enter system view.
system-view
2. Enter sticky group view.
sticky-group group-name
3. Enable stickiness-over-busyness.
sticky-over-busy enable
By default, stickiness-over-busyness is disabled.
Configuring a parameter profile
Parameter profile tasks at a glance
To configure a parameter profile, perform the following tasks:
1. Configuring an IP parameter profile
a. Creating a parameter profile
b. Configuring the ToS field in IP packets sent to the client
2. Configuring a TCP parameter profile
a. Creating a parameter profile
b. Configuring parameters in the profile
Choose the following tasks as needed:
- Configuring the maximum local window size for TCP connections
- Configuring the idle timeout for TCP connections
- Configuring the TIME_WAIT state timeout time for TCP connections
- Configuring the retransmission timeout time for SYN packets
- Configuring the TCP keepalive parameters
- Configuring the FIN-WAIT-1 state timeout time for TCP connections
- Configuring the FIN-WAIT-2 state timeout time for TCP connections
- Setting the MSS for the LB device
3. Configuring a TCP-application parameter profile
a. Creating a parameter profile
b. Configuring the TCP payload match parameters
4. Configuring an HTTP parameter profile
a. Creating a parameter profile
b. Configuring parameters in the profile
Choose the following tasks as needed:
- Enabling load balancing for each HTTP request
- Configuring connection reuse between the LB device and the server
- Modifying the header in each HTTP request or response
- Disabling case sensitivity matching for HTTP
- Configuring the maximum length to parse the HTTP content
- Configuring secondary cookie parameters
- Specifying the action to take when the header of an HTTP packet exceeds the maximum length
5. Configuring an HTTP-compression parameter profile
a. Creating a parameter profile
b. Configuring the HTTP compression feature
6. Configuring an HTTP-statistics parameter profile
a. Creating a parameter profile
b. Configuring the HTTP statistics feature
Creating a parameter profile
Creating a parameter profile for Layer 4 server load balancing
1. Enter system view.
system-view
2. Create an IP parameter profile and enter parameter profile view.
parameter-profile profile-name type ip
When you create a parameter profile, you must specify the profile type. You can enter an existing parameter profile view without specifying the profile type. If you specify the profile type when entering an existing parameter profile view, the profile type must be the one specified when you create the parameter profile.
3. (Optional.) Configure a description for the parameter profile.
description text
By default, no description is configured for the parameter profile.
Creating a parameter profile for Layer 7 server load balancing
1. Enter system view.
system-view
2. Create a parameter profile of the HTTP, HTTP-compression, HTTP-statistics, OneConnect, TCP, or TCP-application type and enter parameter profile view.
parameter-profile profile-name type { http | http-compression | http-statistics | oneconnect | tcp | tcp-application }
When you create a parameter profile, you must specify the profile type. You can enter an existing parameter profile view without specifying the profile type. If you specify the profile type when entering an existing parameter profile view, the profile type must be the one specified when you create the parameter profile.
3. (Optional.) Configure a description for the parameter profile.
description text
By default, no description is configured for the parameter profile.
Configuring the ToS field in IP packets sent to the client
1. Enter system view.
system-view
2. Enter IP parameter profile view.
parameter-profile profile-name
3. Configure the ToS field in the IP packets sent to the client.
set ip tos tos-number
By default, the ToS field in IP packets sent to the client is not changed.
Configuring the maximum local window size for TCP connections
1. Enter system view.
system-view
2. Enter TCP parameter profile view.
parameter-profile profile-name
3. Configure the maximum local window size for TCP connections.
tcp window-size size
By default, the maximum local window size is 65535 for TCP connections.
Configuring the idle timeout for TCP connections
About configuring the idle timeout for TCP connections
Perform this task to configure the idle timeout for TCP connections between the LB device and the clients and for TCP connections between the LB device and the servers. If no traffic is available on a TCP connection before the idle timeout expires, the LB device terminates the TCP connection.
Procedure
1. Enter system view.
system-view
2. Enter TCP parameter profile view.
parameter-profile profile-name
3. Configure the idle timeout for TCP connections.
tcp connection idle-timeout value
By default, the idle timeout is 0 seconds, which means TCP connections never time out.
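For example, the following sketch creates a TCP parameter profile that enlarges the maximum local window size and sets a 600-second idle timeout for TCP connections. The profile name and values are illustrative:
system-view
parameter-profile pp_tcp type tcp
tcp window-size 32768
tcp connection idle-timeout 600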
Configuring the TIME_WAIT state timeout time for TCP connections
About configuring the TIME_WAIT state timeout time for TCP connections
The TCP TIME_WAIT timer is twice the Maximum Segment Lifetime (MSL), which is the maximum amount of time that any segment can exist on the network before being dropped.
A TCP connection cannot be released until the TIME_WAIT timer expires. To release TCP connections faster and improve load balancing efficiency, perform this task to set a smaller TIME_WAIT state timeout time.
Procedure
1. Enter system view.
system-view
2. Enter TCP parameter profile view.
parameter-profile profile-name
3. Configure the TIME_WAIT state timeout time.
time-wait timeout value
By default, the TIME_WAIT state timeout time for TCP connections is 2 seconds.
Configuring the retransmission timeout time for SYN packets
1. Enter system view.
system-view
2. Enter TCP parameter profile view.
parameter-profile profile-name
3. Configure the retransmission timeout time for SYN packets.
syn retransmission-timeout timeout-value
By default, the retransmission timeout time for SYN packets is 10 seconds.
Configuring the TCP keepalive parameters
1. Enter system view.
system-view
2. Enter TCP parameter profile view.
parameter-profile profile-name
3. Configure the idle timeout time for sending keepalive packets.
keepalive idle-timeout timeout-value
By default, the idle timeout time for sending keepalive packets is 10 seconds.
4. Configure the retransmission interval and retransmission times for keepalive packets.
keepalive retransmission interval interval count count
By default, the retransmission interval is 3 seconds, and the retransmission times is 3.
Configuring the FIN-WAIT-1 state timeout time for TCP connections
1. Enter system view.
system-view
2. Enter TCP parameter profile view.
parameter-profile profile-name
3. Configure the FIN-WAIT-1 state timeout time.
fin-wait1 timeout timeout-value
By default, the FIN-WAIT-1 state timeout time for TCP connections is 5 seconds.
Configuring the FIN-WAIT-2 state timeout time for TCP connections
1. Enter system view.
system-view
2. Enter TCP parameter profile view.
parameter-profile profile-name
3. Configure the FIN-WAIT-2 state timeout time.
fin-wait2 timeout timeout-value
By default, the FIN-WAIT-2 state timeout time for TCP connections is 5 seconds.
Specifying the action to take on the segments that exceed the MSS in the HTTP requests from the client
1. Enter system view.
system-view
2. Enter TCP parameter profile view.
parameter-profile profile-name
3. Specify the action to take on the segments that exceed the MSS in the HTTP requests from the client.
exceed-mss { allow | drop }
By default, the device allows the segments to exceed the MSS in the HTTP requests from the client.
Setting the MSS for the LB device
About the MSS
When a client establishes a TCP connection to the LB device, the client sends its own MSS (maximum segment size) value to the LB device. The LB device records the MSS value and sends the configured MSS value to the client. The client and the LB device use the smaller MSS value for communication.
When the LB device establishes a TCP connection to the server, the LB device sends the configured MSS value to the server. The server records the MSS value and sends its own MSS value to the LB device. The LB device and the server use the smaller MSS value for communication.
Procedure
1. Enter system view.
system-view
2. Enter TCP parameter profile view.
parameter-profile profile-name
3. Set the MSS for the LB device.
tcp mss value
By default, the MSS is not set for the LB device.
Configuring the TCP payload match parameters
About configuring the TCP payload match parameters
For the TCP payload match rule, the device buffers traffic for TCP payload matching during the buffering period. The device stops buffering traffic when any of the following events occurs:
· The device receives the buffering end string.
· The size of buffered data exceeds the specified buffering size.
· The buffered data matches the TCP payload match rule.
Procedure
1. Enter system view.
system-view
2. Enter TCP-application parameter profile view.
parameter-profile profile-name
3. Set the buffering period for TCP payload matching.
match-buffer-time time
By default, the buffering period for TCP payload matching is 3 seconds.
4. Configure a condition for ending data buffering in TCP payload matching. Choose the options to configure as needed:
¡ Set the maximum buffering size.
match-buffer-size size
By default, the maximum buffering size is 4096 bytes.
¡ Configure the buffering end string.
match-buffer-end string
By default, no buffering end string is configured.
Enabling load balancing for each HTTP request
1. Enter system view.
system-view
2. Enter HTTP parameter profile view.
parameter-profile profile-name
3. Enable load balancing for each HTTP request.
rebalance per-request
By default, load balancing applies to the first HTTP request of a connection. Other HTTP requests are processed in the same way the first request is processed.
Configuring connection reuse between the LB device and the server
About connection reuse
This feature allows the LB device to establish connections to the server that can be reused by multiple clients.
After you enable connection reuse, you can configure the following parameters:
· Idle timeout time—Amount of time that a TCP connection can stay idle before it is disconnected. After the TCP connection is disconnected, new connection requests trigger establishment of new TCP connections.
· Maximum reuse times—Maximum number of times that a TCP connection can be reused. The TCP connection is not disconnected until the maximum number of reuse times is reached. After the TCP connection is disconnected, new connection requests trigger establishment of a new TCP connection.
· IPv4 mask/IPv6 prefix—Limits the network segment of clients that can reuse connections between the LB device and servers. If the client that initiates a connection request is in the same network segment as the idle TCP connection, the idle TCP connection is reused. If the client does not match this requirement, a new TCP connection is established.
Enabling connection reuse between the LB device and the server
1. Enter system view.
system-view
2. Enter HTTP parameter profile view.
parameter-profile profile-name
3. Enable connection reuse between the LB device and the server.
server-connection reuse
By default, connection reuse is disabled between the LB device and the server.
This command is not supported by the virtual servers of the fast HTTP type.
4. Return to system view.
quit
Configuring connection reuse parameters
1. Enter OneConnect parameter profile view.
parameter-profile profile-name
2. (Optional.) Set the idle timeout time for TCP connections between the LB device and servers.
idle-time idle-time
The default setting is 86400 seconds.
3. (Optional.) Set the maximum number of times that a TCP connection can be reused.
max-reuse reuse-number
The default setting is 1000.
4. (Optional.) Specify the IPv4 mask for connection reuse.
ip source mask { mask-length | mask }
By default, the IPv4 mask for connection reuse is the natural mask.
5. (Optional.) Specify the IPv6 prefix length for connection reuse.
ipv6 source prefix prefix-length
By default, client IPv6 addresses with a prefix length of 0 can reuse connections.
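For example, the following sketch enables connection reuse in an HTTP parameter profile and tunes the reuse parameters in a OneConnect parameter profile. The profile names and values are illustrative:
system-view
parameter-profile pp_http type http
server-connection reuse
quit
parameter-profile pp_oneconnect type oneconnect
idle-time 600
max-reuse 100
ip source mask 24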
Modifying the header in each HTTP request or response
1. Enter system view.
system-view
2. Enter HTTP parameter profile view.
parameter-profile profile-name
3. Perform the insert, delete, or modify operation for the header in each HTTP request or response.
header modify per-request
By default, the insert, delete, or modify operation is performed for the header in the first HTTP request or response of a connection.
Disabling case sensitivity matching for HTTP
1. Enter system view.
system-view
2. Enter HTTP parameter profile view.
parameter-profile profile-name
3. Disable case sensitivity matching for HTTP.
case-insensitive
By default, case sensitivity matching is enabled for HTTP.
Configuring the maximum length to parse the HTTP content
1. Enter system view.
system-view
2. Enter HTTP parameter profile view.
parameter-profile profile-name
3. Configure the maximum length to parse the HTTP content.
content maxparse-length length
By default, the maximum length to parse the HTTP content is 4096.
This command is not supported by the virtual servers of the fast HTTP type.
4. Configure the maximum length to parse HTTP headers.
header maxparse-length length
By default, the maximum length to parse HTTP headers is 4096.
This command is not supported by the virtual servers of the fast HTTP type.
Configuring secondary cookie parameters
1. Enter system view.
system-view
2. Enter HTTP parameter profile view.
parameter-profile profile-name
3. Configure the delimiter that separates secondary cookies in a URL.
secondary-cookie delimiters text
By default, the delimiter that separates secondary cookies in a URL is slash (/), ampersand (&), number sign (#), or plus (+).
4. Configure the start string for secondary cookies in a URL.
secondary-cookie start text
By default, the start string for secondary cookies in a URL is question mark (?).
Specifying the action to take when the header of an HTTP packet exceeds the maximum length
1. Enter system view.
system-view
2. Enter HTTP parameter profile view.
parameter-profile profile-name
3. Specify the action to take when the header of an HTTP packet exceeds the maximum length.
header exceed-length { continue | drop }
By default, the system continues to perform load balancing for HTTP requests or responses when their packet headers exceed the maximum length.
This command is not supported by the virtual servers of the fast HTTP type.
Configuring the HTTP compression feature
1. Enter system view.
system-view
2. Enter HTTP-compression parameter profile view.
parameter-profile profile-name
3. Set the minimum length of HTTP response content for compression.
content length-threshold length
By default, the minimum length of HTTP response content for compression is 1024 bytes.
4. Set the compression level.
compression level level
By default, the compression level is 1.
5. Set the memory size used for compression.
memory-size size
By default, the memory size used for compression is 8 KB.
6. Enable compression for responses to HTTP 1.0 requests.
request-version all
By default, compression is disabled for responses to HTTP 1.0 requests.
7. Specify the preferred compression algorithm.
prefer-method { deflate | gzip }
By default, the preferred compression algorithm is gzip.
8. Delete the Accept-Encoding header from HTTP requests.
header delete request accept-encoding
By default, the Accept-Encoding header is deleted from HTTP requests.
9. Insert the Vary header into HTTP responses and set the header content to Accept-Encoding.
header insert response vary
By default, the Vary header is inserted into HTTP responses, and the header content is Accept-Encoding.
10. Configure a filtering rule for compression.
rule [ rule-id ] { permit | deny } { content | url } expression
By default, no filtering rules are configured.
11. Set the window size used for compression.
window-size size
By default, the window size used for compression is 16 KB.
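For example, the following sketch compresses HTTP response content longer than 2048 bytes at compression level 6 and prefers the gzip algorithm. The profile name and values are illustrative:
system-view
parameter-profile pp_comp type http-compression
content length-threshold 2048
compression level 6
prefer-method gzip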
Configuring the HTTP statistics feature
About HTTP statistics
This feature allows you to collect statistics about HTTP traffic destined for matching URLs by configuring an HTTP-statistics parameter profile.
If HTTP packets match the specified URL and source IP address object group, they are counted based on the source IP address object group. If HTTP packets match the specified URL but do not match the specified source IP address object group, they are counted based on the source IP address.
You can configure multiple URL match rules for one statistics node to count the total amount of traffic destined for these URLs.
Procedure
1. Enter system view.
system-view
2. Enter HTTP-statistics parameter profile view.
parameter-profile profile-name
3. (Optional.) Enable collection of HTTP traffic statistics by source IP address object group.
source-ip object-group object-group-name
By default, HTTP traffic statistics are collected on a per-IP address basis.
4. Create a statistics node and enter its view.
node node-name
5. Configure a URL match rule.
statistics-match [ rule-id ] url url
By default, a statistics node does not have URL match rules.
6. (Optional.) Configure a description for the statistics node.
description text
By default, no description is configured for a statistics node.
Configuring an LB probe template
About configuring an LB probe template
You can configure an RST or zero-window LB probe template to monitor the number of RST packets or zero-window packets sent by each server farm member in a server farm. The device identifies the health of server farm members according to the packet count and takes one of the following protection actions when the packet count threshold is reached:
· Places a real server in busy state—After placing a real server in busy state, the device starts probing the real server at the specified probe intervals. If the number of RST or zero-window packets sent does not reach the threshold in a probe interval, the real server is placed back in normal state. If the threshold violation persists after the maximum number of probes is reached, the system automatically shuts down the real server.
· Shuts down a real server—Automatically shuts down the real server and sets the server state to Auto shutdown.
Configuring a TCP-RST LB probe template
1. Enter system view.
system-view
2. Create a TCP-RST LB probe template and enter its view.
loadbalance probe-template tcp-rst template-name
3. (Optional.) Configure a description for the TCP-RST LB probe template.
description text
By default, no description is configured for a TCP-RST LB probe template.
4. Set the monitoring time for the TCP-RST LB probe template.
monitor-interval interval-time
By default, the monitoring time is 10 seconds.
5. Set the RST packet count threshold for the TCP-RST LB probe template.
rst threshold number
By default, the RST packet count threshold is 1000000.
6. Configure the protection action for the TCP-RST LB probe template.
protect-action { auto-shutdown | busy [ probe-interval interval ] [ probe-times times ] }
By default, the protection action is to place a real server in busy state.
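For example, the following sketch creates a TCP-RST LB probe template that places a real server in busy state when the server sends more than 5000 RST packets within a 30-second monitoring period, and then probes the server every 10 seconds for a maximum of 3 probes. The template name and values are illustrative:
system-view
loadbalance probe-template tcp-rst probe_rst
monitor-interval 30
rst threshold 5000
protect-action busy probe-interval 10 probe-times 3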
Configuring a TCP zero-window LB probe template
1. Enter system view.
system-view
2. Create a TCP zero-window LB probe template and enter its view.
loadbalance probe-template tcp-zero-window template-name
3. (Optional.) Configure a description for the TCP zero-window LB probe template.
description text
By default, no description is configured for a TCP zero-window LB probe template.
4. Set the monitoring time for the TCP zero-window LB probe template.
monitor-interval interval-time
By default, the monitoring time is 10 seconds.
5. Set the percentage threshold of zero-window packets for the TCP zero-window LB probe template.
zero-window threshold percentage
By default, the percentage threshold of zero-window packets is 40%.
6. Configure the protection action for the TCP zero-window LB probe template.
protect-action { auto-shutdown | busy [ probe-interval interval ] [ probe-times times ] }
By default, the protection action is to place a real server in busy state.
Configuring an LB connection limit policy
About LB connection limit policies
An LB connection limit policy limits the number of connections on the device. It prevents a large number of connections from consuming excessive device system resources and server resources, so that internal network resources (hosts or servers) are protected and device system resources are used more appropriately.
An LB connection limit policy can have multiple rules. Each rule specifies a range of users and the limit to the user connections. A connection limit policy applies only to the user connections matching a rule. When the number of connections for a certain type reaches the upper limit (max-amount), the device does not accept new connection requests of that type. It accepts new connection requests only when the number of connections drops below the lower limit (min-amount).
The user ranges in the rules are set by using ACLs.
Procedure
1. Enter system view.
system-view
2. Create an LB connection limit policy, and enter LB connection limit policy view.
loadbalance limit-policy policy-name
3. Configure an LB connection limit rule.
limit limit-id acl [ ipv6 ] { acl-number | name acl-name } [ per-destination | per-service | per-source ] * amount max-amount min-amount
By default, an LB connection limit policy does not contain rules.
4. (Optional.) Configure a description for the LB connection limit policy.
description text
By default, no description is configured for an LB connection limit policy.
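For example, the following sketch limits each source IP address matching ACL 3000 to a maximum of 1000 connections, accepts new connections again only after the count drops below 800, and applies the policy to a virtual server. The policy name, ACL number, and virtual server name are illustrative; the ACL and virtual server are assumed to exist:
system-view
loadbalance limit-policy lp_web
limit 1 acl 3000 per-source amount 1000 800
quit
virtual-server vs_http
lb-limit-policy lp_web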
Configuring the ALG feature
About the ALG feature
The Application Level Gateway (ALG) feature distributes parent and child sessions to the same link.
Procedure
1. Enter system view.
system-view
2. Enable ALG.
¡ Enable ALG for the specified protocol:
loadbalance alg { dns | ftp | h323 | icmp-error | ils | mgcp | nbt | pptp | rsh | rtsp | sccp | sip | sqlnet | tftp | xdmcp }
¡ Enable ALG for all protocols:
loadbalance alg all-enable
By default, ALG is enabled for the DNS, FTP, PPTP, and RTSP protocols and ICMP error packets.
Reloading a response file
About reloading a response file
If a response file (specified in the response or fallback-action response raw-file command) changes, you must reload the file to make it take effect.
Procedure
1. Enter system view.
system-view
2. Reload a response file.
reload http-response { file filename }
Performing a load balancing test
About performing a load balancing test
Perform this task in any view to test the load balancing result.
Performing an IPv4 load balancing test
To perform an IPv4 load balancing test, execute the following command in any view:
In standalone mode:
loadbalance schedule-test ip [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmp | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port
In IRF mode:
loadbalance schedule-test ip [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmp | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port [ slot slot-number ]
loadbalance schedule-test ip [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmp | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port [ chassis chassis-number slot slot-number ]
Performing an IPv6 load balancing test
To perform an IPv6 load balancing test, execute the following command in any view:
In standalone mode:
loadbalance schedule-test ipv6 [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmpv6 | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port
In IRF mode:
loadbalance schedule-test ipv6 [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmpv6 | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port [ slot slot-number ]
loadbalance schedule-test ipv6 [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmpv6 | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port [ chassis chassis-number slot slot-number ]
Enabling SNMP notifications
About enabling SNMP notifications
To report critical load balancing events to an NMS, enable SNMP notifications for load balancing. For load balancing event notifications to be sent correctly, you must also configure SNMP as described in Network Management and Monitoring Configuration Guide.
The SNMP notifications configuration tasks for Layer 4 and Layer 7 server load balancing are the same.
Procedure
1. Enter system view.
system-view
2. Enable SNMP notifications for load balancing.
snmp-agent trap enable loadbalance
By default, SNMP notifications are enabled for load balancing.
Enabling load balancing logging
About load balancing logging
For security auditing purposes, enable load balancing logging to record load balancing information. Load balancing logging includes basic logging and NAT logging.
Load balancing basic logging generates logs for the following events:
· The state of a real server or real server group changes.
· The health monitoring result of a real server changes.
· The number of connections on a real server or virtual server reaches or drops below the upper limit.
· The connection establishment rate on a real server or virtual server reaches or drops below the upper limit.
· A primary/backup server farm switchover occurs between server farms specified for a virtual server.
· A primary/backup server farm switchover occurs between server farms specified for an LB action.
Load balancing NAT logging records NAT session information, including IP address and port translation information and access information.
Enabling load balancing basic logging
1. Enter system view.
system-view
2. Enable load balancing basic logging.
loadbalance log enable base
By default, load balancing basic logging is enabled.
Enabling load balancing NAT logging
1. Enter system view.
system-view
2. Enable load balancing NAT logging.
loadbalance log enable nat
By default, load balancing NAT logging is disabled.
Displaying and maintaining server load balancing
Execute display commands in any view and reset commands in user view.
Task |
Command |
Display LB action information. |
display loadbalance action [ name action-name ] |
Display LB class information. |
display loadbalance class [ name class-name ] |
Display Layer 7 LB TCP connection information. |
In standalone mode: display loadbalance connections [ client-side { ipv4 | ipv6 } [ cs-client-ip ip-address [ cs-client-port port-number ] ] [ cs-server-ip ip-address [ cs-server-port port-number ] ] ] [ server-side { ipv4 | ipv6 } [ ss-client-ip ip-address [ ss-client-port port-number ] ] [ ss-server-ip ip-address [ ss-server-port port-number ] ] ] In IRF mode: display loadbalance connections [ client-side { ipv4 | ipv6 } [ cs-client-ip ip-address [ cs-client-port port-number ] ] [ cs-server-ip ip-address [ cs-server-port port-number ] ] ] [ server-side { ipv4 | ipv6 } [ ss-client-ip ip-address [ ss-client-port port-number ] ] [ ss-server-ip ip-address [ ss-server-port port-number ] ] ] [ slot slot-number ] |
Display LB hot backup statistics. |
In IRF mode: display loadbalance hot-backup statistics [ slot slot-number ] |
Display LB connection limit policy information. |
display loadbalance limit-policy [ name policy-name ] |
Display LB policy information. |
display loadbalance policy [ name policy-name ] |
Display LB probe template information. |
display loadbalance probe-template [ name template-name ] |
Display SNAT address pool information. |
display loadbalance snat-pool [ name pool-name ] |
Display parameter profile information. |
display parameter-profile [ name parameter-name ] |
Display real server information. |
display real-server [ brief | name real-server-name ] |
Display server farm member information. |
display real-server server-farm server-farm-name [ name real-server-name port port-number ] |
Display real server statistics. |
In standalone mode: display real-server statistics [ name real-server-name ] In IRF mode: display real-server statistics [ name real-server-name ] [ slot slot-number ] |
Display server farm member statistics. |
In standalone mode: display real-server statistics server-farm server-farm-name [ name real-server-name port port-number ] In IRF mode: display real-server statistics server-farm server-farm-name [ name real-server-name port port-number ] [ slot slot-number ] |
Display server farm information. |
display server-farm [ brief | name server-farm-name ] |
Display sticky entry information. |
In standalone mode: display sticky virtual-server [ virtual-server-name ] [ class class-name | default-class | default-server-farm ] In IRF mode: display sticky virtual-server [ virtual-server-name ] [ class class-name | default-class | default-server-farm ] [ slot slot-number ] |
Display sticky group information. |
display sticky-group [ name group-name ] |
Display virtual server information. |
display virtual-server [ brief | name virtual-server-name ] |
Display virtual server statistics. |
In standalone mode: display virtual-server statistics [ name virtual-server-name ] In IRF mode: display virtual-server statistics [ name virtual-server-name ] [ slot slot-number ] |
Display the ALG status for all protocols. |
display loadbalance alg |
Perform a PCRE regular expression match test and display the match result. |
loadbalance test pcre value value { string string | file file-name } [ offset offset ] [ case-insensitive ] |
Perform a regular-expression-based rewrite test and display the rewrite result. |
loadbalance test rewrite value value replace replace-string { string string | file file-name } [ offset offset ] [ case-insensitive ] |
Clear LB hot backup statistics. |
In IRF mode: reset loadbalance hot-backup statistics [ slot slot-number ] |
Clear real server statistics. |
reset real-server statistics [ real-server-name ] |
Clear server farm member statistics. |
reset real-server statistics server-farm server-farm-name [ name real-server-name port port-number ] |
Clear virtual server statistics. |
reset virtual-server statistics [ virtual-server-name ] |
Server load balancing configuration examples
Example: Configuring basic Layer 4 server load balancing
Network configuration
In Figure 7, physical servers Server A, Server B, and Server C provide FTP services, and are in descending order of hardware configuration.
Configure server load balancing on the device to distribute user requests among the servers based on their hardware performance, and use health monitoring to monitor the reachability of the servers.
Procedure
1. Configure the device:
¡ Assign IP addresses to the interfaces and configure routes. Make sure the network connections are available. (Details not shown.)
¡ Configure a server farm.
# Create the ICMP-type NQA template t1.
<Device> system-view
[Device] nqa template icmp t1
[Device-nqatplt-icmp-t1] quit
# Create the server farm sf, and specify the scheduling algorithm as weighted round robin and health monitoring method as t1.
[Device] server-farm sf
[Device-sfarm-sf] predictor round-robin
[Device-sfarm-sf] probe t1
[Device-sfarm-sf] quit
¡ Configure real servers.
# Create the real server rs1 with IPv4 address 192.168.1.1, port number 21, and weight 150, and add it to the server farm sf.
[Device] real-server rs1
[Device-rserver-rs1] ip address 192.168.1.1
[Device-rserver-rs1] port 21
[Device-rserver-rs1] weight 150
[Device-rserver-rs1] server-farm sf
[Device-rserver-rs1] quit
# Create the real server rs2 with IPv4 address 192.168.1.2, port number 21, and weight 120, and add it to the server farm sf.
[Device] real-server rs2
[Device-rserver-rs2] ip address 192.168.1.2
[Device-rserver-rs2] port 21
[Device-rserver-rs2] weight 120
[Device-rserver-rs2] server-farm sf
[Device-rserver-rs2] quit
# Create the real server rs3 with IPv4 address 192.168.1.3, port number 21, and weight 80, and add it to the server farm sf.
[Device] real-server rs3
[Device-rserver-rs3] ip address 192.168.1.3
[Device-rserver-rs3] port 21
[Device-rserver-rs3] weight 80
[Device-rserver-rs3] server-farm sf
[Device-rserver-rs3] quit
¡ Configure a virtual server.
# Create the TCP virtual server vs with VSIP 61.159.4.100, specify its default master server farm sf, and enable the virtual server.
[Device] virtual-server vs type tcp
[Device-vs-tcp-vs] virtual ip address 61.159.4.100
[Device-vs-tcp-vs] default server-farm sf
[Device-vs-tcp-vs] service enable
[Device-vs-tcp-vs] quit
2. Configure the physical servers:
# Specify the default gateway 192.168.1.100 for physical servers Server A, Server B, and Server C. (Details not shown.)
Verifying the configuration
# Display brief information about all real servers.
[Device] display real-server brief
Real server Address Port State VPN instance Server farm
rs1 192.168.1.1 21 Active sf
rs2 192.168.1.2 21 Active sf
rs3 192.168.1.3 21 Active sf
# Display detailed information about all server farms.
[Device] display server-farm
Server farm: sf
Description:
Predictor: Round robin
Proximity: Disabled
NAT: Enabled
SNAT pool:
Failed action: Keep
Active threshold: Disabled
Slow-online: Disabled
Selected server: Disabled
Probe information:
Probe success criteria: All
Probe method:
t1
Total real server: 3
Active real server: 3
Real server list:
Name State VPN instance Address Port Weight Priority
rs1 Active 192.168.1.1 21 150 4
rs2 Active 192.168.1.2 21 120 4
rs3 Active 192.168.1.3 21 80 4
# Display detailed information about all virtual servers.
[Device] display virtual-server
Virtual server: vs
Description:
Type: TCP
State: Active
VPN instance:
Virtual IPv4 address: 61.159.4.100/32
Virtual IPv6 address: --
Port: 0
Primary server farm: sf (in use)
Backup server farm:
Sticky:
LB policy:
Connection limit: --
Rate limit:
Connections: --
Bandwidth: --
Inbound bandwidth: --
Outbound bandwidth: --
Connection synchronization: Disabled
Sticky synchronization: Disabled
Bandwidth busy protection: Disabled
Interface bandwidth statistics: Disabled
Route advertisement: Disabled
Example: Configuring Layer 4 server load balancing hot backup
Network configuration
In Figure 8, physical servers Server A, Server B, and Server C provide FTP services, and are in descending order of hardware configuration.
Configure server load balancing on the devices to distribute user requests among the servers based on their hardware performance, and use health monitoring to monitor reachability of the servers.
For high availability purposes, implement hot backup for the devices. To ensure traffic continuity, enable synchronization for session extension information and sticky entries on the devices.
Procedure
1. Configure the devices:
¡ Configure IRF.
# Set up an IRF fabric for the two LB devices with member IDs 1 and 2. (Details not shown.)
For more information about IRF, see Virtual Technologies Configuration Guide.
¡ Configure Reth interfaces.
# Create Reth 1, and assign an IP address to it. Assign member interfaces to Reth 1, and set their priorities.
<Device> system-view
[Device] interface reth 1
[Device-Reth1] ip address 61.159.4.100 24
[Device-Reth1] member interface gigabitethernet 1/0 priority 20
[Device-Reth1] member interface gigabitethernet 2/0/1 priority 10
[Device-Reth1] quit
# Create Reth 2, and assign an IP address to it. Assign member interfaces to Reth 2, and set their priorities.
[Device] interface reth 2
[Device-Reth2] ip address 192.168.1.100 24
[Device-Reth2] member interface gigabitethernet 2/0 priority 20
[Device-Reth2] member interface gigabitethernet 2/0/2 priority 10
[Device-Reth2] quit
¡ Configure a redundancy group.
# Create track entries to monitor the link state of the Ethernet interfaces.
[Device] track 1 interface gigabitethernet 1/0
[Device] track 2 interface gigabitethernet 2/0
[Device] track 3 interface gigabitethernet 2/0/1
[Device] track 4 interface gigabitethernet 2/0/2
# Create the redundancy group bkp and assign Reth 1 and Reth 2 to the redundancy group.
[Device] redundancy group bkp
[Device-redundancy-group-bkp] member interface reth 1
[Device-redundancy-group-bkp] member interface reth 2
# Create node 1 (with priority 100) for the redundancy group bkp and bind node 1 to IRF member device 1. Associate track entries 1 and 2 with node 1.
[Device-redundancy-group-bkp] node 1
[Device-redundancy-group-bkp-node-1] priority 100
[Device-redundancy-group-bkp-node-1] bind slot 1
[Device-redundancy-group-bkp-node-1] track 1 interface gigabitethernet 1/0
[Device-redundancy-group-bkp-node-1] track 2 interface gigabitethernet 2/0
[Device-redundancy-group-bkp-node-1] quit
# Create node 2 (with priority 50) for the redundancy group bkp and bind node 2 to IRF member device 2. Associate track entries 3 and 4 with node 2.
[Device-redundancy-group-bkp] node 2
[Device-redundancy-group-bkp-node-2] priority 50
[Device-redundancy-group-bkp-node-2] bind slot 2
[Device-redundancy-group-bkp-node-2] track 3 interface gigabitethernet 2/0/1
[Device-redundancy-group-bkp-node-2] track 4 interface gigabitethernet 2/0/2
[Device-redundancy-group-bkp-node-2] quit
[Device-redundancy-group-bkp] quit
¡ Enable session synchronization for stateful failover.
[Device] session synchronization enable
¡ Configure a server farm.
# Create the ICMP-type NQA template t1.
[Device] nqa template icmp t1
[Device-nqatplt-icmp-t1] quit
# Create the server farm sf, and specify the scheduling algorithm as weighted round robin and health monitoring method as t1.
[Device] server-farm sf
[Device-sfarm-sf] predictor round-robin
[Device-sfarm-sf] probe t1
[Device-sfarm-sf] quit
¡ Configure real servers.
# Create the real server rs1 with IPv4 address 192.168.1.1, port number 21, and weight 150, and add it to the server farm sf.
[Device] real-server rs1
[Device-rserver-rs1] ip address 192.168.1.1
[Device-rserver-rs1] port 21
[Device-rserver-rs1] weight 150
[Device-rserver-rs1] server-farm sf
[Device-rserver-rs1] quit
# Create the real server rs2 with IPv4 address 192.168.1.2, port number 21, and weight 120, and add it to the server farm sf.
[Device] real-server rs2
[Device-rserver-rs2] ip address 192.168.1.2
[Device-rserver-rs2] port 21
[Device-rserver-rs2] weight 120
[Device-rserver-rs2] server-farm sf
[Device-rserver-rs2] quit
# Create the real server rs3 with IPv4 address 192.168.1.3, port number 21, and weight 80, and add it to the server farm sf.
[Device] real-server rs3
[Device-rserver-rs3] ip address 192.168.1.3
[Device-rserver-rs3] port 21
[Device-rserver-rs3] weight 80
[Device-rserver-rs3] server-farm sf
[Device-rserver-rs3] quit
¡ Configure a virtual server.
# Create the TCP virtual server vs with VSIP 61.159.4.100, and specify its default master server farm sf. Enable synchronization for session extension information and sticky entries, and enable the virtual server.
[Device] virtual-server vs type tcp
[Device-vs-tcp-vs] virtual ip address 61.159.4.100
[Device-vs-tcp-vs] default server-farm sf
[Device-vs-tcp-vs] connection-sync enable
[Device-vs-tcp-vs] sticky-sync enable
[Device-vs-tcp-vs] service enable
[Device-vs-tcp-vs] quit
2. Configure the physical servers:
# Specify the default gateway 192.168.1.100 for physical servers Server A, Server B, and Server C. (Details not shown.)
Verifying the configuration
# Display information about the redundancy group bkp.
[Device] display redundancy group bkp
Redundancy group bkp (ID 1):
Node ID Slot Priority Status Track weight
1 Slot1 100 Primary 255
2 Slot2 50 Secondary 255
Preempt delay time remained : 0 min
Preempt delay timer setting : 1 min
Remaining hold-down time : 0 sec
Hold-down timer setting : 1 sec
Manual switchover request : No
Member interfaces:
Reth1 Reth2
Member failover groups:
Node 1:
Track info:
Track Status Reduced weight Interface
1 Positive 255 GE1/0
2 Positive 255 GE2/0
Node 2:
Track info:
Track Status Reduced weight Interface
3 Negative 255 GE2/1/0
4 Negative 255 GE2/2/0
# Display brief information about all real servers.
[Device] display real-server brief
Real server Address Port State VPN instance Server farm
rs1 192.168.1.1 21 Active sf
rs2 192.168.1.2 21 Active sf
rs3 192.168.1.3 21 Active sf
# Display detailed information about all server farms.
[Device] display server-farm
Server farm: sf
Description:
Predictor: Round robin
Proximity: Disabled
NAT: Enabled
SNAT pool:
Failed action: Keep
Active threshold: Disabled
Slow-online: Disabled
Selected server: Disabled
Probe information:
Probe success criteria: All
Probe method:
t1
Total real server: 3
Active real server: 3
Real server list:
Name State VPN instance Address Port Weight Priority
rs1 Active 192.168.1.1 21 150 4
rs2 Active 192.168.1.2 21 120 4
rs3 Active 192.168.1.3 21 80 4
# Display detailed information about all virtual servers.
[Device] display virtual-server
Virtual server: vs
Description:
Type: TCP
State: Active
VPN instance:
Virtual IPv4 address: 61.159.4.100/32
Virtual IPv6 address: --
Port: 0
Primary server farm: sf (in use)
Backup server farm:
Sticky:
LB policy:
Connection limit: --
Rate limit:
Connections: --
Bandwidth: --
Inbound bandwidth: --
Outbound bandwidth: --
Connection synchronization: Enabled
Sticky synchronization: Enabled
Bandwidth busy protection: Disabled
Interface bandwidth statistics: Disabled
Route advertisement: Disabled
Example: Configuring basic Layer 7 server load balancing
Network configuration
In Figure 9, physical servers Server A, Server B, and Server C provide HTTP services, and are in descending order of hardware configuration.
Configure server load balancing on the device to distribute user requests among the servers based on their hardware performance, and use health monitoring to monitor reachability of the servers.
Procedure
1. Configure the device:
¡ Assign IP addresses to the interfaces and configure routes. Make sure the network connections are available. (Details not shown.)
¡ Configure a server farm.
# Create the HTTP-type NQA template t1.
<Device> system-view
[Device] nqa template http t1
[Device-nqatplt-http-t1] quit
# Create server farm sf, and specify the scheduling algorithm as weighted round robin and health monitoring method as t1.
[Device] server-farm sf
[Device-sfarm-sf] predictor round-robin
[Device-sfarm-sf] probe t1
[Device-sfarm-sf] quit
¡ Configure real servers.
# Create the real server rs1 with IPv4 address 192.168.1.1, port number 8080, and weight 150, and add it to the server farm sf.
[Device] real-server rs1
[Device-rserver-rs1] ip address 192.168.1.1
[Device-rserver-rs1] port 8080
[Device-rserver-rs1] weight 150
[Device-rserver-rs1] server-farm sf
[Device-rserver-rs1] quit
# Create the real server rs2 with IPv4 address 192.168.1.2, port number 8080, and weight 120, and add it to the server farm sf.
[Device] real-server rs2
[Device-rserver-rs2] ip address 192.168.1.2
[Device-rserver-rs2] port 8080
[Device-rserver-rs2] weight 120
[Device-rserver-rs2] server-farm sf
[Device-rserver-rs2] quit
# Create the real server rs3 with IPv4 address 192.168.1.3, port number 8080, and weight 80, and add it to the server farm sf.
[Device] real-server rs3
[Device-rserver-rs3] ip address 192.168.1.3
[Device-rserver-rs3] port 8080
[Device-rserver-rs3] weight 80
[Device-rserver-rs3] server-farm sf
[Device-rserver-rs3] quit
¡ Configure a virtual server.
# Create the HTTP virtual server vs with VSIP 61.159.4.100, specify its default master server farm sf, and enable the virtual server.
[Device] virtual-server vs type http
[Device-vs-http-vs] virtual ip address 61.159.4.100
[Device-vs-http-vs] default server-farm sf
[Device-vs-http-vs] service enable
[Device-vs-http-vs] quit
2. Configure the physical servers:
# Specify the default gateway 192.168.1.100 for physical servers Server A, Server B, and Server C. (Details not shown.)
Verifying the configuration
# Display brief information about all real servers.
[Device] display real-server brief
Real server Address Port State VPN instance Server farm
rs1 192.168.1.1 8080 Active sf
rs2 192.168.1.2 8080 Active sf
rs3 192.168.1.3 8080 Active sf
# Display detailed information about all server farms.
[Device] display server-farm
Server farm: sf
Description:
Predictor: Round robin
Proximity: Disabled
NAT: Enabled
SNAT pool:
Failed action: Keep
Active threshold: Disabled
Slow-online: Disabled
Selected server: Disabled
Probe information:
Probe success criteria: All
Probe method:
t1
Total real server: 3
Active real server: 3
Real server list:
Name State VPN instance Address Port Weight Priority
rs1 Active 192.168.1.1 8080 150 4
rs2 Active 192.168.1.2 8080 120 4
rs3 Active 192.168.1.3 8080 80 4
# Display detailed information about all virtual servers.
[Device] display virtual-server
Virtual server: vs
Description:
Type: HTTP
State: Active
VPN instance:
Virtual IPv4 address: 61.159.4.100/32
Virtual IPv6 address: --
Port: 80
Primary server farm: sf (in use)
Backup server farm:
Sticky:
LB policy:
HTTP parameter profile:
Connection limit: --
Rate limit:
Connections: --
Bandwidth: --
Inbound bandwidth: --
Outbound bandwidth: --
SSL server policy:
SSL client policy:
Redirect relocation:
Redirect return-code: 302
Sticky synchronization: Disabled
Bandwidth busy protection: Disabled
Interface bandwidth statistics: Disabled
Route advertisement: Disabled
Example: Configuring Layer 7 server load balancing SSL termination
Network configuration
In Figure 10, physical servers Server A, Server B, and Server C provide HTTP services, and are in descending order of hardware configuration.
Configure server load balancing on the device to distribute user requests among the servers based on their hardware performance, and use health monitoring to monitor reachability of the servers.
The device establishes an HTTPS connection encrypted by SSL with the host, and establishes HTTP connections with the servers. The device must terminate the SSL session with the host before establishing a TCP connection with a server.
Procedure
1. Configure the device:
¡ Configure IP addresses for interfaces.
<Device> system-view
[Device] interface gigabitethernet 1/0
[Device-GigabitEthernet1/0] ip address 61.159.4.100 24
[Device-GigabitEthernet1/0] quit
[Device] interface gigabitethernet 2/0
[Device-GigabitEthernet2/0] ip address 192.168.1.100 24
[Device-GigabitEthernet2/0] quit
¡ Configure a server farm.
# Create the SSL-type NQA template t1.
[Device] nqa template ssl t1
[Device-nqatplt-ssl-t1] quit
# Create the server farm sf, and specify the scheduling algorithm as weighted round robin and health monitoring method as t1.
[Device] server-farm sf
[Device-sfarm-sf] predictor round-robin
[Device-sfarm-sf] probe t1
[Device-sfarm-sf] quit
¡ Configure an SSL server policy:
# Configure PKI domain 1. (For more information about configuring a PKI domain, see Security Configuration Guide.)
[Device] pki domain 1
[Device-pki-domain-1] quit
# Create an SSL server policy named ssp.
[Device] ssl server-policy ssp
# Specify PKI domain 1 for SSL server policy ssp.
[Device-ssl-server-policy-ssp] pki-domain 1
# Enable client authentication.
[Device-ssl-server-policy-ssp] client-verify enable
[Device-ssl-server-policy-ssp] quit
# Apply SSL server policy ssp to control HTTPS access.
[Device] ip https ssl-server-policy ssp
# Enable the HTTPS service.
[Device] ip https enable
# Create a local user named usera, set the password to 123 in plaintext form, specify the service type as HTTPS, and specify the user role for the local user as network-admin.
[Device] local-user usera
[Device-luser-usera] password simple 123
[Device-luser-usera] service-type https
[Device-luser-usera] authorization-attribute user-role network-admin
[Device-luser-usera] quit
¡ Configure real servers.
# Create the real server rs1 with IPv4 address 192.168.1.1, port number 8080, and weight 150, and add it to the server farm sf.
[Device] real-server rs1
[Device-rserver-rs1] ip address 192.168.1.1
[Device-rserver-rs1] port 8080
[Device-rserver-rs1] weight 150
[Device-rserver-rs1] server-farm sf
[Device-rserver-rs1] quit
# Create the real server rs2 with IPv4 address 192.168.1.2, port number 8080, and weight 120, and add it to the server farm sf.
[Device] real-server rs2
[Device-rserver-rs2] ip address 192.168.1.2
[Device-rserver-rs2] port 8080
[Device-rserver-rs2] weight 120
[Device-rserver-rs2] server-farm sf
[Device-rserver-rs2] quit
# Create the real server rs3 with IPv4 address 192.168.1.3, port number 8080, and weight 80, and add it to the server farm sf.
[Device] real-server rs3
[Device-rserver-rs3] ip address 192.168.1.3
[Device-rserver-rs3] port 8080
[Device-rserver-rs3] weight 80
[Device-rserver-rs3] server-farm sf
[Device-rserver-rs3] quit
¡ Configure a virtual server.
# Create the HTTP virtual server vs with VSIP 61.159.4.100 and port number 443, and specify its default master server farm sf.
[Device] virtual-server vs type http
[Device-vs-http-vs] virtual ip address 61.159.4.100
[Device-vs-http-vs] port 443
[Device-vs-http-vs] default server-farm sf
# Specify the SSL server policy ssp for the virtual server.
[Device-vs-http-vs] ssl-server-policy ssp
# Enable the virtual server.
[Device-vs-http-vs] service enable
[Device-vs-http-vs] quit
2. Configure the physical servers:
# Specify the default gateway 192.168.1.100 for physical servers Server A, Server B, and Server C. (Details not shown.)
Verifying the configuration
# Display brief information about all real servers.
[Device] display real-server brief
Real server Address Port State VPN instance Server farm
rs1 192.168.1.1 8080 Active sf
rs2 192.168.1.2 8080 Active sf
rs3 192.168.1.3 8080 Active sf
# Display detailed information about all server farms.
[Device] display server-farm
Server farm: sf
Description:
Predictor: Round robin
Proximity: Disabled
NAT: Enabled
SNAT pool:
Failed action: Keep
Active threshold: Disabled
Slow-online: Disabled
Selected server: Disabled
Total real server: 3
Active real server: 3
Real server list:
Name State VPN instance Address Port Weight Priority
rs1 Active 192.168.1.1 8080 150 4
rs2 Active 192.168.1.2 8080 120 4
rs3 Active 192.168.1.3 8080 80 4
# Display detailed information about all virtual servers.
[Device] display virtual-server
Virtual server: vs
Description:
Type: HTTP
State: Active
VPN instance:
Virtual IPv4 address: 61.159.4.100/32
Virtual IPv6 address: --
Port: 443
Default server farm: sf (in use)
Backup server farm:
Sticky:
LB policy:
HTTP parameter profile:
Connection limit: --
Rate limit:
Connections: --
Bandwidth: --
Inbound bandwidth: --
Outbound bandwidth: --
SSL server policy: ssp
SSL client policy:
Redirect relocation:
Redirect return-code: 302
Connection synchronization: Disabled
Sticky synchronization: Disabled
Bandwidth busy protection: Disabled
Interface bandwidth statistics: Disabled
Route advertisement: Disabled
Configuring outbound link load balancing
About outbound link load balancing
Outbound link load balancing distributes traffic from the internal network to the external network among multiple links.
Typical network diagram
As shown in Figure 11, outbound link load balancing contains the following elements:
· LB device—Distributes outbound traffic among multiple links.
· Link—Physical links provided by ISPs.
· VSIP—Virtual service IP address of the cluster, which identifies the destination network for packets from the internal network.
· Server IP—IP address of a server.
Workflow
Figure 12 shows the outbound link load balancing workflow.
Figure 12 Outbound link load balancing workflow
The workflow for outbound link load balancing is as follows:
1. The LB device receives traffic from the internal server.
2. The LB device selects the optimal link based on the LB policy, sticky method, proximity algorithm, and scheduling algorithm (typically the bandwidth algorithm or maximum bandwidth algorithm) in turn.
3. The LB device forwards the traffic to the external server through the optimal link.
4. The LB device receives traffic from the external server.
5. The LB device forwards the traffic to the internal server.
Outbound link load balancing tasks at a glance
Relationship between configuration items
Figure 13 shows the relationship between the following configuration items:
· Link group—A collection of links that serve similar functions. A link group can be referenced by a virtual server or an LB action.
· Link—Physical links provided by ISPs.
· Virtual server—A virtual service provided by the LB device to determine whether to perform load balancing for packets received on the LB device. Only the packets that match a virtual server are load balanced.
· LB class—Classifies packets to implement load balancing based on packet type.
· LB action—Drops, forwards, or modifies packets.
· LB policy—Associates an LB class with an LB action. An LB policy can be referenced by a virtual server.
· Sticky group—Uses a sticky method to distribute similar sessions to the same link. A sticky group can be referenced by a virtual server or an LB action.
· Parameter profile—Defines advanced parameters to process packets. A parameter profile can be referenced by a virtual server.
Figure 13 Relationship between the main configuration items
Tasks at a glance
To configure outbound link load balancing, perform the following tasks:
1. Configuring a link group
2. Configuring a link
3. Configuring a virtual server
4. (Optional.) Configuring an LB policy
5. (Optional.) Configuring a sticky group
6. (Optional.) Configuring a parameter profile
7. (Optional.) Configuring ISP information
8. (Optional.) Setting the aging time for DNS cache entries
9. (Optional.) Configuring the ALG feature
10. (Optional.) Performing a load balancing test
11. (Optional.) Configuring SNMP notifications and logging for load balancing
¡ Enabling SNMP notifications
¡ Enabling load balancing logging
Configuring a link group
You can add links that serve similar functions to a link group to facilitate management.
Link group tasks at a glance
To configure a link group, perform the following tasks:
1. Creating a link group
2. (Optional.) Adding and configuring a link group member
3. Configuring a scheduling algorithm for a link group
4. Setting the availability criteria
5. (Optional.) Disabling NAT
6. (Optional.) Configuring SNAT
7. (Optional.) Enabling the slow online feature
8. (Optional.) Configuring health monitoring
9. (Optional.) Specifying a fault processing method
10. (Optional.) Configuring the proximity feature
Creating a link group
1. Enter system view.
system-view
2. Create a link group and enter link group view.
loadbalance link-group link-group-name
3. (Optional.) Configure a description for the link group.
description text
By default, no description is configured for a link group.
Adding and configuring a link group member
About adding and configuring a link group member
Perform this task to create a link group member or add an existing link as a link group member in link group view. You can also specify a link group for a link in link view to achieve the same purpose (see "Creating a link and specifying a link group").
After adding a link group member, you can configure the following parameters and features for the link in the link group:
· Weight.
· Priority.
· Connection limits.
· Health monitoring.
· Slow offline.
The member-based scheduling algorithm selects the best link based on these configurations.
Adding a link group member
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Create and add a link group member and enter link group member view.
link link-name
If the link already exists, the command adds the existing link as a link group member.
4. (Optional.) Configure a description for the link group member.
description text
By default, no description is configured for the link group member.
Setting the weight and priority of the link group member
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Enter link group member view.
link link-name
4. Set the weight of the link group member.
weight weight-value
The default setting is 100.
5. Set the priority of the link group member.
priority priority
The default setting is 4.
Setting the connection limits of the link group member
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Enter link group member view.
link link-name
4. Set the connection rate of the link group member.
rate-limit connection connection-number
The default setting is 0 (the connection rate is not limited).
5. Set the maximum number of connections allowed for the link group member.
connection-limit max max-number
The default setting is 0 (the maximum number of connections is not limited).
Configuring health monitoring for the link group member
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Enter link group member view.
link link-name
4. Specify a health monitoring method for the link group member.
probe template-name
By default, no health monitoring method is specified for the link group member.
You can specify an NQA template or load balancing template for health monitoring. For information about NQA templates, see NQA configuration in Network Management and Monitoring Configuration Guide.
5. Specify the health monitoring success criteria for the link group member.
success-criteria { all | at-least min-number }
By default, health monitoring succeeds only when all the specified health monitoring methods succeed.
Enabling the slow offline feature for the link group member
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Enter link group member view.
link link-name
4. Enable the slow offline feature for the link group member.
slow-shutdown enable
By default, the slow offline feature is disabled.
5. Shut down the link group member.
shutdown
By default, the link group member is activated.
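The following command sequence is a minimal sketch that adds the link link1 to the link group lg1 and sets its weight, priority, and health monitoring method (the names lg1, link1, and t1 are hypothetical, t1 is assumed to be an existing NQA or LB probe template, and view prompts are omitted):
system-view
loadbalance link-group lg1
link link1
weight 150
priority 4
probe t1
quit
quit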
Configuring a scheduling algorithm for a link group
About scheduling algorithms for link groups
Perform this task to specify a scheduling algorithm for a link group and the number of links to participate in scheduling. The LB device uses the specified scheduling algorithm to select the links that process user requests.
The device provides the following scheduling algorithms for a link group:
· Weighted least connection algorithm (link-based)—Always assigns user requests to the link with the smallest number of weighted active connections (the link's total number of active connections in all link groups divided by its weight). The weight value used in this algorithm is configured in link view.
· Weighted least connection algorithm (link group member-based)—Always assigns user requests to the link with the fewest number of weighted active connections (the total number of active connections in the specified link group divided by weight). The weight value used in this algorithm is configured in link group member view.
· Random algorithm—Randomly assigns user requests to links.
· Round robin algorithm—Assigns user requests to links based on the weights of links. A higher weight indicates more user requests will be assigned.
· Bandwidth algorithm—Distributes user requests to links according to the weights and remaining bandwidth of links.
· Maximum bandwidth algorithm—Distributes user requests always to an idle link that has the largest remaining bandwidth.
· Source IP address hash algorithm—Hashes the source IP address of user requests and distributes user requests to different links according to the hash values.
· Source IP address and port hash algorithm—Hashes the source IP address and port number of user requests and distributes user requests to different links according to the hash values.
· Destination IP address hash algorithm—Hashes the destination IP address of user requests and distributes user requests to different links according to the hash values.
Procedure
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Specify a scheduling algorithm for the link group.
¡ Specify a link-based scheduling algorithm.
predictor { least-connection | { bandwidth | max-bandwidth } [ inbound | outbound ] }
¡ Specify a link group member-based scheduling algorithm.
predictor hash address { destination | source | source-ip-port } [ mask mask-length ] [ prefix prefix-length ]
predictor { least-connection member | random | round-robin }
By default, the scheduling algorithm for a link group is weighted round robin.
4. Specify the number of links to participate in scheduling.
selected-link min min-number max max-number
By default, the links with the highest priority participate in scheduling.
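For example, the following commands specify the maximum outbound bandwidth algorithm for the link group lg1 and allow one to two links to participate in scheduling (the name lg1 and the values are hypothetical; view prompts are omitted):
system-view
loadbalance link-group lg1
predictor max-bandwidth outbound
selected-link min 1 max 2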
Setting the availability criteria
About setting the availability criteria
Perform this task to set the criteria (a lower percentage and an upper percentage) for determining whether a link group is available. The criteria enable traffic switchover between the master and backup link groups.
· When the ratio of available links to total links in the master link group drops below the lower percentage, traffic is switched to the backup link group.
· When the ratio of available links to total links in the master link group exceeds the upper percentage, traffic is switched back to the master link group.
Procedure
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Set the criteria to determine whether the link group is available.
activate lower lower-percentage upper upper-percentage
By default, when a minimum of one link is available, the link group is available.
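For example, with the following settings, traffic is switched to the backup link group when fewer than 30 percent of the links in the link group lg1 are available, and is switched back when more than 70 percent are available (hypothetical values; view prompts omitted):
system-view
loadbalance link-group lg1
activate lower 30 upper 70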
Disabling NAT
Restrictions and guidelines
Typically, outbound link load balancing networking requires disabling NAT for a link group.
Procedure
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Disable NAT for the link group.
transparent enable
By default, NAT is enabled for a link group.
Configuring SNAT
About SNAT
After a link group references the SNAT address pool, the LB device replaces the source address of the packets it receives with an SNAT address before forwarding the packets.
Restrictions and guidelines
An SNAT address pool can have a maximum of 256 IPv4 addresses and 65536 IPv6 addresses. No overlapping IPv4 or IPv6 addresses are allowed in different SNAT address pools.
As a best practice, do not use SNAT because its application scope is limited for outbound link load balancing.
Procedure
1. Enter system view.
system-view
2. Create an SNAT address pool and enter SNAT address pool view.
loadbalance snat-pool pool-name
3. (Optional.) Configure a description for the SNAT address pool.
description text
By default, no description is configured for an SNAT address pool.
4. Specify an address range for the SNAT address pool.
IPv4:
ip range start start-ipv4-address end end-ipv4-address
IPv6:
ipv6 range start start-ipv6-address end end-ipv6-address
By default, an SNAT address pool does not contain address ranges.
5. Return to system view.
quit
6. Enter link group view.
loadbalance link-group link-group-name
7. Specify the SNAT address pool to be referenced by the link group.
snat-pool pool-name
By default, no SNAT address pool is referenced by a link group.
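The following sketch creates the SNAT address pool pool1 with a small IPv4 address range and references it from the link group lg1 (the names and addresses are hypothetical; view prompts are omitted):
system-view
loadbalance snat-pool pool1
ip range start 10.1.1.1 end 10.1.1.10
quit
loadbalance link-group lg1
snat-pool pool1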
Enabling the slow online feature
About the slow online feature
Links newly added to a link group might be unable to immediately process large numbers of services assigned by the LB device. To resolve this issue, enable the slow online feature for the link group. The feature uses the standby timer and ramp-up timer. When the links are brought online, the LB device does not assign any services to the links until the standby timer expires.
When the standby timer expires, the ramp-up timer starts. During the ramp-up time, the LB device increases the service amount according to the processing capability of the links, until the ramp-up timer expires.
Procedure
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Enable the slow online feature for the link group.
slow-online [ standby-time standby-time ramp-up-time ramp-up-time ]
By default, the slow online feature is disabled for a link group.
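For example, the following commands enable the slow online feature for the link group lg1 with hypothetical standby and ramp-up times (view prompts omitted):
system-view
loadbalance link-group lg1
slow-online standby-time 60 ramp-up-time 300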
Configuring health monitoring
About configuring health monitoring
Perform this task to enable health monitoring to detect the availability of links.
Restrictions and guidelines
The health monitoring configuration in link view takes precedence over the configuration in link group view.
You can specify an NQA template or load balancing template for health monitoring. For information about NQA templates, see NQA configuration in Network Management and Monitoring Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Specify a health monitoring method for the link group.
probe template-name
By default, no health monitoring method is specified for a link group.
4. Specify the health monitoring success criteria for the link group.
success-criteria { all | at-least min-number }
By default, health monitoring succeeds only when all the specified health monitoring methods succeed.
Specifying a fault processing method
About fault processing methods
Perform this task to specify one of the following fault processing methods for a link group:
· Keep—Does not actively terminate the connection with the failed link. Keeping or terminating the connection depends on the timeout mechanism of the protocol.
· Reschedule—Redirects the connection to another available link in the link group.
· Reset—Terminates the connection with the failed link by sending RST packets (for TCP packets) or ICMP unreachable packets (for other types of packets).
Procedure
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Specify a fault processing method for the link group.
fail-action { keep | reschedule | reset }
By default, the fault processing method is keep. All available connections are kept.
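For example, the following commands configure the link group lg1 to reschedule connections to another available link when a link fails (the name lg1 is hypothetical; view prompts are omitted):
system-view
loadbalance link-group lg1
fail-action reschedule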
Configuring the proximity feature
About the proximity feature
The proximity feature performs link detection to select the optimal link to a destination. If no proximity information for a destination is available, the load balancing module selects a link based on the scheduling algorithm. It then performs proximity detection to generate proximity entries for forwarding subsequent traffic.
You can specify an NQA template or load-balancing probe template to perform link detection. The device generates proximity entries according to the detection results and proximity parameter settings. For information about NQA templates, see NQA configuration in Network Management and Monitoring Configuration Guide.
Restrictions and guidelines
To configure the proximity feature, first configure proximity parameters in proximity view, and then enable the proximity feature in link group view.
Configuring an LB probe template
1. Enter system view.
system-view
2. Create an LB probe template and enter LB probe template view.
loadbalance probe-template icmp template-name
3. Set the probe interval.
frequency interval
The default setting is 300 seconds.
4. Set the timeout time for probe responses.
timeout timeout-value
The default setting is 3 seconds.
Configuring proximity parameters
1. Enter system view.
system-view
2. Enter proximity view.
loadbalance proximity [ vpn-instance vpn-instance-name ]
3. Specify the proximity probe method for packets.
match [ match-id ] { tcp } { lb-probe lb-template | probe nqa-template }
By default, no proximity probe method is specified.
4. Specify the default proximity probe method.
match default { lb-probe lb-template | probe nqa-template }
By default, the default proximity probe method is not specified.
5. Set the mask length for IPv4 proximity entries.
ip mask { mask-length | mask }
By default, the mask length for IPv4 proximity entries is 24.
6. Set the prefix length for IPv6 proximity entries.
ipv6 prefix prefix-length
By default, the prefix length for IPv6 proximity entries is 96.
7. Set the network delay weight for proximity calculation.
rtt weight rtt-weight
By default, the network delay weight for proximity calculation is 100.
8. Set the TTL weight for proximity calculation.
ttl weight ttl-weight
By default, the TTL weight for proximity calculation is 100.
9. Set the bandwidth weight for proximity calculation.
bandwidth { inbound | outbound } weight bandwidth-weight
By default, the inbound or outbound bandwidth weight for proximity calculation is 100.
10. Set the cost weight for proximity calculation.
cost weight cost-weight
By default, the cost weight for proximity calculation is 100.
11. Set the aging timer for proximity entries.
timeout timeout-value
By default, the aging timer for proximity entries is 60 seconds.
12. Set the maximum number of proximity entries.
max-number number
By default, the number of proximity entries is not limited.
Enabling the proximity feature
1. Enter system view.
system-view
2. Enter link group view.
loadbalance link-group link-group-name
3. Enable the proximity feature.
proximity enable
By default, the proximity feature is disabled for a link group.
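The following sketch ties the pieces together: it creates an ICMP-type LB probe template, uses it as the default proximity probe method, and then enables the proximity feature for the link group lg1 (the names lbt1 and lg1 and the values are hypothetical; view prompts are omitted):
system-view
loadbalance probe-template icmp lbt1
frequency 60
timeout 5
quit
loadbalance proximity
match default lb-probe lbt1
timeout 300
quit
loadbalance link-group lg1
proximity enable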
Configuring a link
A link is a physical link provided by an ISP. A link can belong to multiple link groups. A link group can have multiple links.
Link tasks at a glance
To configure a link, perform the following tasks:
1. Creating a link and specifying a link group
2. Specifying a next hop IP address or an outgoing interface
Choose one of the following tasks:
¡ Specifying an outbound next hop for a link
¡ Specifying an outgoing interface for a link
3. Setting a weight and priority
4. (Optional.) Configuring the bandwidth and connection parameters
5. (Optional.) Configuring health monitoring
6. (Optional.) Enabling the slow offline feature
7. (Optional.) Setting the link cost for proximity calculation
8. (Optional.) Setting the bandwidth ratio and maximum expected bandwidth
9. (Optional.) Disabling VPN instance inheritance for a link
Creating a link and specifying a link group
1. Enter system view.
system-view
2. Create a link and enter link view.
loadbalance link link-name
By default, no links exist.
3. (Optional.) Configure a description for the link.
description text
By default, no description is configured for a link.
4. Specify a link group for the link.
link-group link-group-name
By default, a link does not belong to any link group.
Specifying an outbound next hop for a link
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Specify an outbound next hop for the link.
IPv4:
router ip ipv4-address
IPv6:
router ipv6 ipv6-address
By default, a link does not have an outbound next hop.
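For example, the following commands create the link link1, add it to the link group lg1, and specify an IPv4 outbound next hop (the names and the IP address are hypothetical; view prompts are omitted):
system-view
loadbalance link link1
link-group lg1
router ip 60.1.1.1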
Specifying an outgoing interface for a link
About specifying an outgoing interface for a link
In scenarios where IP addresses are obtained through PPPoE, an LB device can dynamically obtain the outbound next hop IP address through the specified outgoing interface.
Procedure
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Specify an outgoing interface for the link.
router interface interface-type interface-number
By default, no outgoing interface is specified for a link.
Setting a weight and priority
About setting a weight and priority
Perform this task to set the weight used by the weighted round robin and weighted least connection algorithms for a link, and to set the scheduling priority of the link in the link group.
Procedure
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Set a weight for the link.
weight weight-value
By default, the weight of a link is 100.
4. Set a priority for the link.
priority priority
By default, the priority of a link is 4.
Configuring the bandwidth and connection parameters
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Set the maximum bandwidth for the link.
rate-limit bandwidth [ inbound | outbound ] bandwidth-value
By default, the maximum bandwidth, inbound bandwidth, and outbound bandwidth are 0 KBps for a link. The bandwidths are not limited.
4. Set the maximum number of connections for the link.
connection-limit max max-number
By default, the maximum number of connections is 0 for a link. The number is not limited.
5. Set the maximum number of connections per second for the link.
rate-limit connection connection-number
By default, the maximum number of connections per second is 0 for a link. The number is not limited.
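For example, the following commands limit the bandwidth and connections of the link link1 with hypothetical values (view prompts omitted):
system-view
loadbalance link link1
rate-limit bandwidth outbound 100000
connection-limit max 500000
rate-limit connection 10000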
Configuring health monitoring
About configuring health monitoring
Perform this task to enable health monitoring to detect the availability of a link.
Restrictions and guidelines
The health monitoring configuration in link view takes precedence over the configuration in link group view.
Procedure
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Specify a health monitoring method for the link.
probe template-name
By default, no health monitoring method is specified for a link.
4. Specify the health monitoring success criteria for the link.
success-criteria { all | at-least min-number }
By default, health monitoring succeeds only when all the specified health monitoring methods succeed.
Enabling the slow offline feature
About the slow offline feature
The shutdown command immediately terminates the existing connections on a link. The slow offline feature stops assigning new connections to the link and lets the existing connections age out.
Restrictions and guidelines
To enable the slow offline feature for a link, you must execute the slow-shutdown enable command and then the shutdown command. If you execute the shutdown command and then the slow-shutdown enable command, the slow offline feature does not take effect and the link is shut down.
Procedure
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Enable the slow offline feature for the link.
slow-shutdown enable
By default, the slow offline feature is disabled.
4. Shut down the link.
shutdown
By default, the link is activated.
Setting the link cost for proximity calculation
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Set the link cost for proximity calculation.
cost cost-value
By default, the link cost for proximity calculation is 0.
Setting the bandwidth ratio and maximum expected bandwidth
About setting the bandwidth ratio and maximum expected bandwidth
When the traffic exceeds the maximum expected bandwidth multiplied by the bandwidth ratio of a link, new traffic (traffic that does not match any sticky entries) is not distributed to the link. When the traffic drops below the maximum expected bandwidth multiplied by the bandwidth recovery ratio of the link, the link participates in scheduling again.
In addition to being used for link protection, the maximum expected bandwidth is used for remaining bandwidth calculation in the bandwidth algorithm, maximum bandwidth algorithm, and dynamic proximity algorithm.
Procedure
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Set the bandwidth ratio.
bandwidth [ inbound | outbound ] busy-rate busy-rate-number [ recovery recovery-rate-number ]
By default, the total bandwidth ratio is 70.
4. Set the maximum expected bandwidth.
max-bandwidth [ inbound | outbound ] bandwidth-value
By default, the maximum expected bandwidth, maximum uplink expected bandwidth, and maximum downlink expected bandwidth are 0 KBps. The bandwidths are not limited.
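For example, the following commands set a hypothetical maximum expected outbound bandwidth for the link link1 and stop distributing new traffic to the link when its outbound traffic exceeds 80 percent of that bandwidth, resuming at 60 percent (view prompts omitted):
system-view
loadbalance link link1
max-bandwidth outbound 100000
bandwidth outbound busy-rate 80 recovery 60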
Disabling VPN instance inheritance for a link
About VPN instance inheritance
When VPN instance inheritance is enabled, a link without a VPN instance specified inherits the VPN instance of the virtual server. When VPN instance inheritance is disabled, a link without a VPN instance specified belongs to the public network.
Procedure
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Disable VPN instance inheritance for the link.
inherit vpn-instance disable
By default, VPN instance inheritance is enabled for a link.
Configuring a virtual server
A virtual server is a virtual service provided by the LB device to determine whether to perform load balancing for packets received on the LB device. Only the packets that match a virtual server are load balanced.
Restrictions and guidelines
Outbound link load balancing supports only the link-IP virtual server.
Virtual server tasks at a glance
To configure a virtual server, perform the following tasks:
1. Creating a virtual server
2. Specifying the VSIP and port number
3. (Optional.) Specifying a VPN instance
4. Configuring a packet processing policy
Choose the following tasks as needed:
¡ Specifying link groups
¡ Specifying an LB policy
5. (Optional.) Specifying a parameter profile
6. (Optional.) Configuring the bandwidth and connection parameters
7. (Optional.) Enabling the link protection feature
8. (Optional.) Enabling bandwidth statistics collection by interfaces
9. (Optional.) Configuring hot backup
10. Enabling a virtual server
Creating a virtual server
1. Enter system view.
system-view
2. Create a link-IP virtual server and enter virtual server view.
virtual-server virtual-server-name type link-ip
3. (Optional.) Configure a description for the virtual server.
description text
By default, no description is configured for the virtual server.
Specifying the VSIP and port number
1. Enter system view.
system-view
2. Enter link-IP virtual server view.
virtual-server virtual-server-name
3. Specify the VSIP for the virtual server.
IPv4:
virtual ip address ipv4-address [ mask-length | mask ]
IPv6:
virtual ipv6 address ipv6-address [ prefix-length ]
By default, no IPv4 or IPv6 address is specified for a virtual server.
4. Specify the port number for the virtual server.
port port-number
By default, the port number is 0 (meaning any port number) for a link-IP virtual server.
Specifying a VPN instance
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Specify a VPN instance for the virtual server.
vpn-instance vpn-instance-name
By default, a virtual server belongs to the public network.
Specifying link groups
About specifying link groups
When the primary link group is available (contains available links), the virtual server forwards packets through the primary link group. When the primary link group is not available, the virtual server forwards packets through the backup link group.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Specify link groups.
default link-group link-group-name [ backup backup-link-group-name ] [ sticky sticky-name ]
By default, no link group is specified for a virtual server.
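As a minimal sketch, the following commands create the link-IP virtual server vs-out with a wildcard VSIP (0.0.0.0 with mask length 0, assumed here to match all outbound destinations), specify the link group lg1 as its primary link group, and enable the virtual server (the names are hypothetical; view prompts are omitted):
system-view
virtual-server vs-out type link-ip
virtual ip address 0.0.0.0 0
default link-group lg1
service enable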
Specifying an LB policy
About specifying an LB policy
By referencing an LB policy, the virtual server load balances matching packets based on the packet contents.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Specify an LB policy for the virtual server.
lb-policy policy-name
By default, the virtual server does not reference any LB policies.
A virtual server can reference only an LB policy whose type matches the virtual server type. For example, a virtual server of the link-IP type can reference only an LB policy of the link-generic type.
Specifying a parameter profile
About specifying a parameter profile
You can configure advanced parameters through a parameter profile. The virtual server references the parameter profile to analyze, process, and optimize service traffic.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Specify a parameter profile for the virtual server.
parameter ip profile-name
By default, the virtual server does not reference any parameter profiles.
Configuring the bandwidth and connection parameters
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Set the maximum bandwidth for the virtual server.
rate-limit bandwidth [ inbound | outbound ] bandwidth-value
By default, the maximum bandwidth, inbound bandwidth, and outbound bandwidth for the virtual server are 0 KBps. The bandwidths are not limited.
4. Set the maximum number of connections for the virtual server.
connection-limit max max-number
By default, the maximum number of connections of the virtual server is 0. The number is not limited.
5. Set the maximum number of connections per second for the virtual server.
rate-limit connection connection-number
By default, the maximum number of connections per second for the virtual server is 0. The number is not limited.
Enabling the link protection feature
About the link protection feature
Perform this task to prevent traffic from overwhelming a busy link. When the traffic on a link exceeds its maximum expected bandwidth multiplied by its bandwidth ratio, the LB device distributes new traffic (traffic that does not match any sticky entries) to other links.
Restrictions and guidelines
This feature takes effect only when bandwidth statistics collection by interfaces is enabled.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Enable the link protection feature.
bandwidth busy-protection enable
By default, the link protection feature is disabled.
Enabling bandwidth statistics collection by interfaces
About enabling bandwidth statistics collection by interfaces
By default, the load balancing module automatically collects link bandwidth statistics. Perform this task to enable interfaces to collect bandwidth statistics.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Enable bandwidth statistics collection by interfaces.
bandwidth interface statistics enable
By default, bandwidth statistics collection by interfaces is disabled.
Configuring hot backup
About hot backup
To implement hot backup for two LB devices, you must enable synchronization for session extension information and sticky entries to avoid service interruption.
Restrictions and guidelines
To ensure successful sticky entry synchronization, enable sticky entry synchronization on both LB devices before specifying a sticky group. You can specify a sticky group by using the sticky sticky-name option when specifying link groups.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Enable session extension information synchronization.
connection-sync enable
By default, session extension information synchronization is disabled.
4. Enable sticky entry synchronization.
sticky-sync enable
By default, sticky entry synchronization is disabled.
Enabling a virtual server
About enabling a virtual server
After you configure a virtual server, you must enable the virtual server for it to work.
Procedure
1. Enter system view.
system-view
2. Enter virtual server view.
virtual-server virtual-server-name
3. Enable the virtual server.
service enable
By default, the virtual server is disabled.
Configuring an LB class
An LB class classifies packets by comparing packets against specific rules. Matching packets are further processed by LB actions. You can create a maximum of 65535 rules for an LB class.
LB class tasks at a glance
To configure an LB class, perform the following tasks:
1. Creating an LB class
2. Creating a match rule
Choose the following tasks as needed:
¡ Creating a match rule that references an LB class
¡ Creating a source IP address match rule
¡ Creating a destination IP address match rule
¡ Creating an input interface match rule
¡ Creating a user group match rule
¡ Creating a domain name match rule
¡ Creating an application group match rule
Creating an LB class
1. Enter system view.
system-view
2. Create a link-generic LB class, and enter LB class view.
loadbalance class class-name type link-generic [ match-all | match-any ]
When you create an LB class, you must specify the class type. You can enter the view of an existing LB class without specifying the class type. If you do specify the class type, it must match the type specified when the class was created.
3. (Optional.) Configure a description for the LB class.
description text
By default, no description is configured for the LB class.
Creating a match rule that references an LB class
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a match rule that references an LB class.
match [ match-id ] class class-name
Creating a source IP address match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a source IP address match rule.
match [ match-id ] source { ip address ipv4-address [ mask-length | mask ] | ipv6 address ipv6-address [ prefix-length ] }
Creating a destination IP address match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a destination IP address match rule.
match [ match-id ] destination { ip address ipv4-address [ mask-length | mask ] | ipv6 address ipv6-address [ prefix-length ] }
Creating an ACL match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create an ACL match rule.
match [ match-id ] acl [ ipv6 ] { acl-number | name acl-name }
Creating an input interface match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create an input interface match rule.
match [ match-id ] interface interface-type interface-number
Creating a user match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a user match rule.
match [ match-id ] [ identity-domain domain-name ] user user-name
Creating a user group match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a user group match rule.
match [ match-id ] [ identity-domain domain-name ] user-group user-group-name
Creating a domain name match rule
About domain name match rules
The LB device stores mappings between domain names and IP addresses in the DNS cache. If the destination IP address of an incoming packet matches an IP address in the DNS cache, the LB device queries the domain name for the IP address. If the queried domain name matches the domain name configured in a match rule, the LB device takes the LB action on the packet.
Procedure
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a domain name match rule.
match [ match-id ] destination domain-name domain-name
By default, an LB class does not have any match rules.
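For example, the following sketch (hypothetical class name c_news and an illustrative domain name) creates a match-any link-generic LB class and adds a domain name match rule that matches packets destined for www.example.com:
system-view
loadbalance class c_news type link-generic match-any
match 1 destination domain-name www.example.com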
Creating an ISP match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create an ISP match rule.
match [ match-id ] isp isp-name
Creating an application group match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create an application group match rule.
match [ match-id ] app-group group-name
Configuring an LB action
About LB actions
LB actions include the following modes:
· Forwarding mode—Determines whether and how to forward packets. If no forwarding action is specified, packets are dropped.
· Modification mode—Modifies packets. To prevent the LB device from dropping the modified packets, the modification action must be used together with a forwarding action.
If you create an LB action without specifying any of the previous action modes, packets are dropped.
Restrictions and guidelines
The "Configuring the forwarding mode" and "Specifying link groups" tasks are mutually exclusive. Configuring one task automatically cancels the other task that you have configured.
LB action tasks at a glance
To configure an LB action, perform the following tasks:
1. Creating an LB action
2. (Optional.) Configuring a forwarding LB action
¡ Configuring the forwarding mode
¡ (Optional.) Matching the next rule upon failure to find a link
¡ (Optional.) Matching the next rule when all links are busy
3. (Optional.) Configuring a modification LB action
¡ Configuring the ToS field in IP packets sent to the server
Creating an LB action
1. Enter system view.
system-view
2. Create a link-generic LB action and enter LB action view.
loadbalance action action-name type link-generic
When you create an LB action, you must specify the action type. You can enter the view of an existing LB action without specifying the action type. If you do specify the action type, it must match the type specified when the action was created.
3. (Optional.) Configure a description for the LB action.
description text
By default, no description is configured for the LB action.
Configuring a forwarding LB action
About forwarding LB actions
The following forwarding LB action types are available:
· Forward—Forwards matching packets.
· Specify link groups—When the primary link group is available (contains available links), the primary link group is used to guide packet forwarding. When the primary link group is not available, the backup link group is used to guide packet forwarding.
· Match the next rule upon failure to find a link—If the device fails to find a link according to the LB action, it matches the packet with the next rule in the LB policy.
· Match the next rule when all links are busy.
Configuring the forwarding mode
1. Enter system view.
system-view
2. Enter LB action view.
loadbalance action action-name
3. Configure the forwarding mode.
forward all
By default, the forwarding mode is to discard packets.
Specifying link groups
1. Enter system view.
system-view
2. Enter LB action view.
loadbalance action action-name
3. Specify link groups.
link-group link-group-name [ backup backup-link-group-name ] [ sticky sticky-name ]
By default, no link group is specified.
Matching the next rule upon failure to find a link
1. Enter system view.
system-view
2. Enter LB action view.
loadbalance action action-name
3. Match the next rule upon failure to find a link.
fallback-action continue
By default, the next rule is not matched when no links are available for the current LB action.
Matching the next rule when all links are busy
1. Enter system view.
system-view
2. Enter LB action view.
loadbalance action action-name
3. Match the next rule when all links are busy.
busy-action continue
By default, the device assigns packets to links regardless of whether they are busy.
Configuring the ToS field in IP packets sent to the server
1. Enter system view.
system-view
2. Enter LB action view.
loadbalance action action-name
3. Configure the ToS field in IP packets sent to the server.
set ip tos tos-number
By default, the ToS field in IP packets sent to the server is not changed.
Configuring an LB policy
About LB policies
An LB policy associates an LB class with an LB action to guide packet forwarding. In an LB policy, you can configure an LB action for packets matching the specified LB class, and configure the default action for packets matching no LB class.
You can specify multiple LB classes for an LB policy. Packets match the LB classes in the order the LB classes are configured. If an LB class is matched, the specified LB action is performed. If no LB class is matched, the default LB action is performed.
LB policy tasks at a glance
To configure an LB policy, perform the following tasks:
1. Creating an LB policy
2. Specifying an LB action
3. Specifying the default LB action
Creating an LB policy
1. Enter system view.
system-view
2. Create a link-generic LB policy, and enter LB policy view.
loadbalance policy policy-name type link-generic
When you create an LB policy, you must specify the policy type. You can enter the view of an existing LB policy without specifying the policy type. If you do specify the policy type, it must match the type specified when the policy was created.
3. (Optional.) Configure a description for the LB policy.
description text
By default, no description is configured for an LB policy.
Specifying an LB action
Restrictions and guidelines
A link-generic LB policy can reference only link-generic LB classes and link-generic LB actions.
Procedure
1. Enter system view.
system-view
2. Enter LB policy view.
loadbalance policy policy-name
3. Specify an LB action for the specified LB class.
class class-name [ insert-before before-class-name ] action action-name
By default, no LB action is specified for any LB classes.
Repeat this step to specify LB actions for different LB classes.
Specifying the default LB action
Restrictions and guidelines
A link-generic LB policy can only reference link-generic LB actions.
Procedure
1. Enter system view.
system-view
2. Enter LB policy view.
loadbalance policy policy-name
3. Specify the default LB action.
default-class action action-name
By default, no default LB action is specified.
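For example, the following sketch (hypothetical names, assuming LB class c_news and link group lg1 already exist) forwards traffic that matches c_news through link group lg1 and simply forwards all other traffic:
system-view
loadbalance action a_news type link-generic
link-group lg1
quit
loadbalance action a_default type link-generic
forward all
quit
loadbalance policy p1 type link-generic
class c_news action a_news
default-class action a_default
The policy p1 can then be applied to a virtual server with the lb-policy command.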
Configuring a sticky group
A sticky group uses a sticky method to distribute similar sessions to the same link according to sticky entries. The sticky method applies to the first packet of a session. Other packets of the session are distributed to the same link.
Sticky group tasks at a glance
To configure a sticky group, perform the following tasks:
1. Creating a sticky group
2. Configuring the IP sticky method
3. (Optional.) Configuring the timeout time for sticky entries
4. (Optional.) Ignoring the limits for sessions that match sticky entries
Creating a sticky group
1. Enter system view.
system-view
2. Create an address- and port-type sticky group and enter sticky group view.
sticky-group group-name type address-port
When you create a sticky group, you must specify the group type. You can enter the view of an existing sticky group without specifying the group type. If you do specify the group type, it must match the type specified when the group was created.
3. (Optional.) Configure a description for the sticky group.
description text
By default, no description is configured for the sticky group.
Configuring the IP sticky method
1. Enter system view.
system-view
2. Enter sticky group view.
sticky-group group-name
3. Configure the IP sticky method.
IPv4:
ip [ port ] { both | destination | source } [ mask mask-length ]
IPv6:
ipv6 [ port ] { both | destination | source } [ prefix prefix-length ]
By default, no IP sticky method is configured.
Configuring the timeout time for sticky entries
1. Enter system view.
system-view
2. Enter sticky group view.
sticky-group group-name
3. Configure the timeout time for sticky entries.
timeout timeout-value
By default, the timeout time for sticky entries is 86400 seconds for sticky groups of the HTTP cookie type and 60 seconds for sticky groups of other types.
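For example, the following sketch (hypothetical group name sg1) creates an address- and port-type sticky group, configures source IP stickiness with a 24-bit mask, and sets the sticky entry timeout to 300 seconds. The group can then be referenced through the sticky sg1 option when specifying link groups:
system-view
sticky-group sg1 type address-port
ip source mask 24
timeout 300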
Ignoring the limits for sessions that match sticky entries
About ignoring the limits for sessions that match sticky entries
Perform this task to ignore the following limits for sessions that match sticky entries:
· Bandwidth and connection parameters on links.
· LB connection limit policies on virtual servers.
Procedure
1. Enter system view.
system-view
2. Enter sticky group view.
sticky-group group-name
3. Ignore the limits for sessions that match sticky entries.
override-limit enable
By default, the session limits apply to sessions that match sticky entries.
Configuring a parameter profile
About configuring a parameter profile
You can configure advanced parameters through a parameter profile. The virtual server references the parameter profile to analyze, process, and optimize service traffic.
Creating a parameter profile
1. Enter system view.
system-view
2. Create an IP-type parameter profile and enter parameter profile view.
parameter-profile profile-name type ip
By default, no parameter profiles exist.
When you create a parameter profile, you must specify the profile type. You can enter the view of an existing parameter profile without specifying the profile type. If you do specify the profile type, it must match the type specified when the profile was created.
3. (Optional.) Configure a description for the parameter profile.
description text
By default, no description is configured for the parameter profile.
Configuring the ToS field in IP packets sent to the client
1. Enter system view.
system-view
2. Enter IP parameter profile view.
parameter-profile profile-name
3. Configure the ToS field in IP packets sent to the client.
set ip tos tos-number
By default, the ToS field in IP packets sent to the client is not changed.
Configuring ISP information
About configuring ISP information
Perform this task to configure IP address information for an ISP. The IP address information can be used by an ISP match rule. When the destination IP address of packets matches the ISP match rule of an LB class, the LB device takes the action associated with the class. The device supports the following methods to configure IP address information:
· Manual configuration—The administrator manually specifies IP address information.
· ISP auto update—With ISP auto update enabled, the device regularly queries IP address information from the whois server according to the whois maintainer object of the ISP.
· ISP file import—The administrator manually imports an ISP file in .tp format.
Restrictions and guidelines
You can configure ISP information manually, by auto update, by importing an ISP file, or by a combination of these methods.
Configuring ISP information manually
1. Enter system view.
system-view
2. Create an ISP and enter ISP view.
loadbalance isp name isp-name
3. Specify the IP address for the ISP.
IPv4:
ip address ipv4-address { mask-length | mask }
IPv6:
ipv6 address ipv6-address prefix-length
By default, an ISP does not contain IPv4 or IPv6 addresses.
An ISP does not allow overlapping network segments.
4. (Optional.) Configure a description for the ISP.
description text
By default, no description is configured for the ISP.
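For example, the following sketch (hypothetical ISP name and an illustrative address range) creates ISP isp1 and manually adds an IPv4 network segment to it:
system-view
loadbalance isp name isp1
ip address 101.1.0.0 16
description ISP 1 address space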
Configuring ISP auto update
1. Enter system view.
system-view
2. Create an ISP and enter ISP view.
loadbalance isp name isp-name
3. Specify a whois maintainer object for the ISP.
whois-mntner mntner-name
By default, no whois maintainer object is specified.
You can specify a maximum of 10 whois maintainer objects for an ISP.
4. Return to system view.
quit
5. Enable ISP auto update.
loadbalance isp auto-update enable
By default, ISP auto update is disabled.
6. Configure the ISP auto update frequency.
loadbalance isp auto-update frequency { per-day | per-week | per-month }
By default, the ISP auto update is performed once per week.
7. Specify the whois server to be queried for ISP auto update.
loadbalance isp auto-update whois-server { domain domain-name | ip ip-address }
By default, no whois server is specified for ISP auto update.
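For example, the following sketch (hypothetical maintainer object name and whois server address) configures auto update for ISP isp1 once per day from the whois server at 101.2.2.2:
system-view
loadbalance isp name isp1
whois-mntner MAINT-AS-ISP1
quit
loadbalance isp auto-update enable
loadbalance isp auto-update frequency per-day
loadbalance isp auto-update whois-server ip 101.2.2.2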
Importing an ISP file
1. Enter system view.
system-view
2. Import an ISP file.
loadbalance isp file isp-file-name
Setting the aging time for DNS cache entries
About DNS cache entries
A DNS cache entry records the mapping between a domain name and the IP address of the outbound next hop.
Procedure
1. Enter system view.
system-view
2. Set the aging time for DNS cache entries.
loadbalance dns-cache aging-time aging-time
By default, the aging time for DNS cache entries is 60 minutes.
Configuring the ALG feature
About the ALG feature
The Application Level Gateway (ALG) feature distributes parent and child sessions to the same link.
Procedure
1. Enter system view.
system-view
2. Enable ALG.
¡ Enable ALG for the specified protocol:
loadbalance alg { dns | ftp | h323 | icmp-error | ils | mgcp | nbt | pptp | rsh | rtsp | sccp | sip | sqlnet | tftp | xdmcp }
¡ Enable ALG for all protocols:
loadbalance alg all-enable
By default, ALG is enabled for the DNS, FTP, PPTP, and RTSP protocols and ICMP error packets.
Performing a load balancing test
About performing a load balancing test
Perform this task in any view to test the load balancing result.
Performing an IPv4 load balancing test
To perform an IPv4 load balancing test, execute the following command in any view:
In standalone mode:
loadbalance schedule-test ip [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmp | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port
In IRF mode:
loadbalance schedule-test ip [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmp | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port [ slot slot-number ]
loadbalance schedule-test ip [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmp | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port [ chassis chassis-number slot slot-number ]
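For example, the following command (standalone mode, with illustrative addresses and ports) tests which link the device selects for a TCP flow from 192.168.1.2 port 10000 to 198.51.100.10 port 80:
loadbalance schedule-test ip protocol tcp destination 198.51.100.10 destination-port 80 source 192.168.1.2 source-port 10000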
Performing an IPv6 load balancing test
To perform an IPv6 load balancing test, execute the following command in any view:
In standalone mode:
loadbalance schedule-test ipv6 [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmpv6 | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port
In IRF mode:
loadbalance schedule-test ipv6 [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmpv6 | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port [ slot slot-number ]
loadbalance schedule-test ipv6 [ vpn-instance vpn-instance-name ] { application http { message-file file-name | method { get | post } url url [ header header ]&<1-10> [ content content-value ] } | protocol { protocol-number | icmpv6 | tcp | udp } } destination destination-address destination-port destination-port source source-address source-port source-port [ chassis chassis-number slot slot-number ]
Enabling SNMP notifications
About enabling SNMP notifications
To report critical load balancing events to an NMS, enable SNMP notifications for load balancing. For load balancing event notifications to be sent correctly, you must also configure SNMP as described in Network Management and Monitoring Configuration Guide.
The SNMP notifications configuration tasks for Layer 4 and Layer 7 server load balancing are the same.
Procedure
1. Enter system view.
system-view
2. Enable SNMP notifications for load balancing.
snmp-agent trap enable loadbalance
By default, SNMP notifications are enabled for load balancing.
Enabling load balancing logging
About load balancing logging
For security auditing purposes, enable load balancing logging to record load balancing information. Load balancing logging includes the following types:
· Basic logging.
· NAT logging.
· Link busy state logging.
Basic logging generates logs for the following events:
· The state of a link or link group changes.
· The health monitoring result of a link changes.
· The number of connections on a link or virtual server reaches or drops below the upper limit.
· The connection establishment rate on a link or virtual server reaches or drops below the upper limit.
· A primary/backup server farm switchover occurs between server farms specified for a virtual server.
· A primary/backup server farm switchover occurs between server farms specified for an LB action.
NAT logging records NAT session information, including IP address and port translation information and access information.
Link busy state logging records busy states for all links.
Enabling load balancing basic logging
1. Enter system view.
system-view
2. Enable load balancing basic logging.
loadbalance log enable base
By default, load balancing basic logging is enabled.
Enabling load balancing NAT logging
1. Enter system view.
system-view
2. Enable load balancing NAT logging.
loadbalance log enable nat
By default, load balancing NAT logging is disabled.
Enabling load balancing link busy state logging
1. Enter system view.
system-view
2. Enable load balancing link busy state logging.
loadbalance log enable bandwidth-busy
By default, load balancing link busy state logging is disabled.
Displaying and maintaining outbound link load balancing
Execute display commands in any view and reset commands in user view.
· Display LB action information:
display loadbalance action [ name action-name ]
· Display LB class information:
display loadbalance class [ name class-name ]
· Display LB hot backup statistics:
In IRF mode: display loadbalance hot-backup statistics [ slot slot-number ]
· Display ISP information:
display loadbalance isp [ ip ipv4-address | ipv6 ipv6-address | name isp-name ]
· Display LB policy information:
display loadbalance policy [ name policy-name ]
· Display proximity entry information:
In standalone mode: display loadbalance proximity [ vpn-instance vpn-instance-name ] [ ip [ ipv4-address ] | ipv6 [ ipv6-address ] ]
In IRF mode: display loadbalance proximity [ vpn-instance vpn-instance-name ] [ ip [ ipv4-address ] | ipv6 [ ipv6-address ] ] [ slot slot-number ]
· Display parameter profile information:
display parameter-profile [ name parameter-name ]
· Display link information:
display loadbalance link [ brief | name link-name ]
· Display link group member information:
display loadbalance link link-group link-group-name [ name link-name ]
· Display link statistics:
In standalone mode: display loadbalance link statistics [ name link-name ]
In IRF mode: display loadbalance link statistics [ name link-name ] [ slot slot-number ]
· Display link group member statistics:
In standalone mode: display loadbalance link statistics link-group link-group-name [ name link-name ]
In IRF mode: display loadbalance link statistics link-group link-group-name [ name link-name ] [ slot slot-number ]
· Display link outbound interface statistics:
display loadbalance link out-interface statistics [ name link-name ]
· Display link group information:
display loadbalance link-group [ brief | name link-group-name ]
· Display sticky entry information:
In standalone mode: display sticky virtual-server [ virtual-server-name ] [ class class-name | default-class | default-link-group ]
In IRF mode: display sticky virtual-server [ virtual-server-name ] [ class class-name | default-class | default-link-group ] [ slot slot-number ]
· Display sticky group information:
display sticky-group [ name group-name ]
· Display virtual server information:
display virtual-server [ brief | name virtual-server-name ]
· Display virtual server statistics:
In standalone mode: display virtual-server statistics [ name virtual-server-name ]
In IRF mode: display virtual-server statistics [ name virtual-server-name ] [ slot slot-number ]
· Display the ALG status for all protocols.
· Display DNS cache information:
In standalone mode: display loadbalance dns-cache [ domain-name domain-name ]
In IRF mode: display loadbalance dns-cache [ domain-name domain-name ] [ slot slot-number ]
· Clear LB hot backup statistics:
reset loadbalance hot-backup statistics
· Clear proximity entry information:
reset loadbalance proximity [ vpn-instance vpn-instance-name ] [ ip [ ipv4-address ] | ipv6 [ ipv6-address ] ]
· Clear all Layer 7 connections:
reset loadbalance connections
· Clear link statistics:
reset loadbalance link statistics [ link-name ]
· Clear link group member statistics:
reset loadbalance link statistics link-group link-group-name [ name link-name ]
· Clear virtual server statistics:
reset virtual-server statistics [ virtual-server-name ]
· Clear DNS cache information:
reset loadbalance dns-cache [ domain-name domain-name ]
Outbound link load balancing configuration examples
Example: Configuring outbound link load balancing
Network configuration
In Figure 14, ISP 1 and ISP 2 provide two links, Link 1 and Link 2, with the same router hop count, bandwidth, and cost. Link 1 has lower network delay.
Configure link load balancing for the device to select an optimal link for traffic from the host to the server.
Procedure
1. Configure IP addresses for interfaces.
[Device] interface gigabitethernet 1/0
[Device-GigabitEthernet1/0] ip address 10.1.1.1 24
[Device-GigabitEthernet1/0] quit
[Device] interface gigabitethernet 2/0
[Device-GigabitEthernet2/0] ip address 20.1.1.1 24
[Device-GigabitEthernet2/0] quit
2. Configure a link group:
# Create the ICMP-type NQA template t1, and configure the NQA client to send the probe result to the feature that uses the template on a per-probe basis.
[Device] nqa template icmp t1
[Device-nqatplt-icmp-t1] reaction trigger per-probe
[Device-nqatplt-icmp-t1] quit
# Specify the default proximity probe method as t1, and set the network delay weight for proximity calculation to 200.
[Device] loadbalance proximity
[Device-lb-proximity] match default probe t1
[Device-lb-proximity] rtt weight 200
[Device-lb-proximity] quit
# Create the link group lg, and enable the proximity feature.
[Device] loadbalance link-group lg
[Device-lb-lgroup-lg] proximity enable
# Disable the NAT feature.
[Device-lb-lgroup-lg] transparent enable
[Device-lb-lgroup-lg] quit
3. Configure links:
# Create the link link1 with next hop address 10.1.1.2, and add it to the link group lg.
[Device] loadbalance link link1
[Device-lb-link-link1] router ip 10.1.1.2
[Device-lb-link-link1] link-group lg
[Device-lb-link-link1] quit
# Create the link link2 with next hop address 20.1.1.2, and add it to link group lg.
[Device] loadbalance link link2
[Device-lb-link-link2] router ip 20.1.1.2
[Device-lb-link-link2] link-group lg
[Device-lb-link-link2] quit
4. Create the link-IP virtual server vs with VSIP 0.0.0.0/0, specify link group lg as its primary link group, and enable the virtual server.
[Device] virtual-server vs type link-ip
[Device-vs-link-ip-vs] virtual ip address 0.0.0.0 0
[Device-vs-link-ip-vs] default link-group lg
[Device-vs-link-ip-vs] service enable
[Device-vs-link-ip-vs] quit
Verifying the configuration
# Display brief information about all links.
[Device] display loadbalance link brief
Link Route IP State VPN instance Link group
link1 10.1.1.2 Active lg
link2 20.1.1.2 Active lg
# Display detailed information about all link groups.
[Device] display loadbalance link-group
Link group: lg
Description:
Predictor: Round robin
Proximity: Enabled
NAT: Disabled
SNAT pool:
Failed action: Keep
Active threshold: Disabled
Slow-online: Disabled
Selected link: Disabled
Probe information:
Probe success criteria: All
Probe method:
t1
Total link: 2
Active link: 2
Link list:
Name State VPN instance Router IP Weight Priority
link1 Active 10.1.1.2 100 4
link2 Active 20.1.1.2 100 4
# Display detailed information about all virtual servers.
[Device] display virtual-server
Virtual server: vs
Description:
Type: LINK-IP
State: Active
VPN instance:
Virtual IPv4 address: 0.0.0.0/0
Virtual IPv6 address: --
Port: 0
Primary link group: lg (in use)
Backup link group:
Sticky:
LB policy:
LB limit-policy:
Connection limit: --
Rate limit:
Connections: --
Bandwidth: --
Inbound bandwidth: --
Outbound bandwidth: --
Connection synchronization: Disabled
Sticky synchronization: Disabled
Bandwidth busy protection: Disabled
Interface bandwidth statistics: Disabled
Route advertisement: Disabled
# Display brief information about all IPv4 proximity entries.
[Device] display loadbalance proximity ip
IPv4 entries in total: 1
IPv4 address/Mask length Timeout Best link
------------------------------------------------------------
10.1.0.0/24 50 link1
Configuring transparent DNS proxies
About transparent DNS proxies
Application scenario
As shown in Figure 15, intranet users of an enterprise can access external servers A and B through link 1 of ISP 1 and link 2 of ISP 2. External servers A and B provide the same services. All DNS requests of intranet users are forwarded to DNS server A, which returns the resolved IP address of external server A to the requesting users. In this way, all traffic of intranet users is forwarded on one link. Link congestion might occur.
The transparent DNS proxy feature can solve this problem by forwarding DNS requests to DNS servers in different ISPs. All traffic from intranet users is evenly distributed on multiple links. This feature can prevent link congestion and ensure service continuity upon a link failure.
Figure 15 Transparent DNS proxy working mechanism
Workflow
The transparent DNS proxy is implemented by changing the destination IP address of DNS requests.
Figure 16 Transparent DNS proxy workflow
Table 4 Workflow description

| Step | Source IP address | Destination IP address |
| --- | --- | --- |
| 1. An intranet user on the client host sends a DNS request to the LB device. | Host IP address | IP address of DNS server A |
| 2. The LB device selects a DNS server to forward the DNS request according to the scheduling algorithm. | N/A | N/A |
| 3. The LB device changes the destination IP address of the DNS request to the IP address of the selected DNS server. | Host IP address | IP address of the selected DNS server |
| 4. The DNS server processes the DNS request and replies with a DNS response. | IP address of the selected DNS server | Host IP address |
| 5. The LB device changes the source IP address of the DNS response to the original destination IP address of the DNS request. | IP address of DNS server A | Host IP address |
| 6. The intranet user accesses the external server according to the resolved IP address in the DNS response. | Host IP address | IP address of the external server |
| 7. The external server responds to the intranet user. | IP address of the external server | Host IP address |
Transparent DNS proxy on the LB device
The LB device distributes DNS requests to multiple links by changing the destination IP address of DNS requests.
As shown in Figure 17, the LB device contains the following elements:
· Transparent DNS proxy—The LB device performs transparent DNS proxy for a DNS request only when the port number of the DNS request matches the port number of the transparent DNS proxy.
· DNS server pool—A group of DNS servers.
· DNS server—Entity that processes DNS requests.
· Link—Physical link provided by an ISP.
· LB class—Classifies packets to implement load balancing based on packet type.
· LB action—Drops, forwards, or modifies packets.
· LB policy—Associates an LB class with an LB action. An LB policy can be referenced by the transparent DNS proxy.
Figure 17 Transparent DNS proxy on the LB device
If the destination IP address and port number of a DNS request match those of the transparent DNS proxy, the LB device processes the DNS request as follows:
1. The LB device finds the DNS server pool associated with the transparent DNS proxy.
2. The LB device selects a DNS server according to the scheduling algorithm configured for the DNS server pool.
3. The LB device uses the IP address of the selected DNS server as the destination IP address of the DNS request, and sends the request to the DNS server.
4. The DNS server receives and processes the DNS request, and replies with a DNS response.
The intranet user can now access the external server after receiving the DNS response.
Transparent DNS proxy tasks at a glance
To configure the transparent DNS proxy feature, perform the following tasks:
1. Configuring a transparent DNS proxy
2. Configuring a DNS server pool
3. Configuring a DNS server
4. Configuring a link
5. (Optional.) Configuring an LB policy
6. (Optional.) Configuring a sticky group
7. (Optional.) Enabling load balancing logging
Configuring a transparent DNS proxy
By configuring a transparent DNS proxy, you can load balance DNS requests that match the transparent DNS proxy.
Restrictions and guidelines
If both the "Specifying the default DNS server pool" and "Specifying an LB policy" tasks are configured, packets are processed by the LB policy first. If the processing fails, the packets are processed by the default DNS server pool.
Transparent DNS proxy tasks at a glance
To configure a transparent DNS proxy, perform the following tasks:
1. Creating a transparent DNS proxy
2. Specifying an IP address and port number
3. Configuring a packet processing policy
Choose the following tasks as needed:
¡ Specifying the default DNS server pool
¡ Specifying an LB policy
4. (Optional.) Specifying a VPN instance
5. (Optional.) Enabling the link protection feature
6. (Optional.) Configuring hot backup
7. Enabling the transparent DNS proxy
Creating a transparent DNS proxy
1. Enter system view.
system-view
2. Create a transparent DNS proxy and enter its view.
loadbalance dns-proxy dns-proxy-name type udp
Specifying an IP address and port number
Restrictions and guidelines
If server load balancing is configured, make sure the IP address and port number of the transparent DNS proxy are different from those of any UDP-type virtual server.
As a best practice, configure an all-zero IP address for a transparent DNS proxy. In this case, all DNS requests are processed by the transparent DNS proxy.
Procedure
1. Enter system view.
system-view
2. Enter transparent DNS proxy view.
loadbalance dns-proxy dns-proxy-name
3. Specify an IP address for the transparent DNS proxy.
IPv4:
ip address ipv4-address [ mask-length | mask ]
IPv6:
ipv6 address ipv6-address [ prefix-length ]
By default, no IP address is specified for a transparent DNS proxy.
4. Specify the port number for the transparent DNS proxy.
port port-number
By default, the port number is 53 for a transparent DNS proxy.
Specifying the default DNS server pool
1. Enter system view.
system-view
2. Enter transparent DNS proxy view.
loadbalance dns-proxy dns-proxy-name
3. Specify the default DNS server pool for the transparent DNS proxy.
default dns-server-pool pool-name [ sticky sticky-name ]
By default, no default DNS server pool is specified for a transparent DNS proxy.
Specifying an LB policy
About specifying an LB policy
By referencing an LB policy, the transparent DNS proxy load balances matching DNS requests based on the packet contents. For more information about configuring an LB policy, see "Configuring an LB policy."
Procedure
1. Enter system view.
system-view
2. Enter transparent DNS proxy view.
loadbalance dns-proxy dns-proxy-name
3. Specify an LB policy for the transparent DNS proxy.
lb-policy policy-name
By default, a transparent DNS proxy does not reference any LB policies.
Specifying a VPN instance
1. Enter system view.
system-view
2. Enter transparent DNS proxy view.
loadbalance dns-proxy dns-proxy-name
3. Specify a VPN instance for the transparent DNS proxy.
vpn-instance vpn-instance-name
By default, a transparent DNS proxy belongs to the public network.
Enabling the link protection feature
About the link protection feature
This feature enables a transparent DNS proxy to select a DNS server based on the link bandwidth ratio. If the bandwidth ratio of a link is exceeded, the DNS server is not selected.
If the traffic volume on the link to a DNS server exceeds the maximum expected bandwidth multiplied by the bandwidth ratio, the DNS server is busy and will not be selected. If the traffic volume drops below the maximum expected bandwidth multiplied by the bandwidth recovery ratio, the DNS server participates in scheduling again. For more information about setting the bandwidth ratio, see "Setting the bandwidth ratio and maximum expected bandwidth."
Procedure
1. Enter system view.
system-view
2. Enter transparent DNS proxy view.
loadbalance dns-proxy dns-proxy-name
3. Enable the link protection feature.
bandwidth busy-protection enable
By default, the link protection feature is disabled.
Configuring hot backup
About hot backup
To implement hot backup for two LB devices, you must enable synchronization for session extension information and sticky entries to avoid service interruption.
Restrictions and guidelines
To ensure successful sticky entry synchronization, enable sticky entry synchronization on both LB devices before specifying a sticky group. You can specify a sticky group by using the sticky sticky-name option when specifying the default DNS server pool.
Procedure
1. Enter system view.
system-view
2. Enter transparent DNS proxy view.
loadbalance dns-proxy dns-proxy-name
3. Enable session extension information synchronization.
connection-sync enable
By default, session extension information synchronization is disabled.
4. Enable sticky entry synchronization.
sticky-sync enable
By default, sticky entry synchronization is disabled.
Enabling the transparent DNS proxy
About enabling the transparent DNS proxy
After configuring a transparent DNS proxy, you must enable the transparent DNS proxy for it to work.
Procedure
1. Enter system view.
system-view
2. Enter transparent DNS proxy view.
loadbalance dns-proxy dns-proxy-name
3. Enable the transparent DNS proxy.
service enable
By default, a transparent DNS proxy is disabled.
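For example, the following sketch (hypothetical names, assuming DNS server pool dsp1 is configured as described in the next sections) creates a UDP transparent DNS proxy that matches all DNS requests on port 53, specifies dsp1 as the default DNS server pool, and enables the proxy:
system-view
loadbalance dns-proxy dp1 type udp
ip address 0.0.0.0 0
port 53
default dns-server-pool dsp1
service enable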
Configuring a DNS server pool
By configuring a DNS server pool, you can perform centralized management on DNS servers that have similar functions.
Creating a DNS server pool
1. Enter system view.
system-view
2. Create a DNS server pool and enter its view.
loadbalance dns-server-pool pool-name
3. (Optional.) Configure a description for the DNS server pool.
description text
By default, no description is configured for a DNS server pool.
Adding and configuring a DNS server pool member
About adding and configuring a DNS server pool member
Perform this task to create a DNS server pool member or add an existing DNS server as a DNS server pool member in DNS server pool view. You can also specify a DNS server pool for a DNS server in DNS server view to achieve the same purpose (see "Creating a DNS server and specifying a DNS server pool").
After adding a DNS server pool member, you can configure the following parameters and features for the DNS server in the DNS server pool:
· Weight.
· Priority.
· Health monitoring.
The member-based scheduling algorithm selects the best DNS server based on these configurations.
Adding a DNS server pool member
1. Enter system view.
system-view
2. Enter DNS server pool view.
loadbalance dns-server-pool pool-name
3. Create and add a DNS server pool member and enter DNS server pool member view.
dns-server dns-server-name port port-number
If the DNS server already exists, the command adds the existing DNS server as a DNS server pool member.
4. (Optional.) Configure a description for the DNS server pool member.
description text
By default, no description is configured for the DNS server pool member.
Setting the weight and priority of the DNS server pool member
1. Enter system view.
system-view
2. Enter DNS server pool view.
loadbalance dns-server-pool pool-name
3. Enter DNS server pool member view.
dns-server dns-server-name port port-number
4. Set the weight of the DNS server pool member.
weight weight-value
The default setting is 100.
5. Set the priority of the DNS server pool member.
priority priority
The default setting is 4.
Configuring health monitoring for the DNS server pool member
1. Enter system view.
system-view
2. Enter DNS server pool view.
loadbalance dns-server-pool pool-name
3. Enter DNS server pool member view.
dns-server dns-server-name port port-number
4. Specify a health monitoring method for the DNS server pool member.
probe template-name
By default, no health monitoring method is specified for the DNS server pool member.
You can specify an NQA template or load balancing template for health monitoring. For information about NQA templates, see NQA configuration in Network Management and Monitoring Configuration Guide.
5. Specify the health monitoring success criteria for the DNS server pool member.
success-criteria { all | at-least min-number }
By default, health monitoring succeeds only when all the specified health monitoring methods succeed.
Configuring a scheduling algorithm for a DNS server pool
About scheduling algorithms for DNS server pools
Perform this task to specify a scheduling algorithm for a DNS server pool and specify the number of DNS servers to participate in scheduling. The LB device calculates the DNS servers to process DNS requests based on the following scheduling algorithms:
· Source IP address hash algorithm—Hashes the source IP address of DNS requests and distributes DNS requests to different DNS servers according to the hash values. This hash algorithm ensures that DNS requests with the same source IP address are distributed to the same DNS server.
· Source IP address and port hash algorithm—Hashes the source IP address and port number of DNS requests and distributes DNS requests to different DNS servers according to the hash values. This hash algorithm ensures that DNS requests with the same source IP address and port number are distributed to the same DNS server.
· Destination IP address hash algorithm—Hashes the destination IP address of DNS requests and distributes DNS requests to different DNS servers according to the hash values. This hash algorithm ensures that DNS requests with the same destination IP address are distributed to the same DNS server.
· Random algorithm—Distributes DNS requests to DNS servers randomly.
· Weighted round-robin algorithm—Distributes DNS requests to DNS servers in a round-robin manner according to the weights of DNS servers. For example, you can assign weight values 2 and 1 to DNS server A and DNS server B, respectively. This algorithm distributes two DNS requests to DNS server A and then distributes one DNS request to DNS server B. This algorithm applies to scenarios where DNS servers have different performance and bear similar load for each session.
· Bandwidth algorithm—Distributes DNS requests to DNS servers according to the weights and remaining bandwidths of DNS servers. When the remaining bandwidths of two DNS servers are the same, this algorithm is equivalent to the round-robin algorithm. When the weights of two DNS servers are the same, this algorithm always distributes DNS requests to the DNS server that has larger remaining bandwidth.
· Maximum bandwidth algorithm—Always distributes DNS requests to the idle DNS server that has the largest remaining bandwidth.
Procedure
1. Enter system view.
system-view
2. Enter DNS server pool view.
loadbalance dns-server-pool pool-name
3. Specify a scheduling algorithm for the DNS server pool.
predictor hash address { destination | source | source-ip-port } [ mask mask-length ] [ prefix prefix-length ]
predictor { random | round-robin | { bandwidth | max-bandwidth } [ inbound | outbound ] }
By default, the scheduling algorithm for a DNS server pool is weighted round robin.
4. Specify the number of DNS servers to participate in scheduling.
selected-server min min-number max max-number
By default, the DNS servers with the highest priority participate in scheduling.
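For example, the following sketch (hypothetical pool name dsp1) creates a DNS server pool and configures the source IP address hash algorithm so that DNS requests from the same client are always distributed to the same DNS server:
system-view
loadbalance dns-server-pool dsp1
predictor hash address source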
Configuring health monitoring
About configuring health monitoring
Perform this task to enable health monitoring to detect the availability of DNS servers in a DNS server pool.
Restrictions and guidelines
The health monitoring configuration in DNS server view takes precedence over the configuration in DNS server pool view.
You can specify an NQA template or load balancing template for health monitoring. For information about NQA templates, see NQA configuration in Network Management and Monitoring Configuration Guide.
Procedure
1. Enter system view.
system-view
2. Enter DNS server pool view.
loadbalance dns-server-pool pool-name
3. Specify a health monitoring method for the DNS server pool.
probe template-name
By default, no health monitoring method is specified for a DNS server pool.
4. Specify the health monitoring success criteria for the DNS server pool.
success-criteria { all | at-least min-number }
By default, health monitoring succeeds only when all the specified health monitoring methods succeed.
Configuring a DNS server
Perform this task to configure an entity on the LB device for processing DNS requests. DNS servers configured on the LB device correspond to DNS servers in ISP networks. A DNS server can belong to multiple DNS server pools. A DNS server pool can contain multiple DNS servers.
DNS server tasks at a glance
To configure a DNS server, perform the following tasks:
1. Creating a DNS server and specifying a DNS server pool
2. Configuring an IP address for a DNS server
Choose one of the following tasks:
¡ Specifying an IP address and port number
¡ Enabling the device to automatically obtain the IP address of a DNS server
3. Associating a link with a DNS server
4. (Optional.) Setting a weight and priority
5. (Optional.) Configuring health monitoring
Creating a DNS server and specifying a DNS server pool
1. Enter system view.
system-view
2. Create a DNS server and enter its view.
loadbalance dns-server dns-server-name
3. (Optional.) Configure a description for the DNS server.
description text
By default, no description is configured for a DNS server.
4. Specify a DNS server pool for the DNS server.
dns-server-pool pool-name
By default, a DNS server does not belong to any DNS server pool.
Specifying an IP address and port number
1. Enter system view.
system-view
2. Enter DNS server view.
loadbalance dns-server dns-server-name
3. Specify an IP address for the DNS server.
IPv4:
ip address ipv4-address
IPv6:
ipv6 address ipv6-address
By default, no IP address is specified for a DNS server.
4. Specify the port number for the DNS server.
port port-number
By default, the port number of a DNS server is 0. Packets use their own port numbers.
Enabling the device to automatically obtain the IP address of a DNS server
About enabling the device to automatically obtain the IP address of a DNS server
In scenarios where IP addresses are obtained through PPPoE, an LB device can dynamically obtain the IP address of a DNS server.
Before configuring this task, you must specify the outgoing interface for the link associated with the DNS server. Otherwise, the IP address of the DNS server cannot be obtained.
Procedure
1. Enter system view.
system-view
2. Enter DNS server view.
loadbalance dns-server dns-server-name
3. Enable the device to automatically obtain the IP address of the DNS server.
auto-alloc address
By default, the device does not automatically obtain the IP address of a DNS server.
Associating a link with a DNS server
Restrictions and guidelines
A DNS server can be associated with only one link. A link can be associated with multiple DNS servers.
Procedure
1. Enter system view.
system-view
2. Enter DNS server view.
loadbalance dns-server dns-server-name
3. Associate a link with the DNS server.
link link-name
By default, no link is associated with a DNS server.
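For example, the following sketch (hypothetical names and an illustrative address, assuming DNS server pool dsp1 and link link1 already exist) creates DNS server ds1, adds it to pool dsp1, specifies its IP address, and associates it with link1:
system-view
loadbalance dns-server ds1
dns-server-pool dsp1
ip address 101.1.1.53
link link1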
Setting a weight and priority
About setting a weight and priority
Perform this task to set a weight for the weighted round robin algorithm and bandwidth algorithm of a DNS server, and set the scheduling priority in the DNS server pool for the DNS server.
Procedure
1. Enter system view.
system-view
2. Enter DNS server view.
loadbalance dns-server dns-server-name
3. Set a weight for the DNS server.
weight weight-value
By default, the weight of a DNS server is 100.
4. Set a priority for the DNS server.
priority priority
By default, the priority of a DNS server is 4.
Configuring health monitoring
About configuring health monitoring
Perform this task to enable health monitoring to detect the availability of a DNS server.
Restrictions and guidelines
The health monitoring configuration in DNS server view takes precedence over the configuration in DNS server pool view.
Procedure
1. Enter system view.
system-view
2. Enter DNS server view.
loadbalance dns-server dns-server-name
3. Specify a health monitoring method for the DNS server.
probe template-name
By default, no health monitoring method is specified for a DNS server.
4. Specify the health monitoring success criteria for the DNS server.
success-criteria { all | at-least min-number }
By default, health monitoring succeeds only when all the specified health monitoring methods succeed.
Configuring a link
A link is a physical link provided by an ISP. You can guide traffic forwarding by specifying an outbound next hop for a link. You can enhance link performance by configuring the maximum bandwidth, health monitoring, bandwidth ratio, and maximum expected bandwidth.
Link tasks at a glance
To configure a link, perform the following tasks:
1. Creating a link
2. Specifying a next hop IP address or an outgoing interface
Choose one of the following tasks:
¡ Specifying an outbound next hop for a link
¡ Specifying an outgoing interface for a link
3. (Optional.) Specifying a VPN instance
4. (Optional.) Configuring the maximum bandwidth
5. (Optional.) Configuring health monitoring
6. (Optional.) Setting the bandwidth ratio and maximum expected bandwidth
Creating a link
1. Enter system view.
system-view
2. Create a link and enter link view.
loadbalance link link-name
3. (Optional.) Configure a description for the link.
description text
By default, no description is configured for a link.
Specifying an outbound next hop for a link
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Specify an outbound next hop for the link.
IPv4:
router ip ipv4-address
IPv6:
router ipv6 ipv6-address
By default, no outbound next hop is specified for a link.
Specifying an outgoing interface for a link
About specifying an outgoing interface for a link
In scenarios where IP addresses are obtained through PPPoE, an LB device can dynamically obtain the outbound next hop IP address through the specified outgoing interface.
Procedure
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Specify an outgoing interface for the link.
router interface interface-type interface-number
By default, no outgoing interface is specified for a link.
Specifying a VPN instance
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Specify a VPN instance for the link.
vpn-instance vpn-instance-name
By default, a link belongs to the public network.
Configuring the maximum bandwidth
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Set the maximum bandwidth for the link.
rate-limit bandwidth [ inbound | outbound ] bandwidth-value
By default, the maximum bandwidth for a link is not limited.
Configuring health monitoring
About configuring health monitoring
Perform this task to enable health monitoring to detect the availability of a link.
Procedure
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Specify a health monitoring method for the link.
probe template-name
By default, no health monitoring method is specified for a link.
4. Specify the health monitoring success criteria for the link.
success-criteria { all | at-least min-number }
By default, the health monitoring succeeds only when all the specified health monitoring methods succeed.
Setting the bandwidth ratio and maximum expected bandwidth
About setting the bandwidth ratio and maximum expected bandwidth
When the traffic exceeds the maximum expected bandwidth multiplied by the bandwidth ratio of a link, new traffic (traffic that does not match any sticky entries) is not distributed to the link. When the traffic drops below the maximum expected bandwidth multiplied by the bandwidth recovery ratio of the link, the link participates in scheduling again.
In addition to being used for link protection, the maximum expected bandwidth is used for remaining bandwidth calculation in the bandwidth algorithm and maximum bandwidth algorithm.
Procedure
1. Enter system view.
system-view
2. Enter link view.
loadbalance link link-name
3. Set the bandwidth ratio.
bandwidth [ inbound | outbound ] busy-rate busy-rate-number [ recovery recovery-rate-number ]
By default, the total bandwidth ratio is 70.
4. Set the maximum expected bandwidth.
max-bandwidth [ inbound | outbound ] bandwidth-value
By default, the maximum expected bandwidth is not limited.
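For example, in the following sketch (hypothetical link name link1 and illustrative values), the maximum expected outbound bandwidth is 100000 and the outbound bandwidth ratio is 80 with a recovery ratio of 60. New traffic stops being distributed to link1 when its outbound traffic exceeds 80% of 100000, and link1 participates in scheduling again when the traffic drops below 60% of 100000:
system-view
loadbalance link link1
bandwidth outbound busy-rate 80 recovery 60
max-bandwidth outbound 100000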
Configuring an LB class
An LB class classifies packets by comparing packets against specific rules. Matching packets are further processed by LB actions. You can create a maximum of 65535 rules for an LB class.
LB class tasks at a glance
To configure an LB class, perform the following tasks:
1. Creating an LB class
2. Creating a match rule
Choose the following tasks as needed:
¡ Creating a match rule that references an LB class
¡ Creating a source IP address match rule
¡ Creating a destination IP address match rule
¡ Creating a domain name match rule
Creating an LB class
1. Enter system view.
system-view
2. Create a DNS LB class, and enter LB class view.
loadbalance class class-name type dns [ match-all | match-any ]
When you create an LB class, you must specify the class type. You can enter the view of an existing LB class without specifying the class type. If you do specify the class type, it must match the type specified when the class was created.
3. (Optional.) Configure a description for the LB class.
description text
By default, no description is configured for an LB class.
Creating a match rule that references an LB class
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a match rule that references an LB class.
match [ match-id ] class class-name
Creating a source IP address match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a source IP address match rule.
match [ match-id ] source { ip address ipv4-address [ mask-length | mask ] | ipv6 address ipv6-address [ prefix-length ] }
Creating a destination IP address match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a destination IP address match rule.
match [ match-id ] destination { ip address ipv4-address [ mask-length | mask ] | ipv6 address ipv6-address [ prefix-length ] }
Creating an ACL match rule
Restrictions and guidelines
If the specified ACL does not exist, the ACL match rule does not take effect.
Procedure
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create an ACL match rule.
match [ match-id ] acl [ ipv6 ] { acl-number | name acl-name }
By default, an LB class does not have any match rules.
Creating a domain name match rule
1. Enter system view.
system-view
2. Enter LB class view.
loadbalance class class-name
3. Create a domain name match rule.
match [ match-id ] domain-name domain-name
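For example, the following minimal sketch adds a source IPv4 address match rule and a domain name match rule to the hypothetical DNS LB class lbc1 (the address and domain name are illustrative):
system-view
loadbalance class lbc1
# Match DNS requests sourced from 192.168.10.0/24.
match 10 source ip address 192.168.10.0 24
# Match DNS requests for the domain name www.example.com.
match 20 domain-name www.example.com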
Configuring an LB action
About LB actions
LB actions include the following modes:
· Forwarding mode—Determines whether and how to forward packets. If no forwarding action is specified, packets are dropped.
· Modification mode—Modifies packets. To prevent the LB device from dropping the modified packets, the modification action must be used together with a forwarding action.
If you create an LB action without specifying any of the previous action modes, packets are dropped.
Restrictions and guidelines
The following tasks are mutually exclusive:
· Configuring the forwarding mode
· Specifying a DNS server pool for guiding packet forwarding
· Skipping the current transparent DNS proxy
Configuring one of these tasks automatically cancels the others that you have configured.
LB action tasks at a glance
To configure an LB action, perform the following tasks:
1. Creating an LB action
2. (Optional.) Configuring a forwarding LB action
¡ Configuring the forwarding mode
¡ Specifying a DNS server pool for guiding packet forwarding
¡ Skipping the current transparent DNS proxy
¡ Matching the next rule upon failure to find a DNS server
¡ Matching the next rule when all DNS servers are busy
3. (Optional.) Configuring a modification LB action
¡ Configuring the ToS field in IP packets sent to the DNS server
Creating an LB action
1. Enter system view.
system-view
2. Create a DNS LB action and enter LB action view.
loadbalance action action-name type dns
When you create an LB action, you must specify the action type. You can enter an existing LB action view without specifying the action type. If you specify the action type when entering an existing LB action view, the action type must be the one specified when you create the LB action.
3. (Optional.) Configure a description for the LB action.
description text
By default, no description is configured for an LB action.
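For example, the following minimal sketch creates a DNS LB action named lba1 (hypothetical name):
system-view
loadbalance action lba1 type dns
description action for matched DNS requests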
Configuring a forwarding LB action
About forwarding LB actions
The following forwarding LB action types are available:
· Forward—Forwards matching packets.
· Specify a DNS server pool for guiding packet forwarding.
· Skip the current transparent DNS proxy—Skips the current transparent DNS proxy and matches the next transparent DNS proxy or virtual server.
· Match the next rule upon failure to find a DNS server—If the device fails to find a DNS server according to the LB action, it matches the packet with the next rule in the LB policy.
· Match the next rule when all DNS servers are busy.
Configuring the forwarding mode
1. Enter system view.
system-view
2. Enter DNS LB action view.
loadbalance action action-name
3. Configure the forwarding mode.
forward all
By default, the forwarding mode is to discard packets.
Specifying a DNS server pool for guiding packet forwarding
1. Enter system view.
system-view
2. Enter DNS LB action view.
loadbalance action action-name
3. Specify a DNS server pool for guiding packet forwarding.
dns-server-pool pool-name [ sticky sticky-name ]
By default, no DNS server pool is specified for guiding packet forwarding.
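For example, the following minimal sketch directs matching DNS requests to DNS server pool pool1 through the hypothetical DNS LB action lba1 (both names are assumptions, and the pool must already exist):
system-view
loadbalance action lba1
dns-server-pool pool1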
Skipping the current transparent DNS proxy
1. Enter system view.
system-view
2. Enter DNS LB action view.
loadbalance action action-name
3. Skip the current transparent DNS proxy.
skip current-dns-proxy
By default, the current transparent DNS proxy is not skipped.
Matching the next rule upon failure to find a DNS server
1. Enter system view.
system-view
2. Enter DNS LB action view.
loadbalance action action-name
3. Match the next rule upon failure to find a DNS server.
fallback-action continue
By default, the next rule is not matched (packets are dropped) when no DNS servers are available for an LB action.
Matching the next rule when all DNS servers are busy
1. Enter system view.
system-view
2. Enter DNS LB action view.
loadbalance action action-name
3. Match the next rule when all DNS servers are busy.
busy-action continue
By default, the device assigns DNS requests to DNS servers regardless of whether they are busy.
Configuring the ToS field in IP packets sent to the DNS server
1. Enter system view.
system-view
2. Enter DNS LB action view.
loadbalance action action-name
3. Configure the ToS field in IP packets sent to the DNS server.
set ip tos tos-number
By default, the ToS field in IP packets sent to the DNS server is not changed.
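For example, the following minimal sketch sets the ToS field to 16 (illustrative value) in IP packets that the hypothetical DNS LB action lba1 sends to the DNS server:
system-view
loadbalance action lba1
set ip tos 16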
Configuring an LB policy
LB policy tasks at a glance
To configure an LB policy, perform the following tasks:
1. Creating an LB policy
2. Specifying an LB action
3. Specifying the default LB action
Creating an LB policy
1. Enter system view.
system-view
2. Create a DNS LB policy and enter DNS LB policy view.
loadbalance policy policy-name type dns
When you create an LB policy, you must specify the policy type. You can enter an existing LB policy view without specifying the policy type. If you specify the policy type when entering an existing LB policy view, the policy type must be the one specified when you create the LB policy.
3. (Optional.) Configure a description for the LB policy.
description text
By default, no description is configured for an LB policy.
Specifying an LB action
Restrictions and guidelines
A DNS LB policy can reference only DNS LB classes and DNS LB actions.
Procedure
1. Enter system view.
system-view
2. Enter DNS LB policy view.
loadbalance policy policy-name
3. Specify an LB action for an LB class.
class class-name [ insert-before before-class-name ] action action-name
By default, no LB action is specified for an LB class.
Specifying the default LB action
Restrictions and guidelines
The default LB action takes effect on packets that do not match any LB classes.
A DNS LB policy can reference only a DNS LB action as the default LB action.
Procedure
1. Enter system view.
system-view
2. Enter DNS LB policy view.
loadbalance policy policy-name
3. Specify the default LB action.
default-class action action-name
By default, no default LB action is specified.
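For example, the following minimal sketch creates a DNS LB policy named lbp1 that uses action lba1 for packets matching class lbc1 and action lba2 for all other packets. All names are hypothetical, and the referenced class and actions must already exist.
system-view
loadbalance policy lbp1 type dns
# Use action lba1 for packets matching class lbc1.
class lbc1 action lba1
# Use action lba2 as the default LB action.
default-class action lba2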
Configuring a sticky group
A sticky group uses a sticky method to distribute similar sessions to the same DNS server according to sticky entries. The sticky method applies to the first packet of a session. Other packets of the session are distributed to the same DNS server.
Sticky group tasks at a glance
To configure a sticky group, perform the following tasks:
1. Creating a sticky group
2. Configuring the IP sticky method
3. (Optional.) Configuring the timeout time for sticky entries
Creating a sticky group
1. Enter system view.
system-view
2. Create an address- and port-type sticky group and enter sticky group view.
sticky-group group-name type address-port
When you create a sticky group, you must specify the group type. You can enter an existing sticky group view without specifying the group type. If you specify the group type when entering an existing sticky group view, the group type must be the one specified when you create the sticky group.
3. (Optional.) Configure a description for the sticky group.
description text
By default, no description is configured for a sticky group.
Configuring the IP sticky method
1. Enter system view.
system-view
2. Enter sticky group view.
sticky-group group-name
3. Configure the IP sticky method.
IPv4:
ip [ port ] { both | destination | source } [ mask mask-length ]
IPv6:
ipv6 [ port ] { both | destination | source } [ prefix prefix-length ]
By default, no IP sticky method is configured.
Configuring the timeout time for sticky entries
1. Enter system view.
system-view
2. Enter sticky group view.
sticky-group group-name
3. Configure the timeout time for sticky entries.
timeout timeout-value
By default, the timeout time for sticky entries is 60 seconds.
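For example, the following minimal sketch creates an address- and port-type sticky group named sg1 (hypothetical name), sticks sessions by source IP address, and ages out sticky entries after 300 seconds:
system-view
sticky-group sg1 type address-port
ip source
timeout 300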
Enabling load balancing logging
About load balancing logging
For security auditing purposes, enable load balancing logging to record load balancing information. Load balancing logging includes NAT logging and link busy state logging.
NAT logging records NAT session information, including IP address and port translation information and access information.
Link busy state logging records busy states for all links.
Enabling load balancing NAT logging
1. Enter system view.
system-view
2. Enable load balancing NAT logging.
loadbalance log enable nat
By default, load balancing NAT logging is disabled.
Enabling load balancing link busy state logging
1. Enter system view.
system-view
2. Enable load balancing link busy state logging.
loadbalance log enable bandwidth-busy
By default, load balancing link busy state logging is disabled.
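For example, the following commands enable both load balancing NAT logging and link busy state logging:
system-view
loadbalance log enable nat
loadbalance log enable bandwidth-busy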
Displaying and maintaining transparent DNS proxy
Execute display commands in any view and reset commands in user view.
· Display DNS server pool information:
display loadbalance dns-server-pool [ brief | name pool-name ]
· Display DNS server information:
display loadbalance dns-server [ brief | name dns-server-name ]
· Display DNS server pool member information:
display loadbalance dns-server dns-server-pool dns-server-pool-name [ name dns-server-name port port-number ]
· Display DNS server statistics:
In standalone mode: display loadbalance dns-server statistics [ name dns-server-name ]
In IRF mode: display loadbalance dns-server statistics [ name dns-server-name ] [ slot slot-number ]
· Display DNS server pool member statistics:
In standalone mode: display loadbalance dns-server statistics dns-server-pool dns-server-pool-name [ name dns-server-name port port-number ]
In IRF mode: display loadbalance dns-server statistics dns-server-pool dns-server-pool-name [ name dns-server-name port port-number ] [ slot slot-number ]
· Display transparent DNS proxy information:
display loadbalance dns-proxy [ brief | name dns-proxy-name ]
· Display transparent DNS proxy statistics:
In standalone mode: display loadbalance dns-proxy statistics [ name dns-proxy-name ]
In IRF mode: display loadbalance dns-proxy statistics [ name dns-proxy-name ] [ slot slot-number ]
· Display link information:
display loadbalance link [ brief | name link-name ]
· Display link statistics:
In standalone mode: display loadbalance link statistics [ name link-name ]
In IRF mode: display loadbalance link statistics [ name link-name ] [ slot slot-number ]
· Display LB class information:
display loadbalance class [ name class-name ]
· Display LB action information:
display loadbalance action [ name action-name ]
· Display LB policy information:
display loadbalance policy [ name policy-name ]
· Display sticky entry information for transparent DNS proxies:
In standalone mode: display sticky dns-proxy [ dns-proxy-name ] [ class class-name | default-class | default-dns-server-pool ]
In IRF mode: display sticky dns-proxy [ dns-proxy-name ] [ class class-name | default-class | default-dns-server-pool ] [ slot slot-number ]
· Display sticky group information:
display sticky-group [ name group-name ]
· Display LB hot backup statistics:
In IRF mode: display loadbalance hot-backup statistics [ slot slot-number ]
· Clear DNS server statistics:
reset loadbalance dns-server statistics [ dns-server-name ]
· Clear DNS server pool member statistics:
reset loadbalance dns-server statistics dns-server-pool dns-server-pool-name [ name dns-server-name port port-number ]
· Clear transparent DNS proxy statistics:
reset loadbalance dns-proxy statistics [ dns-proxy-name ]
· Clear link statistics:
reset loadbalance link statistics [ link-name ]
· Clear LB hot backup statistics:
reset loadbalance hot-backup statistics
Transparent DNS proxy configuration examples
Example: Configuring transparent DNS proxy
Network configuration
In Figure 18, ISP 1 and ISP 2 provide two links with the same bandwidth: Link 1 and Link 2. The IP address of the DNS server of ISP 1 is 10.1.2.100. The IP address of the DNS server of ISP 2 is 20.1.2.100. Intranet users use domain name www.abc.com to access Web server A and Web server B.
Configure a transparent DNS proxy on the device to evenly distribute user traffic to Link 1 and Link 2.
Procedure
1. Configure IP addresses for interfaces.
<Device> system-view
[Device] interface gigabitethernet 1/0/1
[Device-GigabitEthernet1/0/1] ip address 192.168.1.100 24
[Device-GigabitEthernet1/0/1] quit
[Device] interface gigabitethernet 1/0/2
[Device-GigabitEthernet1/0/2] ip address 10.1.1.1 24
[Device-GigabitEthernet1/0/2] quit
[Device] interface gigabitethernet 1/0/3
[Device-GigabitEthernet1/0/3] ip address 20.1.1.1 24
[Device-GigabitEthernet1/0/3] quit
2. Configure links:
# Create the link link1 with next hop address 10.1.1.2.
[Device] loadbalance link link1
[Device-lb-link-link1] router ip 10.1.1.2
[Device-lb-link-link1] quit
# Create the link link2 with next hop address 20.1.1.2.
[Device] loadbalance link link2
[Device-lb-link-link2] router ip 20.1.1.2
[Device-lb-link-link2] quit
3. Create a DNS server pool named dsp.
[Device] loadbalance dns-server-pool dsp
[Device-lb-dspool-dsp] quit
4. Configure DNS servers:
# Create a DNS server named ds1, configure its IP address as 10.1.2.100, assign it to DNS server pool dsp, and associate it with link link1.
[Device] loadbalance dns-server ds1
[Device-lb-ds-ds1] ip address 10.1.2.100
[Device-lb-ds-ds1] dns-server-pool dsp
[Device-lb-ds-ds1] link link1
[Device-lb-ds-ds1] quit
# Create a DNS server named ds2, configure its IP address as 20.1.2.100, assign it to DNS server pool dsp, and associate it with link link2.
[Device] loadbalance dns-server ds2
[Device-lb-ds-ds2] ip address 20.1.2.100
[Device-lb-ds-ds2] dns-server-pool dsp
[Device-lb-ds-ds2] link link2
[Device-lb-ds-ds2] quit
5. Configure a transparent DNS proxy:
# Create a UDP transparent DNS proxy named dns-proxy1, configure its IP address as 0.0.0.0, specify DNS server pool dsp as its default DNS server pool, and enable the transparent DNS proxy.
[Device] loadbalance dns-proxy dns-proxy1 type udp
[Device-lb-dp-udp-dns-proxy1] ip address 0.0.0.0 0
[Device-lb-dp-udp-dns-proxy1] default dns-server-pool dsp
[Device-lb-dp-udp-dns-proxy1] service enable
[Device-lb-dp-udp-dns-proxy1] quit
Verifying the configuration
# Display brief information about all DNS servers.
[Device] display loadbalance dns-server brief
DNS server Address Port Link State DNS server pool
ds1 10.1.2.100 0 link1 Active dsp
ds2 20.1.2.100 0 link2 Active dsp
# Display detailed information about all DNS server pools.
[Device] display loadbalance dns-server-pool
DNS server pool: dsp
Description:
Predictor: Round robin
Selected server: Disabled
Probe information:
Probe success criteria: All
Probe method:
Total DNS servers: 2
Active DNS servers: 2
DNS server list:
Name State Address port Link Weight Priority
ds1 Active 10.1.2.100 0 link1 100 4
ds2 Active 20.1.2.100 0 link2 100 4
# Display detailed information about all transparent DNS proxies.
[Device] display loadbalance dns-proxy
DNS proxy: dns-proxy1
Type: UDP
State: Active
Service state: Enabled
VPN instance:
IPv4 address: 0.0.0.0/0
IPv6 address: --
Port: 53
DNS server pool: dsp
Sticky:
LB policy:
Connection synchronization: Enabled
Sticky synchronization: Enabled
Bandwidth busy protection: Disabled
After you complete the previous configuration, the LB device evenly distributes DNS requests to the DNS servers of ISP 1 and ISP 2. As a result, intranet user traffic is evenly distributed to Link 1 and Link 2.