Load Balancing

What is Load Balancing?

The load balancing feature distributes user requests for Web site pages and other protected applications across multiple servers that all host (or mirror) the same content. You use load balancing primarily to manage user requests to heavily used applications, preventing poor performance and outages and ensuring that users can access your protected applications. Load balancing also provides fault tolerance; when one server that hosts a protected application becomes unavailable, the feature distributes user requests to the other servers that host the same application.

You can configure the load balancing feature to:

  • Distribute all requests for a specific protected Web site, application, or resource between two or more identically configured servers.
  • Use any of several load balancing algorithms to determine which server should receive each incoming user request, basing the decision on factors such as which server has the fewest current user connections or which server carries the lightest load.

The load balancing feature is a core feature of the NetScaler appliance. Most users first set up a working basic configuration and then customize various settings, including persistence for connections. In addition, you can configure features for protecting the configuration against failure, managing client traffic, managing and monitoring servers, and managing a large scale deployment.
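For example, once a load balancing virtual server is in place (see the next section), connection persistence can be enabled from the NetScaler command line. The virtual server name below is a placeholder, and SOURCEIP persistence with a 10-minute timeout is just one of the persistence types available:

```
set lb vserver vsrv_web -persistenceType SOURCEIP -timeout 10
```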

How Load Balancing Works

In a basic load balancing setup, clients send their requests to the IP address of a virtual server configured on the NetScaler appliance. The virtual server distributes them to the load-balanced application servers according to a preset pattern, called the load balancing algorithm. In some cases, you might want to assign the load balancing virtual server a wildcard address instead of a specific IP address. For instructions about specifying a global HTTP port on the appliance, see Global HTTP Ports.
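As a minimal sketch of such a setup, the following NetScaler CLI commands create two services that point to identically configured web servers, create an HTTP load balancing virtual server with its own IP address (the VIP that clients connect to), and bind the services to it. All names and addresses are placeholders used only for illustration:

```
add service svc_web1 192.0.2.10 HTTP 80
add service svc_web2 192.0.2.11 HTTP 80
add lb vserver vsrv_web HTTP 198.51.100.20 80
bind lb vserver vsrv_web svc_web1
bind lb vserver vsrv_web svc_web2
```

Clients then send their requests to 198.51.100.20, and the appliance selects one of the bound services for each request according to the configured load balancing algorithm.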

Load Balancing Algorithms

The load balancing algorithm defines the criteria that the NetScaler appliance uses to select the service to which to redirect each client request. Different load balancing algorithms use different criteria. For example, the least connection algorithm selects the service with the fewest active connections, while the round robin algorithm maintains a running queue of active services, distributes each connection to the next service in the queue, and then sends that service to the end of the queue.

Some load balancing algorithms are best suited to handling traffic on websites, others to managing traffic to DNS servers, and others to handling complex web applications used in e-commerce or on company LANs or WANs. The following table lists each load balancing algorithm that the NetScaler appliance supports, with a brief description of how each operates.

| Name | Server Selection Based On |
| --- | --- |
| LEASTCONNECTION | Which service currently has the fewest client connections. This is the default load balancing algorithm. |
| ROUNDROBIN | Which service is at the top of a list of services. After that service is selected for a connection, it moves to the bottom of the list. |
| LEASTRESPONSETIME | Which load balanced server currently has the quickest response time. |
| URLHASH | A hash of the destination URL. |
| DOMAINHASH | A hash of the destination domain. |
| DESTINATIONIPHASH | A hash of the destination IP address. |
| SOURCEIPHASH | A hash of the source IP address. |
| SRCIPDESTIPHASH | A hash of the source and destination IP addresses. |
| CALLIDHASH | A hash of the call ID in the SIP header. |
| SRCIPSRCPORTHASH | A hash of the client’s IP address and port. |
| LEASTBANDWIDTH | Which service is currently handling the least amount of traffic, measured in bandwidth. |
| LEASTPACKETS | Which service is currently receiving the fewest packets. |
| CUSTOMLOAD | Load reported by a load monitor. |
| TOKEN | The value of a token extracted from the client request. |
| LRTM | Fewest active connections and the lowest average response time. |
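To use a method other than the default, set the lbMethod parameter of the load balancing virtual server to one of the names in the table above. For example, using the placeholder virtual server name from earlier:

```
set lb vserver vsrv_web -lbMethod LEASTRESPONSETIME
```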

Depending on the protocol of the service that it is load balancing, the NetScaler appliance sets up each connection between client and server to last for a different time interval. This is called load balancing granularity, of which there are three types: request-based, connection-based, and time-based. The following table describes each type of granularity and when each is used.

| Granularity | Type of Load Balanced Service | Specifies |
| --- | --- | --- |
| Request-based | HTTP or HTTPS | A new service is chosen for each HTTP request, independent of TCP connections. As with all HTTP requests, after the Web server fulfills the request, the connection is closed. |
| Connection-based | TCP and TCP-based protocols other than HTTP | A service is chosen for every new TCP connection. The connection persists until terminated by either the service or the client. |
| Time-based | UDP and other IP protocols | A new service is chosen for each UDP packet. Upon selection of a service, a session is created between the service and a client for a specified period of time. When the time expires, the session is deleted and a new service is chosen for any additional packets, even if those packets come from the same client. |

During startup of a virtual server, or whenever the state of a virtual server changes, the virtual server can initially use the round robin method to distribute the client requests among the physical servers. This type of distribution, referred to as startup round robin, helps prevent unnecessary load on a single server while the initial requests are served. After using the round robin method at startup, the virtual server switches to the configured load balancing method.
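This behavior can be tuned globally through the startup RR factor, which controls how many requests are distributed by round robin before the virtual server switches to its configured method. The value below is only an illustration; check the lb parameter documentation for your release before relying on it:

```
set lb parameter -startupRRFactor 100
```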

Before you can use load balancing on the NetScaler appliance, you must enable the feature. In the configuration utility, open the system settings, click Configure Basic Features, and then select Load Balancing (and Content Switching, if you plan to use it).
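From the command line, the equivalent is to enable the features and then verify their state:

```
enable ns feature LB CS
show ns feature
```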
