Version 6.3
Real-Time Tasks

The CommuniGate Pro Real-Time Tasks communicate by sending events to task handlers. A task handler is a reference object describing a Real-Time Task, a Session, or a Real-Time Signal. In a CommuniGate Pro Cluster environment, the task handler contains the address of the Cluster Member server on which the referenced Real-Time Task, Session, or Signal is running. When an event should be delivered to a different Cluster member, it is delivered using the inter-Cluster CLI/API. The event recipient can reply using the sender's task handler, and again the inter-Cluster CLI/API is used to deliver the reply event.

Real-Time Application Tasks usually employ Media channels. To be able to exchange media with external entities, Real-Time Tasks should run only on those Cluster members that have direct access to the Internet.

XIMSS Call Legs

When a XIMSS session initiates a call, it creates a Call Leg object. Call Leg objects manage XIMSS user Media channels and must be able to exchange media with external entities, so they should run only on those Cluster members that have direct access to the Internet.

When a Real-Time Signal component directs an incoming call to a XIMSS session, it creates a Call Leg object on the Cluster member processing this incoming call Signal request. This Call Leg object is then "attached" to the XIMSS session (which runs on some backend Server and thus may be running on a different Cluster member).

When a XIMSS session and its Call Leg are running on different Cluster members, they communicate via special events, which are delivered using the inter-Cluster CLI/API.
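How such an event travels can be pictured with a minimal sketch. The Python below is purely illustrative and is not the CommuniGate Pro API: the TaskHandler type, the intercluster_send() helper, and the member name are invented; the only point it demonstrates is that a task handler names the Cluster member owning the target, so an event is either queued locally or forwarded over the inter-Cluster CLI/API link.

    # Hypothetical sketch of task-handler event routing in a Cluster; the names
    # below (TaskHandler, intercluster_send, LOCAL_MEMBER) are illustrative only.
    from dataclasses import dataclass
    from collections import defaultdict

    LOCAL_MEMBER = "backend1.example.com"   # this server's Cluster member name
    local_queue = defaultdict(list)         # event queues of locally running Tasks

    @dataclass(frozen=True)
    class TaskHandler:
        member: str    # Cluster member the referenced Task/Session/Signal runs on
        task_id: str   # identifier of the Task on that member

    def intercluster_send(member: str, message: dict) -> None:
        """Stand-in for delivery over the inter-Cluster CLI/API link."""
        print(f"CLI/API -> {member}: {message}")

    def deliver_event(sender: TaskHandler, target: TaskHandler, event: dict) -> None:
        if target.member == LOCAL_MEMBER:
            # the target Task runs here: queue the event directly
            local_queue[target.task_id].append((sender, event))
        else:
            # the target runs on another member: forward via the inter-Cluster
            # CLI/API; the sender handler travels along so the recipient can reply
            intercluster_send(target.member, {"to": target.task_id,
                                              "from": (sender.member, sender.task_id),
                                              "event": event})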
Signals

Real-Time Signal processing results in DNS Resolver, SIP, and XMPP requests.
When a Cluster is configured so that only the frontend servers can access the Internet, Real-Time Signal processing should take place on those frontend servers only.

Even if the Real-Time applications and Call Legs are configured to run on frontend servers only, Real-Time Signals can be generated on other Cluster members, too: XIMSS and XMPP sessions, Automated Rules, and other components can send Instant Messages; Event packages generate notification Signals; etc.

When a Real-Time Signal is running on a frontend server, it uses the inter-Cluster CLI/API to retrieve Account data (such as SIP registrations), or to perform requested actions (to deliver a SUBSCRIBE or XMPP IQ request, or to initiate a call).

Configuring Call Leg and Signal Processing

To configure the Call Leg and Signal creation mode, open the General page in the Settings WebAdmin realm and click the Cluster link:
SIP

The CommuniGate Pro SIP Farm® feature allows several Cluster members to process SIP request packets randomly distributed to them by a Load Balancer. Configure the Load Balancer to distribute incoming SIP UDP packets (port 5060 by default) to the SIP ports of the selected SIP Farm Cluster members.

To configure the SIP Farm Members, open the General page in the Settings WebAdmin realm and click the Cluster link:
The CommuniGate Pro Cluster maintains the information about all its Servers whose SIP Farm setting is set to Member. Incoming UDP packets and TCP connections are distributed to those Servers using regular, simple Load Balancers.

The receiving Server detects whether the received packet must be processed on a particular Farm Server: it checks whether the packet is a response or an ACK packet for an existing transaction, or whether the packet is directed to a Node created on a particular Server. In these cases the packet is relayed to the proper Cluster member. Packets not directed to a particular Cluster member are distributed among all currently available Farm Members based on the CommuniGate Pro SIP Farm algorithms.

To process a Signal, Cluster members may need to retrieve certain Account information (registrations, preferences, etc.). If the Cluster member cannot open the Account (because the Member is a Frontend Server, or because the Account is locked on a different Backend Server), it uses the inter-Cluster CLI/API to retrieve the required information from the proper Backend Server.
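The dispatch decision described above can be outlined in a few lines. This is a sketch only: the lookup tables, the packet fields, and the hash-based spread are assumptions standing in for the real (unpublished) SIP Farm algorithms.

    # Illustrative sketch of the SIP Farm dispatch decision; all names here are
    # invented for the example, and the hash distribution merely stands in for
    # the actual SIP Farm algorithms.
    import hashlib

    FARM_MEMBERS = ["frontend1", "frontend2", "frontend3"]  # assumed member names
    open_transactions: dict[str, str] = {}  # transaction key -> owning member
    local_nodes: dict[str, str] = {}        # SIP Node name -> owning member

    def dispatch(packet: dict) -> str:
        """Return the Farm member that must process this packet."""
        # responses/ACKs belong to the member that owns the transaction
        owner = open_transactions.get(packet["transaction_key"])
        if owner and packet["is_response_or_ack"]:
            return owner
        # packets addressed to a Node go to the member that created that Node
        node_owner = local_nodes.get(packet.get("target_node", ""))
        if node_owner:
            return node_owner
        # anything else is spread over the currently available Farm members
        digest = hashlib.md5(packet["call_id"].encode()).digest()
        return FARM_MEMBERS[digest[0] % len(FARM_MEMBERS)]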
Several Load Balancer and network configurations can be used to implement a SIP Farm:

Single-IP NAT Load Balancer

This method is used for small Cluster installations, when the frontend Servers do not have direct access to the Internet, and the Load Balancer performs Network Address Translation for the frontend Servers. First select the "virtual" IP address (VIP) - this is the only address your Cluster SIP users will "see".

The frontend Servers have IP addresses F1, F2, F3, etc. Configure the Load Balancer to process incoming UDP packets received on its VIP address and port 5060:
SIP-specific techniques implemented in some Load Balancers allow them to send all "related" requests to the same server. These techniques are usually based on the request Call-ID field, and thus they fail quite often. The CommuniGate Pro SIP Farm technology ensures proper request handling no matter which SIP Farm member receives a request or response packet, so these SIP-specific Load Balancer techniques are not required with CommuniGate Pro.

Many Load Balancers create a "session binding" for incoming UDP requests, in the same way they process incoming TCP connections - even if they do not implement any SIP-specific techniques. The Binding table for some Load Balancer port v (and the Load Balancer VIP address) contains IP address-port pairs:

    X:x <-> F1:f

where X:x is the remote (sending) device IP address and port, and F1:f is the IP address and port of the frontend Server the incoming packet has been forwarded to. When the remote device re-sends the request, this table record allows the Load Balancer to send the request to the same frontend Server (note that this is not needed with the CommuniGate Pro SIP Farm).

These Load Balancers usually create a "session binding" for outgoing UDP requests, too: when a packet is sent from some frontend address/port F2:f to some remote address/port Y:y, a record is created in the Load Balancer Binding table:

    Y:y <-> F2:f
When the remote device sends a response packet, this table record allows the Load Balancer to send the response to the "proper" frontend Server (note that this is not needed with the CommuniGate Pro SIP Farm).

The CommuniGate Pro SIP Farm distributes SIP request packets by relaying them between the frontend Servers, according to the SIP Farm algorithms; the SIP Farm algorithms redirect the SIP response packets to the frontend Server that has sent the related SIP request. As a result, packets belonging to the same SIP exchange can enter and leave the Cluster through different frontend Servers, so a Load Balancer's "session binding" records can become stale or conflicting and cause packets to be misrouted.
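A toy model, with invented addresses, shows how such a Binding table drifts when SIP Farm relaying makes packets leave through a different frontend Server:

    # Toy model of a Load Balancer UDP "session binding" table colliding with
    # SIP Farm relaying; addresses and the scenario are invented for illustration.
    bindings: dict[str, str] = {}         # remote "IP:port" -> frontend "IP:port"

    def inbound(remote: str, chosen_frontend: str) -> str:
        """LB forwards an incoming packet, remembering the binding."""
        return bindings.setdefault(remote, chosen_frontend)

    def outbound(frontend: str, remote: str) -> None:
        """LB sees an outgoing packet and records/overwrites the binding."""
        bindings[remote] = frontend

    inbound("198.51.100.7:5060", "F1:5060")   # request from a remote device lands on F1
    outbound("F2:5060", "198.51.100.7:5060")  # SIP Farm relayed it; F2 sends the reply
    print(inbound("198.51.100.7:5060", "F1:5060"))  # -> F2:5060, not F1: the binding drifted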
It is very important to consult with your Load Balancer manufacturer to ensure that the Load Balancer does not use "session binding" for UDP port 5060, to avoid the problem described above.

Multi-IP NAT Load Balancer

In this configuration frontend Servers have direct access to the Internet (they have IP addresses directly "visible" from the Internet).
Load Balancers with UDP "session binding" will have the same problems as described above.

DSR Load Balancer

DSR (Direct Server Return) is the preferred Load-Balancing method for larger installations. To use the DSR method, create an "alias" for the loopback network interface on each frontend Server. While the standard address for the loopback interface is 127.0.0.1, create an alias with the VIP address and the 255.255.255.255 network mask:
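For example (a sketch assuming a Linux or BSD-family frontend Server and 203.0.113.10 as a placeholder VIP), the alias can be added with the standard OS tools; the Python wrapper here is only for illustration:

    # Illustrative helper that adds the loopback alias carrying the VIP; the VIP
    # value is a placeholder. "ip addr add" is the Linux form; "ifconfig lo0 alias"
    # is the BSD/macOS form with the 255.255.255.255 host mask.
    import platform
    import subprocess

    VIP = "203.0.113.10"  # placeholder: your Cluster "virtual" IP address

    if platform.system() == "Linux":
        cmd = ["ip", "addr", "add", f"{VIP}/32", "dev", "lo"]
    else:  # FreeBSD, macOS, and other BSD-family systems
        cmd = ["ifconfig", "lo0", "alias", VIP, "255.255.255.255"]

    subprocess.run(cmd, check=True)  # requires root privileges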
Note: because MAC addresses are used to redirect incoming packets, the Load Balancer and all frontend Servers must be connected to the same network segment; there should be no router between the Load Balancer and the frontend Servers.

Note: when a network "alias" is created, open the General Info page in the CommuniGate Pro WebAdmin Settings realm, and click the Refresh button to let the Server detect the newly added IP address.

The DSR method is transparent for all TCP-based services (including SIP over TCP/TLS), and no additional CommuniGate Pro Server configuration is required: when a TCP connection is accepted on a local VIP address, outgoing packets for that connection always have that VIP address as the source address.

To use the DSR method for SIP UDP, the CommuniGate Pro frontend Server configuration should be updated:
Repeat this configuration change for all frontend Servers.

RTP Media

Each Media stream terminated in CommuniGate Pro (a stream relayed with a media proxy, or a stream processed with a media server channel) is bound to a particular Cluster Member. The Load Balancer must ensure that all incoming Media packets are delivered to the proper Cluster Member.

Single-IP Method

The "single-IP" method is useful for small and medium-size installations. The Cluster Members have internal addresses L1, L2, L3, etc., and the Load Balancer has an external address G0. The Network Settings of each Cluster Member are modified, so the Media Ports used on each Member are different: ports 10000-19999 on the L1 Member, ports 20000-29999 on the L2 Member, ports 30000-39999 on the L3 Member, etc.

All packets coming to the G0 address on the standard ports (5060 for SIP) are distributed to the L1, L2, L3 addresses, to the same ports. All packets coming to the G0 address on the media ports are distributed according to the port range:

    ports 10000-19999 -> L1
    ports 20000-29999 -> L2
    ports 30000-39999 -> L3
The Server-wide WAN IP Address setting should be left empty on all Cluster Members.
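The resulting dispatch rule is simple enough to sketch; the member addresses below are the placeholders used in the text:

    # Sketch of the "single-IP" media dispatch rule: each Cluster Member owns a
    # 10000-port slice, so the destination port alone identifies the Member.
    MEMBERS = {1: "L1", 2: "L2", 3: "L3"}   # placeholder internal addresses

    def media_target(port: int) -> str:
        """Map a destination media port on G0 to the owning Cluster Member."""
        slice_index = port // 10000          # 10000-19999 -> 1, 20000-29999 -> 2, ...
        try:
            return MEMBERS[slice_index]
        except KeyError:
            raise ValueError(f"port {port} is outside every Member's media range")

    assert media_target(14750) == "L1"
    assert media_target(23006) == "L2"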
This method should not be used for large installations (unless there is little or no media termination): it allows you to allocate only 64000 ports for all Cluster media streams. Each AVP stream takes 2 ports, so the total number of audio streams is limited to 32000; if video is used together with audio, such a Cluster cannot support more than 16000 concurrent A/V sessions.

Multi-IP No-NAT Load Balancer

The "multi-IP" method is useful for large installations. Each frontend Server has its own IP address, and when a Media Channel or a Media Proxy is created on that frontend Server, this unique IP address is used for direct communication between the Server and the client device or remote server. The Network Settings of each Cluster Member can specify the same Media Port ranges, so the number of concurrent RTP streams is not limited by 64000 ports.

In the simplest case, all frontend Servers have "real" IP Addresses, i.e. they are directly connected to the Internet.

If the Load Balancer uses the DSR method (see above), it should not care about the packets originating on the frontend Servers from non-VIP addresses: these packets either bypass the Load Balancer, or it should deliver them without any modification. If the Load Balancer uses a "normal" method, it should be instructed to process the "load balanced" ports only, while packets to and from other ports (such as the ports in the Media Ports range) should be redirected without any modification.

Multi-IP NAT Method

You can use the multi-IP method even if your frontend Servers do not have "real" IP Addresses but use "LAN"-type addresses L1, L2, L3, etc. Configure the Load Balancer to host real IP Addresses G1, G2, G3, ... in addition to the VIP Address used to access the CommuniGate Pro services.

Configure the Load Balancer to "map" its external IP address G1 to the frontend Server address L1, so that all packets coming to the IP Address G1, port g (G1:g) are redirected to the frontend Server address L1, same port g (L1:g). The Load Balancer may change the packet target address to L1, or it may leave it as is (G1). When the Load Balancer receives a packet from the L1 address, port l (L1:l), and this port is not involved in load balancing operations (SMTP, POP, IMAP, SIP, etc.), the Load Balancer should redirect the packet outside, replacing its source address L1 with G1: L1:l -> G1:l.

Configure the Load Balancer in the same way to "map" its external IP addresses G2, G3, ... to the other frontend Server IP addresses L2, L3, ...

Configure the CommuniGate Pro frontend Servers using the WebAdmin Settings realm: open the Network pages, and specify the "mapped" IP addresses as Server-wide WAN IP Addresses: G1 for the frontend Server with the L1 IP address, G2 for the frontend Server with the L2 IP address, etc.
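The two rewriting rules can be summarized in a short sketch; the addresses, the port set, and the helper names are invented for illustration and do not correspond to any particular Load Balancer's configuration language:

    # Sketch of the "multi-IP NAT" rewriting rules: a 1:1 external<->internal
    # address mapping, applied in both directions; all addresses are placeholders.
    NAT_MAP = {"G1": "L1", "G2": "L2", "G3": "L3"}        # external -> internal
    REVERSE = {lan: ext for ext, lan in NAT_MAP.items()}  # internal -> external
    BALANCED_PORTS = {25, 110, 143, 5060}                 # SMTP, POP, IMAP, SIP

    def rewrite_inbound(dst_ip: str, dst_port: int) -> tuple[str, int]:
        # packets to G1:g are redirected to L1:g (the port is preserved)
        return NAT_MAP.get(dst_ip, dst_ip), dst_port

    def rewrite_outbound(src_ip: str, src_port: int) -> tuple[str, int]:
        # non-balanced traffic from L1:l leaves as G1:l
        if src_port not in BALANCED_PORTS:
            return REVERSE.get(src_ip, src_ip), src_port
        return src_ip, src_port            # balanced ports follow the VIP rules

    assert rewrite_inbound("G1", 12044) == ("L1", 12044)
    assert rewrite_outbound("L1", 12044) == ("G1", 12044)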