I have made a diagram, using Dia, of a SER footprint that I would like to put together.
http://www.ecad.org/~jev/ser/SerFootprint.png
The idea is that all user accounts and locations (down to the ser/rtpproxy front end) will be stored in the main Billing/User accounts back end. I want to have a farm of front end ser machines which will just proxy INVITEs/REGISTERs/BYEs to the back end for authentication, authorization, and billing, and also proxy RTP by means of either Maxim's rtpproxy or AG's mediaproxy.
The front end SERs will be able to come and go, and our cisco router will manage the balancing (using things like 'sticky IP').
Depending on load we can just add more front end ser machines, and also possibly add more back end machines (using the t_replicate() mechanism) if need be.
Currently I'm playing around in my test network getting this footprint to a functional state. I wanted to share the idea with the community and see what you guys think of this setup: weaknesses, [over|under]complicated?
It is possible that some front end machines would be specific to a certain group of phones, based on latency (different physical location). My main requirement is that I have a single billing/accounting mechanism...
Calls from one end point to the other will always be mediated through an RTP proxy on one of the front end machines.
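A minimal ser.cfg sketch of what a front end would do under this design, assuming the nathelper and tm modules are loaded (the back-end host name is a placeholder):

```
# hypothetical front-end routing: engage the local RTP proxy for calls,
# then statefully relay everything to the billing/auth back end
route {
    if (method == "INVITE") {
        # rewrite the SDP so media flows through this box's rtpproxy
        force_rtp_proxy();
    };
    # back-end address is a placeholder for the Billing/User accounts box
    t_relay_to_udp("backend.example.com", "5060");
}
```

This is only a sketch of the proxying step; the real config would also need the NAT detection and registration handling discussed below.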
What do you guys think?
1. How will you do the load balancing? Do the several ser servers have a single IP address? As the clients are behind NAT, the NAT traversal always has to be done by the proxy which accepted the registration of the client (only this IP address is allowed to traverse the NAT in the case of symmetric NATs).
Furthermore you have to ensure that all messages of a transaction traverse the same proxy.
Another point is that only the SIP proxy which accepted the registration knows about the current location of a UA, unless you use the t_replicate feature.
billing: as long as all SIP proxies write their accounting data into the same database you have a single billing mechanism. btw: don't you have a more reliable source for CDRs, like a PSTN gateway?
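Pointing every proxy's acc module at the same database could look roughly like this on each box (the db_url is a placeholder, and the exact parameter names should be checked against the acc module README for your SER version):

```
# sketch: every proxy logs accounting records to one shared database
loadmodule "/usr/lib/ser/modules/acc.so"
modparam("acc", "db_url", "mysql://ser:secret@db.example.com/ser")
modparam("acc", "db_flag", 1)

route {
    # flag 1 marks the transaction for database accounting
    setflag(1);
    t_relay();
}
```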
Klaus
Serusers mailing list serusers@lists.iptel.org http://lists.iptel.org/mailman/listinfo/serusers
Thanks for your feedback Klaus,
Klaus Darilion wrote:
How will you do the load balancing? Do the several ser servers have a single IP address? As the clients are behind NAT, the NAT traversal always has to be done by the proxy which accepted the registration of the client (only this IP address is allowed to traverse the NAT in the case of symmetric NATs).
The router in my diagram will have "Sticky IP", which will maintain the udp connection between a UA and the first ser front end it hits.
Furthermore you have to ensure that all messages of a transaction traverse the same proxy.
In my test environment I was rewriting the URI on the front and back end box with a hard coded host name, but this will get more complicated when I add more than one front end box. One option that I have been mulling is to use the textops module to rewrite the URI to the origin address of the received SIP message; this should make all messages that the back end replies to go to the originating front end ser. Does this make sense? Failing the textops module, I think I can run sed (or similar) on the SIP message. Keeping the message rewriting inside the ser instance seems preferable to me.
Another point is that only the SIP proxy which accepted the registration knows about the current location of a UA, unless you use the t_replicate feature.
I understand now that I will need to have a location table (persistent or not) on the front end servers. I imagine it should not be a problem to use t_replicate() across all my front end servers, but will this mean that all ser front end servers will attempt to send udp ping packets to natted clients?
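The REGISTER handling this implies might be sketched as below, with one replication peer shown (the peer address is a placeholder; whether every replica then also nat-pings the client is exactly the open question):

```
if (method == "REGISTER") {
    # store the binding in this front end's own in-memory usrloc
    save("location");
    # copy the registration to a peer front end (tm module)
    t_replicate("fe2.example.com", "5060");
};
```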
Thanks! -Jev
On Jun 28, 2004 at 16:51, Jev jev@ecad.org wrote:
I have made a diagram using Dia, of a ser footprint that I would like to put together.
http://www.ecad.org/~jev/ser/SerFootprint.png
The idea is that all user accounts and locations (down to the ser/rtpproxy front end) will be stored in the main Billing/User accounts back end. I want to have a farm of front end ser machines which will just proxy INVITEs/REGISTERs/BYEs to the back end for authentication, authorization, and billing, and also proxy RTP by means of either Maxim's rtpproxy or AG's mediaproxy.
The front end SERs will be able to come and go, and our cisco router will manage the balancing (using things like 'sticky IP').
So all the packets coming from the same ip will be sent to the same front end SER (hashing on src ip)?
Anyway there are some problems related to the nat traversal:
1. nat ping - nat ping needs to access usrloc, so that it would know which users to ping. However, in your setup the front-end servers have no idea about this, so they wouldn't be able to nat ping. The "main" server (User accounts) knows who to ping, but its ping won't traverse a symmetric nat (the nat will have an open binding only with the outbound proxy, which would be one of the load balanced front-ends).
2. consider user A calling user B, where at least B is behind a nat. The invite would reach the "main" server, which will look up B and try to send the message to B's address. Unfortunately B's nat will drop the packet, because it has an open binding only between B and the load balanced ip. (This will work only if B has a full cone nat, which is very very unlikely.)
3. assuming the above stuff will work somehow, you still have to be very careful to open only one rtp proxy session (since each front end has its own rtp proxy, you should make sure you call force_rtp_proxy on only one of them for the same call).
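One way to keep it to a single rtpproxy session, sketched here under the assumption that the front ends all sit on a known subnet, is to engage the RTP proxy only when the INVITE arrives directly from a UA rather than from another ser box:

```
if (method == "INVITE") {
    # 10.0.0.0/24 stands in for the subnet the other front ends live on;
    # requests arriving from there have already been rtp-proxied once
    if (!(src_ip == 10.0.0.0/24)) {
        force_rtp_proxy();
    };
};
```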
Andrei
Andrei Pelinescu-Onciul wrote: [snip]
So all the packets coming from the same ip will be sent to the same front end SER (hashing on src ip)?
Yes, using Cisco's "Sticky IP", which I admit I do not know much about, but I'm told it will do this job properly.
Anyway there are some problems related to the nat traversal:
1. nat ping - nat ping needs to access usrloc, so that it would know which users to ping. However, in your setup the front-end servers have no idea about this, so they wouldn't be able to nat ping. The "main" server (User accounts) knows who to ping, but its ping won't traverse a symmetric nat (the nat will have an open binding only with the outbound proxy, which would be one of the load balanced front-ends).
I do realize this now, so I'm considering running a non-persistent usrloc (no mysql back end) on all the front end servers, and using t_replicate between all of them. I admit I have not verified if this is possible, so please forgive me if I'm talking nonsense at this stage. My concern, as I mentioned in my reply to Klaus's post, is that if I use t_replicate with all my front end ser servers, will they all spit udp at a single natted client when the client has only one udp session with one front end server?
2. consider user A calling user B, where at least B is behind a nat. The invite would reach the "main" server, which will look up B and try to send the message to B's address. Unfortunately B's nat will drop the packet, because it has an open binding only between B and the load balanced ip. (This will work only if B has a full cone nat, which is very very unlikely.)
I'm not sure on the solution here. I will need to make the call go via the front end ser server that has the active udp session with the client. I'm going to sleep on this!
3. assuming the above stuff will work somehow, you still have to be very careful to open only one rtp proxy session (since each front end has its own rtp proxy, you should make sure you call force_rtp_proxy on only one of them for the same call).
I agree, and I realize that I'm creating some challenging issues for myself :) Thank you Andrei for your comments!
-Jev
I may share some of my experience with a similar concept. Note that it's not a solution but more an idea sharing. From the experiment, I found that there is a fundamental weakness in ser (plus UDP plus NAT) when it comes to supporting a distributed SIP environment. I'm not saying it can't be done. However, to make ser more distributed, I think there is a need to redesign the way ser handles user location.
The lab environment I have is 4 ser proxies and 2 ser location servers. The 4 ser proxies were used as front ends for proxying SIP requests. They have an SRV record in the DNS server for UAs which understand this record. For UAs that don't understand SRV, the DNS also replies with the proxies' IPs in a round robin fashion.
When a UA looks up the IP of the proxy, it gets one from either the SRV record or the round robin A record.
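The DNS side of that setup might look like this zone-file fragment (names and addresses are placeholders):

```
; UAs that understand SRV pick a proxy from these records
_sip._udp.example.com.  IN SRV 0 0 5060 proxy1.example.com.
_sip._udp.example.com.  IN SRV 0 0 5060 proxy2.example.com.
; legacy UAs resolve a single name, answered round robin
proxy.example.com.      IN A 192.0.2.1
proxy.example.com.      IN A 192.0.2.2
```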
All REGISTER requests are forwarded from the proxies to the primary location server. This is then replicated to the secondary location server by t_replicate. So, the proxies have no knowledge of UA location. Only the location servers know where to reach the UA.
For other SIP requests, I have tried two different methods to handle them.
1. Forward all requests to the location server and use record_route to keep the proxy in the path:
This works great for maintaining the dialog, as INVITE, reINVITE, BYE and CANCEL will all proxy back to the location server, which has the transaction state. OTOH, it is poor at NAT handling, since the location server was never directly contacted by the NAT device. The nat ping will not keep a hole open in the NAT device. Also, it has no performance improvement over a single "proxy+location" server, as all requests end up at the location server.
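Method 1 boils down to something like this on each proxy (the location server name is a placeholder):

```
# stay in the signaling path so in-dialog requests come back through us,
# then hand everything to the location server
route {
    record_route();
    t_relay_to_udp("loc1.example.com", "5060");
}
```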
2. Proxy querying UA location via SQL:
In this method, I've written a small SQL script to be run by the proxy via exec_dst to check the UA location from the location server's DB backend. (I know that the DB is not the best place to check location, but it is easier than writing a C program to query the in-memory USRLOC on the location server.) This works best for performance, as the proxies share the requests as well as the RTP proxying. However, it is relatively poor at NAT and transaction handling, as the INVITE, BYE and CANCEL can be handled by different proxies due to DNS resolution.
One way I see ser going distributed is to follow the idea of squid, plus some enhancement. The group of proxies is put into a partnership. When a proxy receives a REGISTER request, it checks whether one of its partners has a record of that UA or not. If yes, it forwards the request to the other proxy and forgets it. Otherwise, it saves the location in its memory, does the NAT handling, and becomes the authoritative proxy for that UA until the REGISTER expires. When another request comes in, the proxy does the same check with its partners again and forwards the request to the authoritative proxy. This way, the authoritative proxy maintains the nat ping, shares the RTP proxying, and keeps track of transactions.
When a new proxy comes in, we just need to tell ser that there is a new member in the partnership. (Though we need to find a way to tell ser about this without restarting, so that it maintains the USRLOC in memory.) Instantly, this proxy can serve new UAs that were never seen before or whose REGISTER has expired somewhere.
The only thing I haven't figured out a solution for is how to pick up UA location when one of the proxies fails. I don't like the way t_replicate works, as it requires hard coding the other proxies in the script and needs a ser restart for failover.
Zeus
Zeus Ng wrote:
I may share some of my experience with a similar concept. Note that it's not a solution but more an idea sharing.
Solutions are nice, but I really want to hear ideas :) Provides food for thought! :)
From the experiment, I found that there is a fundamental weakness in ser (plus UDP plus NAT) when it comes to supporting a distributed SIP environment. I'm not saying it can't be done. However, to make ser more distributed, I think there is a need to redesign the way ser handles user location.
The lab environment I have is 4 ser proxies and 2 ser location servers. The 4 ser proxies were used as front ends for proxying SIP requests. They have an SRV record in the DNS server for UAs which understand this record. For UAs that don't understand SRV, the DNS also replies with the proxies' IPs in a round robin fashion.
When a UA looks up the IP of the proxy, it gets one from either the SRV record or the round robin A record.
All REGISTER requests are forwarded from the proxies to the primary location server. This is then replicated to the secondary location server by t_replicate. So, the proxies have no knowledge of UA location. Only the location servers know where to reach the UA.
For other SIP requests, I have tried two different methods to handle them.
1. Forward all requests to the location server and use record_route to keep the proxy in the path:
This works great for maintaining the dialog, as INVITE, reINVITE, BYE and CANCEL will all proxy back to the location server, which has the transaction state. OTOH, it is poor at NAT handling, since the location server was never directly contacted by the NAT device. The nat ping will not keep a hole open in the NAT device. Also, it has no performance improvement over a single "proxy+location" server, as all requests end up at the location server.
So you had the backend location server contacting the UAC directly? I'm attempting to route the invite back through the originating front end proxy that has the nat session already established with the natted UAC. At the moment this only works because I am rewriting the (hardcoded) hostname in my config, but I'm looking at doing this dynamically so that any requests to the user location server will have their hostname rewritten to the previous hop.
2. Proxy querying UA location via SQL:
In this method, I've written a small SQL script to be run by the proxy via exec_dst to check the UA location from the location server's DB backend. (I know that the DB is not the best place to check location, but it is easier than writing a C program to query the in-memory USRLOC on the location server.) This works best for performance, as the proxies share the requests as well as the RTP proxying. However, it is relatively poor at NAT and transaction handling, as the INVITE, BYE and CANCEL can be handled by different proxies due to DNS resolution.
I really want to keep my operations within SIP messaging only, and not have to rely on external mechanisms such as sql queries. This maintains our flexibility to use any SIP compliant device. It's a great idea though! :)
One way I see ser going distributed is to follow the idea of squid, plus some enhancement. The group of proxies is put into a partnership. When a proxy receives a REGISTER request, it checks whether one of its partners has a record of that UA or not. If yes, it forwards the request to the other proxy and forgets it. Otherwise, it saves the location in its memory, does the NAT handling, and becomes the authoritative proxy for that UA until the REGISTER expires. When another request comes in, the proxy does the same check with its partners again and forwards the request to the authoritative proxy. This way, the authoritative proxy maintains the nat ping, shares the RTP proxying, and keeps track of transactions.
When a new proxy comes in, we just need to tell ser that there is a new member in the partnership. (Though we need to find a way to tell ser about this without restarting, so that it maintains the USRLOC in memory.) Instantly, this proxy can serve new UAs that were never seen before or whose REGISTER has expired somewhere.
This sounds like a cool idea. I'm not familiar with squid's proxy partnership model, but what you explain seems sound to me. Perhaps the ser proxies could use SRV records to learn about new 'partner' ser proxies? Or would this be a misapplication of the SRV feature?
The only thing I haven't figured out a solution for is how to pick up UA location when one of the proxies fails. I don't like the way t_replicate works, as it requires hard coding the other proxies in the script and needs a ser restart for failover.
If a proxy that is maintaining a NAT session with a UAC goes away, I see no way of passing off this session/location to another server except just waiting for the UAC to re-register.
Zeus
More food :-)
We distribute our SER boxes. We did it by:
1) using the domain name as the login id to the postgres database, 2) slicing each table based on domain name; that is, instead of subscriber I have a view called av_subscriber which only shows records from the current domain, and 3) each ser proxy serves a domain.
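Greg's per-domain view can be sketched in postgres roughly as follows (table and column names assumed from the standard ser schema, where subscriber has a domain column):

```
-- each domain logs in with its own database role, so current_user
-- scopes the view to that domain's records only
CREATE VIEW av_subscriber AS
    SELECT * FROM subscriber
    WHERE domain = current_user;
```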
Using this technique I can run dozens of SER proxies each with its own view of the database...however, if any one domain gets too big I will have a problem (I haven't had that problem yet, I'll let you know!).
I have always thought that the way to solve the distribution problem is to relax the in-memory caching of registrations. Every time some UAC registers, the database is updated, and every time a call is to be delivered, the location table is queried. Using this technique will tax the database more, but it would allow multiple SER proxies without the need for stickiness, that is, a round robin or least used SLB model.
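The per-call lookup would then be a plain query against the shared location table (column names taken from the stock ser mysql/postgres schema; 'alice' is a placeholder):

```
-- find the current, unexpired binding(s) for the callee
SELECT contact FROM location
 WHERE username = 'alice'
   AND expires > NOW();
```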
At the current time, because of the caching, you can't have two SER proxies serving the same REGISTERed customer base, because the location table gets tromped.
---greg
Greg Fausak www.AddaBrand.com (US) 469-546-1265
More food :-)
We distribute our SER boxes. We did it by:
1) using the domain name as the login id to the postgres database, 2) slicing each table based on domain name; that is, instead of subscriber I have a view called av_subscriber which only shows records from the current domain, and 3) each ser proxy serves a domain.
Personally, I don't treat this as a distributed proxy but as a centralized location server. Nevertheless, it serves what you want. The use of views is good. Hope that MySQL will have this feature in their next major release.
Using this technique I can run dozens of SER proxies each with its own view of the database...however, if any one domain gets too big I will have a problem (I haven't had that problem yet, I'll let you know!).
I have always thought that the way to solve the distribution problem is to relax the in-memory caching of registrations. Every time some UAC registers, the database is updated, and every time a call is to be delivered, the location table is
Well, it's true in a certain sense, but you can also send a REGISTER request with no "Contact" header to get the memory version of USRLOC. But I agree with you that using just the SQL DB should be an option in ser.
queried. Using this technique will tax the database more, but it would allow multiple SER proxies without the need for a sticky bit set, that is, a round robin or least used SLB model.
You still need to "sticky" back to the original front end proxy for NAT traversal.
At the current time, because of the caching, you can't have two SER proxies serving the same REGISTERed customer base because the location table gets tromped.
Not entirely true. If you can separate the proxy and location server function, you can have multiple proxies for the same domain.
---greg
On Jun 30, 2004, at 1:06 PM, Jev wrote:
Zeus Ng wrote:
Let me share some of my experience on a similar concept. Note that it's not a solution, more of an idea sharing.
Solutions are nice, but I really want to hear ideas :) Provides food for thought! :)
From the experiment, I found that there is a fundamental weakness in ser (plus UDP plus NAT) when supporting a distributed SIP environment. I'm not saying it can't be done. However, to make ser more distributed, I think there is a need to redesign the way ser handles user location. The lab environment I have is 4 ser proxies and 2 ser location servers. The 4 ser proxies were used as front ends for proxying SIP requests. They have a SRV record in the DNS server for UAs which understand this record. For UAs that don't understand SRV, the DNS also replies with the proxies' IPs in round robin fashion. When a UA looks up the IP of the proxy, it gets one from either the SRV record or the round robin A record. All REGISTER requests are forwarded from the proxies to the primary location server. This is then replicated to the secondary location server by t_replicate. So, the proxies have no knowledge of UA location. Only the location servers know where to reach the UA. For other SIP requests, I have tried two different methods to handle them.
- Forward all requests to the location server and use record_route to keep the proxy in the path: This works great for maintaining the dialogue, as INVITE, reINVITE, BYE and CANCEL will all proxy back to the location server, which has the transaction state. OTOH, it is poor in NAT handling since the location server was never directly contacted by the NAT device. The nat ping will not keep a hole open in the NAT device. Also, it has no performance improvement over one single "proxy+location" server, as all requests end up on the location server.
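The Record-Route mechanics this first method relies on amount to the proxy inserting itself into the dialog path. A toy sketch of the header manipulation only (not a real SIP stack; the URIs are invented):

```python
def add_record_route(message_lines, proxy_uri):
    # Insert a Record-Route header right after the request line, so
    # both UAs send all in-dialog requests back through this proxy.
    start_line, rest = message_lines[0], message_lines[1:]
    return [start_line, f"Record-Route: <{proxy_uri};lr>"] + rest

invite = [
    "INVITE sip:bob@example.com SIP/2.0",
    "Via: SIP/2.0/UDP ua1.example.com;branch=z9hG4bKtoy",
    "To: <sip:bob@example.com>",
]
routed = add_record_route(invite, "sip:proxy1.example.com")
print(routed[1])  # Record-Route: <sip:proxy1.example.com;lr>
```

This is exactly why every in-dialog request funnels back to the location server in Zeus's setup: it record-routes itself, so it cannot be bypassed, but it also never holds the NAT binding.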
So you had the backend location server contacting the UAC directly? I'm attempting to route the invite back through the originating front end proxy that has the nat session already established with the natted UAC. At the moment this only works because I am rewriting the (hardcoded) hostname in my config, but I'm looking at doing this dynamically so that any requests to the user location server will have their hostname rewritten to the previous hop.
- Proxy querying UA location via SQL: In this method, I've written a small SQL script to be run by the proxy via exec_dst to check the UA location from the location server DB backend. (I know that the DB is not the best place to check location, but it is easier than writing a C program to query the in-memory USRLOC on the location server.) This works best for performance, as the proxies are sharing the requests as well as the RTP proxying. However, it is relatively poor in NAT and transaction handling, as the INVITE, BYE and CANCEL can be handled by different proxies due to DNS resolution.
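The DB-lookup method can be sketched like this, with sqlite standing in for the real backend (the schema is a simplification, not SER's actual location table, and exec_dst would invoke something like this as an external script):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE location (username TEXT, contact TEXT, expires REAL)")

def save_location(username, contact, expires):
    # What the location server does on REGISTER: replace the binding.
    db.execute("DELETE FROM location WHERE username = ?", (username,))
    db.execute("INSERT INTO location VALUES (?, ?, ?)",
               (username, contact, expires))

def lookup(username, now):
    # What each front end does instead of consulting in-memory usrloc.
    row = db.execute(
        "SELECT contact FROM location WHERE username = ? AND expires > ?",
        (username, now),
    ).fetchone()
    return row[0] if row else None

save_location("bob", "sip:bob@10.0.0.5:5060", 1000.0)
print(lookup("bob", now=500.0))   # sip:bob@10.0.0.5:5060
print(lookup("bob", now=2000.0))  # None, binding expired
```

Any proxy in the farm can resolve the binding this way, which is what makes the round-robin model work at all; the sticky/NAT problem remains, as noted above, because knowing the contact is not the same as holding the NAT binding.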
I really want to keep my operations within SIP messaging only, and not have to rely on external mechanisms such as sql queries. This maintains our flexibility to use any SIP compliant device. It's a great idea though! :)
One way I see ser going distributed is to follow the idea of squid plus some enhancement. The group of proxies is put into a partnership. When a proxy receives a REGISTER request, it checks whether one of its partners has a record of that UA or not. If yes, it forwards the request to the other proxy and forgets it. Otherwise, it saves the location in its memory, does the NAT stuff and becomes the authoritative proxy for that UA until the REGISTER expires. When another request comes in, the proxy does the same check with its partners again and forwards the request to the authoritative proxy. This way, the authoritative proxy maintains the nat ping, shares the RTP proxying and keeps track of transactions. When a new proxy comes in, we just need to tell ser that there is a new member in the partnership. (Though we need to find a way to tell ser about this without restarting, so that it maintains the USRLOC in memory.) Instantly, this proxy can serve a new UA that was never seen before or whose REGISTER has expired somewhere.
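The partnership scheme sketched above might look roughly like this (class and method names are invented for illustration; a real implementation would live inside ser's usrloc and replication code, not in Python):

```python
class Proxy:
    def __init__(self, name):
        self.name = name
        self.partners = []  # the other proxies in the partnership
        self.usrloc = {}    # aor -> contact, only for UAs we "own"

    def owns(self, aor):
        return aor in self.usrloc

    def on_register(self, aor, contact):
        # If a partner already owns this UA, hand the REGISTER over and
        # forget it; otherwise save the binding and become authoritative.
        for p in self.partners:
            if p.owns(aor):
                return p.on_register(aor, contact)
        self.usrloc[aor] = contact
        return self.name

    def route(self, aor):
        # Non-REGISTER requests go to whichever proxy owns the UA,
        # since only that one holds the NAT binding and nat ping.
        if self.owns(aor):
            return self.name
        for p in self.partners:
            if p.owns(aor):
                return p.name
        return None

a, b = Proxy("proxy-a"), Proxy("proxy-b")
a.partners, b.partners = [b], [a]

print(a.on_register("sip:bob@example.com", "sip:bob@10.0.0.5"))  # proxy-a
print(b.route("sip:bob@example.com"))  # proxy-a (b forwards to the owner)
```

Adding a proxy is just extending the partners list, which matches the "new member without restart" requirement; the unsolved part, as Zeus says, is what happens to the owned bindings when the authoritative proxy dies.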
This sounds like a cool idea. I'm not familiar with squid's proxy partnership model, but what you explain seems sound to me. Perhaps the ser proxies could use SRV records to learn about new 'partner' ser proxies? Or would this be a misapplication of the SRV feature?
The only thing I haven't figured out a solution for is how to pick up UA location when one of the proxies fails. I don't like the way t_replicate works, as it requires hard coding the other proxies in the script and needs a ser restart for failover.
If a proxy that is maintaining a NAT session with a UAC goes away, I see no way of passing off this session/location to another server except just waiting for the UAC to re-register.
Zeus
-----Original Message----- From: serusers-bounces@lists.iptel.org [mailto:serusers-bounces@lists.iptel.org] On Behalf Of Jev Sent: Wednesday, 30 June 2004 8:53 AM To: Andrei Pelinescu-Onciul Cc: serusers@lists.iptel.org Subject: Re: [Serusers] request for comments
Andrei Pelinescu-Onciul wrote: [snip]
So all the packets coming from the same ip will be sent to the same front end SER? (hashing on src ip)?
Yes, using Cisco's "Sticky IP", which I admit I do not know much about, but I'm told it will do this job properly.
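For what it's worth, the sticky behaviour Andrei describes amounts to hashing the source IP onto the list of front ends, something like the following (hostnames invented; a real balancer like the Cisco box would typically keep a session table rather than a bare hash):

```python
import zlib

front_ends = ["ser1.example.com", "ser2.example.com", "ser3.example.com"]

def pick_front_end(src_ip):
    # The same source IP always maps to the same front end, so every
    # message from one NAT device hits the same SER instance.
    h = zlib.crc32(src_ip.encode())
    return front_ends[h % len(front_ends)]

print(pick_front_end("203.0.113.7") == pick_front_end("203.0.113.7"))  # True
```

One caveat of the bare-hash version: adding or removing a front end remaps most clients to different proxies, breaking their NAT bindings until they re-register, which matters given that the front ends are supposed to come and go.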
Anyway there are some problems related to the nat traversal:
- nat ping - nat ping needs to access usrloc, so that it would know which users to ping. However, in your setup the front-end servers have no idea about this, so they wouldn't be able to nat ping. The "main" server (User accounts) knows who to ping, but its ping won't traverse a symmetric nat (the nat will have an open binding only with the outbound proxy, which would be one of the load balanced front-ends).
I do realize this now, so I'm considering running a non-persistent usr_loc (no mysql back end) on all the front end servers, and using t_replicate between all of them. I admit I have not verified if this is possible, so please forgive me if I'm talking nonsense here at this stage. My concern here, as I mentioned in my reply to Klaus's post, is that if I use t_replicate with all my front end ser servers, will they all spit udp at a single natted client when the client has only one udp session with one front end server?
On Jun 30, 2004, at 10:04 PM, Zeus Ng wrote:
More food :-)
We distribute our SER boxes. We did it by:
1) using the domain name for a login id to the postgres database
2) slicing each table based on domain name, that is, instead of subscriber I have a view called av_subscriber which only shows records from the current domain
3) each ser proxy serves a domain
Personally, I don't treat this as a distributed proxy but as a centralized location server. Nevertheless, it serves what you want. The use of views is good. Hope that MySQL will have this feature in their next major release.
Well, I guess... each proxy server is at a different IP address. That is distributed, isn't it?
Views are essential for multi-domains. Using views I can give customers access to the database. Why wait for mysql, postgres is a much better database engine :-)
Using this technique I can run dozens of SER proxies each with its own view of the database...however, if any one domain gets too big I will have a problem (I haven't had that problem yet, I'll let you know!).
I have always thought that the way to solve the distribution problem is to relax the in-memory caching of registrations. Everytime some UAC registers the database is updated, and everytime a call is to be delivered the location table is
Well, it's true in certain sense but you can also send a REGISTER request with no "Contact" header to get the memory version of USRLOC. But I agree with you that using just SQL DB should be an option in ser.
queried. Using this technique will tax the database more, but it would allow multiple SER proxies without the need for a sticky bit set, that is a round robin or least used SLB model.
You still need to "sticky" back the original proxy front end for NAT transveral.
At the current time because of the caching you can't have two SER proxies serving the same REGISTERed customer base because the location table gets tromped.
Not entirely true. If you can separate the proxy and location server function, you can have multiple proxies for the same domain.
But currently you can't. The registration server is the only one that knows about the location of the UA. -g
---greg
Greg Fausak www.AddaBrand.com (US) 469-546-1265
Jev,
See below.
From the experiment, I found that there is a fundamental weakness in ser (plus UDP plus NAT) when supporting a distributed SIP environment. I'm not saying it can't be done. However, to make ser more distributed, I think there is a need to redesign the way ser handles user location. The lab environment I have is 4 ser proxies and 2 ser location servers. The 4 ser proxies were used as front ends for proxying SIP requests. They have a SRV record in the DNS server for UAs which understand this record. For UAs that don't understand SRV, the DNS also replies with the proxies' IPs in round robin fashion. When a UA looks up the IP of the proxy, it gets one from either the SRV record or the round robin A record. All REGISTER requests are forwarded from the proxies to the primary location server. This is then replicated to the secondary location server by t_replicate. So, the proxies have no knowledge of UA location. Only the location servers know where to reach the UA. For other SIP requests, I have tried two different methods to handle them.
- Forward all requests to the location server and use record_route to keep the proxy in the path: This works great for maintaining the dialogue, as INVITE, reINVITE, BYE and CANCEL will all proxy back to the location server, which has the transaction state. OTOH, it is poor in NAT handling since the location server was never directly contacted by the NAT device. The nat ping will not keep a hole open in the NAT device. Also, it has no performance improvement over one single "proxy+location" server, as all requests end up on the location server.
So you had the backend location server contacting the UAC directly? I'm attempting to route the invite back through the originating front end proxy that has the nat session already established with the natted UAC. At the moment this only works because I am rewriting the (hardcoded) hostname in my config, but I'm looking at doing this dynamically so that any requests to the user location server will have their hostname rewritten to the previous hop.
Maybe a stupid idea, but can't you save the front end proxy's IP in the SQL DB?
- Proxy querying UA location via SQL: In this method, I've written a small SQL script to be run by the proxy via exec_dst to check the UA location from the location server DB backend. (I know that the DB is not the best place to check location, but it is easier than writing a C program to query the in-memory USRLOC on the location server.) This works best for performance, as the proxies are sharing the requests as well as the RTP proxying. However, it is relatively poor in NAT and transaction handling, as the INVITE, BYE and CANCEL can be handled by different proxies due to DNS resolution.
I really want to keep my operations within SIP messaging only, and not have to rely on external mechanisms such as sql queries. This maintains our flexibility to use any SIP compliant device. It's a great idea though! :)
Well, the RFC does not mandate how the proxy gets the UA location from the location server. If you want to stick with SIP, or want the in-memory version of USRLOC, I suppose you could write a module function "loc_lookup()" to send a REGISTER request with no "Contact" header to the location server. The reply should contain all the UA locations. I think this is the better way to do it. However, it's easier for me to do SQL than C right now. Hopefully someone will have the time to write such a module.
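The Contact-less REGISTER mentioned here is the RFC 3261 fetch-bindings query: a registrar answering it returns the current bindings in the 200 OK. A minimal sketch of building such a request (hostnames, Call-ID and tag are made up, and a real implementation would generate proper branch and tag values):

```python
def build_fetch_bindings_register(aor, registrar, call_id, cseq):
    # A REGISTER with no Contact header does not change any binding;
    # it only asks the registrar to report the current ones.
    return "\r\n".join([
        f"REGISTER sip:{registrar} SIP/2.0",
        "Via: SIP/2.0/UDP proxy1.example.com:5060;branch=z9hG4bKtoy",
        f"To: <{aor}>",
        f"From: <{aor}>;tag=q1",
        f"Call-ID: {call_id}",
        f"CSeq: {cseq} REGISTER",
        "Max-Forwards: 70",
        "Content-Length: 0",
        "",
        "",
    ])

msg = build_fetch_bindings_register(
    "sip:bob@example.com", "locsrv.example.com", "abc123", 1)
print("Contact:" not in msg)  # True: no Contact, so this is a pure query
```

A loc_lookup() module built around this would keep everything inside SIP messaging, which is exactly what Jev asked for above.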
One way I see ser going distributed is to follow the idea of squid plus some enhancement. The group of proxies is put into a partnership. When a proxy receives a REGISTER request, it checks whether one of its partners has a record of that UA or not. If yes, it forwards the request to the other proxy and forgets it. Otherwise, it saves the location in its memory, does the NAT stuff and becomes the authoritative proxy for that UA until the REGISTER expires. When another request comes in, the proxy does the same check with its partners again and forwards the request to the authoritative proxy. This way, the authoritative proxy maintains the nat ping, shares the RTP proxying and keeps track of transactions. When a new proxy comes in, we just need to tell ser that there is a new member in the partnership. (Though we need to find a way to tell ser about this without restarting, so that it maintains the USRLOC in memory.) Instantly, this proxy can serve a new UA that was never seen before or whose REGISTER has expired somewhere.
This sounds like a cool idea. I'm not familiar with squid's proxy partnership model, but what you explain seems sound to me. Perhaps the ser proxies could use SRV records to learn about new 'partner' ser proxies? Or would this be a misapplication of the SRV feature?
The SRV records could possibly serve the need, and have the advantage that ser does not need restarting.
The only thing I haven't figured out a solution for is how to pick up UA location when one of the proxies fails. I don't like the way t_replicate works, as it requires hard coding the other proxies in the script and needs a ser restart for failover.
If a proxy that is maintaining a NAT session with a UAC goes away, I see no way of passing off this session/location to another server except just waiting for the UAC to re-register.
True for NAT. But there are UAs on public IP as well.
The trouble we are facing is NAT. If all UAs were on a public network (e.g. IPv6), the problem would disappear.
Zeus