We are using t_replicate() to replicate REGISTERs among redundant proxies (and a separate presence server that needs REGISTER for pua_bla). Before reinventing the wheel here, I thought I'd ask if others already have a method in place to re-sync a restarted proxy's state? I guess one way is to pull the usrloc data from mysql.... Another would be to somehow ask the proxy to walk the usrloc table and do a bunch of t_registers()...
Any thoughts appreciated. /a
Hi Alan,
if you need persistence over restarts, you need to use mysql support.
regards, bogdan
Bogdan,
I already have persistence across restarts with mysql support. The issue is that with replicated servers, using t_replicate(), if one is down for several hours or days (e.g. due to a hardware failure -- which is why we are replicating in the first place), then it no longer has current state and needs to acquire it somehow.
/a
Though it is not my field of specialty, you most probably want to sync the mysql databases directly ... maybe mysql provides such a "sync" function, otherwise you may have to write some script or such ... This way, on boot, the db is already synced.
Cesc
Cesc writes:
Though it is not my field of specialty, you most probably want to sync the mysql databases directly ... maybe mysql provides such a "sync" function, otherwise you may have to write some script or such ... This way, on boot, the db is already synced.
if you use mysql cluster, it is always synchronized. it has other problems though, which may get fixed in version 5.1.
-- juha
Does openser handle the fact that two proxies are writing to the same database?
Juha Heinanen wrote:
if you use mysql cluster, it is always synchronized. it has other problems though, which may get fixed in version 5.1.
What issues are you referring to? We have been using openser together with a mysql 5.0 cluster for quite some time and have never had any issues.
Christian
Juha Heinanen wrote:
Christian Schlatter writes:
What issues are you referring to? We have been using openser together with a mysql 5.0 cluster for quite some time and have never had any issues.
how do you upgrade your openser tables when new fields are added/removed, which happens on every new release?
We have been using a combination of mysql cluster 5.0.x for the location table and master/slave replication for everything else for over a year now.
Altering the cluster table is basically done by:
- disabling all except one node at the SIP load balancer and stopping mysql on the disabled nodes
- putting the remaining active node into single-user mode
- altering the table on this node
- exiting single-user mode
- putting the deactivated nodes back into the system
You have to schedule that for a time of day when one proxy/registrar can handle all the traffic, but besides that it works quite smoothly...
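For illustration, the procedure maps to roughly the following shell steps; the node id, host names, credentials and example column are all invented, and draining traffic at the load balancer is site-specific:

    # stop mysqld on the nodes that were disabled at the SIP load balancer
    ssh proxy-b '/etc/init.d/mysql stop'
    # let only the surviving SQL node (API node id 4 here) talk to the cluster
    ndb_mgm -e 'ENTER SINGLE USER MODE 4'
    # run the schema change on that node (column name is just an example)
    mysql -u openser -p'secret' openser -e 'ALTER TABLE location ADD COLUMN example_col INT'
    # back to normal operation
    ndb_mgm -e 'EXIT SINGLE USER MODE'
    ssh proxy-b '/etc/init.d/mysql start'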
Cheers, Andreas
This is essentially what we're planning to do, with the exception that we're developing using 5.1 and all of our tables will be on MySQL Cluster (since we can use disk-based storage for NDB Cluster). It has thus far been successful, especially with the upgrade from 1.1.1 to 1.2 recently in our development environment.
It's good to know we're not completely out in the woods...
- Brad
Andreas Granig writes:
- altering the table on this node
my understanding is that in 5.0 you cannot do ALTER TABLE on an NDBCLUSTER table:
It is not possible to make online schema changes such as those accomplished using ALTER TABLE or CREATE INDEX, as the NDB Cluster engine does not support autodiscovery of such changes.
mysql is not a production quality system if the user needs to hassle with this kind of basic procedure.
-- juha
Juha,
See http://dev.mysql.com/tech-resources/articles/mysql-cluster-50.html
This seems to conflict with the statement from the cluster-limitations docs. However, if you do it in single-user mode, it works. Otherwise you may get errors about invalid schema info on other sql nodes, which then have to be restarted (with the sql nodes' local ndb tables deleted) to work again.
Andreas
One option being discussed here is running mysql replicated. This brings up the concern that the independent proxies are not aware that they are talking to a single database instance. Will there be issues involved such as:
- deadlocks
- inconsistency of in-memory and in-database data structures (e.g. usrloc, presence)
- collisions of per-proxy unique keys that are inserted into tables
Personally, I think mysql replication violates KISS if I am trying to have my redundant servers as independent and survivable as possible.
Some approaches we've talked about here that don't use mysql replication are:
- in /etc/init.d/openser start, add a step to copy registrations from the other server(s) that would have normally been learned by t_replicate(). Basically select * from location | insert.... (see the sketch below)
- consider doing something at the application level along the lines of an "openserctl ul show" to get this data. This has the advantage of not going "behind the back" of the proxy and would even work for t_replicate() situations in which the proxies are not flushing usrloc to persistent storage.
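A rough sketch of that init-script step, assuming both proxies flush usrloc to a local mysql database named openser, the peer is reachable as proxy-b, and the stale local table can simply be overwritten (all host, user and password names are made up):

    # refresh the location table from the surviving peer before openser starts
    mysql -u openser -p'secret' openser -e 'TRUNCATE TABLE location'
    mysqldump -h proxy-b -u openser -p'secret' --no-create-info openser location \
        | mysql -u openser -p'secret' openser
    # then start openser with a usrloc db_mode that loads the table at startup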
Our architecture shares load between two proxies. While UAs think they have a single proxy they've registered to, inbound INVITEs are randomly distributed between the two proxies (e.g. by our PSTN media gateways, or by other UAs that happen to have chosen the other proxy as their outbound).
Any comments from people who have already dealt with this issue or are thinking about it would be appreciated. /a
Alan Crosswell writes:
Personally, I think mysql replication violates KISS if I am trying to have my redundant servers as independent and survivable as possible.
that is a good goal, but i don't see how you can do it in a KISS way without having UAs that register with both (all) proxies.
Our architecture shares load between two proxies. While UAs think they have a single proxy they've registered to, inbound INVITEs are randomly distributed between the two proxies (e.g. by our PSTN media gateways, or by other UAs that happen to have chosen the other proxy as their outbound).
do your two proxies have two ips visible to the outside? if so, how do you deal with UAs behind NATs?
-- juha
Alan,
Alan Crosswell wrote:
One option being discussed here is running mysql replicated. This brings up the concern that the independent proxies are not aware that they are talking to a single database instance. Will there be issues involved such as:
- deadlocks
- inconsistency of in-memory and in-database data structures (e.g. usrloc, presence)
- collisions of per-proxy unique keys that are inserted into tables
Personally, I think mysql replication violates KISS if I am trying to have my redundant servers as independent and survivable as possible.
With mysql master-slave replication only one proxy could write to the database, so you'd need master-master replication, which is possible but doesn't offer ACID guarantees the way mysql cluster does.
I agree that using DB replication violates KISS, as would the application-layer state replication solutions you're describing below. For me the best solution would be to have endpoints register with all redundant proxies in parallel. This would also be in line with SIP's previous-hop redundancy. The newest firmware release for the Polycom SIP phones does support that, though I haven't tested it yet. Are there other endpoints implementing this feature?
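As a side note on the per-proxy key collisions mentioned above: in a mysql master-master setup the usual trick is to interleave auto-increment values so the two proxies hand out disjoint keys. A sketch with invented host names and values (the settings would also go into my.cnf to survive restarts):

    # on proxy-a; proxy-b would use auto_increment_offset = 2
    mysql -e 'SET GLOBAL auto_increment_increment = 2'
    mysql -e 'SET GLOBAL auto_increment_offset = 1'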
Christian
Some approaches we've talked about here that don't use mysql replication are:
- in /etc/init.d/openser start, add a step to copy registrations from the other server(s) that would have normally been learned by t_replicate(). Basically select * from location | insert....
- consider doing something at the application level along the lines of an "openserctl ul show" to get this data. This has the advantage of not going "behind the back" of the proxy and would even work for t_replicate() situations in which the proxies are not flushing usrloc to persistent storage.
At 02:32 23/04/2007, Christian Schlatter wrote:
With mysql master-slave replication only one proxy could write to the database, so you'd need master-master replication which is possible but doesn't offer ACID as mysql cluster does.
I would not constrain oneself, for example one can do symmetric two-way client-server replication. We have done that once with SER, other testimony can be found in http://portal.acm.org/citation.cfm?id=1227865.1228010&coll=&dl=ACM&a... (referring to a Columbia University technical report for even more details).
I agree that using DB replication violates KISS, as would the application-layer state replication solutions you're describing below.
I would say it depends on which type of data you are referring to. With SER (and I would say this is 100% directly applicable to openser too) we have been using different replication strategies for different types of data (tables).
For me the best solution would be to have endpoints register with all redundant proxies in parallel. This would also be in line with SIP's previous-hop redundancy. The newest firmware release for the Polycom SIP phones does support that, though I haven't tested it yet. Are there other endpoints implementing this feature?
The downside is that it does not work once you get a new SIP phone.
-jiri
-- Jiri Kuthan http://iptel.org/~jiri/
Jiri Kuthan writes:
I would not constrain oneself, for example one can do symmetric two-way client-server replication. We have done that once with SER, other testimony can be found in http://portal.acm.org/citation.cfm?id=1227865.1228010&coll=&dl=ACM&a...
it would be interesting to read that paper, but i guess it is not freely available. so far i have not seen any solution that would do the two things at the same time: balance load and be resilient.
-- juha
Juha Heinanen wrote:
it would be interesting to read that paper, but i guess it is not freely available. so far i have not seen any solution that would do the two things at the same time: balance load and be resilient.
http://www.google.com/search?q=Failover%2C+load+sharing+and+server+architect...
Klaus Darilion writes:
http://www.google.com/search?q=Failover%2C+load+sharing+and+server+architect...
thanks klaus for the pointer.
i briefly looked at the proposed solution in figure 12 and didn't find anything about nat keepalive pinging. which of the proxies does that or does the solution assume that UAs are not behind nats?
-- juha
At 09:12 23/04/2007, Juha Heinanen wrote:
i briefly looked at the proposed solution in figure 12 and didn't find anything about nat keepalive pinging. which of the proxies does that or does the solution assume that UAs are not behind nats?
I guess that some practical aspects were out of scope of this paper -- it does not seem to seek to provide a full solution. I was merely referring to the DB part of it.
-jiri
Jiri Kuthan writes:
I guess that some practical aspects were out of scope of this paper -- it does not seem to seek to provide a full solution.
looks like university guys don't care about real life solutions. they just want to get their papers published.
this reinforces my thought that the only working solution is that UAs register with all proxies.
until that happens (which may be never), it might make sense to hack the nathelper module to use the source ip address from an avp.
-- juha
At 08:37 23/04/2007, Klaus Darilion wrote:
http://www.google.com/search?q=Failover%2C+load+sharing+and+server+architect...
well, try to use 'scholar' instead of 'www' when googling and you will be self-rewarded quickly :-)
http://www1.cs.columbia.edu/~kns10/publication/sipload.pdf
it even gives you citations.
-jiri
-- Jiri Kuthan http://iptel.org/~jiri/
Jiri Kuthan wrote:
I would not constrain oneself, for example one can do symmetric two-way client-server replication. We have done that once with SER, other testimony can be found in http://portal.acm.org/citation.cfm?id=1227865.1228010&coll=&dl=ACM&a... (referring to a Columbia University technical report for even more details).
Agreed, but I still wouldn't call such a solution "simple and stupid". And the paper mentions that mysql master-master replication doesn't offer atomicity and therefore imposes the danger of inconsistent tables. I guess if the authors had had access to mysql 5.0 they'd have used mysql cluster instead. Don't get me wrong, I think using a DB cluster for registration state is a valid approach, it's just more complicated than letting the endpoints register with all proxies in parallel.
Christian
At 16:42 23/04/2007, Christian Schlatter wrote:
Don't get me wrong, I think using a DB cluster for registration state is a valid approach, it's just more complicated than letting the endpoints register with all proxies in parallel.
our internal research shows that this does not work as simply as one would wish. registration state may be immensely database-intensive. My personal choice is not to put it in a database at all.
-jiri
-- Jiri Kuthan http://iptel.org/~jiri/
Christian Schlatter writes:
For me the best solution would be to have endpoints register with all redundant proxies in parallel.
you have come to the same conclusion as me. i fought hard in the ietf sip working group when the outbound i-d was written to make it mandatory to register with all outbound proxies. unfortunately i didn't have much support on the list and i lost the fight. perhaps i should have mobilized openser users to fight with me.
outbound is not an rfc yet, so i may try to bring the issue up once more.
-- juha