Michael,
I think your idea of a db_mode 3 is a good one, especially considering that
I'll soon be in the same position as you with respect to SER.
On an Asterisk note, you mentioned patches at
http://svn.asteriskdocs.org/res_data/. Do these patches allow Asterisk to store
user and server configuration data in a MySQL database rather than in .conf
files? If not, could you elaborate a bit on what these patches fix or what
features they provide?
Best Regards,
Paul
--- Michael Shuler <mike(a)bwsys.net> wrote:
I figured that's what people would say :( Thanks, though.
I thought of that, but the problem is that it takes up to the maximum time of my
longest client REGISTER timer before the in-memory table is back in sync with
the one that runs off the DB. It would be nice if there were a third option, i.e.
modparam("usrloc", "db_mode", 3), that would read the database only at startup
to "seed" the in-memory location table. The reason I need this is that I have a
few SER servers behind a Foundry ServerIron XL layer 4 switch. When I bring a
failed server back online, the Foundry automatically recognizes that port 5060
is alive again and immediately starts sending it traffic. If the RAM-based
location table hasn't been fully populated by the time this happens, we get a
bunch of failed calls until it syncs up, which in my case can take up to 5
minutes. I can get around this manually by removing the server from the cluster
config in the Foundry, but that takes away from the automation of the whole
thing.
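
For illustration, here is roughly how that could look in ser.cfg. The module
path and database credentials below are placeholders, and db_mode 3 is only
the proposal being discussed here, not an option SER currently supports:

    # existing usrloc setup (sketch; path and credentials are placeholders)
    loadmodule "/usr/local/lib/ser/modules/usrloc.so"
    modparam("usrloc", "db_url", "mysql://ser:secret@localhost/ser")
    # 0 = memory only, 1 = write-through to MySQL, 2 = write-back to MySQL
    modparam("usrloc", "db_mode", 2)

    # proposed: 3 = read the location table once at startup to seed the
    # in-memory table, so the box can take traffic as soon as port 5060 is up
    # modparam("usrloc", "db_mode", 3)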
Andrei/Jiri, what do you guys think of adding that as an option? It seems to me
that it would be the most logical way to go about doing it, and it should be
very easy to do since it is just a matter of combining parts of mode 0 with
modes 1/2. I have little experience programming in SER, otherwise I would just
write it and submit it as a patch. Most of my time gets sucked up by Asterisk
patches (see http://svn.asteriskdocs.org/res_data/ for MySQL dynamic config
files; this is a must for carriers... well, at least for me :) ), but if you
two don't have the time or don't think it's worthwhile, then I guess I will
have to find the time. Please let me know what you guys think of this as soon
as you get a chance.
----------------------------------------
Michael Shuler, C.E.O.
BitWise Communications, Inc. (CLEC) And BitWise Systems, Inc. (ISP)
682 High Point Lane
East Peoria, IL 61611
Office: (217) 585-0357
Cell: (309) 657-6365
Fax: (309) 213-3500
E-Mail: mike(a)bwsys.net
Customer Service: (877) 976-0711
-----Original Message-----
From: Maxim Sobolev [mailto:sobomax@portaone.com]
Sent: Sunday, September 26, 2004 3:28 AM
To: Michael Shuler
Cc: serusers(a)lists.iptel.org
Subject: Re: [Serusers] Location table issues
Michael Shuler wrote:
Let's say I have 2 SER servers (ser1 and ser2). Both save to the same MySQL
location table, and both use t_replicate to let each other know about new
REGISTERs. Let's say ser1 gets a REGISTER from a client and then calls
t_replicate to send it over to ser2, and both call save("location"). Doesn't
that cause ser1 and ser2 to write the same record to MySQL at about the same
time? Is there any way to prevent doubling the number of database writes?
You can configure one of the SERs to not use the database at all for storing
contacts, which will do what you want: contacts will be saved by the other
SER, while t_replicate() will ensure that both of them have the whole set at
any moment in time.
-Maxim
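
A minimal sketch of that split in ser.cfg, assuming the usual tm/usrloc/registrar
modules are loaded, that the peer address and DB credentials are placeholders,
and that t_replicate() takes a host and port (the exact arguments depend on the
SER version):

    # ser1: persist contacts to the shared MySQL location table
    modparam("usrloc", "db_url", "mysql://ser:secret@dbhost/ser")
    modparam("usrloc", "db_mode", 2)

    # ser2: keep contacts in memory only and rely on replicated REGISTERs
    modparam("usrloc", "db_mode", 0)

    # on both servers, in the route block that handles registrations
    if (method == "REGISTER") {
        # copy the REGISTER to the peer so its in-memory table stays in sync
        t_replicate("peer.example.com", "5060");
        save("location");
        break;
    };

With that layout only ser1 writes each contact to MySQL, so the double write
disappears while both servers keep a full in-memory contact set.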
Does anyone know what the replicate column is for in the
location table?
_______________________________________________
Serusers mailing list
serusers(a)lists.iptel.org
http://lists.iptel.org/mailman/listinfo/serusers