> Actually, a minute delay would be a bad thing because replicated
> usrloc records (using t_replicate()) would not make it into the peer
> SER server caches while the server is starting up.
Yeah, I forgot about that scheme...
> Given this fact, and given the fact that most SER modules do not hash
> data upon server startup [like group.so, etc, etc], we are starting to
> see little value in caching usrloc. Our MySQL server is hit 12 times
> for an INVITE message, so complete caching of usrloc is of minimal
> performance gain.
>
> Anyhow, we're now in the process of modifying SER so that:
>
> * when ser starts up, usrloc is "lazy-loaded"
> * if a usrloc record is looked up in the cache and is __NOT__ found, then
>   MySQL will be queried. If found in MySQL, then the usrloc record will
>   be put into the cache for future lookups
>
> By doing these two things we should not have a problem with excessively
> large subscriber bases.
>
>
> Thoughts?
Makes sense. This is how Berkeley DB and many other DBs work.
In fact, the best approach would be to build an abstraction cache layer around
all the query functions that read data from the DB. That way you would get
optimum performance and scalability.
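Roughly something like this (just a sketch to illustrate the idea;
ul_cache_get(), ul_cache_put() and db_load_ucontact() are made-up names,
not the actual usrloc API):

  /* Sketch only: the helper names are hypothetical, not the real
   * usrloc API. The point is the cache-aside pattern: try the
   * in-memory cache first, fall back to MySQL on a miss, and insert
   * the record so the next lookup is served from memory. */
  struct ucontact;                                     /* opaque here      */
  struct ucontact *ul_cache_get(const char *aor);      /* placeholder      */
  void ul_cache_put(const char *aor, struct ucontact *c);
  struct ucontact *db_load_ucontact(const char *aor);  /* one MySQL query  */

  struct ucontact *get_ucontact(const char *aor)
  {
      struct ucontact *c = ul_cache_get(aor);
      if (c)
          return c;              /* cache hit: no DB round trip        */

      c = db_load_ucontact(aor);
      if (!c)
          return 0;              /* not registered at all              */

      ul_cache_put(aor, c);      /* warm the cache for future lookups  */
      return c;
  }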
However, there is one more thing: you need to decide on an algorithm for
selecting a usrloc record to replace when the cache is full. Do you store
extra info in memory for each usrloc record to make the right decision
(e.g. based on the number of lookups)?
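For example, each cache entry could carry a bit of bookkeeping, something
like (again just a sketch, not existing code):

  #include <time.h>

  struct ucontact;                 /* opaque, as above */

  /* Hypothetical per-entry bookkeeping for the replacement decision. */
  struct cache_entry {
      struct ucontact *contact;
      unsigned int hits;           /* bumped on every lookup() hit   */
      time_t last_used;            /* updated on every lookup() hit  */
  };

  /* Pick a victim when the cache is full: here simply the least
   * recently used entry; using the hit counter would work the same way. */
  static int pick_victim(const struct cache_entry *tab, int n)
  {
      int i, victim = 0;
      for (i = 1; i < n; i++)
          if (tab[i].last_used < tab[victim].last_used)
              victim = i;
      return victim;
  }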
Also, what do you do when storing a new location with save(): do you put it
in the cache as well? Today this happens automatically. Since you have
continuous registrations, you would fill up the cache with registering
clients (and push out the ones being called). What you really want is to
keep the user locations you need (those required by lookup) in the cache.
So I would suggest that in save() you only write to the DB (and of course
update the record if it's already in the cache), and that lookup() is the
function that controls the activation and replacement of records in the cache.
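In other words, something like this split (hypothetical names again;
db_store_ucontact(), ul_cache_get() and ul_cache_update() don't exist under
those names):

  struct ucontact;
  struct ucontact *ul_cache_get(const char *aor);              /* placeholder */
  void ul_cache_update(const char *aor, struct ucontact *c);
  int db_store_ucontact(const char *aor, struct ucontact *c);  /* MySQL write */

  /* save(): always writes to the DB, but never *adds* an entry to the
   * cache, so a storm of REGISTERs cannot push out the entries that
   * lookup() actually needs. Insertion and eviction stay in the
   * lookup() path (as in the earlier sketch). */
  int save_location(const char *aor, struct ucontact *c)
  {
      if (ul_cache_get(aor))
          ul_cache_update(aor, c);   /* keep an already-cached entry fresh */
      return db_store_ucontact(aor, c);
  }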
I think this approach to caching is also of interest to those who do not have
a MySQL cluster but do regular replication, for example to reduce start-up
time. I believe an implementation may get pretty involved (in terms of the
important functions you need to touch). However, I cannot see that you will
need to touch the core.
g-)
> Paul
>
>
> On 5/29/05, Greger V. Teigre <greger@teigre.com> wrote:
> Interesting discussion. I believe most large-scale deployments (there
> aren't really that many...) divide the user base across several servers. I
> believe they use 20K users as a "good number" per server. So, one ser
> having to load that many records happens only if you have a cluster with
> no server division. Loading all the contacts into memory is impossible to
> scale; at some point it will take too long and use too much memory. So, a
> better architecture *for such a deployment scenario* would be a cache of
> some size and then a lookup of records in the DB if not present in the
> cache. Loading 330 records per second, you can load about 20,000 contacts
> in a minute, which is probably ok.
> g-)
>
> Zeus Ng wrote:
>> See inline comment.
>>
>>> Thanks for the info. I did change that config.h define and
>>> now it works well.
>>
>> Great to hear that the little change solved your problem.
>>
>>>
>>> My newest problem is the ser start time. In my very
>>> non-scientific test it took ser about 25 minutes before it
>>> began serving requests because it was loading usrloc information.
>>>
>>> That was using 500000 records in the location table. The
>>> MySQL server was running on the same box as SER, which is
>>> also my workstation, so stuff like Firefox, X, etc, were in use.
>>>
>>> But this does bring up an interesting problem, namely - how
>>> can ser service SIP clients while loading a large number of
>>> usrloc records? I'm kind of thinking that this could be a big
>>
>> No, you can't. In fact, you will experience a temporary slowdown
>> when a huge number of UAs are un-registering, because the table is
>> locked during that period of time. I once used sipsak to register 5000
>> users in 15s. When they all expired at about the same time, SER hung
>> for a while, locking the table to release the records from memory.
>>
>>
>>> problem. When dealing with massive user bases there is no
>>> such thing as a "quick restart".
>>
>> Well, that's the trade-off of a memory-based DB. You need to balance
>> startup time versus runtime performance.
>>
>>>
>>> We do have LVS fully "sip-aware" so we are doing true UDP
>>> load balancing based on the Call-ID header, but this is still
>>> [potentially] a problem with replicating ucontact records
>>> while the server is starting up.
>>>
>>> I wonder if it is possible to modify the behaviour of usrloc
>>> so that it loads in the background while ser is processing
>>> SIP messages. And when lookup("location") is executed, usrloc
>>> searches the ser cache and then MySQL if no hit is found
>>> in the cache -- or something like that.
>>
>> This prompts me to bring up the common question asked on this list
>> before: can SER use just MySQL for usrloc? A similar concept has been
>> done in the speeddial module. It would help load distribution, give
>> faster startup times and better redundancy. Of course, with slower
>> lookups as the tradeoff.
>>
>> I once considered replacing the built-in memory-based DB with the MySQL
>> memory DB. However, that idea was dropped due to time constraints and
>> compatibility (postgresql) issues.
>>
>>>
>>> Can anyone on serusers give some tips on how to get ser to
>>> load usrloc entries in an optimized way? I know the usual stuff like
>>> faster MySQL disks, faster network connections, dedicated app
>>> servers, etc, etc. But I'm looking for ser and/or MySQL
>>> tweaking hacks.
>>
>> Good luck on your search.
>>
>>
>>>
>>> Regards,
>>> Paul
>>>
>>>
>>
>> _______________________________________________
>> Serusers mailing list
>> serusers@lists.iptel.org
>> http://lists.iptel.org/mailman/listinfo/serusers