After my last email, I looked at ktcpvs and realized I had overlooked a
couple of things: ktcpvs only supports TCP (HTTP is obviously TCP-based,
but I had thought it supported UDP for other protocols). I don't know how
much work implementing UDP support would be.
Here is a discussion of SIP and LVS that I found useful
(though not encouraging).
Paul: I'm starting to get really curious about the standard LVS components
used for your stickiness! I'm not aware (also after searching now) of an
LVS balancing mechanism that allows correct stickiness on SIP udp...!
And I found others, too, who are looking for it:
My understanding is that ipvs must be extended (according to the developer)
with a Call-ID-based scheduler, that several people are willing to fund this
development, but that the work has not(?) started yet. The problem is that
ipvs is based on IP header analysis, and extending the hashing algorithms to
also include payload-based analysis is not straightforward.
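(For concreteness, what stock LVS does offer is persistence keyed purely on
the client's source IP address - nothing in the UDP payload, such as the
Call-ID, is ever examined. A rough ipvsadm sketch with a placeholder VIP:

    # UDP virtual service on the VIP; -p pins every packet from a given
    # client IP to the same real server for an hour
    ipvsadm -A -u 192.0.2.1:5060 -s wlc -p 3600

That is often good enough for clients behind a stable address, but it cannot
keep a dialog together when its requests arrive from different addresses,
which is exactly where a Call-ID-based scheduler would be needed.)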
g-)
> With regards to stickiness: Have you looked at ktcpvs? SIP is an
> "http-like" protocol and I'm pretty sure that you can use the
> http-based regex hashing to search for Call-Id. If you cannot use it
> right out of the box, I'm pretty sure the modifications are minimal.
> The user location problem: With a cluster back-end, I also see
> save_memory() as the only option.
>
> g-)
>
>> "Greger V. Teigre" <greger@teigre.com>
wrote:
>>> Greger, thanks a lot.
>>> The problem with the load balancer is that replies go to the wrong
>>> server due to rewriting of the outgoing a.b.c.d. BTW, as Paul pointed
>>> out, if you define some dummy interface with the Virtual IP (VIP),
>>> there is no need to rewrite outgoing messages (I tested this a little).
>>
>>
>> Yes, if you use LVS with direct routing or tunneling, that is what
>> you experience.
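(For reference, the dummy-interface trick on each real server usually looks
something like the sketch below; the VIP and interface name are placeholders,
and the two ARP sysctls are the usual LVS-DR recipe to keep the real servers
from answering ARP for the VIP, which must stay with the director:

    modprobe dummy                      # if dummy0 is not already present
    # put the VIP on a dummy interface so ser can bind to it and
    # replies leave with the VIP as source - no rewriting needed
    ip addr add 192.0.2.1/32 dev dummy0
    ip link set dummy0 up
    # do not claim or advertise the VIP via ARP on this box
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

ser then listens on the VIP in addition to its private address, so outgoing
messages already carry the right source address.)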
>> ===Of course. That's why I implemented a small "session" stickiness.
>> However, it causes additional internal traffic.
>>
>> What I described was a "generic" SIP-aware load balancer where SIP
>> messages would be rewritten and stickiness implemented based on e.g.
>> the UA's IP address (or the Call-ID, like vovida's load balancer).
>> ====Sure, it's a better solution; I think we'll go this way soon (in
>> our next version).
>>
>>> Why is the DNS approach bad (apart from restricted NAT - let's say
>>> I am solving that)?
>>
>> Well, IMO, DNS SRV in itself is not bad. It's just that many user
>> clients do not support DNS SRV yet. Apart from that, I like the concept,
>> and it will give you geographical redundancy and load balancing.
>> ===I am trying to build the following architecture:
>>
>>   DNS (returns domain's public IP)
>>     -> LVS+tunneling (Virtual IP)
>>     -> ser clusters (with private IPs)
>>     -> DB (MySQL 4.1 cluster)
>>
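(A rough sketch of what the LVS+tunneling leg of that picture could look like;
all addresses are placeholders. On the director the real servers are added
with -i (IPIP tunneling), and each ser box terminates the tunnel and holds the
VIP so it can reply directly with the correct source address:

    # director: UDP 5060 on the VIP, real servers reached over IPIP tunnels
    ipvsadm -A -u 192.0.2.1:5060 -s wlc -p 3600
    ipvsadm -a -u 192.0.2.1:5060 -r 10.0.0.11 -i
    ipvsadm -a -u 192.0.2.1:5060 -r 10.0.0.12 -i

    # each real server (ser box): terminate the tunnel, hold the VIP, and
    # relax reverse-path filtering so the tunneled packets are accepted
    modprobe ipip
    ip addr add 192.0.2.1/32 dev tunl0
    ip link set tunl0 up
    echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter

This is only the plumbing; the stickiness caveats discussed above still apply,
since persistence here is still per client IP.)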
>>> I guess Paul utilizes the load-balancer scenario you have described.
>>> I believe there are only proprietary solutions for
>>> "the-replies-problem". We tried Vovida's call-id-persistence package;
>>> unfortunately it didn't work for us.
>>
>> Are you referring to the load balancer proxy? IMHO, the SIP-aware
>> load balancer makes things a bit messy. It sounds to me like LVS
>> + tunneling/direct routing + virtual IP on a dummy adapter is a better
>> solution.
>>
>>> In my configuration I use a shared remote DB cluster (with
>>> replication). Each ser sees it as one public IP (exactly the approach
>>> you named for SIP). Maybe it's a good idea to use local DB clusters,
>>> but if you have more than 2 servers your replication algorithm is
>>> going to be complex. An additional problem - it still doesn't solve
>>> usrloc synchronization - you still have to use t_replicate()...
>>
>>
>> I'm not sure if I understand.
>> ===Oh, probably I didn't express myself well enough...
>>
>> So, you have 2 servers at two locations, each location with a shared
>> DB, and then replication across an IPsec tunnel??
>> IMHO, mysql 3.23.x two-way replication is quite shaky and
>> dangerous to rely on. With no locking, you will easily get
>> overwrites, and you have to be very sure that your application doesn't
>> mess up the DB. I haven't looked at mysql 4.1 clustering, but from
>> the little I have seen, it looks good. Is that what you use?
>>
>> ===We have 2 or more servers with a MySQL 4.1 virtual server (clusters
>> balanced by LVS). We use MySQL for maintaining subscribers' accounts,
>> not for location. User location is still in-memory only so far. I am
>> afraid I have to switch to ser 0.9 in order to use save_memory() (thanks
>> Paul!) and forward_tcp() for replication.
>>
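(If the MySQL 4.1 cluster's SQL nodes really are balanced by LVS as described,
the director side is the same idea as for SIP, just TCP on port 3306 - again a
sketch with placeholder addresses:

    # TCP virtual service in front of the SQL nodes of the MySQL cluster
    ipvsadm -A -t 192.0.2.2:3306 -s wlc
    ipvsadm -a -t 192.0.2.2:3306 -r 10.0.1.21:3306 -g
    ipvsadm -a -t 192.0.2.2:3306 -r 10.0.1.22:3306 -g

Here -g assumes the SQL nodes can answer clients directly; use -m
(masquerading) if they sit behind the director.)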
>>> With regard to t_replicate() - it doesn't work for more than 2
>>> servers, so I used exactly forward_tcp() and save_noreply() (you're
>>> absolutely right - this works fine so far); all sers are happy. Of
>>> course, this causes additional traffic. I wonder whether Paul's
>>> FIFO patch reduces traffic between sers?
>>
>> I believe Paul uses forward_tcp() and save_memory() to save the
>> location to the replicated server's memory, while the
>> save("location") on the primary server will store to the DB (which
>> then replicates on the DB level).
>> g-)