I agree that NAT should be resolved by the peers. I haven't looked at the
forking proxy details; I assume it will do a sort of redirect for REGISTERs and
INVITEs, so that everything thereafter is handled by each SIP server. I
still cannot really see how you solve the NAT problem, though. The public IP of
the SIP server handling the first REGISTER will be the only IP allowed to send
an INVITE to the UA, so if another UA registered with another server makes a
call, the SIP forking proxy must make sure that the INVITE is sent through the
SIP server that did the initial registration of the callee.
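Just to make it concrete, here is a rough ser.cfg-style sketch (untested; the
names and details are only illustrative) of what I think the server that
handled the callee's REGISTER would have to do with an incoming INVITE:

   # sketch: on the server that registered the callee
   if (method == "INVITE") {
       # the location record (and the NAT pinhole) only exists on this box
       if (!lookup("location")) {
           sl_send_reply("404", "Not Found");
           break;
       };
       # relaying from this server's public IP reuses the binding the UA's
       # restricted NAT created at REGISTER time
       t_relay();
   };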
g-)
---- Original Message ----
From: Alex Vishnev
To: 'Greger V. Teigre'; serusers@lists.iptel.org
Sent: Tuesday, April 12, 2005 06:20 PM
Subject: RE: LVS, load balancing, and stickness was ==> Re: [Serusers] more usrloc synchronization
> Greger,
>
> I am not an expert on anycast either. I just know it exists and
> people are starting to look at it more seriously as an HA option. That
> is why I thought DNS SRV records would be an easier solution.
> Regarding your comments on NAT, I don't believe it is an issue as it
> relates to the forking proxy. The forking proxy should not resolve NAT;
> that is a job for its peers. As for configuring SER as a forking proxy,
> I thought I read about it a while back, but now I can't seem to locate
> it. I hope I was not dreaming ;-).
>
> In any case, I will continue to google around to see if SER has this
> option.
>
> Sincerely,
>
> Alex
>
>
>
>
> From: Greger V. Teigre [mailto:greger@teigre.com]
> Sent: Tuesday, April 12, 2005 6:21 AM
> To: Alex Vishnev; serusers@lists.iptel.org
> Subject: Re: LVS, load balancing, and stickness was ==> Re: [Serusers] more usrloc synchronization
>
> Alex,
> I'm not really knowledgeable enough about anycast to say anything
> useful. The only thing I can say is that in your described setup, I
> cannot really see how you get around a UA behind restricted (or worse)
> NAT.
> I have never tried to configure SER as a forking proxy, but I
> wouldn't be surprised if it was possible.
> g-)
>
> ---- Original Message ----
> From: Alex Vishnev
> To: serusers@lists.iptel.org
> Sent: Monday, April 11, 2005 02:30 PM
> Subject: RE: LVS, load balancing, and stickness was ==> Re: [Serusers] more usrloc synchronization
>
>> Greger and Paul,
>>
>> I think you understood me correctly regarding the forking proxy. It is
>> the proxy that will fork out the requests to all available peering
>> proxies. This approach does not require stickiness based on Call-ID.
>> AFAIK, once the forking proxy receives an acknowledgement from one of
>> its peers, the rest of the session will be handled directly by that
>> peer without the use of the forking proxy. I am considering 2
>> approaches to resolve availability of the forking proxy: 1) using
>> ANYCAST (good high-level article:
>> http://www.kuro5hin.org/story/2003/12/31/173152/86); 2) using DNS
>> SRV. I am still trying to determine if ANYCAST is a good solution for
>> creating local RPs with a forking proxy. However, I think that DNS SRV
>> records can easily be implemented to allow simple round robin between
>> multiple forking proxies. Thoughts?
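Just so we are talking about the same thing: I assume you mean SRV records
along these lines (the names and addresses are made up); equal priority and
weight should give a simple round robin over the two forking proxies:

   _sip._udp.example.com.  IN SRV 0 50 5060 fork1.example.com.
   _sip._udp.example.com.  IN SRV 0 50 5060 fork2.example.com.
   fork1.example.com.      IN A   192.0.2.1
   fork2.example.com.      IN A   192.0.2.2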
>>
>> Alex
>>
>>
>>
>>
>> From: serusers-bounces@lists.iptel.org [mailto:serusers-bounces@lists.iptel.org]
>> On Behalf Of Greger V. Teigre
>> Sent: Monday, April 11, 2005 4:47 AM
>> To: kramarv@yahoo.com
>> Cc: serusers@lists.iptel.org
>> Subject: LVS, load balancing, and stickness was ==> Re: [Serusers] more usrloc synchronization
>>
>> After my last email, I looked at ktcpvs and realized I ignored a
>> couple of things: ktcpvs only supports TCP (HTTP is obviously
>> TCP-based, but I thought it supported UDP for other protocols). I
>> don't know how much work implementing UDP would be.
>> Here is a discussion of SIP and LVS that I found useful (though
>> not encouraging):
>> http://www.austintek.com/LVS/LVS-HOWTO/HOWTO/LVS-HOWTO.services_that_dont_work_yet.html
>>
>> Paul: I'm starting to get really curious about the standard LVS
>> components used for your stickiness! I'm not aware (also after
>> searching now) of an LVS balancing mechanism that allows correct
>> stickiness on SIP UDP...!
>> And I found others, too, who are looking for it:
>> http://archive.linuxvirtualserver.org/html/lvs-users/2005-02/msg00251.html
>>
>> My understanding is that ipvs must be extended (according to the
>> developer) with a Call-ID based scheduler, that several people are
>> willing to fund this development, but that the work has not(?)
>> started yet. The problem is that ipvs is based on IP header analysis,
>> and extending the hashing algorithms to also include payload-based
>> analysis is not straightforward.
>>
>> g-)
>>
>>> With regards to stickiness: Have you looked at ktcpvs? SIP is an
>>> "http-like" protocol and I'm pretty sure that you can use the
>>> http-based regex hashing to search for Call-ID. If you cannot use
>>> it right out of the box, I'm pretty sure the modifications are
>>> minimal. The user location problem: with a cluster back-end, I
>>> see save_memory() as the only option.
>>> g-)
>>>
>>>> "Greger V. Teigre"
<greger@teigre.com> wrote:
>>>>> Greger, thanks a
lot.
>>>>> The problem with load balancer is that replies goes
to the wrong
>>>>> server due to rewriting outgoing a.b.c.d .
BTW, as Paul pointed,
>>>>> if you define some dummy interface
with Virtual IP (VIP), there
>>>>> is no need to rewrite
outgoing messages (I tested this a little).
>>>>
>>>>
>>>> Yes, if you use LVS with direct
routing or tunneling, that is what
>>>> you
experience.
>>>> ===Of course. That why I implemented small
"session" stickness.
>>>> However, it causes additional internal
traffic.
>>>>
>>>> What I described was a "generic" SIP-aware load balancer where SIP
>>>> messages would be rewritten and stickiness implemented based on,
>>>> e.g., the UA IP address (or Call-ID, like vovida's load balancer).
>>>> ====Sure, it's a better solution; I think we'll go this way soon (in
>>>> our next version).
>>>>
>>>>> Why is the DNS approach bad (except for restricted NAT - let's say
>>>>> I am solving this)?
>>>>
>>>> Well, IMO, DNS SRV in itself is not bad. It's just that many user
>>>> clients do not support DNS SRV yet. Apart from that, I like the
>>>> concept, and it will give you geographical redundancy and load
>>>> balancing.
>>>> ===I am trying to build the following architecture:
>>>>
>>>> DNS (returns domain's public IP) -> LVS+tunneling (Virtual IP) ->
>>>> ser clusters (with private IPs)
>>>>
>>>> DB (MySQL 4.1 cluster)
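For reference, the dummy-interface/VIP trick mentioned above is, as far as I
understand it, roughly the following on each ser box behind LVS. The address
is only an example, and the ARP sysctls may or may not be needed depending on
whether you use direct routing or tunneling and on your kernel:

   ip addr add 192.0.2.10/32 dev dummy0   # 192.0.2.10 = the virtual IP
   ip link set dummy0 up
   # keep the real servers from answering ARP for the VIP
   echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
   echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
   # and in ser.cfg, listen on the VIP so replies leave with it as source:
   #   listen=192.0.2.10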
>>>>
>>>>> I guess Paul utilizes the load-balancer scenario you have described.
>>>>> I believe there are only proprietary solutions for
>>>>> "the-replies-problem". We tried the Vovida call-id-persistence
>>>>> package; unfortunately, it didn't work for us.
>>>>
>>>> Are you referring to the load balancer proxy? IMHO, the SIP-aware
>>>> load balancer makes things a bit messy. It sounds to me like
>>>> LVS + tunneling/direct routing + a virtual IP on a dummy adapter is
>>>> a better solution.
>>>>
>>>>> In my configuration I use a shared remote DB cluster (with
>>>>> replication). Each ser sees it as one public IP (exactly the
>>>>> approach you named for SIP). Maybe it's a good idea to use local DB
>>>>> clusters, but if you have more than 2 servers your replication
>>>>> algorithm is going to be complex. Additional problem - it still
>>>>> doesn't solve usrloc synchronization - you still have to use
>>>>> t_replicate()...
>>>>
>>>>
>>>> I'm not sure if I understand.
>>>> ===Oh, I probably didn't express myself well enough...
>>>>
>>>> So, you have 2 servers at two locations, each location with a shared
>>>> DB, and then replication across an IPsec tunnel??
>>>> IMHO, MySQL 3.23.x two-way replication is quite shaky and
>>>> dangerous to rely on. With no locking, you will easily get
>>>> overwrites, and you have to be very sure that your application
>>>> doesn't mess up the DB. I haven't looked at MySQL 4.1 clustering,
>>>> but from the little I have seen, it looks good. Is that what you
>>>> use?
>>>>
>>>> ===We have 2 or more servers with a MySQL 4.1 virtual server
>>>> (clusters balanced by LVS). We use MySQL for maintaining
>>>> subscribers' accounts, not for location. User location is still
>>>> in-memory only so far. I am afraid I have to switch to ser 0.9 in
>>>> order to use save_memory() (thanks Paul!) and forward_tcp() for
>>>> replication.
>>>>
>>>>> With regard to t_replicate() - it doesn't work for more than 2
>>>>> servers, so I used forward_tcp() and save_noreply() instead
>>>>> (you're absolutely right - this works fine so far); all sers are
>>>>> happy. Of course, this causes additional traffic. I wonder
>>>>> whether Paul's FIFO patch reduces the traffic between sers?
>>>>
>>>> I believe Paul uses forward_tcp() and save_memory() to save the
>>>> location to the replicated server's memory, while save("location")
>>>> on the primary server stores to the DB (which is then replicated
>>>> on the DB level).
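My mental model of that setup is roughly the following ser.cfg fragment
(untested sketch; the peer address and port are just placeholders):

   # primary server, in the REGISTER handling route:
   save("location");               # in-memory usrloc + DB (DB replicates on its own level)
   forward_tcp("10.0.0.2", 5060);  # push a copy of the REGISTER to the peer

   # peer server, for REGISTERs arriving from the primary
   # (e.g. guarded by something like if (src_ip == 10.0.0.1)):
   save_memory("location");        # update in-memory usrloc only; leave the DB to the primary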
>>>> g-)
>>>>
>>
>> _______________________________________________
>> Serusers mailing list
>> serusers@lists.iptel.org
>> http://lists.iptel.org/mailman/listinfo/serusers