> Greger, thanks a lot.
> The problem with the load balancer is that replies go to the wrong
> server due to rewriting of the outgoing a.b.c.d . BTW, as Paul pointed out, if
> you define some dummy interface with Virtual IP (VIP), there is no
> need to rewrite outgoing messages (I tested this a little).
Yes, if you use LVS with direct routing or tunneling, that is what you experience.

=== Of course. That's why I implemented a small "session" stickiness. However, it causes additional internal traffic.

What I described was a "generic" SIP-aware load balancer where SIP messages would be rewritten and stickiness implemented based on e.g. the UA's IP address (or the Call-ID, like Vovida's load balancer).

=== Sure, it's a better solution; I think we'll go this way soon (in our next version).

> Why is the DNS approach bad (except restricted NAT - let's say I am
> solving this)?

Well, IMO, DNS SRV in itself is not bad. It's just that many user clients do not support DNS SRV yet. Apart from that, I like the concept, and it will give you geographical redundancy and load balancing.

=== I am trying to build the following architecture:

DNS (returns the domain's public IP) -> LVS + tunneling (Virtual IP) -> ser clusters (with private IPs) || DB (MySQL 4.1 cluster)
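For the list's benefit, the LVS-tunneling-plus-VIP part of that architecture can be sketched roughly as below. This is only a sketch: the VIP 192.0.2.10 and real-server addresses 10.0.0.1/10.0.0.2 are made-up placeholders, and the exact way to hide the VIP from ARP depends on the kernel (the "hidden" patch on older 2.4 kernels, arp_ignore/arp_announce on later ones).

```shell
# --- On the LVS director ---
# Create a virtual UDP service on the VIP for SIP (port 5060) with
# round-robin scheduling, then add the real servers using IP-in-IP
# tunneling (-i) so replies leave the real servers directly.
ipvsadm -A -u 192.0.2.10:5060 -s rr
ipvsadm -a -u 192.0.2.10:5060 -r 10.0.0.1 -i
ipvsadm -a -u 192.0.2.10:5060 -r 10.0.0.2 -i

# --- On each real server (running ser) ---
# Bring up the VIP on the tunnel interface so ser can answer from the
# VIP without any message rewriting.
ifconfig tunl0 192.0.2.10 netmask 255.255.255.255 up
# The real server must not answer ARP for the VIP, or it will bypass
# the director (kernel-dependent; shown here for newer kernels):
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

With this in place ser binds to (or at least answers from) the VIP, so no rewriting of outgoing messages is needed, as discussed above.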
> I guess Paul utilizes the load-balancer scenario you have described.
> I believe there are only proprietary solutions for
> "the-replies-problem". We tried Vovida's call-id-persistence package,
> unfortunately it didn't work for us.

Are you referring to the load balancer proxy? IMHO, the SIP-aware load balancer makes things a bit messy. It sounds to me like LVS + tunneling/direct routing + a virtual IP on a dummy adapter is the better solution.
> In my configuration I use shared remote DB cluster (with
> replication). Each ser sees it as one public IP (exactly the approach
> you named for SIP). Maybe it's a good idea to use local DB clusters,
> but if you have more than 2 servers, your replication algorithm will
> be complex. Additional problem - it still doesn't solve usrloc
> synchronization - you still have to use t_replicate()...

I'm not sure if I understand.

=== Oh, probably I didn't express myself well enough...

So, you have 2 servers at two locations, each location with a shared DB, and then replication across an IPsec tunnel??

IMHO, MySQL 3.23.x two-way replication is quite shaky and dangerous to rely on. With no locking, you will easily get overwrites, and you have to be very sure that your application doesn't mess up the DB. I haven't looked at MySQL 4.1 clustering, but from the little I have seen, it looks good. Is that what you use?

=== We have 2 or more servers with a MySQL 4.1 virtual server (clusters balanced by LVS). We use MySQL for maintaining subscribers' accounts, not for location. User location is still in-memory only so far. I am afraid I have to switch to ser 0.9 in order to use save_memory() (thanks Paul!) and forward_tcp() for replication.
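As a side note, the classic (pre-cluster) two-way replication discussed above is configured along these lines; this is only a sketch in the old 3.23/4.0 style where the master coordinates live in my.cnf, and the hostname, user and password are placeholders:

```
# my.cnf on server A; server B mirrors this with server-id = 2 and
# master-host pointing back at server A
[mysqld]
server-id       = 1
log-bin                                  # log updates for the other side to pull
master-host     = serverB.example.com    # placeholder peer
master-user     = repl
master-password = secret
replicate-do-db = ser                    # replicate only the ser database
```

Since neither side locks the other, both servers can update the same row between replication events, which is exactly the overwrite risk mentioned above.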
> With regard to t_replicate() - it doesn't work for more than 2
> servers, so I used exactly forward_tcp() and save_noreply() (you're
> absolutely right - this works fine so far); all sers are happy. Of
> course, this causes additional traffic. Interesting whether Paul's
> FIFO patch reduces traffic between sers?

I believe Paul uses forward_tcp() and save_memory() to save the location to the replicated server's memory, while save("location") on the primary server stores to the DB (which then replicates at the DB level).

g-)
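P.S. For the archives, that forward_tcp()/save_memory() scheme could look roughly like this in ser.cfg. A sketch only, assuming ser 0.9.x; 10.0.0.2 stands in for the peer's address:

```
# primary server: store the registration and mirror it to the peer
if (method == "REGISTER") {
    save("location");               # store to usrloc/DB and send the 200 OK
    forward_tcp("10.0.0.2", 5060);  # replicate the REGISTER to the peer
    break;
};

# the peer, in its own config, handles the forwarded REGISTER with
# save_memory("location"), so the replicated binding stays in memory
# only and is not written to the peer's DB.
```

Each REGISTER is thus forwarded once per peer, which is the extra inter-ser traffic mentioned above.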