i'm new to rtpproxy (have been using mediaproxy for years) and am now trying to understand whether it would make sense to switch over. the rtpproxy readme has this on flags value 1:
+ 1 - append first Via branch to Call-ID when sending command to rtpproxy. ... This is especially useful if you have serially forked call scenarios where rtpproxy gets an "update" command for a new branch, and then a "delete" command for the previous branch, which would otherwise delete the full call, breaking the subsequent "lookup" for the new branch.
how would it be possible in a SERIAL forked scenario to get anything for a new branch before the previous branch has first failed and gone?
-- juha
Hi Juha!
Maybe the description is bad/wrong, but the details are described in this thread:
http://lists.sip-router.org/pipermail/sr-users/2012-February/071943.html
regards Klaus
SIP Express Router (SER) and Kamailio (OpenSER) - sr-users mailing list sr-users@lists.sip-router.org http://lists.sip-router.org/cgi-bin/mailman/listinfo/sr-users
That's a very unclear/ambiguous description for those flags ...
-ovidiu
Klaus Darilion writes:
Maybe the description is bad/wrong, but the details are described in this thread:
http://lists.sip-router.org/pipermail/sr-users/2012-February/071943.html
klaus,
that description there is not very clear either:
I ran into a scenario with couple of serial forks where kamailio loops to itself and, due to the looping, the INVITE to a new branch happens before the CANCEL to an old branch.
is the setup without the 1/2 flags like this:
uac invite -> proxy -> invite -> proxy -> invite -> first uas -> cancel -> proxy -> cancel -> first uas -> invite -> second uas, where the second proxy instance calls unforce_rtpproxy when it receives the cancel, thus tearing down also the rtpproxy session for the second leg?
if so, why would the second proxy instance be involved with rtpproxy stuff at all, because that has already been taken care of by the first instance?
-- juha
Juha Heinanen writes:
or is it so that the first proxy instance didn't arm rtpproxy, but the second instance did, and then the first instance armed rtpproxy for the second uas before the second instance unarmed it due to the cancel?
-- juha
Please ask Andreas about those flags - he committed it
regards klaus
Hi,
On 10/22/2012 07:04 PM, Klaus Darilion wrote:
Please ask Andreas about those flags - he committed it
It's actually about serial hunting e.g. on ring-timeout (maybe the term "forking" is ambiguous).
Consider this scenario:
A sends an INVITE to the proxy, which calls rtpproxy_offer to create a new session in rtpproxy. You set fr_inv_timer to 5 sec, then the INVITE is sent to B, which replies with 100/180. Let's call this "branch B".
After 5 sec, tm triggers an internal timeout, and in a failure route you set the R-URI to C, call append_branch and spiral the INVITE to the same proxy again, which goes through call routing again, calls rtpproxy_offer again and forwards it to C. Let's call this one "branch C".
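A minimal kamailio.cfg sketch of that hunting step, assuming the tm and rtpproxy modules are loaded; the route name, timer value and C's address are made up for illustration, and the sketch relays directly to C instead of looping the INVITE back through the proxy:

```cfg
request_route {
    if (is_method("INVITE") && has_body("application/sdp")) {
        t_set_fr(5000);           # per-transaction fr_inv_timer: 5 sec of ringing
        rtpproxy_offer("1");      # flag 1: session keyed by Call-ID + Via branch
        t_on_failure("HUNT");
    }
    t_relay();
}

failure_route[HUNT] {
    if (t_branch_timeout()) {       # B did not answer within fr_inv_timer
        $ru = "sip:C@example.com";  # hypothetical next target C
        append_branch();
        t_relay();                  # new branch goes out with a new Via branch
    }
}
```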
Meanwhile, the proxy has sent a CANCEL for "branch B", so you'll get a 487 back from B, where you call rtpproxy_destroy in your failure route to tear down this branch. In "normal" operation mode, this deletes the full session in rtpproxy (both B and C), because both are identified by the same Call-ID and from-tag.
And after that, you get at some point a 200 back from C, where you call rtpproxy_answer. In this lookup, rtpproxy won't find the call anymore, because it got deleted by the rtpproxy_destroy triggered by the 487 from B.
The order of the 200 for "branch C" and the 487 for "branch B" doesn't really matter. The 487 before the 200 will result in an error when rtpproxy_answer is called, because the session is gone on the rtpproxy already. The 487 after the 200 will just successfully delete the session on the rtpproxy, cutting the already established audio.
And this is the reason why we pass the Via branch to rtpproxy in the offer, answer and destroy when we're spiraling a call multiple times over the proxy: to be able to delete only a specific branch instead of the whole session in case a branch fails.
The usual configuration is to set "1" for a request when calling rtpproxy_offer and "2" in a response when calling rtpproxy_answer. For rtpproxy_destroy you either set nothing when called from a request (e.g. on CANCEL from the caller, or BYE) to tear down all branches, or "1" when called from a failure route (e.g. on a negative response from the callee), or "2" when called from a reply route (if you don't arm a failure route for some reason).
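In kamailio.cfg terms, that convention might be sketched like this (route names are hypothetical; rtpproxy_offer, rtpproxy_answer and rtpproxy_destroy are the rtpproxy module functions, called from the routes these events normally hit):

```cfg
# request path: flag "1" appends the first Via branch of the request
route[OFFER] {
    rtpproxy_offer("1");
}

# reply path: flag "2" appends the first Via branch of the reply
onreply_route[ANSWER] {
    if (has_body("application/sdp"))
        rtpproxy_answer("2");
}

# negative reply from the callee: delete only the failed branch
failure_route[FAIL] {
    rtpproxy_destroy("1");
}

# CANCEL from the caller, or BYE: no flag, tear down all branches
route[TEARDOWN] {
    rtpproxy_destroy();
}
```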
Hope this clarifies it.
Andreas
Thanks, now I remember again :-)
Please copy the whole description into the rtpproxy README.
Thanks Klaus