Thanks for the detailed explanation, slowly I'm starting to understand the flags :-)
Klaus
-----Original Message-----
From: Jan Janak [mailto:jan@iptel.org]
Sent: Wed 10.12.2003 23:50
To: Klaus Darilion
Cc: serusers@lists.iptel.org
Subject: Re: [Serusers] nathelper question - nat_flag

Basically, usrloc has been extended to save the status of a flag. Which flag it should save can be configured using the "nat_flag" parameter of the registrar module.
Let's say you configure modparam("registrar", "nat_flag", 6)
Then flag 6 will be saved into the user location database each time you call save("location") and retrieved each time you call lookup("location").
If flag 6 is set before save() then it will be also set after lookup() for the same contact.
Because the user location database stores the already rewritten contact (in case the user agent was behind a NAT), you can't tell from the contact returned by lookup() alone whether the user agent is behind NAT.
That's what the flag is for. If the configured flag is set after lookup("location") then you know that the contact is behind NAT.
Typically you would do something like this (very rough example):
# For REGISTER
if (contact_is_behind_nat) {
    rewrite_contact;
    setflag(6);
    save("location");
};

# For INVITE
if (lookup("location")) {
    if (isflagset(6)) {
        # Contact is behind NAT
        force_rtp_proxy();
    };
};
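To connect the rough example above with a working configuration, a minimal ser.cfg sketch could look like the following. This is an assumption of how the pieces fit together, not a verified config: the pseudo-steps contact_is_behind_nat and rewrite_contact are replaced here by the nathelper functions nat_uac_test() and fix_nated_contact(), the test flags "3" and the module paths are illustrative, and flag 6 is just the value used in the example.

```
loadmodule "/usr/lib/ser/modules/usrloc.so"
loadmodule "/usr/lib/ser/modules/registrar.so"
loadmodule "/usr/lib/ser/modules/nathelper.so"

# Tell registrar/usrloc which flag to persist with each contact
modparam("registrar", "nat_flag", 6)

route {
    if (method == "REGISTER") {
        # nat_uac_test compares Contact/Via against the packet source
        if (nat_uac_test("3")) {
            fix_nated_contact();  # rewrite Contact to the NAT's address
            setflag(6);           # mark this binding as NATed
        };
        save("location");         # flag 6 is stored alongside the contact
        break;
    };

    if (method == "INVITE") {
        if (lookup("location")) {
            # lookup restores flag 6 if it was set at registration time
            if (isflagset(6)) {
                force_rtp_proxy();  # relay media through the RTP proxy
            };
            t_relay();
        };
    };
}
```

The key point the sketch illustrates: the NAT decision is made once, at REGISTER time, and the flag carries that decision through usrloc so the INVITE path does not have to re-detect NAT from an already rewritten contact.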
Jan.
On 10-12 23:40, Klaus Darilion wrote:
Hi!
I'm lost in the NAT problem!
I read about the 'new' flags which will be stored in the location table to detect the presence of NATed clients. http://lists.iptel.org/pipermail/serdev/2003-October/000690.html
But I can't find any documentation about them. How do I use them? Do I have to use setflag(?) before save("location")? Which flag do I have to set? Do I have to retrieve the flags during lookup("location") to force_rtp_proxy in case of a registered client which is NATed?
regards, Klaus
Serusers mailing list serusers@lists.iptel.org http://lists.iptel.org/mailman/listinfo/serusers
I am seeing this in my log, just before a restart.
Dec 10 18:52:23 rave ser[22096]: BUG: tcp_main_loop: dead child 1
Dec 10 18:52:23 rave ser[22096]: BUG: tcp_main_loop: CONN_RELEASE
This is 0.8.11. I've never had an error before, but had two restarts today. I guess I need to try a newer version?
---greg
On Dec 10, 2003 at 19:10, Greg Fausak lgfausak@august.net wrote:
I am seeing this in my log, just before a restart.
Dec 10 18:52:23 rave ser[22096]: BUG: tcp_main_loop: dead child 1
Dec 10 18:52:23 rave ser[22096]: BUG: tcp_main_loop: CONN_RELEASE
It's harmless. The tcp part notices that some of the processes died, but this can happen during a normal restart (it depends on how the signals get delivered). I just forgot to remove this message.
this is 0.8.11, I've never had an error before, had two restarts today. I guess I need to try a newer version?
You should if you use tcp, but you will still get these messages unless you use the latest rel_0_8_12 CVS (I've just removed the message).
Andrei