This is a snippet from the users mailing list; I'm just adding it to the issue tracker, as it has yet to be confirmed as a bug and is only here for tracking purposes:
```
I have a situation where two devices register with the same AOR; I have the registrar module's max_contacts parameter set to 1 and I use the 0x04 flag on the save() function.
When I use dmq_usrloc for replication, I can see 2 contacts registered for the AOR on the "client" nodes, whereas the node where the actual registration took place only has 1 contact for the AOR.
```
```
It seems to me like another bug, although again I'm not familiar with the module, so without looking I can't be sure that it is not intended behaviour.
I plan to look later this week when I have some time but in the meantime, you can send me the examples directly if you like.
```
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/265
Kamailio takes care of sending in-dialog keep-alives using OPTIONS when dlg_set_property("ka-src") and dlg_set_property("ka-dst") are used. It also sends the OPTIONS to the correct address if a client sends a re-INVITE indicating a network change, which is a common case with mobile clients switching between mobile data and Wi-Fi (fixed in #273).
The problem is that it ends the dialog after missing even a single keep-alive response. Missing a response is quite possible when the client is handling the network change and a keep-alive happens to be sent around that time.
Thus, a configurable counter after which the dialog is ended might be helpful. For example, you could decide to end the dialog only after 3 keep-alive responses in a row have been missed.
Something like dlg_set_property(ka-src, 3), where 3 indicates the number of responses that can be missed before ending the dialog.
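For context, a minimal sketch of how the in-dialog keep-alives are enabled today with the dialog module; the timer values and routing logic are assumptions, not part of the report, and the missed-response counter proposed above does not exist yet:
```
loadmodule "tm.so"
loadmodule "rr.so"
loadmodule "pv.so"
loadmodule "dialog.so"

modparam("dialog", "ka_timer", 10)      # how often the keep-alive timer fires (seconds)
modparam("dialog", "ka_interval", 30)   # send in-dialog OPTIONS every 30 seconds

route {
    if ($rm == "INVITE") {
        dlg_manage();
        # enable OPTIONS keep-alives towards caller and callee;
        # currently a single missed response terminates the dialog
        dlg_set_property("ka-src");
        dlg_set_property("ka-dst");
    }
    # ... normal request routing continues here ...
}
```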
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/438
ENV: Debian 8
On most Linux systems the /var/run folder is a tmpfs, which is cleared on each reboot.
When I try to start up Kamailio after a reboot through kamctl, I get the following error:
"ERROR: PID file /var/run/kamailio.pid does not exist -- Kamailio start failed"
This happens because Kamailio does not have permission to write to the /var/run folder as the kamailio user.
Would it be possible to add to the kamctl code a section where the specified PID location (/var/run/kamailio) is checked and, if it does not exist, created as the root user and chowned to the specified user (kamailio)? The way I have patched it for now is to create an init script that creates the /var/run/kamailio folder and chowns it to kamailio:kamailio.
Thanks everyone!
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/231
Hello,
As you will see I have merged my branch back into master.
These changes add a new event route [tm:branch-failure] to the tm module
which is run when any failure response is received on a transaction.
The event_route uses a new route type, BRANCH_FAILURE_ROUTE, which limits the
functions that can be run in the route.
The functions t_check_status(), t_next_contact_flow(), t_relay() and
unregister() can be used in this route.
A new pv, $T_reply_ruid, is accessible in this route and can be used to
unregister a single contact entry.
The following example route can be used on a registrar to handle failed
or invalid flows:
```
event_route[tm:branch-failure] {
    xlog("L_INFO", "event_route[tm:branch-failure]\n");
    if (t_check_status("430|403")) {
        # remove the contact (flow) that produced the failure response
        if (!unregister("location", "$tu", "$T_reply_ruid")) {
            xlog("L_WARN", "failed to unregister $tu with ruid $T_reply_ruid\n");
        }
        # relay to the next registered flow, if there is one
        if (!t_next_contact_flow()) {
            xlog("L_INFO", "No more flows\n");
        } else {
            xlog("L_INFO", "Next flow\n");
            t_relay();
        }
    }
}
```
Any bugs, memory leaks etc. let me know!
Hugh
--
Hugh Waite
Principal Design Engineer
Crocodile RCS Ltd.
If a PUBLISH request is sent before a 200 OK for the previous PUBLISH request is received, the E-Tag is reset which leads to inconsistent presentity data in the presence server.
This is happening because the second PUBLISH is sent with the same E-Tag as the previous one, while the presence server has already updated the E-Tag (generated while handling the first PUBLISH).
The pua module should implement a queuing mechanism for subsequent PUBLISH requests while a previous PUBLISH transaction is in progress: no new PUBLISH request should be sent before the response to the previous one has been received and the E-Tag updated.
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/21
Requests sent with the uac module's functions like uac_req_send(), as well as remote registrations, use the tm module, but don't actually do (DNS) failover when a destination is unreachable or sends a 503.
In tm's uac.c, in the t_uac_prepare() function, I can already see that the DNS handle is stored in a throwaway variable instead of in the transaction, but fixing the issue isn't as simple as just storing it in the transaction (print_uac_request_from_buf() needs a uas in the transaction).
Since GitHub does not support attachments to issues, the sample config and debug log are inline... :(
DNS entry:
```
$ host -t SRV _sip._udp.failover.test.speakup.nl
_sip._udp.failover.test.speakup.nl has SRV record 10 0 1111 try1.test.speakup.nl.
_sip._udp.failover.test.speakup.nl has SRV record 100 0 2222 try2.test.speakup.nl.
```
Log snippet:
```syslog
Jun 15 10:00:03 rio UACFailover[18178]: INFO: <script>: Sending OPTIONS request to sip:failover.test.speakup.nl
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: tm [uac.c:249]: t_uac_prepare(): DEBUG:tm:t_uac: next_hop=<sip:failover.test.speakup.nl>
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [dns_cache.c:537]: _dns_hash_find(): (_sip._udp.failover.test.speakup.nl(34), 33), h=825
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [resolve.c:937]: get_record(): skipping 8 NS (p=0x9ea284, end=0x9ea3e9)
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [resolve.c:952]: get_record(): parsing 9 ARs (p=0x9ea335, end=0x9ea3e9)
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [dns_cache.c:1741]: dns_get_related(): (0x7f225233eb38 (_sip._udp.failover.test.speakup.nl, 33), 33, *0x7f2257baf2b8) (0)
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [dns_cache.c:840]: dns_cache_add_unsafe(): adding _sip._udp.failover.test.speakup.nl(34) 33 (flags=0) at 825
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [dns_cache.c:840]: dns_cache_add_unsafe(): adding try1.test.speakup.nl(20) 1 (flags=0) at 310
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [dns_cache.c:840]: dns_cache_add_unsafe(): adding try2.test.speakup.nl(20) 1 (flags=0) at 307
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [dns_cache.c:2336]: dns_srv_get_nxt_rr(): (0x7f225233eb38, 0, 0, 1597643831): selected 0/1 in grp. 0 (rand_w=0, rr=0x7f225233eba0 rd=0x7f225233ebb8 p=10 w=0 rsum=0)
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [dns_cache.c:537]: _dns_hash_find(): (try1.test.speakup.nl(20), 1), h=310
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [dns_cache.c:2926]: dns_a_resolve(): (try1.test.speakup.nl, 0) returning 0
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [dns_cache.c:3169]: dns_srv_resolve_ip(): ("_sip._udp.failover.test.speakup.nl", 0, 0), ret=0, ip=192.168.1.245
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [dns_cache.c:3260]: dns_srv_sip_resolve(): (failover.test.speakup.nl, 0, 0), srv0, ret=0
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: tm [uac.c:150]: dlg2hash(): DEBUG: dlg2hash: 54501
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: tm [uac.c:351]: t_uac_prepare(): executing event_route[tm:local-request]
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [parser/msg_parser.c:606]: parse_msg(): SIP Request:
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [parser/msg_parser.c:608]: parse_msg(): method: <OPTIONS>
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [parser/msg_parser.c:610]: parse_msg(): uri: <sip:failover.test.speakup.nl>
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [parser/msg_parser.c:612]: parse_msg(): version: <SIP/2.0>
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [parser/parse_via.c:1254]: parse_via_param(): Found param type 232, <branch> = <z9hG4bK5e4d.47adbbf5000000000000000000000000.0>; state=16
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [parser/parse_via.c:2642]: parse_via(): end of header reached, state=5
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [parser/msg_parser.c:496]: parse_headers(): parse_headers: Via found, flags=2
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [parser/msg_parser.c:498]: parse_headers(): parse_headers: this is the first via
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [parser/parse_addr_spec.c:894]: parse_addr_spec(): end of header reached, state=10
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [parser/msg_parser.c:173]: get_hdr_field(): DEBUG: get_hdr_field: <To> [22]; uri=[sip:to.example.com]
Jun 15 10:00:03 rio UACFailover[18178]: [97B blob data]
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [parser/msg_parser.c:153]: get_hdr_field(): get_hdr_field: cseq <CSeq>: <10> <OPTIONS>
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: pv [pv_core.c:376]: pv_get_xto_attr(): no Tag parameter
Jun 15 10:00:03 rio UACFailover[18178]: INFO: <script>: [udp:172.28.4.128:6789] [10 OPTIONS] Request (cid: 6cd531e47cc63e20-18178(a)172.28.4.128 Branch: -1 ToTag: <null> len:382) To <sip:failover.test.speakup.nl> via <sip:failover.test.speakup.nl>
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [usr_avp.c:631]: destroy_avp_list(): destroying list (nil)
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [usr_avp.c:631]: destroy_avp_list(): destroying list (nil)
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [usr_avp.c:631]: destroy_avp_list(): destroying list (nil)
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [usr_avp.c:631]: destroy_avp_list(): destroying list (nil)
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [usr_avp.c:631]: destroy_avp_list(): destroying list (nil)
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [usr_avp.c:631]: destroy_avp_list(): destroying list (nil)
Jun 15 10:00:03 rio UACFailover[18178]: DEBUG: <core> [xavp.c:446]: xavp_destroy_list(): destroying xavp list (nil)
Jun 15 10:00:05 rio UACFailover[18176]: DEBUG: tm [t_reply.c:1230]: t_should_relay_response(): ->>>>>>>>> T_code=0, new_code=408
Jun 15 10:00:05 rio UACFailover[18176]: WARNING: tm [t_reply.c:957]: run_failure_handlers(): Warning: run_failure_handlers: no UAC support (1, 0)
Jun 15 10:00:05 rio UACFailover[18176]: DEBUG: tm [t_reply.c:2016]: local_reply(): DEBUG: local_reply: branch=0, save=0, winner=0
Jun 15 10:00:05 rio UACFailover[18176]: DEBUG: tm [t_reply.c:2053]: local_reply(): DEBUG: local transaction completed
```
Sample config:
```python
#------------------
# Flags
#------------------
debug=3
memdbg=5
mem_summary=5
#fork=no # leave commented out to keep forking enabled; to disable daemonizing, the -D command line parameter is needed instead
log_stderror=no
sip_warning=no
listen=172.28.4.128
port=6789
children=2
shm_mem_size=64
log_name="UACFailover"
check_via=no # (cmd. line: -v)
dns=no # (cmd. line: -r)
rev_dns=no # (cmd. line: -R)
dns_use_search_list=no
use_dns_failover=yes
dns_srv_lb=yes
disable_tcp=yes
mpath="/usr/lib/x86_64-linux-gnu/kamailio/modules/"
loadmodule "pv.so"
loadmodule "tm.so"
loadmodule "tmx.so"
loadmodule "sl.so"
loadmodule "xlog.so"
loadmodule "rr.so"
loadmodule "uac.so"
loadmodule "rtimer.so"
# -----------------------------------------------------------------
# Module settings
# -----------------------------------------------------------------
modparam("pv", "varset", "server=s:LOGNAME")
modparam("tm", "auto_inv_100", 0)
# Time until provisional response (eg "100 Trying") is received
modparam("tm", "fr_timer", 2000)
# Time to await final response on INVITE per branch (eg per branch ring time)
modparam("tm", "fr_inv_timer", 10000)
# Time to await final response on INVITE (eg ring time)
modparam("tm", "max_inv_lifetime", 20000)
modparam("tm", "restart_fr_on_each_reply", 0)
modparam("tm", "failure_reply_mode", 3)
modparam("rtimer", "timer", "name=uac;interval=10;mode=1;")
modparam("rtimer", "exec", "timer=uac;route=TIMER")
route {
    xlog("L_NOTICE", "[$pr:$si:$sp] [$cs $rm] Request. (cid: $ci <$fu> => <$ru> len: $ml)");
    drop();
}
event_route[tm:local-request] {
    xlog("L_INFO", " [$pr:$si:$sp] [$cs $rm] Request (cid: $ci Branch: $T_branch_idx ToTag: $tt len:$ml) To <$ru> via <$du>");
}
onreply_route {
    xlog("L_INFO", " [$pr:$si:$sp] [$cs $rm] Reply (cid: $ci Branch: $T_branch_idx Status: $rs $rr ToTag: $tt len:$ml)");
}
route[TIMER] {
    if not $var(done) == 1 {
        $var(done) = 1;
        $uac_req(method) = "OPTIONS";
        $uac_req(ruri) = "sip:failover.test.speakup.nl";
        $uac_req(furi) = "sip:from.example.com";
        $uac_req(turi) = "sip:to.example.com";
        xlog("L_INFO", "Sending $uac_req(method) request to $uac_req(ruri)");
        uac_req_send();
    }
}
```
---
Reply to this email directly or view it on GitHub:
https://github.com/kamailio/kamailio/issues/210