### Description
I am using Kamailio version 5.1.9. My tls.cfg has one client and one server profile,
in addition to the default client and server profiles.
CRL checking is enabled for the non-default client and server profiles; the CRL file is 4 MB in my case.
I have 22 TCP child processes.
With this setup, I observe that load_crl() takes close to 90 seconds to finish its execution and return.
### Expected behavior
The load_crl() function should not take 90 seconds to complete its execution.
It should probably complete in the range of 10-15 seconds, or even less.
#### Actual observed behavior
The load_crl() function takes 90 seconds to complete its execution.
#### Debugging Data
It is clear from the code that the 90 seconds of load_crl() execution time are spent in this loop:
```c
procs_no = get_max_procs();
for(i = 0; i < procs_no; i++) {
	if (SSL_CTX_load_verify_locations(d->ctx[i], d->crl_file.s, 0) != 1) {
		ERR("%s: Unable to load certificate revocation list '%s'\n",
				tls_domain_str(d), d->crl_file.s);
		TLS_ERR("load_crl:");
		return -1;
	}
	store = SSL_CTX_get_cert_store(d->ctx[i]);
	X509_STORE_set_flags(store,
			X509_V_FLAG_CRL_CHECK | X509_V_FLAG_CRL_CHECK_ALL);
}
```
Is there a way this can be improved, or does the current Kamailio design require doing this for every profile and for each entry of its per-process SSL context array? The same per-process loop appears in other functions as well, for example:
* load_cert
* load_ca_list
* load_crl
* set_cipher_list
* set_verification
* set_ssl_options
* set_session_cache
* ksr_tls_fix_domain
### Possible Solutions
Reply from Henning Westerholt when this problem was posted to the sr-users mailing list:
"But the code could be probably also improved, maybe it is possible to parallelize it. You can open a feature request about it,"
### Additional Information
Kamailio version 5.1.9
* **Operating System**:
```
Linux Kernel version : 3.10.0-693.el7.x86_64
Centos version : CentOS Linux release 7.4.1708 (Core)
CPU : 2 cores with model name : Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
[root@miv5000 ~]# cat /proc/meminfo
MemTotal: 3882076 kB
MemFree: 811244 kB
MemAvailable: 2320356 kB
Openssl version : OpenSSL 1.0.2k-fips 26 Jan 2017
```
--
https://github.com/kamailio/kamailio/issues/2312
### Description
I'm looking at using the presence module with the **subs_db_mode** parameter set to 0-2, where memory is used to store and query active watchers. I noticed that sometimes Kamailio throws the following message during PUBLISH processing and does not generate the appropriate NOTIFY for watchers:
`DEBUG: presence [notify.c:1234]: publ_notify(): Could not find subs_dialog `
An RPC command to dump the currently active watchers would be helpful during troubleshooting. Currently, with **subs_db_mode** set to 0 you cannot tell for sure whether a subscription has already expired, and with **subs_db_mode** set to 1 or 2 you only get the information from the DB with a delay. Such a command would also help clarify the subscriber _domain_ value, which, if I understood correctly, must match the domain from the presentity.
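For illustration, such a command could follow the standard Kamailio `rpc_export_t` pattern; the sketch below is hypothetical (the command name and the way the subscription hash table is walked are assumptions, not existing presence module code):
```c
/* Hypothetical sketch of an RPC command dumping in-memory watchers.
 * subs_htable/shtable_size mirror the presence module's in-memory
 * subscription table; the command name is illustrative. */
static const char *rpc_presence_dump_subs_doc[2] = {
		"Dump active watchers from memory", 0};

static void rpc_presence_dump_subs(rpc_t *rpc, void *ctx)
{
	int i;
	subs_t *s;

	for(i = 0; i < shtable_size; i++) {
		lock_get(&subs_htable[i].lock);
		for(s = subs_htable[i].entries->next; s != NULL; s = s->next) {
			rpc->rpl_printf(ctx, "watcher: %.*s@%.*s expires: %u",
					s->watcher_user.len, s->watcher_user.s,
					s->watcher_domain.len, s->watcher_domain.s,
					s->expires);
		}
		lock_release(&subs_htable[i].lock);
	}
}

static rpc_export_t presence_rpc_cmds[] = {
	{"presence.dump_subscriptions", rpc_presence_dump_subs,
			rpc_presence_dump_subs_doc, RET_ARRAY},
	{0, 0, 0, 0}
};
```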
Thanks a lot!
--
https://github.com/kamailio/kamailio/issues/2188
Sometimes the ims_charging module does not send a CCR Terminate request to the Diameter server upon receiving the BYE request.
**Issue Description:**
1. Consider User A registered with Kamailio.
2. A calls a PSTN number; the initial CCR request goes to the Diameter server successfully.
3. The PSTN side hangs up the call. After the BYE transaction is done, Kamailio generates a new BYE request to itself and retransmits it four times.
4. I think this is the reason ims_charging does not generate the CCR Terminate request.
**Please find the SIP traces below. [Proxy port: 7080, gateway port: 5060]**
```
BYE sip:+xxxxxxxxxxxx@xxxxxxxxxxxx:35465 SIP/2.0
Via: SIP/2.0/UDP xxxxxxxxxxxx:5060;branch=z9hG4bK05c0a4d6
Route: <sip:xxxxxxxxxxxx:7080;lr;transport=UDP;did=492.d7f1>
Max-Forwards: 70
From: <sip:xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=as4ab1ab4b
To: "+xxxxxxxxxxxx" <sip:+xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=8110a0S32NHgp
Call-ID: 8fa995a0-061c-11ea-8d83-00505697042b
CSeq: 102 BYE
User-Agent: SM SoftSwitch 11-06C13
X-Asterisk-HangupCause: Normal Clearing
X-Asterisk-HangupCauseCode: 16
Content-Length: 0

SIP/2.0 200 OK
Via: SIP/2.0/UDP xxxxxxxxxxxx:5060;rport=5060;branch=z9hG4bK05c0a4d6
From: <sip:xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=as4ab1ab4b
To: "+xxxxxxxxxxxx" <sip:+xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=8110a0S32NHgp
Call-ID: 8fa995a0-061c-11ea-8d83-00505697042b
CSeq: 102 BYE
User-Agent: Janus WebRTC Gateway SIP Plugin 0.0.6
Allow: INVITE, ACK, BYE, CANCEL, OPTIONS, UPDATE
Content-Length: 0

BYE sip:xxxxxxxxxxxx:7080;lr;transport=UDP;did=492.d7f1 SIP/2.0
Via: SIP/2.0/UDP xxxxxxxxxxxx:7080;branch=z9hG4bK461c.5003522650d9dbb33e47c6d83d7465b8.1
Via: SIP/2.0/UDP xxxxxxxxxxxx:5060;rport=5060;branch=z9hG4bK05c0a4d6
Max-Forwards: 69
From: <sip:xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=as4ab1ab4b
To: "+xxxxxxxxxxxx" <sip:+xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=8110a0S32NHgp
Call-ID: 8fa995a0-061c-11ea-8d83-00505697042b
CSeq: 102 BYE
User-Agent: SM SoftSwitch 11-06C13
X-Asterisk-HangupCause: Normal Clearing
X-Asterisk-HangupCauseCode: 16
Content-Length: 0

BYE sip:xxxxxxxxxxxx:7080;lr;transport=UDP;did=492.d7f1 SIP/2.0
Via: SIP/2.0/UDP xxxxxxxxxxxx:7080;branch=z9hG4bK461c.5003522650d9dbb33e47c6d83d7465b8.1
Via: SIP/2.0/UDP xxxxxxxxxxxx:5060;rport=5060;branch=z9hG4bK05c0a4d6
Max-Forwards: 69
From: <sip:xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=as4ab1ab4b
To: "+xxxxxxxxxxxx" <sip:+xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=8110a0S32NHgp
Call-ID: 8fa995a0-061c-11ea-8d83-00505697042b
CSeq: 102 BYE
User-Agent: SM SoftSwitch 11-06C13
X-Asterisk-HangupCause: Normal Clearing
X-Asterisk-HangupCauseCode: 16
Content-Length: 0

BYE sip:xxxxxxxxxxxx:7080;lr;transport=UDP;did=492.d7f1 SIP/2.0
Via: SIP/2.0/UDP xxxxxxxxxxxx:7080;branch=z9hG4bK461c.5003522650d9dbb33e47c6d83d7465b8.1
Via: SIP/2.0/UDP xxxxxxxxxxxx:5060;rport=5060;branch=z9hG4bK05c0a4d6
Max-Forwards: 69
From: <sip:xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=as4ab1ab4b
To: "+xxxxxxxxxxxx" <sip:+xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=8110a0S32NHgp
Call-ID: 8fa995a0-061c-11ea-8d83-00505697042b
CSeq: 102 BYE
User-Agent: SM SoftSwitch 11-06C13
X-Asterisk-HangupCause: Normal Clearing
X-Asterisk-HangupCauseCode: 16
Content-Length: 0

BYE sip:xxxxxxxxxxxx:7080;lr;transport=UDP;did=492.d7f1 SIP/2.0
Via: SIP/2.0/UDP xxxxxxxxxxxx:7080;branch=z9hG4bK461c.5003522650d9dbb33e47c6d83d7465b8.1
Via: SIP/2.0/UDP xxxxxxxxxxxx:5060;rport=5060;branch=z9hG4bK05c0a4d6
Max-Forwards: 69
From: <sip:xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=as4ab1ab4b
To: "+xxxxxxxxxxxx" <sip:+xxxxxxxxxxxx@xxxxxxxxxxxx>;tag=8110a0S32NHgp
Call-ID: 8fa995a0-061c-11ea-8d83-00505697042b
CSeq: 102 BYE
User-Agent: SM SoftSwitch 11-06C13
X-Asterisk-HangupCause: Normal Clearing
X-Asterisk-HangupCauseCode: 16
Content-Length: 0
```
--
https://github.com/kamailio/kamailio/issues/2129
Dialog and DMQ in db_mode 2 - dialog_vars entries are created but not deleted.
Scenario: two proxies in DMQ synchronization with an additional database. The configuration is identical on both machines:
```
root@proxy-1:/etc/kamailio# grep dialog kamailio.cfg
loadmodule "dialog.so"
modparam("dialog", "db_mode", 2)
modparam("dialog", "db_update_period", 10)
modparam("dialog", "enable_dmq", 1)
modparam("dialog", "default_timeout", 60);
modparam("dialog", "send_bye", 1)
```
A call is placed on proxy1. Proxy1 handles the call and synchronizes the dialog data to proxy2. Proxy2 is able to list the dialog with `kamcmd dlg.list` etc.
Proxy2 will not write the dialog into the dialog table, but will write the dialog variables into the dialog_vars table:
```
root@proxy-2:/etc/kamailio# mysql kamailio
MariaDB [kamailio]> select * from dialog; select * from dialog_vars;
Empty set (0.00 sec)
+----+------------+---------+-------------+---------------------------------+
| id | hash_entry | hash_id | dialog_key | dialog_value |
+----+------------+---------+-------------+---------------------------------+
| 9 | 3479 | 1648 | _uac_fu | sip:customer-1@sip.XXXX.net |
| 10 | 3479 | 1648 | _uac_funew | sip:batman@XXX.org |
| 11 | 3479 | 1648 | _uac_fdp | |
| 12 | 3479 | 1648 | _uac_fdpnew | |
| 13 | 3479 | 1648 | _uac_to | sip:customer-2@sip.skalatan.net |
| 14 | 3479 | 1648 | _uac_tonew | sip:robin@XXX.org |
| 15 | 3479 | 1648 | _uac_tdp | |
| 16 | 3479 | 1648 | _uac_tdpnew | |
+----+------------+---------+-------------+---------------------------------+
8 rows in set (0.00 sec)
```
Because of this, the dialog_vars table grows with every call.
In db_mode 1 these dialog_vars entries (and also the dialog entries) are not written.
I suggest adapting db_mode 2 with DMQ to the same behaviour.
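One possible direction for this, purely as a hypothetical sketch (the flag name and the helper are illustrative, not existing dialog module code): mark dialogs that were learned via DMQ and skip their periodic DB writes in db_mode 2, so only the node owning the call persists it:
```c
/* Hypothetical sketch inside the periodic DB-update pass:
 * DLG_FLAG_FROM_DMQ and update_dialog_vars_db() are illustrative
 * names; the module would need to set such a marker while handling
 * a DMQ sync message. */
if(dlg->dflags & DLG_FLAG_FROM_DMQ) {
	/* replicated dialog: the originating proxy writes the DB */
	continue;
}
update_dialog_vars_db(dlg);
```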
--
https://github.com/kamailio/kamailio/issues/2093
### Description
The KEMI xlog API does not support the log facility parameter: there is no xlog KEMI function equivalent to the native xlog() with a facility argument.
### Troubleshooting
Kamailio native config:
```
xlog("LOG_LOCAL2", "L_INFO", "REQUEST: transaction_id=$avp(tid);timestamp=$Ts;method=$rm;source_ip=$si;source_port=$sp;from_user=$fU;from_domain=$fd;to_user=$tU;to_domain=$td;request_user=$oU;request_domain=$od;");
```
Python KEMI config:
```
KSR.xlog.xlog("LOG_LOCAL2", "L_INFO", "REQUEST: timestamp=$Ts;method=$rm;source_ip=$si;source_port=$sp;from_user=$fU;from_domain=$fd;to_user=$tU;to_domain=$td;request_user=$oU;request_domain=$od;")
```
produces the following error:
```
ERROR: app_python [python_support.c:150]: python_handle_exception(): python_exec2: Unhandled exception in the Python code:#012TypeError: kemi-param-ss() takes exactly 2 arguments (3 given)
```
### Possible Solutions
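One possible direction, as a hypothetical sketch only: add a facility-aware export to the xlog module's KEMI table so that the three-argument form becomes callable from Python. The export-table layout below follows the standard `sr_kemi_t` pattern; the function name `xlog_facility` and its body are assumptions, not existing code.
```c
/* Hypothetical sketch of a facility-aware xlog KEMI export.
 * ki_xlog_facility would have to resolve the facility and level
 * strings the same way the native xlog() fixup does. */
static int ki_xlog_facility(sip_msg_t *msg, str *sfac, str *slev, str *stxt)
{
	/* map sfac (e.g. "LOG_LOCAL2") and slev (e.g. "L_INFO") to the
	 * syslog facility and log level, then emit stxt through the
	 * module's existing log writer */
	return 1;
}

static sr_kemi_t sr_kemi_xlog_exports[] = {
	{str_init("xlog"), str_init("xlog_facility"),
		SR_KEMIP_INT, ki_xlog_facility,
		{SR_KEMIP_STR, SR_KEMIP_STR, SR_KEMIP_STR,
			SR_KEMIP_NONE, SR_KEMIP_NONE, SR_KEMIP_NONE}},
	{{0, 0}, {0, 0}, 0, NULL, {0, 0, 0, 0, 0, 0}}
};
```
From Python this would then be callable as `KSR.xlog.xlog_facility("LOG_LOCAL2", "L_INFO", "...")` (name hypothetical).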
### Additional Information
* **Kamailio Version** - output of `kamailio -v`
```
version: kamailio 5.1.8 (x86_64/linux)
flags: STATS: Off, USE_TCP, USE_TLS, USE_SCTP, TLS_HOOKS, USE_RAW_SOCKS, DISABLE_NAGLE, USE_MCAST, DNS_IP_HACK, SHM_MEM, SHM_MMAP, PKG_MALLOC, Q_MALLOC, F_MALLOC, TLSF_MALLOC, DBG_SR_MEMORY, USE_FUTEX, FAST_LOCK-ADAPTIVE_WAIT, USE_DNS_CACHE, USE_DNS_FAILOVER, USE_NAPTR, USE_DST_BLACKLIST, HAVE_RESOLV_RES
ADAPTIVE_WAIT_LOOPS=1024, MAX_RECV_BUFFER_SIZE 262144 MAX_URI_SIZE 1024, BUF_SIZE 65535, DEFAULT PKG_SIZE 8MB
poll method support: poll, epoll_lt, epoll_et, sigio_rt, select.
id: unknown
compiled with gcc 6.3.0
```
* **Operating System**: Debian 9.5
```
Linux routing-proxy-01 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u5 (2017-09-19) x86_64 GNU/Linux
```
--
https://github.com/kamailio/kamailio/issues/2064
Hi Kamailio team,
I'm attempting to use Kamailio as a Diameter Routing Agent, but I can't seem to get the diameter_request() function to actually _send_ the Diameter request; instead I get a warning about no JSON response:
> WARNING: ims_diameter_server [avp_helper.c:341]: addAVPsfromJSON(): No JSON Response

This happens even when relaying received data without modifying it.
I posted this issue on the mailing list but have had no responses. Since then I've run a bunch more tests on a few different versions of Kamailio with a few different Diameter peers, with the same result.
### Description
The IMS Diameter Server module's [diameter_request() function](https://www.kamailio.org/docs/modules/devel/modules/ims_diameter_… fails to send, reporting "No JSON Response" even when unmodified received data is fed into it.
### Troubleshooting
Initially I was trying to use CDP and the IMS Diameter Server module to create and send Diameter messages, and I thought my formatting of the message (the Diameter message, as JSON) was incorrect, as I was getting:
> WARNING: ims_diameter_server [avp_helper.c:341]: addAVPsfromJSON(): No JSON Response

when trying to send the request.
To rule out my JSON formatting being the issue, I configured two Diameter peers in Kamailio:
From one Diameter peer, I sent a Diameter request to Kamailio.
Kamailio is configured to receive the Diameter request and, without modifying the message body ($diameter_request), feed that message into diameter_request(), so as to rule out formatting errors in the message (it is just a relay of the message), but this also fails to send.
I also included a series of checks to confirm that the peer to receive the relayed message was online and capable of handling the specified Diameter application, all of which passed.
I've defined the peers in the diametercfg.xml config file, and they all show as online when I run `kamcmd cdp.list_peers`:
```
FQDN: ims-hss.localdomain
Details: {
	State: I_Open
	Disabled: False
	Last used: 0
	Applications: {
		appid:vendorid: 16777216:10415
		appid:vendorid: 16777216:4491
		appid:vendorid: 16777216:13019
		appid:vendorid: 16777216:0
		appid:vendorid: 16777217:10415
		appid:vendorid: 16777221:10415
	}
}
```
Here's a simplified version of my event_route[diameter:request], showing I simply receive the diameter request and then try to send it straight back out using the diameter_request() function:
```
event_route[diameter:request] {
	xlog("Got diameter message");
	diameter_request("ims-hss.localdomain", $diameter_application,
			$diameter_command, $diameter_request);
	xlog("Forwarded Diameter Request");
}
```
When tailing syslog I see the "Got diameter message" entry but not "Forwarded Diameter Request", which suggests it's not getting past diameter_request(); instead I just see the:
> WARNING: ims_diameter_server [avp_helper.c:341]: addAVPsfromJSON(): No JSON Response

Packet captures show Kamailio receives the request but never relays it.
The source of avp_helper.c shows the warning is issued when the length of the JSON is <= 0, but as I'm feeding back out what I've received it shouldn't be 0, and there have been no recent major changes to the source, so I'm stumped as to why it hits this check.
I've attached a full copy of the config files below.
#### Reproduction
I've tried this with a few different Diameter servers, but I can reproduce it with two freeDiameter peers configured in the Diameter CDP config XML. I've also tried with one Kamailio CDP-based peer and with using xhttp to call the diameter_request() function, with the same results.
#### Log Messages
Copy of relevant SysLog:
https://pastebin.com/ZY8z2kd4
### Additional Information
Full Kamailio Config: https://pastebin.com/afgqUfWr
Diameter CDP Config XML: https://pastebin.com/bVrBG8mG
Relevant Syslog: https://pastebin.com/ZY8z2kd4
Kamcmd cdp.list_peers: https://pastebin.com/cKi4JAHC
Kamailio 5.1.2 on Ubuntu 18.04 installed from Repos.
--
https://github.com/kamailio/kamailio/issues/2035
I'm observing the following scenario:
* mod_dialog callbacks trigger 2 or more times (nearly) simultaneously for the same dialog
* pua_dialoginfo sends PUBLISH 1, referencing etag A
* pua_dialoginfo sends PUBLISH 2, referencing etag A
* presence_dialoginfo processes PUBLISH 1, replies with new etag B
* presence_dialoginfo processes PUBLISH 2, replies with a 412 (because etag A no longer exists)
* pua_dialoginfo receives the 412 and re-tries it as PUBLISH 3 ("sent a PUBLISH within a dialog that no longer exists, send again an intial PUBLISH")
* presence_dialoginfo processes PUBLISH 3, and may or may not accept it
The situation as described is not ideal, since it will fill up your logs with errors, but it isn't critical per se. Much more problematic is when more than 2 PUBLISHes are generated for the same dialog simultaneously, as this can cause a (near) infinite race between the various PUBLISH requests all fighting to update the same etag. For example, 10 PUBLISHes are sent out for etag A; all but one are rejected with a 412; then the other 9 keep bouncing back and forth between pua_dialoginfo and presence_dialoginfo because they do not share the same view of the dialog's latest etag.
Even worse is when presence_dialoginfo is rejecting *all* incoming PUBLISHes with a 412, for example because of a database/memory/replication problem or a malformed request. A `t_reply("412", "Not today")` in the presence_dialoginfo server, combined with a single PUBLISH from pua_dialoginfo is enough to reproducibly brick the pua_dialoginfo server because it runs into critical memory fragmentation levels.
I think there are multiple ways to fix or alleviate this problem.
## pua generic
* pua (publ_cback_func) should not retry 412-failed PUBLISHes indefinitely, but e.g. at most once
* pua should not generate simultaneous PUBLISHes for the same presentity. It should delay PUBLISH 2 until PUBLISH 1 is either (permanently) accepted or rejected; or it should discard PUBLISH 2 immediately when it is generated (see the sketch after this list).
* Perhaps make handling of 412 replies more fine-grained. Currently every 412 reply is handled like this ("sent a PUBLISH within a dialog that no longer exists"), while that statement doesn't apply to all possible 412 replies.
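As a sketch of the second point (hypothetical code: the hash table, `find_presentity()` and the `publish_pending` field are illustrative names, not pua's real structures), PUBLISH generation could be serialized per presentity like this:
```c
/* Hypothetical sketch: allow only one in-flight PUBLISH per presentity
 * so two requests never race for the same etag. */
pres_entry_t *p;
unsigned int slot = core_hash(&pres_uri, NULL, htable_size);

lock_get(&htable[slot].lock);
p = find_presentity(&htable[slot], &pres_uri);
if(p != NULL && p->publish_pending) {
	/* a PUBLISH for this presentity is already in flight; drop or
	 * queue the new one instead of sending it concurrently */
	lock_release(&htable[slot].lock);
	return 0;
}
if(p != NULL)
	p->publish_pending = 1;
lock_release(&htable[slot].lock);
/* ... build and send the PUBLISH; clear publish_pending from the
 * reply callback, on both 2xx and final 412 ... */
```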
## pua_dialoginfo specific
* pua_dialoginfo currently subscribes to a lot of mod_dialog callbacks. For example, subscribing to both DLGCB_CONFIRMED and DLGCB_CONFIRMED_NA will always get you two (rapidly succeeding) PUBLISHes with exactly the same contents. Subscribing to DLGCB_REQ_WITHIN means you'll get a new PUBLISH for every re-INVITE, such as hold/unhold or codec negotiation, which is useless in many use cases. It would be helpful to allow configuring pua_dialoginfo with a list of callbacks to subscribe to.
* With or without a smaller set of mod_dialog callbacks, pua_dialoginfo can generate multiple PUBLISH requests with exactly the same contents. Since pua is aware of the (last) state that it published for the presentity, it could compare if the newly generated PUBLISH is any different from the last known state, and discard it if it's not.
--
https://github.com/kamailio/kamailio/issues/2048
**Description**
Recently we upgraded to **Kamailio 5.3** and we are running load tests on it for scalability, but unfortunately it **crashed** in the **ims_dialog** module.
We are using the **ims_dialog** module instead of the **dialog** module for **Diameter** protocol purposes.
**Troubleshooting**
We found out that **dlg_out** is **NULL** while **dlg_out->to_tag.len** is being accessed, and this leads to the crash. But unfortunately we don't know how it becomes **NULL**, as **dlg_out** is assigned from **d_entry_out->first**, which is **not NULL**.
**GDB messages:**
```
(gdb)
#0 0x00007fbe5a646ea6 in next_state_dlg (dlg=0x7fbe57dcf268, event=3, old_state=0x7ffc8b03f0a0, new_state=0x7ffc8b03f0a4,
unref=0x7ffc8b03f09c, to_tag=0x7ffc8b03f080) at dlg_hash.c:1180
#1 0x00007fbe5a622170 in dlg_onreply (t=0x7fbe57f7a3f0, type=1048576, param=0x7ffc8b03f2f0) at dlg_handlers.c:1276
#2 0x00007fbe5e2b5517 in run_trans_callbacks_internal (cb_lst=0x7fbe57f7a468, type=1048576, trans=0x7fbe57f7a3f0,
params=0x7ffc8b03f2f0) at t_hooks.c:254
#3 0x00007fbe5e2b5733 in run_trans_callbacks_with_buf (type=1048576, rbuf=0x7fbe57f7a4c0, req=0x7fbe57f7bab0,
repl=0x7fbe5fa1d218, flags=0) at t_hooks.c:297
#4 0x00007fbe5e2fc05f in relay_reply (t=0x7fbe57f7a3f0, p_msg=0x7fbe5fa1d218, branch=1, msg_status=183,
cancel_data=0x7ffc8b03f760, do_put_on_wait=1) at t_reply.c:1986
#5 0x00007fbe5e300ec3 in reply_received (p_msg=0x7fbe5fa1d218) at t_reply.c:2540
#6 0x00000000004b6f43 in do_forward_reply (msg=0x7fbe5fa1d218, mode=0) at core/forward.c:745
#7 0x00000000004b8a8f in forward_reply (msg=0x7fbe5fa1d218) at core/forward.c:846
#8 0x00000000005527c7 in receive_msg (
buf=0xb3b740 "SIP/2.0 183 Session Progress\r\nVia: SIP/2.0/UDP 182.72.244.91:5060;branch=z9hG4bK7fea.85af5c92096548bdd857481789b3e50f.1, SIP/2.0/UDP 182.72.244.91:5080;received=182.72.244.91;rport=5080;branch=z9hG4bK"..., len=613, rcv_info=0x7ffc8b040000)
at core/receive.c:510
#9 0x0000000000675077 in udp_rcv_loop () at core/udp_server.c:548
#10 0x0000000000425f4b in main_loop () at main.c:1673
#11 0x000000000042e52a in main (argc=13, argv=0x7ffc8b040808) at main.c:2802
*******************************************************************************
(gdb) f 0
#0 0x00007fbe5a646ea6 in next_state_dlg (dlg=0x7fbe57dcf268, event=3, old_state=0x7ffc8b03f0a0, new_state=0x7ffc8b03f0a4,
unref=0x7ffc8b03f09c, to_tag=0x7ffc8b03f080) at dlg_hash.c:1180
1180 if (dlg_out->to_tag.len == to_tag->len && memcmp(dlg_out->to_tag.s, to_tag->s, dlg_out->to_tag.len) == 0) {
(gdb) info locals
d_entry = 0x7fbe57d5ab70
d_entry_out = 0x7fbe57dcf378
dlg_out = 0x0
found = -1
delete = 1
__FUNCTION__ = "next_state_dlg"
(gdb) p d_entry_out->first
$10 = (struct dlg_cell_out *) 0x7fbe57fcf6b8
```
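From the gdb output it looks like the to-tag search can exhaust the dlg_out list and then dereference the NULL cursor. A hypothetical guard, with the loop shape reconstructed from the crash line rather than copied from the module, would be:
```c
/* Hypothetical sketch around dlg_hash.c:1180: bail out when the list
 * is exhausted instead of dereferencing a NULL dlg_out. */
for(dlg_out = d_entry_out->first; dlg_out != NULL; dlg_out = dlg_out->next) {
	if(dlg_out->to_tag.len == to_tag->len
			&& memcmp(dlg_out->to_tag.s, to_tag->s, dlg_out->to_tag.len)
					== 0) {
		found = 1;
		break;
	}
}
if(dlg_out == NULL) {
	LM_ERR("no dlg_out entry matching to-tag [%.*s]\n",
			to_tag->len, to_tag->s);
	/* return an error to the caller instead of crashing */
}
```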
**Additional Information**
**version**: kamailio 5.3.2 (x86_64/linux)
Thanks in advance. I am just beginning to work with Kamailio; can you please give me some hints on how to move forward with this?
--
https://github.com/kamailio/kamailio/issues/2221
### Description
The `kamctl` and `kamdbctl` tools seem to require `bash` while being invoked through `/bin/sh`, which can point to `dash` or other shell interpreters.
For example, the output of `kamdbctl create`:
```
MySQL password for root:
-e \E[37;33mINFO: creating database kamailio_simple_db ...
-e \E[37;33mINFO: granting privileges to database kamailio_simple_db ...
-e \E[37;33mINFO: creating standard tables into kamailio_simple_db ...
-e \E[37;33mINFO: Core Kamailio tables succesfully created.
Install presence related tables? (y/n): n
/usr/sbin/kamdbctl: 216: /usr/sbin/kamdbctl: Bad substitution
```
It seems that the issue is in expanding the variable when reading the answer to the y/n question:
* https://github.com/kamailio/kamailio/blob/master/utils/kamctl/kamdbctl.base…
Such an expression seems to be specific to bash (see the sketch below for the class of construct involved):
* https://mywiki.wooledge.org/Bashism
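For illustration, this is the class of construct that makes `dash` print "Bad substitution"; the actual expression at line 216 may differ:
```sh
# bash-only: case-converting parameter expansion fails in dash/POSIX sh
ANSWER_UC=${ANSWER^^}

# portable equivalent
ANSWER_UC=$(printf '%s' "$ANSWER" | tr '[:lower:]' '[:upper:]')
```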
### Troubleshooting
#### Reproduction
Run `kamdbctl` with `/bin/sh` pointing to a shell other than `bash` (e.g., `dash`).
### Possible Solutions
Decide what to do to reach an acceptable solution: enforce `bash`, remove the bashisms, ...
Or maybe focus on making `kamcli` a (full) replacement for `kamctl`/`kamdbctl` and get rid of those old-style shell scripts:
* https://github.com/kamailio/kamcli
--
https://github.com/kamailio/kamailio/issues/2019
### Description
If an EVAPI client drops dead and silently closes its TCP connection, EVAPI is not notified within an acceptable time frame. The default OS TCP keep-alive parameters are generally far too long to be effective, and there is no way to override them using module or runtime configuration. Moreover, any attempt to put in a workaround is thwarted by the inability to close an arbitrary EVAPI connection from, say, a timer event (since there is no way to specify a connection to close outside the EVAPI context).
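For reference, dead-peer detection in the order of seconds is achievable with per-socket TCP keep-alive tuning, which the module could expose as parameters. A minimal sketch (the function and parameter plumbing are assumptions; the socket options are Linux-specific):
```c
/* Hypothetical sketch: keep-alive tuning that evapi could apply to
 * each accepted client socket. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int evapi_set_keepalive(int fd, int idle, int intvl, int cnt)
{
	int on = 1;

	if(setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
		return -1;
	/* seconds of idle time before the first probe */
	if(setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
		return -1;
	/* seconds between unanswered probes */
	if(setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0)
		return -1;
	/* unanswered probes before the kernel drops the connection */
	if(setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) < 0)
		return -1;
	return 0;
}
```
With e.g. `evapi_set_keepalive(fd, 5, 2, 3)`, a dead client would be detected in roughly 5 + 2*3 = 11 seconds.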
### Expected behavior
When an EVAPI client goes dead, the connection should be closed in the order of seconds (ideally configurable). The EVAPI should allow connections to be closed via scripting from any context.
#### Actual observed behavior
When an EVAPI client goes dead, the connection stays open (counting towards the maximum number of allowed clients). When `evapi_close()` is invoked from other contexts, nothing happens.
### Additional Information
* **Kamailio Version** - output of `kamailio -v`
```
version: kamailio 5.2.1 (x86_64/linux)
flags: STATS: Off, USE_TCP, USE_TLS, USE_SCTP, TLS_HOOKS, USE_RAW_SOCKS, DISABLE_NAGLE, USE_MCAST, DNS_IP_HACK, SHM_MEM, SHM_MMAP, PKG_MALLOC, Q_MALLOC, F_MALLOC, TLSF_MALLOC, DBG_SR_MEMORY, USE_FUTEX, FAST_LOCK-ADAPTIVE_WAIT, USE_DNS_CACHE, USE_DNS_FAILOVER, USE_NAPTR, USE_DST_BLACKLIST, HAVE_RESOLV_RES
ADAPTIVE_WAIT_LOOPS=1024, MAX_RECV_BUFFER_SIZE 262144 MAX_URI_SIZE 1024, BUF_SIZE 65535, DEFAULT PKG_SIZE 8MB
poll method support: poll, epoll_lt, epoll_et, sigio_rt, select.
id: unknown
compiled with gcc 6.3.0
```
* **Operating System**:
Debian GNU/Linux 9 (stretch)
```
Linux 3b83b180f7c9 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019 x86_64 GNU/Linux
```
--
https://github.com/kamailio/kamailio/issues/1880