I tried to issue an mtree.list command via an xmlrpc query that would result in a large response, and got this error:
Aug 15 18:12:07 char /usr/bin/sip-proxy[10348]: ERROR: <core> [core/tcp_main.c:618]: _wbufq_add(): (221375 bytes): write queue full or timeout (0, total 0, last write 55534052 s ago)
Aug 15 18:12:07 char /usr/bin/sip-proxy[10348]: ERROR: xmlrpc [xmlrpc.c:811]: send_reply(): Error while sending reply
Aug 15 18:12:07 char /usr/bin/sip-proxy[10367]: ERROR: <core> [core/tcp_main.c:3551]: handle_ser_child(): received CON_ERROR for 0x7f112ad22be0 (id 5), refcnt 3, flags 0x4018
I thought that perhaps I could get rid of the error by increasing the value of the core variable tcp_conn_wq_max. The description of the variable says:
Maximum bytes queued for write allowed per connection. Attempting to queue more bytes would result in an error and in the connection being closed (too slow). If tcp_write_buf is not enabled, it has no effect.
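(For reference, these queue limits are set as plain core parameters in kamailio.cfg; a minimal sketch, with defaults as I understand them from the core cookbook, values illustrative only:)

    # core section of kamailio.cfg
    tcp_conn_wq_max = 32768     # max bytes queued for write per TCP connection
    tcp_wq_max = 10485760       # max bytes queued for write over all TCP connections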
What is tcp_write_buf and how do I enable it? I could not find anything about it in the core cookbook.
-- Juha
Hi Juha,

This issue is discussed here: https://github.com/kamailio/kamailio/pull/1376
As a temporary solution I use the jsonrpcs fifo for big results. For example:

    cat /tmp/kamailio_jsonrpc_reply_fifo > lastcontact.dump &
    echo '{"jsonrpc": "2.0", "method": "htable.dump", "params":["last_contact"], "reply_name": "kamailio_jsonrpc_reply_fifo", "id": 1 }' > /tmp/kamailio_jsonrpc.fifo
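This assumes the jsonrpcs module is loaded with its FIFO transport enabled, roughly like this in kamailio.cfg (a sketch; the fifo_name parameter is documented in the jsonrpcs module, the path matches the example above):

    loadmodule "jsonrpcs.so"
    modparam("jsonrpcs", "fifo_name", "/tmp/kamailio_jsonrpc.fifo")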
Dmitri Savolainen writes:
This issue is discussed here: https://github.com/kamailio/kamailio/pull/1376
Thanks for the pointer.
As a temporary solution I use the jsonrpcs fifo for big results.
I tried that too (over http) and it worked.
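For the archives: the http side is the jsonrpcs module dispatching requests from the xhttp event route; a rough sketch of such a setup (the listen address and /jsonrpc path are taken from Dmitri's curl example further down; the rest follows the module docs):

    listen=tcp:192.168.10.190:5071
    loadmodule "xhttp.so"
    loadmodule "jsonrpcs.so"

    event_route[xhttp:request] {
        # hand JSON-RPC requests over to the jsonrpcs module
        if ($hu =~ "^/jsonrpc") {
            jsonrpc_dispatch();
        } else {
            xhttp_reply("404", "Not Found", "text/html", "not found");
        }
    }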
-- Juha
Dmitri Savolainen writes:
This issue is discussed here: https://github.com/kamailio/kamailio/pull/1376
There Daniel asks about increasing tcp_conn_wq_max and tcp_wq_max, which gets back to my original question: what is tcp_write_buf and how do I enable it?
-- Juha
On Wednesday, 15 August 2018 at 18:24:58 CEST, Juha Heinanen wrote:
Dmitri Savolainen writes:
This issue is discussed here: https://github.com/kamailio/kamailio/pull/1376
There Daniel asks about increasing tcp_conn_wq_max and tcp_wq_max, which gets back to my original question: what is tcp_write_buf and how do I enable it?
Hi Juha,
It needed a bit of deeper digging in the repository... ;-)
I found it in commit 885b9f62e10f45364993c597 from 2007 - I think there is a spelling mistake in the docs. I will fix this in the wiki for the latest versions.
The correct spelling should be tcp_buf_write (new name tcp_async). You can find this in the core docs.
Best regards,
Henning
Henning Westerholt writes:
The correct spelling should be tcp_buf_write (new name tcp_async). You can find this in the core docs.
Thanks. tcp_async is enabled by default. I then went and set
tcp_conn_wq_max=256000
After that, I was able to fetch 1000 mtree records by issuing the mtree.list command over xmlrpc.
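So in the core section of my config this now amounts to roughly (a sketch; tcp_async was already on by default):

    tcp_async = yes             # enables the async write queue (default: yes)
    tcp_conn_wq_max = 256000    # per-connection write queue, raised from the 32K default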
-- Juha
Juha Heinanen writes:

After that, I was able to fetch 1000 mtree records by issuing the mtree.list command over xmlrpc.
It works, of course. But if the response becomes larger, it will fail again. For example, with a 30K-row htable:
[root@sw5 src]# curl -X GET -H "Content-Type: application/json" -d '{"jsonrpc": "2.0", "method": "htable.dump", "params":["big_htable"], "id": 1 }' http://192.168.10.190:5071/jsonrpc/ > /tmp/big_htable.data
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 59 4181k   59 2479k  100    79  13.4M    437 --:--:-- --:--:-- --:--:-- 13.5M
curl: (18) transfer closed with 1743335 bytes remaining to read
Now I would have to set tcp_conn_wq_max > 1743335; for a 40K-row htable, tcp_conn_wq_max > 3153335.

So I am afraid to increase the default value (32K) that much, given possible side effects on SIP TCP connections.
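One way to size it in advance, if you do want to raise it, is to pull the same dump through the fifo first (as in my earlier example, so the TCP write queue is not involved) and measure the reply; a sketch, assuming the same fifo paths as above:

    # dump the htable through the fifo and count the bytes
    cat /tmp/kamailio_jsonrpc_reply_fifo > /tmp/big_htable.json &
    echo '{"jsonrpc": "2.0", "method": "htable.dump", "params":["big_htable"], "reply_name": "kamailio_jsonrpc_reply_fifo", "id": 1 }' > /tmp/kamailio_jsonrpc.fifo
    wc -c /tmp/big_htable.json   # tcp_conn_wq_max would have to exceed roughly this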
Dmitri Savolainen writes:
So I am afraid to increase the default value (32K) that much, given possible side effects on SIP TCP connections.
I agree. That is why my plan is to switch from the xmlrpc transport to http, which seems to work without any need to increase tcp_conn_wq_max.
-- Juha