### Description

We are trying to perform an outbound call through Kamailio (5.6). When doing so, we get an expected 407 response from the trunk provider. In response we try to authenticate. The subsequent INVITE sent from Kamailio to the trunk provider contains a bad Content-Length header.

If the Content-Length value was 1234 in the original INVITE, the re-sent INVITE carries the value 12341234. So it looks like the value is not cleared before it is set again.
### Troubleshooting

This is the snippet we used to troubleshoot:

```
xlog("L_WARN", "TRUNKAUTH entered, message: $mbu\n");
# AT THIS POINT Content-Length = 1234 (just an example)
if (t_check_status("401|407")) {
    $avp(auser) = "testuser";
    $avp(apass) = "credshere";
    uac_auth();
    xlog("L_WARN", "TRUNKAUTH Auth done, updated message: $mbu\n");
    # AT THIS POINT Content-Length = 12341234
    route(RELAY);
}
```

#### Reproduction

See troubleshooting.
### Additional Information
* **Kamailio Version** - output of `kamailio -v`
```
kamailio 5.6.2 (x86_64/linux)
```
* **Operating System**:
```
kamailio 5.6.2 (x86_64/linux)
```
Thanks for the report. So you are actually modifying the Content-Length header in your configuration? Or are you observing the change by not touching this header in the cfg at all?
Hi @henningw
We are not modifying the Content-Length header. Just observing the change and not touching the header.
The header changes after the call to uac_auth(), so in that flow something changes the Content-Length header value. The first $mbu log is OK, while the second $mbu log shows the Content-Length value twice.
xlog("L_WARN", "TRUNKAUTH entered, message: $mbu\n"); **AT THIS POINT CONTENT_HEADER = 1234 (just an example)** if(t_check_status("401|407")) { $avp(auser) = "testuser"; $avp(apass) = "credshere"; **uac_auth();** xlog("L_WARN", "TRUNKAUTH Auth done, updated message: $mbu\n"); **AT THIS POINT CONTENT_HEADER = 12341234**
Kamailio itself also detects the faulty header value once it grows beyond the allowed integer size. In this example the value was even concatenated 3 times:
```
2022-11-03 13:42:58 28(53) ERROR: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/parser/msg_parser.c:340]: parse_headers(): bad header field [Content-Length: 129712971297
2022-11-03 13:42:58 28(53) DEBUG: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/parser/msg_parser.c:281]: get_hdr_field(): error exit
2022-11-03 13:42:58 28(53) ERROR: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/parser/msg_parser.c:182]: get_hdr_field(): bad content_length header
2022-11-03 13:42:58 28(53) ERROR: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/parser/parse_content.c:276]: parse_content_length(): parse error near char [57][9]
2022-11-03 13:42:58 28(53) ERROR: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/parser/parse_content.c:237]: parse_content_length(): content lenght value is too large
```
@henningw ,
I see that for the example I gave above, get_hdr_field() is called once, and after that clen_builder() is called exactly 3 times, which corresponds with the three-times-concatenated value **129712971297**:
```
2022-11-03 13:42:58 28(53) DEBUG: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:432]: clen_builder(): content-length: 1297 (1297)
2022-11-03 13:42:58 28(53) DEBUG: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:432]: clen_builder(): content-length: 1297 (1297)
2022-11-03 13:42:58 28(53) DEBUG: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:432]: clen_builder(): content-length: 1297 (1297)
```
```
13:42:57 13(38) DEBUG: {Thu Nov 3 12:42:57 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/parser/msg_parser.c:187]: get_hdr_field(): content_length=1297
```
@henningw investigating further: between our own TRUNKAUTH log entries, while uac_auth() is called, we also see these errors. I wonder if they are related and if you could point me in the right direction to solve these criticals:
```
2022-11-03 13:42:58 28(53) CRITICAL: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:494]: lump_check_opt(): null send socket
2022-11-03 13:42:58 28(53) CRITICAL: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:494]: lump_check_opt(): null send socket
2022-11-03 13:42:58 28(53) CRITICAL: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:494]: lump_check_opt(): null send socket
2022-11-03 13:42:58 28(53) CRITICAL: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:1482]: process_lumps(): FIXME: null bind_address
2022-11-03 13:42:58 28(53) CRITICAL: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:494]: lump_check_opt(): null send socket
2022-11-03 13:42:58 28(53) DEBUG: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/mem/q_malloc.c:419]: qm_malloc(): qm_malloc(0x7f35c476e010, 2240) returns address 0x7f35c48ee960 frag. 0x7f35c48ee928 (size=3576) on 1 -th hit
2022-11-03 13:42:58 28(53) DEBUG: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/mem/q_malloc.c:374]: qm_malloc(): qm_malloc(0x7f35c476e010, 2238) called from core: core/msg_translator.c: build_req_buf_from_sip_req(2235)
2022-11-03 13:42:58 28(53) CRITICAL: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:494]: lump_check_opt(): null send socket
2022-11-03 13:42:58 28(53) CRITICAL: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:494]: lump_check_opt(): null send socket
2022-11-03 13:42:58 28(53) CRITICAL: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:494]: lump_check_opt(): null send socket
2022-11-03 13:42:58 28(53) CRITICAL: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:896]: lumps_len(): FIXME: null send_sock
2022-11-03 13:42:58 28(53) CRITICAL: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} <core> [core/msg_translator.c:494]: lump_check_opt(): null send socket
```
@henningw any ideas?
Hi All / @henningw,
Could this problem be caused by the Proxy-Authorization header being placed after the Content-Length header?

This error message looks a bit odd. Can you check that you are doing everything correctly in your cfg regarding the handling of the INVITE? Maybe there is another error in the cfg?
`CRITICAL: {Thu Nov 3 12:42:58 2022 1 17733 INVITE 9326c0d2-2653-4041-8977-eecf1827a79c} [core/msg_translator.c:494]: lump_check_opt(): null send socket`
Regarding the order of the headers in the SIP message, this is usually not a problem as they can be sorted differently.
@henningw, thanks, I'm focusing on this CRIT error, however we don't seem to do anything strange. Do you have an example of a request_route config for an outbound call? Or any direction you can point me in?
Thanks in advance
Hi @henningw,
We tried to solve the "null send socket" issue, however we have had no success yet. Can you point me in a direction where to look: some documentation, etc.?
Thanks
My guess is that you change the body in the request_route (maybe rtpproxy/rtpengine usage or body operations) and then you do it again in the failure_route/branch_route. If that is the case, try to do it only in the branch_route.
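A minimal sketch of that suggestion, assuming a hypothetical branch route name OUTBOUND_MEDIA and leaving out the actual rtpengine flags:

```
request_route {
    # routing logic only; no SDP/body changes here
    if ($rm == "INVITE") {
        t_on_branch("OUTBOUND_MEDIA");    # SDP handled per outgoing branch
        t_on_failure("TRUNKAUTH");
        route(RELAY);
        exit;
    }
}

branch_route[OUTBOUND_MEDIA] {
    # the body change is applied when each outgoing branch is built,
    # instead of accumulating in the request and being applied again on re-routing
    if (has_body("application/sdp")) {
        rtpengine_offer();
    }
}
```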
Hi @miconda,
We are indeed using rtpengine, and we are doing an rtpengine_offer() in the request_route. In the failure_route we are only doing a uac_auth() and then t_relay the message again.
However, just to rule things out: if I remove the rtpengine_offer() from the initial outbound route in the request_route, then the Content-Length is OK. But then of course we don't have the correct SDP.
We are using rtpengine_offer() like this for the outbound call in the request_route: `rtpengine_offer("replace-origin replace-session-connection RTP/AVP SDES-no-NULL_HMAC_SHA1_32 SDES-no-NULL_HMAC_SHA1_80 OSRTP=accept")`
So it is definitely caused by the SDP/body being changed in the request_route in combination with the failure_route doing a uac_auth(). But that should be totally fine, right?
Hope you have more good ideas :)
As I said, move the rtpengine usage from request_route into a branch_route; otherwise, if you touch the body again on re-routing, header operations can show up twice. I think the FAQ on the wiki presents more about this behaviour.
Anyhow, it is optimal to use rtpengine in a branch_route, because that runs once forwarding is decided.
Thanks i will look into it
Hi @miconda ,
We still don't have success. We made sure the rtpengine operations are only done in a branch_route, and we also got rid of the FIXME errors. Still, it is now even worse: the initial INVITE contains a bad Content-Length like Content-Length: 12971297. Previously the initial one was OK; only after a uac_auth() in the failure_route did it go 'bad' :)
It's an annoying issue, but we are committed to solving it.
If you have any thoughts, please let me know (or maybe a working outbound config example we could try)...
Thanks
We have finally solved this issue by playing with branch_routes. Creating another branch_route armed from the failure_route (for auth purposes) and moving rtpengine_offer() and record_route() to this branch_route fixed the issue. We also had to use set_send_socket() to get rid of the FIXME issues.
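For readers hitting the same problem, a rough sketch of the arrangement described above; the branch route name, the socket address, the exact placement of set_send_socket(), and the trimmed rtpengine flags are illustrative assumptions, not the actual production configuration:

```
failure_route[TRUNKAUTH] {
    # fix the "null send socket" FIXME warnings seen while the outgoing
    # message is built here (placement is an assumption; adjust to your setup)
    set_send_socket("udp:203.0.113.10:5060");

    if (t_check_status("401|407")) {
        $avp(auser) = "testuser";
        $avp(apass) = "credshere";
        uac_auth();
        # dedicated branch route for the authenticated re-send
        t_on_branch("TRUNKAUTH_BRANCH");
        route(RELAY);
    }
}

branch_route[TRUNKAUTH_BRANCH] {
    # per-branch operations for the re-sent INVITE
    record_route();
    rtpengine_offer("replace-origin replace-session-connection");
}
```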
Thanks for the help.
Closed #3275 as completed.
OK, good that it was solved and the feedback here should help others with similar issues.