xiaoxiongxyy created an issue (kamailio/kamailio#4292)
Version: 5.7.3
## Kamailio DMQ-related configuration:

```
loadmodule "dmq.so"
modparam("dmq", "server_address", DMQ_SIP_URL)
modparam("dmq", "server_socket", DMQ_UDP_URL)
modparam("dmq", "notification_channel", "peers")
#!ifexp POD_NAME == "kamailio-bot-xyy-0"
modparam("dmq", "notification_address", "sip:kamailio-bot-xyy-1.kamailio-bot-xyy-headless.devops.svc.bedin.bj3:5062")
#!endif
#!ifexp POD_NAME == "kamailio-bot-xyy-1"
modparam("dmq", "notification_address", "sip:kamailio-bot-xyy-0.kamailio-bot-xyy-headless.devops.svc.bedin.bj3:5062")
#!endif
modparam("dmq", "multi_notify", 1)
modparam("dmq", "num_workers", 4)
modparam("dmq", "ping_interval", 15)

# ----- htable params -----
loadmodule "htable.so"
modparam("htable", "enable_dmq", 1)
modparam("htable", "dmq_init_sync", 1)
# #!ifdef WITH_DMQ
# modparam("htable", "htable", "ipban=>size=8;autoexpire=300;dmqreplicate=1;")
# #!else
# modparam("htable", "htable", "ipban=>size=8;autoexpire=300;")
# #!endif

# ----- dialog params -----
loadmodule "dialog.so"
modparam("dialog", "db_url", DBURL)
modparam("dialog", "track_cseq_updates", 0)
modparam("dialog", "dlg_match_mode", 2)
# modparam("dialog", "timeout_avp", "$avp(i:10)")
modparam("dialog", "enable_stats", 1)
modparam("dialog", "db_mode", 2)
modparam("dialog", "dlg_flag", 9)
modparam("dialog", "enable_dmq", 1)
modparam("dialog", "db_update_period", 10)
modparam("dialog", "send_bye", 1)
modparam("dialog", "default_timeout", 3600)
modparam("dialog", "end_timeout", 60)

# ----- usrloc params -----
loadmodule "dmq_usrloc.so"
modparam("dmq_usrloc", "enable", 1)
modparam("dmq_usrloc", "sync", 1)
modparam("dmq_usrloc", "replicate_socket_info", 1)
modparam("dmq_usrloc", "usrloc_domain", "location")
```
## Phenomenon:

kamailio-0:

freeswitch:

The current problem: the 180 and 200 responses are routed to the other node and sent back along the same path, but the node that originally received the INVITE is not aware of them. From the official documentation I learned that DMQ cannot replicate responses, so the transaction state is not synchronized, and the first pod eventually generates a 408 timeout that tears down the session. This problem has troubled me for a month and I have no idea where to start. Please help.
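For context, SIP responses are routed back along the topmost Via header, so one thing worth checking (this is an assumption, not part of the original report) is whether each pod advertises its own pod-specific address instead of a shared service address, so that the 180/200 return to the node that owns the transaction. A minimal sketch, reusing the pod hostnames from the DMQ configuration above and assuming SIP on UDP port 5060:

```
# each pod advertises its own headless-service hostname in Via/Record-Route
# (hostnames taken from the DMQ configuration; port 5060 is an assumption)
#!ifexp POD_NAME == "kamailio-bot-xyy-0"
listen=udp:0.0.0.0:5060 advertise kamailio-bot-xyy-0.kamailio-bot-xyy-headless.devops.svc.bedin.bj3:5060
#!endif
#!ifexp POD_NAME == "kamailio-bot-xyy-1"
listen=udp:0.0.0.0:5060 advertise kamailio-bot-xyy-1.kamailio-bot-xyy-headless.devops.svc.bedin.bj3:5060
#!endif
```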
Closed #4292 as completed.
henningw left a comment (kamailio/kamailio#4292)
The SIP transaction status is not synchronized, and it also cannot be easily implemented. This was discussed several times on the sr-users mailing list. Please contact this mailing list with your question about other architectural approaches for high availability. The issue tracker is for tracking issues in the source code.
xiaoxiongxyy left a comment (kamailio/kamailio#4292)
> The SIP transaction status is not synchronized, and it also cannot be easily implemented. This was discussed several times on the sr-users mailing list. Please contact this mailing list with your question about other architectural approaches for high availability. The issue tracker is for tracking issues in the source code.
I understand that the transaction status is out of sync. Is there a solution for the situation I encountered, for example replicating the 180/200 responses to the other node? Is there any similar implementation or solution that can be used?