Hi Henning, thanks a lot for your answer.
I'm sending you more information about our Kamailio XMPP server.
Attached is the output of "sercmd cfg.set_now_int core mem_dump_shm".
And here are the memory statistics:
kamctl fifo get_statistics | grep mem
shmem:fragments = 13107
shmem:free_size = 99724520
shmem:max_used_size = 57559216
shmem:real_used_size = 34493208
shmem:total_size = 134217728
shmem:used_size = 25651280
Thanks and kind regards,
Laura
On Wed, Jan 25, 2012 at 7:18 PM, Henning Westerholt <hw@kamailio.org> wrote:
On Tuesday 24 January 2012, laura testi wrote:
we are using the XMPP gateway modules:
- PUA
- PUA_XMPP
- XMPP
The behavior is OK, but since the modules were started, memory usage has kept increasing, and now the swap area is also in use. I'm sending you the following information:
Hello Laura,
so Kamailio uses so much memory that your machine runs out of memory? This sounds like a shared memory issue to me. Can you have a look at the shm_mem as well?
- Number of records in the pua and xmpp_presentity tables; they are quite stable:
SQL> select count(*) from pua;
COUNT(*)
10218
SQL> select count(*) from xmpp_presentity;
COUNT(*)
754
This output is in PKG memory, but it does indeed look a bit suspicious. Have you looked into the respective code for this module, presence: utils_func.h and uandd_to_uri()?
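For context, here is a minimal standalone sketch of the allocation pattern at that call site (not the verbatim Kamailio source; str, pkg_malloc() and pkg_free() are real core APIs, replaced below by stand-ins so the sketch compiles on its own). uandd_to_uri() builds "sip:user@domain" into a freshly allocated buffer, so ownership passes to the caller, and any caller that never frees it leaks one small PKG fragment per call - which would match the thousands of identical "uandd_to_uri(66)" entries in the dumps below.

#include <stdlib.h>
#include <string.h>

/* Stand-ins for Kamailio's core str type and PKG allocator
 * (normally from str.h and mem/mem.h) so this compiles standalone. */
typedef struct { char *s; int len; } str;
#define pkg_malloc(n) malloc(n)
#define pkg_free(p)   free(p)

/* Sketch of the pattern in presence/utils_func.h:
 * build "sip:user@domain" into a newly allocated buffer. */
static int uandd_to_uri(str user, str domain, str *out)
{
    out->len = 4 + user.len + 1 + domain.len;   /* "sip:" + user + "@" + domain */
    out->s = pkg_malloc(out->len + 1);          /* the allocation gdb attributes to line 66 */
    if (out->s == NULL)
        return -1;
    memcpy(out->s, "sip:", 4);
    memcpy(out->s + 4, user.s, user.len);
    out->s[4 + user.len] = '@';
    memcpy(out->s + 5 + user.len, domain.s, domain.len);
    out->s[out->len] = '\0';
    return 0;
}

/* Ownership passes to the caller: skipping this pkg_free() on any
 * code path leaks one small PKG fragment per call. */
static void caller_example(str user, str domain)
{
    str uri;
    if (uandd_to_uri(user, domain, &uri) != 0)
        return;
    /* ... use uri.s / uri.len ... */
    pkg_free(uri.s);
}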
- List of PKG fragments obtained with gdb:
$1 = {size = 40, u = {nxt_free = 0x0, is_free = 0}, file = 0x2b657aa12c00 "presence: utils_func.h", func = 0x2b657aa1aed0 "uandd_to_uri", line = 66, check = 4042322160}
[..]
$17 = {size = 40, u = {nxt_free = 0x0, is_free = 0}, file = 0x2b657aa12c00 "presence: utils_func.h", func = 0x2b657aa1aed0 "uandd_to_uri", line = 66, check = 4042322160}
[..]
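For reference when reading those gdb entries: with memory debugging enabled, Kamailio's q_malloc allocator prefixes each fragment with a header recording its allocation site, roughly like this (a simplified sketch of struct qm_frag from mem/q_malloc.h; the real struct has more detail, and 4042322160 is 0xf0f0f0f0, the start-marker pattern):

/* Simplified sketch of the per-fragment debug header in Kamailio's
 * mem/q_malloc.h (DBG_QM_MALLOC builds); the fields match the gdb
 * output above. Not the verbatim source. */
struct qm_frag {
    unsigned long size;           /* usable size of the fragment (40 above) */
    union {
        struct qm_frag *nxt_free; /* next fragment on the free list */
        long is_free;             /* 0 while the fragment is allocated */
    } u;
    const char *file;             /* allocating file: "presence: utils_func.h" */
    const char *func;             /* allocating function: "uandd_to_uri" */
    unsigned long line;           /* allocating line: 66 */
    unsigned long check;          /* 0xf0f0f0f0 start marker (= 4042322160) */
};

Many live fragments all pointing at the same file/function/line, as here, usually mean the result of that one call site is never freed.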
- Output of sercmd cfg.set_now_int core mem_dump_pkg:
[..]
qm_status: 10969. N address=0xb1b320 frag=0xb1b2f0 size=104 used=1
qm_status: alloc'd from presence: utils_func.h: uandd_to_uri(66)
qm_status: start check=f0f0f0f0, end check= c0c0c0c0, abcdefed
qm_status: 10970. N address=0xb1b3e8 frag=0xb1b3b8 size=32 used=1
qm_status: alloc'd from presence: utils_func.h: uandd_to_uri(66)
qm_status: start check=f0f0f0f0, end check= c0c0c0c0, abcdefed
qm_status: 10971. N address=0xb1b468 frag=0xb1b438 size=32 used=1
qm_status: alloc'd from presence: utils_func.h: uandd_to_uri(66)
qm_status: start check=f0f0f0f0, end check= c0c0c0c0, abcdefed
qm_status: dumping free list stats :
qm_status: hash= 1. fragments no.: 4, unused: 0 bucket size: 8 - 8 (first 8)
qm_status: hash= 2. fragments no.: 8, unused: 0 bucket size: 16 - 16 (first 16)
qm_status: hash= 3. fragments no.: 6, unused: 0 bucket size: 24 - 24 (first 24)
qm_status: hash= 4. fragments no.: 1, unused: 0 bucket size: 32 - 32 (first 32)
qm_status: hash= 6. fragments no.: 3, unused: 0 bucket size: 48 - 48 (first 48)
qm_status: hash= 8. fragments no.: 8, unused: 0 bucket size: 64 - 64 (first 64)
qm_status: hash= 9. fragments no.: 6, unused: 0 bucket size: 72 - 72 (first 72)
qm_status: hash= 12. fragments no.: 5, unused: 0 bucket size: 96 - 96 (first 96)
qm_status: hash= 14. fragments no.: 6, unused: 0 bucket size: 112 - 112 (first 112)
qm_status: hash= 17. fragments no.: 32, unused: 0 bucket size: 136 - 136 (first 136)
qm_status: hash= 19. fragments no.: 1, unused: 0 bucket size: 152 - 152 (first 152)
qm_status: hash= 21. fragments no.: 1, unused: 0 bucket size: 168 - 168 (first 168)
qm_status: hash= 29. fragments no.: 1, unused: 0 bucket size: 232 - 232 (first 232)
qm_status: hash= 31. fragments no.: 2, unused: 0 bucket size: 248 - 248 (first 248)
qm_status: hash= 33. fragments no.: 2, unused: 0 bucket size: 264 - 264 (first 264)
qm_status: hash= 36. fragments no.: 1, unused: 0 bucket size: 288 - 288 (first 288)
qm_status: hash= 57. fragments no.: 1, unused: 0 bucket size: 456 - 456 (first 456)
qm_status: hash= 60. fragments no.: 1, unused: 0 bucket size: 480 - 480 (first 480)
qm_status: hash= 76. fragments no.: 709, unused: 0 bucket size: 608 - 608 (first 608)
qm_status: hash= 88. fragments no.: 1, unused: 0 bucket size: 704 - 704 (first 704)
qm_status: hash= 116. fragments no.: 1, unused: 0 bucket size: 928 - 928 (first 928)
qm_status: hash= 145. fragments no.: 1, unused: 0 bucket size: 1160 - 1160 (first 1160)
qm_status: hash= 216. fragments no.: 2, unused: 0 bucket size: 1728 - 1728 (first 1728)
qm_status: hash= 2049. fragments no.: 1, unused: 0 bucket size: 16384 - 32768 (first 22032)
qm_status: hash= 2057. fragments no.: 1, unused: 0 bucket size: 4194304 - 8388608 (first 5982824)
Is it a memory leak in one of these modules?
Peter recently found a lot of leaks in other presence modules (take a look at the sr-dev list), so it is possible that there are more in this module.
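The usual shape of such fixes (a hypothetical illustration, not Peter's actual patches) is to make sure every exit path releases intermediate PKG buffers, e.g. via a single cleanup label; it reuses the uandd_to_uri() sketch above, and send_somewhere() is a made-up stand-in for the module's real processing:

/* Hypothetical stand-in for the module's real send/processing step. */
static int send_somewhere(const str *uri) { (void)uri; return 0; }

static int build_and_send(str user, str domain)
{
    str uri;
    int ret = -1;

    if (uandd_to_uri(user, domain, &uri) != 0)
        return -1;

    if (send_somewhere(&uri) != 0)
        goto done;          /* an early "return -1" here would leak uri.s */

    ret = 0;
done:
    pkg_free(uri.s);        /* single exit point: freed on every path */
    return ret;
}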
Viele Grüße / best regards,
Henning Westerholt