Daniel,
Daniel-Constantin Mierla wrote:
...
>> Running sipp with a call rate of 500 cps results in 45 MB of memory
>> allocation per openser process after a few seconds. What could cause
>> this behavior, message queuing maybe? But then again, having a message
>> queue of 40+ MB seems unrealistic.
> It is not per process; it is the shared memory, I guess. By default,
> each openser process can use a maximum of 1MB of private memory.
> Shared memory is 32MB by default and can be changed with the -m
> command line parameter. Usage cannot go over that limit; hitting it
> is the moment you get out-of-memory errors. The whole pool is always
> visible as allocated by openser, but parts of it can be free. You can
> use "openserctl fifo get_statistics all" to see how much of the
> memory is really used.
Thanks for the clarification; I was always looking at the 'top' output,
which shows the total shared memory as per-process reserved memory. The
fifo statistics are very helpful.
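For the archives, the two knobs together look roughly like this (a
sketch; 768 and the config path are just the values from my own setup):

    # start openser with a larger shared memory pool (size in MB)
    openser -m 768 -f /etc/openser/openser.cfg

    # then inspect at runtime how much of that pool is actually used
    openserctl fifo get_statistics all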
> A transaction stays in memory a while longer after the final response
> has been sent. The transaction holds at least twice the size of the
> initial request, plus a reply at some point, so it demands some
> memory. If you get high load at some moment, you need more memory.
> Try setting the shared memory size to higher values and run the tests
> again.
Ok, so memory consumption is directly related to the number of ongoing
SIP transactions. Confusingly, 'top' only shows the maximum used memory,
so I was thinking that openser didn't release memory after stopping the
SIP flood.
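As a rough sanity check (my own assumptions: INVITEs of about 2KB on
the wire, and the default wait timer of roughly 5s that keeps a
completed transaction in memory):

    concurrent transactions ~ 400 cps * 5 s          = 2000
    memory per transaction  ~ 2 * 2KB + small reply  = ~5KB
    total transaction data  ~ 2000 * 5KB             = ~10MB

That is nowhere near 768MB, so plain transaction storage alone cannot
be what exhausts the pool.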
I always had 768MB of shared memory configured though, so I still can't
explain the memory allocation errors I got. Some more test runs revealed
that I only get these errors when using a more production-oriented
config that loads more modules than the one posted in my earlier email.
I am now trying to figure out what exactly causes these memory
allocation errors, which happen reproducibly after about 220s at 400 cps.
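To narrow it down I am now polling the shared memory counters during
the flood, along these lines (assuming the shmem: statistics group of
the 1.2 statistics framework; the exact counter names may differ):

    while true; do
        openserctl fifo get_statistics shmem: | egrep 'used_size|fragments'
        sleep 5
    done

If real_used_size keeps climbing across runs, that would point to a
leak rather than a pure sizing problem.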
I could successfully sustain about 2000 cps over a long period with the
simple config; this is probably limited by the single sipp UAC instance
rather than by openser's performance.
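For reference, the flood on my side is just the stock sipp UAC
scenario, along these lines (illustrative command line; the target
address is a placeholder):

    sipp -sn uac -r 2000 -l 20000 192.0.2.1:5060

where -r is the call rate in cps and -l caps the number of simultaneous
calls.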
I also tested usrloc in db-only mode with a MySQL cluster storing
30'000 permanent registration entries (location table). With this setup
I could achieve about 800 cps (randomized user population), which is a
very good number. With a BHCA of 5 this would be enough to support over
500'000 user agents.
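The arithmetic behind that last claim, in case someone wants to check
it:

    800 cps * 3600 s/h = 2'880'000 call attempts per busy hour
    2'880'000 / 5 BHCA = 576'000 user agents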
thanks,
Christian
>> And the memory consumption goes up the longer the test runs, until
>> openser dies because of out-of-memory.
> What do you mean by "dies"? Does it crash, or does it just print a
> lot of error messages to syslog? If you stop the sip flood and try
> again a bit later, do you get the same errors as the first time?
>> I observed the same behavior with openser 1.1 and 1.2. Is this a
>> memory leak, or did I miss some settings?
> We needed to set the shared memory to 768MB to be able to flood
> openser at the full power of 2 sipp instances. The results were about
> 4000 cps per instance.
>> And what's the best method to debug memory issues with openser?
> Not very easy right now; compiling without F_MALLOC and with
> DBG_QM_MALLOC is the best option. Plans to improve this aspect are
> scheduled for the next release.
> Cheers,
> Daniel
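P.S. Regarding the debug build: my reading is that it amounts to
something like the following (untested sketch; the define names are
from Makefile.defs as I understand it and may differ between versions):

    # in Makefile.defs: swap the fast allocator for the debugging one
    #   comment out:  DEFS+= -DF_MALLOC
    #   enable:       DEFS+= -DDBG_QM_MALLOC
    make clean && make all

    # and lower memlog in openser.cfg so allocator reports end up in syslog
    memlog=1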