We are stress testing SER with location and subscriber databases
containing 500K to 1 million entries, using an artificially generated
workload.
With the stock SER configuration, when we stop and restart SER, it runs
out of memory almost immediately upon startup.
It runs out of memory in convert_rows() at around row 2740 of the more
than 500,000 rows.
If we bump PKG_MEM_POOL_SIZE up from 1024*1024 to 100*1024*1024, SER
starts up successfully. However, my concern is that the memory pool is
allocated in each of the 20 or 30 children, consuming a lot of resources.
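For reference, this is the one-line change we made. In our tree the
constant lives in config.h, though I can't swear that is the canonical
place to change it:

    /* config.h -- local change; the stock value was 1024*1024 (1 MB) */
    #define PKG_MEM_POOL_SIZE 100*1024*1024  /* 100 MB pkg pool per process */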
So, my question is this: why isn't this memory allocation truly dynamic
rather than pre-allocated at startup? Or is there some better solution that
I am missing completely?
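To make the concern concrete, here is my (possibly wrong) mental model
of the startup path. This is a simplified sketch, not actual SER code;
every name in it other than PKG_MEM_POOL_SIZE and pkg_malloc() is made up:

    #include <stdlib.h>

    #ifndef PKG_MEM_POOL_SIZE
    #define PKG_MEM_POOL_SIZE (1024*1024)   /* stock value */
    #endif

    static char *pkg_pool;

    /* each child reserves the whole pool once, up front */
    int init_pkg_pool(void)
    {
        pkg_pool = malloc(PKG_MEM_POOL_SIZE);
        if (pkg_pool == NULL)
            return -1;
        /* ... the block is then handed to SER's own allocator, and
         * every later pkg_malloc() carves pieces out of it instead of
         * going to the system malloc() ... */
        return 0;
    }

If that reading is right, 20 or 30 children each holding a 100 MB pool
adds up to 2-3 GB, which is what worries me.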
We also have similar issues with shared memory and need to increase that
allocation via the -m command-line option to ser.
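For example, we currently start it along these lines (the 512 is just an
illustrative figure; we are still tuning the actual size):

    # -m takes the shared memory size in megabytes
    ser -m 512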
FWIW, we are running SER 0.8.14 with a couple of local modifications.
Any advice would be appreciated.
Thanks.
Bruce Bauman
Sr. Principal S/W Engineer
WorldGate Communications, Inc.
3190 Tremont Avenue
Trevose, PA 19053
Office: 215.354.5124
Cell: 215.768.8613