On 08-11-2005 17:32, Bruce Bauman wrote:
> We are stress testing SER with location and subscriber databases
> containing 500K to 1 million entries, using an artificially generated
> workload.
> With the stock SER configuration, when we stop and restart SER, it runs
> out of memory almost immediately upon startup.
> It runs out of memory in convert_rows() at around row #2740 out of many
> (> 500000) rows.
> If we bump up PKG_MEM_POOL_SIZE from 1024*1024 to 100*1024*1024, SER starts
> up successfully. However, my concern is that the memory pool is allocated in
> each of the 20 or 30 children, sucking up a lot of resources.
No, this is done just once before SER forks.
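To make that concrete: PKG_MEM_POOL_SIZE is a compile-time constant, so
increasing it means rebuilding SER, and the pool is obtained once in the main
process; the children created later by fork() inherit it rather than each
allocating their own. A minimal C sketch of that pattern (not SER's actual
code):

    #include <stdlib.h>
    #include <unistd.h>

    #define PKG_MEM_POOL_SIZE (100*1024*1024)   /* bumped from 1024*1024 */

    static char *pkg_pool;

    int main(void)
    {
        int i;

        /* The pool is allocated exactly once, before forking. */
        pkg_pool = malloc(PKG_MEM_POOL_SIZE);
        if (pkg_pool == NULL)
            return 1;

        /* The 20 or 30 children created afterwards inherit the mapping;
         * on Linux the pages are copy-on-write, so physical memory is not
         * duplicated until a child actually writes to its copy. */
        for (i = 0; i < 20; i++) {
            if (fork() == 0) {
                /* child: would work out of the inherited pkg_pool here */
                _exit(0);
            }
        }
        return 0;
    }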
> So, my question is this: why isn't this memory allocation truly dynamic
> rather than pre-allocated at startup? Or is there some better solution that
> I am missing completely?
Actually, the memory is consumed by the functions that convert the result
of the MySQL query; this does not happen during runtime. The memory pool is
limited in size because SER uses a custom memory allocator.
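For illustration, here is a toy fixed-pool allocator (not SER's actual
allocator; the function name and the per-row size are made up) showing why
converting a very large query result can exhaust the pool even though the
system still has plenty of free memory:

    #include <stdio.h>
    #include <stddef.h>

    #define POOL_SIZE (1024*1024)       /* analogous to PKG_MEM_POOL_SIZE */

    static char pool[POOL_SIZE];
    static size_t used;

    /* Hand out memory from the fixed pool; once it is used up, fail,
     * regardless of how much memory the OS could still provide. */
    static void *pool_alloc(size_t n)
    {
        void *p;

        if (used + n > POOL_SIZE)
            return NULL;
        p = pool + used;
        used += n;
        return p;
    }

    int main(void)
    {
        size_t row;

        /* Simulate converting a huge result set row by row, roughly what
         * convert_rows() does; assume ~400 bytes per converted row. */
        for (row = 0; row < 500000; row++) {
            if (pool_alloc(400) == NULL) {
                printf("pool exhausted at row %zu\n", row);
                return 1;
            }
        }
        return 0;
    }

With a 1 MB pool and ~400 bytes per row this fails after a couple of thousand
rows, the same order of magnitude as the failure near row #2740 reported above.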
Jan.