@charlesrchance - a solution is to use a shared-memory counter to track how many tasks need to be consumed, like:
// mod init
int *dmq_usrloc_tasks = shm_malloc(sizeof(int));
*dmq_usrloc_tasks = 0;
gen_lock_t *dmq_usrloc_tasks_lock = ... // usual lock initialization
// runtime - producer
lock_get(dmq_usrloc_tasks_lock);
(*dmq_usrloc_tasks)++; // note the parentheses: *dmq_usrloc_tasks++ would increment the pointer
lock_release(dmq_usrloc_tasks_lock);
// runtime - consumer
while (1) {
lock_get(dmq_usrloc_tasks_lock);
while(*dmq_usrloc_tasks>0) (*dmq_usrloc_tasks)--; // consume one task per decrement
lock_release(dmq_usrloc_tasks_lock);
sleep_us(dmq_usrloc_tasks_sleep);
}
The incrementing/decrementing needs to be accompanied by producing/consuming the tasks.
The key here is the sleep_us(), where dmq_usrloc_tasks_sleep can be a new mod param specifying the milliseconds (or microseconds) to sleep. In practice, triggering a sleep yields the CPU from the process and keeps the CPU load low. One can tune its value to better suit the environment.
IIRC, this approach may be the one used in the async module for async_route().
An alternative is using in-memory sockets to pass tasks from producers to consumers. The consumer should be blocked in a read(), which should not consume CPU; although on some systems I got read() returning quickly with an error, ending up again in a loop of fast reads and high CPU. This should be the approach used by evapi to pass events between SIP workers and the evapi worker.