Enabled via module parameter.
First commit:
- initial implementation - replication of presentity updates over DMQ
- adds ruid column for matching across cluster
Tested and in use across a small production cluster (`dialog` and `message-summary` events).
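For anyone wanting to try it, a minimal sketch of the relevant kamailio.cfg additions; the parameter name (`enable_dmq`) is the one added in the docs commit further below, while the DMQ addresses and the rest of the routing config are purely illustrative:

```
loadmodule "dmq.so"
loadmodule "presence.so"

# DMQ peering between cluster nodes (addresses are illustrative)
modparam("dmq", "server_address", "sip:10.0.0.1:5060")
modparam("dmq", "notification_address", "sip:10.0.0.2:5060")

# replicate presentity updates to the other DMQ nodes
modparam("presence", "enable_dmq", 1)
```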
Need to update docs prior to merging, but raising the PR first to hear any feedback.

You can view, comment on, or merge this pull request online at:
https://github.com/kamailio/kamailio/pull/1402
-- Commit Summary --
* presence: dmq integration
-- File Changes --
M src/modules/dmq/dmq.c (2)
M src/modules/dmq/dmqnode.c (8)
M src/modules/dmq/dmqnode.h (2)
M src/modules/presence/Makefile (1)
M src/modules/presence/notify.c (1)
M src/modules/presence/notify.h (1)
M src/modules/presence/presence.c (20)
M src/modules/presence/presence.h (4)
A src/modules/presence/presence_dmq.c (489)
A src/modules/presence/presence_dmq.h (50)
M src/modules/presence/presentity.c (270)
M src/modules/presence/presentity.h (8)
M src/modules/presence/publish.c (10)
-- Patch Links --
https://github.com/kamailio/kamailio/pull/1402.patch
https://github.com/kamailio/kamailio/pull/1402.diff
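For local testing before merge, the patch link can be applied in the usual way (standard GitHub patch workflow, nothing specific to this PR):

```
curl -sL https://github.com/kamailio/kamailio/pull/1402.patch | git am
```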
Also need to update the database schema definition for the presentity table.
Just curious: why would you want to replicate the presentity records? Wouldn't it make more sense to replicate the active_watchers records? Much like how `usrloc_dmq` replicates location data (which enables an INVITE to be sent to a user from any kamailio machine in the cluster), replicating the active_watchers data would allow a NOTIFY to be sent out from any kamailio machine in the cluster.
Closed #1402.
It’s a valid alternative, and one which works well if the subscribers are not in direct contact with the presence servers and instead there’s a load balancer in-between.
Where subscribers are in direct contact and also behind NAT, the NOTIFYs will need to go via the server on which the subscription was received.
Also, it is assumed that there are typically more subscriptions than presentities, so replicating the watcher records not only adds more internal traffic (proportional to the number of nodes), but also a dependency on each server for the lifetime of its 'own' watcher records.
If each node is responsible for its own subscribers, however, then if one disappears its subscribers simply move elsewhere (as a result of dead keepalives).

The main purpose of the replication is to enable each node to notify its own subscribers of presence state changes, regardless of which node in the cluster received the original PUBLISH.
Closed in error!
Reopened #1402.
I think the new parameter is not documented; that can be done in a follow-up commit.
If there are no other comments from devs, then it can be merged.
Thanks - I planned to document the parameter prior to merging.
The DB schema definition for the presentity table needs updating too - where should this be done?
For changes to the db schema, you have to update the XML files in `src/lib/srdb1/schema/`.
Then you can run `make dbschema` in the root folder of the kamailio source code tree and the sql/db scripts for `kamdbctl` will be updated (they have to be committed, too).
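To make that concrete, a sketch of the workflow just described; the schema path and `make dbschema` target are from the comment above, while the ALTER statement is an assumed illustration (the authoritative column definition is whatever the regenerated scripts contain):

```
# 1. add the new ruid column to the table definition in:
#      src/lib/srdb1/schema/presentity.xml

# 2. regenerate the kamdbctl db scripts from the source tree root
make dbschema

# 3. for an already-deployed database, the equivalent manual change
#    would look roughly like this (column size is an assumption):
#      ALTER TABLE presentity ADD COLUMN ruid VARCHAR(64) DEFAULT NULL;
```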
Thanks!
@charlesrchance pushed 3 commits.
c21cae5 schema: add ruid column to presentity table
0c2a427 kamctl: regenerated db scripts to include presentity ruid column
e60c4a8 presence: added enable_dmq parameter to module docs
@charlesrchance pushed 2 commits.
125759a Revert "kamctl: regenerated db scripts to include presentity ruid column"
125fcf5 kamctl: regenerated db scripts to include presentity ruid column
Merging now, having received no further comments from devs. Any issues, please raise in the usual manner.
Merged #1402.