Dear all,
The way decisions have been made for SER and iptel.org is not explicitly
described anywhere. In fact, there has never really been an official
policy for how to make decisions. The consequence has been that some
decisions have come out of consensus on the mailing lists (normally the
developer mailing lists), some have come from developers or individuals
doing something they believe in without anyone objecting, and finally, in
some areas, a small group of people have held private discussions before
doing something.
The truth is that for a casual observer (on a user mailing list) it has
been hard to understand how (and why) the iptel.org projects move forward.
It is also easy to draw the wrong conclusions, and iptel.org has
suffered from this lack of transparency.
Over the last year we have worked to make these processes more
transparent and predictable. We have focused on development and
development decisions: we have tried to encourage discussions on the
mailing lists and to create wishlists and (the beginnings of) roadmaps
(see the iptel.org website for these). However, we have not addressed
management issues such as how to resolve conflicts where no consensus
can be reached. Jan (Janak) raised this issue during the Prague
Developer Workshop, where 18 people participated. Below I will try to
summarize what we agreed on and what was decided. To make sure this is
accurate, several participants with different views have reviewed this
post before it was sent to the mailing lists.
(enclosed in objective tags)
<objective>
Principles agreed to (but up for discussion in the spirit of the IETF):
1. We want discussions on the mailing lists and consensus when making
decisions
2. We don't want a lot of bureaucracy that sounds nice but that we don't
need. This includes procedures for resolving all sorts of issues
3. In general, we trust the people (that is, individuals, not companies)
who are most knowledgeable and most involved to make good decisions on
our behalf. Hence, contributors should have more say in the areas they
contribute to (e.g. module developers)
4. We want to avoid "do-nothing" outcomes, for example when everybody
agrees but one person says no, or when people are split into two camps
Decisions made (but up for discussion on the mailing lists in the spirit
of the IETF):
* Discussions should be held on the developer mailing lists and no
formal voting process should be used
* If discussions just continue and no consensus can be reached, we want
a small technical board (TB) to have the authority to make the decision
on behalf of the community
* This board should be elected by the community in an open election process
* The technical board (TB) should also have the authority to decide how
to resolve issues when there is no obvious precedent from earlier cases
* The TB should be elected by the community for a pre-defined period
* The TB should focus on issues related to day-to-day development of the
projects, but should not manage on a day-to-day basis, just convene if
there are issues that could not be resolved by consensus
</objective>
Here I try to identify the open issues that were not discussed or decided:
* Should there be one TB for all iptel.org projects or one TB for each?
* What should be the criteria for selecting people on the TB (if any)?
* Who has voting rights when we vote on candidates for the TB?
* What should be the term for a TB member?
* Should this TB handle any issues beyond development? (ex. website
content, iptel.org in the wider SIP context, relationship with other
projects, packaging, longer-term positioning of projects, longer-term
development goals, and so on)
* If not, do we need another group that could handle such things?
Best regards,
Greger
Please keep cc-ing the mailing list so that other users can see the
responses in case they have the same questions; this avoids wasting time
solving the same issue twice.
Thanks,
Daniel
On 04/23/07 18:46, Tim Madorma wrote:
> Great - thanks Daniel!
>
> On 4/23/07, Daniel-Constantin Mierla <daniel(a)voice-system.ro> wrote:
>> Hello Tim,
>>
>> On 04/23/07 18:36, Tim Madorma wrote:
>> > Hey Daniel,
>> >
>> > When I looked at the ChangeLog, it does not seem to indicate that the
>> > fix is there.
>> >
>> >
>> http://openser.svn.sourceforge.net/svnroot/openser/branches/1.2/ChangeLog
>>
>> The ChangeLog usually lags behind and is updated just before the release.
>> >
>> > Should I be concerned about this?
>> No
>> > Should I just use the daily snapshot
>> > instead? How stable is the daily snapshot?
>> The daily snapshot is more or less what the SVN 1.2 branch contains at
>> the moment the snapshot is made, so using SVN guarantees access to the
>> latest stable code. Both the snapshots and the SVN 1.2 branch should be
>> the most stable in the 1.2.x release line; nothing but bug fixes is
>> committed to branch 1.2.
>>
>> We publish daily snapshots just as a backup for when SVN is not
>> available; otherwise SVN is advisable if you can use subversion.
>>
>> Cheers,
>> Daniel
>> >
>> > Tim
>> >
>> >
>> > On 4/23/07, Daniel-Constantin Mierla <daniel(a)voice-system.ro> wrote:
>> >> Hello,
>> >>
>> >> daily snapshots are enabled now for 1.2.x as well:
>> >>
>> >> http://www.openser.org/downloads/snapshots/openser-1.2.x/
>> >>
>> >> Cheers,
>> >> Daniel
>> >>
>> >> On 04/23/07 17:47, Ovidiu Sas wrote:
>> >> > Hi Tim,
>> >> >
>> >> > Check the download page from openser website:
>> >> > http://www.openser.org/mos/view/Download/:
>> >> >
>> >> > The command that you need to run:
>> >> > svn co
>> http://openser.svn.sourceforge.net/svnroot/openser/branches/1.2
>> >> > openser
>> >> >
>> >> > Make sure that you have svn installed.
>> >> >
>> >> >
>> >> > Regards,
>> >> > Ovidiu Sas
>> >> >
>> >> > On 4/23/07, Tim Madorma <tmadorma(a)gmail.com> wrote:
>> >> >> Hi Daniel,
>> >> >>
>> >> >> I have run into a leak in 1.2 and I assume it is the same one that
>> >> >> Ovidiu ran into. I see in your response that it was "backported to
>> >> >> 1.2", but I'm not sure how to get the fix. When I look at the SVN
>> >> >> repository at:
>> >> >> http://www.openser.org/pub/openser/latest-1.2.x/, the date is
>> earlier
>> >> >> than the date of your email exchange so I don't think the fix has
>> >> been
>> >> >> added there. Can you please let me know how I can get it?
>> >> >>
>> >> >> thanks,
>> >> >> Tim
>> >> >>
>> >> >> On 3/23/07, Daniel-Constantin Mierla <daniel(a)voice-system.ro>
>> wrote:
>> >> >> > Hello Ovidiu,
>> >> >> >
>> >> >> > On 03/23/07 17:04, Ovidiu Sas wrote:
>> >> >> > > Hi Daniel,
>> >> >> > >
>> >> >> > > Can we backport this one to 1.2?
>> >> >> > already done, two minutes after the commit in trunk.
>> >> >> >
>> >> >> > Cheers,
>> >> >> > Daniel
>> >> >> >
>> >> >> > >
>> >> >> > >
>> >> >> > > Regards,
>> >> >> > > Ovidiu Sas
>> >> >> > >
>> >> >> > > On 3/22/07, Daniel-Constantin Mierla <daniel(a)voice-system.ro>
>> >> wrote:
>> >> >> > >> Hello,
>> >> >> > >>
>> >> >> > >> the supposed fragmentation turned out to be a mem leak in
>> pkg.
>> >> >> Please
>> >> >> > >> take the latest SVN version and try again to see if you
>> got same
>> >> >> > >> results.
>> >> >> > >>
>> >> >> > >> Thanks,
>> >> >> > >> Daniel
>> >> >> > >>
>> >> >> > >> On 03/19/07 18:52, Christian Schlatter wrote:
>> >> >> > >> > ...
>> >> >> > >> >>> The memory statistics indeed show a high number of memory
>> >> >> fragments:
>> >> >> > >> >>>
>> >> >> > >> >>> before 'out of memory':
>> >> >> > >> >>>
>> >> >> > >> >>> shmem:total_size = 536870912
>> >> >> > >> >>> shmem:used_size = 59607040
>> >> >> > >> >>> shmem:real_used_size = 60106488
>> >> >> > >> >>> shmem:max_used_size = 68261536
>> >> >> > >> >>> shmem:free_size = 476764424
>> >> >> > >> >>> shmem:fragments = 9897
>> >> >> > >> >>>
>> >> >> > >> >>> after 'out of memory' (about 8000 calls per process):
>> >> >> > >> >>>
>> >> >> > >> >>> shmem:total_size = 536870912
>> >> >> > >> >>> shmem:used_size = 4171160
>> >> >> > >> >>> shmem:real_used_size = 4670744
>> >> >> > >> >>> shmem:max_used_size = 68261536
>> >> >> > >> >>> shmem:free_size = 532200168
>> >> >> > >> >>> shmem:fragments = 57902
>> >> >> > >> >>>
>> >> >> > >> >>>>
>> >> >> > >> >>>> You can try to compile openser with -DQM_JOIN_FREE
>> (add it
>> >> >> in DEFS
>> >> >> > >> >>>> variable of Makefile.defs) and test again. Free
>> fragments
>> >> >> should be
>> >> >> > >> >>>> merged and fragmentation should not occur -- processing
>> >> >> will be
>> >> >> > >> >>>> slower. We will try for next release to provide a better
>> >> >> solution
>> >> >> > >> >>>> for that.
>> >> >> > >> >>>
>> >> >> > >> >>> Compiling openser with -DQM_JOIN_FREE did not help.
>> I'm not
>> >> >> sure how
>> >> >> > >> >>> big of a problem this fragmentation issue is.
>> >> >> > >> >> What is the number of fragments with QM_JOIN_FREE after
>> >> >> flooding?
>> >> >> > >> >
>> >> >> > >> > The numbers included above are with QM_JOIN_FREE enabled.
>> >> >> > >> >
>> >> >> > >> >>> Do you think it would make sense to restart our
>> production
>> >> >> openser
>> >> >> > >> >>> instances from time to time just to make sure they're not
>> >> >> running
>> >> >> > >> >>> into this memory fragmentation limits?
>> >> >> > >> >> The issue will occur only when the call rate reaches the
>> >> >> > >> >> limits of the proxy's memory; otherwise the chunks are
>> >> >> > >> >> reused. Transaction and avp sizes are rounded up to
>> >> >> > >> >> minimize the number of different memory chunk sizes. It
>> >> >> > >> >> wasn't reported very often; maybe that's why not much
>> >> >> > >> >> attention was paid to it. This memory system has been in
>> >> >> > >> >> place since the beginning of ser. The alternative is to
>> >> >> > >> >> use sysv shared memory together with the libc private
>> >> >> > >> >> memory manager, but that is much slower.
>> >> >> > >> >
>> >> >> > >> > I've done some more testing and the same out-of-memory
>> stuff
>> >> >> happens
>> >> >> > >> > when I run sipp with 10 calls per second only. I tested
>> with
>> >> >> > >> > 'children=1' and I only could get through about 8200 calls
>> >> (again
>> >> >> > >> > those 8000 calls / process). And this is with QM_JOIN_FREE
>> >> >> enabled.
>> >> >> > >> >
>> >> >> > >> > Memory statistics:
>> >> >> > >> >
>> >> >> > >> > before:
>> >> >> > >> > shmem:total_size = 536870912
>> >> >> > >> > shmem:used_size = 2311976
>> >> >> > >> > shmem:real_used_size = 2335720
>> >> >> > >> > shmem:max_used_size = 2465816
>> >> >> > >> > shmem:free_size = 534535192
>> >> >> > >> > shmem:fragments = 183
>> >> >> > >> >
>> >> >> > >> > after:
>> >> >> > >> > shmem:total_size = 536870912
>> >> >> > >> > shmem:used_size = 1853472
>> >> >> > >> > shmem:real_used_size = 1877224
>> >> >> > >> > shmem:max_used_size = 2465816
>> >> >> > >> > shmem:free_size = 534993688
>> >> >> > >> > shmem:fragments = 547
>> >> >> > >> >
>> >> >> > >> > So I'm not sure if this is really a fragmentation issue. 10
>> >> >> cps surely
>> >> >> > >> > doesn't reach the proxy's memory.
>> >> >> > >> >
>> >> >> > >> > Thoughts?
>> >> >> > >> >
>> >> >> > >> > Christian
>> >> >> > >> >
>> >> >> > >> >
>> >> >> > >> >
>> >> >> > >> >> Cheers,
>> >> >> > >> >> Daniel
>> >> >> > >> >>
>> >> >> > >> >>>
>> >> >> > >> >>> thanks,
>> >> >> > >> >>> Christian
>> >> >> > >> >>>
>> >> >> > >> >>>>
>> >> >> > >> >>>> Cheers,
>> >> >> > >> >>>> Daniel
>> >> >> > >> >>>>
>> >> >> > >> >>>> On 03/18/07 01:21, Christian Schlatter wrote:
>> >> >> > >> >>>>> Christian Schlatter wrote:
>> >> >> > >> >>>>> ...
>> >> >> > >> >>>>>>
>> >> >> > >> >>>>>> I always had 768MB shared memory configured though,
>> so I
>> >> >> still
>> >> >> > >> >>>>>> can't explain the memory allocation errors I got. Some
>> >> >> more test
>> >> >> > >> >>>>>> runs revealed that I only get these errors when using
>> >> a more
>> >> >> > >> >>>>>> production oriented config that loads more modules
>> than
>> >> >> the one
>> >> >> > >> >>>>>> posted in my earlier email. I now try to figure out
>> what
>> >> >> exactly
>> >> >> > >> >>>>>> causes these memory allocation errors that happen
>> >> >> reproducibly
>> >> >> > >> >>>>>> after about 220s at 400 cps.
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> I think I found the cause for the memory allocation
>> >> >> errors. As
>> >> >> > >> >>>>> soon as I include an AVP write operation in the routing
>> >> >> script, I
>> >> >> > >> >>>>> get 'out of memory' messages after a certain number of
>> >> calls
>> >> >> > >> >>>>> generated with sipp.
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> The routing script to reproduce this behavior looks
>> like
>> >> >> (full
>> >> >> > >> >>>>> config available at
>> >> >> > >> >>>>> http://www.unc.edu/~cschlatt/openser/openser.cfg):
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> route{
>> >> >> > >> >>>>> $avp(s:ct) = $ct; # commenting this line solves
>> >> >> > >> >>>>> # the memory problem
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> if (!method=="REGISTER") record_route();
>> >> >> > >> >>>>> if (loose_route()) route(1);
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> if (uri==myself) rewritehost("xx.xx.xx.xx");
>> >> >> > >> >>>>> route(1);
>> >> >> > >> >>>>> }
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> route[1] {
>> >> >> > >> >>>>> if (!t_relay()) sl_reply_error();
>> >> >> > >> >>>>> exit;
>> >> >> > >> >>>>> }
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> An example log file showing the 'out of memory'
>> >> messages is
>> >> >> > >> >>>>> available at
>> >> >> http://www.unc.edu/~cschlatt/openser/openser.log .
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> Some observations:
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> - The 'out of memory' messages always appear after
>> about
>> >> >> 8000 test
>> >> >> > >> >>>>> calls per worker process. One call consists of two SIP
>> >> >> > >> >>>>> transactions and six end-to-end SIP messages. An
>> openser
>> >> >> with 8
>> >> >> > >> >>>>> children handles about 64'000 calls, whereas 4 children
>> >> only
>> >> >> > >> >>>>> handle about 32'000 calls. The sipp call rate doesn't
>> >> >> matter, only
>> >> >> > >> >>>>> number of calls.
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> - The 8000 calls per worker process are independent
>> >> from the
>> >> >> > >> >>>>> amount of shared memory available. Running openser
>> with -m
>> >> >> 128 or
>> >> >> > >> >>>>> -m 768 does not make a difference.
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> - The more AVP writes are done in the script, the less
>> >> >> calls go
>> >> >> > >> >>>>> through. It looks like each AVP write is leaking memory
>> >> >> (unnoticed
>> >> >> > >> >>>>> by the memory statistics).
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> - The fifo memory statistics do not reflect the 'out of
>> >> >> memory'
>> >> >> > >> >>>>> syslog messages. Even if openser does not route a
>> >> single SIP
>> >> >> > >> >>>>> message because of memory issues, the statistics still
>> >> >> show a lot
>> >> >> > >> >>>>> of 'free' memory.
>> >> >> > >> >>>>>
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> All tests were done with openser SVN 1.2 branch on
>> Ubuntu
>> >> >> dapper
>> >> >> > >> >>>>> x86. I think the same is true for 1.1 version but I
>> >> >> haven't tested
>> >> >> > >> >>>>> that yet.
>> >> >> > >> >>>>>
>> >> >> > >> >>>>>
>> >> >> > >> >>>>> Christian
>> >> >> > >> >>>>>
>> >> >> > >> >>>
>> >> >> > >> >>>
>> >> >> > >> >
>> >> >> > >> >
>> >> >> > >> > _______________________________________________
>> >> >> > >> > Users mailing list
>> >> >> > >> > Users(a)openser.org
>> >> >> > >> > http://openser.org/cgi-bin/mailman/listinfo/users
>> >> >> > >> >
>> >> >> > >>
>> >> >> > >
>> >> >> >
>> >> >>
>> >> >
>> >>
>> >
>>
>
We are using t_replicate() to replicate REGISTERs among redundant
proxies (and to a separate presence server that needs the REGISTERs for
pua_bla). Before reinventing the wheel here, I thought I'd ask whether
others already have a method in place to re-sync a restarted proxy's
state. I guess one way is to pull the usrloc data from mysql....
Another would be to somehow ask the proxy to walk the usrloc table and
do a bunch of t_registers()...
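For the mysql route, one minimal sketch (parameter names as in the
usrloc module docs; the db_url value is an illustrative placeholder) is
to persist bindings and let the restarted proxy reload them at startup:

    loadmodule "usrloc.so"
    # db_mode 2 = write-back caching: bindings are flushed to the
    # database and restored into memory when the proxy restarts
    modparam("usrloc", "db_mode", 2)
    modparam("usrloc", "db_url", "mysql://user:pass@localhost/ser")

Of course this only restores the local usrloc state; it does not replay
the REGISTERs toward the other replication peers.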
Any thoughts appreciated.
/a
hi ravi prakash,
route[1] {
        append_branch("sip:C@xx:xx:xx:xx");
        t_relay();
}
The above will work, but I have not enabled NAT routing in it. When I
enabled NAT routing with mediaproxy, I faced the same problem.
raviprakash sunkara wrote:
> Hi Klaus ,
> Thanks for replying ,
>
> if(method=="INVITE" && uri=~"sip:B@xx.xxx.xx ")
> {
>     setuser("C");
>     append_branch();
>     t_relay();
> };
>
> B's information is overridden by C, and the call goes only to C.
>
> When A invites B, B and C should ring in parallel.
>
> On 4/23/07, Klaus Darilion <klaus.mailinglists(a)pernau.at> wrote:
>
> Take a look at append_branch(). (core documentation)
>
> regards
> klaus
>
> raviprakash sunkara wrote:
> > Hello Users,
> >
> > My SIP service needs a call hunting feature.
> >
> > I had done this by forwarding to an Asterisk server, but I want to
> > implement it in OpenSER itself.
> >
> > When A calls B, the proxy has to send INVITEs to B and C, so that B
> > and C ring simultaneously
> >
>
>
>
Hello Users,
My SIP service needs a call hunting feature.
I had done this by forwarding to an Asterisk server, but I want to
implement it in OpenSER itself.
When A calls B, the proxy has to send INVITEs to B and C, so that B and
C ring simultaneously
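A minimal sketch of such parallel forking (the C address is an
illustrative placeholder; note that no setuser() is done before forking,
since rewriting the R-URI first is what would make the call go to C
only):

    if (method=="INVITE" && uri=~"sip:B@xx.xxx.xx") {
        # keep B as the main branch and add C as a second branch,
        # so both INVITEs are relayed in parallel
        append_branch("sip:C@xx.xx.xx.xx");
        t_relay();
        exit;
    }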
--
Thanks and Regards
Ravi Prakash Sunkara
ravi.sunkara(a)hyperion-tech.com
M:+91 9985077535
www.hyperion-tech.com
Client and Parent company :- www.august-networks.com
Hi Greg, thanks for answering.
I have tested with different UAs. I used the X-Lite softphone and Grandstream hardphones, and it always happens.
Is there any configuration to avoid this?
Thanks again.
Rosa.
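Following Greger's suggestion below to test on error codes, a rough
sketch (assuming tm's t_check_status() is available in your version)
would be to forward to voicemail only when the failure really is a busy
response:

    failure_route[1] {
        # forward to voicemail only on 486 Busy Here
        if (t_check_status("486")) {
            revert_uri();
            rewritehostport("xx.xxx.xxx.xxx:5070");
            append_branch();
            t_relay();
        }
        break;
    }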
----------------------------------------
> Subject: Re: [Serusers] Forward on busy hangs up actual call
> Date: Sat, 21 Apr 2007 07:56:30 +0200
> From: greger(a)teigre.com
> To: rosadesantis(a)hotmail.com
> CC: serusers(a)iptel.org
>
> Interesting...
> However, this is a bug in A's UA. You should test on error codes though.
> g-)
>
> ------- Original message -------
> From: Rosa De Santis
> Sent: 20.4.'07, 22:26
>
> >
> > Hi all.
> > Please, I have a problem forwarding to voicemail when busy.
> > I have this in my failure route to forward to voicemail server:
> >
> > failure_route[1] {
> > revert_uri();
> > rewritehostport("xx.xxx.xxx.xxx:5070");
> > append_branch();
> > t_relay();
> > break;
> > }
> >
> > When A is busy talking with B, and C calls A then C is forwarded to voicemail. The forwarding works fine, but the call between A and B ends.
> >
> > Please, what am I doing wrong?
> > Any help ¿?
> >
> > Thanks, Rosa.
> >
> > _______________________________________________
> > Serusers mailing list
> > Serusers(a)lists.iptel.org
> > http://lists.iptel.org/mailman/listinfo/serusers
>
Hello,
I want to welcome a new developer of OpenSER: Henning Westerholt,
representing 1und1, Germany. He has been a very active contributor with
patches lately, and he has now added a new module: cfgutils -- intended
to become the silo of small and useful configuration script utilities.
Cheers,
Daniel
Hello all,
i've added a new module into trunk, called "cfgutils". It provides
functions for random number generation, for introducing execution delays
in the server, and for making random decisions based on a probability
value.
The sleep functions can be used to work around buggy user agents or
gateways; the random functions are useful for load shedding when the
server is overloaded. For usage and configuration of this module please
refer to the README file.
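As an illustration, load shedding with the random functions might look
like the sketch below (the function and parameter names here are my
assumption of the module's interface -- check the README for the
authoritative names):

    loadmodule "cfgutils.so"
    # assumed parameter: shed roughly 10% of requests
    modparam("cfgutils", "initial_probability", 10)

    route {
        if (rand_event()) {
            # drop this request under overload
            sl_send_reply("503", "Service Unavailable");
            exit;
        }
        ...
    }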
Credit for the sleep functions belongs to Carsten Bock, BASIS AudioNet GmbH.
Best regards,
Henning Westerholt