matthias,
i looked at the partiSIPpation documentation and didn't quite understand why microphone and speaker volumes are part of the api. i would think that those are internal matters of the gui component. when the gui places a call, i would imagine it gives the core an ip address and port that it wants to use for the media plus a list of codecs it supports. i don't understand what volume issues have to do with the communication between gui and core.
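just a rough sketch of what i had in mind -- the names are made up by me, they are not from the real api:

  /* gui -> core: place a call; the media would flow directly
     between the gui and the remote party */
  typedef struct {
      char media_ip[46];        /* address where the gui wants to receive rtp */
      unsigned short media_port;
      const char **codecs;      /* NULL-terminated list, e.g. { "PCMU", "GSM", NULL } */
  } media_offer;

  int core_place_call(const char *sip_uri, const media_offer *offer);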
-- juha
Hi Juha!
> i looked at the partiSIPpation documentation and didn't quite understand why microphone and speaker volumes are part of the api. i would think that those are internal matters of the gui component. when the gui places a call, i would imagine it gives the core an ip address and port that it wants to use for the media plus a list of codecs it supports. i don't understand what volume issues have to do with the communication between gui and core.
The Core will be responsible for providing the audio, i.e. audio transport, accessing the sound devices (speaker, microphone), codecs and so on. So if the user wants to change the volume, the Core has to be notified and has to perform the change.
If another application is changing the volumes, or if the user does this manually, the Core has to notify the GUI that the volumes have been adjusted.
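Just to illustrate the two directions, it could look roughly like this (the names below are only examples, not our actual API):

  /* GUI -> Core: the user moved a slider in the GUI, the Core applies
     the new value to the sound device */
  int core_set_speaker_volume(unsigned int percent);
  int core_set_micro_volume(unsigned int percent);

  /* Core -> GUI: the volumes were changed somewhere else (another
     application, the system mixer, ...), so the GUI can update its sliders */
  typedef void (*volume_changed_cb)(unsigned int speaker_percent,
                                    unsigned int micro_percent);
  void core_on_volume_change(volume_changed_cb callback);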
I hope the issue is somewhat clearer now.
-Matthias
Matthias Liebig writes:
> The Core will be responsible for providing the audio, i.e. audio transport, accessing the sound devices (speaker, microphone), codecs and so on. So if the user wants to change the volume, the Core has to be notified and has to perform the change.
i got the impression that the core would be independent of the gui and that the two could reside on different hosts running different operating systems. looks like that is not the case if the core also deals with rtp packets and audio.
why didn't you completely decouple the core (sip engine) from the gui so that rtp would terminate in the gui? then one core instance could have simultaneously served many guis running on different hosts.
-- juha
Hi Juha,
> > The Core will be responsible for providing the audio, i.e. audio transport, accessing the sound devices (speaker, microphone), codecs and so on. So if the user wants to change the volume, the Core has to be notified and has to perform the change.
> i got the impression that the core would be independent of the gui and that the two could reside on different hosts running different operating systems. looks like that is not the case if the core also deals with rtp packets and audio.
Actually, the GUI and the Core can reside on different hosts, but in most cases they will be on the same machine. It is conceivable that the GUI runs on a PDA while the Core runs on a machine that provides wireless audio for the current room. But I have to admit that is a very unusual scenario.
> why didn't you completely decouple the core (sip engine) from the gui so that rtp would terminate in the gui? then one core instance could have simultaneously served many guis running on different hosts.
At the beginning of the project we agreed to define the relation between GUI and Core as 1:1, so that the implementation would not be too complex for the available time frame (which we exceeded anyway, by the way).
When we continue working on the project (which is planned for February/March 2006), we will reconsider the design and maybe change it. But our main principle was that the GUI has to be as simple as possible, and that is not the case if it has to handle RTP.
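To give you an idea of what "simple" means here: the call-related part of the GUI side boils down to something like the following sketch (invented names, not our actual interface), with no RTP, codec or sound device handling in it at all:

  /* the GUI only triggers actions ... */
  int core_dial(const char *sip_uri);
  int core_accept_call(int call_id);
  int core_hangup(int call_id);

  /* ... and reacts to state events pushed by the Core,
     e.g. "ringing", "active", "terminated" */
  typedef void (*call_state_cb)(int call_id, const char *state);
  void core_on_call_state(call_state_cb callback);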
Best regards, Matthias