Hi all!
There are several scenarios where TLS will be used to interconnect SIP proxies. (open)ser's TLS implementation should be generic enough to handle all the useful scenarios. Thus, to better understand the requirements, I first present some examples where (open)ser+TLS will be useful. (I do not propose which of the following interconnect models are good or bad; however, openser should be capable of handling all of them, ideally in a mixed mode.)
Enterprise scenario: A company uses TLS to interconnect its SIP proxies via the public Internet. The proxies import the company's self-signed CA cert as a trusted CA. The proxies trust other proxies as soon as their cert is validated using the root CA. This is already possible using openser 1.0.0 (or ser+experimental TLS).
Federation scenario: Some ITSPs form a federation. The federation CA signs the certs of the ITSPs. Here, the validation is like in the enterprise scenario: (open)ser validates against the federation's CA cert. This works with openser 1.0.0 as long as the ITSP is only in one federation, or uses different egress/ingress points for each federation. If the ITSP is a member of two federations and uses one egress/ingress proxy, it has to decide which certificate it should present to the peer. The originating proxy could choose the proper client certificate, for example by using a table like the following (or by having the certificate as a blob directly in the DB):
dst_domain       certificate
sip.atlanta.com  /etc/openser/federationAcert.pem
sip.biloxy.com   /etc/openser/federationBcert.pem
sip.chicago.com  /etc/openser/federationAcert.pem
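For illustration, a minimal sketch of such a per-destination lookup (the table contents follow the example above; the fallback cert path is hypothetical):

```python
# Hypothetical in-memory version of the dst_domain -> certificate table.
# In openser this could live in a DB table, or the cert could be a blob column.
CLIENT_CERT_TABLE = {
    "sip.atlanta.com": "/etc/openser/federationAcert.pem",
    "sip.biloxy.com": "/etc/openser/federationBcert.pem",
    "sip.chicago.com": "/etc/openser/federationAcert.pem",
}

# Hypothetical fallback for destinations outside any federation.
DEFAULT_CERT = "/etc/openser/defaultcert.pem"

def select_client_cert(dst_domain: str) -> str:
    """Pick the client certificate to present when connecting out
    to dst_domain; fall back to the default certificate."""
    return CLIENT_CERT_TABLE.get(dst_domain.lower(), DEFAULT_CERT)
```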
Presenting the proper server certificate is more difficult. The server does not know whether the incoming TLS request belongs to a member of fedA, fedB or someone else. Thus, presenting the wrong certificate will lead to the client rejecting it due to failed validation. One solution would be sending the "trusted_ca_keys" TLS extension in the Client Hello. Unfortunately this is not supported in openssl (and gnutls). Any workaround for this?
Anyway, in this scenario it is important to have the certificate parameters (Subject, Issuer) available in the routing logic, to make routing decisions based on the TLS authentication and to add them to the CDRs (e.g. via AVPs and extra accounting).
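To sketch what exporting these parameters could look like (a Python illustration, not the openser internals): Python's `ssl.SSLSocket.getpeercert()` returns the peer certificate as nested tuples of RDNs, which can be flattened into AVP-like name/value pairs:

```python
def cert_params(peercert: dict) -> dict:
    """Flatten the Subject and Issuer of a getpeercert()-style dict
    into flat keys such as 'subject.commonName', ready to be exposed
    as AVPs or written into CDRs."""
    params = {}
    for field in ("subject", "issuer"):
        for rdn in peercert.get(field, ()):
            for name, value in rdn:
                params[f"{field}.{name}"] = value
    return params

# Sample certificate in the shape returned by ssl.SSLSocket.getpeercert()
peer = {
    "subject": ((("commonName", "sip.atlanta.com"),),),
    "issuer": ((("organizationName", "Federation A CA"),),
               (("commonName", "fedA-root"),)),
}
```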
Bilateral scenario: An ITSP has bilateral trust relationships. Each ITSP has its own CA, which signs the certs of this ITSP. If another ITSP wants to trust this ITSP, it only has to import the other's CA cert. This already works with openser 1.0.0, but exporting the cert parameters for extra accounting would be useful.
Hosted SIP scenario: An ITSP hosts multiple SIP domains for its customers. If the server has to offer a certificate which includes the proper SIP domain, the server_name extension is needed to indicate the requested domain in the Client Hello. Then the server will present the proper certificate, and domain validation (Subject domain == SIP domain) in the client will succeed. This will work fine for initial (out-of-dialog) requests, as they usually include the SIP domain in the Request-URI. There will be problems for responses and in-dialog requests, as the Record-Route and Via headers usually only include IP addresses. Thus, the SIP proxy either has to insert the SIP domain into Via and Record-Route, or domain validation should only be done for out-of-dialog requests.
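The domain validation step (Subject domain == SIP domain) could be sketched like this; the wildcard handling is an assumption, since the thread does not say whether wildcard certs are allowed:

```python
import fnmatch

def domain_matches(cert_domain: str, sip_domain: str) -> bool:
    """Compare the domain from the certificate Subject with the SIP
    domain. A leading '*.' label is treated as a wildcard matching
    exactly one label (an assumption, not from the thread)."""
    cert_domain, sip_domain = cert_domain.lower(), sip_domain.lower()
    if cert_domain.startswith("*."):
        # fnmatch's '*' would also match dots, so pin the label count
        return (fnmatch.fnmatch(sip_domain, cert_domain)
                and sip_domain.count(".") == cert_domain.count("."))
    return cert_domain == sip_domain
```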
This leads to the problem of domain validation. The TLS connection will be set up after all the routing logic, somewhere inside t_relay. Thus, if we want domain validation, it will be inside t_relay. Maybe we can use a certain flag to indicate whether domain validation should be done (on a per-transaction basis). This might cause problems if there is already a TLS connection to the requested destination, but one established without domain validation, or validated against a different domain (virtual domain hosting). How do we solve this?
I can't propose a solution for all scenarios. But I think I have shown that the certificate selection and validation should be very flexible, e.g. by choosing the proper client certificate for each transaction, and by routing differently in the server depending on the presented client certificate and the certificate signer (e.g. based on a whitelist).
Further, we have to take care that certificates and CA certs can be added at runtime, e.g. using a FIFO command "tls_reload". This should also drop all existing TLS connections. Having a maximum connection time, after which we force re-validation, would also be useful.
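As a toy model of the reload semantics (pure illustration; openser's actual FIFO handling is in C):

```python
class TlsStore:
    """Toy model of a 'tls_reload' command: swap in new certificate
    material and drop every open TLS connection so peers are forced
    to re-handshake (and thus re-validate) against the new certs."""

    def __init__(self, certs: dict):
        self.certs = dict(certs)
        self.connections = set()

    def connect(self, peer: str) -> None:
        self.connections.add(peer)

    def tls_reload(self, new_certs: dict) -> int:
        """Install new_certs, drop all connections, and return the
        number of connections that were dropped."""
        self.certs = dict(new_certs)
        dropped = len(self.connections)
        self.connections.clear()
        return dropped
```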
Also, (open)ser should allow importing CRLs (certificate revocation lists) (this shouldn't be a problem with openssl) or the usage of OCSP (Online Certificate Status Protocol).
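With OpenSSL-backed stacks, CRL checking is mostly a matter of loading the CRL next to the CA certs and setting a verify flag; Python's `ssl` module (which wraps OpenSSL) shows the idea (file paths are hypothetical):

```python
import ssl

def enable_crl_check(ctx: ssl.SSLContext) -> ssl.SSLContext:
    """Turn on CRL checking for the peer's leaf certificate.
    The CRL file itself would be loaded with the same
    load_verify_locations() call used for CA certificates."""
    ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF
    return ctx

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# ctx.load_verify_locations(cafile="/etc/openser/ca.pem")   # hypothetical CA path
# ctx.load_verify_locations(cafile="/etc/openser/crl.pem")  # hypothetical CRL path
enable_crl_check(ctx)
```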
Now I'm ready for some discussions :-)
regards klaus
Hi Klaus,
indeed this is a long email ;). please see my inline comments.
regards. Bogdan
Klaus Darilion wrote:
Hi all!
There are several scenarios where TLS will be used to interconnect SIP proxies. (open)ser's TLS implementation should be generic enough to handle all the useful scenarios. Thus, to better understand the requirements, I first present some examples where (open)ser+TLS will be useful. (I do not propose which of the following interconnect models are good or bad; however, openser should be capable of handling all of them, ideally in a mixed mode.)
Enterprise scenario: A company uses TLS to interconnect its SIP proxies via the public Internet. The proxies import the company's self-signed CA cert as a trusted CA. The proxies trust other proxies as soon as their cert is validated using the root CA. This is already possible using openser 1.0.0 (or ser+experimental TLS).
Federation scenario: Some ITSPs form a federation. The federation CA signs the certs of the ITSPs. Here, the validation is like in the enterprise scenario: (open)ser validates against the federation's CA cert. This works with openser 1.0.0 as long as the ITSP is only in one federation, or uses different egress/ingress points for each federation. If the ITSP is a member of two federations and uses one egress/ingress proxy, it has to decide which certificate it should present to the peer. The originating proxy could choose the proper client certificate, for example by using a table like the following (or by having the certificate as a blob directly in the DB):
dst_domain       certificate
sip.atlanta.com  /etc/openser/federationAcert.pem
sip.biloxy.com   /etc/openser/federationBcert.pem
sip.chicago.com  /etc/openser/federationAcert.pem
Presenting the proper server certificate is more difficult. The server does not know whether the incoming TLS request belongs to a member of fedA, fedB or someone else. Thus, presenting the wrong certificate will lead to the client rejecting it due to failed validation. One solution would be sending the "trusted_ca_keys" TLS extension in the Client Hello. Unfortunately this is not supported in openssl (and gnutls). Any workaround for this?
As I understood from Cesc, gnutls already supports this extension, but migrating to gnutls and restarting all the testing may not be worth the effort, as it's just a matter of time until the extension is also available in openssl. As a temporary solution I would suggest going without the extension patch by default, but providing the patch in the TLS directory; people interested in these multi-domain scenarios will have to apply it and recompile the openssl lib. And maybe we should do some lobbying (read: pressure) on the openssl mailing list in order to push this extension into the official tree.
Just an idea.
Anyway, in this scenario it is important to have the certificate parameters (Subject, Issuer) available in the routing logic, to make routing decisions based on the TLS authentication and to add them to the CDRs (e.g. via AVPs and extra accounting).
interesting, but there might be some problems - the information you want to log comes from the transport layer and you try to log it using mechanisms from the SIP level. It will work, but the info will actually be available only for requests that initiated the TLS connection (sent or received) and not also for requests that reuse the connection.
Bilateral scenario: An ITSP has bilateral trust relationships. Each ITSP has its own CA, which signs the certs of this ITSP. If another ITSP wants to trust this ITSP, it only has to import the other's CA cert. This already works with openser 1.0.0, but exporting the cert parameters for extra accounting would be useful.
not sure what you mean by cert parameters.......
Hosted SIP scenario: An ITSP hosts multiple SIP domains for its customers. If the server has to offer a certificate which includes the proper SIP domain, the server_name extension is needed to indicate the requested domain in the Client Hello. Then the server will present the proper certificate, and domain validation (Subject domain == SIP domain) in the client will succeed.
the solution here will also be the mighty extension, indeed...
This will work fine for initial (out-of-dialog) requests, as they usually include the SIP domain in the Request-URI. There will be problems for responses and in-dialog requests, as the Record-Route and Via headers usually only include IP addresses. Thus, the SIP proxy either has to insert the SIP domain into Via and Record-Route, or domain validation should only be done for out-of-dialog requests.
I don't think we should worry about replies - they will return via the same connection - the expiration time of a TCP connection must be higher than the expiration time of a transaction.
But about the within-dialog requests - you have a strong case here!! It is actually more complex: you need to know both the inbound and outbound domains - if you received the request from another peer via TLS and also fed it via TLS to another peer (relaying), you will need to remember both domains, since within-dialog requests may flow in both directions ;). Maybe storing the domain names as an RR param is the simplest and ugliest solution... in the meantime I think it is the only one that does not involve any dialog persistence.
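The RR-param idea could be sketched as follows (the parameter name `tlsdom` is invented for this sketch):

```python
def add_tls_domain(rr_uri: str, domain: str) -> str:
    """Store the TLS domain as a Record-Route URI parameter, so
    within-dialog requests carry it back without dialog state."""
    return f"{rr_uri};tlsdom={domain}"

def get_tls_domain(rr_uri: str):
    """Recover the stored domain from a Route/Record-Route URI,
    or None if the parameter is absent."""
    for param in rr_uri.split(";")[1:]:
        if param.startswith("tlsdom="):
            return param[len("tlsdom="):]
    return None
```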
This leads to the problem of domain validation. The TLS connection will be set up after all the routing logic, somewhere inside t_relay. Thus, if we want domain validation, it will be inside t_relay. Maybe we can use a certain flag to indicate whether domain validation should be done (on a per-transaction basis). This might cause problems if there is already a TLS connection to the requested destination, but one established without domain validation, or validated against a different domain (virtual domain hosting). How do we solve this?
one premise we should build on is the fact that there cannot exist (in my opinion) connections that require domain validation in one case and not in another. Argumentation: AFAIK there can be only two types of connections: user oriented and peering oriented; the first type will not require validation at all, and the second one may or may not, based on local policy. So, I think, we cannot have a case where a connection to X requires validation at one time and not at another.
To control the validation (and maybe other parameters of the connection), prior setting from the script may be the solution - I was investigating with Cesc the idea of building a TLS module which would be used for provisioning the certs and controlling the connection parameters. The TLS engine itself will stay in the core, as now.
So, I would say we never reach the case where we want to reuse an existing connection but with different settings.
I can't propose a solution for all scenarios. But I think I have shown that the certificate selection and validation should be very flexible, e.g. by choosing the proper client certificate for each transaction, and by routing differently in the server depending on the presented client certificate and the certificate signer (e.g. based on a whitelist).
Further, we have to take care that certificates and CA certs can be added at runtime, e.g. using a FIFO command "tls_reload". This should also drop all existing TLS connections. Having a maximum connection time, after which we force re-validation, would also be useful.
Also, (open)ser should allow importing CRLs (certificate revocation lists) (this shouldn't be a problem with openssl) or the usage of OCSP (Online Certificate Status Protocol).
Some utilities like this will definitely become needed in a short time... maybe all of this will find its way into the TLS module - that will actually be its purpose - pure management and provisioning.
Bogdan-Andrei Iancu wrote:
Klaus Darilion wrote:
Presenting the proper server certificate is more difficult. The server does not know whether the incoming TLS request belongs to a member of fedA, fedB or someone else. Thus, presenting the wrong certificate will lead to the client rejecting it due to failed validation. One solution would be sending the "trusted_ca_keys" TLS extension in the Client Hello. Unfortunately this is not supported in openssl (and gnutls). Any workaround for this?
As I understood from Cesc, gnutls already supports this extension, but migrating to gnutls and restarting all the testing may not be worth the effort, as it's just a matter of time until the extension is also available in openssl. As a temporary solution I would suggest going without the extension patch by default, but providing the patch in the TLS directory; people interested in these multi-domain scenarios will have to apply it and recompile the openssl lib. And maybe we should do some lobbying (read: pressure) on the openssl mailing list in order to push this extension into the official tree.
ACK
Anyway, in this scenario it is important to have the certificate parameters (Subject, Issuer) available in the routing logic, to make routing decisions based on the TLS authentication and to add them to the CDRs (e.g. via AVPs and extra accounting).
interesting, but there might be some problems - the information you want to log comes from the transport layer and you try to log it using mechanisms from the SIP level. It will work, but the info will actually be available only for requests that initiated the TLS connection (sent or received) and not also for requests that reuse the connection.
Where is the missing link? Is it possible to retrieve the tcp_connection from which a SIP message was received? If yes, we should be able to get the SSL object (tcp_connection->extra_data). Is it possible to retrieve the certificate properties if we have the SSL object?
Bilateral scenario: An ITSP has bilateral trust relationships. Each ITSP has its own CA, which signs the certs of this ITSP. If another ITSP wants to trust this ITSP, it only has to import the other's CA cert. This already works with openser 1.0.0, but exporting the cert parameters for extra accounting would be useful.
not sure what you mean by cert parameters.......
The properties (subject: common name, Issuer ...)
Hosted SIP scenario: An ITSP hosts multiple SIP domains for its customers. If the server has to offer a certificate which includes the proper SIP domain, the server_name extension is needed to indicate the requested domain in the Client Hello. Then the server will present the proper certificate, and domain validation (Subject domain == SIP domain) in the client will succeed.
the solution here will also be the mighty extension, indeed...
This will work fine for initial (out-of-dialog) requests, as they usually include the SIP domain in the Request-URI. There will be problems for responses and in-dialog requests, as the Record-Route and Via headers usually only include IP addresses. Thus, the SIP proxy either has to insert the SIP domain into Via and Record-Route, or domain validation should only be done for out-of-dialog requests.
I don't think we should worry about replies - they will return via the same connection - the expiration time of a TCP connection must be higher than the expiration time of a transaction.
Isn't it a valid scenario that the TLS connection may be dropped (for whatever reason) during an ongoing transaction, and thus the TLS connection must be re-established for the replies?
But about the within-dialog requests - you have a strong case here!! It is actually more complex: you need to know both the inbound and outbound domains - if you received the request from another peer via TLS and also fed it via TLS to another peer (relaying), you will need to remember both domains, since within-dialog requests may flow in both directions ;). Maybe storing the domain names as an RR param is the simplest and ugliest solution... in the meantime I think it is the only one that does not involve any dialog persistence.
As a workaround (IMO this is a bug in RFC 3261) I would validate domains only for out-of-dialog requests.
This leads to the problem of domain validation. The TLS connection will be set up after all the routing logic, somewhere inside t_relay. Thus, if we want domain validation, it will be inside t_relay. Maybe we can use a certain flag to indicate whether domain validation should be done (on a per-transaction basis). This might cause problems if there is already a TLS connection to the requested destination, but one established without domain validation, or validated against a different domain (virtual domain hosting). How do we solve this?
one premise we should build on is the fact that there cannot exist (in my opinion) connections that require domain validation in one case and not in another. Argumentation: AFAIK there can be only two types of connections: user oriented and peering oriented; the first type will not require validation at all, and the second one may or may not, based on local policy. So, I think, we cannot have a case where a connection to X requires validation at one time and not at another.
I think we should differentiate between validation of the certificate against the CA certificates and validation of domain names (e.g. From domain == common name domain).
To control the validation (and maybe other parameters of the connection), prior setting from the script may be the solution - I was investigating with Cesc the idea of building a TLS module which would be used for provisioning the certs and controlling the connection parameters. The TLS engine itself will stay in the core, as now.
So, I would say we never reach the case where we want to reuse an existing connection but with different settings.
regards klaus
Hi Klaus,
Klaus Darilion wrote:
Bogdan-Andrei Iancu wrote:
Klaus Darilion wrote:
Presenting the proper server certificate is more difficult. The server does not know whether the incoming TLS request belongs to a member of fedA, fedB or someone else. Thus, presenting the wrong certificate will lead to the client rejecting it due to failed validation. One solution would be sending the "trusted_ca_keys" TLS extension in the Client Hello. Unfortunately this is not supported in openssl (and gnutls). Any workaround for this?
As I understood from Cesc, gnutls already supports this extension, but migrating to gnutls and restarting all the testing may not be worth the effort, as it's just a matter of time until the extension is also available in openssl. As a temporary solution I would suggest going without the extension patch by default, but providing the patch in the TLS directory; people interested in these multi-domain scenarios will have to apply it and recompile the openssl lib. And maybe we should do some lobbying (read: pressure) on the openssl mailing list in order to push this extension into the official tree.
ACK
Anyway, in this scenario it is important to have the certificate parameters (Subject, Issuer) available in the routing logic, to make routing decisions based on the TLS authentication and to add them to the CDRs (e.g. via AVPs and extra accounting).
interesting, but there might be some problems - the information you want to log comes from the transport layer and you try to log it using mechanisms from the SIP level. It will work, but the info will actually be available only for requests that initiated the TLS connection (sent or received) and not also for requests that reuse the connection.
Where is the missing link? Is it possible to retrieve the tcp_connection from which a SIP message was received? If yes, we should be able to get the SSL object (tcp_connection->extra_data). Is it possible to retrieve the certificate properties if we have the SSL object?
yes - this is possible - we can make the info about the connection involved available, whether it is a new one or a re-used one.
Bilateral scenario: An ITSP has bilateral trust relationships. Each ITSP has its own CA, which signs the certs of this ITSP. If another ITSP wants to trust this ITSP, it only has to import the other's CA cert. This already works with openser 1.0.0, but exporting the cert parameters for extra accounting would be useful.
not sure what you mean by cert parameters.......
The properties (subject: common name, Issuer ...)
right.
Hosted SIP scenario: An ITSP hosts multiple SIP domains for its customers. If the server has to offer a certificate which includes the proper SIP domain, the server_name extension is needed to indicate the requested domain in the Client Hello. Then the server will present the proper certificate, and domain validation (Subject domain == SIP domain) in the client will succeed.
the solution here will also be the mighty extension, indeed...
This will work fine for initial (out-of-dialog) requests, as they usually include the SIP domain in the Request-URI. There will be problems for responses and in-dialog requests, as the Record-Route and Via headers usually only include IP addresses. Thus, the SIP proxy either has to insert the SIP domain into Via and Record-Route, or domain validation should only be done for out-of-dialog requests.
I don't think we should worry about replies - they will return via the same connection - the expiration time of a TCP connection must be higher than the expiration time of a transaction.
Isn't it a valid scenario that the TLS connection may be dropped (for whatever reason) during an ongoing transaction, and thus the TLS connection must be re-established for the replies?
yes, it's a valid scenario. If so, you cannot rely on any transaction-related data, as you may choose to statelessly forward the request. Sounds bloated, but in this case you need to store it in the Via also :-/ - I really don't like the idea :(
But about the within-dialog requests - you have a strong case here!! It is actually more complex: you need to know both the inbound and outbound domains - if you received the request from another peer via TLS and also fed it via TLS to another peer (relaying), you will need to remember both domains, since within-dialog requests may flow in both directions ;). Maybe storing the domain names as an RR param is the simplest and ugliest solution... in the meantime I think it is the only one that does not involve any dialog persistence.
As a workaround (IMO this is a bug in RFC 3261) I would validate domains only for out-of-dialog requests.
maybe this should be a configurable option... at least on our side...
regards, bogdan
Bogdan-Andrei Iancu wrote:
Klaus Darilion wrote:
Presenting the proper server certificate is more difficult. The server does not know whether the incoming TLS request belongs to a member of fedA, fedB or someone else. Thus, presenting the wrong certificate will lead to the client rejecting it due to failed validation. One solution would be sending the "trusted_ca_keys" TLS extension in the Client Hello. Unfortunately this is not supported in openssl (and gnutls). Any workaround for this?
As I understood from Cesc, gnutls already supports this extension, but migrating to gnutls and restarting all the testing may not be worth the effort, as it's just a matter of time until the extension is also available in openssl. As a temporary solution I would suggest going without the extension patch by default, but providing the patch in the TLS directory; people interested in these multi-domain scenarios will have to apply it and recompile the openssl lib. And maybe we should do some lobbying (read: pressure) on the openssl mailing list in order to push this extension into the official tree.
Just an idea.
Can't openser use different ports for each domain it is serving? This of course requires that SRV records are configured in the DNS and that the UAC supports SRV.
domain       port  certificate
atlanta.com  5061  /etc/openser/federationAcert.pem
biloxy.com   5063  /etc/openser/federationBcert.pem
chicago.com  5065  /etc/openser/federationAcert.pem
Regards, Mikael
Mikael Magnusson wrote:
Can't openser use a different port for each domain it's serving? This of course requires that SRV records are configured in the DNS and that the UAC supports SRV.
domain        port   certificate
atlanta.com   5061   /etc/openser/federationAcert.pem
biloxy.com    5063   /etc/openser/federationBcert.pem
chicago.com   5065   /etc/openser/federationAcert.pem
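In DNS terms, the port-per-domain scheme above would be published via SRV records, roughly like this (the proxy hostname and TTL are made up for illustration):

```
; Hypothetical SRV records publishing one TLS ingress port per domain.
_sips._tcp.atlanta.com.  3600 IN SRV 10 0 5061 proxy.example.com.
_sips._tcp.biloxy.com.   3600 IN SRV 10 0 5063 proxy.example.com.
_sips._tcp.chicago.com.  3600 IN SRV 10 0 5065 proxy.example.com.
```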
AFAIK this is possible: http://openser.org/docs/tls.html#AEN343
But I think this is not a nice solution (restarting ser, firewall settings, ...). It might be useful when hosting multiple domains but only one federation. If you take part in several federations, you would have to publish a different ingress point for each federation. That does not scale. E.g., if you use ENUM (e164.arpa) to publish your ingress point, you can only publish a single URI for all federations.
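To make the ENUM limitation concrete: an ENUM entry maps a phone number to one SIP URI, so there is no place to express "use this port for fedA and that port for fedB". A hypothetical entry for +1 555 123 4567 might look like:

```
; Hypothetical ENUM NAPTR record: one URI, regardless of federation.
7.6.5.4.3.2.1.5.5.5.1.e164.arpa. IN NAPTR 10 100 "u" "E2U+sip" "!^.*$!sips:ingress@example.com!" .
```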
regards klaus
Hi all,
A couple of notes i would like to remark ...
* On the "tls name extensions" ... it is indeed needed and it is not in openSSL. I do think we have a strong case for lobbying directly to the OpenSSL core developers ... and i think openSER (and ser) have a rather strong arm. We could get in touch with the developer of the patch and the openSSL core devs. Meanwhile ... the solution of providing the patch ... i see it as complicated and it won't spread very far, thus limiting the usefulness ... it could be sold as a way of testing the name extension patch and speeding up its inclusion in openssl ... but until that time, i think we should focus on other scenarios of openSER-tls.
* Klaus' initial email and scenarios ... I think it is a very enlightening explanation and it should be included in a tls-faq, but ... i would say that security is a very particular thing, and different people may wish to do things in a different way, thus we should provide a flexible solution. In my opinion, a core that sets up the TLS connection plus a security-tls module which provides access to verification of certs against DB entries, tls connection management (tear down, etc), and this sort of stuff; this would be my choice. Provide the functionality, provide a nice FAQ and examples of standard practices, but give the user the power to do whatever he wants.
Regards,
Cesc
Cesc wrote:
Hi all,
A couple of notes i would like to remark ...
- On the "tls name extensions" ... it is indeed needed and it is not
in openSSL. I do think we have a strong case for lobbying directly to OpenSSL core developers ... and i think openSER (and ser) have a rather strong arm. We could get in touch with the developer of the patch and openSSL core dev.
Thus, who will contact the openssl developers?
Meanwhile ... the solution of providing the patch ... i see it as complicated and it won't spread very far, thus limiting the usefulness ... it could be sold as a way of testing the name extension patch and speeding up its inclusion in openssl ... but until that time, i think we should focus on other scenarios of openSER-tls.
- Klaus' initial email and scenarios ... I think it is a very
enlightening explanation and it should be included in a tls-faq, but ... i would say that security is a very particular thing, and different people may wish to do things in a different way, thus we should provide a flexible solution. In my opinion, a core that sets up the TLS connection plus a security-tls module which provides access to verification of certs against DB entries, tls connection management (tear down, etc), and this sort of stuff; this would be my choice. Provide the functionality, provide a nice FAQ and examples of standard practices, but give the user the power to do whatever he wants.
I agree with you. My scenarios were just some of the possible examples.
klaus
Regards,
Cesc
Hi all,
Klaus Darilion wrote:
Cesc wrote:
Hi all,
A couple of notes i would like to remark ...
- On the "tls name extensions" ... it is indeed needed and it is not
in openSSL. I do think we have a strong case for lobbying directly to OpenSSL core developers ... and i think openSER (and ser) have a rather strong arm. We could get in touch with the developer of the patch and openSSL core dev.
Thus, who will contact the openssl developers?
I can do the job, no prob... but should we do it via the public lists, or by directly contacting the relevant people (the guy who did the patch and the core developer of the project)?
Meanwhile ... the solution of providing the patch ... i see it as complicated and it won't spread very far, thus limiting the usefulness ... it could be sold as a way of testing the name extension patch and speeding up its inclusion in openssl ... but until that time, i think we should focus on other scenarios of openSER-tls.
- Klaus' initial email and scenarios ... I think it is a very
enlightening explanation and it should be included in a tls-faq, but ... i would say that security is a very particular thing, and different people may wish to do things in a different way, thus we should provide a flexible solution. In my opinion, a core that sets up the TLS connection plus a security-tls module which provides access to verification of certs against DB entries, tls connection management (tear down, etc), and this sort of stuff; this would be my choice. Provide the functionality, provide a nice FAQ and examples of standard practices, but give the user the power to do whatever he wants.
I agree with you. My scenarios were just some of the possible examples.
ok - so we have a sort of agreement on the matter - I would say the first step will be to extract all TLS management functionality and move it into a module. Then we can start building more control and monitoring functions onto the module (as Klaus suggested, via AVPs).
regards, bogdan