Interesting idea. Requests to the shared_port daemon use the daemonCore
command protocol, so we have a clean indication of whether it is a
SHARED_PORT_CONNECT request or not. We don't have to guess.
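The branch Dan describes can be sketched in a few lines (a sketch only, not HTCondor's actual implementation; the command value here is a placeholder, not the real daemonCore command int):

```python
# Sketch: shared_port can branch on the daemonCore command at the head
# of an incoming stream. SHARED_PORT_CONNECT_CMD is a placeholder value,
# not HTCondor's real command integer.
SHARED_PORT_CONNECT_CMD = 71


def dispatch(command_int):
    """Decide how shared_port should handle an incoming request."""
    if command_int == SHARED_PORT_CONNECT_CMD:
        # A normal shared_port request: route to the registered daemon.
        return "route-to-registered-daemon"
    # Anything else is not a shared_port request; under the proposal
    # below, forward it to the collector if one is registered.
    return "forward-to-collector"
```

The point is that no guessing or malformed-input heuristic is needed: the command protocol itself tells shared_port which case it is in.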
The default update method is UDP, so if you want to support that, you'd
also have to forward UDP packets. Forwarding means resending the data,
so it might actually be more expensive than just handling TCP. The
collector will see the forwarded UDP packets coming
from the local machine instead of from the original sender, unless there
is some clever way to spoof that. That could mess up the authorization
configuration a bit. Another complication with UDP is the security
session. If shared_port forwards all authentication requests to the
collector, will shared_port still have enough information to forward
the UDP packets it receives to the collector? It seems
possible to me, but I don't know for sure.
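The source-address problem above can be seen in a few lines: once a relay re-sends a UDP datagram, the packet the receiver gets carries the relay's address, not the original sender's. This is a self-contained localhost sketch (no HTCondor code; the three sockets just stand in for sender, shared_port, and collector):

```python
import socket


def make_udp_socket():
    """Create a UDP socket bound to an ephemeral localhost port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))
    s.settimeout(5)
    return s


sender = make_udp_socket()
relay = make_udp_socket()      # stands in for shared_port
collector = make_udp_socket()  # stands in for the collector

# Sender -> relay.
sender.sendto(b"update", relay.getsockname())
data, src = relay.recvfrom(1024)

# Relay re-sends to the collector. The datagram now originates from the
# relay's own socket, so the original sender's address is lost unless
# the payload carries it or the relay spoofs the source address.
relay.sendto(data, collector.getsockname())
payload, seen_src = collector.recvfrom(1024)

print(seen_src == relay.getsockname())  # prints True: collector sees the relay
print(seen_src == src)                  # prints False: original sender is gone

for s in (sender, relay, collector):
    s.close()
```

This is exactly why host-based authorization at the collector would see the central manager's own address on every forwarded update.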
--Dan
On 1/17/13 5:26 PM, Todd Tannenbaum wrote:
Hi Dan -
When using the shared port server on the central manager (collector
machine), things get a little complicated. Users either need to
1. open up two ports on their firewall for the central manager (one
for the collector, one for the shared_port), or
2. open just one port on their firewall for the shared_port, but
then they need to edit CONDOR_HOST everywhere on all nodes (including
flocking), and also intuitive things like "condor_status -pool
hostname.com" will no longer work as they used to.
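For concreteness, option 2 might look like this in condor_config (a sketch only: `cm.example.com` is a placeholder hostname, and the knob names and `?sock=` address syntax shown are illustrative, taken from later HTCondor releases, so they may not match what was available at the time):

```
# Central manager: run everything behind shared_port on 9618, so the
# firewall needs only one open port.
USE_SHARED_PORT = True
SHARED_PORT_PORT = 9618

# Every node in the pool (and any flocking pools) must then address
# the collector *through* the shared port:
CONDOR_HOST = cm.example.com:9618?sock=collector
```

It is that last line, repeated across every node and every flocking configuration, that makes option 2 painful.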
Could we do better? The simple idea (credit: zmiller) is that if the
shared_port sees a malformed request (e.g. a stream that does not
begin with a proper daemon name), it would send the stream to the
collector by default, if a collector is registered. The hope is this
heuristic would enable us to run the shared_port on port 9618
everywhere, including the central manager, with no need to modify
CONDOR_HOST etc. We want to have our cake and eat it too.
Possible? Crazy? If you think it is reasonable/possible, we'll make a
ticket.
thanks
Todd