
parallel: socket connection behind a NAT router

7 messages · Henrik Bengtsson, Martin Morgan, Jiefei Wang

#
Hi all,

I have a few cloud instances and I want to use them to do parallel
computing. I would like to create a socket cluster on my local machine to
control the remote instances. Here is my network setup:

local machine -- NAT -- Internet -- cloud instances

In the parallel package, the server calls `makeCluster()` and listens
for connections from the workers. In my case, the server is the local
machine and the workers are the cloud instances. However, since the
local machine is hidden behind the NAT, it does not have a public
address and the workers cannot connect to it. Therefore, `makeCluster()`
will never see the connections from the workers and hangs forever.
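
For reference, the standard approach looks roughly like this (hostnames
and the IP address are placeholders, not real machines):

```r
# Standard PSOCK setup: each worker connects BACK to the master, so the
# master must be reachable from the workers -- exactly what NAT breaks.
library(parallel)
cl <- makePSOCKcluster(
  c("cloud1.example.org", "cloud2.example.org"),
  master = "203.0.113.1"  # would have to be the master's public IP
)
```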

One way to let an external machine access a device inside the NAT is
port forwarding. However, that does not work in my case, as the NAT is
set up by the network provider (not my home router), so I do not have
access to the router. Since the cloud instances have public addresses,
I wonder if there is any way to build the cluster by letting the server
connect to the cloud instead? I have checked `?parallel::makeCluster`
and `?snow::makeSOCKcluster` but found nothing. The only promising
solution I can see now is TCP hole punching, but it is quite complicated
and may not work in every case. Since building a connection from the
local machine to the remote is easy, I would like to know if a simple
solution exists. I have searched on Google for a week but found no
answer. I'd appreciate any suggestions!

Best,
Jiefei
#
If you have SSH access to the workers, then

workers <- c("machine1.example.org", "machine2.example.org")
cl <- parallelly::makeClusterPSOCK(workers)

should do it, and it does so without admin rights or port forwarding.
See also the README in https://cran.r-project.org/package=parallelly.
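
As a sketch, the resulting cluster then works with the usual 'parallel'
API (hostnames are placeholders):

```r
workers <- c("machine1.example.org", "machine2.example.org")
cl <- parallelly::makeClusterPSOCK(workers)
res <- parallel::parLapply(cl, 1:10, function(x) x^2)  # runs remotely
parallel::stopCluster(cl)
```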

/Henrik
On Mon, Jan 18, 2021 at 6:45 AM Jiefei Wang <szwjf08 at gmail.com> wrote:
#
Thanks for introducing this interesting package to me! It is great to
learn about a powerful new tool, but it seems this method does not work
in my environment: `parallelly::makeClusterPSOCK` hangs until it times
out.

I checked the verbose output, and it looks like the parallelly package
also relies on `parallel:::.slaveRSOCK` on the remote instance to build
the connection. This explains why it failed: the local machine does not
have a public IP, so the remote does not know how to build the
connection.

I see the README states that the package works with "remote clusters
without knowing public IP". I think this might be where the confusion
is: it may mean the remote machine does not have a public IP, but the
server machine does. I'm in the opposite situation: the server does not
have a public IP, but the remote does. I'm not sure if this package can
handle my case, but it looks very powerful and I appreciate your help!

Best,
Jiefei

On Tue, Jan 19, 2021 at 1:22 AM Henrik Bengtsson <henrik.bengtsson at gmail.com>
wrote:

#
On Mon, Jan 18, 2021 at 9:42 PM Jiefei Wang <szwjf08 at gmail.com> wrote:
It's correct that the worker attempts to connect back to the parent R
process running on your local machine.  However, it does *not* do so
via your local machine's public IP address; it connects to a port on
its own machine, a port that was set up by SSH.  More specifically,
when parallelly::makeClusterPSOCK() connects to the remote machine over
SSH, it also sets up a so-called reverse SSH tunnel between a port on
your local machine and a port on the remote machine.  This is what
happens:
> cl <- parallelly::makeClusterPSOCK("machine1.example.org", verbose=TRUE)
[local output] Workers: [n = 1] 'machine1.example.org'
[local output] Base port: 11019
...
[local output] Starting worker #1 on 'machine1.example.org':
'/usr/bin/ssh' -R 11068:localhost:11068 machine1.example.org
"'Rscript' --default-packages=datasets,utils,grDevices,graphics,stats,methods
-e 'workRSOCK <- tryCatch(parallel:::.slaveRSOCK, error=function(e)
parallel:::.workRSOCK); workRSOCK()' MASTER=localhost PORT=11068
OUT=/dev/null TIMEOUT=2592000 XDR=FALSE"
[local output] - Exit code of system() call: 0
[local output] Waiting for worker #1 on 'machine1.example.org' to
connect back  '/usr/bin/ssh' -R 11019:localhost:11019
machine1.example.org "'Rscript'
--default-packages=datasets,utils,grDevices,graphics,stats,methods -e
'workRSOCK <- tryCatch(parallel:::.slaveRSOCK, error=function(e)
parallel:::.workRSOCK); workRSOCK()' MASTER=localhost PORT=11019
OUT=/dev/null TIMEOUT=2592000 XDR=FALSE"

All the magic is in the SSH option '-R 11068:localhost:11068', which
allows the parent R process on your local machine to communicate with
the remote worker R process via its own port 11068, and vice versa:
the worker R process communicates with the parent R process as if it
were running on MASTER=localhost PORT=11068.  Basically, for all the
worker R process knows, the parent R process runs on the same machine
as itself.
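
The manual equivalent of that tunnel, as a sketch (port and hostname
are illustrative):

```shell
# -R tells sshd on the REMOTE machine to listen on port 11068 and to
# forward every connection made to it back through the tunnel to
# localhost:11068 on the LOCAL machine:
ssh -R 11068:localhost:11068 machine1.example.org
# The worker then connects to localhost:11068 on the remote, i.e. to
# sshd's forwarded port -- no public IP for the local machine needed.
```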

You haven't said what operating system you're running on your local
machine, but if it's MS Windows, know that the 'ssh' client that comes
with Windows 10 has some bugs in its reverse tunneling.  See
?parallelly::makeClusterPSOCK for lots of details.  You also haven't
said what OS the cloud workers run, but I assume it's Linux.

So, my guess is that the above "should work" for your setup.  For
troubleshooting, you can also set the argument outfile=NULL; then
you'll also see output from the worker R process.  There are additional
troubleshooting suggestions in Section 'Failing to set up remote
workers' of ?parallelly::makeClusterPSOCK that will help you figure out
what the problem is.
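
As a concrete sketch of that troubleshooting advice (hostname is a
placeholder):

```r
cl <- parallelly::makeClusterPSOCK(
  "machine1.example.org",
  outfile = NULL,  # do not redirect worker output to /dev/null
  verbose = TRUE   # print the setup steps on the local side
)
```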
Thanks. I've updated the text to "remote clusters without knowing
[local] public IP".

/Henrik
#
A different approach uses doRedis https://CRAN.R-project.org/package=doRedis (currently archived, but actively developed) for use with the foreach package, or RedisParam https://github.com/mtmorgan/RedisParam (not released) for use with Bioconductor's BiocParallel package.

These use a redis server https://redis.io/ to communicate: the manager submits jobs to and retrieves results from the redis server, while the workers retrieve jobs from and submit results to it. Manager and workers need to know the address of the redis server, but no other ports are involved.
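
A minimal doRedis sketch of that pattern (the server host and queue name are illustrative; workers would call redisWorker() with the same queue name):

```r
library(doRedis)
registerDoRedis("jobs", host = "redis.example.org")  # join the queue
res <- foreach(i = 1:4) %dopar% sqrt(i)  # jobs travel via the server
removeQueue("jobs")
```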

Redis servers are easy to establish in a cloud environment, using e.g., existing AWS or docker images. The README for doRedis https://github.com/bwlewis/doRedis probably provides the easiest introduction.

The (not mature) k8sredis Kubernetes / helm chart https://github.com/Bioconductor/k8sredis illustrates a complete system using RedisParam, deploying manager and workers locally or in the google cloud; the app could be modified to only start the workers in the cloud, exposing the redis server for access by a local 'manager'; this would be cool.

Martin 

#
Thank you! It works now!!

Your guess is correct: I'm using Windows, so the default ssh client
does not work. Sadly, the bug hasn't been fixed yet, but the PuTTY
solution works like a charm. Glad to learn the SSH tunneling trick; it
is much simpler than TCP hole punching. This package is awesome! Many
thanks for your help!!!
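
For anyone else hitting this on Windows: if I read the parallelly
documentation correctly, the PuTTY route can be selected explicitly via
the rshcmd argument (hostname is a placeholder; requires PuTTY's plink
on the PATH):

```r
cl <- parallelly::makeClusterPSOCK(
  "machine1.example.org",
  rshcmd = "<putty-plink>",  # use plink instead of the built-in ssh
  verbose = TRUE
)
```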

Best,
Jiefei

On Tue, Jan 19, 2021 at 2:50 PM Henrik Bengtsson <henrik.bengtsson at gmail.com>
wrote:

#
Thanks! This solution also looks promising, and it should be more
stable than SSH tunneling. I will explore this method as well.

Best,
Jiefei

On Tue, Jan 19, 2021 at 3:11 PM Martin Morgan <mtmorgan.bioc at gmail.com>
wrote: