When ssh'ing into a remote system (such as a cluster with substantial compute power and/or graphics hardware) with X11 forwarding (e.g., using ssh -X or -Y), where is the graphics rendering done? How would you run a graphics-intensive workload in such a way that it took advantage of the cluster's graphics hardware? And does running the program in a VM on the cluster complicate matters?
In X11, rendering always happens on the X11 server side, i.e., on the system that the display server is running on. With ssh -X or -Y, the X server is the one on your local machine, so the rendering happens locally, not on the cluster.
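You can see this in practice with glxinfo (from mesa-utils or the equivalent package, assuming it is installed on the remote host):

    # Run on the remote host, inside the forwarded ssh -X session:
    glxinfo | grep "OpenGL renderer"
    # The renderer string reported belongs to your *local* X server's GPU
    # (or a software rasterizer), not to the cluster's graphics hardware.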
How would you run a graphics-intensive workload in such a way that it took advantage of the cluster's graphics hardware?
By running the X11 server on the cluster's systems and redirecting only the rendered output to the display system. There are several projects implementing this: VirtualGL and Chromium, to name two.
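With VirtualGL, the usual pattern looks roughly like this (hostnames are placeholders; consult the VirtualGL documentation for your setup):

    # On your local machine: connect using VirtualGL's ssh wrapper
    vglconnect user@cluster.example.com

    # On the cluster: launch the OpenGL application through vglrun,
    # which redirects its GL rendering to the cluster's GPU and sends
    # only the rendered frames back to your display
    vglrun glxgears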
My personal favorite, however, is using Xpra with an X server that utilizes the GPU. The unfortunate drawback is that with Xorg's current driver model you cannot share the GPU between X servers. You can run multiple X servers at the same time, but only one of them can make use of the GPU at any given moment.
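As a rough sketch of the Xpra workflow (display number and hostname are placeholders, and getting the underlying X server to actually use the GPU requires additional Xorg configuration on the cluster):

    # On the cluster: start an xpra session on display :100
    # and launch the application inside it
    xpra start :100 --start-child=glxgears

    # On your local machine: attach to that session over SSH;
    # xpra forwards the rendered windows, not the raw X11 protocol
    xpra attach ssh:user@cluster.example.com:100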
Also keep in mind that clustered GPU rendering is not easily done. So far, NVIDIA is the only GPU vendor to provide a turnkey remote cluster rendering solution.