I'm using a cluster managed by Slurm to run some YARN/Hadoop benchmarks. To do this I am starting the Hadoop servers on nodes allocated by Slurm and then running the benchmarks on them. I realize that this is not the intended way to run a production Hadoop cluster, but needs must.
To do this I started by writing a script that runs with srun, e.g. `srun -N 4 setup.sh`. This script writes the configuration files and starts the servers on the allocated nodes, with the lowest-numbered machine acting as the master. This all works, and I am able to run applications.
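For context, here is a stripped-down sketch of what setup.sh does; the Hadoop daemon commands are placeholders for my actual configuration and startup logic:

```bash
#!/usr/bin/env bash
# setup.sh -- launched once per allocated node via srun.
# Sketch only: assumes $HADOOP_HOME points at an existing Hadoop 3.x
# install and that the config-file writing (elided here) has been done.

# Expand the allocation's nodelist into one hostname per line;
# Slurm hands it over sorted, so the first entry is the lowest-numbered node.
NODES=$(scontrol show hostnames "$SLURM_JOB_NODELIST")
MASTER=$(echo "$NODES" | head -n 1)

# SLURMD_NODENAME identifies the node each launched task is running on.
if [ "$SLURMD_NODENAME" = "$MASTER" ]; then
    "$HADOOP_HOME/bin/hdfs" --daemon start namenode
    "$HADOOP_HOME/bin/yarn" --daemon start resourcemanager
fi

# Every node, master included, runs a worker daemon pair.
"$HADOOP_HOME/bin/hdfs" --daemon start datanode
"$HADOOP_HOME/bin/yarn" --daemon start nodemanager
```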
However, as I would like to start the servers once and then launch multiple applications against them without restarting everything or encoding it all in at the beginning, I would like to use `salloc` instead. I had thought that this would be a simple case of running `salloc -N 4` and then running `srun setup.sh` (see the sketch below). Unfortunately this does not work: the different servers are unable to communicate with each other. Could anyone explain to me what the difference in the operating environment is between using `srun` on its own and using `salloc` followed by `srun`?
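Concretely, the workflow I am after looks something like this (`run_benchmark.sh` stands in for whatever application I want to launch):

```bash
salloc -N 4          # allocate 4 nodes; salloc drops me into a new shell
srun setup.sh        # inherits -N 4: setup.sh runs once on each node
# ...then, without restarting the servers, launch applications repeatedly:
srun -N 1 run_benchmark.sh
srun -N 1 another_benchmark.sh
exit                 # release the allocation when finished
```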
Many thanks
Daniel
From the slurm-users mailing list:
> sbatch and salloc allocate resources to the job, while srun launches parallel tasks across those resources. When invoked within a job allocation, srun will launch parallel tasks across some or all of the allocated resources. In that case, srun inherits by default the pertinent options of the sbatch or salloc which it runs under. You can then (usually) provide srun different options which will override what it receives by default. Each invocation of srun within a job is known as a job step.
>
> srun can also be invoked outside of a job allocation. In that case, srun requests resources, and when those resources are granted, launches tasks across those resources as a single job and job step.
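To make the distinction concrete, here is roughly what that inheritance looks like in a shell session (the job id and node names are made up):

```bash
$ salloc -N 4
salloc: Granted job allocation 1234
$ srun hostname          # job step: inherits -N 4, one task per node by default
node001
node002
node003
node004
$ srun -N 2 hostname     # another job step: overrides the inherited node count
node001
node002
$ exit                   # relinquish the allocation
salloc: Relinquishing job allocation 1234
```

Each of those srun calls is a separate job step running inside the single allocation that salloc created, which is what lets you launch work repeatedly without requesting resources again.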