I'm running parallel tests with Jenkins.
The way I have it set up is a build flow job that executes three other jobs in parallel. Each of those three jobs is tied to a separate test XML file.
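For reference, the build flow DSL is essentially the following (a minimal sketch; the job names are placeholders for my real downstream jobs):

```groovy
// Build Flow plugin DSL: schedule the three downstream test jobs concurrently.
parallel (
    { build("test-suite-A") },   // each job points at its own test XML file
    { build("test-suite-B") },
    { build("test-suite-C") }
)
```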
When I initially set this up, I had a problem where only two jobs would execute at the same time; the third would only start after one of the others had finished.
I found this was because my Jenkins instance had its number of executors set to 2, which I have now set to 5.
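For anyone hitting the same limit: the setting is "# of executors" under Manage Jenkins > Configure System, and (assuming admin access) it can also be changed from the Script Console with something along these lines:

```groovy
import jenkins.model.Jenkins

// Bump the executor count on the master node and persist the change.
def jenkins = Jenkins.instance
jenkins.setNumExecutors(5)
jenkins.save()
println "Executors now set to ${jenkins.numExecutors}"
```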
However, as a matter of interest and for future planning: does Jenkins have a cap on the number of executors you can have? Is there a recommended number you shouldn't exceed? Or is it solely down to the environment you are running it on?
If there is a cap or a recommended number not to exceed, I presume the best way to deal with it would be a master/slave setup, spreading the workload across multiple VMs.
For example, if I set it to 6 executors, would that mean 6 executors on each VM, or 6 executors shared out between the VMs?
It really depends on your environment and the amount of resources allocated to that instance of Jenkins. I do not believe there is any limit on the number of executors Jenkins allows; we currently run a single Jenkins instance with 20 executors with no problems. Of course, depending on your build pipeline and commit patterns, you may find that the number of executors you set is too high. You just have to keep an eye on whether you actually use all of the executors at any given time.
If you find you are nearing the resource ceiling for your Jenkins instance and you can't increase its resources, then you would want to start using slaves.
One way we keep an eye on resource usage is through the Monitoring plugin.
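If you just want a quick point-in-time check, a rough Script Console sketch like this prints busy versus configured executors for every node (the Monitoring plugin gives the same information over time, with graphs):

```groovy
import jenkins.model.Jenkins

// Snapshot of executor utilization across all nodes (master and slaves).
Jenkins.instance.computers.each { computer ->
    println "${computer.displayName}: ${computer.countBusy()}/${computer.numExecutors} executors busy"
}
```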
**Edit:**
We recently had to re-evaluate our executor-to-processor ratio based on what our pipelines are actually doing. You really need to determine how CPU-bound your pipelines/jobs are; this will inevitably vary between use cases.
In our case, our pipelines were very CPU-bound and became more so as we optimized our system, introduced parallel processing, and so on. We ended up with one executor per CPU core, plus a few extra "admin" executors for quick, lightweight tasks.
We currently run Jenkins on a single machine with 20 cores, but again, this will vary by use case. We had to do a lot of calculations to determine how many executors / CPU cores we needed based on our build frequency and target build time. When we eventually have to increase our processing power, we'll use slaves to distribute jobs across multiple machines.
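As a rough illustration of that kind of sizing calculation (the numbers below are made up, not our actual figures):

```groovy
// Back-of-the-envelope executor sizing from build frequency and duration.
def buildsPerHour   = 12        // peak number of builds triggered per hour
def avgBuildMinutes = 20        // average build duration

// Average number of builds in flight at any moment (Little's law).
def concurrentBuilds = buildsPerHour * (avgBuildMinutes / 60.0)

// Round up and add a couple of spare executors for lightweight "admin" jobs.
def executorsNeeded = (int) Math.ceil(concurrentBuilds) + 2

println "~${concurrentBuilds} builds in flight at peak -> ${executorsNeeded} executors"
```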
One other note: you also need to take the available memory into account, since increasing the number of executors increases the amount of memory consumed. We had a case where we had enough processing power but not enough memory, which resulted in Jenkins processes crashing with OutOfMemory errors.
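The same kind of back-of-the-envelope check works for memory (again, illustrative numbers only):

```groovy
// Worst-case memory budget: every executor busy with a heavyweight build.
def executors         = 20
def peakMemPerBuildGb = 1.5   // worst-case memory for a single build
def jenkinsOverheadGb = 4     // Jenkins master process plus OS overhead
def machineMemGb      = 32

def worstCaseGb = executors * peakMemPerBuildGb + jenkinsOverheadGb
println "Worst case ~${worstCaseGb} GB vs ${machineMemGb} GB available" +
        (worstCaseGb > machineMemGb ? " -> risk of OutOfMemory errors" : " -> OK")
```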