It's very convenient to have R scripts for doing simple plots from the command line. However, running R from bash scripts is not convenient at all. The ideal might be something like
#!/path/to/R
...
or
#!/usr/bin/env R
...
but I haven't been able to make either of those work.
Another option is keeping the scripts purely in R, e.g. script.R, and invoking them with R --file=script.R or similar. However, occasionally a script will rely on obscure command-line switches, at which point part of the code exists outside the script itself. Example: sneaking things into R from bash via a local .Rprofile, in which case the desired switches are everything --vanilla implies except --no-init-file.
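Spelled out, that invocation ends up looking something like this (the flag list is just what --vanilla is documented to combine, minus --no-init-file; script.R as above):
# keep everything --vanilla implies (--no-save --no-restore --no-site-file
# --no-init-file --no-environ) except --no-init-file, so the local .Rprofile
# is still read
R --no-save --no-restore --no-site-file --no-environ --file=script.R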
Another option is a bash script that stores the R flags and is painlessly executable, and which in turn calls the R script. The problem is that a single program has now been split into two files which have to be kept in sync, transferred to new machines together, etc.
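That is, something along these lines (file names invented purely for illustration):
#!/bin/bash
# wrapper.sh -- remembers the flags so analysis.R stays pure R
# (both file names are invented for illustration)
R --no-save --no-restore --no-site-file --no-environ \
  --file=analysis.R --args "$@"
Anything after --args shows up in commandArgs() inside the R file, so the wrapper stays a one-liner, but it is still a second file.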
The option I currently despise least is embedding the R in a bash script:
#!/bin/bash
... # usage message to catch bad input without invoking R
... # any bash pre-processing of input
... # etc
R --random-flags <<RSCRIPT
# R code goes here
RSCRIPT
Everything's in a single file. It's executable and easily handles arguments. The problem is that combining bash and R like this pretty much guarantees that any IDE will choke on one language or the other, and it makes my heart hurt real bad.
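For concreteness, here's a filled-in sketch of that skeleton (the flag choices and the greeting are placeholders, not something I'm committed to):
#!/bin/bash
# Filled-in version of the skeleton above; flags and greeting are placeholders.
if [ $# -lt 1 ]; then
    echo "usage: $(basename "$0") <name>" >&2
    exit 1
fi
R --quiet --no-save <<RSCRIPT
# "$1" is expanded by bash before R ever parses this code
name <- "$1"
message(sprintf("Hello %s", name))
RSCRIPT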
Is there some better way I'm missing?
Content of script.r:
#!/usr/bin/env Rscript
args = commandArgs(trailingOnly = TRUE)
message(sprintf("Hello %s", args[1L]))
The first line is the shebang line. It's best practice to use /usr/bin/env Rscript instead of hard-coding the path to your R installation. Otherwise you risk your script breaking on other computers.
Next, make it executable (on the command line):
chmod +x script.r
Invocation from command line:
./script.r world
# Hello world
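The question's point about catching bad input can also be handled inside the script itself rather than in a bash wrapper; a minimal sketch (the usage text and exit status are arbitrary choices):
#!/usr/bin/env Rscript
# Sketch of moving the bad-input check into the script itself;
# the usage text and exit status are arbitrary choices.
args <- commandArgs(trailingOnly = TRUE)
if (length(args) < 1L) {
  message("usage: script.r <name>")
  quit(status = 1L)
}
message(sprintf("Hello %s", args[1L]))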