I have split a large text file into several sets of smaller files for some performance testing I'm doing. There are a number of directories like this:
/home/brianly/output-02 (contains 2 files myfile.chunk.00 and myfile.chunk.01)
/home/brianly/output-04 (contains 4 files...)
/home/brianly/output-06 (contains 6 files...)
It's important to note that the number of files increases from one directory to the next. What I need to do is run an executable against each of the text files in the output directories. For a single file, the command looks something like this:
./myexecutable -i /home/brianly/output-02/myfile.chunk.00 -o /home/brianly/output-02/myfile.chunk.00.processed
Here the -i parameter is the input file and the -o parameter is the output location.
In C# I'd loop over the directories, get the list of files in each folder, then loop over those files to run the command lines. How do I traverse a directory structure like this using bash, and execute the command with the correct parameters based on the location and the files in that location?
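For reference, this is the kind of nested loop I have in mind, sketched in bash (I don't know whether this is the idiomatic approach; directory and file names as above):

for dir in /home/brianly/output-*; do
    # each chunk file ends in a two-digit suffix, e.g. myfile.chunk.00
    for file in "$dir"/myfile.chunk.??; do
        ./myexecutable -i "$file" -o "$file.processed"
    done
done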
For this kind of thing I always use find together with xargs:
$ find output-* -name "*.chunk.??" | xargs -I{} ./myexecutable -i {} -o {}.processed
Since your command processes only one file at a time, using -exec (or -execdir) directly with find, as already suggested, is just as efficient here. I'm used to reaching for xargs, though, because it's generally much more efficient when feeding a command that operates on many arguments at once. That makes it a very useful tool to keep in one's utility belt, so I thought it was worth mentioning.
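For completeness, the -exec form mentioned above would look roughly like this; note that substituting {} inside a larger word such as {}.processed works with GNU and BSD find but is not guaranteed by POSIX:

$ find output-* -name "*.chunk.??" -exec ./myexecutable -i {} -o {}.processed \;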