I run my own script to dump databases into files on a nightly basis.
I wanted to measure the time (in seconds) it takes to dump each database, so I tried to write some helper functions for that, but I'm running into problems.
I'm no expert at bash scripting, so if I'm doing this plain wrong, just say so and ideally suggest an alternative, please.
Here's the script:
#!/bin/bash
declare -i time_start
function get_timestamp {
    declare -i time_curr=`date -j -f "%a %b %d %T %Z %Y" "\`date\`" "+%s"`
    echo "get_timestamp:" $time_curr
    return $time_curr
}
function timer_start {
    get_timestamp
    time_start=$?
    echo "timer_start:" $time_start
}
function timer_stop {
    get_timestamp
    declare -i time_curr=$?
    echo "timer_stop:" $time_curr
    declare -i time_diff=$time_curr-$time_start
    return $time_diff
}
timer_start
sleep 3
timer_stop
echo $?
The code should really be quite self-explanatory; the echo commands are only there for debugging.
I expect the output to be something like this:
$ bash timer.sh
get_timestamp: 1285945972
timer_start: 1285945972
get_timestamp: 1285945975
timer_stop: 1285945975
3
Unfortunately, that is not the case. What I get instead is:
$ bash timer.sh
get_timestamp: 1285945972
timer_start: 116
get_timestamp: 1285945975
timer_stop: 119
3
As you can see, the value that the local variable time_curr gets from the command is a valid timestamp, but returning that value causes it to be changed to an integer between 0 and 255.
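If it helps, the mangled values above look like the original timestamps taken modulo 256 (easy to check in the shell with arithmetic expansion):

$ echo $(( 1285945972 % 256 ))
116
$ echo $(( 1285945975 % 256 ))
119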
Can someone please explain to me why this is happening?
PS. This obviously is just my timer test script without any other logic.
UPDATE: Just to be perfectly clear, I want this to be part of a bash script very similar to this one, where I want to measure each loop cycle. Unless of course I can do that with time, in which case please suggest a solution.
You don't need to do all this. Just run time <yourscript> in the shell.
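For example, to time the whole nightly run from the shell (dump_databases.sh is just a placeholder name):

$ time bash dump_databases.sh

time is also a bash keyword, so you can wrap each command inside the loop to get a per-database figure. A minimal sketch, assuming a hypothetical mysqldump loop (the database names and paths are made up):

#!/bin/bash
for db in sales inventory users; do            # placeholder database names
    echo "Dumping $db"
    time mysqldump "$db" > "/backups/$db.sql"  # prints real/user/sys to stderr
done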