I'm learning Go right now and one of my first projects is a simple ping script. Essentially I want to ping a bunch of URLs, and on the response of each one wait XXX seconds, then ping again. Here is the abridged code:
func main() {
	// read our text file of urls
	f, err := ioutil.ReadFile(urlFile)
	if err != nil {
		log.Print(err)
	}
	urlStrings := strings.Split(string(f), "\n")
	for _, v := range urlStrings {
		go ping(v)
	}
	// output logs to the terminal
	// channel is global
	for i := range c {
		fmt.Println(i)
	}
}
func ping(url string) {
	// for our lag timer
	start := time.Now()
	// make our request
	_, err := http.Get(url)
	if err != nil {
		msg := url + " Error:" + err.Error()
		fmt.Println(msg)
		c <- msg
		reportError(msg)
	} else {
		lag := time.Since(start)
		var msg string
		// running slow
		if lag > lagThreshold*time.Second {
			msg = url + " lag: " + lag.String()
			reportError(msg)
		}
		msg = url + ", lag: " + lag.String()
		c <- msg
	}
	time.Sleep(pingInterval * time.Second)
	go ping(url) // is this acceptable?
}
On my Get request I was previously calling defer res.Body.Close(), but that was panicking after the app ran for a while. I assumed that the defer could not call Close() on the response until the goroutine had been garbage collected and res no longer existed.
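To make that concrete, here is a stripped-down, runnable version of what I had before removing the defer. I don't recall the exact placement of the defer, but it was something along these lines (the URL is just a placeholder):

package main

import (
	"fmt"
	"net/http"
)

func ping(url string) {
	// reconstructed from memory; the real version also logged to the channel
	res, err := http.Get(url)
	// this is the deferred Close I have since removed
	defer res.Body.Close()
	if err != nil {
		fmt.Println(url + " Error:" + err.Error())
		return
	}
	fmt.Println(url + " status: " + res.Status)
}

func main() {
	ping("http://example.com")
}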
That got me wondering whether calling a goroutine inside of a goroutine was best practice, or whether I was causing the function to never exit, so that the defer would only be called once the goroutine was garbage collected.
That is fine. It's perfectly acceptable to call a goroutine from another goroutine. The calling goroutine will still exit, and the new goroutine will go on its merry way.
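As a quick illustration (a self-contained sketch; the names here are made up, not taken from your code), one goroutine can start another and then return immediately, and the new goroutine keeps running on its own:

package main

import (
	"fmt"
	"time"
)

func worker(n int) {
	fmt.Printf("worker %d started\n", n)
	if n < 3 {
		// starting the next worker does not keep this one alive;
		// worker(n) returns right after this statement
		go worker(n + 1)
	}
	fmt.Printf("worker %d returning\n", n)
}

func main() {
	go worker(1)
	// crude wait so the spawned goroutines get a chance to print
	time.Sleep(500 * time.Millisecond)
}

Note that deferred calls run when the surrounding function returns, not when the goroutine is eventually garbage collected.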