I'm writing a load-balanced server system in Go.
The load-balancing server will communicate with several application servers and process requests. These servers can run either on the same machine or across the network.
I already have the networking figured out, but now I need an efficient way for the load balancer to communicate with a local application server; going through localhost networking seems far from optimal.
I'm trying to share memory via the shmget and shmat system calls, but I haven't found any working examples, and the syscall package is also completely undocumented.
Can someone provide me with an example of how to use these calls, or a realistic alternative that works in Go for doing IPC?
Go has a built-in RPC system (http://golang.org/pkg/rpc/) for easy communication between Go processes.
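As a minimal sketch of that package (the API now lives under net/rpc): the Arith service, the Args type, and the TCP address below are made up for illustration, and the client and server are squeezed into one process just to show the calls.

```go
package main

import (
	"log"
	"net"
	"net/rpc"
)

// Args is an illustrative request type shared by client and server.
type Args struct{ A, B int }

// Arith is an example RPC service; methods follow the net/rpc
// convention: exported, (args, reply) parameters, error result.
type Arith struct{}

func (t *Arith) Multiply(args *Args, reply *int) error {
	*reply = args.A * args.B
	return nil
}

func main() {
	rpc.Register(new(Arith))

	ln, err := net.Listen("tcp", "127.0.0.1:1234")
	if err != nil {
		log.Fatal(err)
	}
	go rpc.Accept(ln) // serve connections in the background

	// The client (normally a separate process) calls the service.
	client, err := rpc.Dial("tcp", "127.0.0.1:1234")
	if err != nil {
		log.Fatal(err)
	}
	var product int
	if err := client.Call("Arith.Multiply", &Args{A: 7, B: 6}, &product); err != nil {
		log.Fatal(err)
	}
	log.Println("7*6 =", product) // prints 42
}
```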
Another option is to send gob-encoded data (http://blog.golang.org/2011/03/gobs-of-data.html) over a network connection.
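A rough sketch of that approach, assuming a made-up Request type and address; in reality the encoder and decoder would live in separate processes, here they share one process just to show the calls.

```go
package main

import (
	"encoding/gob"
	"log"
	"net"
)

// Request is an illustrative message type; both ends need the same
// type definition for gob to decode it.
type Request struct {
	Path string
	Body []byte
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:9000")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()

	done := make(chan struct{})

	// Receiver: decode gob values straight off the connection.
	go func() {
		defer close(done)
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		var req Request
		if err := gob.NewDecoder(conn).Decode(&req); err != nil {
			log.Fatal(err)
		}
		log.Printf("received %q (%d bytes)", req.Path, len(req.Body))
	}()

	// Sender: encode a value directly onto the connection.
	conn, err := net.Dial("tcp", "127.0.0.1:9000")
	if err != nil {
		log.Fatal(err)
	}
	if err := gob.NewEncoder(conn).Encode(Request{Path: "/work", Body: []byte("payload")}); err != nil {
		log.Fatal(err)
	}
	conn.Close()
	<-done
}
```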
You shouldn't dismiss local networking without benchmarking it. For example, Chrome uses named pipes for IPC, and it transfers a lot of data (e.g. rendered bitmaps) between processes:
Our main inter-process communication primitive is the named pipe. On Linux & OS X, we use a socketpair().
-- http://www.chromium.org/developers/design-documents/inter-process-communication
If named pipes are good enough for that, they are probably good enough for your use case too. And if you structure your code well, you can start with named pipes (because they are easy) and switch to shared memory later if the pipes turn out not to be fast enough (shared memory is not easy in any language).
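In Go, the closest equivalent of a named pipe / socketpair() is a Unix domain socket from the net package. A minimal sketch, assuming an illustrative socket path and a trivial echo exchange; both ends are shown in one process for brevity.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"os"
)

const socketPath = "/tmp/lb.sock" // illustrative path

func main() {
	os.Remove(socketPath) // remove a stale socket from a previous run

	// The application server listens on a Unix domain socket.
	ln, err := net.Listen("unix", socketPath)
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()

	go func() {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		// Echo one line back to the caller.
		line, _ := bufio.NewReader(conn).ReadString('\n')
		fmt.Fprintf(conn, "echo: %s", line)
	}()

	// The load balancer dials the same socket instead of localhost TCP.
	conn, err := net.Dial("unix", socketPath)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Fprintln(conn, "hello")
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	log.Print(reply)
}
```

Switching this to plain localhost TCP later (or the other way around) only changes the network and address arguments to net.Listen and net.Dial, which makes it easy to benchmark both.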