CloudWatch Log costing too much

mwright · May 24, 2016 · Viewed 9.2k times

I've been doing some Amazon AWS tinkering for a project that pulls in a decent amount of data. Most of the services have been super cheap, but CloudWatch log storage is dominating the bill: it's $13 of the $18 total. I'm already deleting logs as I go.

[screenshot: CloudWatch usage]

[screenshot: CloudWatch bill]

How do I get rid of the logs from storage (removing the groups from the console doesn't seem to be doing it), or lower the cost of the logs (this post indicated storage should be $0.03/GB, but mine works out to more than that), or something else?

What strategies are people using?

Answer

eduncan911 · May 25, 2016

Don't Log Everything

Can you tell us how many logs/hour you are pushing?

One thing I've learned over the years is that while multi-level logging (Debug, Info, Warn, Error, Fatal) is nice, it has two serious drawbacks:

  • it slows down the application, which has to evaluate all of those levels at runtime. Even if you say "only log Warn, Error and Fatal", the arguments to the Debug and Info calls are still evaluated at runtime!
  • it increases logging costs (I was using LogEntries, and the move to running a LogStash + ElasticSearch cluster only added DevOps labor and hosting costs on top).

For the record, I've paid over $1000/mo for logging on previous projects. PCI compliance requires 2 years of logs for security audits, and we were sending thousands of log lines per second.

I even gave talks about how you should be logging everything in context:

http://go-talks.appspot.com/github.com/eduncan911/go-slides/gologit.slide#1

I have since retreated from that stance after benchmarking my applications and funcs, and weighing the overall cost of labor and log storage in production.

I now log only the minimum (errors), and use packages such as Google's glog that skip evaluation at runtime when the log level is not enabled.

Also, since moving to Go development, I have adopted a strategy of very small units of code (e.g. microservices and packages) and dedicated CLI utils, which negates the need for lots of Debug and Info statements in a monolithic stack: I can just log the RPCs to and from each service instead. Better yet, just monitor the event bus.

Finally, with unit tests for these small services, you can be sure of how your code behaves: you don't need those Info and Debug statements in production code because your tests exercise the good and bad input conditions. The Info and Debug statements can live inside your unit tests, leaving your code free of that cross-cutting concern.

All of this basically reduces your logging needs in the end.

Alternative: Filter your Logs

How are you shipping your logs?

If you can't exclude all of the Debug, Info and other lines at the source, another idea is to filter your logs before you ship them, using sed, awk or the like to pipe into another file.

When you need to debug something, that's when you change the sed/awk filter to send the extra log info. When you're done debugging, go back to filtering and ship only the minimum, like Exceptions and Errors.
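As a minimal sketch of that filtering step (the log line format, with the level as the second field, and the file names `app.log` / `shipped.log` are assumptions):

```shell
# Assumed line format: "TIMESTAMP LEVEL message".
printf '%s\n' \
  '2016-05-24T12:00:00 DEBUG cache miss' \
  '2016-05-24T12:00:01 INFO request served' \
  '2016-05-24T12:00:02 ERROR upstream timeout' > app.log

# Keep only WARN/ERROR/FATAL lines before shipping to CloudWatch;
# relax the pattern while debugging, then tighten it again.
awk '$2 ~ /^(WARN|ERROR|FATAL)$/' app.log > shipped.log

cat shipped.log
```

Only the ERROR line survives into `shipped.log`, so only that line incurs ingestion and storage cost.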