I was recently debugging an issue with a fleet of Apache web servers. I needed to watch for some low-level network events that we suspected were causing the problem (TCP resets, etc.). I thought CloudWatch Logs would be a cool, albeit unnecessary, solution.
NOTE: I found a much cleaner way to do this presented here.
The awslogs package/daemon can be configured to upload any log file; just add a new configuration block to /etc/awslogs/awslogs.conf. For example, the configuration below uploads the contents of /var/log/tcpdump to a stream identified by the server's instance ID, in a log group called NetworkTrace. Note that the group and stream must be created in the AWS console first.
[/var/log/tcpdump]
file = /var/log/tcpdump
log_group_name = NetworkTrace
log_stream_name = {instance_id}
datetime_format = %Y-%m-%d %H:%M:%S.%f
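The group and stream can also be created from the command line instead of the console, and the agent has to be restarted to pick up the new block. The following is a rough sketch, assuming the AWS CLI is installed, the instance's IAM role allows logs:CreateLogGroup and logs:CreateLogStream, and the agent runs as the awslogs service (as it does on Amazon Linux):
# Create the log group and a stream named after this instance
aws logs create-log-group --log-group-name NetworkTrace
aws logs create-log-stream --log-group-name NetworkTrace --log-stream-name $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Restart the agent so it reads the new configuration block
sudo service awslogs restart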
With that done, you can start tcpdump and have it dump to a file. By default, though, tcpdump does not include the full date and time in each record; you need to include the -tttt option so that awslogs can parse the timestamp correctly. With -tttt, each record is prefixed with a timestamp in the format 2014-09-24 15:20:29.522949.
Now simply start a background process to dump the trace to a file and you should start to see events in CloudWatch. For example, this will capture everything with minimal detail.
sudo tcpdump -tttt >> /var/log/tcpdump &
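If events do not appear after a minute or two, the agent's own log is the first place to look; on Amazon Linux the awslogs package typically writes it to /var/log/awslogs.log (the exact path may vary by version):
# Watch the agent log for upload or parsing errors
sudo tail -f /var/log/awslogs.log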
If you want to capture more detail, you should filter the capture down to only the events you care about. For example, this will capture all traffic on port 80, including a hex dump of the data.
sudo tcpdump -tttt -nnvvXS tcp port 80 >> /var/log/tcpdump &
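Once the capture is running, you can also confirm that events are arriving without opening the console. A quick check with the AWS CLI (assuming it is installed and has read access to the log group) might look like this:
# Pull the most recent events from this instance's stream
aws logs get-log-events --log-group-name NetworkTrace --log-stream-name $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --limit 10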