Below you will find pages that utilize the taxonomy term “CLI”
Posts
CloudWatch Logs Push
In my last post I used the awslogs daemon to push tcpdump events to AWS CloudWatch Logs. At the time it felt silly to use a file on disk and a daemon to push events from an interactive session. Well, I had some time to dig, and I found a much cleaner way to do it without the daemon.
It turns out that CloudWatch Logs is implemented as a plugin to the AWS CLI. The plugin can be configured to read from a file, or you can simply pipe events directly to it on the command line.
You need to register the plugin in your config file (~/.aws/config). Mine looks like this.
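A minimal sketch of that config, assuming the CloudWatch Logs plugin ships as the `awscli-cwlogs` package and registers under the name `cwlogs`:

```
# ~/.aws/config -- registers the CloudWatch Logs plugin for the AWS CLI
# (assumes the awscli-cwlogs package is installed, e.g. via pip)
[plugins]
cwlogs = cwlogs
```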
Now you can simply pipe data to `aws logs push`. You need to specify the group, stream, and date format as parameters. And, of course, the group and stream must already exist in AWS. For example:
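A sketch of the invocation; the group name `NetworkTrace` and stream name `i-0123456789` are placeholders for a group and stream you have already created, and the datetime format matches tcpdump's `-tttt` output:

```
# Pipe tcpdump events straight to CloudWatch Logs -- no file, no daemon.
# The group and stream must already exist in CloudWatch.
sudo tcpdump -tttt | aws logs push \
    --log-group-name NetworkTrace \
    --log-stream-name i-0123456789 \
    --datetime-format '%Y-%m-%d %H:%M:%S.%f'
```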
Posts
CloudWatch Logs and TCPDump
I was recently debugging an issue with a fleet of Apache web servers. I needed to watch for some low-level network events we felt might be causing an issue (TCP resets, etc.). I thought CloudWatch Logs would be a cool, albeit unnecessary, solution.
NOTE: I found a much cleaner way to do this presented here.
The awslogs package/daemon can be configured to upload any log file. Just add a new configuration block to /etc/awslogs/awslogs.conf. For example, the configuration below says to upload the contents of /var/log/tcpdump to a stream identified by the server's instance ID in a log group called NetworkTrace. Note that the group and stream must be created in the AWS console first.
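A configuration block along those lines, assuming the awslogs daemon's standard config keys and its `{instance_id}` stream-name substitution:

```
# /etc/awslogs/awslogs.conf -- upload /var/log/tcpdump to CloudWatch Logs
[/var/log/tcpdump]
file = /var/log/tcpdump
log_group_name = NetworkTrace
log_stream_name = {instance_id}
datetime_format = %Y-%m-%d %H:%M:%S.%f
```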
With that done, you can start tcpdump and have it dump to a file. But, by default, tcpdump does not include the full date and time in each record. You need to include the -tttt option so that awslogs can parse the date and time correctly. The -tttt option will use the format 2014-09-24 15:20:29.522949.
Now simply start a background process to dump the trace to a file and you should start to see events in CloudWatch. For example, this will capture everything with minimal detail.
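A sketch of that background capture, writing to the file the awslogs config above is watching:

```
# Capture everything with minimal detail; -tttt prints the full
# date and time on each record so awslogs can parse it.
sudo tcpdump -tttt > /var/log/tcpdump &
```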
If you want to capture more detail, you should filter it down to only some events. For example, this will capture all traffic on port 80, including a hex dump of the data.
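A sketch of a filtered capture; -X adds a hex/ASCII dump of each packet's payload, and -nn skips host and port name resolution:

```
# Capture only port 80 traffic, with a hex/ASCII dump of each packet.
sudo tcpdump -tttt -nn -X port 80 > /var/log/tcpdump &
```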