
I use nohup to start Ansible playbooks on the master node, which is spawned via boto3 from AWS Lambda.

This playbook creates multiple workers, makes them do some tasks, and terminates them.

So, I want to design/build a reliable monitoring system for this. For now, I am writing the nohup output to a tasks.log file on the master and pushing it to a store (currently S3) once the Ansible plays are completed.
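
Roughly, the current approach looks like this (paths and the bucket name are placeholders):

    #!/bin/bash
    # Run the playbook detached from the terminal, capturing all output.
    nohup ansible-playbook workers.yml >> /var/log/ansible/tasks.log 2>&1 &
    # Block until the plays complete, then push the log to S3.
    wait $!
    aws s3 cp /var/log/ansible/tasks.log "s3://my-monitoring-bucket/logs/tasks-$(date +%s).log"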

But I wanted to know: is there a more elegant and/or better method for doing this? And are there good tools that would help me in the process?

Dawny33

1 Answer


I'm not sure it's a best practice, but what I would do is use Filebeat to read the log in near real time and push it to Elasticsearch.
AWS provides an Elasticsearch Service, which can save you from running that part yourself.
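
To give an idea, a minimal Filebeat configuration for this could look like the following; the host and index name are placeholders, and the AWS Elasticsearch Service endpoint needs an access policy that permits these requests:

    # filebeat.yml -- ship tasks.log straight to Elasticsearch
    filebeat.prospectors:
      - input_type: log
        paths:
          - /var/log/ansible/tasks.log

    output.elasticsearch:
      hosts: ["https://my-domain.es.amazonaws.com:443"]
      index: "ansible-logs-%{+yyyy.MM.dd}"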

The drawback of shipping from Filebeat directly to Elasticsearch is that the event time will be the time the prospector read each line, so if your logs carry their own timestamps, events can be "ingested" in a different order than they were produced (e.g. when they come from multiple workers).

You can add Logstash in between, sending from Filebeat to Logstash and then on to Elasticsearch. Logstash is able to interpret your logs and split them into searchable fields; in the same manner, you can parse the date in each log line so events are stored with the correct timestamp. A sketch of such a pipeline follows.
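
The grok pattern and field names below are assumptions about what the log lines look like, so adapt them to the real format; Filebeat would then point output.logstash at port 5044 instead of Elasticsearch:

    # logstash.conf -- receive from Filebeat, parse, forward to Elasticsearch
    input {
      beats {
        port => 5044
      }
    }

    filter {
      # Split each line into searchable fields; adjust to the real log format.
      grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{GREEDYDATA:log_message}" }
      }
      # Use the timestamp from the log line as the event time,
      # so events keep the order in which they were produced.
      date {
        match => ["log_time", "ISO8601"]
      }
    }

    output {
      elasticsearch {
        hosts => ["https://my-domain.es.amazonaws.com:443"]
        index => "ansible-logs-%{+YYYY.MM.dd}"
      }
    }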

You could use Logstash directly to read the logs, but it's quite heavy on resources, so I would keep it separate to avoid impacting the workers.

If you buy an X-Pack license, you can add alerts on messages directly from Elasticsearch.
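
As a rough example, a watch could look like this; the index and field names follow the placeholder examples above, and the exact API depends on your X-Pack version:

    PUT _xpack/watcher/watch/ansible_errors
    {
      "trigger": { "schedule": { "interval": "5m" } },
      "input": {
        "search": {
          "request": {
            "indices": ["ansible-logs-*"],
            "body": { "query": { "match": { "log_message": "ERROR" } } }
          }
        }
      },
      "condition": { "compare": { "ctx.payload.hits.total": { "gt": 0 } } },
      "actions": {
        "email_admin": {
          "email": {
            "to": "ops@example.com",
            "subject": "Errors found in Ansible logs",
            "body": "{{ctx.payload.hits.total}} error lines in the last 5 minutes"
          }
        }
      }
    }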

If you want to save yourself the burden of maintaining Logstash, there are services like logmatic.io which provide the full stack; you just push your logs, and much of the parsing is handled out of the box.

Tensibai