
I'm trying to set up centralized syslog collection for multiple log sources.

I have a Logstash instance with two separate inputs and two separate outputs.

However, for some reason the data from one of the inputs ends up in both indices.

What am I doing wrong?

Below are both pipelines' configs:

input {
  tcp {
    port => 5052
    codec => "json_lines"
  }
}
output {
  elasticsearch {
    hosts => "10.50.6.116"
    index => "remote"
  }
  file {
    path => "/var/log/logstash/remote-tcp.log"
  }
  stdout { codec => rubydebug }
}

input {
  file {
    path => "/data/vmlist/*.csv"
    start_position => "beginning"
    sincedb_path => "/tmp/sincedb"
  }
}
filter {
  csv {
    separator => ","
    columns => ["VM Name","Creation Date","Owner","Type","Message"]
  }
}
output {
  elasticsearch {
    hosts => "http://10.50.6.116:9200"
    index => "vms"
    document_type => "csv"
  }
  stdout { codec => rubydebug }
}
Dan Cornilescu
Saar Grin

2 Answers


There are two ways to accomplish this, though one of them was only available recently.

The old-school version, the one you can do as far back as Logstash 1.5, is to pay attention to tags and use conditionals to separate your inputs. Roughly...

input {
  tcp {
    port => 1525
    codec => json_lines
    tags => [ 'tcp' ]
  }
}

input {
  file {
    path => '/var/log/app.log'
    codec => 'json'
    tags => [ 'file' ]
  }
}

output {
  if 'file' in [tags] {
    elasticsearch {
      hosts => 'logstash-es'
      index => 'files'
    }
  }
  if 'tcp' in [tags] {
    elasticsearch {
      hosts => 'logstash-es'
      index => 'tcp'
    }
  }
}

This results in two inputs that route to two separate outputs. It's all one file, though. Elastic figured out people were muxing pipelines this way, and came up with a way to define multiple pipelines in separate files, declared in pipelines.yml:

- pipeline.id: tcp-inputs
  path.config: '/etc/logstash/pipelines/tcp.cfg'
  pipeline.workers: 3
- pipeline.id: file-inputs
  path.config: '/etc/logstash/pipelines/files.cfg'
  pipeline.workers: 2

This approach is somewhat more maintainable since the pipelines live in separate files, and humans don't have to reason out how the flows work from one big file. Multiple pipelines are available in Logstash 6.0 and newer.
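With that layout, each pipeline keeps its own self-contained config. As a sketch of what the two referenced files could contain (the port, paths, Elasticsearch host, and index names are taken from the question, so adjust to taste), tcp.cfg and files.cfg might look like:

```
# /etc/logstash/pipelines/tcp.cfg -- handles only the TCP input
input {
  tcp {
    port => 5052
    codec => "json_lines"
  }
}
output {
  elasticsearch {
    hosts => "http://10.50.6.116:9200"
    index => "remote"
  }
}

# /etc/logstash/pipelines/files.cfg -- handles only the CSV files
input {
  file {
    path => "/data/vmlist/*.csv"
    start_position => "beginning"
    sincedb_path => "/tmp/sincedb"
  }
}
filter {
  csv {
    separator => ","
    columns => ["VM Name","Creation Date","Owner","Type","Message"]
  }
}
output {
  elasticsearch {
    hosts => "http://10.50.6.116:9200"
    index => "vms"
  }
}
```

Because each file belongs to a different pipeline, events never cross between them, so no conditionals are needed.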

sysadmin1138

Once parsed, your config files create one and only one pipeline, with various inputs, various filters and various outputs: Logstash concatenates every config file it loads into a single configuration, so events from each input flow through all the filters and outputs.

You have to use conditional constructs to apply a filter or output only to specific messages, usually keyed on a special tag or field set on the input.

Some examples are available in the Logstash documentation:

output {
  if [type] == "apache" {
    if [status] =~ /^5\d\d/ {
      nagios { ...  }
    } else if [status] =~ /^4\d\d/ {
      elasticsearch { ... }
    }
    statsd { increment => "apache.%{status}" }
  }
}
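Applied to your setup, a sketch could set a distinct `type` field on each input and route on it (ports, paths, columns, host and index names are taken from your question):

```
input {
  tcp {
    port => 5052
    codec => "json_lines"
    type => "remote"   # field set on input, used for routing below
  }
  file {
    path => "/data/vmlist/*.csv"
    start_position => "beginning"
    sincedb_path => "/tmp/sincedb"
    type => "vms"
  }
}
filter {
  if [type] == "vms" {   # only parse CSV for the file input
    csv {
      separator => ","
      columns => ["VM Name","Creation Date","Owner","Type","Message"]
    }
  }
}
output {
  if [type] == "remote" {
    elasticsearch {
      hosts => "http://10.50.6.116:9200"
      index => "remote"
    }
  } else if [type] == "vms" {
    elasticsearch {
      hosts => "http://10.50.6.116:9200"
      index => "vms"
    }
  }
}
```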
Tensibai