System Monitoring with ELK Stack: 6 Configuration Options
The ELK Stack (Elasticsearch, Logstash, and Kibana) is a popular open-source solution for system monitoring and log analysis. In this article, we’ll explore six different configuration options for using the ELK Stack in a system monitoring setup.
What is the ELK Stack?
Before diving into the configurations, let’s quickly review what the ELK Stack is all about:
- Elasticsearch: A search and analytics engine that stores and indexes log data.
- Logstash: A log collection and processing tool that ingests logs from various sources and sends them to Elasticsearch for indexing.
- Kibana: A web-based interface for visualizing and exploring the indexed data in Elasticsearch.
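Before wiring up the rest of the pipeline, it can help to confirm the storage layer is reachable at all. Below is a minimal Python sketch, assuming an unsecured Elasticsearch node on its default port 9200 (the hostname and port are assumptions to adjust for your setup):

```python
import json
import urllib.request

# Ask the local Elasticsearch node (default port 9200) to identify itself.
with urllib.request.urlopen("http://localhost:9200") as resp:
    info = json.load(resp)

print(info["version"]["number"])  # e.g. "7.10.1"
```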
Configuration Options
Here are six ELK Stack configuration options for system monitoring:
1. Basic Log Collection
In this configuration, we’ll set up Logstash to collect logs from a single source (e.g., an Apache or Nginx access log) and send them to Elasticsearch for indexing. This is the most basic setup and can be useful for small-scale deployments.
Config Files:
logstash.conf:

```conf
input {
  file {
    path => "/var/log/apache2/access.log"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

elasticsearch.yml:

```yaml
network.host: localhost
```
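Once Logstash is running, you can verify that events are actually arriving. A minimal Python sketch, assuming the default "logstash-*" daily index naming used by the elasticsearch output (index naming can differ across versions and ECS-compatibility settings):

```python
import json
import urllib.request

# Count the documents Logstash has indexed so far. "logstash-*" is the
# default index pattern for the elasticsearch output; adjust if yours differs.
url = "http://localhost:9200/logstash-*/_count"
with urllib.request.urlopen(url) as resp:
    print(json.load(resp)["count"])
```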
2. Log Collection with Filtering
In this configuration, we’ll set up Logstash to collect logs from multiple sources (e.g., Apache and MySQL log files) and drop irrelevant lines before sending the rest to Elasticsearch. In the example below, only lines that mention an error are kept; everything else is discarded, and each surviving event gets a formatted timestamp field.
Config Files:
logstash.conf:

```conf
input {
  file {
    path => ["/var/log/apache2/access.log", "/var/log/mysql/error.log"]
  }
}

filter {
  # Keep only lines that mention an error (the match is case-sensitive);
  # drop everything else before it reaches Elasticsearch.
  if [message] !~ /error/ {
    drop { }
  } else {
    mutate {
      # %{+FORMAT} renders the event's @timestamp with the given pattern.
      add_field => { "timestamp" => "%{+YYYY-MM-dd HH:mm:ss}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

elasticsearch.yml:

```yaml
network.host: localhost
```
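To reason about what the conditional keeps and drops, it can help to replay the logic locally. Here is a rough Python mirror of the filter above, using invented sample lines (note the match is case-sensitive, just like the `/error/` regexp in the config):

```python
import re

# Invented sample lines, one per input file in the config above.
sample_lines = [
    '192.0.2.1 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 512',
    "231010 13:55:40 [error] Table './mysql/user' is marked as crashed",
]

# Mirror the Logstash conditional: drop anything that does not mention
# "error" (case-sensitive, like the /error/ regexp in the config).
kept = [line for line in sample_lines if re.search(r"error", line)]
print(kept)  # only the MySQL error line survives
```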
3. Logstash with Grok Filtering
In this configuration, we’ll set up Logstash to collect logs from multiple sources and use the grok filter to parse log messages into structured fields.
Config Files:
logstash.conf:

```conf
input {
  file {
    path => ["/var/log/apache2/access.log", "/var/log/mysql/error.log"]
  }
}

filter {
  grok {
    # Try the structured pattern first; fall back to keeping the raw line.
    # For real Apache access logs, the built-in %{COMMONAPACHELOG} pattern
    # is usually a better fit.
    match => {
      "message" => [
        "%{IP:client} - %{WORD:method} %{NOTSPACE:request_uri} %{NOTSPACE:http_version}",
        "%{GREEDYDATA:raw_message}"
      ]
    }
  }
  mutate {
    add_field => { "timestamp" => "%{+YYYY-MM-dd HH:mm:ss}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

elasticsearch.yml:

```yaml
network.host: localhost
```
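Grok patterns compile down to named-group regular expressions, so the first pattern above behaves roughly like this Python sketch (the sample line is invented for illustration):

```python
import re

# A rough Python equivalent of the first grok pattern above; grok patterns
# compile down to named-group regexes much like this one.
pattern = re.compile(
    r"(?P<client>\d{1,3}(?:\.\d{1,3}){3}) - "
    r"(?P<method>\w+) (?P<request_uri>\S+) (?P<http_version>\S+)"
)

line = "203.0.113.7 - GET /index.html HTTP/1.1"
match = pattern.match(line)
if match:
    print(match.groupdict())
    # {'client': '203.0.113.7', 'method': 'GET',
    #  'request_uri': '/index.html', 'http_version': 'HTTP/1.1'}
```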
4. Kibana Dashboard with Visualizations
In this configuration, we’ll set up a Kibana dashboard to visualize the indexed data in Elasticsearch and provide insights into system performance.
Config Files:
kibana.yml:

```yaml
server.name: kibana
```

index-patterns.json (illustrative; dashboards are normally built in the Kibana UI and exported as saved objects rather than hand-written — this sketches a date-histogram definition that buckets events on the timestamp field in one-minute intervals):

```json
[
  { "type": "date_histogram", "id": "timestamp", "interval": "1m" }
]
```
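Before building dashboards, it is worth checking that Kibana itself is healthy. A small Python sketch against Kibana's status API on its default port 5601 (the response layout below matches the 7.x shape and varies across versions):

```python
import json
import urllib.request

# Kibana serves a status API on its default port 5601.
with urllib.request.urlopen("http://localhost:5601/api/status") as resp:
    status = json.load(resp)

print(status["status"]["overall"]["state"])  # "green" when Kibana is healthy
```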
5. ELK Stack with Docker
In this configuration, we’ll set up the ELK Stack using Docker containers to provide a scalable and isolated environment for system monitoring.
Config Files:
docker-compose.yml:

```yaml
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      # Required so a lone node doesn't wait for other cluster members.
      - discovery.type=single-node
    ports:
      - "9200:9200"
    restart: always
  logstash:
    build: ./logstash
    volumes:
      - ./data:/var/log
    depends_on:
      - elasticsearch
```

logstash.conf:

```conf
input {
  file {
    # Via the volume mount above, this resolves to ./data/apache2/access.log
    # on the host.
    path => "/var/log/apache2/access.log"
  }
}

output {
  elasticsearch {
    # Inside the Compose network, containers reach each other by service
    # name, not localhost.
    hosts => ["elasticsearch:9200"]
  }
}
```
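After `docker-compose up -d`, Elasticsearch can take a little while to accept connections, so it pays to wait for the cluster to report a usable health status before sending traffic. A small polling sketch in Python, assuming the published port 9200 above:

```python
import json
import time
import urllib.request

# Poll cluster health until Elasticsearch is usable (yellow is normal
# for a single-node cluster, since replicas cannot be assigned).
URL = "http://localhost:9200/_cluster/health"

for _ in range(30):
    try:
        with urllib.request.urlopen(URL) as resp:
            health = json.load(resp)
        if health["status"] in ("yellow", "green"):
            print("Elasticsearch is up:", health["status"])
            break
    except OSError:
        pass  # container still starting
    time.sleep(2)
```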
6. ELK Stack with Security
In this configuration, we’ll set up the ELK Stack to include security features such as authentication and authorization to protect sensitive data.
Config Files:
elasticsearch.yml:

```yaml
xpack.security.enabled: true
```

kibana.yml (the authentication-provider setting belongs on the Kibana side; Kibana itself then needs credentials to talk to the secured cluster):

```yaml
server.name: kibana
xpack.security.authc.providers:
  basic.basic1:
    order: 0
elasticsearch.username: "kibana_system"
elasticsearch.password: "<your-kibana-system-password>"
```
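With security enabled, anonymous requests are rejected, so every client must authenticate. Below is a minimal Python sketch using HTTP basic auth; the user and password are placeholders, and real ones can be set with the `bin/elasticsearch-setup-passwords` tool shipped with Elasticsearch 7.x:

```python
import base64
import json
import urllib.request

# Placeholder credentials: substitute the real ones you configured with
# bin/elasticsearch-setup-passwords.
user, password = "elastic", "changeme"
token = base64.b64encode(f"{user}:{password}".encode()).decode()

req = urllib.request.Request(
    "http://localhost:9200/_cluster/health",
    headers={"Authorization": f"Basic {token}"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["status"])
```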