Logging Flask Applications with the ELK Stack
What is ELK?
ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana.
The three projects work together and can be used as a data collection and analysis pipeline.
Download Project (Optional)
$ git clone https://github.com/paullee714/ELK-docker-python.git
Project Structure
ELK-docker-python
├── README.md
├── docker-elk
│ ├── LICENSE
│ ├── README.md
│ ├── docker-compose.yml
│ ├── docker-stack.yml
│ ├── elasticsearch
│ ├── extensions
│ ├── kibana
│ └── logstash
├── elk-flask
│ ├── __pycache__
│ ├── app.py
│ ├── elk_lib
│ └── route
├── requirements.txt
└── venv
├── bin
├── lib
└── pyvenv.cfg
Setting Up ELK - Docker
Go into the docker-elk directory inside the project folder and run docker-compose.
$ cd docker-elk
$ docker-compose build && docker-compose up -d
Execution Result
➜ docker-elk git:(develop) docker-compose build && docker-compose up -d
Building elasticsearch
Step 1/2 : ARG ELK_VERSION
Step 2/2 : FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
---> f29a1ee41030
Successfully built f29a1ee41030
Successfully tagged docker-elk_elasticsearch:latest
Building logstash
Step 1/2 : ARG ELK_VERSION
Step 2/2 : FROM docker.elastic.co/logstash/logstash:${ELK_VERSION}
---> fa5b3b1e9757
Successfully built fa5b3b1e9757
Successfully tagged docker-elk_logstash:latest
Building kibana
Step 1/2 : ARG ELK_VERSION
Step 2/2 : FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}
---> f70986bc5191
Successfully built f70986bc5191
Successfully tagged docker-elk_kibana:latest
Starting docker-elk_elasticsearch_1 ... done
Starting docker-elk_kibana_1 ... done
Starting docker-elk_logstash_1 ... done
Let’s verify with docker ps
➜ docker-elk git:(develop) docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ae318f58a9af docker-elk_logstash "/usr/local/bin/dock…" 2 days ago Up 47 seconds 0.0.0.0:5000->5000/tcp, 0.0.0.0:9600->9600/tcp, 0.0.0.0:5000->5000/udp, 5044/tcp docker-elk_logstash_1
00a032b5c5c4 docker-elk_kibana "/usr/local/bin/dumb…" 2 days ago Up 47 seconds 0.0.0.0:5601->5601/tcp docker-elk_kibana_1
3b62a3ba2e21 docker-elk_elasticsearch "/usr/local/bin/dock…" 2 days ago Up 47 seconds 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp docker-elk_elasticsearch_1
ELK Port Configuration
When checking with docker ps, the ports for each service are as follows:

- Elasticsearch: 9200, 9300
- Logstash: 5000, 9600
- Kibana: 5601
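Before wiring Flask in, it helps to confirm that Elasticsearch actually responds on 9200. A minimal sanity check using only the standard library (the elastic/changeme credentials are the defaults from the compose file below):

```python
# check_es.py -- quick sanity check that Elasticsearch is reachable on 9200.
# Uses only the standard library; elastic/changeme are the docker-elk defaults.
import base64
import json
import urllib.request

req = urllib.request.Request("http://localhost:9200")
token = base64.b64encode(b"elastic:changeme").decode("ascii")
req.add_header("Authorization", "Basic " + token)

with urllib.request.urlopen(req) as resp:
    print(json.dumps(json.load(resp), indent=2))  # cluster name, version, etc.
```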
Let’s check the docker-compose.yml file and each service’s config file.

Looking at the docker-compose.yml file below, each service mounts its configuration from its own config file:

- Elasticsearch: /elasticsearch/config/elasticsearch.yml
- Logstash: /logstash/config/logstash.yml
- Kibana: /kibana/config/kibana.yml
docker-elk docker-compose.yml file
```yaml
# /ELK-docker-python/docker-elk/docker-compose.yml
version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:
```
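One thing to note: the build args reference $ELK_VERSION, which docker-compose reads from the .env file in the docker-elk directory. The mounted Elasticsearch config itself is small; as a reference, here is a sketch of what it typically looks like in docker-elk (treat the exact contents as an assumption and check your own checkout):

```yaml
# docker-elk/elasticsearch/config/elasticsearch.yml -- typical docker-elk
# defaults; contents are an assumption, verify against your checkout.
cluster.name: "docker-cluster"
network.host: 0.0.0.0

## X-Pack security is enabled, which is why the elastic/changeme login is required
xpack.security.enabled: true
```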
Logstash Logging Configuration
Logstash is the component of the ELK stack that receives the Flask logs:

Logstash → Elasticsearch → Kibana (query/analysis)

That’s why the Logstash configuration matters. Let’s set the Elasticsearch index to 'elk-logger' and collect the logs there.
$ vim /ELK-docker-python/docker-elk/logstash/pipeline/logstash.conf
```conf
input {
    tcp {
        port => 5000
    }
}

## Add your filters / logstash plugins configuration here

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        user => "elastic"
        password => "changeme"
        index => "elk-logger"
    }
}
```
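With the pipeline in place, you can smoke-test it before involving Flask by pushing a single newline-terminated line at the TCP input. A minimal sketch (the file name is hypothetical; with the tcp input's default codec, the line should show up as the message field of a document in elk-logger):

```python
# smoke_test.py -- hypothetical helper, not part of the repo: sends one line
# to the Logstash TCP input to confirm events flow through to Elasticsearch.
import socket

with socket.create_connection(("localhost", 5000)) as sock:
    sock.sendall(b"hello from the smoke test\n")
```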
Creating a Simple Flask App
requirements.txt
```
certifi==2020.4.5.1
click==7.1.2
elasticsearch==7.7.1
Flask==1.1.2
itsdangerous==1.1.0
Jinja2==2.11.2
MarkupSafe==1.1.1
python-dotenv==0.13.0
python-json-logger==0.1.11
python-logstash==0.4.6
python3-logstash==0.4.80
urllib3==1.25.9
Werkzeug==1.0.1
```
The requirements.txt above is included in the repository. Install the packages into a virtual environment (`pip install -r requirements.txt`) to make Flask and the logging modules available.
Flask Logger Configuration
```python
import logging

import logstash  # provided by the python-logstash / python3-logstash packages

log_format = logging.Formatter(
    '\n[%(levelname)s|%(name)s|%(filename)s:%(lineno)s] %(asctime)s > %(message)s')


def create_logger(logger_name):
    logger = logging.getLogger(logger_name)
    if len(logger.handlers) > 0:
        return logger  # logger already configured; avoid duplicate handlers
    logger.setLevel(logging.INFO)
    # Ship every record to the Logstash TCP input on port 5000 (see logstash.conf)
    logger.addHandler(logstash.TCPLogstashHandler('localhost', 5000, version=1))
    return logger
```
In the logger configuration, `addHandler(logstash.TCPLogstashHandler('localhost', 5000, version=1))` attaches a handler that sends every log record to the Logstash TCP input on port 5000.
```python
from flask import Blueprint
from elk_lib import elk_logger  # logger module from the elk-flask/elk_lib directory

elk_test = Blueprint('elk_test', __name__)

@elk_test.route('/', methods=['GET'])
def elk_test_show():
    logger = elk_logger.create_logger('elk-test-logger')
    logger.info('hello elk-test-logstash')
    return "hello world!"
```
This route writes a log entry each time it is handled, so hitting `/` sends a record through the Logstash pipeline. A minimal app.py wiring it up is sketched below.
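The module path below is an assumption based on the elk-flask/route directory in the project tree:

```python
# app.py -- a minimal sketch; 'route.elk_route' is a hypothetical module path
# based on the project structure shown earlier.
from flask import Flask

from route.elk_route import elk_test  # hypothetical import

app = Flask(__name__)
app.register_blueprint(elk_test)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```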
Checking in Kibana
Open Kibana at http://localhost:5601 and log in with the credentials set in the config files.
If you haven’t changed them, the username is elastic and the password is changeme.

After logging in, you can create an index pattern for elk-logger and verify that the log data is stored properly.
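You can also confirm from outside Kibana that documents landed in the elk-logger index, using the elasticsearch client pinned in requirements.txt. A rough sketch:

```python
# verify_index.py -- a rough sketch using the elasticsearch 7.x client from
# requirements.txt to confirm documents exist in the 'elk-logger' index.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200", http_auth=("elastic", "changeme"))
resp = es.search(index="elk-logger", body={"query": {"match_all": {}}}, size=5)

print("total hits:", resp["hits"]["total"]["value"])
for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("message"))
```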

$ docker-compose down

Shut down the ELK stack with the command above.