
I have my Django application proxied by nginx, and I am using Fluentd (a logging solution) to collect the logs written by nginx and write them to an output file. I am using Docker images for both nginx and Fluentd, composed in the same file (hence both share the same network, i.e. the default bridge).

The problem is that Fluentd is not continuously appending new logs to the output file as activity is logged in the nginx log file. When I checked the logs of the Fluentd container, I found this error for many buffer files:

2025-03-14 10:55:15 +0530 [warn]: #0 failed to flush the buffer. retry_times=1 next_retry_time=2025-03-14 10:55:17 +0530 chunk="63046a9b36189f2d58c2f1760b80336b" error_class=Errno::ENOENT error="No such file or directory - fstat"
  2025-03-14 10:55:15 +0530 [warn]: #0 suppressed same stacktrace
  2025-03-14 10:55:15 +0530 [warn]: #0 suppressed same stacktrace

But when I checked inside the container, the .log buffer files are present and the .log.meta files reference them properly; however, Fluentd does not seem to detect them, so they are not flushed. When I restart the container with `docker restart fluentd`, the new logs are appended, but then Fluentd tries to delete the appended buffer files and fails with the same error_class=Errno::ENOENT error="No such file or directory - fstat". I believe everything is configured correctly in docker-compose.yaml as well as in fluentd.conf; both are shown below. Note that Fluentd has full permissions on the file_logs folder, so it can read, write, and delete there.

fluentd.conf

<source>
  @type tail
  format json
  read_from_head false
  tag nginx.access
  path /fluentd/log/files/access.log  
  pos_file /fluentd/log/fluentd.pos  
</source>
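For reference, `format json` in the tail source means each line of access.log must be a single JSON object. A minimal sketch (the log line and its keys are only illustrative, not taken from my nginx `log_format`) of what the parser effectively does per line:

```python
import json

# A hypothetical access-log line, as nginx would write it when its
# log_format is configured to emit JSON (keys are illustrative).
line = '{"remote_addr": "127.0.0.1", "request": "GET / HTTP/1.1", "status": 200}'

# Fluentd's json parser does essentially this for each tailed line;
# a non-JSON line would produce a parse warning instead of an event.
event = json.loads(line)
print(event["status"])  # → 200
```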

<match nginx.access>
  @type file
  path /fluentd/file_logs/nginx_access.log
  chunk_limit_size 16
  append true
  <buffer>
    @type file
    path /fluentd/file_logs/nginx_access_buffer
    flush_mode immediate
  </buffer>
</match>
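As a side note on the buffer settings above: `flush_mode immediate` enqueues and flushes every chunk as soon as it is written. A commonly used alternative is interval flushing (the 5s value below is only illustrative):

```
<buffer>
  @type file
  path /fluentd/file_logs/nginx_access_buffer
  flush_mode interval
  flush_interval 5s
</buffer>
```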

docker-compose.yaml

services:
  nginx:
    image: nginx
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/logs:/var/log/nginx  
    restart: unless-stopped

  fluentd:
    build:
      context: ./fluentd
    image: fluentd-custom
    container_name: fluentd
    user: root
    volumes:
      - ./fluentd/fluentd.conf:/fluentd/etc/fluent.conf
      - ./nginx/logs:/fluentd/log/files
      - ./fluentd/file_logs:/fluentd/file_logs
      - ./fluentd/file_logs/nginx_access_buffer:/fluentd/file_logs/nginx_access_buffer
    ports:
      - "24224:24224"
    restart: unless-stopped
    depends_on:
      - nginx
    environment:
      - TZ=Asia/Kolkata
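For context, the last volume entry bind-mounts a directory that already lives inside the `./fluentd/file_logs` mount, so the buffer path is mounted twice. A sketch of the same volumes list without the nested mount (assuming the buffer directory only needs to be reachable through the parent mount) would be:

```
    volumes:
      - ./fluentd/fluentd.conf:/fluentd/etc/fluent.conf
      - ./nginx/logs:/fluentd/log/files
      - ./fluentd/file_logs:/fluentd/file_logs
```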

**Why is this happening? What is the issue?**

One thing I forgot to mention: I used a custom Fluentd image, and its Dockerfile is given below.

# Use the official Fluentd image as the base
FROM fluent/fluentd:v1.15-debian-1

# Switch to root user for installing dependencies
USER root

# Install necessary system dependencies
RUN apt-get update && apt-get install -y build-essential ruby-dev

# Install required Fluentd plugins
RUN gem install \
    fluent-plugin-parser \
    fluent-plugin-rewrite-tag-filter \
    fluent-plugin-prometheus

# Copy the custom Fluentd configuration file
COPY fluentd.conf /fluentd/etc/fluentd.conf

# Ensure correct permissions for Fluentd directories
RUN mkdir -p /fluentd/log && \
    chown -R fluent:fluent /fluentd /fluentd/etc /fluentd/log

# Switch back to the default Fluentd user
USER fluent

# Start Fluentd
CMD ["fluentd", "-c", "/fluentd/etc/fluentd.conf"]

I tried reducing `chunk_limit_size` in the conf file, thinking the buffer would flush more quickly once the limit was reached, but it didn't work: Fluentd is not detecting the buffer files in the first place, so it cannot flush them and I get the same ENOENT error. I have also tried granting permissions manually with `chmod -R 777 ./fluentd/file_logs/nginx_access_buffer`, but it still gives the same error for the .log buffer files that are referenced by the .log.meta files.
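To double-check the pairing described above, here is a small diagnostic sketch (the function name and demo file names are hypothetical) that lists a buffer directory and reports .log chunks whose .log.meta is missing, and metas whose chunk is gone:

```python
import os
import tempfile

def unpaired_buffer_files(buffer_dir):
    """Return (.log chunks missing a .log.meta, .log.meta files missing their chunk)."""
    names = set(os.listdir(buffer_dir))
    logs = {n for n in names if n.endswith(".log")}
    metas = {n for n in names if n.endswith(".log.meta")}
    missing_meta = sorted(n for n in logs if n + ".meta" not in names)
    missing_log = sorted(n for n in metas if n[: -len(".meta")] not in names)
    return missing_meta, missing_log

# Demo with a throwaway directory standing in for the real buffer path.
with tempfile.TemporaryDirectory() as d:
    for name in ("buffer.abc.log", "buffer.abc.log.meta", "buffer.def.log.meta"):
        open(os.path.join(d, name), "w").close()
    print(unpaired_buffer_files(d))  # → ([], ['buffer.def.log.meta'])
```

Running this against the mounted buffer directory on the host would confirm whether every .log.meta really has its .log chunk present at the paths Fluentd sees.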

I have also tried deleting all buffer files and restarting the container. With old buffer files present (when I didn't delete them), the old logs are appended to the output file after the restart; without them, the container still restarts fine. But whenever I do any activity on the site that creates logs in nginx, Fluentd collects them and should write them to the output file, yet I face the same ENOENT error again.

Naseer