25 Minute ELK Stack With Docker - Part 2

In the last article in this series we set up a functioning ELK stack in 25 minutes. But we left a few things to do before it could be called production-quality: some form of authentication, tuning ElasticSearch to prevent queue-limit problems with large quantities of data, and persistent storage so we don't lose our logs if our container gets deleted. Let's start with authentication. We'll do it the simple and easy way: running another container with the nginx web server to act as a reverse proxy in front of our unsecured Kibana service.

If you want to follow along with the git repository, part 2 is here: https://github.com/mattkimber/elkstack-2. I'll be assuming you already have the files from the previous instalment, so make sure you've read through that first before continuing.

Setting up a reverse proxy with nginx

Firstly, let's add nginx to the bottom of the docker-compose file:

nginx:
  image: nginx:1.9.6
  ports:
    - "8000:80"
  links:
    - kibana

Again we choose a specific version, and we link it to our kibana container (which is what it will be the proxy for). However, we forward port 80 from the container to port 8000 on the host. That choice is just an example; you may want to use custom ports if, say, you're running multiple copies of the stack on a single box.
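Before going further, it's worth checking the file still parses. Recent versions of docker-compose can validate and print the resolved configuration (if your version lacks this subcommand, starting the stack will surface syntax errors just as quickly):

docker-compose config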

At this stage we could start the cluster, but we'd just get the default "Welcome to nginx!" page. So we need to set up a configuration for the reverse proxy we want. Firstly, let's tell our nginx container to look for that configuration externally, by adding the following to the nginx section of the docker-compose file:

  volumes:
  - ./nginx/nginx.conf:/etc/nginx/nginx.conf

Now create the configuration file: mkdir nginx, then vim ./nginx/nginx.conf (editors other than vim are available). There are a ton of things which can be put in an nginx configuration file, but for now let's keep to the minimum necessary for nginx to start up and run without warnings or errors:

events {
  worker_connections 512;
}

http {
  server {
    listen 80;
    server_name _;

    location / {
      proxy_pass http://kibana:5601;
    }
  }
}

What does this configuration do?

  • Limits each nginx worker to 512 simultaneous connections. (nginx will complain at startup if the events block is missing)
  • Creates a server listening on port 80.
  • Sets the server name to '_', a catch-all so this server block answers all requests on port 80.
  • Proxies all requests through to Kibana on port 5601, which is reachable as http://kibana because the containers are linked.
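With the configuration in place we can give the proxy a quick smoke test. A minimal check, once the stack is up, is to ask nginx for the Kibana front page and look for an HTTP 200:

# Start the stack in the background
docker-compose up -d

# Request the Kibana front page via the proxy; expect an HTTP 200
curl -I http://localhost:8000/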

Now that we have nginx up and running, we can remove the port forwards from Kibana and ElasticSearch to keep things secure. Delete the ports: - "9200:9200" and ports: - "5601:5601" lines from the kibana and elasticsearch containers. (Don't worry about Kibana and Logstash losing sight of ElasticSearch; because they're linked containers, they can still reach each other.) We'll leave the Logstash ports open for now so we can still pass data to it.
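After the deletions, those two services should look something like this. (A sketch only: the image tags below are placeholders, so keep whichever versions you pinned in part 1.)

# Image tags are illustrative - use the versions from your part 1 file
elasticsearch:
  image: elasticsearch:2.1.0

kibana:
  image: kibana:4.3.0
  links:
    - elasticsearch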

Adding authentication

This is all well and good, but what about authentication? For this we'll need to set up an htpasswd file and tell nginx to use it. Creating the file is beyond the scope of this tutorial, but there are a couple of options: install the Apache tools and use the htpasswd utility, use one of the many online htpasswd generators, or copy the following file, which will give you one user called "test" with password "test":

test:$apr1$jwmuxXbi$i56SQOlmd8HtxH5DQHNib.
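If you'd prefer to generate your own credentials, either of the following will produce a compatible file, assuming you have apache2-utils or openssl installed:

# Option 1: htpasswd from the Apache tools (prompts for a password)
htpasswd -c nginx/htpasswd test

# Option 2: openssl, producing the same apr1-format hash
printf 'test:%s\n' "$(openssl passwd -apr1)" > nginx/htpasswd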

Create or copy the file to nginx/htpasswd. Next we'll need this accessible from the nginx container, so update the volumes section of the nginx service in our docker-compose file so it reads like this:

  volumes:
  - ./nginx/nginx.conf:/etc/nginx/nginx.conf
  - ./nginx/htpasswd:/etc/nginx/htpasswd

Now we need to update nginx.conf to tell nginx to look at the password file. Alter the location block inside http/server to the following:

    location / {
      auth_basic "This site requires a user name and password";
      auth_basic_user_file /etc/nginx/htpasswd;
      proxy_pass http://kibana:5601;
    }

Notice that we point nginx at /etc/nginx/htpasswd, the path we mounted the volume at inside the container, not the path on the host.

Now we can run the cluster using docker-compose up, paying attention to any untoward log messages. Assuming things have gone well, when we connect to localhost:8000 the browser will greet us with an authentication prompt.

Enter the user name and password you put in the htpasswd file (test/test if you're using my example) and you're in! We can now open up port 8000 to the outside world, safe in the knowledge that not just anyone can get into our logs... at least, once we set a better password than "test".
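You can also confirm from the command line that the proxy is doing its job. These requests assume the example test/test credentials from earlier:

# Without credentials: expect HTTP 401 Unauthorized
curl -I http://localhost:8000/

# With credentials: expect HTTP 200 from Kibana
curl -I -u test:test http://localhost:8000/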

Next time: Productionising the system with ElasticSearch tuning and persistent data volumes.

Image by Matt Kimber CC-SA 3.0