Docker - Terasology headless server in a container


Docker Dude!
So, to fulfil my already 'earned' nickname 'Docker Dude' in this community, I want to introduce those of you who are interested in this kind of thing to Docker and the container image I am providing.

Here is some literature:
What is Docker?
Where can you run Docker images/container?

Now for the docker-terasology project on Docker Hub and how to set up a server.

The easiest approach is to use docker-compose: you just need a docker-compose.yml configuration file and a few simple console commands. (I don't know how this is done with Docker on Windows; I'll only cover the Linux and Unix (macOS) parts.)

Here is an example docker-compose.yml (docker-compose file format version 1, so the service name sits at the top level):
  terasology:
    image: qwick/terasology:latest
    ports:
      - 25777:25777
    volumes:
      - /path/to/host/dir/where/to/store/data:/terasology/server
    restart: always
The only bit you have to change is where you want to store the data Terasology generates: the world, the server configuration, and so on. You really want that data somewhere you can put your hands on, for configuration tinkering or just backup purposes.
If you want to stick to a specific version, just change 'image: qwick/terasology:latest' to 'image: qwick/terasology:<version-tag>'.
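As a sketch, pinning a version only changes the image line; the tag below is made up, so check the tag list on the Docker Hub page for the real ones:

```yaml
terasology:
  image: qwick/terasology:v1.0.0   # hypothetical tag, see Docker Hub for actual tags
```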

To start the server and watch the logs, just type docker-compose up in the directory of your docker-compose.yml. docker-compose parses your config, pulls the specified image, and starts it; it also binds the output of the running executable to your current shell, so you see everything that happens on start. (CTRL+C quits and also stops the current container.)

If you don't want the output because you have already finished configuration, you can start the container with docker-compose start instead.

You want to stick to the latest image? Well, updates are only made manually, so when you notice that a new image is available, pull it with docker-compose pull and bring your current service down with docker-compose down (which also deletes the current container; no fear, all your data still resides in the path given under volumes in your docker-compose.yml). A docker-compose create creates a new container with the same configuration, and a docker-compose up or docker-compose start will start that newly created container with your current server configuration.
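Put together, the update steps above look like this, assuming the v1 docker-compose CLI and that you run the commands in the directory containing your docker-compose.yml:

```shell
docker-compose pull     # fetch the newest image referenced in docker-compose.yml
docker-compose down     # stop and delete the old container (host volume data is untouched)
docker-compose create   # recreate the container from the freshly pulled image
docker-compose start    # start it in the background with your existing configuration
```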

Tada! Latest image working!
(Except if the changes were so deep that your Terasology config is outdated or the format of the world chunks has changed and the game is unable to load... :eek: ... but for that you should always read the project news. :D)

Okay, so much for the short introduction, of course you can dive deeper into docker, but that's work for yourself - I'm able to answer questions, but I won't write a deeper howto. ;)

What's the benefit of docker?
  • You do not have to install a Java 8 VM on your server; it is encapsulated inside the Docker image
  • Updates are as easy as pulling a new image and recreating containers
  • Multiple servers on one machine? Just copy the config, change the ports and volumes, and run them.
  • AWS and Azure are no problem either, so there is no need to rent a root server
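For the multiple-servers point, a second compose file could look like this sketch (service name, host port, and path are made up; only the host side of the port and volume mappings needs to differ):

```yaml
terasology2:
  image: qwick/terasology:latest
  ports:
    - 25778:25777   # different host port, same container port
  volumes:
    - /path/to/other/host/dir:/terasology/server
  restart: always
```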
Well, that's it for now. I'm happy to answer questions. :D


New Member
Hey great "Docker Dude" @qwc :geek: Nice to meet you!!!

I'm applying for GSoC this year; here is my draft proposal: in short, we are working on a metrics & error reporting system. I set up a prototype locally, it's here. We are thinking about integrating the whole server system into a Docker image! However, we don't know how complicated that is to do. Can you give us some advice? Feel free to leave comments :) Thank you


Docker Dude!
Hi, nice to have you on board. :)

I had a quick read-through of your proposal; it looks nice and interesting.
I also had plans to tinker a bit with the ELK stack, but didn't have the time. :(
Maybe I'll get to it some time. :cautious:

So let's look at your server idea.
Basically you have two options. As I've seen in your document, you use the complete ELK stack on your server.
  1. Put your server configuration with the complete ELK stack into a single docker image, just as you would when installing everything the normal way. (may be a bunch of work)
  2. Check which parts of the ELK stack are already dockerized, put your own software into its own docker image, and feed/link the other parts with your data. (may be very easy to set up)
I would go for 2. :D
It's far easier to put small portions of a distributed application into their own docker images and link them together than to try to set up monolithic all-in-one images.
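To illustrate option 2, here is a rough compose v1 sketch wiring the official Elasticsearch/Logstash/Kibana images to a hypothetical image of your own component (image names and tags below are assumptions, not something I have tested):

```yaml
elasticsearch:
  image: elasticsearch:2
logstash:
  image: logstash:2
  links:
    - elasticsearch
kibana:
  image: kibana:4
  ports:
    - 5601:5601
  links:
    - elasticsearch
reporting:
  image: you/metrics-reporting   # hypothetical: your own dockerized part
  links:
    - elasticsearch
```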

Docker is quite easy to learn; I would estimate you need at most a day to put your software part into a docker image, and maybe a second day to set up the whole ELK stack plus your part.
Good knowledge of Linux and the standard distributions assumed. ;)



Docker Dude!
A small reminder for all those who may use my two 'official' servers from time to time to test things.

If YOU want other modules activated in the configuration, just drop me a note and I can change the config and restart. It's not much of a hassle to do that. I can do it almost any time I have my laptop around (maybe also from my mobile phone... :eek:), except at night in the CET zone. :p

(CET = Central European Time, sometimes also called MET (Middle...) o_O)