Containers are hard. Not so much the creation of containers, but what comes after: the architecture of the app and the actual management of the containers. I encourage you to check out Azure Container Service, Amazon EC2 Container Service (ECS), or Kubernetes for both hosting and deploying containers, and do not forget about security, logging, and metrics for your containers. For now, though, we will talk about how to build containers for an actual application.
What is in a container?
Containers are supposed to be small, lightweight, portable, stateless, and in general easy to build and update. That is only the case if we build them that way.
A container hosting the front-end, mid-tier, and database of an app is certainly not lightweight or easy to update. Every time I want to update something on the front-end, I would have to rebuild the image including the database, which does not make any sense.
Furthermore, if the application needs more capacity on the front-end (more web servers) and everything is running in one VM/container, it is impossible to scale only the front-end. This is why applications have been broken down into their respective tiers for years now, and the same approach applies to containers.
Containers should host a single function, even a single binary, depending on the application. This also enables developers to test containers in isolation, for example after rebuilding just the web-server container.
Bring it together
Docker has a way of “bundling” containers together into a stack, and it uses Docker Compose (the “docker-compose” command) for this. Docker Compose is available on both Windows and macOS after you install Docker.
A composition is controlled via a YAML file; it can deploy multiple containers as one stack and ensures that the containers inside that stack can communicate with each other.
Docker Compose expects a “docker-compose.yml” file in the current directory. In such a file we can define the following:
A service called “wordpress” and a service called “mysql”
“wordpress” service is based on the docker image “wordpress”, “mysql” on the “mysql” image, both exist on Docker Hub
The “wordpress” service will have a port forwarded from the host to the container
a. Host port 8080 will be forwarded to container port 80
Both services will be injected with an environment variable
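A minimal docker-compose.yml matching the list above could look like the following sketch. The environment variable names come from the official wordpress and mysql images on Docker Hub; the password value is a placeholder assumption:

```yaml
version: "3"

services:
  wordpress:
    image: wordpress            # official WordPress image from Docker Hub
    ports:
      - "8080:80"               # host port 8080 forwarded to container port 80
    environment:
      WORDPRESS_DB_HOST: mysql  # reach the mysql service by its service name
      WORDPRESS_DB_PASSWORD: example

  mysql:
    image: mysql                # official MySQL image from Docker Hub
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Because both services live in the same stack, the “wordpress” container can reach the database simply via the hostname “mysql”.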
In PowerShell, we can now change to the directory where we created the .yml file and execute “docker-compose up”.
Docker will read the .yml file and act on it. If the images have not been downloaded before, it will pull the WordPress and MySQL Linux container images. It will also run the commands the images are configured to execute on start, which means that as soon as these containers have been created we are greeted by the initial WordPress splash screen.
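As a shell session, the steps above look roughly like this (the directory name is an assumption):

```shell
# Change to the directory containing docker-compose.yml
cd ./wordpress-stack

# Pull images if needed, create the stack, and stay attached to its log output
docker-compose up
```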
As PowerShell will be waiting in attached mode, we will have to stop the containers by hitting Ctrl+C.
Alternatively, we could launch the container stack by calling “docker-compose up -d”, which starts the stack in detached mode. After we have verified that the application works, we then need to call “docker-compose stop” to stop the containers.
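The detached workflow, sketched as commands (docker-compose also offers “ps” to inspect the stack and “down” to remove it, which are worth knowing):

```shell
# Start the stack in the background (detached mode)
docker-compose up -d

# List the running containers of this stack
docker-compose ps

# Stop the containers once testing is done
docker-compose stop

# Optionally remove the containers and the stack's network entirely
docker-compose down
```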
Docker Compose is awesome for local development. I can spin up, test, tear down, change, and spin up again to my heart’s content until I think the application is ready for the next stage, at which point a lot of things will change: environment variables will differ, and storage mount points and forwarded ports might change. These are all things to consider when using docker-compose.
Docker themselves do not see an issue with using docker-compose all the way from development to production, but they have listed a few things to consider in their documentation.
One noteworthy thing is that the docker-compose command itself targets a single host, while the same docker-compose.yml file can also be used for a multi-host / swarm deployment.