NOTE: this somehow turned into a small rant about PHP, but that better explains why I like Golang and Docker setups so much
Everyone who has ever worked with PHP web development knows how painful it can be to set up your development environment. There are loads of problems, php.ini configurations being just one of them. One of the current projects I work with has ~200 lines of README.md just to set up the development environment, and it didn’t even work completely for me, since I work on Linux while everyone else mostly runs Windows.
I gotta add that modern projects like Laravel fix this by having ready-to-go Vagrant VMs, but that is beside the general point.
The beauty of Go is that everything compiles to a single binary. The only thing you need is the Go toolchain installed on your system and you are ready to compile and run.
For example, a REST API written in Go can be up and running simply by running
go build -o server && ./server
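To make that concrete, here is a minimal sketch of what such a server could look like, using only the standard library (the /health route and the port are made up for illustration, not taken from any real project):

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// A single handler is enough to show the point: the whole app
	// compiles into one self-contained binary.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}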
The only problem with this setup is that if I want to run a Postgres database, I need to install and set it up manually. That is generally very platform-specific, hard to document, and it lowers every developer’s velocity.
Docker is great in that you can create platform-independent project setups using Docker images. You can use bare base images like Ubuntu or Alpine Linux, or prebuilt software images like the Postgres or Go images.
And with all this at hand, you can even automate the leftover parts of setting up multiple images with docker-compose, using some YAML.
Firstly, we define the version of docker-compose that we are gonna use, in a file called docker-compose.yml.
This defines what version of docker-compose we are gonna use, basically meaning what features we have at our disposal. It’s better to lock this to a specific version, otherwise you might run into situations where you are running a slightly newer version of docker-compose and can use the shiny new feature, while your colleague is having some weird issues that you can’t explain.
It’s done like this:
version: "3.7"
After that, we can define the services that we are gonna use. I like to think of services as the servers that we are gonna need. Firstly, let’s define our database service; in this case that would be our Postgres database.
version: "3.7"
services:
  db:
    image: "postgres:13.1-alpine"
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: 1234
      POSTGRES_DB: mydatabase
To break this down simply:
db - the name of the service. This will also be the hostname of the service on the internal Docker network, so if something needs to connect to this service from the same Docker network, it would use db as the host.
image - the Docker image to use for the service.
restart - set to always, which ensures that the container restarts if it crashes.
ports - which ports should be exposed outside the internal Docker network. This maps the container’s port 5432 to port 5432 on our host system, allowing us to connect to Postgres as if it were running on the host system.
environment - key-value pairs that are passed to the environment of the image. They are used as configuration parameters for the image (see the connection sketch right after this list).
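As a quick illustration of how these values come together, this is roughly how a Go program could connect to the db service. The driver choice (github.com/lib/pq) and sslmode=disable are my assumptions; from inside the Docker network the host is db, while from your host machine it would be localhost:5432:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver; an assumption, any driver works
)

func main() {
	// Matches the environment block of the db service above. Inside the
	// Docker network the hostname is the service name, "db".
	dsn := fmt.Sprintf(
		"host=%s port=%d user=%s password=%s dbname=%s sslmode=disable",
		"db", 5432, "user", "1234", "mydatabase",
	)

	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected to postgres")
}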
Next, we add the backend service itself:
version: "3.7"
services:
  db:
    image: "postgres:13.1-alpine"
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: 1234
      POSTGRES_DB: mydatabase
  backend:
    build:
      context: ./
      dockerfile: local.Dockerfile
    container_name: backend
    volumes:
      - ./:/go/src/app
    ports:
      - "8080:80"
    depends_on:
      - "db"
Breaking down the backend part:
build - instructions on how to build the image for this service.
context - the starting directory for the Docker build.
dockerfile - the Dockerfile to use for building the image; we are gonna get to that.
container_name - the name the container is gonna get.
volumes - which host directories should be mounted inside the container, and where. Works like host:container, e.g. ./:/go/src/app means that the current directory is gonna be mounted inside the container at /go/src/app.
ports - the same as last time, except this time we are mapping container port 80 to host port 8080.
depends_on - lets us define which other services are needed before this one can start. In this case, we need the db service running before we can run ours (see the sketch after this list for a caveat).
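One caveat: depends_on only waits for the db container to be started, not for Postgres inside it to actually accept connections, so the backend should be prepared to retry its first connection. A rough sketch of such a helper, assuming database/sql and living next to the backend’s main (the function name and timings are made up):

package main

import (
	"database/sql"
	"fmt"
	"time"
)

// waitForDB keeps pinging the database until it responds or we give up.
// depends_on only guarantees the db container has started, not that
// Postgres is ready to accept connections.
func waitForDB(db *sql.DB, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = db.Ping(); err == nil {
			return nil // database is ready
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("database not reachable after %d attempts: %w", attempts, err)
}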
I like to keep my local setup fast, and I like to have hot reloads, so I don’t have to run commands every time I change one line of code. This is where the special sauce of this docker-compose setup comes in.
Firstly, we start by extending the Go 1.16 Alpine image, installing some dependencies, and setting the working directory.
FROM golang:1.16-alpine
RUN apk update && apk add git make
WORKDIR /go/src/app
The next part is the special sauce. The usual thing to do is copy the Go files, install dependencies, and then build the thing. That is great for production builds with multi-stage builds, but for a local setup we don’t need that. We want to change a few lines and see the changes.
For that, we are gonna use github.com/githubnemo/CompileDaemon. It’s a simple file watcher that recompiles your Go files by running some command whenever they change.
We add it to the Dockerfile by adding
RUN go get github.com/githubnemo/CompileDaemon
After that, we copy our files into the container, expose port 80 for the server, and run CompileDaemon:
COPY /. ./
EXPOSE 80/tcp
ENTRYPOINT CompileDaemon --build="go build -o ./bin/backend main.go" --command="./bin/backend serve"
To explain the CompileDaemon command: the --build flag is a command that is run every time your .go files change, and --command is the command that runs to start the program.
FROM golang:1.16-alpine
RUN apk update && apk add git make
WORKDIR /go/src/app
RUN go get github.com/githubnemo/CompileDaemon
COPY /. ./
EXPOSE 80/tcp
ENTRYPOINT CompileDaemon --build="go build -o ./bin/backend main.go" --command="./bin/backend serve"
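For reference, a bare-bones main.go that would work with this entrypoint could look roughly like this, similar to the sketch at the start of the post but reacting to the serve argument. The subcommand handling and the handler are assumptions based on the ENTRYPOINT above, not the author’s actual code:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// The ENTRYPOINT runs "./bin/backend serve", so the binary is expected
	// to accept a "serve" argument. This minimal argument check is only an
	// illustration.
	if len(os.Args) < 2 || os.Args[1] != "serve" {
		log.Fatal("usage: backend serve")
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from the backend container")
	})

	// Listen on port 80, which the Dockerfile exposes and docker-compose
	// maps to port 8080 on the host.
	log.Fatal(http.ListenAndServe(":80", nil))
}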
With a file structure like this:
- main.go
- local.Dockerfile
- docker-compose.yml
Running docker-compose up -d
would start your Go server together with Postgres, and the server can be accessed at port 8080.