Microservices with Spring Boot, Docker and Kubernetes Part 2

WeiTang Lau
Oct 23, 2020

In Part 1 of the guide, I explained and demonstrated how to set up a simple Spring Boot application adopting the microservice architecture. The code can be found here. The next step is to use Docker and Kubernetes to deploy and scale our application.

Why Docker and Kubernetes for Microservices?

Managing and scaling microservices manually is tedious. If you followed Part 1 of the guide, you would have realised that starting the applications by hand is time-consuming (even in our small example). Furthermore, we could not start multiple instances of the same microservice because of port conflicts, unless we changed the port before each start-up.

Docker and Kubernetes handle this hassle for us. For those who are new to them, Docker is a technology that packages applications into containers that can be deployed on any platform (Windows, Mac or Linux) without installing the relevant dependencies. Kubernetes is a tool that lets developers manage and scale these containers easily.

Therefore, this guide will first demonstrate how to containerise our Spring Boot microservices. After that, we will see how to integrate Kubernetes into the project.

Docker and Kubernetes Installation

For our demonstration we will be using Docker Desktop for both Docker and the Kubernetes cluster. You can choose an alternative such as Minikube for Kubernetes based on your preference.

To verify that you have Docker and Kubernetes installed, simply open your terminal and run docker --version and kubectl version respectively.

Docker Integration

The architectural diagram below is similar to the one in Part 1. The only difference is that each of our microservices is now containerised. Furthermore, we will be using docker-compose to start the whole application with a single command.

Architectural Diagram — Containerisation

Dockerfiles

In the root directory of the project, create a file named Dockerfile. This is needed to create a Docker image of the microservice: it specifies the instructions Docker follows to build the image.

FROM openjdk:8-jdk-alpine
EXPOSE 8761
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} app.jar
CMD ["java", "-jar", "app.jar"]

FROM — The base image of the build process. Our image is built on top of this base image.

EXPOSE — The port on which the container listens.

ARG — A build-time argument. You can think of it as a variable of the Dockerfile.

COPY — Copies the jar file matched by JAR_FILE into the container as app.jar.

CMD — The command to run when the container starts.

That’s all you need to create a Docker image of your Spring Boot application. Before building the image, make sure that the jar file exists. It can be created using Gradle’s bootJar task by running ./gradlew bootJar. This produces the executable jar of our microservice.

To build the image, open your terminal at the root directory of the Service Discovery Server application and run docker build -t service-discovery-server . (note the . at the end of the command). To run the container, use docker run -d -p 8761:8761 --name="service-discovery-server" service-discovery-server.

Repeat this step for all the other microservices. After that, running docker images should give you the result shown below.
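To avoid typing the build command five times, the per-service builds can be scripted. This is a minimal sketch that assumes each microservice lives in a sub-directory named after its image (adjust to your repo layout); the echo makes it a dry run that only prints the commands, so remove it to actually build:

```shell
# Hypothetical layout: one sub-directory per microservice, named after its image.
services="service-discovery-server api-gateway payment-service order-service customer-service"

for svc in $services; do
  # Remember to build each jar first with ./gradlew bootJar.
  cmd="docker build -t $svc ./$svc"
  echo "$cmd"   # dry run: remove 'echo' to execute the build
done
```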

Docker images created

Docker-Compose

With the Docker images built, we are ready to orchestrate the creation of the containers. There are two ways to start them. The first is via the command line using the docker run command. However, the command line is not ideal when you have multiple containers with complex start-up instructions. The second is Docker Compose: a file that contains a set of instructions for starting the applications.

In the root directory of the repo (although it can live anywhere), create a docker-compose.yml file and copy the instructions shown below. In the code below, you can see that we have multiple services, each containing the relevant information for that service. The important part is to include the links and depends_on fields to indicate the dependencies between microservices.

version: "3.8"
services:
  service-discovery-server:
    container_name: service-discovery-server
    image: service-discovery-server:latest
    ports:
      - "8761:8761"
  api-gateway:
    container_name: api-gateway
    image: api-gateway:latest
    environment:
      - eureka.client.serviceUrl.defaultZone=http://service-discovery-server:8761/eureka/
    ports:
      - "8080:8080"
    depends_on:
      - service-discovery-server
    links:
      - service-discovery-server
  payment-service:
    container_name: payment-service
    image: payment-service:latest
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - eureka.client.serviceUrl.defaultZone=http://service-discovery-server:8761/eureka/
    ports:
      - "8081:8081"
    deploy:
      restart_policy:
        condition: on-failure
    depends_on:
      - service-discovery-server
    links:
      - service-discovery-server
  order-service:
    container_name: order-service
    image: order-service:latest
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - eureka.client.serviceUrl.defaultZone=http://service-discovery-server:8761/eureka/
    ports:
      - "8082:8082"
    deploy:
      restart_policy:
        condition: on-failure
    depends_on:
      - service-discovery-server
    links:
      - service-discovery-server
  customer-service:
    container_name: customer-service
    image: customer-service:latest
    environment:
      - SPRING_PROFILES_ACTIVE=docker
      - eureka.client.serviceUrl.defaultZone=http://service-discovery-server:8761/eureka/
    ports:
      - "8083:8083"
    deploy:
      restart_policy:
        condition: on-failure
    depends_on:
      - service-discovery-server
      - order-service
      - payment-service
    links:
      - service-discovery-server
      - order-service
      - payment-service

To start up the application, simply run docker-compose up, or docker-compose up -d for detached mode. Similarly, to stop the application run docker-compose down. After the application has successfully started, visit http://localhost:8080/api/customers to verify that it is running.

Kubernetes Integration

Kubernetes is a technology that automates the deployment, scaling, and management of containerized applications. In this guide, I will demonstrate how we can take advantage of Kubernetes for our application. In the architectural diagram below, the whole application now lives within the Kubernetes cluster. Kubernetes has a component called a pod, the smallest deployable unit in Kubernetes. A pod contains one or more container instances. As you can see from the diagram, our Payment pod contains two payment microservice instances.

Now that we have all the ingredients necessary for Kubernetes, we will start with the Kubernetes integration.

Architectural Diagram — Kubernetes

Kubernetes Configuration and Project Setup

Before we start creating the files for Kubernetes, we have to make some adjustments to our microservices. We have to import the Spring Cloud Kubernetes dependency using implementation 'org.springframework.cloud:spring-cloud-starter-kubernetes' in build.gradle.
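For context, the dependency goes in the dependencies block of each service's build.gradle. This sketch assumes the Spring Cloud dependency management (BOM) from Part 1 is already applied, so no version needs to be pinned here:

```groovy
dependencies {
    // Spring Cloud Kubernetes starter; version is resolved by the Spring Cloud BOM
    implementation 'org.springframework.cloud:spring-cloud-starter-kubernetes'
}
```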

In application.yml, add the configuration shown below. This disables the Kubernetes features by default.

spring:
  application:
    name: payment-service
  cloud:
    kubernetes:
      enabled: false

server:
  port: 8081
  forward-headers-strategy: framework

eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_SERVER:http://localhost:8761/eureka}

When the application is deployed on Kubernetes, we want the Kubernetes-specific settings and configuration to apply. This can be achieved using Spring profiles. Furthermore, since Kubernetes comes with its own service discovery mechanism, we will disable Eureka when deployed on Kubernetes.

For each of the microservices except the Service Discovery Server, create a docker-kubernetes.yml file in src/main/resources and add the following configuration. This enables Kubernetes and disables Eureka. With the Kubernetes profile added, we have to rebuild the Docker images as described in the Docker section.

spring:
  cloud:
    kubernetes:
      enabled: true

eureka:
  client:
    enabled: false

Kubernetes Files

Create a folder called kubernetes in the root directory. This folder will contain all the files needed to start the application in Kubernetes, similar to docker-compose.yml. In the folder, create a yaml file for each of the services, e.g. api-gateway.yml. For simplicity, I will demonstrate the API Gateway only, as the other services have a similar setup.

In the api-gateway.yml file, copy the configuration below. In our example, we are using two types of Kubernetes resources, Deployment and Service.

Deployment — Creates the pods with the relevant metadata, based on the template specified. This is where we can specify the number of pod replicas.

Service — Exposes the deployment so that it can be reached by other pods in the cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway
  labels:
    app: api-gateway
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
        - name: api-gateway
          image: api-gateway:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
  namespace: default
spec:
  selector:
    app: api-gateway
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: LoadBalancer

For the other services that do not need load balancing, simply change spec.type under Service to NodePort.
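As an illustration, the Service section for payment-service might look like this, following the same pattern as the api-gateway Service with the type swapped (the port matches the 8081 used for payment-service earlier):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: payment-service
  namespace: default
spec:
  selector:
    app: payment-service
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
  type: NodePort
```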

Creating the Kubernetes resources is as simple as entering the command below. Make sure that you cd into the kubernetes folder first.

kubectl create -f api-gateway.yml -f customer-service.yml -f order-service.yml -f payment-service.yml

Now you should be able to see the result when you visit http://localhost:8080/api/customers. If you try the other ports, you will realise that they are not exposed outside of the Kubernetes cluster. This provides a layer of security and a single entry point (the API Gateway).

You can view the resources by running kubectl get all . This will return all the resources that are running in the cluster.

Kubernetes resources running in the local Kubernetes Cluster

Scaling is extremely simple. Simply change the value of replicas under Deployment and run kubectl apply -f FILE_NAME. For example, to scale customer-service to two instances, I modified replicas to 2 and ran kubectl apply -f customer-service.yml. You will see two customer-service pods, and the change reflected in deployment.apps/customer-service.
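Alternatively, you can scale imperatively with kubectl scale, without editing the yaml at all (note that a later kubectl apply of the file will reset the replica count). The sketch below only prints the command as a dry run; remove the echo to apply it against your cluster:

```shell
# Imperative scaling: no edit to customer-service.yml needed.
deployment="customer-service"
replicas=2
cmd="kubectl scale deployment $deployment --replicas=$replicas"
echo "$cmd"   # dry run: remove 'echo' to execute against the cluster
```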

Kubernetes resources after scaling

Finally, to shut down the application, run the command below. This command will delete all the resources.

kubectl delete -f api-gateway.yml -f customer-service.yml -f order-service.yml -f payment-service.yml

Summary

In this guide, we went through how to integrate Docker and Kubernetes into the microservice Spring Boot application developed in Part 1. We first created Docker images for each of our microservices and used Docker Compose to help us create the containers. We also went through how to use yaml files to create the Kubernetes resources that start up the application.


WeiTang Lau

An undergraduate Computer Science student at National University of Singapore interested in Backend Development