
How to Use Kubernetes for Microservices

Bravin Wasike

April 28, 2025


Microservices have changed the way developers build and deploy applications, making it easier to scale and manage complex systems. Instead of relying on a single, monolithic codebase, microservices break applications into smaller, independent services that work together. This approach improves flexibility, but it also introduces challenges—like managing service communication, scaling efficiently, and handling failures.

Kubernetes addresses these challenges. As a container orchestration platform, it helps teams deploy, scale, and manage microservices efficiently. It takes care of scheduling, networking, load balancing, and failover, allowing developers to focus on building features rather than managing infrastructure.

In this guide, you'll learn how to deploy microservices in Kubernetes, set up networking between services, scale them effectively, and ensure high availability.

What Is Kubernetes?

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Originally developed by Google, it's now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes lets developers manage complex application infrastructure while maintaining reliability, scalability, and high availability.

Key Features of Kubernetes

  • Automated deployment and scaling: Dynamically manages workloads based on resource demands.
  • Load balancing and service discovery: Automatically distributes traffic across microservices.
  • Self-healing capabilities: Automatically restarts containers that fail.
  • Declarative configuration: Uses YAML files to define desired states for applications.
  • Secret and configuration management: Securely stores sensitive information, such as API keys and credentials.

What Are Microservices?

Microservices is an architectural style in which developers build applications as a collection of small, independent services that communicate via APIs. Each microservice focuses on a specific business capability, allowing teams to develop, scale, and maintain applications more efficiently. Key characteristics include:

  • Independently deployable: Teams can deploy and update each microservice separately.
  • Loosely coupled: Services interact with minimal dependencies, increasing flexibility.
  • Technology agnostic: Developers can use different programming languages and frameworks to build microservices.
  • Resilient & scalable: Microservices help isolate faults and simplify scaling.

How to Use Kubernetes for Microservices

Step 1: Setting Up a Kubernetes Cluster

Before deploying microservices, you need a Kubernetes cluster to orchestrate and manage them.

There are several ways to set up a Kubernetes cluster, depending on your development and production needs:

  • Minikube (for local development)
  • Kubernetes on Docker Desktop
  • Managed Kubernetes services such as Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or Azure Kubernetes Service (AKS), as shown below
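With a managed service, creating a cluster is typically a single CLI command. For example, on GKE (a sketch assuming the gcloud CLI is installed and a project is configured; my-cluster and the region are placeholders):

gcloud container clusters create my-cluster --num-nodes=3 --region=us-central1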

To install and start Minikube on Linux:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start

Alternatively, create a cluster with Kind (Kubernetes in Docker):

kind create cluster --name microservices-demo

To check if your cluster is running:

kubectl version --client
kubectl get nodes

Step 2: Containerizing Microservices for Kubernetes

Before deploying microservices to Kubernetes, you need to containerize them using Docker. Create a simple Node.js microservice (server.js) and containerize it:

const express = require('express');
const app = express();

app.get('/api', (req, res) => {
  res.json({ message: 'Hello from Microservice' });
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => console.log(`Microservice running on port ${PORT}`));
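The Dockerfile below copies package.json and runs npm install, so the project also needs a package.json declaring Express. A minimal sketch (the Express version shown is an assumption; pin whichever release you actually use):

{
  "name": "my-microservice",
  "version": "1.0.0",
  "main": "server.js",
  "dependencies": {
    "express": "^4.18.2"
  }
}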

After that, create a Dockerfile for the Node.js microservice. The following Dockerfile defines how to package it:

# Use official Node.js runtime as base image
FROM node:18-alpine

# Set working directory
WORKDIR /app

# Copy package.json and install dependencies
COPY package.json .
RUN npm install

# Copy application files
COPY . .

# Expose the application port
EXPOSE 3000

# Command to run the application
CMD ["node", "server.js"]

Build and push the image to a container registry (replace myregistry with your own registry name):

docker build -t myregistry/my-microservice:latest .
docker push myregistry/my-microservice:latest
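If you're developing locally with Minikube rather than pushing to a registry, you can instead load the image directly into the cluster; this matches the unprefixed my-microservice:latest image that the deployment below references:

docker build -t my-microservice:latest .
minikube image load my-microservice:latest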

Step 3: Creating a Kubernetes Deployment With Microservices Containers

Once your microservice is containerized and available to the cluster, deploy it with a Kubernetes deployment. A deployment ensures that the required number of replicas (instances) of a microservice is always running; if a pod fails, the deployment automatically replaces it to maintain availability.

Below is a deployment YAML file (microservice-deployment.yaml) that runs a Node.js-based microservice in Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-microservice
  template:
    metadata:
      labels:
        app: my-microservice
    spec:
      containers:
        - name: my-microservice
          image: my-microservice:latest
          ports:
            - containerPort: 3000

Breaking Down the Deployment YAML:

  • replicas: 3: Ensures that three instances of the microservice run at all times. If one fails, Kubernetes will automatically start a replacement.
  • selector.matchLabels: Ensures that only pods with the label app: my-microservice are managed by this deployment.
  • template.metadata.labels: Applies labels to the pods created by this deployment.
  • containers.image: Specifies the container image that Kubernetes will pull from a registry.
  • ports.containerPort: Defines the port (3000) exposed by the container inside the cluster.

Apply the deployment:

kubectl apply -f microservice-deployment.yaml

Verify the deployment:

kubectl get deployments

You should see output similar to the following:

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
microservice-deployment   3/3     3            3           5m

To check the status of the running pods, use:

kubectl get pods

This lists the three pod instances managed by the deployment:

NAME                                        READY   STATUS    RESTARTS   AGE
microservice-deployment-6cbb59d4c7-7n2df    1/1     Running   0          2m
microservice-deployment-6cbb59d4c7-k4q5r    1/1     Running   0          2m
microservice-deployment-6cbb59d4c7-xv6bn    1/1     Running   0          2m

Step 4: Creating a Kubernetes Service

After you deploy a microservice, it needs a way to communicate with other services or to be accessed within the cluster. In Kubernetes, a service provides a stable network endpoint that routes traffic to the appropriate set of pods even if pod instances are restarted or replaced.

Below is a service YAML file (my-microservice-service.yaml) that exposes the previously deployed microservice:

apiVersion: v1
kind: Service
metadata:
  name: my-microservice-service
spec:
  selector:
    app: my-microservice
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP

Breaking Down the Service YAML:

  • selector.app: my-microservice: Makes sure that traffic is routed to the pods labeled app: my-microservice.
  • port: 80: The port on which the service is accessible within the cluster.
  • targetPort: 3000: The port where the microservice is listening inside the container.
  • type: ClusterIP: Exposes the microservice internally within the Kubernetes cluster.

Note:

  • If you want to expose the service externally, change type: ClusterIP to type: LoadBalancer (for cloud environments) or type: NodePort (for testing).
  • With LoadBalancer, the cloud provider will assign an external IP automatically.

Apply the service:

kubectl apply -f my-microservice-service.yaml

Checking the Service Status

To confirm the service is running and get its details, use:

kubectl get services

Output for a ClusterIP service:

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
my-microservice-service   ClusterIP   10.108.127.154   <none>        80/TCP    3m

If you use a LoadBalancer service, an external IP will be assigned (this may take a few minutes):

NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
my-microservice-service   LoadBalancer   10.108.127.154   35.224.55.101   80:31234/TCP   3m

To test the service, use curl (for ClusterIP, test from within the cluster):

curl http://10.108.127.154:80
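Since a ClusterIP address isn't reachable from outside the cluster, one way to run that check is from a temporary pod; inside the cluster the service is also reachable by its DNS name (a sketch using the busybox image's wget):

kubectl run curl-test --rm -it --restart=Never --image=busybox -- wget -qO- http://my-microservice-service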

For LoadBalancer, you can access the microservice directly using the assigned external IP:

curl http://35.224.55.101

Step 5: Scaling Microservices

Kubernetes lets you scale a microservice by adjusting the number of replicas in its deployment, either manually or automatically. To handle fluctuating traffic, Kubernetes can scale microservices using horizontal pod autoscaling.

Scale Up to 5 Instances

kubectl scale deployment microservice-deployment --replicas=5

Auto-scaling Based on CPU Usage (hpa.yaml)

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: microservice-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: microservice-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
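Note that utilization-based autoscaling only works if the cluster runs the metrics-server (on Minikube: minikube addons enable metrics-server) and the target pods declare CPU requests, since utilization is computed against the request. A minimal sketch of what to add under the container in microservice-deployment.yaml:

resources:
  requests:
    cpu: 100m    # HPA utilization is measured against this request
  limits:
    cpu: 500m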

Apply the autoscaler:

kubectl apply -f hpa.yaml

To check the autoscaler status, use:

kubectl get hpa

The output looks similar to this:

NAME               REFERENCE                             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
microservice-hpa   Deployment/microservice-deployment    40%/50%   2         10        3          2m

Testing Autoscaling

To test autoscaling, generate load against the microservice so that the deployment's own pods consume CPU (the HPA watches those pods, so burning CPU in an unrelated pod won't trigger it). A simple load generator sends a continuous stream of requests:

kubectl run load-generator --image=busybox --restart=Never -- sh -c "while true; do wget -q -O- http://my-microservice-service; done"

Then monitor the pod count:

kubectl get pods -w

As CPU usage increases, the HPA will automatically scale up pods to handle the load.

Step 6: Load Balancing

Load balancing ensures efficient distribution of traffic across multiple instances of a microservice, preventing overload on any single pod. In Kubernetes, you can expose microservices externally using a LoadBalancer or an Ingress Controller.

Using an Ingress Controller (Recommended for Multiple Microservices)

If you need to manage multiple microservices under a single domain, an Ingress Controller is more efficient. The Ingress resource routes external traffic to the appropriate microservice based on the request URL.

Example Ingress configuration (ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservice-ingress
spec:
  rules:
    - host: mymicroservice.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-microservice-service
                port:
                  number: 80
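The rule above sends all traffic to a single service. To route several microservices under the same host, add one path per service. A sketch (orders-service is a hypothetical second microservice, not defined in this guide):

paths:
  - path: /users
    pathType: Prefix
    backend:
      service:
        name: my-microservice-service
        port:
          number: 80
  - path: /orders
    pathType: Prefix
    backend:
      service:
        name: orders-service  # hypothetical second microservice
        port:
          number: 80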

Applying the Ingress Resource

To enable Ingress, install an Ingress Controller like NGINX:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml

Then deploy the Ingress rule:

kubectl apply -f ingress.yaml

Check the Ingress status:

kubectl get ingress

If you're using Minikube, enable the ingress addon with minikube addons enable ingress, or expose the controller with:

minikube tunnel

Your microservice is now accessible at http://mymicroservice.example.com, provided the hostname resolves to the ingress controller's address (via DNS, or a local hosts-file entry as shown below).
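For local testing, a hosts-file entry is the quickest way to make that hostname resolve (a sketch assuming the ingress is reachable on 127.0.0.1 via minikube tunnel; substitute the address reported by kubectl get ingress):

echo "127.0.0.1 mymicroservice.example.com" | sudo tee -a /etc/hosts
curl http://mymicroservice.example.com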

Step 7: Configuring Microservices With Environment Variables

Microservices often require configuration values such as database URLs, API keys, and service endpoints. Instead of hardcoding these values in application code, Kubernetes provides ConfigMaps and secrets to manage them securely.

Using ConfigMaps for Non-Sensitive Data

A ConfigMap is used to store nonsensitive configuration data, such as database connection strings, feature flags, and environment-specific variables.

Example: Creating a ConfigMap (configmap.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  name: microservice-config
data:
  DATABASE_URL: "mongodb://database:27017"

Apply the ConfigMap:

kubectl apply -f configmap.yaml

Using Secrets for Sensitive Data

For sensitive data such as API keys, passwords, and credentials, use secrets instead of ConfigMaps. Secrets store data Base64-encoded; keep in mind that Base64 is encoding, not encryption, so you should still restrict access with RBAC and consider enabling encryption at rest.

Example: Creating a Secret for Database Credentials (secret.yaml)

apiVersion: v1
kind: Secret
metadata:
  name: microservice-secret
type: Opaque
data:
  DB_USERNAME: bXl1c2Vy     # Base64-encoded 'myuser'
  DB_PASSWORD: cGFzc3dvcmQ= # Base64-encoded 'password'
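You can produce the encoded values with the base64 utility; the -n flag keeps a trailing newline out of the encoding:

echo -n 'myuser' | base64     # bXl1c2Vy
echo -n 'password' | base64   # cGFzc3dvcmQ=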

Apply the secret:

kubectl apply -f secret.yaml

Using ConfigMaps and Secrets in a Deployment

Now update the microservice deployment file (microservice-deployment.yaml) to inject these values as environment variables by adding an env section to the container spec:

env:
  - name: DATABASE_URL
    valueFrom:
      configMapKeyRef:
        name: microservice-config
        key: DATABASE_URL
  - name: DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: microservice-secret
        key: DB_USERNAME
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: microservice-secret
        key: DB_PASSWORD
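If you'd rather expose every key from the ConfigMap and secret without listing each one, envFrom is a more compact alternative (the key names become the environment variable names):

envFrom:
  - configMapRef:
      name: microservice-config
  - secretRef:
      name: microservice-secret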

Apply the updated deployment:

kubectl apply -f microservice-deployment.yaml

Verifying Configuration

Check if the ConfigMap and secret are correctly applied:

kubectl get configmap microservice-config -o yaml
kubectl get secret microservice-secret -o yaml

Finally, verify that environment variables are correctly set in a running pod:

kubectl exec -it <pod-name> -- printenv | grep DATABASE_URL
kubectl exec -it <pod-name> -- printenv | grep DB_USERNAME

Final Thoughts: Simplifying Kubernetes for Microservices With DevZero

Deploying microservices on Kubernetes can be complex, requiring infrastructure setup, monitoring, and scaling. DevZero simplifies Kubernetes development by providing a preconfigured cloud development environment that allows teams to focus on building applications rather than managing infrastructure. With DevZero, developers can spin up microservices environments instantly, collaborate seamlessly, and deploy confidently.

Ready to streamline your Kubernetes microservices workflow? Sign up for DevZero today!
