Kubernetes Workload Types: When to Use What

Debo Ray

Co-Founder, CEO

July 21, 2025

Introduction

Choosing the right Kubernetes workload type is crucial to building efficient and scalable applications. Each workload controller is designed for a specific use case, and understanding these differences is vital for both application performance and resource efficiency. This guide examines the major Kubernetes workload types, explains when to use each one, and provides real-world examples to help you make informed architectural decisions.

Core Workload Types

Deployments

Purpose: Manage stateless applications with rolling updates and replica management.

When to Use:

  • Web applications and APIs that don't store state locally
  • Microservices without persistent data requirements
  • Applications requiring high availability through multiple replicas
  • Workloads needing frequent updates with zero downtime
  • Services that can be easily replaced or restarted

Common Examples:

  • Frontend web servers (nginx, Apache, React/Angular apps)
  • REST API services and GraphQL endpoints
  • Load balancers and reverse proxies
  • Stateless backend services (authentication, notification services)
  • Content delivery and caching layers (Redis for sessions, not persistence)

Key Characteristics:

  • Pods are interchangeable and can be created/destroyed freely
  • Rolling updates ensure zero-downtime deployments
  • Horizontal scaling is straightforward
  • No persistent storage is attached to individual pods
  • Stable network identity is not required

Configuration Example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
      - name: api
        image: mycompany/web-api:v1.2.3
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
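
Because Deployment pods are interchangeable, horizontal scaling is usually delegated to a HorizontalPodAutoscaler. The sketch below targets the web-api Deployment above; the replica bounds and the 70% CPU target are illustrative assumptions, not recommendations.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api
spec:
  scaleTargetRef:            # points at the Deployment defined above
    apiVersion: apps/v1
    kind: Deployment
    name: web-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU usage exceeds 70% of requests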

StatefulSets

Purpose: Manage stateful applications requiring stable network identities and persistent storage.

When to Use:

  • Databases requiring persistent storage and stable identities
  • Applications with master-slave or leader-follower architectures
  • Services requiring ordered deployment and scaling
  • Applications that store data locally and need consistent network identities
  • Clustered applications with peer discovery requirements

Common Examples:

  • Database clusters (PostgreSQL, MySQL, MongoDB)
  • Message brokers (RabbitMQ, Apache Kafka)
  • Distributed storage systems (Cassandra, Elasticsearch)
  • Consensus-based systems (etcd, Consul, Zookeeper)
  • Analytics platforms requiring data locality

Key Characteristics:

  • Pods have stable, unique network identities (pod-0, pod-1, pod-2)
  • Persistent storage follows pods during rescheduling
  • Ordered deployment and scaling (pod-0 before pod-1, etc.)
  • Stable DNS names for service discovery
  • Graceful termination and ordered updates

Configuration Example:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-cluster
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: postgres-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi
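
The serviceName field above points to a headless Service, which is what gives each pod its stable DNS name (for example, postgres-cluster-0.postgres). A minimal sketch of that Service, assuming the same app: postgres labels:

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None        # headless: DNS resolves to the individual pod IPs
  selector:
    app: postgres
  ports:
  - name: postgres
    port: 5432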

DaemonSets

Purpose: Run one pod on every node (or every node matching a selector) for system-level services.

When to Use:

  • Node-level monitoring and logging
  • Network plugins and system services
  • Security agents and compliance tools
  • Hardware management and device plugins
  • Any service that needs to run on every node

Common Examples:

  • Log collection agents (Fluentd, Filebeat, Logstash)
  • Monitoring agents (Prometheus Node Exporter, Datadog agent)
  • Network overlay components (Calico, Flannel)
  • Security and compliance tools (Falco, Twistlock)
  • Storage drivers and CSI plugins

Key Characteristics:

  • Automatically schedules pods on new nodes
  • Ensures exactly one pod per node (unless node selectors are used)
  • Typically requires elevated privileges
  • Often uses host networking and file system access
  • Survives node reboots and maintenance

Configuration Example:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      name: log-collector
  template:
    metadata:
      labels:
        name: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluentd:v1.14
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
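
By default this DaemonSet will skip tainted nodes such as control-plane nodes. If the agent should run there too, the pod template needs a matching toleration; a sketch of the extra snippet, assuming the standard control-plane taint:

      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists       # tolerate the taint regardless of its value
        effect: NoSchedule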

Jobs

Purpose: Run batch workloads to completion with guaranteed execution.

When to Use:

  • One-time data processing tasks
  • Database migrations and schema updates
  • Backup and restore operations
  • Batch analytics and reporting
  • Image or video processing pipelines

Common Examples:

  • ETL (Extract, Transform, Load) processes
  • Database migrations and maintenance scripts
  • Report generation and data exports
  • Machine learning model training
  • File processing and format conversion

Key Characteristics:

  • Runs until successful completion
  • Can run multiple pods for parallel processing
  • Automatically retries failed pods (configurable)
  • Cleans up completed pods based on retention policy
  • Supports different completion modes (parallel, indexed)

Configuration Example:

apiVersion: batch/v1
kind: Job
metadata:
  name: data-migration
spec:
  parallelism: 4
  completions: 4    # matches parallelism so four pods actually run concurrently
  backoffLimit: 3
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: migrator
        image: mycompany/data-migrator:v2.1.0
        env:
        - name: SOURCE_DB
          value: "postgresql://old-db:5432/data"
        - name: TARGET_DB
          value: "postgresql://new-db:5432/data"
        resources:
          requests:
            cpu: 1
            memory: 2Gi
          limits:
            cpu: 2
            memory: 4Gi
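
For the indexed completion mode mentioned above, each pod is assigned an index (0 through completions - 1) and receives it in the JOB_COMPLETION_INDEX environment variable, which makes it easy to shard input data. A minimal sketch; the busybox image and echo command are placeholders for real processing logic:

apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-processing
spec:
  completionMode: Indexed   # pods are numbered 0..completions-1
  completions: 5
  parallelism: 5
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        # JOB_COMPLETION_INDEX is injected automatically for Indexed Jobs
        command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]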

CronJobs

Purpose: Schedule recurring batch workloads.

When to Use:

  • Scheduled backups and maintenance
  • Periodic data synchronization
  • Regular cleanup and housekeeping tasks
  • Time-based report generation
  • Health checks and monitoring tasks

Common Examples:

  • Database backups and archiving
  • Log rotation and cleanup
  • Data synchronization between systems
  • Periodic health checks and system maintenance
  • Scheduled report generation and delivery

Key Characteristics:

  • Uses cron syntax for scheduling
  • Creates Jobs on schedule
  • Configurable concurrency policies
  • Can handle missed schedules
  • Automatic cleanup of old jobs

Configuration Example:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: database-backup
spec:
  schedule: "0 2 * * *"  # Daily at 2 AM
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: postgres:14
            command:
            - /bin/bash
            - -c
            - pg_dump $DATABASE_URL > /backup/$(date +%Y%m%d_%H%M).sql
            env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: url
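
The handling of missed schedules is controlled with startingDeadlineSeconds: if the controller cannot start a run within that window after its scheduled time (for example, because the cluster was down), the run is counted as missed and skipped. A snippet with an illustrative ten-minute window:

spec:
  schedule: "0 2 * * *"
  startingDeadlineSeconds: 600   # skip the run if it cannot start within 10 minutes
  concurrencyPolicy: Forbid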

Advanced Workload Types

ReplicaSets

Purpose: Low-level replica management (typically managed by Deployments).

ReplicaSets are rarely used directly in modern Kubernetes deployments. Deployments provide a higher-level abstraction that handles ReplicaSet management automatically, including rolling updates and rollback capabilities.

When you might use ReplicaSets directly:

  • Building custom controllers
  • Very specific scaling requirements not met by Deployments
  • Legacy applications with unique update patterns
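
For reference, a bare ReplicaSet manifest looks almost identical to a Deployment, minus the rolling-update machinery; a minimal sketch reusing the web-api example from above:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-api-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
      - name: api
        image: mycompany/web-api:v1.2.3   # changing this image does not roll existing pods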

Custom Resources and Operators

Purpose: Application-specific workload management through custom controllers.

When to Use:

  • Complex applications requiring custom lifecycle management
  • Multi-component applications with interdependencies
  • Applications needing specialized scaling or update strategies
  • When existing workload types don't fit your use case

Common Examples:

  • Database operators (PostgreSQL Operator, MongoDB Operator)
  • Application platforms (Istio, Knative)
  • ML/AI workload managers (Kubeflow, Seldon)
  • Backup and disaster recovery operators
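
With an operator installed, you describe the application through a custom resource and let the operator's controller create the underlying StatefulSets, Services, and Jobs. The sketch below is a hypothetical resource for an imaginary PostgreSQL operator; the API group, kind, and field names are assumptions and vary from operator to operator:

apiVersion: example.com/v1      # hypothetical API group registered by the operator's CRD
kind: PostgresCluster           # hypothetical custom resource kind
metadata:
  name: orders-db
spec:
  replicas: 3                   # the operator turns this into a StatefulSet
  version: "14"
  storage:
    size: 100Gi
  backups:
    schedule: "0 2 * * *"       # the operator might create a CronJob for backups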