DevOps · 2025-03-28 · 12 min read

Docker & Kubernetes: A Complete Containerization Guide for Production-Ready Apps

From writing your first Dockerfile to orchestrating workloads with Kubernetes—a practical containerization guide covering image optimisation, security, and a real-world case study that cut deployment time from hours to minutes.

Docker · Kubernetes · Containers · DevOps · Cloud Native

The Promise (and the Reality) of Containerisation

Containers solve a problem every developer knows well: "it works on my machine." By packaging an application alongside all its runtime dependencies into a portable, reproducible unit, containers eliminate the friction between development, staging, and production environments. But getting from a working Dockerfile to a resilient, production-ready container strategy requires more than just running docker build.

This guide walks through the full journey—from container fundamentals to Kubernetes orchestration—drawing on patterns we've refined across dozens of client projects at MediaFront.

What Containers Actually Are

Unlike virtual machines, containers don't virtualise hardware. They share the host operating system kernel while running in isolated user-space processes. The practical result:

  • Startup in milliseconds, not minutes
  • Image sizes in megabytes, not gigabytes
  • Predictable resource usage — you can set hard CPU and memory limits
  • Immutable deployments — every release is a new image tag, making rollbacks trivial

The trade-off: containers on the same host share the kernel, which has security implications we'll address below.
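
The hard resource limits mentioned above are set at run time. A minimal sketch (image name and values are illustrative, not from any specific project):

```shell
# Cap the container at half a CPU core and 256 MiB of RAM, and mount its
# root filesystem read-only so writes can only go to explicit volumes
docker run -d \
  --cpus="0.5" \
  --memory="256m" \
  --read-only \
  --name api \
  myapp:1.0
```

Exceeding the memory limit gets the process OOM-killed rather than degrading neighbouring containers, which is exactly the predictability the bullet above promises.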

Writing Production-Grade Dockerfiles

A Dockerfile is infrastructure as code for your runtime environment. Small choices here have outsized consequences in production.

Use Multi-Stage Builds

Multi-stage builds separate the build toolchain from the runtime image, dramatically reducing final image size and attack surface:

# Stage 1: Build
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY ["OrderService.csproj", "./"]
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Stage 2: Runtime-only image (~100MB vs ~800MB SDK image)
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime
WORKDIR /app
EXPOSE 8080
COPY --from=build /app/publish .

# Never run as root in production
USER app

ENTRYPOINT ["dotnet", "OrderService.dll"]
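
Building and running the image locally then looks like this (the tag is illustrative):

```shell
docker build -t order-service:local .
docker run --rm -p 8080:8080 order-service:local
```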

Security Hardening Checklist

Before any image goes to production, verify:

  • Non-root user — add USER app (or create a dedicated user) so a compromised container doesn't have root access to the host
  • Read-only filesystem — mount volumes only where writes are genuinely needed
  • No secrets in image layers — use build secrets or runtime environment injection, never COPY .env .
  • Pinned base image tags — FROM node:20.14.0-alpine3.20 is reproducible; FROM node:latest is a reliability hazard
  • Vulnerability scanning — integrate Trivy or Grype into your CI pipeline; fail the build on HIGH or CRITICAL findings
# GitHub Actions step — fails PR if critical CVEs found
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master  # pin to a released tag in real pipelines
  with:
    image-ref: ${{ env.IMAGE }}
    exit-code: '1'
    severity: 'CRITICAL,HIGH'
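
The "no secrets in image layers" rule deserves a concrete pattern. BuildKit build secrets are mounted only for the duration of a single RUN instruction and never land in a layer or the build cache. A sketch (the secret id and the Node.js base are illustrative, not tied to the .NET example above):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20.14.0-alpine3.20
WORKDIR /app
COPY package*.json ./
# The secret exists only while this RUN executes; it is never
# written into any image layer, so it cannot be recovered later
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

Pass the secret at build time with docker build --secret id=npm_token,src=./token.txt — the file itself stays on the build host.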

Composing Services Locally with Docker Compose

Docker Compose is the right tool for local development and integration testing. Here's a production-mirroring setup with secrets management:

services:
  api:
    build: ./api
    ports: ["8080:8080"]
    environment:
      DB_HOST: db
      DB_PASSWORD_FILE: /run/secrets/db_password
    depends_on:
      db:
        condition: service_healthy
    secrets: [db_password]
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

  db:
    image: postgres:16-alpine
    volumes: [db_data:/var/lib/postgresql/data]
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets: [db_password]
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 10s

volumes:
  db_data:

secrets:
  db_password:
    file: ./secrets/db_password.txt

Two things worth highlighting here: depends_on with condition: service_healthy ensures the API only starts once the database is truly ready (not just the container process), and secrets are mounted as files rather than plain environment variables—reducing the risk of accidental exposure in logs.
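
The *_FILE convention on the api side implies the application, or its entrypoint, knows to read the secret from disk. A minimal POSIX-shell sketch of that entrypoint logic (the function name is our own; the pattern mirrors what the official postgres image does internally):

```shell
#!/bin/sh
# Entrypoint sketch: resolve *_FILE secrets before handing off to the app.
load_secret() {
  # If e.g. DB_PASSWORD_FILE points at a readable file, export DB_PASSWORD
  # with the file's contents; otherwise leave the environment untouched.
  var="$1"
  file=$(eval "printf '%s' \"\${${var}_FILE:-}\"")
  if [ -n "$file" ] && [ -f "$file" ]; then
    eval "$var=\$(cat \"\$file\")"
    eval "export $var"
  fi
}

load_secret DB_PASSWORD
# exec "$@"   # then hand off to the real process, e.g. dotnet OrderService.dll
```

Keeping the secret out of the Compose file's environment block means it never shows up in docker inspect output or accidental docker compose config dumps.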

Kubernetes: Orchestration That Scales

Kubernetes (K8s) takes over where Docker Compose leaves off. When you need automatic self-healing, zero-downtime deploys, and the ability to run hundreds of container replicas across a cluster, Kubernetes is the answer.

Core Kubernetes Primitives

| Object | Purpose |
| --- | --- |
| Pod | Smallest deployable unit; one or more containers sharing networking and storage |
| Deployment | Declares desired state; manages rolling updates and rollbacks |
| Service | Stable network endpoint in front of a set of pods |
| ConfigMap / Secret | Externalise configuration and credentials from images |
| HorizontalPodAutoscaler | Automatically scales replica count based on CPU, memory, or custom metrics |

A Production Deployment Manifest

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0       # Never take a pod down before a new one is ready
      maxSurge: 1
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: order-service
        image: registry.example.com/order-service:v2.4.1
        ports: [{containerPort: 8080}]
        resources:
          requests:
            cpu: "250m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
        livenessProbe:
          httpGet:
            path: /health/live
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 15
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        securityContext:
          runAsNonRoot: true
          readOnlyRootFilesystem: true

Setting maxUnavailable: 0 and maxSurge: 1 means Kubernetes always keeps the full replica count healthy during a rollout — zero-downtime deploys become the default.
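
The Deployment still needs a stable endpoint and, optionally, autoscaling: the Service and HorizontalPodAutoscaler rows from the table above. A sketch wired to the same app: order-service labels (ports and scaling thresholds are illustrative):

```yaml
# Service: stable cluster-internal endpoint in front of the order-service pods
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
  - port: 80          # port clients inside the cluster use
    targetPort: 8080  # containerPort from the Deployment
---
# HPA: scale between 3 and 10 replicas, targeting 70% average CPU utilisation
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Note that the HPA computes utilisation against the resources.requests values in the Deployment, which is one more reason to always set them explicitly.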

Case Study: Containerising a Legacy .NET Platform

A financial services client came to us with a Windows Server monolith that took three hours to deploy and consistently caused configuration drift between environments. Over eight weeks, our containerisation project delivered:

| Metric | Before | After |
| --- | --- | --- |
| Deployment time | ~3 hours | ~8 minutes |
| Environment parity | ❌ frequent drift | ✅ identical images |
| Infrastructure cost | baseline | −40% |
| Release cadence | monthly | weekly |

The key steps: extracting configuration into environment variables, adding structured health-check endpoints the probes could target, building images in CI using multi-stage Dockerfiles, and deploying to AKS with Helm charts versioned alongside the application code.

What to Containerise First

Not everything benefits equally from containerisation. Start here:

  1. Stateless HTTP services — the easiest wins with the highest immediate benefit
  2. Background workers and scheduled jobs — clean separation of concerns from your main API
  3. Third-party dependencies in dev (databases, queues, email servers) — Compose makes local parity trivial

Tackle stateful services (primary databases, file stores) last, and only when you have a solid understanding of volume management and backup strategies in your target cluster.
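
When you do reach stateful workloads, the primitive to study is the StatefulSet with per-pod PersistentVolumeClaims, which gives each replica stable storage and a stable network identity. A minimal sketch (names and the storage size are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres   # headless Service providing stable per-pod DNS names
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:16-alpine
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:   # one PVC is created per pod and survives restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Unlike a Deployment, deleting the pod here does not delete its claim, which is the property that makes backup and restore strategies tractable.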

Containerisation isn't a destination—it's the foundation for everything from zero-downtime deployments to auto-scaling to disaster recovery. Build the discipline into your workflow from day one.
