28 Environments and Deployment Strategies

Environments are GitLab's answer to the question: “Where is my code running?” They track deployments, visualize deployment history, and enable rollbacks. From simple staging/production setups to complex multi-region deployments with canary releases, environments are the backbone of production CI/CD.

This chapter covers environment concepts, static vs. dynamic environments, Review Apps, deployment strategies (blue-green, canary, rolling), protected environments, and rollback mechanisms.

28.1 What Are Environments?

Definition: An environment is a named deployment destination, typically a server infrastructure or cloud environment.

Examples:

- staging: staging server (pre-production testing)
- production: production server (live for users)
- review/feature-x: temporary environment for a feature branch

For each environment, GitLab tracks:

- Deployment history: which commit was deployed, and when
- Current state: which commit is live right now
- Deployment URL: a direct link to the deployed app
- Deployment jobs: which pipeline performed the deployment

Access: Deployments → Environments

Environments

production      v2.1.0  Deployed 3 days ago   ✓ Healthy
staging         v2.2.0  Deployed 2 hours ago  ✓ Healthy
review/feat-x   v2.2.0  Deployed 5 hours ago  ⚙ Deploying
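
The same information is available through the REST API, which is handy for scripted audits. A minimal sketch (the project ID 42 and the host are placeholders):

curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.example.com/api/v4/projects/42/environments"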

28.2 Defining Environments in .gitlab-ci.yml

28.2.1 Minimal Example

deploy:
  script:
    - ./deploy.sh
  environment:
    name: production

Effect:

- GitLab creates the “production” environment (if it does not exist yet)
- The job is linked to that environment
- The deployment is recorded in the environment history

28.2.2 With a URL

deploy:
  script:
    - ./deploy.sh
  environment:
    name: production
    url: https://myapp.com

The UI shows:

Environment: production
URL: https://myapp.com [View deployment]
Last deployed: 2 hours ago by @andreas
Commit: 7a8f9b0c - Update homepage

Clicking “View deployment” opens https://myapp.com.

28.2.3 With on_stop (Cleanup)

deploy:review:
  script:
    - ./deploy.sh review-$CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://review-$CI_COMMIT_REF_SLUG.myapp.com
    on_stop: stop:review

stop:review:
  script:
    - ./cleanup.sh review-$CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual

Workflow:

1. deploy:review creates the review environment.
2. A “Stop” button appears in the UI.
3. Clicking Stop runs the stop:review job, and the environment is deleted.
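
An environment can also be stopped without the UI via the Environments API. A sketch (project and environment IDs are placeholders):

curl --request POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.example.com/api/v4/projects/42/environments/120/stop"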

28.3 Static vs. Dynamic Environments

28.3.1 Static Environments

Definition: Fixed, permanently existing environments.

deploy:staging:
  environment:
    name: staging
    url: https://staging.myapp.com

deploy:production:
  environment:
    name: production
    url: https://myapp.com

Use cases:

- Staging and production
- Fixed test environments
- Long-lived feature previews

28.3.2 Dynamic Environments

Definition: Environments whose name is derived from the branch, tag, or commit.

deploy:review:
  script:
    - ./deploy.sh $CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_SLUG.myapp.com
  only:
    - branches
  except:
    - main

For the branch feature-new-ui:

- Environment: review/feature-new-ui
- URL: https://feature-new-ui.myapp.com

For the branch bugfix-login:

- Environment: review/bugfix-login
- URL: https://bugfix-login.myapp.com

Visualization in the UI:

Environments

review/feature-new-ui    Deployed 2 hours ago
review/bugfix-login      Deployed 1 day ago
review/refactor-api      Deployed 3 days ago

Cleanup: on_stop triggers the cleanup job when the branch is deleted or merged.

28.4 Review Apps: A Comprehensive Guide

28.4.1 What Are Review Apps?

Definition: An app instance that is deployed automatically for every merge request.

Workflow:

1. A developer creates a feature branch.
2. The developer opens a merge request.
3. The pipeline automatically deploys a Review App.
4. Reviewers test the live preview.
5. Feedback lands directly in the MR.
6. After the merge, the Review App is deleted.

Benefits:

- Visual review: see it instead of imagining it
- Cross-team: designers, PMs, and QA can test
- Early feedback: problems are caught before the merge
- Isolated: no interference with staging or production

28.4.2 Review App Implementation

Full Example:

stages:
  - build
  - deploy
  - cleanup

variables:
  REVIEW_DOMAIN: myapp-review.com

build:
  stage: build
  script:
    - npm run build
  artifacts:
    paths:
      - dist/

deploy:review:
  stage: deploy
  script:
    # Deploy to a unique subdomain (pass only the slug; the script appends the domain)
    - export REVIEW_URL="${CI_COMMIT_REF_SLUG}.${REVIEW_DOMAIN}"
    - echo "Deploying to https://${REVIEW_URL}"
    - ./scripts/deploy-review.sh "$CI_COMMIT_REF_SLUG" dist/
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_SLUG.$REVIEW_DOMAIN
    on_stop: stop:review
    auto_stop_in: 7 days  # auto-cleanup after 7 days
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

stop:review:
  stage: cleanup
  script:
    - export REVIEW_URL="${CI_COMMIT_REF_SLUG}.${REVIEW_DOMAIN}"
    - echo "Cleaning up https://${REVIEW_URL}"
    - ./scripts/cleanup-review.sh "$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual

Deploy script (scripts/deploy-review.sh):

#!/bin/bash
SUBDOMAIN=$1   # the branch slug, e.g. feature-new-ui
BUILD_DIR=$2

# Create one namespace per review app in Kubernetes
kubectl create namespace review-${SUBDOMAIN} || true

# Deploy app
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: review-app
  namespace: review-${SUBDOMAIN}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: review-app
  template:
    metadata:
      labels:
        app: review-app
    spec:
      containers:
      - name: app
        image: nginx:alpine
        volumeMounts:
        - name: app-content
          mountPath: /usr/share/nginx/html
      volumes:
      - name: app-content
        hostPath:
          # illustrative only: hostPath assumes the build output exists on the
          # node; in practice you would bake dist/ into a container image
          path: ${BUILD_DIR}
---
apiVersion: v1
kind: Service
metadata:
  name: review-service
  namespace: review-${SUBDOMAIN}
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: review-app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: review-ingress
  namespace: review-${SUBDOMAIN}
spec:
  rules:
  - host: ${SUBDOMAIN}.myapp-review.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: review-service
            port:
              number: 80
EOF

Cleanup script (scripts/cleanup-review.sh):

#!/bin/bash
SUBDOMAIN=$1

# Delete Kubernetes namespace (deletes all resources)
kubectl delete namespace review-${SUBDOMAIN}

28.4.3 Review Apps in the Merge Request

The MR view:

Merge Request #123: Add new homepage design

Pipeline: ✓ passed

Environment: review/feature-new-homepage
Deployed to: https://feature-new-homepage.myapp-review.com
[View app] [Stop environment]

Changes:
- New hero section
- Updated color scheme
- Mobile-responsive layout

Discussion:
@designer: "Love the new hero! But font-size too small on mobile."
@pm: "Can we make the CTA button more prominent?"

Directly in the MR: the live preview is one click away, and feedback stays in context.

28.4.4 Review Apps: Best Practices

1. Auto-stop for cleanup:

environment:
  auto_stop_in: 7 days  # prevents environment sprawl

2. Resource limits:

deploy:review:
  script:
    # folded scalar: YAML joins these lines into a single shell command
    # (backslash continuations break inside YAML list items)
    - >-
      kubectl set resources deployment/review-app
      --limits=cpu=200m,memory=256Mi
      --namespace=review-$CI_COMMIT_REF_SLUG

3. Authentication for Review Apps:

deploy:review:
  script:
    - export BASIC_AUTH_USER=review
    - export BASIC_AUTH_PASS=$(openssl rand -base64 12)
    - ./deploy-with-auth.sh
    - echo "Username: $BASIC_AUTH_USER, Password: $BASIC_AUTH_PASS" >> mr-comment.txt

4. Cost management:

- Use cheap instance types (reviews do not need production performance)
- Auto-scale to zero when inactive
- Share one database across all review apps

28.5 Environment Tiers

Tiers categorize environments by importance.

Available tiers:

- development
- testing
- staging
- production
- other

Definition:

deploy:production:
  environment:
    name: production
    deployment_tier: production  # Highest tier

deploy:staging:
  environment:
    name: staging
    deployment_tier: staging

deploy:review:
  environment:
    name: review/$CI_COMMIT_REF_NAME
    deployment_tier: development  # Lowest tier

Effect in the UI: environments are sorted by tier, with production always first.

Protected environments per tier: protection rules can target a whole tier, so everything marked as production can be protected automatically.

28.6 Protected Environments

Definition: Deployments to protected environments require special permissions.

Setup:

1. Mark the environment as protected:

Settings → CI/CD → Protected Environments

Environment: production
Deployment tier: production
Allowed to deploy:
  - Maintainers
  - Specific users: @andreas, @bob
Approval required: ☐

2. Only maintainers/selected users can deploy:

deploy:production:
  environment:
    name: production
  script:
    - ./deploy.sh production
  only:
    - main

If the user is not authorized: the job fails with “user not authorized to deploy to production”.

Use case: prevents accidental production deployments, for example by junior developers.

28.6.1 Deployment Approvals (Premium/Ultimate)

Setup:

Settings → CI/CD → Protected Environments

Environment: production
Approval required: ☑
Required approvals: 2
Approvers: @senior-dev-1, @senior-dev-2, @cto

Workflow:

1. The pipeline runs up to deploy:production.
2. The job pauses: “Waiting for approval”.
3. The approvers receive a notification.
4. Two approvers must click “Approve”.
5. The job starts automatically once approved.
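
Approvals can also be granted programmatically through the Deployment Approvals API. A sketch (project and deployment IDs are placeholders):

curl --request POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  --data "status=approved" \
  "https://gitlab.example.com/api/v4/projects/42/deployments/321/approval"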

In the UI:

Pipeline #123

deploy:production  ⏸ Waiting for approval (1/2 approved)

Approvals:
✓ @senior-dev-1 approved 10 minutes ago
⏳ Waiting for 1 more approval

[Approve] [Reject]

28.7 Deployment Strategies

28.7.1 1. Basic Deployment (Recreate)

Strategy: stop the old version, start the new one.

deploy:
  script:
    - kubectl delete deployment myapp || true
    - kubectl apply -f deployment.yaml
  environment:
    name: production

Downtime: yes (while the old version stops and the new one starts)
Rollback: manual (re-deploy the old version)
Complexity: low
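
Instead of deleting the Deployment by hand as above, Kubernetes offers this natively; a minimal fragment of the Deployment spec:

spec:
  strategy:
    type: Recreate  # terminate all old pods before starting new ones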

28.7.2 2. Rolling Update

Strategy: gradual replacement of pods, no downtime.

deploy:
  script:
    - kubectl apply -f deployment.yaml
    - kubectl rollout status deployment/myapp
  environment:
    name: production

Kubernetes Deployment (with rolling-update configuration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2        # at most 2 extra pods during the update
      maxUnavailable: 1  # at most 1 pod down during the update
  template:
    spec:
      containers:
      - name: app
        image: myapp:$CI_COMMIT_SHA

What happens:

1. New pods start (maxSurge: 2 → up to 12 pods in total)
2. A new pod becomes healthy
3. An old pod is terminated (maxUnavailable: 1 → 11 pods)
4. Repeat until all pods are updated

Downtime: no
Rollback: kubectl rollout undo (see the commands below)
Complexity: medium
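
The rollback is built into Kubernetes, no GitLab-specific tooling required:

kubectl rollout history deployment/myapp               # list recorded revisions
kubectl rollout undo deployment/myapp                  # back to the previous revision
kubectl rollout undo deployment/myapp --to-revision=3  # or to a specific one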

28.7.3 3. Blue-Green Deployment

Strategy: two identical environments (blue = current, green = new); traffic is switched between them.

deploy:blue-green:
  script:
    # Determine current color
    - CURRENT=$(kubectl get service myapp -o jsonpath='{.spec.selector.version}')
    - NEW=$([ "$CURRENT" == "blue" ] && echo "green" || echo "blue")
    
    # Deploy new version
    - kubectl apply -f deployment-${NEW}.yaml
    - kubectl wait --for=condition=ready pod -l version=${NEW}
    
    # Health check
    - ./health-check.sh ${NEW}
    
    # Switch traffic
    - kubectl patch service myapp -p '{"spec":{"selector":{"version":"'${NEW}'"}}}'
    
    # Keep old version for 10 minutes (quick rollback)
    - sleep 600
    - kubectl delete deployment myapp-${CURRENT}
  environment:
    name: production
    url: https://myapp.com
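
The health-check.sh called above is assumed, not shown. A minimal sketch that probes the new color from inside the cluster before traffic is switched; the per-color Service names (myapp-blue/myapp-green) and the /healthz path are assumptions:

#!/bin/bash
# health-check.sh <color> - probe the new version via a throwaway curl pod
COLOR=$1
for i in {1..10}; do
  if kubectl run "health-check-$i" --rm -i --restart=Never \
       --image=curlimages/curl -- \
       -fsS "http://myapp-${COLOR}.default.svc.cluster.local/healthz"; then
    echo "Health check passed for ${COLOR}"
    exit 0
  fi
  sleep 5
done
echo "Health check failed for ${COLOR}"
exit 1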

Kubernetes Manifests:

# deployment-blue.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
spec:
  replicas: 10
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
      - name: app
        image: myapp:$CI_COMMIT_SHA

---
# deployment-green.yaml (identical, but with version: green)

---
# service.yaml (switched between blue and green)
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue  # initially blue, later switched to green
  ports:
  - port: 80

Sequence:

1. Blue is live (version: blue in the Service)
2. Deploy green (deployment-green)
3. Test green (internal URL)
4. Switch the Service to green
5. Blue stays up for 10 minutes (rollback possible)
6. Delete blue

Downtime: no
Rollback: switch the Service back (instant!)
Complexity: high (duplicate resources)

28.7.4 4. Canary Deployment

Strategy: route a small share of the traffic (5-10%) to the new version, monitor it, and gradually increase.

deploy:canary:
  script:
    # Deploy canary (10% traffic)
    - kubectl apply -f deployment-canary.yaml
    - kubectl wait --for=condition=ready pod -l version=canary
    
    # Monitor metrics for 10 minutes
    - ./monitor-canary.sh 10
    
    # If healthy, promote to 50%
    - kubectl scale deployment myapp-canary --replicas=5
    - kubectl scale deployment myapp-stable --replicas=5
    - ./monitor-canary.sh 10
    
    # If still healthy, promote to 100%
    - kubectl scale deployment myapp-canary --replicas=10
    - kubectl scale deployment myapp-stable --replicas=0
    - kubectl delete deployment myapp-stable
    # simplified: relabeling the Deployment object does not change pod labels;
    # a real pipeline would redeploy the canary manifest as the new stable
    - kubectl label deployment myapp-canary version=stable --overwrite
  environment:
    name: production

Initial State:

myapp-stable: 10 Pods (100% traffic)
myapp-canary: 0 Pods

After Canary Deploy:

myapp-stable: 9 Pods (90% traffic)
myapp-canary: 1 Pod (10% traffic)

After Promotion to 50%:

myapp-stable: 5 Pods (50% traffic)
myapp-canary: 5 Pods (50% traffic)

After Full Promotion:

myapp-stable: 0 Pods (deleted)
myapp-canary: 10 Pods (100% traffic, renamed to stable)
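
monitor-canary.sh is likewise referenced but not shown. A hedged sketch, assuming Prometheus answers at http://prometheus:9090 and a precomputed canary_error_rate query exists:

#!/bin/bash
# monitor-canary.sh <minutes> - watch the canary error rate, abort on regression
MINUTES=$1
for i in $(seq 1 $((MINUTES * 6))); do  # one check every 10 seconds
  ERROR_RATE=$(curl -s 'http://prometheus:9090/api/v1/query?query=canary_error_rate' \
    | jq -r '.data.result[0].value[1]')
  if (( $(echo "${ERROR_RATE:-0} > 0.05" | bc -l) )); then
    echo "Canary error rate ${ERROR_RATE} above 5% - scaling canary to zero"
    kubectl scale deployment myapp-canary --replicas=0
    exit 1
  fi
  sleep 10
done
echo "Canary healthy for ${MINUTES} minutes"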

Istio-based canary (traffic splitting):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.com
  http:
  - match:
    - headers:
        canary:
          exact: "true"
    route:
    - destination:
        host: myapp
        subset: canary
  - route:
    - destination:
        host: myapp
        subset: stable
      weight: 95
    - destination:
        host: myapp
        subset: canary
      weight: 5  # 5% to the canary

Downtime: no
Rollback: scale the canary down to 0
Complexity: very high (requires metrics and monitoring)

28.7.5 Deployment Strategies: A Comparison

Strategy     Downtime  Rollback Speed         Resource Overhead  Complexity  Risk
Recreate     Yes       Slow (re-deploy)       0%                 Low         High
Rolling      No        Medium (rollout undo)  20%                Medium      Medium
Blue-Green   No        Instant (switch)       100%               High        Low
Canary       No        Instant (scale down)   10-50%             Very high   Very low

Recommendation:

- Small projects: rolling update (simple, built into Kubernetes)
- Medium projects: blue-green (instant rollback)
- Large projects: canary (minimizes the blast radius)

28.8 Rollback Mechanisms

28.8.1 1. Re-Deploying an Older Commit

UI-based:

Deployments → Environments → production → Deployment #122
[Re-deploy]

Clicking “Re-deploy” triggers a pipeline for that commit, which deploys it again.

28.8.2 2. Manual Rollback Job

rollback:production:
  stage: deploy
  script:
    - kubectl rollout undo deployment/myapp
  environment:
    name: production
  when: manual
  only:
    - main

In the pipeline: a “Play” button on rollback:production undoes the last deployment.

28.8.3 3. Automated Rollback (Monitoring-Based)

deploy:production:
  script:
    - ./deploy.sh
    - |
      # Monitor metrics for 5 minutes; roll back on an elevated error rate.
      # This belongs in script, not after_script: a non-zero exit in
      # after_script does not fail the job.
      for i in {1..30}; do
        ERROR_RATE=$(curl -s 'http://prometheus/api/v1/query?query=error_rate' | jq -r '.data.result[0].value[1]')
        if (( $(echo "$ERROR_RATE > 0.05" | bc -l) )); then
          echo "Error rate too high! Rolling back..."
          kubectl rollout undo deployment/myapp
          exit 1
        fi
        sleep 10
      done
  environment:
    name: production

Automated: if the error rate exceeds 5%, the deployment is rolled back automatically and the job fails.

28.9 Environment Variables per Environment

.deploy-template:
  script:
    - echo "Deploying to $ENVIRONMENT"
    - echo "API URL: $API_URL"
    - echo "Database: $DATABASE_HOST"
    - ./deploy.sh

deploy:staging:
  extends: .deploy-template
  variables:
    ENVIRONMENT: staging
    API_URL: https://staging-api.myapp.com
    DATABASE_HOST: staging-db.myapp.com
  environment:
    name: staging

deploy:production:
  extends: .deploy-template
  variables:
    ENVIRONMENT: production
    API_URL: https://api.myapp.com
    DATABASE_HOST: prod-db.myapp.com
  environment:
    name: production
  when: manual

DRY: the template holds the shared logic; only the variables differ per environment. (Alternatively, CI/CD variables can be scoped to a single environment under Settings → CI/CD → Variables.)

28.10 Practical Patterns

28.10.1 Pattern 1: Multi-Region Deployment

deploy:eu:
  script:
    - kubectl config use-context eu-cluster
    - kubectl apply -f deployment.yaml
  environment:
    name: production/eu
    url: https://eu.myapp.com

deploy:us:
  script:
    - kubectl config use-context us-cluster
    - kubectl apply -f deployment.yaml
  environment:
    name: production/us
    url: https://us.myapp.com

deploy:asia:
  script:
    - kubectl config use-context asia-cluster
    - kubectl apply -f deployment.yaml
  environment:
    name: production/asia
    url: https://asia.myapp.com

UI:

production/eu    Deployed 1 hour ago
production/us    Deployed 1 hour ago
production/asia  Deployed 1 hour ago
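
The three nearly identical jobs can be collapsed into one with parallel:matrix; a sketch of the same deployment:

deploy:region:
  parallel:
    matrix:
      - REGION: [eu, us, asia]
  script:
    - kubectl config use-context ${REGION}-cluster
    - kubectl apply -f deployment.yaml
  environment:
    name: production/$REGION
    url: https://$REGION.myapp.com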

28.10.2 Pattern 2: Scheduled Maintenance Windows

deploy:production:
  script:
    - ./deploy.sh
  environment:
    name: production
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'  # only via a schedule
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual  # or manually on main

Schedule: CI/CD → Schedules → “Nightly Production Deploy” → 02:00 UTC

28.10.3 Pattern 3: Feature-Flag-Based Deployment

deploy:production:
  script:
    - export FEATURE_NEW_UI=${FEATURE_NEW_UI:-false}
    - ./deploy.sh --feature-flags FEATURE_NEW_UI=$FEATURE_NEW_UI
  environment:
    name: production

Deployment: the code ships with the feature disabled; the feature can be enabled via the flag, without a re-deploy (see the sketch below).
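
Flipping the flag later does not need a CI pipeline at all. Updating the Deployment's environment triggers only a rolling restart of the pods (deployment and variable names follow this chapter's examples):

kubectl set env deployment/myapp FEATURE_NEW_UI=true  # rolls the pods, no pipeline needed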

28.11 Summary

Environments are more than “where the code runs”: they are GitLab's deployment tracking, history, and rollback system.

Environment types:

- Static: staging, production (permanent)
- Dynamic: Review Apps, feature previews (temporary)

Review Apps:

- Deployed automatically for MRs
- Visual testing before the merge
- Auto-cleanup via on_stop

Deployment strategies:

- Rolling: gradual, no downtime
- Blue-green: instant rollback, duplicate resources
- Canary: minimal risk, very complex

Protected environments:

- Only authorized users can deploy
- Optional approval workflow

Rollback:

- Re-deploy an older commit
- Manual rollback job
- Automated (monitoring-based)

Master environments, and deployments go from “hope and pray” to “track, monitor, instant rollback”.