
Containers and Orchestration 🐳

Why Did Containers Revolutionize DevOps?

Containers solved the "works on my machine" problem and democratized infrastructure. From Docker to Kubernetes, containers are the foundation of modern DevOps.

🎯 Chapter Objectives

  • Master advanced containerization with Docker
  • Implement orchestration with Kubernetes
  • Optimize container performance and security
  • Manage microservices ecosystems
  • Build monitoring and troubleshooting strategies

🐳 Advanced Docker for DevOps

Optimized Multi-stage Builds

# Dockerfile.optimized
# ================================
# Stage 1: Build dependencies
# ================================
FROM node:18-alpine AS dependencies
WORKDIR /app

# Copy package files first (for better caching)
COPY package*.json ./
COPY yarn.lock ./

# Install ALL dependencies (including devDependencies)
RUN yarn install --frozen-lockfile

# ================================
# Stage 2: Build application
# ================================
FROM dependencies AS builder
WORKDIR /app

# Copy source code
COPY . .

# Build the application
RUN yarn build

# Run tests (fail the build if tests fail)
RUN yarn test --passWithNoTests

# ================================
# Stage 3: Production dependencies
# ================================
FROM node:18-alpine AS prod-deps
WORKDIR /app

COPY package*.json ./
COPY yarn.lock ./

# Install only production dependencies
RUN yarn install --frozen-lockfile --production && \
    yarn cache clean

# ================================
# Stage 4: Runtime
# ================================
FROM node:18-alpine AS runtime

# Security: Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

# Install security updates and required packages
RUN apk update && \
    apk add --no-cache \
        dumb-init \
        curl && \
    rm -rf /var/cache/apk/*

# Set working directory
WORKDIR /app

# Copy production dependencies
COPY --from=prod-deps --chown=nextjs:nodejs /app/node_modules ./node_modules

# Copy built application
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
COPY --chown=nextjs:nodejs package*.json ./

# Switch to non-root user
USER nextjs

# Expose port (non-privileged)
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:3000/api/health || exit 1

# Use dumb-init to handle signals properly
ENTRYPOINT ["dumb-init", "--"]

# Start application
CMD ["node", "dist/server.js"]
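Because the builder stage runs COPY . ., everything in the build context is sent to the daemon and can invalidate the layer cache. A .dockerignore keeps local node_modules, build output, and VCS metadata out of the context; the entries below are a minimal sketch of typical exclusions, to be adjusted per project:

```text
# .dockerignore — keep the build context small and the cache stable
node_modules
dist
.git
.env*
*.log
Dockerfile*
docker-compose*.yml
```

Excluding local node_modules and dist is safe here: both are recreated inside the build stages (yarn install, yarn build).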

Docker Compose for Local Development

# docker-compose.dev.yml
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
      target: development
    ports:
      - "3000:3000"
      - "9229:9229" # Debug port
    environment:
      - NODE_ENV=development
      - DEBUG=app:*
      - DATABASE_URL=postgresql://postgres:password@postgres:5432/myapp_dev
      - REDIS_URL=redis://redis:6379
    volumes:
      - .:/app
      - /app/node_modules # Anonymous volume to preserve node_modules
      - app_logs:/app/logs
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - app-network
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: myapp_dev
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./db/init:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/dev.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/ssl/certs:ro
    depends_on:
      - app
    networks:
      - app-network

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=15d'
      - '--web.enable-lifecycle'
    networks:
      - app-network

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana_data:/var/lib/grafana
      - ./monitoring/grafana/dashboards:/var/lib/grafana/dashboards
      - ./monitoring/grafana/provisioning:/etc/grafana/provisioning
    depends_on:
      - prometheus
    networks:
      - app-network

volumes:
  postgres_data:
  redis_data:
  app_logs:
  prometheus_data:
  grafana_data:

networks:
  app-network:
    driver: bridge

Container Security Best Practices

# Dockerfile.secure
FROM node:18-alpine AS base

# Security: Install security updates
RUN apk update && apk upgrade && \
    apk add --no-cache dumb-init && \
    rm -rf /var/cache/apk/*

# Security: Create dedicated user
RUN addgroup -g 1001 -S appgroup && \
    adduser -S appuser -u 1001 -G appgroup

# Security: Set proper permissions on directories
RUN mkdir -p /app && \
    chown -R appuser:appgroup /app

WORKDIR /app

# Install dependencies as root, then switch to user
COPY package*.json ./
RUN npm ci --only=production && \
    npm cache clean --force && \
    chown -R appuser:appgroup /app/node_modules

# Copy application files
COPY --chown=appuser:appgroup . .

# Security: Switch to non-root user
USER appuser

# Security: Use read-only filesystem
# Add this in docker run: --read-only --tmpfs /tmp

# Security: Drop capabilities and add only needed ones
# Add this in docker run: --cap-drop=ALL --cap-add=NET_BIND_SERVICE

# Security: Limit resources
# Add this in docker run: --memory=512m --cpus=0.5

# Use specific, non-privileged port
EXPOSE 8080

# Health check with timeout
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD node healthcheck.js || exit 1

# Security: Use dumb-init as PID 1
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "index.js"]
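The runtime flags mentioned in the comments above can be combined into one invocation. This is an untested sketch (the tag myapp:secure is a placeholder); note that since the app listens on unprivileged port 8080, even NET_BIND_SERVICE can be left out:

```shell
docker run -d \
  --name myapp \
  --read-only \
  --tmpfs /tmp \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --memory=512m \
  --cpus=0.5 \
  -p 8080:8080 \
  myapp:secure
```

With --read-only, any path the app writes to (logs, cache) must be an explicit tmpfs or volume mount, which is why /tmp is declared here.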

⚓ Kubernetes Fundamentals for DevOps

Namespaces and Resource Management

# namespace-setup.yml
apiVersion: v1
kind: Namespace
metadata:
  name: myapp-production
  labels:
    environment: production
    team: backend
    cost-center: engineering

---
# Resource Quotas
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: myapp-production
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    persistentvolumeclaims: "10"
    services: "5"
    secrets: "10"
    configmaps: "10"

---
# Limit Ranges
apiVersion: v1
kind: LimitRange
metadata:
  name: production-limits
  namespace: myapp-production
spec:
  limits:
    - default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      type: Container
    - max:
        cpu: 2
        memory: 4Gi
      min:
        cpu: 50m
        memory: 64Mi
      type: Container

---
# Network Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myapp-network-policy
  namespace: myapp-production
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
        - podSelector:
            matchLabels:
              app: myapp
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: TCP
          port: 53
        - protocol: UDP
          port: 53
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432

Application Deployment

# app-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp-production
  labels:
    app: myapp
    version: v2.0.0
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
        prometheus.io/path: "/metrics"
    spec:
      # Security Context
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        runAsGroup: 1001
        fsGroup: 1001
        seccompProfile:
          type: RuntimeDefault

      # Service Account
      serviceAccountName: myapp-sa

      # Init Containers
      initContainers:
        - name: migration
          image: myapp:v2.0.0
          command: ['python', 'manage.py', 'migrate']
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL

      containers:
        - name: myapp
          image: myapp:v2.0.0
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
            - containerPort: 9090
              name: metrics
              protocol: TCP

          # Resource Management
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"

          # Environment Variables
          env:
            - name: PORT
              value: "8080"
            - name: NODE_ENV
              value: "production"
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: database-url
            - name: REDIS_URL
              valueFrom:
                configMapKeyRef:
                  name: myapp-config
                  key: redis-url

          # Volume Mounts
          volumeMounts:
            - name: config-volume
              mountPath: /app/config
              readOnly: true
            - name: cache-volume
              mountPath: /app/cache
            - name: tmp-volume
              mountPath: /tmp

          # Health Checks
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
            successThreshold: 1

          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 2
            successThreshold: 1

          # Startup Probe (for slow-starting containers)
          startupProbe:
            httpGet:
              path: /health/startup
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 30

          # Security Context
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE

        # Sidecar for logging (Fluent Bit)
        - name: fluent-bit
          image: fluent/fluent-bit:2.0
          volumeMounts:
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc
            - name: app-logs
              mountPath: /var/log/app
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"

      volumes:
        - name: config-volume
          configMap:
            name: myapp-config
        - name: cache-volume
          emptyDir:
            sizeLimit: 1Gi
        - name: tmp-volume
          emptyDir:
            sizeLimit: 100Mi
        - name: fluent-bit-config
          configMap:
            name: fluent-bit-config
        - name: app-logs
          emptyDir: {}

      # Node Selection
      nodeSelector:
        kubernetes.io/arch: amd64
        node-type: application

      # Pod Anti-Affinity (spread across nodes)
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: myapp
                topologyKey: kubernetes.io/hostname

      # Tolerations
      tolerations:
        - key: "application-workload"
          operator: "Equal"
          value: "true"
          effect: "NoSchedule"

---
# Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
  namespace: myapp-production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "100"
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Percent
          value: 50
          periodSeconds: 30
        - type: Pods
          value: 2
          periodSeconds: 60

---
# Pod Disruption Budget
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
  namespace: myapp-production
spec:
  selector:
    matchLabels:
      app: myapp
  maxUnavailable: 1
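Assuming the manifests above are saved to the filenames shown in their header comments, a typical apply-and-verify loop with standard kubectl looks like this (requires a live cluster, so treat it as a sketch):

```shell
# Apply the namespace setup and the application manifests,
# then wait for the rollout to complete
kubectl apply -f namespace-setup.yml -f app-deployment.yml
kubectl -n myapp-production rollout status deployment/myapp

# Inspect autoscaler and disruption-budget state
kubectl -n myapp-production get hpa myapp-hpa
kubectl -n myapp-production get pdb myapp-pdb
```

rollout status blocks until the RollingUpdate finishes or times out, which makes it a convenient gate in CI/CD pipelines.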

🎛️ ConfigMaps and Secrets Management

Advanced ConfigMap Usage

# configmaps.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: myapp-production
data:
  # Simple key-value pairs
  redis-url: "redis://redis-service:6379"
  log-level: "info"
  feature-flags: "new-ui:enabled,api-v2:disabled"

  # Configuration files
  nginx.conf: |
    upstream backend {
        server myapp-service:8080;
    }

    server {
        listen 80;
        server_name _;

        location /health {
            access_log off;
            return 200 "healthy\n";
        }

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

  # Application configuration
  app-config.json: |
    {
      "database": {
        "pool_size": 20,
        "timeout": 30000,
        "retry_attempts": 3
      },
      "cache": {
        "ttl": 3600,
        "max_size": "100MB"
      },
      "monitoring": {
        "metrics_enabled": true,
        "tracing_enabled": true,
        "sample_rate": 0.1
      }
    }

  # Environment-specific settings
  production.env: |
    DEBUG=false
    LOG_LEVEL=warn
    CACHE_TTL=3600
    MAX_CONNECTIONS=100

---
# Secrets management
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secrets
  namespace: myapp-production
type: Opaque
data:
  # Base64 encoded values
  database-url: cG9zdGdyZXNxbDovL3VzZXI6cGFzc3dvcmRAaG9zdDo1NDMyL2RiCg==
  api-key: YWJjMTIzZGVmNDU2Z2hpNzg5Cg==
  jwt-secret: c3VwZXJfc2VjcmV0X2p3dF9rZXlfZm9yX3Byb2R1Y3Rpb24K

---
# External Secrets (using External Secrets Operator)
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: vault-secret
  namespace: myapp-production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: myapp-vault-secrets
    creationPolicy: Owner
  data:
    - secretKey: database-password
      remoteRef:
        key: secret/myapp/production
        property: database_password
    - secretKey: api-keys
      remoteRef:
        key: secret/myapp/production
        property: api_keys

---
# SecretStore configuration for Vault
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
  namespace: myapp-production
spec:
  provider:
    vault:
      server: "https://vault.company.com"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "myapp-role"
          serviceAccountRef:
            name: myapp-sa
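The values under the Secret's data field are plain base64, which is encoding, not encryption: anyone who can read the Secret can decode them. They can be produced and checked from the shell:

```shell
# Encode a connection string for a Secret manifest (printf avoids a
# trailing newline sneaking into the encoded value)
printf '%s' 'postgresql://user:password@host:5432/db' | base64
# -> cG9zdGdyZXNxbDovL3VzZXI6cGFzc3dvcmRAaG9zdDo1NDMyL2Ri

# Decode to verify what a Secret actually contains
echo 'cG9zdGdyZXNxbDovL3VzZXI6cGFzc3dvcmRAaG9zdDo1NDMyL2Ri' | base64 -d
# -> postgresql://user:password@host:5432/db
```

kubectl can also build the Secret directly, e.g. `kubectl create secret generic myapp-secrets --from-literal=database-url=...`, which handles the encoding for you.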

🌐 Service Mesh with Istio

Istio Service Mesh Configuration

# istio-configuration.yml
# VirtualService for traffic management
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myapp-vs
  namespace: myapp-production
spec:
  hosts:
    - myapp.company.com
  gateways:
    - myapp-gateway
  http:
    # Canary deployment rules
    - match:
        - headers:
            x-canary-user:
              exact: "true"
      route:
        - destination:
            host: myapp-service
            subset: v2
          weight: 100

    # A/B testing
    - match:
        - headers:
            user-agent:
              regex: ".*Mobile.*"
      route:
        - destination:
            host: myapp-service
            subset: v2
          weight: 20
        - destination:
            host: myapp-service
            subset: v1
          weight: 80

    # Default routing
    - route:
        - destination:
            host: myapp-service
            subset: v1
          weight: 90
        - destination:
            host: myapp-service
            subset: v2
          weight: 10
      fault:
        delay:
          percentage:
            value: 0.1
          fixedDelay: 100ms
        abort:
          percentage:
            value: 0.01
          httpStatus: 500

---
# DestinationRule for load balancing and circuit breaking
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: myapp-destination
  namespace: myapp-production
spec:
  host: myapp-service
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        http2MaxRequests: 100
        maxRequestsPerConnection: 10
        maxRetries: 3
    outlierDetection:
      consecutiveGatewayErrors: 5
      consecutive5xxErrors: 3
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
      minHealthPercent: 30
  subsets:
    - name: v1
      labels:
        version: v1.9.0
    - name: v2
      labels:
        version: v2.0.0
      trafficPolicy:
        connectionPool:
          tcp:
            maxConnections: 50

---
# ServiceEntry for external services
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api
  namespace: myapp-production
spec:
  hosts:
    - api.external-service.com
  ports:
    - number: 443
      name: https
      protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS

---
# AuthorizationPolicy for security
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: myapp-authz
  namespace: myapp-production
spec:
  selector:
    matchLabels:
      app: myapp
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
      to:
        - operation:
            methods: ["GET", "POST"]
            paths: ["/api/*"]
      when:
        - key: source.ip
          notValues: ["192.168.1.0/24"] # Block specific IP range

---
# PeerAuthentication for mTLS
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: myapp-peer-auth
  namespace: myapp-production
spec:
  selector:
    matchLabels:
      app: myapp
  mtls:
    mode: STRICT

📊 Monitoring and Observability

Prometheus ServiceMonitor

# monitoring.yml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-metrics
  namespace: myapp-production
  labels:
    app: myapp
    release: prometheus
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics
      honorLabels: true
      relabelings:
        - sourceLabels: [__meta_kubernetes_pod_name]
          targetLabel: pod
        - sourceLabels: [__meta_kubernetes_pod_node_name]
          targetLabel: node

---
# PrometheusRule for alerts
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: myapp-alerts
  namespace: myapp-production
spec:
  groups:
    - name: myapp.rules
      rules:
        - alert: MyAppHighErrorRate
          expr: |
            (
              sum(rate(http_requests_total{job="myapp", status=~"5.."}[5m])) /
              sum(rate(http_requests_total{job="myapp"}[5m]))
            ) > 0.05
          for: 2m
          labels:
            severity: critical
            service: myapp
          annotations:
            summary: "High error rate detected for MyApp"
            description: "Error rate is {{ $value | humanizePercentage }} for the last 5 minutes"

        - alert: MyAppHighLatency
          expr: |
            histogram_quantile(0.95,
              sum(rate(http_request_duration_seconds_bucket{job="myapp"}[5m])) by (le)
            ) > 0.5
          for: 5m
          labels:
            severity: warning
            service: myapp
          annotations:
            summary: "High latency detected for MyApp"
            description: "95th percentile latency is {{ $value }}s"

        - alert: MyAppPodCrashLooping
          expr: |
            rate(kube_pod_container_status_restarts_total{pod=~"myapp-.*"}[5m]) > 0
          for: 1m
          labels:
            severity: critical
            service: myapp
          annotations:
            summary: "MyApp pod is crash looping"
            description: "Pod {{ $labels.pod }} is restarting frequently"

---
# Grafana Dashboard ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  dashboard.json: |
    {
      "dashboard": {
        "title": "MyApp Dashboard",
        "panels": [
          {
            "title": "Request Rate",
            "type": "graph",
            "targets": [
              {
                "expr": "sum(rate(http_requests_total{job=\"myapp\"}[5m]))",
                "legendFormat": "Requests/sec"
              }
            ]
          },
          {
            "title": "Error Rate",
            "type": "graph",
            "targets": [
              {
                "expr": "sum(rate(http_requests_total{job=\"myapp\", status=~\"5..\"}[5m])) / sum(rate(http_requests_total{job=\"myapp\"}[5m]))",
                "legendFormat": "Error Rate"
              }
            ]
          },
          {
            "title": "Response Time",
            "type": "graph",
            "targets": [
              {
                "expr": "histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket{job=\"myapp\"}[5m])) by (le))",
                "legendFormat": "95th percentile"
              },
              {
                "expr": "histogram_quantile(0.50, sum(rate(http_request_duration_seconds_bucket{job=\"myapp\"}[5m])) by (le))",
                "legendFormat": "50th percentile"
              }
            ]
          }
        ]
      }
    }

🛠️ DevOps Tools for Kubernetes

Helm Charts

# Chart.yaml
apiVersion: v2
name: myapp
description: A Helm chart for MyApp
type: application
version: 0.1.0
appVersion: "2.0.0"

dependencies:
  - name: postgresql
    version: 12.1.9
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
  - name: redis
    version: 17.3.14
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled

# values.yaml
replicaCount: 3

image:
  repository: myapp
  pullPolicy: IfNotPresent
  tag: "2.0.0"

service:
  type: ClusterIP
  port: 80
  targetPort: 8080

ingress:
  enabled: true
  className: "nginx"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  hosts:
    - host: myapp.company.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: myapp-tls
      hosts:
        - myapp.company.com

resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 250m
    memory: 256Mi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80

postgresql:
  enabled: true
  auth:
    postgresPassword: "secretpassword"
    database: "myapp"

redis:
  enabled: true
  auth:
    enabled: false

monitoring:
  enabled: true
  serviceMonitor:
    enabled: true

security:
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1001
    fsGroup: 1001

  securityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    capabilities:
      drop:
        - ALL
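With Chart.yaml and values.yaml in place, the chart can be rendered and installed. A typical flow, assuming the chart lives in a local ./myapp directory (release name and paths are illustrative):

```shell
# Fetch the declared postgresql/redis dependencies into charts/
helm dependency update ./myapp

# Render the templates locally to review the generated manifests
helm template myapp ./myapp

# Install or upgrade in one idempotent step
helm upgrade --install myapp ./myapp \
  --namespace myapp-production --create-namespace
```

helm upgrade --install is the usual CI/CD entry point because it behaves the same on first deploy and on every subsequent release.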

Kustomize for Environment Management

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml

commonLabels:
  app: myapp

images:
  - name: myapp
    newTag: latest

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: myapp-production

resources:
  - ../../base

patchesStrategicMerge:
  - deployment-patch.yaml
  - service-patch.yaml

replicas:
  - name: myapp
    count: 5

images:
  - name: myapp
    newTag: v2.0.0

configMapGenerator:
  - name: myapp-config
    files:
      - config/production.properties

secretGenerator:
  - name: myapp-secrets
    envs:
      - secrets/production.env
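Once the base and overlay directories exist as laid out above, the overlay can be rendered, diffed, and applied with built-in kubectl support (apply and diff need a live cluster, so this is a sketch):

```shell
# Render the production overlay without applying it
kubectl kustomize overlays/production

# Diff against the live cluster before applying
kubectl diff -k overlays/production

# Apply the overlay directly
kubectl apply -k overlays/production
```

Rendering with kubectl kustomize in CI is a cheap way to catch broken patches before anything reaches the cluster.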

🎯 Practical Exercises

Exercise 1: Container Optimization

  1. Optimize an existing Dockerfile using multi-stage builds
  2. Apply security best practices
  3. Measure and compare image sizes
  4. Configure appropriate health checks

Exercise 2: Kubernetes Deployment

  1. Create a complete deployment with ConfigMaps and Secrets
  2. Configure an HPA and a PDB
  3. Implement network policies
  4. Set up monitoring with Prometheus

Exercise 3: Service Mesh

  1. Deploy Istio on a development cluster
  2. Configure traffic management
  3. Implement circuit breaker patterns
  4. Set up observability with Jaeger

✅ Container & Orchestration Checklist

  • Images optimized with multi-stage builds
  • Security contexts configured appropriately
  • Resource limits and requests defined
  • Health checks implemented correctly
  • Secrets management configured
  • Monitoring and logging in place
  • Network policies configured
  • Backup and disaster recovery planned
  • CI/CD pipeline integrated
  • Documentation complete


💡 Remember: Containers are the basic unit of modern deployment. Mastering their orchestration is essential for scalability and reliability.