
Security in DevOps (DevSecOps) 🔒

Why DevSecOps and not just DevOps?

Security cannot be an afterthought. In the fast-moving world of DevOps, integrating security from the start is essential to avoid costly vulnerabilities and to meet regulatory requirements.

🎯 Chapter Objectives

  • Integrate security into every phase of the pipeline
  • Implement "Security as Code"
  • Automate security testing
  • Manage secrets and compliance
  • Build a security-first culture

🛡️ DevSecOps Principles

Shift-Left Security

pipeline_security_gates:
  pre_commit:
    - secrets_detection
    - code_quality_check
    - dependency_vulnerability_scan

  build_stage:
    - static_analysis_security_testing  # SAST
    - software_composition_analysis     # SCA
    - container_image_scanning

  test_stage:
    - dynamic_analysis_security_testing # DAST
    - interactive_security_testing      # IAST
    - infrastructure_security_testing

  deploy_stage:
    - runtime_security_configuration
    - compliance_validation
    - penetration_testing_automation
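
As a concrete illustration of the `secrets_detection` gate in `pre_commit`, here is a minimal sketch of a git pre-commit hook written in Python that blocks a commit when staged changes contain secret-looking strings. The patterns and the blocking behavior are illustrative; dedicated scanners such as gitleaks or detect-secrets cover far more cases.

#!/usr/bin/env python3
# Illustrative pre-commit hook: scan staged changes for secret-looking strings.
# Install by copying to .git/hooks/pre-commit and making it executable.
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r'AKIA[0-9A-Z]{16}'),                                   # AWS access key id
    re.compile(r'-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----'),       # private keys
    re.compile(r'(?i)(password|secret|api_key)\s*=\s*["\'][^"\']{8,}["\']'),
]

def staged_diff() -> str:
    """Return the diff of changes staged for commit."""
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> int:
    findings = []
    for line in staged_diff().splitlines():
        if not line.startswith("+"):   # only inspect added lines
            continue
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(line.strip())
    if findings:
        print("Possible secrets detected in staged changes:")
        for finding in findings:
            print(f"  {finding[:80]}")
        return 1  # non-zero exit blocks the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())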

🔍 Security Testing Automation

1. Static Analysis Security Testing (SAST)

With SonarQube

# .github/workflows/security.yml
name: Security Pipeline

on: [push, pull_request]

jobs:
  sast_scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: SonarQube Scan
        uses: sonarsource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
        with:
          projectBaseDir: .
          args: >
            -Dsonar.projectKey=my-project
            -Dsonar.sources=src/
            -Dsonar.tests=tests/
            -Dsonar.coverage.exclusions=tests/**
            -Dsonar.security.hotspots.maxSize=1000

  semgrep_scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Semgrep Security Scan
        uses: returntocorp/semgrep-action@v1
        with:
          config: >-
            p/security-audit
            p/secrets
            p/owasp-top-ten
        env:
          SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}

Custom Semgrep configuration

# .semgrep.yml
rules:
  - id: hardcoded-password
    pattern-either:
      - pattern: password = "..."
      - pattern: PASSWORD = "..."
    message: "Hardcoded password detected"
    languages: [python, javascript, java]
    severity: ERROR

  - id: sql-injection-risk
    pattern-either:
      - pattern: cursor.execute("... " + $VAR + " ...")
      - pattern: |
          query = "... " + $VAR + " ..."
          cursor.execute(query)
    message: "Potential SQL injection vulnerability"
    languages: [python]
    severity: WARNING

  - id: weak-crypto
    pattern-either:
      - pattern: hashlib.md5(...)
      - pattern: hashlib.sha1(...)
    message: "Weak cryptographic hash function"
    languages: [python]
    severity: ERROR
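
As a quick sanity check of the custom rules, running `semgrep --config .semgrep.yml .` over a file like the following should report the weak-crypto and hardcoded-password findings (the snippet is purely illustrative):

# semgrep_demo.py - code the rules above would flag, next to compliant versions
import hashlib
import os

# ❌ flagged by "weak-crypto": MD5 is not collision resistant
weak_digest = hashlib.md5(b"user-input").hexdigest()

# ✅ compliant alternative
strong_digest = hashlib.sha256(b"user-input").hexdigest()

# ❌ flagged by "hardcoded-password"
password = "super-secret-123"

# ✅ compliant alternative: read it from the environment (see the secrets section below)
password = os.getenv("APP_PASSWORD")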

2. Dependency Vulnerability Scanning

With Snyk

# snyk-security.yml
name: Snyk Security Check

on: [push]

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Install dependencies
        run: npm install

      - name: Run Snyk to check for vulnerabilities
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          args: --severity-threshold=high

      - name: Monitor project with Snyk
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          command: monitor

Automated Dependency Updates

# .github/workflows/dependency-update.yml
name: Automated Dependency Updates

on:
  schedule:
    - cron: '0 9 * * MON'  # Every Monday at 9 AM

jobs:
  update_dependencies:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Update npm dependencies
        run: |
          npm update
          npm audit fix

      - name: Security audit
        run: |
          npm audit --audit-level=high

      - name: Create Pull Request
        uses: peter-evans/create-pull-request@v4
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          commit-message: 'chore: update dependencies'
          title: 'Automated dependency updates'
          body: |
            ## Automated Dependency Updates

            This PR contains automated dependency updates.

            ### Security Changes
            - Updated packages with known vulnerabilities
            - Applied security patches

            Please review the changes before merging.

3. Container Security

Multi-stage Dockerfile with Security Best Practices

# Use an official, pinned base image
FROM node:18.17.0-alpine3.18 AS builder

# Create a non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

# Install dependencies first (cache layer)
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force

# Copy source code
COPY . .
RUN chown -R nextjs:nodejs /app

# Build the application
USER nextjs
RUN npm run build

# Production image
FROM node:18.17.0-alpine3.18 AS runtime

# Install dumb-init for signal handling
RUN apk add --no-cache dumb-init

# Create a non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

# Copy only what is needed
WORKDIR /app
COPY --from=builder --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
COPY --from=builder --chown=nextjs:nodejs /app/package.json ./

# Security configuration
USER nextjs
EXPOSE 3000

# Health check (busybox wget: curl is not included in the alpine base image)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD wget -q --spider http://localhost:3000/health || exit 1

# Use dumb-init as PID 1
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]

Container Scanning with Trivy

# container-security.yml
name: Container Security Scan

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  container_scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'myapp:${{ github.sha }}'
          format: 'sarif'
          output: 'trivy-results.sarif'

      - name: Upload Trivy scan results to GitHub Security
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: 'trivy-results.sarif'

      - name: Fail on HIGH/CRITICAL vulnerabilities
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'myapp:${{ github.sha }}'
          format: 'table'
          exit-code: '1'
          severity: 'HIGH,CRITICAL'

🔑 Secrets Management

1. Never Hardcode Secrets

# ❌ BAD - hardcoded secrets
DATABASE_URL = "postgresql://admin:password123@db.company.com:5432/prod"
API_KEY = "sk_live_abc123def456ghi789"

# ✅ GOOD - use environment variables
import os
from functools import lru_cache

@lru_cache()
def get_settings():
    return {
        'database_url': os.getenv('DATABASE_URL'),
        'api_key': os.getenv('API_KEY'),
        'secret_key': os.getenv('SECRET_KEY')
    }

# Validate that they all exist
settings = get_settings()
if not all(settings.values()):
    raise ValueError("Missing required environment variables")

2. HashiCorp Vault Integration

import hvac
import os

class VaultManager:
    def __init__(self):
        self.client = hvac.Client(
            url=os.getenv('VAULT_URL', 'http://localhost:8200')
        )

        # Token authentication
        token = os.getenv('VAULT_TOKEN')
        if token:
            self.client.token = token
        else:
            # AppRole authentication
            self._authenticate_approle()

    def _authenticate_approle(self):
        role_id = os.getenv('VAULT_ROLE_ID')
        secret_id = os.getenv('VAULT_SECRET_ID')

        response = self.client.auth.approle.login(
            role_id=role_id,
            secret_id=secret_id
        )
        self.client.token = response['auth']['client_token']

    def get_secret(self, path):
        """Retrieve a secret from Vault"""
        try:
            response = self.client.secrets.kv.v2.read_secret_version(
                path=path
            )
            return response['data']['data']
        except Exception as e:
            raise Exception(f"Error retrieving secret {path}: {e}")

    def create_secret(self, path, secret_dict):
        """Create/update a secret in Vault"""
        try:
            self.client.secrets.kv.v2.create_or_update_secret(
                path=path,
                secret=secret_dict
            )
        except Exception as e:
            raise Exception(f"Error creating secret {path}: {e}")

# Usage
vault = VaultManager()
db_secrets = vault.get_secret('myapp/database')
DATABASE_URL = db_secrets['url']
DB_PASSWORD = db_secrets['password']
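
Building on VaultManager, automatic rotation can be as simple as writing a new random value to the same KV path; KV v2 keeps previous versions, so a bad rotation can be rolled back. This is a hedged sketch: the `myapp/database` path comes from the example above, while actually applying the new password to the database itself is left as a hypothetical helper.

import secrets
import string

def rotate_database_password(vault, path="myapp/database"):
    """Generate a new random password and store it under the existing secret path."""
    alphabet = string.ascii_letters + string.digits
    new_password = "".join(secrets.choice(alphabet) for _ in range(32))

    data = vault.get_secret(path)        # keep the other keys (url, user, ...)
    data["password"] = new_password
    vault.create_secret(path, data)      # KV v2 versioning keeps the old value

    # apply_password_to_database(data["url"], new_password)  # hypothetical helper
    return new_password

# Usage
# rotate_database_password(vault)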

3. Kubernetes Secrets with External Secrets Operator

# external-secret.yml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
  namespace: production
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-secret-store
    kind: SecretStore
  target:
    name: app-secrets
    creationPolicy: Owner
  data:
    - secretKey: database-password
      remoteRef:
        key: myapp/database
        property: password
    - secretKey: api-key
      remoteRef:
        key: myapp/external-api
        property: key

---
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-secret-store
  namespace: production
spec:
  provider:
    vault:
      server: "https://vault.company.com"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "myapp-role"

🏰 Infrastructure Security

1. Security as Code with Terraform

# security-groups.tf
resource "aws_security_group" "web_server" {
  name_prefix = "web-server-"
  vpc_id      = var.vpc_id

  # HTTP/HTTPS from the ALB only
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  # SSH from the bastion host only
  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = [aws_security_group.bastion.id]
  }

  # Outbound - only what is needed
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "HTTPS outbound"
  }

  egress {
    from_port   = 53
    to_port     = 53
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
    description = "DNS resolution"
  }

  tags = {
    Name        = "web-server-sg"
    Environment = var.environment
  }
}

# S3 bucket with encryption and restricted access
resource "aws_s3_bucket" "app_data" {
  bucket = "myapp-data-${var.environment}-${random_string.bucket_suffix.result}"
}

resource "aws_s3_bucket_server_side_encryption_configuration" "app_data" {
  bucket = aws_s3_bucket.app_data.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.app_data.arn
      sse_algorithm     = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "app_data" {
  bucket = aws_s3_bucket.app_data.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# KMS key for encryption
resource "aws_kms_key" "app_data" {
  description             = "KMS key for app data encryption"
  deletion_window_in_days = 7

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "Enable root permissions"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
        }
        Action   = "kms:*"
        Resource = "*"
      },
      {
        Sid    = "Allow app access"
        Effect = "Allow"
        Principal = {
          AWS = aws_iam_role.app_role.arn
        }
        Action = [
          "kms:Decrypt",
          "kms:GenerateDataKey"
        ]
        Resource = "*"
      }
    ]
  })
}

2. Policy as Code with Open Policy Agent (OPA)

# kubernetes-security-policies.rego
package kubernetes.admission

# Deny pods without security context
deny[msg] {
    input.request.kind.kind == "Pod"
    not input.request.object.spec.securityContext
    msg := "Pod must have securityContext defined"
}

# Require non-root user
deny[msg] {
    input.request.kind.kind == "Pod"
    input.request.object.spec.securityContext.runAsUser == 0
    msg := "Pod cannot run as root user"
}

# Deny privileged containers
deny[msg] {
    input.request.kind.kind == "Pod"
    input.request.object.spec.containers[_].securityContext.privileged == true
    msg := "Privileged containers are not allowed"
}

# Require resource limits
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    not container.resources.limits.memory
    msg := sprintf("Container %v must have memory limits", [container.name])
}

# Deny latest tag
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    endswith(container.image, ":latest")
    msg := sprintf("Container %v cannot use 'latest' tag", [container.name])
}

# Require image scanning (containers have no annotations field,
# so the marker is checked on the Pod's metadata)
deny[msg] {
    input.request.kind.kind == "Pod"
    not input.request.object.metadata.annotations["security.scan.passed"]
    msg := "Pod must be annotated with security.scan.passed"
}

🔒 Runtime Security

1. Falco for Anomaly Detection

# falco-rules.yml
- rule: Detect crypto mining
  desc: Detect cryptocurrency mining
  condition: >
    spawned_process and
    (proc.name in (mining_binaries) or
     proc.cmdline contains mining_cmdlines)
  output: >
    Crypto mining detected (user=%user.name command=%proc.cmdline
    container=%container.name image=%container.image.repository)
  priority: CRITICAL
  tags: [process, mitre_execution]

- rule: Unexpected network connection
  desc: Detect unexpected outbound connections
  condition: >
    outbound and
    not fd.sport in (80, 443, 53, 22) and
    not proc.name in (allowed_processes)
  output: >
    Unexpected network connection (user=%user.name command=%proc.cmdline
    connection=%fd.name container=%container.name)
  priority: WARNING
  tags: [network]

- rule: File integrity violation
  desc: Detect modifications to critical files
  condition: >
    open_write and
    fd.name in (/etc/passwd, /etc/shadow, /etc/hosts) and
    not proc.name in (allowed_editors)
  output: >
    Critical file modified (file=%fd.name user=%user.name
    command=%proc.cmdline container=%container.name)
  priority: CRITICAL
  tags: [filesystem]

2. Granular Kubernetes RBAC

# rbac.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: app-developer
rules:
  # Pods - read/create/update, no delete
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "create", "update", "patch"]

  # Logs - read only
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list"]

  # ConfigMaps - read only
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list"]

  # Secrets - forbidden: RBAC is deny-by-default, so simply omitting
  # any rule for "secrets" means this role grants no access to them

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-developer-binding
  namespace: production
subjects:
  - kind: User
    name: developer@company.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-developer
  apiGroup: rbac.authorization.k8s.io

---
# ClusterRole for limited cluster-wide operations
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-viewer
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
  - apiGroups: ["metrics.k8s.io"]
    resources: ["nodes", "pods"]
    verbs: ["get", "list"]

📋 Compliance Automation

1. CIS Benchmarks Automation

#!/bin/bash
# cis-kubernetes-benchmark.sh

# CIS 1.1.1 Ensure API server is not accessible
echo "Checking API server accessibility..."
if kubectl get pods --namespace=kube-system | grep -q "kube-apiserver"; then
    echo "✅ API server is running in kube-system namespace"
else
    echo "❌ API server configuration issue"
fi

# CIS 1.2.1 Ensure anonymous-auth is set to false
echo "Checking anonymous authentication..."
if kubectl get pod -n kube-system -l component=kube-apiserver -o yaml | grep -q "anonymous-auth=false"; then
    echo "✅ Anonymous authentication is disabled"
else
    echo "❌ Anonymous authentication should be disabled"
fi

# CIS 1.3.1 Ensure controller manager secure port
echo "Checking controller manager..."
if kubectl get pod -n kube-system -l component=kube-controller-manager -o yaml | grep -q "secure-port=10257"; then
    echo "✅ Controller manager secure port configured"
else
    echo "❌ Controller manager secure port not configured"
fi

# Generate report
echo "CIS Kubernetes Benchmark Report - $(date)" > cis-report.txt
echo "=================================" >> cis-report.txt
# ... rest of checks

2. GDPR Compliance Monitoring

#!/usr/bin/env python3
import re
import json
from datetime import datetime

class GDPRComplianceMonitor:
    def __init__(self):
        self.pii_patterns = [
            r'\b\d{3}-\d{2}-\d{4}\b',                                  # SSN
            r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b',     # Email
            r'\b\d{4}[-\s]?\d{4}[-\s]?\d{4}[-\s]?\d{4}\b',             # Credit card
            r'\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b'                  # IP address
        ]

    def scan_logs_for_pii(self, log_file):
        """Scan logs for exposed PII"""
        pii_violations = []

        with open(log_file, 'r') as f:
            for line_num, line in enumerate(f, 1):
                for pattern in self.pii_patterns:
                    if re.search(pattern, line):
                        pii_violations.append({
                            'line': line_num,
                            'pattern': pattern,
                            'content': line.strip()[:100]  # Truncated
                        })

        return pii_violations

    def check_data_retention(self, database_connection):
        """Check data retention policies"""
        # Example: look for data older than 7 years
        old_data_query = """
            SELECT table_name, COUNT(*) as old_records
            FROM information_schema.tables t
            JOIN user_data ud ON t.table_name = ud.table_name
            WHERE ud.created_at < NOW() - INTERVAL '7 years'
            GROUP BY table_name
        """

        # Run the query and report
        # ...

    def verify_consent_tracking(self):
        """Verify consent tracking"""
        # Check that a consent table exists
        # Check that consent is recorded
        # Check that consent can be revoked
        pass

    def generate_compliance_report(self):
        """Generate a GDPR compliance report"""
        report = {
            'timestamp': datetime.now().isoformat(),
            'pii_violations': self.scan_logs_for_pii('/var/log/app.log'),
            'data_retention_issues': [],
            'consent_tracking_status': 'compliant',
            'recommendations': []
        }

        # Add recommendations based on findings
        if report['pii_violations']:
            report['recommendations'].append(
                "Implement log sanitization to remove PII"
            )

        return report

if __name__ == "__main__":
    monitor = GDPRComplianceMonitor()
    report = monitor.generate_compliance_report()

    with open(f'gdpr-report-{datetime.now().strftime("%Y%m%d")}.json', 'w') as f:
        json.dump(report, f, indent=2)
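
When the scan finds violations, the report recommends log sanitization. Here is a minimal sketch of that recommendation, reusing the same PII patterns to redact lines before they are written or shipped; the `[REDACTED]` marker and function name are illustrative.

# log_sanitizer.py - redact PII before log lines reach disk or a log shipper
import re

PII_PATTERNS = [
    re.compile(r'\b\d{3}-\d{2}-\d{4}\b'),                                  # SSN
    re.compile(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b'),     # Email
    re.compile(r'\b\d{4}[-\s]?\d{4}[-\s]?\d{4}[-\s]?\d{4}\b'),             # Credit card
    re.compile(r'\b\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\b'),                 # IP address
]

def sanitize(line: str) -> str:
    """Replace anything that looks like PII with a redaction marker."""
    for pattern in PII_PATTERNS:
        line = pattern.sub("[REDACTED]", line)
    return line

# Usage: wrap it around whatever writes the log, e.g.
# logger.info(sanitize(f"user {email} logged in from {ip}"))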

🚨 Incident Response Automation

1. Security Incident Pipeline

# .github/workflows/security-incident.yml
name: Security Incident Response

on:
  repository_dispatch:
    types: [security-incident]

jobs:
  immediate_response:
    runs-on: ubuntu-latest
    steps:
      - name: Alert Security Team
        uses: 8398a7/action-slack@v3
        with:
          status: custom
          custom_payload: |
            {
              "channel": "#security-alerts",
              "username": "Security Bot",
              "text": "🚨 SECURITY INCIDENT DETECTED",
              "attachments": [
                {
                  "color": "danger",
                  "fields": [
                    {
                      "title": "Incident Type",
                      "value": "${{ github.event.client_payload.incident_type }}",
                      "short": true
                    },
                    {
                      "title": "Severity",
                      "value": "${{ github.event.client_payload.severity }}",
                      "short": true
                    },
                    {
                      "title": "Affected System",
                      "value": "${{ github.event.client_payload.system }}",
                      "short": false
                    }
                  ]
                }
              ]
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}

      - name: Create Incident Issue
        uses: actions/github-script@v6
        with:
          script: |
            const incident = context.payload.client_payload;

            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: `Security Incident: ${incident.incident_type}`,
              body: `
            ## Security Incident Report

            **Severity:** ${incident.severity}
            **Type:** ${incident.incident_type}
            **Affected System:** ${incident.system}
            **Detection Time:** ${new Date().toISOString()}

            ## Immediate Actions Required
            - [ ] Assess impact and scope
            - [ ] Contain the incident
            - [ ] Preserve evidence
            - [ ] Notify stakeholders

            ## Investigation Tasks
            - [ ] Root cause analysis
            - [ ] Timeline reconstruction
            - [ ] Evidence collection
            - [ ] Damage assessment

            ## Resolution Tasks
            - [ ] Apply fixes
            - [ ] Verify resolution
            - [ ] Update security measures
            - [ ] Post-mortem review
              `,
              labels: ['security', 'incident', `severity-${incident.severity}`],
              assignees: ['security-team-lead']
            });

  automated_containment:
    runs-on: ubuntu-latest
    if: github.event.client_payload.severity == 'critical'
    steps:
      - name: Emergency Shutdown
        run: |
          # Scale down compromised services
          kubectl scale deployment suspicious-app --replicas=0

      - name: Isolate Network
        run: |
          # Apply restrictive network policies
          kubectl apply -f emergency-network-policy.yml

      - name: Backup Evidence
        run: |
          # Create snapshots for the investigation
          ./scripts/create-evidence-snapshot.sh

🎯 Hands-On Exercises

Exercise 1: Complete Security Pipeline

  1. Set up SAST with SonarQube or Semgrep
  2. Add dependency scanning with Snyk
  3. Implement container scanning with Trivy
  4. Create security gates that block deploys

Exercise 2: Secrets Management

  1. Set up HashiCorp Vault
  2. Integrate it with Kubernetes using External Secrets
  3. Implement automatic secret rotation
  4. Create alerts for expired secrets (see the sketch after this list)
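
A hedged starting point for item 4: flag secrets whose last write in Vault is older than a rotation window and post them to Slack. The secret paths, the 90-day window, and the SLACK_WEBHOOK_URL variable are assumptions for illustration; the metadata comes from the KV v2 metadata endpoint exposed by hvac.

# stale_secrets_alert.py - warn about secrets past their rotation window
import os
from datetime import datetime, timedelta, timezone

import hvac
import requests

MAX_AGE = timedelta(days=90)
SECRET_PATHS = ["myapp/database", "myapp/external-api"]  # paths used earlier in this chapter

def find_stale_secrets(client):
    stale = []
    for path in SECRET_PATHS:
        meta = client.secrets.kv.v2.read_secret_metadata(path=path)["data"]
        # Vault returns RFC3339 timestamps; keep only second precision for parsing
        updated = datetime.strptime(meta["updated_time"][:19], "%Y-%m-%dT%H:%M:%S")
        updated = updated.replace(tzinfo=timezone.utc)
        if datetime.now(timezone.utc) - updated > MAX_AGE:
            stale.append(path)
    return stale

if __name__ == "__main__":
    client = hvac.Client(url=os.getenv("VAULT_URL"), token=os.getenv("VAULT_TOKEN"))
    stale = find_stale_secrets(client)
    if stale:
        requests.post(
            os.environ["SLACK_WEBHOOK_URL"],
            json={"text": f"⚠️ Secrets past their rotation window: {', '.join(stale)}"},
            timeout=10,
        )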

Exercise 3: Runtime Security

  1. Deploy Falco on Kubernetes
  2. Configure custom rules
  3. Integrate alerts with Slack/PagerDuty (see the sketch after this list)
  4. Build security dashboards in Grafana
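
For item 3, one lightweight option is to point Falco's HTTP output (`http_output` in falco.yaml, with `json_output: true`) at a small webhook that forwards high-priority events to Slack. This is a hedged sketch: the port, the priority filter, and the SLACK_WEBHOOK_URL variable are assumptions.

# falco_slack_webhook.py - receive Falco JSON events and forward critical ones to Slack
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

class FalcoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        event = json.loads(body)
        # Forward only high-severity events; Falco's JSON output includes
        # "rule", "priority" and "output" fields
        if event.get("priority") in ("Critical", "Emergency"):
            requests.post(
                SLACK_WEBHOOK_URL,
                json={"text": f"🚨 Falco: {event.get('rule')} — {event.get('output')}"},
                timeout=10,
            )
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Port is arbitrary; match it to the url configured in falco.yaml's http_output
    HTTPServer(("0.0.0.0", 2801), FalcoHandler).serve_forever()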

✅ Security Checklist

  • SAST/DAST integrated into CI/CD
  • Automated dependency scanning
  • Container images scanned before deploy
  • Secrets never stored in source code
  • Granular RBAC in place
  • Network policies configured
  • Runtime security monitoring active
  • Automated incident response plan
  • Automated compliance checks
  • Security training for the team

💡 Remember: security is not a checkpoint; it is a continuous process built into every phase of development and operations.