Container Security: Locking Down the Hull Before Deployment
Let's be honest: default container configurations are a security nightmare. As someone who has spent the last decade debugging production clusters from Oslo to Frankfurt, I’ve seen the same story play out repeatedly. A developer pulls node:latest, runs it as root, mounts the host socket, and then wonders why their entire infrastructure got crypto-jacked three weeks later.
By March 2024, the "containers are lightweight VMs" myth should be dead. They aren't. They are processes lying to themselves about how much access they have. The recent CVE-2024-21626 (Leaky Vessels) vulnerability in runc was a harsh reminder: if your isolation layer relies solely on software namespaces, you are one exploit away from a host takeover.
This isn't about buying expensive security suites. It's about configuring the primitives that Linux already gives you, and understanding where your software ends and the infrastructure begins.
1. The Infrastructure: Shared Kernels vs. Hard Isolation
The biggest risk in containerization is the shared kernel model. In a standard container environment, every pod talks to the same host kernel. If a bad actor triggers a kernel panic or finds an exploitable kernel bug, they don't just crash their container; they crash the node. Or worse, they escape.
This is why the underlying VPS architecture matters. At CoolVDS, we specifically rely on KVM (Kernel-based Virtual Machine) rather than container-based virtualization (like LXC/OpenVZ) for our instances. When you run Docker on a CoolVDS instance, you are running on your own dedicated kernel. If you mess up, you only break your sandbox, not the neighbor's.
Pro Tip: Never run production containers on "Shared Container" hosting plans. You have no control over kernel modules or sysctl flags. Always opt for a KVM-based VPS where you control the kernel itself.
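Not sure what your current host actually gives you? `systemd-detect-virt` reports the hypervisor (or container runtime) underneath you. Here is a minimal sketch; the `classify_virt` helper and its categories are illustrative, not exhaustive:

```shell
#!/bin/sh
# Classify the virtualization type reported by systemd-detect-virt.
# "kvm"/"qemu" mean you have a dedicated kernel; container types mean a shared one.
classify_virt() {
  case "$1" in
    kvm|qemu|vmware|xen) echo "full-vm" ;;
    lxc|lxc-libvirt|openvz|docker|podman) echo "shared-kernel" ;;
    none) echo "bare-metal" ;;
    *) echo "unknown" ;;
  esac
}

# On a live host, feed it the real value (falls back gracefully if the tool is absent):
classify_virt "$(systemd-detect-virt 2>/dev/null || echo unknown)"
```

If this prints `shared-kernel`, every sysctl and kernel module decision has already been made for you by someone else.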
2. The Supply Chain: Stop Trusting :latest
Using :latest isn't just lazy; it's negligent. You have no guarantee that the image you pulled today is the same one you pulled yesterday. Supply chain attacks often target widely used base images.
Use specific SHA256 digests or immutable tags. Furthermore, strip your images down. If your production container has curl or wget installed, you are giving an attacker the tools they need to download their payload after they break in.
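Pinning by digest looks like this in practice. The digest below is a placeholder, not a real value — resolve the actual one from an image you have already pulled and verified:

```dockerfile
# Resolve the current digest for a tag you have pulled and trust:
#   docker inspect --format '{{index .RepoDigests 0}}' node:20-alpine
#
# Then reference it immutably. The tag becomes a human-readable hint;
# the digest is what the runtime actually verifies.
FROM node:20-alpine@sha256:<digest-resolved-above>
```

If the upstream image is ever re-tagged or tampered with, a digest-pinned build fails loudly instead of silently pulling something new.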
The Multi-Stage Build Pattern
Here is how we build Go applications for our internal monitoring tools. Notice the transition to a distroless image.
```dockerfile
# Build Stage
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY . .
# CGO_ENABLED=0 produces a fully static binary. This matters: the
# distroless/static base image ships no libc for a dynamic binary to link against.
RUN CGO_ENABLED=0 go build -o main .

# Production Stage
# Google's distroless images contain ONLY the application and runtime dependencies.
# No shell. No package manager.
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/main /
CMD ["/main"]
```

If an attacker gets into this container, there is no shell to run commands. They are trapped in a binary void.
3. Runtime Security: Drop Those Capabilities
By default, Docker grants a container a broad set of Linux capabilities. You likely don't need NET_RAW (unless you are pinging things) or SYS_CHROOT. The golden rule is: deny all, permit necessary.
When defining your Kubernetes workloads or Docker Compose files, you must explicitly drop capabilities. We also enforce readOnlyRootFilesystem wherever possible. If an attacker can't write to disk, they can't persist malware.
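In Docker Compose terms, that deny-by-default posture looks roughly like the fragment below. The service name and image are illustrative; adjust the added capabilities to what your workload genuinely needs:

```yaml
services:
  web:
    image: nginx:1.25-alpine
    read_only: true           # Compose equivalent of readOnlyRootFilesystem
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE      # only needed if you bind to a port below 1024
    security_opt:
      - no-new-privileges:true
    tmpfs:
      - /tmp                  # writable scratch space on an otherwise read-only FS
```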
Hardening a Kubernetes Deployment
Here is a snippet from a standard deployment manifest we use for web services:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-nginx
spec:
  selector:
    matchLabels:
      app: secure-nginx
  template:
    metadata:
      labels:
        app: secure-nginx
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        runAsGroup: 10001
        fsGroup: 10001
      containers:
        - name: nginx
          image: nginx:1.25-alpine
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          volumeMounts:
            - name: tmp
              mountPath: /tmp
      volumes:
        - name: tmp
          emptyDir: {}
```

Note the use of emptyDir for /tmp. Nginx needs to write temporary files, but the root filesystem remains read-only. This configuration breaks 90% of automated exploit scripts.
4. Network Policies: Zero Trust Inside the Cluster
If your frontend is compromised, can it talk to your database? In a default cluster, the answer is yes. Kubernetes ships with a flat network where every pod can reach every other pod.
We need to implement a default-deny policy. You explicitly whitelist traffic. This is crucial for GDPR compliance—Datatilsynet looks favorably on architectures that minimize data exposure radius.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

Once applied, nothing moves. You then layer specific allow rules on top.
5. The Norwegian Context: Latency and Law
Security isn't just about hackers; it's about lawyers. With the strict enforcement of Schrems II, moving data outside the EEA is a compliance headache. Hosting your container cluster on US-owned hyperscalers adds a layer of legal risk regarding data sovereignty.
Running your container infrastructure on CoolVDS ensures your data stays in Europe, under European jurisdiction. Beyond compliance, there is physics. If your users are in Oslo or Bergen, routing traffic through a data center in Frankfurt or Amsterdam adds unnecessary milliseconds. Our direct peering at NIX (Norwegian Internet Exchange) keeps latency negligible. For high-frequency trading bots or real-time gaming backends, those 15ms matter.
6. Continuous Scanning
Security is a process, not a destination. You need to scan your running images against fresh CVE databases daily. Tools like Trivy are excellent for this.
Don't just scan the registry. Scan the running environment:
```bash
# Scan the running cluster for known vulnerabilities
trivy k8s --report summary cluster
```

This will return a grim list of everything outdated in your cluster. Fix the Criticals immediately. Triage the Highs.
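To make scanning continuous rather than occasional, wire Trivy into CI so builds fail on critical findings. A sketch using the official trivy-action — the image name and severity gate are assumptions for illustration:

```yaml
# .github/workflows/scan.yml (fragment)
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master   # pin to a release tag in real pipelines
  with:
    image-ref: registry.example.com/app:1.4.2
    severity: CRITICAL,HIGH
    exit-code: '1'   # non-zero exit fails the build when findings match
```

A vulnerable image that never reaches the registry is one you never have to hunt down in production.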
Conclusion
Container security requires a shift in mindset. You stop trusting the defaults and start explicitly defining what is allowed. You move from "it works" to "it is contained."
But software hardening implies you trust the hardware it runs on. A secure container on a noisy, oversold host is still a performance risk and a side-channel attack vector. Infrastructure is the bedrock.
Ready to run your hardened clusters on bare-metal performance? Deploy a CoolVDS NVMe instance today and experience the stability of true KVM isolation.