Surviving Schrems II: Building a GDPR-Compliant APM Stack on Norwegian Soil
It is 3:00 AM. Your pager is screaming. The API is throwing 504 Gateway Timeouts, and you have absolutely no idea why because your SaaS monitoring dashboard is lagging by five minutes. To make matters worse, your legal team just sent a memo regarding the CJEU's Schrems II ruling from July: "Stop sending user IP addresses and metadata to US-based cloud providers immediately."
If you are running infrastructure in Europe—and specifically here in Norway—the game changed this summer. Relying on US-hosted APM (Application Performance Monitoring) tools is now a compliance minefield. The Datatilsynet (Norwegian Data Protection Authority) is clear: data sovereignty matters.
But let's put the legal headaches aside for a moment. Let's talk about raw engineering. The most reliable monitoring stack is the one you control, the one that sits milliseconds away from your application servers, and the one that doesn't cost $25 per host per month just to tell you your CPU usage is high.
Today, we are building a production-grade, self-hosted monitoring stack using Prometheus and Grafana on Ubuntu 20.04 LTS. We will host this in Oslo to ensure minimal latency and strict GDPR compliance.
The Hardware Bottleneck: Why Your Monitor Dies First
Before we touch a single config file, we need to address the elephant in the server room: IOPS.
Time-series databases (TSDBs) like Prometheus are disk destroyers. They write thousands of data points per second. If you deploy this on a budget VPS with spinning rust (HDD) or throttled SATA SSDs, your monitoring will fail exactly when you need it most—during a traffic spike. I've seen Prometheus instances freeze because the underlying storage couldn't handle the write load during a DDoS attack.
Pro Tip: Never skimp on storage I/O for monitoring nodes. We use CoolVDS NVMe instances because they expose raw NVMe performance via KVM, rather than the emulated I/O often found in budget containers. If iowait exceeds 5% on your monitoring box, your data is already stale.
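To keep an eye on that number, iostat from the sysstat package shows per-device utilisation and the CPU iowait figure in one view. A quick sketch, assuming a stock Ubuntu 20.04 box where sysstat is not yet installed:

sudo apt install -y sysstat
# Extended device stats plus a CPU breakdown, refreshed every 5 seconds
iostat -x 5
# Watch %iowait in the avg-cpu block; sustained values above ~5% mean the disk is the bottleneck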
Benchmarking Your Disk
Before installing anything, verify your host can handle the ingestion rate. Run this fio test on your VPS:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixwrite=75
On a standard CoolVDS NVMe instance, you should see IOPS well into the tens of thousands. If you see anything under 2,000 IOPS, abort. You cannot host a serious TSDB there.
The Architecture: Prometheus + Grafana + Node Exporter
We will use Docker (version 19.03+) and Docker Compose. It is September 2020; if you are still installing these binaries manually in /usr/bin, you are making future upgrades a nightmare.
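If the host is fresh, one quick way to get both is straight from the stock Ubuntu 20.04 repositories, which currently ship Docker 19.03.x and docker-compose 1.25.x (point apt at Docker's own repository instead if you want to track upstream releases):

sudo apt update
sudo apt install -y docker.io docker-compose
sudo systemctl enable --now docker
# Optional: let your user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER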
1. The Composition
Create a directory structure: /opt/monitoring. Inside, create a docker-compose.yml:
version: '3.7'

services:
  prometheus:
    image: prom/prometheus:v2.20.1
    container_name: prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention.time=15d'
    ports:
      - 9090:9090
    networks:
      - monitoring_net
    restart: always

  grafana:
    image: grafana/grafana:7.1.5
    container_name: grafana
    volumes:
      - grafana_data:/var/lib/grafana
    ports:
      - 3000:3000
    networks:
      - monitoring_net
    restart: always

  node_exporter:
    image: prom/node-exporter:v1.0.1
    container_name: node_exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'   # report host filesystems, not the container overlay
    ports:
      - 9100:9100
    networks:
      - monitoring_net
    restart: always

networks:
  monitoring_net:
    driver: bridge

volumes:
  prometheus_data:
  grafana_data:
2. The Prometheus Configuration
Create prometheus.yml. This tells Prometheus where to scrape metrics from. In a real deployment you would use service discovery (Consul or file_sd), but for a standalone setup like this, static configuration works fine.
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node_exporter'
    static_configs:
      - targets: ['node_exporter:9100']

  # Monitor your critical database
  - job_name: 'production_db'
    static_configs:
      - targets: ['10.8.0.5:9104'] # Assuming VPN/Internal IP
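With both files in /opt/monitoring, the stack comes up in one command. The curl check below assumes you run it on the box itself, since these ports should never be public:

cd /opt/monitoring
docker-compose up -d
docker-compose ps
# The prometheus and node_exporter targets should report "up" within one scrape interval;
# production_db stays down until the mysqld exporter from the next section is running.
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[^"]*"'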
Security Note: Do not expose ports 9090 or 9100 to the public internet. Use ufw to restrict access to your VPN IP or the CoolVDS private network IP. Grafana (port 3000) should sit behind an Nginx reverse proxy with SSL/TLS.
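A minimal ufw policy for that scenario might look like the following; 10.8.0.0/24 stands in for your VPN or CoolVDS private network range, so substitute your own:

# Default deny inbound, but keep SSH open so you do not lock yourself out
sudo ufw default deny incoming
sudo ufw allow 22/tcp
# Public HTTPS for the Nginx proxy in front of Grafana
sudo ufw allow 443/tcp
# Prometheus, Grafana and node_exporter only from the private/VPN range
sudo ufw allow from 10.8.0.0/24 to any port 9090 proto tcp
sudo ufw allow from 10.8.0.0/24 to any port 3000 proto tcp
sudo ufw allow from 10.8.0.0/24 to any port 9100 proto tcp
sudo ufw enable

One caveat: ports published by Docker can bypass ufw's INPUT rules because Docker manages its own iptables chains, so the safer pattern is to bind published ports to a private address in docker-compose.yml (for example '10.8.0.10:9090:9090') rather than relying on the firewall alone.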
Deep Dive: Database Monitoring
CPU graphs look pretty, but databases are where applications actually die. To monitor MySQL 8.0 properly, we need the mysqld_exporter. Don't just run it with default flags; you need to see the InnoDB buffer pool stats.
First, create a dedicated user in MySQL:
CREATE USER 'exporter'@'%' IDENTIFIED BY 'ComplexPassword2020!';
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';
FLUSH PRIVILEGES;
Then, create a .my.cnf file for the exporter credentials:
[client]
user=exporter
password=ComplexPassword2020!
When you deploy the prom/mysqld-exporter container, mount this config. This setup lets you track Queries Per Second (QPS), slow queries, and InnoDB Row Lock Time. If Row Lock Time spikes while CPU stays low, transactions are queueing behind each other, and the usual culprit is slow disk I/O stretching out commits so locks are held longer. Move that DB to a CoolVDS High-Frequency instance immediately.
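One way to run it is directly on the database host, so the 10.8.0.5:9104 target above resolves. This sketch assumes Docker is installed there, the .my.cnf lives in /opt/exporter, and you have added host=127.0.0.1 and port=3306 lines to it so the exporter connects over TCP rather than a socket it cannot see:

# Host networking keeps it simple: the exporter listens on :9104 and reaches
# MySQL on 127.0.0.1:3306. Restrict port 9104 with ufw as described above.
docker run -d --name mysqld_exporter \
  --restart always \
  --network host \
  -v /opt/exporter/.my.cnf:/cfg/.my.cnf:ro \
  prom/mysqld-exporter:v0.12.1 \
  --config.my-cnf=/cfg/.my.cnf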
The Network Latency Advantage
Why host this in Norway? Aside from GDPR compliance, physics is undefeated. If your users are in Oslo, Bergen, or Trondheim, and your servers are in Frankfurt or Amsterdam, you are adding 20-40ms of round-trip time (RTT) to every request.
When your monitoring is local (on the same CoolVDS datacenter floor or via NIX peering), you can set aggressive scrape intervals (e.g., 5 seconds) without saturating the network. This allows you to catch "micro-bursts"—transient CPU spikes that last only a few seconds but cause dropped packets.
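Prometheus lets you override the interval per job, so you can scrape local targets aggressively while leaving remote ones at the global 15 seconds. A sketch against the node_exporter job from the config above:

  - job_name: 'node_exporter'
    scrape_interval: 5s
    static_configs:
      - targets: ['node_exporter:9100']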
Nginx Optimization for Grafana
Finally, put Grafana behind Nginx. Don't serve it raw. Here is a snippet for your nginx.conf to support WebSockets, which Grafana uses for live updates:
server {
    listen 80;
    server_name monitor.your-domain.no;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name monitor.your-domain.no;

    ssl_certificate /etc/letsencrypt/live/monitor.your-domain.no/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/monitor.your-domain.no/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
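The certificate paths above come from Let's Encrypt. On Ubuntu 20.04, one way to issue them is certbot with the Nginx plugin (the package names below are the stock Ubuntu ones, and the domain is the placeholder from the config):

sudo apt install -y certbot python3-certbot-nginx
# Issues the certificate and reloads Nginx; renewals run from the certbot systemd timer
sudo certbot --nginx -d monitor.your-domain.no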
Conclusion
The era of blindly trusting overseas cloud providers with your operational data is over. Between the Schrems II ruling and the increasing need for low-latency performance, the argument for self-hosted monitoring on Norwegian infrastructure has never been stronger.
You have the stack (Prometheus/Grafana), you have the configuration, and you know the hardware requirements. Don't let a slow disk be the reason you missed a critical alert.
Ready to build? Deploy a CoolVDS NVMe instance in Oslo today. With KVM virtualization and local peering, it’s the foundation your observability stack deserves.