
Stop the Bleeding: A Pragmatic Guide to Cloud Cost Control in 2020

The "Pay-As-You-Go" Trap: Why Your Infrastructure Bill is Skyrocketing

Let’s be honest. The initial pitch of the public cloud was seductive: "Only pay for what you use." It sounded like the ultimate efficiency hack for startups and enterprises alike. But as we settle into 2020, the reality for many CTOs and Systems Architects in Oslo and across Europe is starkly different. We are seeing "bill shock" become a monthly ritual.

Complexity is the enemy of cost control. Between instance types, provisioned IOPS, egress bandwidth fees, and load balancer hourly rates, the Total Cost of Ownership (TCO) calculation has become a dark art. If you are running a heavy workload targeting the Norwegian market, routing traffic through Frankfurt or London hyperscalers involves latency penalties and data transfer costs that simply do not exist with local infrastructure.

As a pragmatist who looks at the bottom line as closely as the htop output, I am going to walk you through how to audit your current setup. We will look at finding zombie processes, right-sizing via actual metrics, and why a hybrid approach using predictable, high-performance VDS is often the financial firewall your company needs.

1. The Zombie Server Hunt

The easiest money you will ever save is turning off what you are not using. In the rush of CI/CD pipelines and DevTest environments, developers spin up instances and forget them. We call this "Zombie Infrastructure."

Before you commit to a Reserved Instance or a Savings Plan, you need to audit utilization. Do not rely on the cloud provider's dashboard alone; they benefit from your waste. Use standard Linux tools to verify activity. If an instance claims to be a "critical worker" but has a load average of 0.01 for a week, it is a zombie.

Here is a quick snippet you can push via Ansible to your fleet to check for genuine idleness over a 24-hour period using sar (part of the sysstat package):

# Install sysstat if missing (CentOS/RHEL; on Debian/Ubuntu: apt-get install sysstat)
yum install -y sysstat

# Average user CPU for the current day. S_TIME_FORMAT=ISO forces 24-hour
# timestamps so %user is always column 3; the filter skips header lines
# and the trailing "Average:" row so they do not skew the mean.
S_TIME_FORMAT=ISO sar -u -f /var/log/sa/sa$(date +%d) | awk '$1 != "Average:" && $3 ~ /^[0-9.]+$/ {sum+=$3; n++} END {if (n) print "Average CPU User Load:", sum/n}'

# Check memory commitment
free -m | awk '/^Mem/ {printf "Used RAM: %.1f%%\n", $3/$2*100}'

If your CPU average is under 5% and RAM is under 20%, you are burning cash. Consolidate these services onto a single KVM slice. Virtualization overhead has dropped significantly in recent years, especially with modern kernels.
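To apply those thresholds consistently across a fleet, they can be encoded in a small helper. This is a sketch under our own assumptions (the 5%/20% cut-offs from above; `is_zombie` is a hypothetical name, not a standard tool):

```shell
#!/bin/sh
# Hypothetical helper: flag an instance as a zombie candidate using the
# thresholds above (average CPU < 5%, used RAM < 20%).
is_zombie() {
  cpu=$1   # average CPU user load, in percent
  ram=$2   # used RAM, in percent
  # awk does the floating-point comparison portably; exit 0 means "zombie"
  awk -v c="$cpu" -v r="$ram" 'BEGIN { exit !(c < 5 && r < 20) }'
}

if is_zombie 0.8 12; then
  echo "zombie candidate: schedule for consolidation"
else
  echo "keep: instance shows real utilization"
fi
```

Feed it the two numbers produced by the sar and free one-liners; anything flagged goes on the consolidation list for the next maintenance window.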

2. The "Provisioned IOPS" Racket

This is where the major providers get you. You spin up a standard instance, but the disk performance is capped. To get the database performance you actually need, you have to pay extra for "Provisioned IOPS." Suddenly, your cheap database server costs three times the base rate.

In 2020, NVMe storage should be the standard, not a luxury add-on. We built CoolVDS on this principle. When you have direct NVMe pass-through or high-performance virtio drivers, you don't need to pay for arbitrary IOPS limits. You get the raw speed of the drive.

Test your current disk I/O latency. If you are seeing high iowait, your provider is throttling you. Verify it with fio:

# A realistic random read/write test (simulating a database)
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75

Pro Tip: Run this on your current cloud instance. If your IOPS are capped at 3000 while paying a premium, you are being throttled. A standard CoolVDS NVMe instance typically pushes vastly higher numbers because we don't artificially cap your hardware potential for an upsell.
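For a lighter-weight check between full fio runs, watch %iowait directly. A sketch that pulls it out of `iostat -c` output (assumes sysstat's avg-cpu column layout; shown against a canned sample line so the parsing is visible):

```shell
#!/bin/sh
# Extract %iowait from `iostat -c` output. Assumes sysstat's avg-cpu
# layout: %user %nice %system %iowait %steal %idle.
iowait_of() {
  awk '/^ *[0-9]/ { print $4 }'
}

# Live usage would be: iostat -c 1 1 | iowait_of
# Canned sample for illustration:
printf '%s\n%s\n' \
  "avg-cpu:  %user   %nice %system %iowait  %steal   %idle" \
  "          12.50    0.00    3.10    8.40    0.00   76.00" | iowait_of
```

Sustained double-digit iowait under a normal workload is the classic symptom of a provider-imposed IOPS ceiling.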

3. Egress Fees and The Nordic Latency Advantage

Data gravity is real. If your primary customer base is in Norway, why serve them from a datacenter in Ireland? Not only are you adding 30-40ms of latency (round trip), but you are also likely paying per gigabyte for data leaving that datacenter.

By hosting in Norway (or close proximity), you leverage the NIX (Norwegian Internet Exchange) infrastructure. Local peering means lower hops, better stability, and often, zero egress fees included in your flat monthly rate. For a media-heavy application or a SaaS platform with high data turnover, the bandwidth bill alone can exceed the compute cost on public clouds.
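Measuring the latency gap yourself takes one command. A sketch that pulls the average RTT out of ping's summary line (assumes the standard `min/avg/max` summary of GNU and BSD ping; the endpoint name is a placeholder):

```shell
#!/bin/sh
# Average RTT from ping's summary line ("rtt min/avg/max/mdev = ..." on
# Linux, "round-trip min/avg/max/stddev = ..." on BSD/macOS).
avg_rtt() {
  awk -F'/' '/^(rtt|round-trip)/ { print $5 }'
}

# Live usage (placeholder endpoint): ping -c 10 -q fra.example.net | avg_rtt
# Canned sample so the parsing is visible:
echo "rtt min/avg/max/mdev = 28.1/31.4/60.1/4.2 ms" | avg_rtt
```

Run it once against your current Frankfurt or Dublin endpoint and once against a Norwegian one, from a machine near your users, and the 30-40ms claim above becomes a number on your own network.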

Comparison: Hyperscale vs. Local VDS

Feature           Hyperscale Cloud                 CoolVDS (Local)
Compute Pricing   Hourly (Unpredictable)           Fixed Monthly (Predictable)
Storage I/O       Throttled / Paid Extras          NVMe Standard (High Speed)
Bandwidth         High Egress Fees                 Generous/Unmetered Limits
Data Privacy      Complex (US CLOUD Act concerns)  Simplified (GDPR/Datatilsynet aligned)

4. Optimizing the Stack: Caching Before Scaling

Before you upgrade your server size, upgrade your configuration. I often see clients upgrading to 32GB RAM instances because their Apache setup is bloated. Switching to Nginx or OpenLiteSpeed, or properly tuning your database, can delay the need for vertical scaling.

For example, if you are running a MySQL database (or MariaDB), the default my.cnf is almost always too conservative. Ensure your innodb_buffer_pool_size is set to utilize about 60-70% of your available RAM if it is a dedicated DB server. This keeps data in memory and reduces those expensive disk reads.

[mysqld]
# Example for a server with 8GB RAM
innodb_buffer_pool_size = 5G
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 2 # Up to ~1s of commits lost on OS crash; big performance gain for non-financial data
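The 5G figure above follows directly from the 60-70% rule of thumb. A sketch of the arithmetic for any RAM size (the 65% midpoint is our assumption; tune it per workload):

```shell
#!/bin/sh
# ~65% of total RAM for the InnoDB buffer pool on a dedicated DB host.
buffer_pool_for() {
  ram_mb=$1                       # total RAM in MB
  echo "$(( ram_mb * 65 / 100 ))M"
}

# 8 GB host, as in the my.cnf example above:
echo "innodb_buffer_pool_size = $(buffer_pool_for 8192)"
```

On a shared host running the app and the database together, drop the factor well below 50% so the OS page cache and application still have headroom.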

5. The Hybrid Strategy

I am not suggesting you abandon the public cloud entirely. It has its place—specifically for highly elastic, short-term bursting workloads. But for your baseline load—the database that runs 24/7, the web servers that always need to be up—paying hourly premiums is fiscally irresponsible.

The "Pragmatic Architecture" for 2020 involves a hybrid approach:

  • Core Infrastructure: Host your databases and primary application servers on CoolVDS. You get fixed costs, high NVMe I/O, and data sovereignty in Norway.
  • Burst/Backup: Use S3-compatible storage for backups or serverless functions for sporadic tasks that trigger once a day.

This setup allows you to predict 80% of your bill with absolute certainty, while retaining the flexibility to scale specific microservices if needed.
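The Burst/Backup leg can be as simple as a nightly script on the VDS. A sketch under obvious assumptions (bucket, endpoint, and paths are placeholders; `aws s3 cp` talks to most S3-compatible stores via `--endpoint-url`):

```shell
#!/bin/sh
# Nightly off-site backup: dump the core DB on the VDS, push the archive
# to S3-compatible object storage. All names below are placeholders.
archive_name() {
  echo "db-$1.sql.gz"             # $1 = date as YYYY-MM-DD
}

backup() {
  name=$(archive_name "$(date +%F)")
  mysqldump --single-transaction --all-databases | gzip > "/backup/$name"
  aws s3 cp "/backup/$name" "s3://example-backups/$name" \
      --endpoint-url https://s3.example.com
}

# Naming lives in its own function so retention tooling can predict it:
archive_name 2020-03-01
```

Wire `backup` into cron during a quiet hour and your off-site copies ride the cheap leg of the hybrid setup while the database itself stays on fixed-cost NVMe.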

Conclusion

Optimization isn't just about code; it's about architectural economics. In the Nordic market, where reliability and privacy are paramount, the premium you pay for "infinite scalability" on public clouds often yields a negative ROI.

Review your invoices. Identify the egress leaks and the provisioned IOPS traps. Then, test the alternative. You can deploy a high-performance, predictable KVM instance on CoolVDS in less than a minute. Compare the benchmark results. Your CFO (and your latency metrics) will thank you.