Cloud Storage Reality Check: SAN vs. Local RAID in the Post-2010 Era
The year 2010 will likely be remembered as the year marketing departments hijacked the word "Cloud." Suddenly, every dedicated server, shared host, and FTP repository became a "Cloud Solution." As we settle into 2011, it is time for a reality check. For those of us managing actual infrastructure—provisioning /var/www, tweaking my.cnf, and losing sleep over I/O wait times—the buzzwords mean nothing if the disk performance isn't there.
I recently consulted for a logistics firm in Drammen that swallowed the hype whole. They migrated their entire inventory system to a major US-based public cloud provider. On paper, it looked cheaper. In reality, their warehouse scanners timed out constantly. Why? Latency. The speed of light is a stubborn constant, and the round-trip time (RTT) from Norway to Virginia is simply too high for heavy database transactions.
If you are a CTO or Lead Sysadmin targeting the Nordic market, you have two real enemies right now: Disk I/O latency and the Norwegian Data Protection Authority (Datatilsynet).
The Great Storage Debate: SAN vs. Local RAID
In the current VPS landscape, providers generally offer two storage backends. Understanding the difference is critical for your database performance.
1. The SAN (Storage Area Network) Illusion
Many "Cloud" providers use centralized SANs. Your data lives on a massive array of disks connected via the network (iSCSI or Fibre Channel). The pitch is redundancy; if a compute node dies, your storage persists.
The Catch: In a multi-tenant environment, the network is a bottleneck. I have seen iSCSI latency spike to 50ms+ during peak backup windows. For a busy MySQL server, that is death. You will see your load average skyrocket while the CPU sits idle, waiting for disk blocks to arrive.
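You can confirm this symptom yourself: if load is high while the CPU sits in iowait, the bottleneck is storage, not compute. Here is a minimal sketch that samples the CPU line of /proc/stat twice (field positions per the standard Linux layout: user, nice, system, idle, iowait) and prints the iowait percentage over one second; `iostat -x` or `vmstat` from the sysstat/procps packages give you the same figure continuously.

```shell
# Sample the aggregate CPU counters from /proc/stat, one second apart.
# Fields after "cpu": user nice system idle iowait ...
read -r _ u1 n1 s1 i1 w1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 _ < /proc/stat

# Delta of all counters vs. delta of the iowait counter.
total=$(( (u2+n2+s2+i2+w2) - (u1+n1+s1+i1+w1) ))
iowait=$(( w2 - w1 ))
echo "iowait: $(( 100 * iowait / total ))%"
```

Anything consistently above single digits on a database node means your queries are queuing behind someone else's disk traffic.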
2. Local RAID-10 (The CoolVDS Standard)
The alternative is keeping storage local to the hypervisor. We use hardware RAID-10 with high-RPM SAS drives (and increasingly, Enterprise SSDs for caching tiers). This eliminates network latency entirely. The data travels over the PCIe bus, not an Ethernet cable.
Pro Tip: Don't guess your disk performance. Test it. On your current Linux VPS, run this command to see your buffered disk read speed:
```shell
hdparm -tT /dev/sda
```
If you aren't seeing speeds north of 100 MB/sec on a standard VPS in 2011, your provider is overloading the node.
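Keep in mind that hdparm only exercises buffered reads. Sequential writes can tell a very different story on an oversold node, so run a complementary write test with dd. A quick sketch (the file path is just an example; `conv=fdatasync` forces dd to flush to disk before reporting throughput, so the page cache cannot inflate the number):

```shell
# Write 128 MB of zeros, forcing a flush to disk before dd reports speed.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=128 conv=fdatasync
# Remove the test file afterwards.
rm /tmp/ddtest
```

Run it a few times at different hours; a node that benchmarks well at 3 AM and crawls at noon is sharing spindles with too many neighbors.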
Data Sovereignty: The "Safe Harbor" Trap
Beyond performance, we have to talk about compliance. The EU Data Protection Directive (95/46/EC) is strict. While the US-EU "Safe Harbor" framework technically allows data transfer to the US, privacy advocates are already poking holes in it. Datatilsynet has been very clear: the safest place for Norwegian citizen data is on servers physically located in Norway.
Hosting within the country isn't just about patriotism; it's risk management. If you host sensitive customer data (personopplysninger) on a server in a jurisdiction with conflicting laws, you expose your company to legal limbo. A VPS located in Oslo, governed by Norwegian law, solves this headache instantly.
Optimizing for the Norwegian Internet Exchange (NIX)
When your users are in Oslo, Bergen, or Trondheim, routing matters. You want your VPS provider to peer directly at NIX. This ensures traffic stays within the national infrastructure rather than bouncing through Sweden or Germany.
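You can sanity-check your own routing with traceroute. As a rough sketch, the hop list below is hard-coded sample data (hostnames are hypothetical); in practice you would feed in the output of a real `traceroute <host>` run and let awk flag any hop whose reverse DNS falls outside .no:

```shell
# Hypothetical hop list; replace with real `traceroute <host>` output.
hops='1 gw.customer.example.no
2 osl-gw.nix.example.no
3 www.target.example.no'

# Flag any hop whose hostname does not end in .no -
# a hint that traffic is detouring through Sweden or Germany.
foreign=$(echo "$hops" | awk '$2 !~ /\.no$/ {print $2}')
if [ -z "$foreign" ]; then
  echo "all hops stay inside .no"
else
  echo "foreign hops: $foreign"
fi
```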
Here is a comparison of average ping times we measured this week from a broadband connection in Oslo:
| Target Location | Average Latency | Impact on Interactive SSH |
|---|---|---|
| CoolVDS (Oslo) | ~2-4 ms | Instant. Feels like localhost. |
| Amsterdam (Common EU Hub) | ~25-35 ms | Noticeable lag in Vim/Emacs. |
| US East Coast | ~110-130 ms | Painful. Autocomplete delays. |
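The numbers above come straight from ping's summary line, and they are easy to script if you want to compare providers yourself. A small sketch (the summary string here is a hard-coded sample; replace it with the last line of a real `ping -c 10 <host>` run, which iputils ping formats as shown):

```shell
# iputils ping ends with: rtt min/avg/max/mdev = <min>/<avg>/<max>/<mdev> ms
summary='rtt min/avg/max/mdev = 1.912/2.840/4.113/0.611 ms'

# Split on '/' - the avg value lands in the fifth field.
avg=$(echo "$summary" | awk -F'/' '{print $5}')
echo "avg RTT: ${avg} ms"   # prints "avg RTT: 2.840 ms"
```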
The Verdict: Pragmatism Wins
Cloud scalability is useful, but raw iron performance is what keeps applications responsive. At CoolVDS, we have rejected the trend of "overselling" resources via thin-provisioned SANs. We use KVM virtualization to ensure that the RAM and Disk I/O you pay for are actually reserved for you.
If you are running a high-traffic e-commerce site (Magento or osCommerce) or a critical backend API, you cannot afford to share your disk I/O queue with 500 other noisy neighbors. You need dedicated spindles or Enterprise SSDs, and you need them close to your users.
Don't let latency kill your project before it starts. Stop relying on best-effort storage from overseas giants. Deploy a local VPS Norway instance today, secure your data under Norwegian law, and get the low latency your users deserve.