Perimeter security is no longer sufficient. Learn how to implement a Zero Trust model using Nginx mTLS, strict SSH 2FA, and segmented networking on KVM VPS, preparing your stack for the upcoming 2018 GDPR enforcement.
Latency is the silent killer of microservices. In this deep dive, we bypass default settings to tune the Linux kernel, optimize SSL handshakes, and configure Nginx for raw throughput on high-performance KVM infrastructure.
Your microservices might be fast, but your gateway is likely the bottleneck. A deep dive into kernel tuning, NGINX optimization, and why hardware choices in 2017 dictate your API's survival.
Microservices add network hops that kill your latency. In this deep dive, we strip down Nginx and kernel parameters to handle high-concurrency loads on Norwegian infrastructure.
Manual server configuration is killing your uptime. Learn how to implement a 'Git as Source of Truth' workflow using GitLab CI, Ansible, and Docker on high-performance infrastructure.
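The 'Git as Source of Truth' workflow that post describes can be sketched as a minimal pipeline definition (a hedged sketch — the stage names, playbook name `site.yml`, and inventory path are placeholders, not taken from the article):

```yaml
# .gitlab-ci.yml -- every infrastructure change flows through this pipeline;
# nothing is applied to a server by hand.
stages:
  - lint
  - deploy

lint:
  stage: lint
  script:
    # Catch broken playbooks before they ever touch production.
    - ansible-playbook site.yml --syntax-check

deploy:
  stage: deploy
  script:
    - ansible-playbook -i inventories/production site.yml
  only:
    - master   # only merged, reviewed commits reach production
```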
Stop guessing why your application is slow. From analyzing CPU steal time to configuring the ELK stack, this guide cuts through the marketing fluff and shows you how to monitor Linux systems properly.
Stop managing retry logic in your application code. A battle-hardened guide to implementing the emerging Service Mesh pattern using Linkerd and Consul on high-performance Norwegian infrastructure.
Is Serverless the end of the sysadmin? Hardly. In this 2016 retrospective, we dissect the latency, cost, and lock-in risks of FaaS, and propose a high-performance hybrid model using Docker and NVMe VPS in Norway.
Microservices are exploding, but so is your latency. Learn how to tune Nginx, optimize Linux kernel parameters for high concurrency, and why hardware selection determines 50% of your API's response time.
Docker is revolutionizing deployment, but default configurations are a security nightmare. Learn how to lock down your containers, drop capabilities, and why KVM virtualization is your last line of defense.
Moving to microservices introduces network chaos. Learn how to implement a service discovery and routing layer using Linkerd (or Nginx+Consul) on high-performance infrastructure, without sacrificing latency.
Running Docker as root is a ticking time bomb. This guide covers essential container security hardening, from dropping Linux capabilities to enabling user namespaces, specifically tailored for Norwegian infrastructure standards.
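The capability-dropping those two guides advocate looks roughly like this in practice (a sketch — the image name `myapp:latest` and UID are placeholders; the flags themselves are standard `docker run` options, and running this requires a Docker daemon):

```shell
# Drop every Linux capability, then add back only what the service
# genuinely needs -- NET_BIND_SERVICE lets it bind port 80 without root.
# User namespaces are a daemon-wide setting (dockerd --userns-remap),
# not a per-container flag.
docker run -d \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --read-only \
  --security-opt no-new-privileges \
  --user 1000:1000 \
  myapp:latest
```

`--read-only` makes the root filesystem immutable and `no-new-privileges` blocks setuid escalation, so even a compromised process has little room to move.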
A battle-hardened look at the Kubernetes 1.3 network model. We break down CNI, overlay trade-offs, and why low-latency infrastructure is critical for microservices in the Nordic region.
With the recent death of Safe Harbor and the looming GDPR enforcement, the 'castle and moat' security strategy is obsolete. Here is a pragmatic guide to implementing micro-segmentation and strict access controls on your Norwegian VPS infrastructure.
It is June 2016. Microservices are rising, but your Nagios checks are stuck in 2010. Learn why traditional monitoring fails to catch latency spikes and how to build true observability using ELK and StatsD on high-performance infrastructure.
The 'castle and moat' security model failed Target and OPM. It will fail you. Learn how to implement the Google BeyondCorp philosophy using Nginx, OpenVPN, and iptables on Norwegian infrastructure.
Stop relying on firewalls alone. Learn how to build a Zero-Trust architecture using Nginx mTLS, SSH hardening, and strict segmentation on Norwegian infrastructure.
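The mTLS leg of that architecture is a short Nginx fragment (a sketch — hostname and certificate paths are placeholders; `ssl_verify_client` and `ssl_client_certificate` are the standard directives for requiring client certificates):

```nginx
server {
    listen 443 ssl;
    server_name internal.example.com;   # placeholder hostname

    ssl_certificate         /etc/nginx/tls/server.crt;
    ssl_certificate_key     /etc/nginx/tls/server.key;

    # Require a client certificate signed by our internal CA;
    # connections without one are rejected during the TLS handshake.
    ssl_client_certificate  /etc/nginx/tls/internal-ca.crt;
    ssl_verify_client       on;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```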
A battle-hardened comparison of container orchestration tools available in early 2016. We analyze Kubernetes 1.1, Docker Swarm, and Mesos, focusing on infrastructure requirements, Safe Harbor compliance, and deploying on KVM-based VPS in Norway.
Don't let connection overhead kill your microservices. We dig deep into kernel tuning, NGINX worker optimization, and the specific latency challenges of serving the Nordic market.
With the recent invalidation of Safe Harbor, hosting monitoring infrastructure outside Norway is a risk you can't afford. Here is how to architect a high-performance ELK and Zabbix stack on local KVM instances.
It's rarely the code; it's usually the infrastructure. We dissect how to monitor disk I/O, steal time, and Nginx upstream latency on Linux servers. Featuring the implications of the recent Safe Harbor ruling for Norwegian data.
Stop praying during 'service restart'. Learn how to implement robust Blue-Green deployments using Nginx and KVM to ensure zero downtime for your Norwegian infrastructure.
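The mechanics of that cut-over can be sketched with a symlink that selects which upstream file Nginx includes (paths and backend addresses below are placeholders, and the demo runs in /tmp so it is safe to try without touching a real server):

```shell
# Demo in a scratch directory; in production the include would live
# under /etc/nginx and point at real upstream definitions.
mkdir -p /tmp/bg-demo && cd /tmp/bg-demo
echo "server 10.0.0.2:8080;" > blue.conf
echo "server 10.0.0.3:8080;" > green.conf

# live.conf is the file nginx actually includes; start on blue,
# then cut over to green by repointing the symlink.
ln -sfn blue.conf live.conf
ln -sfn green.conf live.conf

readlink live.conf    # -> green.conf
# With real nginx, follow the swap with: nginx -t && nginx -s reload
```

`nginx -s reload` spawns new workers with the new config while old workers drain their connections, which is what makes the swap zero-downtime; the `nginx -t` check first ensures a broken config can never go live.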
Is your API gateway choking on concurrent connections? We dive into kernel-level tuning, the brand new HTTP/2 protocol, and why the recent Safe Harbor invalidation makes local Norwegian hosting the only smart technical choice.
Migrating a live production database without killing your uptime is a surgical procedure. We break down the Master-Slave swing strategy, disk I/O bottlenecks, and why keeping your data on Norwegian soil is the smartest compliance move you can make right now.
Managing containers across multiple nodes is the new infrastructure nightmare. We break down the current state of Docker Swarm, Kubernetes 1.0, and Mesos, and why your underlying hardware determines who wins.
Hardcoded IP addresses in your load balancers are a ticking time bomb. Learn how to automate service discovery using HashiCorp Consul and consul-template on a Linux stack.
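That pattern boils down to one small template and one long-running process (a sketch — the service name `api` and output path are placeholders, and it assumes a running Consul agent; the `-template "source:dest:command"` flag is consul-template's standard invocation):

```shell
# upstream.ctmpl renders every healthy instance of the "api" service
# into an nginx upstream block (consul-template uses Go template syntax).
cat > upstream.ctmpl <<'EOF'
upstream api_backend {
{{- range service "api" }}
  server {{ .Address }}:{{ .Port }};
{{- end }}
}
EOF

# Watch Consul, re-render on every membership change, and reload
# nginx whenever the rendered file actually changes.
consul-template -template "upstream.ctmpl:/etc/nginx/conf.d/upstream.conf:nginx -s reload"
```

Instances that fail their Consul health checks drop out of `service "api"` automatically, so the load balancer stops sending them traffic without anyone editing a config file.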
Your code isn't the bottleneck—your TCP stack is. A deep dive into kernel tuning, NGINX upstream keepalives, and why hardware virtualization matters for low-latency APIs in Norway.
Is your REST API choking under load? We dive deep into Linux kernel tuning, NGINX upstream keepalives, and why CPU steal time is the silent killer of API performance in virtualized environments.
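The kernel-tuning side of these posts usually starts from a handful of sysctl values (a sketch — the numbers are illustrative starting points to benchmark against, not universal recommendations):

```
# /etc/sysctl.d/99-api-tuning.conf -- illustrative values; benchmark before adopting

# Deeper accept queue for bursty connection storms
net.core.somaxconn = 4096

# More ephemeral ports for nginx-to-upstream connections
net.ipv4.ip_local_port_range = 1024 65000

# Reuse TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_tw_reuse = 1

# Raise the global file-descriptor ceiling
fs.file-max = 500000
```

On the Nginx side, the matching move is a `keepalive N;` directive inside the `upstream` block together with `proxy_http_version 1.1;` and a cleared `Connection` header, so worker-to-backend connections are reused instead of paying a TCP (and TLS) handshake per request.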
While AWS Lambda makes headlines, the real power of the 'serverless' concept lies in decoupled, asynchronous architectures. Here is how to build event-driven worker pools using RabbitMQ and Docker without suffering vendor lock-in or latency penalties.
Stop throwing code over the wall. In 2013, the divide between Dev and Ops is costing you money and sleep. Here is how to fix it with Puppet, culture, and KVM.