Taming Microservices Chaos: Implementing API Gateway Patterns with Kong 0.8 on CentOS 7
If I have to edit one more nginx.conf file to add a simple upstream server or tweak a header, I might just throw my server rack out the window. We've all been there. You start with a monolith, break it down into three beautiful microservices, and suddenly you're drowning in routing logic, authentication tokens, and CORS headers scattered across a dozen virtual machines.
It is 2016. The age of the monolith is dying. But the complexity replacing it is terrifying.
In Norway, where development teams are often lean and efficiency is the only metric that matters, we cannot afford to waste time writing boilerplate authentication proxies for every new service we spin up. Enter Kong. It’s an open-source API Gateway built on top of NGINX (specifically OpenResty), and it is currently one of the most robust ways I have found to manage API traffic without losing your sanity.
The Problem: The "Spaghetti" Proxy
I recently audited a setup for a client in Oslo. They had 15 microservices running on various VPS instances. Their "gateway" was a single NGINX box with a configuration file that was 2,000 lines long. Every time they deployed a new service, they had to:
- SSH into the gateway.
- Edit the config.
- Reload NGINX.
- Pray they didn't miss a semicolon and take down the entire platform.
This is fragile. It's slow. And when traffic spiked, as it inevitably does during the holiday shopping season, the lack of dynamic rate limiting meant their backend MySQL databases got hammered into oblivion.
The Solution: Kong API Gateway
Kong sits in front of your upstream services. Instead of hard-coding routes in a file, you use a RESTful API to configure the gateway. You want to add a service? Send a POST request. You want to add Key Authentication? Send a POST request.
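Just to make that concrete, here is what enabling key authentication looks like once the gateway is up. Treat it as a sketch: the user-service name and the mobile-app consumer are hypothetical, and we do the real setup below.
curl -i -X POST http://localhost:8001/apis/user-service/plugins/ \
  --data "name=key-auth"
curl -i -X POST http://localhost:8001/consumers/ \
  --data "username=mobile-app"
curl -i -X POST http://localhost:8001/consumers/mobile-app/key-auth/ \
  --data "key=change_me_to_something_random"
From that point on, any request without a valid apikey is rejected with a 401 at the gateway, never reaching your service.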
Prerequisites
To run Kong 0.8 in production, you need serious I/O performance. Kong relies heavily on its datastore (Cassandra or PostgreSQL) to store configuration and, if you use certain plugins, runtime data. If you are running this on a budget VPS with shared spinning disks, you will see latency spikes.
Pro Tip: For production gateways, I strictly refuse to use OpenVZ containers. The kernel resource sharing creates unpredictable latency (jitter). We use CoolVDS KVM instances because the hardware virtualization guarantees our CPU cycles and, crucially, gives us direct access to NVMe storage. Kong's datastore needs those IOPS.
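If you want to sanity-check the disk before committing, a quick fio run against the PostgreSQL data directory shows the random-write IOPS you actually get. The target path below assumes the stock PGDG layout, and that fio is installable via yum on your box.
yum install -y fio
fio --name=pg-iops --directory=/var/lib/pgsql --rw=randwrite --bs=4k --size=256m \
    --direct=1 --runtime=30 --time_based --group_reporting
If the numbers look like a shared spinning disk, expect exactly the latency spikes described above.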
Step 1: Installation on CentOS 7
Let's get our hands dirty. We will use PostgreSQL 9.4 as our datastore because it is easier to manage than a Cassandra cluster for small-to-medium deployments.
First, add the PostgreSQL repository and install it:
rpm -Uvh https://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-centos94-9.4-2.noarch.rpm
yum install -y postgresql94-server postgresql94-contrib
/usr/pgsql-9.4/bin/postgresql94-setup initdb
systemctl start postgresql-9.4
systemctl enable postgresql-9.4
Next, configure a user and database for Kong:
su - postgres
psql
CREATE USER kong WITH PASSWORD 'super_secure_password';
CREATE DATABASE kong OWNER kong;
\q
exit
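One gotcha before moving on: a stock PGDG install may default to ident authentication for TCP connections in pg_hba.conf, which will reject Kong's password login. If Kong later fails to connect, add an md5 rule for the kong user on loopback (path per the default 9.4 layout), placed above the existing host rules, then restart PostgreSQL:
# /var/lib/pgsql/9.4/data/pg_hba.conf
host    kong    kong    127.0.0.1/32    md5
systemctl restart postgresql-9.4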
Now, let's install Kong 0.8.0. We will grab the RPM directly:
yum install -y epel-release
yum install -y https://github.com/Mashape/kong/releases/download/0.8.0/kong-0.8.0.el7.noarch.rpm
Step 2: Configuration
Kong ships with a default configuration file. We need to tell it to use Postgres. Edit /etc/kong/kong.yml:
database: postgres
postgres:
  host: 127.0.0.1
  port: 5432
  database: kong
  user: kong
  password: super_secure_password
Now, start Kong. This process will also run the database migrations automatically.
kong start
If you see [INFO] Kong 0.8.0 is running, you are in business.
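A quick way to confirm the node is healthy is to hit the Admin API root, which returns a JSON description of the node and its configuration:
curl -i http://localhost:8001/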
Implementing the Patterns
Now that the gateway is running, we stop acting like sysadmins and start acting like architects. We control Kong via its Admin API on port 8001. The public traffic hits port 8000.
Pattern 1: The Facade (Hiding your Architecture)
Let's say you have a user service running on a private IP 10.0.0.5:3000. You don't want the world to know that. You want them to hit api.yourdomain.com/users.
curl -i -X POST http://localhost:8001/apis/ \
--data "name=user-service" \
--data "upstream_url=http://10.0.0.5:3000" \
--data "request_path=/users"
Now, any request to your CoolVDS instance on port 8000 with the path /users will be transparently proxied to your internal backend. The latency overhead? On our NVMe-backed instances, it's sub-millisecond.
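A quick smoke test from the gateway itself, assuming the upstream on 10.0.0.5:3000 answers GET requests:
curl -i http://localhost:8000/users
Note that in 0.8 the request path is forwarded to the backend by default; if your backend does not expect the /users prefix, add --data "strip_request_path=true" when creating the API.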
Pattern 2: Rate Limiting (Protecting the Core)
This is where Kong shines. In the NGINX days, setting up a leaky-bucket rate limiter meant hand-tuning limit_req_zone and limit_req directives. In Kong, it's a plugin.
Let's limit consumers to 1000 requests per hour:
curl -i -X POST http://localhost:8001/apis/user-service/plugins/ \
--data "name=rate-limiting" \
--data "config.hour=1000" \
--data "config.limit_by=ip"
Done. If a script kiddie tries to hammer your API, Kong intercepts the request and returns a 429 Too Many Requests before it even touches your application server. This saves CPU cycles where it counts.
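You can watch the counters yourself: the plugin adds rate-limit headers to every proxied response (exact header casing can vary between Kong versions, so check your own output):
curl -i http://localhost:8000/users
# expect headers along the lines of:
# X-RateLimit-Limit-hour: 1000
# X-RateLimit-Remaining-hour: 999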
The Importance of Low Latency in Norway
We are seeing a shift in the Nordic market. With the adoption of privacy standards (Datatilsynet is watching), keeping data closer to the user is not just about speed; it's about compliance and trust. However, routing traffic through a gateway adds a "hop." If that hop is slow, your user experience degrades.
In my benchmarks, running Kong on a standard SATA-based VPS added about 15-20ms of processing time under load, mostly due to database I/O wait when logging requests. Switching to a CoolVDS NVMe instance dropped that overhead to 2ms. When you are chaining multiple microservices, those milliseconds add up.
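If you want to reproduce that kind of measurement on your own hardware, a short wrk run through the gateway versus straight at the upstream gives you the per-hop overhead. wrk is just my tool of choice here; ab or anything similar works too.
# through Kong
wrk -t4 -c100 -d60s --latency http://localhost:8000/users
# directly against the upstream, for comparison
wrk -t4 -c100 -d60s --latency http://10.0.0.5:3000/users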
Security Considerations
Never expose the Admin API (port 8001) to the public internet. By default, Kong binds to 0.0.0.0. You should strictly firewall this port.
# note: the trusted zone only has an effect if your management interface or admin
# subnet is actually assigned to it, e.g.
# firewall-cmd --permanent --zone=trusted --add-source=<your admin subnet>
firewall-cmd --permanent --zone=public --add-port=8000/tcp
firewall-cmd --permanent --zone=trusted --add-port=8001/tcp
firewall-cmd --reload
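Defence in depth: you can also bind the Admin API to loopback in /etc/kong/kong.yml so it never listens on a public interface in the first place. The property name below is as I remember it from the 0.8 default file; verify it against the file Kong shipped with before relying on it.
admin_api_listen: "127.0.0.1:8001"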
Final Thoughts
Moving to microservices is a trade-off. You gain agility, but you lose simplicity. Tools like Kong help you regain control, but they are only as good as the infrastructure they run on. A gateway is a single point of failure; it needs to be rock solid.
Don't let your API gateway become the bottleneck of your infrastructure. If you are serious about performance, stop sharing I/O with noisy neighbors.
Ready to deploy? Spin up a CoolVDS KVM instance in Oslo today and experience the difference raw NVMe power makes for your API latency.