
DevOps in the Closet

Published 18 June 2018
A 12U rack of servers in a normal closet.

I love hardware. I love the raw power you can get from bare metal, and the ownership and control you have over it. Even though my needs are modest and could easily be satisfied by a couple of mid-range VPS instances, there's no fun in that! Recently I moved to a hybrid setup -- a 12U rack of machines in my apartment connected to a load balancer running on a cloud provider.

For a while I rented colocation space from the wonderful people at Hurricane Electric -- I highly recommend checking them out if you need colocation, IP transit, or anything of the sort. However, in my efforts to move all my data and property within Canadian borders, I had to take my children back into my home.

I am fortunate enough to have a symmetrical gigabit fibre connection to my apartment, and doubly fortunate to have the resources and ability to run that connection entirely on my own hardware. This means I have ample bandwidth to spare, and with a newfound pile of servers at my disposal, what better to do than serve production traffic from my closet?

Overcoming NAT

The first problem here, of course, is addressing. Consumer ISPs being what they are, my residential internet plan has IPv4 only, and only one address at that -- an address that changes every time the PPPoE tunnel renegotiates. None of these attributes are particularly helpful for hosting.

But then, an idea: what if I used a VPN the way VPNs are actually intended to be used? Everyone loves VPNs these days just to tunnel their traffic through someone else's connection, but that's really nothing more than an abuse of a very powerful tool. The point of a Virtual Private Network server is, well, to provide a virtual, private network.

So I set up a VPS running OpenVPN. Every machine in my closet runs an OpenVPN client connected to this server (I refer to this machine as "The Hub"), and each client is assigned a static IP in the VPN's private subnet. Now any machine connected to The Hub can seamlessly communicate with any other, as if they were sitting next to each other on a switch. This is what VPNs are for.
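As a minimal sketch of what that looks like on The Hub -- the subnet, file paths, and the client name "closet-web1" are made up for this example, not my actual config:

    # /etc/openvpn/server.conf on The Hub (sketch)
    port 1194
    proto udp
    dev tun
    ca ca.crt
    cert hub.crt
    key hub.key
    dh dh.pem
    topology subnet
    server 10.8.0.0 255.255.255.0       # the VPN's private subnet (example)
    client-to-client                    # let machines on the VPN talk to each other
    client-config-dir /etc/openvpn/ccd  # per-client static IP assignments
    keepalive 10 60
    persist-key
    persist-tun

    # /etc/openvpn/ccd/closet-web1 -- file name matches the client certificate's common name
    ifconfig-push 10.8.0.11 255.255.255.0   # pins this client to a fixed VPN address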

The astute among you may have already realised: there's nothing about this setup that restricts it to servers in my closet. Indeed, since the OpenVPN clients authenticate using certificates, I could simply copy the certs to a server somewhere entirely different, and it would connect and be available on the same IP. This means that with no deployments or config changes I can fail over to machines in entirely different regions, behind entirely different networks -- even a hidden machine in my parents' basement -- and they would be reachable just the same as everything else.
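The client side is just as small. A sketch of what that config could look like, continuing the example names from above (the hub's hostname is a placeholder) -- since the machine's identity lives entirely in its cert and key, copying those files is what moves it:

    # client.conf -- the same file works from the closet, a VPS, or a basement
    client
    dev tun
    proto udp
    remote hub.example.com 1194   # The Hub's public address (placeholder)
    nobind
    ca ca.crt
    cert closet-web1.crt          # identity = this certificate; copy these two files
    key closet-web1.key           # and the machine comes up on the same VPN IP
    persist-key
    persist-tun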

The one downside of this setup is that The Hub is a single point of failure: if the VPN goes down, so does every machine connected to it. I'm sure there are ways to work around this and deploy an HA VPN, but for my purposes I'm fine with the risk of failure in the name of simplicity.

Serving the Web from my closet

So far I have a bunch of servers connected to a VPN and it's all peachy, but this still doesn't solve the original problem: they're not publicly accessible. A private IP, even if connected to a public server, is still private. Enter HAProxy.

Playing off the fact that any machine can connect to the VPN, I have a second VPS beside The Hub running HAProxy. This machine acts as my "edge" -- the machine that accepts all incoming traffic -- and has IPv6, backups, a fast datacenter-grade network connection, and so on. It is also connected to the VPN, just like the rest of the servers, and HAProxy's backends point at the static VPN IPs of the respective services.
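As a rough sketch of the edge's haproxy.cfg (global and defaults sections omitted; the backend name, port, and address are invented, reusing the example VPN subnet from earlier) -- public traffic lands on the VPS, and the backends are nothing but VPN addresses:

    # haproxy.cfg on the edge VPS (sketch)
    frontend http-in
        bind :80
        bind [::]:80                 # the edge is what provides IPv6
        mode http
        default_backend closet_web

    backend closet_web
        mode http
        # 10.8.0.11 is the static VPN IP of a server in the closet;
        # 8080 stands in for whatever port the service listens on
        server web1 10.8.0.11:8080 check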

This lets me keep IPv6 support and keep the edge of my network in a reliable place, while hosting the bulk of my data and harnessing the power of raw hardware -- without having to shell out hundreds of dollars a month for colocation.

Additionally, this lets me build in high availability completely transparently -- I could have HAProxy load balance between two servers, one in my closet and one on a VPS provider (for example), and then even if my power goes out or I accidentally unplug everything while cleaning, traffic would be uninterrupted. It also means that under heavy traffic I can quickly scale out onto instances at a cloud provider, while still keeping my affordable baseline of dedicated hardware.
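That failover is just another server line. A sketch of the same example backend from above, extended with a cloud standby (both addresses are placeholders on the VPN subnet):

    backend closet_web
        mode http
        server closet1 10.8.0.11:8080 check          # bare metal at home
        server cloud1  10.8.0.21:8080 check backup   # VPS on the same VPN, used only if closet1 is down
        # drop the 'backup' keyword to balance across both instead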

Simplified management

One other benefit of this setup is that my laptop has an IP on The Hub too. With one click I can have my laptop directly attached to the production VPN, with full access to every machine in my arsenal, no matter where it is or what firewalls it may be behind.

This means I don't have to expose SSH anywhere except on the private VPN network (since I can run Ansible from anywhere in the world over the VPN), and I don't need to keep track of which servers are on which providers to make sure I have the right VPNs enabled -- it's the same single VPN everywhere.
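In practice that just means the Ansible inventory is written against VPN addresses, something like the sketch below (hostnames, addresses, and the user are invented) -- it doesn't matter whether a host is in the closet, in a datacenter, or in a basement:

    # inventory.ini -- every host is addressed by its static VPN IP,
    # and SSH only ever listens on the VPN interface
    [web]
    web1 ansible_host=10.8.0.11

    [ci]
    # a CI builder on one of the secondary boxes
    builder1 ansible_host=10.8.0.31

    [all:vars]
    ansible_user=deploy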

Utilization of… unusual hardware

A top-down shot of a rack shelf with some consumer-grade computers

In addition to my pile of Xeon-based, production-grade server hardware, I also had a pile of unused consumer trash boxes that needed something to do. Under this new setup I am now running several "internal" or "secondary" servers on a few perhaps-not-production-grade computers: a Gigabyte Brix and a Zotac ZBox, as well as a couple of old laptops.

These are obviously not machines I would have put in a datacenter, but they are useful nonetheless, and for non-critical "tier two" services (like CI builders, yum repo mirrors, or internal services) they serve their role perfectly and complement the "shoestring budget" of the entire setup.

This has made me feel much better about the previously-useless pile of old laptops and dust-covered relics I had been hiding away in a box. This project has allowed me to give them a new life and purpose.

But Don't Do This.

This setup works great for me, because I run a handful of shitty websites only for myself and don't get any significant amount of traffic. Please do not use this article as a guide. Do not deploy this, ever. If any of my projects ever came close to hosting anything that mattered or held customer data, I would immediately move everything to colocation and implement full high-availability practices.

This was a fun project, and since I pay out-of-pocket for all of this, it made a significant difference to my budget. If you're in a similar boat, perhaps it could work for you too, but absolutely do not think this is an acceptable production network for anything that remotely matters.

What would it take…

But what if you did in fact want to make this setup more legitimate? What if you wanted to deploy a proper on-premises "hybrid cloud" solution? Perhaps surprisingly, it's not that far off from what I already have.

  1. Router-level tunneling.

    Switching from individually connected servers to a router-level network bridge would probably be a better permanent solution: run a single tunnel between the on-prem VLAN and the remote cloud private network, and keep using the existing subnets of both. Every machine in each subnet would then be automatically visible to the whole network, because the tunnel is bridging networks instead of machines (a rough sketch of a routed flavour of this follows at the end of this item).

    However, this would impose more conditions on each on-prem network: any network being joined would have to be specifically designed for it. You would lose the current benefit of being able to connect any machine, anywhere, with nothing more than an OpenVPN client.

    It also means a simple VPS probably would not work for the cloud side, as the networking options provided by most VPS providers are limited at best. You would need something more akin to an AWS VPC or an Azure Virtual Network.
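    As a sketch of the routed flavour of this (a true layer-2 bridge would instead use tap interfaces and a bridge on each router), The Hub would advertise each site's subnet and tie it to that site's router; the subnets and client name here are invented, and the on-prem router would also need IP forwarding enabled:

        # On The Hub (/etc/openvpn/server.conf): make the on-prem LAN routable over the VPN
        route 192.168.10.0 255.255.255.0           # on-prem subnet, reachable via the tunnel
        push "route 192.168.10.0 255.255.255.0"    # teach the other clients that route
        client-config-dir /etc/openvpn/ccd

        # /etc/openvpn/ccd/closet-router -- the on-prem router's client entry
        iroute 192.168.10.0 255.255.255.0          # this client is the gateway for that subnet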

  2. HA OpenVPN.

    The OpenVPN "Hub" server is a single point of failure in this design. For a more legitimate hybrid cloud deployment, you'd want to ensure high availability of the hub, so that a machine failure or region outage wouldn't take down the entirety of your infrastructure.

    Unfortunately I don't really know the extent of OpenVPN's HA capabilities, as it's never been something I've needed; my first thought is CARP, to provide machine-level redundancy in case of hardware failure. That isn't "globally redundant", but it is a good first step. Chances are, if you're running on-prem, an outage at that premises would be catastrophic anyway.