I've been working on and off for the past four years spec'ing out and designing a server infrastructure for the Otaku Central non-profit organization that I'm aiming to get off the ground in the future. In the process of the buildout, we've had equipment in Lincoln NE, Kansas City MO, and Chicago IL, before finally pulling it out of a colocated datacenter for the time being and putting it in my home.
During this time, it's grown extensively. What originally began as a $250 budget 1U SuperMicro server with a couple spare hard drives in it has turned into a farm that's in the neighborhood of $45,000 between hardware and software.
It's been an incredible journey for me. If you consider that when this began back in 2016, I was working for a small IT firm in the Midwest for $18/hour, it makes sense that any server infrastructure ideas I had at the time didn't amount to much - I didn't have the knowledge to do very much, either. Fast-forwarding four years, 26 active certifications, and a moderate salary later, I can safely say that this server farm played an absolutely crucial role in enabling me to overcome my weaknesses, truly grab hold of my life for a change, and empower my career.
Here's what I'm currently doing for infrastructure, with the idea that it can perhaps give you some pointers if you're looking to do the same yourself. Expect a journey filled with humor, and more than a little bit of sarcasm.
As many people in my circle of contacts have probably heard me say before:
"Good things come to those who wait? Well, I say better things come to those who don't."
Why A Farm, & Front-End Challenges
Frankly, the list of reasons to have a server rack at your home is far larger than the list of reasons to not have a server rack at your home. Anyone who says otherwise is an uninformed bigot.
Despite these obvious advantages, there are some outside naysaying "peasants" who would attempt to dissuade you from the lofty pursuit of private infrastructure hosting.
For the married male audience, this often boils down to "the wife". Married women have a noted penchant for despising adult men who enjoy tinkering with tools and systems, instead of spending every waking moment and dollar working towards their own self-centered female happiness. Resistance on this front may at first seem heavy, but can sometimes be offset by utilizing a few simple principles:
- Gently remind her that if you use the server farm to build your skillset and your resume, you will likely make more money. This often quiets them, at least until they realize later on that you're simply going to use the extra income to buy more servers instead of an umpteenth pair of shoes for her.
- What happens in the man cave stays in the man cave, and really isn't any of her business. If she tries to pry extensively into what you've been doing down there so much lately, casually drop a few hints that you're secretly watching videos of innocent puppies dying. That should be enough to keep her at bay, albeit with a few minor side effects.
- You're a grown-@$$ man. What kind of 'Alpha' would you be if you allowed a few snide comments to keep you from living your dreams?
- If all else fails, buy a TON of extra RAM for your main workstation at home, and virtualize a smaller-scale infrastructure off the workstation instead. If she asks why that computer is always "humming away" in your home office, tell her that you're using it to mine Bitcoin and make extra money on the side. Most women are nowhere near technical enough to know what this actually means or to pry further, and she'll likely think that you're on top of your game and walk away.
These reasons, when applied prudently to most environments, are enough to get your foot in the door and keep it in the door in terms of getting your private farm off the ground.
The other utility costs associated with farming can be covered using the same set of tricks in a married setting. If at any point in time she becomes aware of what your hidden game plan has been with the entire project, you have the option of either coming totally clean and standing up for yourself, or moving the entire buildout into a more expensive colocated setting, along with chopping your balls off and handing them over to her. I can only advise the former.
For those who aren't living in a married setting, and are otherwise on their lonesome in an apartment but are looking to get a server farm going within reason and without property management catching wind of things, you're dealing with a slightly different set of challenges. Most of the remainder of this article will be spent looking at these considerations, as they're far more limiting than if you owned your own house and had complete liberty in your buildout (within zoning, of course).
Power - How Much, & What Kind
First things first - how much power do you actually need?
If you're running a Tier 1 hardware vendor's equipment for your build, they likely have a tool that can not only help you calculate power consumption, but also the price per kWh and the BTU output of your build. Links to tools from a few of the more popular vendors are below:
- Dell - http://dell-ui-eipt.azurewebsites.net/#/
- HP - https://paonline56.itcs.hpe.com/?Page=Index
- Cisco - https://ucspowercalc.cloudapps.cisco.com/public/index.jsp#listProject
If you use one of these tools, make a note of the BTU output for your intended build - we'll circle back around to this in a later section.
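If your vendor's tool only reports wattage, or you're estimating from measured draw, the BTU conversion is simple enough to do yourself: essentially every watt a server pulls from the wall ends up as heat in the room. A minimal sketch (the 1,500 W figure is just an example, not a real build):

```python
def watts_to_btu_hr(watts):
    """Convert continuous electrical draw to heat output.

    Roughly all power a server draws is dissipated as heat,
    and 1 watt dissipated is about 3.412 BTU/hr.
    """
    return watts * 3.412

# A hypothetical 1,500 W rack dumps roughly 5,118 BTU/hr into the room.
print(round(watts_to_btu_hr(1500)))
```

Keep that BTU/hr figure handy; it drives the cooling discussion later on.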
You also have the option with these tools of picking out your AC current rate for input. For readers within the United States, this would traditionally be 120VAC, with European readers trending towards 230VAC, and Japanese readers sitting at 100VAC.
A quick power primer for those who aren't familiar. All of these common voltages are 'single-phase' for purposes of our discussion, as opposed to a three-phase power system. Three-phase power isn't used in apartment settings at all, so it's outside the scope of our discussion. For a given power draw, amperage scales inversely with voltage; roughly speaking, a 240V circuit at ~5 amps delivers the same power as a 120V circuit at ~10 amps. Higher voltage also tends to be more efficient end-to-end: resistive losses scale with the square of the current, and many server power supplies run a few percent more efficiently at 240V input than at 120V. Power comes into your apartment's panel at 240V (split-phase) anyway, and keeping it at 240V is 'better' if you can manage it.
The reason I bring up the discussion of voltage selection here is that if you're looking at building out a larger infrastructure footprint (full rack, etc.), you're likely going to exceed the power maximum of a single 15 amp, 120V United States circuit breaker. You may even be in danger of exceeding two of them combined. Adding in an AC unit that will likely have its own 15 amp, 120V breaker, you're now looking at having to use three separate breakers just to get adequate power for your intended build.
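Sanity-checking whether a build fits on a breaker is worth a minute of arithmetic. Under the US NEC, loads that run for three hours or more are treated as 'continuous' and limited to 80% of the breaker's rating. A rough sketch, with made-up per-device wattages standing in for your own measured numbers:

```python
def breaker_continuous_watts(amps, volts, derate=0.8):
    """Usable continuous capacity of a breaker.

    The NEC limits continuous loads (3+ hours) to 80% of the
    breaker's nameplate rating, hence the 0.8 derate.
    """
    return amps * volts * derate

# Hypothetical measured draw per device, in watts.
rack_draw = [350, 420, 300, 250]  # three servers plus network gear
total = sum(rack_draw)            # 1,320 W of continuous load
limit = breaker_continuous_watts(15, 120)  # 1,440 W usable on a 15A/120V circuit

print(f"{total} W of {limit:.0f} W usable: {'OK' if total <= limit else 'over budget'}")
```

Run the same math for your actual gear; it's surprising how quickly a "small" build eats an entire circuit.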
This brings me around to the secret weapon that most single guys have at their disposal: the stove breaker.
Generally, if you're like me, you boil water on a single burner on the stovetop on occasion, but you don't otherwise use the actual 'oven'. Buying a single 1,100W electric burner, which you can run off a regular countertop circuit, frees up the stove breaker for use with our rack, and you're going to want to heavily consider this because it's easily the strongest breaker in your apartment. Period. Usually either 40 or 50 amps at 240V or 250V. That'll do what we need it to.
What If The Stove Breaker is 250VAC, & Safety Considerations
In older (think 1970s or prior) apartment housing, the wiring running from the breaker panel to the stove plug wasn't held to as high a gauge standard as it is under modern building code. The reason is that a stove typically doesn't chew 40 amps constantly; it draws in alternating bursts to heat up the cooking coils. Because of this, builders didn't plan on gauging stove wire as heavily.
Times have since changed.
Modern stove wire is heavier duty, and can easily sustain a larger load indefinitely. I say this so that you know that if you're living in an old apartment with old wiring, it's probably best to not run more than 20A at 250V over your server mains at any point in time to keep the wiring intact. If you need more than this for power, add in another breaker at 120V.
For more modern apartments, you can safely put more electricity over the stove breaker, although I would still recommend staying at 30A at 250V as a maximum. If you're using more power than that for your infrastructure, you're borderline at the point where you're going to want to move to something larger scale.
What I've done is as follows:
Run a 20-foot extension cable from the stove plug into the bedroom where the rack is. The cable is a 10-50P cable, meaning three-prong at 50A, 250V. The cable has to be 6-gauge wire. This is expensive, but you have to go to this standard to be compliant with fire code. If your apartment is ever inspected by management, you'll likely get yelled at for doing this, but you won't be violating fire code, which would immediately get you evicted in most cases.
The cable runs around into the bedroom, where it goes into a 10-50P splitter. The splitter joins into two 15-foot 10-50P to L5-30R cables that run into my APC PDUs. My PDUs are specifically rated for 120VAC up to 240VAC with no issue. While I am running most of the equipment in my rack off of a single breaker, and some of you are probably yelling at the screen for me doing this, I'm running my critical systems and at least one redundant VMware High Availability slice of key VMs on servers that are powered by two UPS systems on a separate breaker set. This is the best I can manage with what I have.
Let's take a minute and discuss 240V vs. 250V, because there's not a lot of documentation on the Internet about doing this at the moment, and I want to set the record straight.
For your stove plug, some people have their stove breakers rated at 240V, while others have them at 250V. Since the upper stated limit for voltage on most servers and infrastructure devices is '240V', there's a degree of justified concern that if your breaker is actually 250V, you're going to be overvolting your equipment and burning it out.
This is actually a non-issue in all but the most one-off situations, and here's why:
The ANSI C84.1 electrical standard declares that all utility services providing general electricity must stay within a 5% deviation margin of their declared voltage. This means that electrical companies must provide service to you that, for 115V or 120V nominal, falls within a range of 109V to 126V, or they will otherwise be liable for faulty service. For 230V or 240V, this equates to 218V to 252V. A fairly wide range of deviation.
Since the electrical service in itself allows for this degree of deviation, almost all computer equipment vendors actually create their power supplies with a 10% margin of deviation acceptance, to account for normal ANSI deviation as well as temporary surges or brownouts. Per our discussion, this means that a Dell server spec'd to run up to 240V will almost always be able to actually handle up to 264V. The same goes for Cisco equipment. I have not researched this with HP or other vendors firsthand to confirm with them, but I would imagine that it's the case.
For my apartment, since I'm right next to the power substation for this district, my '120V' plugs actually come through as 124V and my '240V' stove plug comes through as 248V. This is within the ANSI declared deviation range, although it is on the high side. I once had power spike up to 252V on this breaker for a short period of time, but this was still within the allowable margin of the circuit and did not exceed the equipment's 10% deviation acceptance range.
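The arithmetic behind these deviation ranges is simple enough to script. A minimal sketch of the ±5% service band and a hypothetical PSU's 10% acceptance margin (the exact margin varies by vendor; check your equipment's spec sheet rather than trusting this number):

```python
def service_band(nominal, margin=0.05):
    """ANSI-style +/-5% band the utility is expected to stay inside."""
    return nominal * (1 - margin), nominal * (1 + margin)

def psu_ceiling(rated, margin=0.10):
    """Highest input voltage a PSU with a 10% acceptance margin tolerates."""
    return rated * (1 + margin)

low, high = service_band(240)
print(round(low), round(high))   # the utility's allowed window around 240V nominal
print(round(psu_ceiling(240)))   # why a 248V (or even 252V) stove plug is a non-issue
```

Plug in your own measured wall voltage and your PSU's rated maximum to see how much headroom you actually have.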
In short, this won't matter for most engineers, but check your equipment's documentation and manufacturer information to be sure.
HVAC - Ever. Only. Always.
If you're running any degree of racked server systems, HVAC becomes an issue beyond what your apartment's central HVAC can probably accommodate. You're going to need extra cooling if you're putting out more than 8,000 BTU/hr. For most of us, three servers, a pair of switches and a firewall/router is enough to cross that threshold.
A portable air conditioner is typically enough to cover the extra heat footprint, as long as you have a window to exhaust it through and a drain to run the water condensation into (your furnace or air conditioner's drain is usually the easiest for this). Match your BTU coverage within a 25% deviation from your server rack's calculated BTU output from earlier - your central HVAC can handle the rest.
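That 25% matching guideline is easy to check in code. A quick sketch, using example BTU figures rather than any real build:

```python
def ac_size_ok(rack_btu_hr, ac_btu_hr, tolerance=0.25):
    """True if the portable AC's rating falls within +/-25% of the
    rack's calculated heat output; central HVAC absorbs the rest."""
    return abs(ac_btu_hr - rack_btu_hr) / rack_btu_hr <= tolerance

print(ac_size_ok(8000, 9000))    # True: 12.5% over, close enough
print(ac_size_ok(8000, 14000))   # False: 75% over, badly oversized
```

Oversizing matters more than it might seem: an AC that's far too large short-cycles and dehumidifies poorly, so "bigger is better" doesn't apply here.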
If you're using a conventional 120V wall circuit breaker for powering your rack, make sure that you're running your HVAC on a separate breaker. Run extension cables if you have to.
In positioning your cooling unit, bear in mind that server and switch intakes are (mostly) on the front of the units, and cold air needs to come in that way. Hot air needs to be exhausted out of the back of the systems. There's a reason why we see so many vendors building in-row cooling systems for servers - the "hot aisle, cold aisle" mentality has been the industry standard for decades due to its effectiveness. Airflow works best this way.
Internet Service - Static IPs and Bandwidth
High speeds are a must. Having static IP addresses or ranges usually helps, too. Making sure you have either symmetrical bandwidth or close to symmetrical bandwidth is a good thing.
I took this concept a step further when I was in the selection stages for my current apartment. I pulled up the city's metropolitan fiber grid, and only selected housing along where 10G mains fiber leads were run. With this, I was fairly guaranteed good Internet service through a handful of vendors, and the rest was a cakewalk.
You may be entertaining the idea of having redundant Internet circuits. Handling the routing and weighting needed to correctly "load balance" this setup becomes another can of worms. This conversation tends to open a gateway to the exciting, as well as the 'crazy'.
Once you've uncorked the redundancy and load balancing keg like this, where do you draw the line? I mean, if you're going to have redundant ISPs and circuits, should you have redundant perimeter firewalls to adequately handle load and failover? Should you have redundant core switching? Should your cabling out of each redundant core switch be redundant as well? Should your port channels be redundant, too?!
For Otaku Central, I answered 'yes' to all of these questions. I was then promptly faced with a price tag beyond my wildest expectations. I'm almost done paying it at this point. Almost.
I'm not going to tell you to only "buy what you actually need, within reason". Let's be frank - masculine IT engineering as a subculture doesn't ask such stupid questions within a personal scope. You want it? You can afford it? Buy it. It has pretty lights. It makes a nice humming noise. Your girlfriend will abandon you over it.
Most importantly, you'll be truly happy. That's what really matters here.
The Rack Itself, & The Golden Rule of Infrastructure
Always aim for racks that use cage nuts, instead of the direct screw-in racks. Extremely few server rail systems will work with screw-in racks, and if you strip the threads in one of the holes, you're stuck with a bad hole for the remainder of your time owning the rack, a remainder which will likely be reduced to a rage-filled hour or two once you find out. Don't be 'that' guy.
Try to ensure that you'll have enough front-end space to allow for protruding cabling out the front end of switches and still close the rack door, and enough room at the back for proper cable management and adequate airflow. The standard rack depth these days is 42 inches, and tends to be sufficient for most buildouts. If you're going to run higher than average amounts of cables, extended depth options at 48 inches and beyond are available.
For brands, I've been loyal to APC for years because they build an accessory type for just about every conceivable use case you can imagine, and their PDUs and UPSs are second-to-none. I've used TrippLite and Panduit in the past as well with good results, but keep gravitating back to APC for my home turf.
All of this is going to set you up to follow what I refer to as the 'Golden Rule of IT'. This rule is pivotal to the success of every sound engineer, and goes back decades as a solid foundation to build not only your network on, but your manliness out of:
Always, always, always, always use proper cable management. It's the highest-impact $100 you can spend on ANY rack buildout.
"Hey, did you hear about the new guy who refused to do proper cable management on his gear?"
"Yeah! I heard he got his ass kicked."
In this world, there are network engineering men, and there are boys who wish they were men. I know which one I am. Cable management.
What you build out in the rack is up to you. Virtualization tends to be a good thing. Redundancy tends to be merited in at least small increments in most deployments. Backups are vital, too.
The Long Term
A common mistake some make is thinking that the adventure is just in acquiring the rack, without forming a 5-year, 10-year, or 15-year plan for what the long term looks like for the rack's future.
Gentlemen, the rack is a matter of commitment. I know I'm throwing this out here in a day and age where most men have been branded as "commitment-phobic" by society, but this is an opportunity that you simply don't want to pursue unless you've got your crap together. You want to be five moves ahead in this metaphorical chess game.
When it's just you going through life on your lonesome, these things aren't as big of a deal. The world's a big place, and many of us choose to go through life at our own personal pace as a result. But, you can't do that in this case. Because it's not just you anymore - it's you, and the rack.
I had to re-think what had formerly been my personal free time when I became a "rack man". Free evenings of gaming and watching anime had turned into plans for our scalability's future. Weekends of going out for drinks at the bars with friends turned into nights of coding and working on redundant failovers. I had to give up a lot, but... I got so much more back in return.
Over time, I turned into the kind of person who had to start paying attention to the "little things", because the rack was sensitive to that kind of stuff, you know? Things like making sure the air intakes weren't too dusty, or that there wasn't any chip creep due to heat coming off of the RAM modules. What I formerly thought "wasn't a big deal" were things that the rack handled with absolute seriousness.
"Awwww... your load averages are running a little high today! It's OK if you're a little needy like this from time to time; I missed you, too."
And then, one day - completely out of the blue - IT happened.
I came home from a rough day's work, and noticed something was off about the rack. Temperatures were higher than normal, and I could audibly hear higher fan RPMs from the servers within. Even the interface activity lights on the primary firewall were blinking faster than normal.
Sitting down at my workstation, I logged in to the systems administration panel, and learned the truth the rack had been bashfully hiding from me. Due to a combination of high bandwidth and frontend load, one of the websites I was running had hit a warning threshold, and Ansible along with VMware's Auto Deploy feature had attempted to compensate for the increased load by creating and spinning up not one new little virtual server... but two.
My eyes welled up. Nothing in the world can really prepare you for the moment you find out - find out that your world is about to get a whole lot bigger. We... we created these VMs. We. Us.
In that instant, a pride rose up in my chest. Pride for what I'd accomplished as a man. Pride for what the road ahead would hold. Pride... that all this time, all the effort we'd put in hadn't been for nothing.
Dealing With The Haters
- Why do I always hear this background noise on all your phone calls at home. Don't tell me you've got those darned servers running again!
- Will you STOP talking about how you haven't had to turn on your furnace at all this winter! I don't care if your little IT hobby creates enough heat to get you by - some of us have to pay rounded utility bills like REAL adults!
- Why are all these power cables running along your walls? This makes your place look kinda ratty, you know?
- Hey, so we've been wondering... why are your bedroom windows always open? Is trying to force your neighbors to listen in on your nighttime "escapades" some kind of a weird turn-on for you?
Perhaps you've noticed the commonality among all these complaints; I know I have. People.
The less you talk to toxic people such as these, the fewer insults you'll get, and the fewer people who will know about your infrastructure side-hustle and try to rat you out to apartment management over it. Apparently we live in a country where you can be an open homosexual with neon-pink hair and only one nostril, and society barely bats an eye - but God forbid you should stoop to the moral depravity of running your own servers at home for educational purposes and to save money!
An excellent example of this would be a recent security clearance assessment call that I had for one of the clearances my workplace requires. When the investigator inquired as to any neighbors or closely-located "references" I had for my previous residence history, I proudly told her: "I don't have any. I don't really talk to any of my neighbors, and they tend to not really talk to me."
She was taken aback, and there was a satisfying period of silence on the call. Silence, where I'm sure she realized all-too-clearly that I'd found a way to beat the game - a game where everyone in life is just trying to beat you down, and the simple solution is just not to talk or interact with people like that, or people in general. So easy, yet it slides right under the noses of the unsuspecting. Blink - and you miss it.
She politely continued onward after taking a moment to collect herself, but the seed had been planted. Now she too would know of the path to enlightenment; the path to true inner peace. The path away from toxic people.
I think it goes without saying that when she closed out our call, she immediately went on eBay and started spec'ing out secondhand PowerEdge boxes to get that oriental fusion cooking website she'd always wanted to try her hand at off the ground. That purchase would then lead to her first Catalyst switch, followed by a little "love poke" into the field of a BIG-IP application acceleration appliance.
That's the kind of power that this mindset holds, gentlemen. Absolutely incredible to see touching stories like this happen right in front of me.
At the time of this writing, my farm of 11 physical servers, four switches, and two firewalls is happily humming away in my bedroom. It's powering the site infrastructure for the development version of the new Otaku Central website that will be unveiled in the not-too-distant future.
The rack's been a wild ride for me. The scale of what it would contain has changed several times on me, and it's now at the point where every system in it is fully redundant and can be set up to carry that redundancy over across disparate physical locations.
I'm really proud of it. Although many people would probably consider it to conceptually be pretty stupid, it's enabled me to build the knowledge to take my career in a pretty incredible set of directions. I owe more personal growth to it than I can really put into words.
I occasionally hear reports of guys in my circle of contacts who start down the road of building their own racks out, and some readers of this site have messaged me with their builds and ideas for at-home implementations. It's pretty awesome to hear about them all, and to know that many are being used for educational purposes to help people get more out of life.
Now, if you'll excuse me, I received another box of Twinax in the mail today and should probably get some of my interconnects migrated over.