Power Circuit Planning for High-Density Server Rack with Xeon Processors

bewani2462

Hello everyone,

I'm working on the electrical plan for a new small business data closet that will house a high-performance server. This isn't a full data center, but the server is a significant step up from typical office equipment. The unit is specified with a dual Xeon 24 Core 2.9GHz processor configuration (the one with the 10.4GT/s UPI interconnect).

My primary concern is the power delivery and load characteristics. The server's PSU is rated for 1600W, and it will be accompanied by a UPS. I want to ensure the branch circuit and outlets are correctly specified for safety and reliability.

My specific questions for this forum are:

  1. Inrush Current: Processors like this, with a high core count and complex architecture, can cause significant inrush current when the server powers on. For a 1600W PSU on a 120V circuit, the theoretical max draw is ~13.3A, but what is a realistic safety factor to account for inrush when sizing the branch circuit breaker? Should I be considering a 20A or even a 30A dedicated circuit for a single server of this class to prevent nuisance tripping?
  2. Dedicated Circuit Best Practice: NEC 645.5(B) discusses information technology equipment rooms. Is it a code requirement or simply a best practice to put a server with this power profile on its own dedicated branch circuit, even if the calculated steady-state load doesn't exceed the circuit's capacity when shared with other low-power devices?
  3. Heat Load & Ventilation: The thermal design power (TDP) for these CPUs is substantial. The heat output of this server will be significant. From an electrical and safety standpoint, are there any code considerations (NEC 110.10, for example) regarding the ambient temperature of the closet and its effect on the current-carrying capacity of the conductors feeding the outlet? Should I be specifying a specific receptacle type rated for higher temperatures in an enclosed space that will likely get warm?
  4. UPS Interaction: When specifying the UPS for this load, are there any known issues or considerations regarding the power factor or harmonic distortion that modern server PSUs with these types of processors might introduce that could affect the UPS performance or necessitate a specific type of UPS?
Thank you for your professional insight. I want to make sure this installation is not only functional but also fully compliant and safe.
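The arithmetic behind question 1 can be sketched as follows. The 1600 W and 120 V figures are from the post; the 125% continuous-load factor and the standard breaker sizes reflect the usual NEC 210.19/210.20 treatment of a load expected to run three hours or more (a sketch, not a code determination for any specific installation):

```python
# Branch-circuit sizing sketch for a 1600 W PSU on a 120 V circuit.
PSU_WATTS = 1600
VOLTS = 120

steady_amps = PSU_WATTS / VOLTS          # ~13.3 A theoretical max draw
required_amps = steady_amps * 1.25       # ~16.7 A if treated as continuous

# Smallest standard breaker that covers the continuous-load requirement:
breaker = next(b for b in (15, 20, 30) if b >= required_amps)
print(breaker)  # 20
```

On this arithmetic a dedicated 20 A circuit covers the nameplate rating with the continuous-load margin; a 30 A circuit would only be justified by something other than this load calculation.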
 
1) The CPU will not cause any significant 'inrush' on the AC side of the power supply. The CPU will change the load on the power supply, and this will change the power drawn from the mains circuit, but this will not exceed the power supply rating.

There will likely be an inrush caused by the power supply itself. These power supplies rectify incoming AC to DC and then perform switching magic to generate the appropriate equipment voltages. The power supply input rectifier can draw considerable inrush current when initially energized. Check the datasheet for the power supply, but I expect it is designed to function on a typical 20A circuit.
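As a rough illustration of why that rectifier inrush is bounded, here is a minimal sketch assuming the PSU uses an NTC inrush limiter in series with the input, which is common in this class of supply. The resistance values are illustrative assumptions, not from any datasheet:

```python
import math

V_RMS = 120
v_peak = V_RMS * math.sqrt(2)   # ~170 V worst case at the peak of the AC cycle

# Assumed illustrative values -- check the actual PSU datasheet:
ntc_cold_ohms = 5.0    # inrush-limiting NTC thermistor, cold resistance
wiring_ohms = 0.5      # branch-circuit and cord resistance

# Peak current charging the bulk capacitors at first energization:
i_peak = v_peak / (ntc_cold_ohms + wiring_ohms)   # ~31 A, lasting well under one half-cycle
```

A sub-cycle surge of that magnitude is well inside the ride-through of an ordinary thermal-magnetic breaker, which is why inrush rarely drives branch-circuit sizing for a single server.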

2) I am not familiar enough with NEC 645 to answer.

3) You should design for the expected ambient temperature. I would not try to design for an excessive ambient temperature (e.g., above 40 °C), because the server itself will have temperature limits. Your design should include cooling to keep the server itself at an acceptable temperature.
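On the code side of question 3, the relevant mechanism is conductor ampacity derating for ambient temperature. The NEC ambient correction factors follow the formula below; the 12 AWG / 25 A figure assumes the 75 °C column of Table 310.16, so treat this as a sketch to verify against the actual table and conductor type:

```python
import math

def ambient_correction(ambient_c, insulation_c=75, table_ambient_c=30):
    """NEC 310.15(B)(1) ambient temperature correction factor
    (ampacity tables assume a 30 C ambient)."""
    return math.sqrt((insulation_c - ambient_c) / (insulation_c - table_ambient_c))

base_ampacity_12awg = 25  # 12 AWG copper, 75 C column of Table 310.16 (verify)
corrected = base_ampacity_12awg * ambient_correction(40)  # ~22 A at 40 C ambient
```

So even at a 40 °C closet ambient, 12 AWG copper still supports a 20 A circuit; there is no need for a special high-temperature receptacle at these temperatures, only for keeping the closet within the range the server itself tolerates.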

4) I do not have enough information to answer. I'd suggest that you provide the PSU specifications to the UPS supplier.
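For what it is worth on question 4, modern server PSUs use active power-factor correction, so the gap between real and apparent power is small. A hedged VA-sizing sketch, where the power factor and headroom figures are rule-of-thumb assumptions to confirm with the UPS vendor:

```python
# UPS sizing sketch -- assumed figures, confirm against the actual PSU and UPS specs.
LOAD_WATTS = 1600
PSU_POWER_FACTOR = 0.99   # typical for active-PFC server supplies (assumption)
UPS_HEADROOM = 0.8        # keep the UPS at ~80% load for margin (rule of thumb)

load_va = LOAD_WATTS / PSU_POWER_FACTOR   # ~1616 VA apparent power
min_ups_va = load_va / UPS_HEADROOM       # ~2020 VA minimum UPS rating
```

The other thing to confirm with the vendor is that the UPS output waveform is a true sine wave, since active-PFC supplies can behave poorly on stepped-approximation inverters.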
 
1) Multiply by 125% for wire sizing and you will be fine.
2) Yes. Put each on a separate DC branch circuit. So, for example, I did one that was a 600A inverter at 48V. Each pluggable inverter added 50A. It was a stack. We only installed half the fill, but I sized the wire for the full fill. The branch circuit feeding it was 150A 208V 2-pole. Then the DC bus for the batteries was 100A per string. The loads were 20A 48V DC breakers, or you can do small fuses. I didn't need to fully load out the inverter system. Then a bunch of dedicated 120V receptacles, a 200A 120/208V 3-phase panel in the room, and a couple of 120V APC battery packs with plugs if they had some simple stuff to hold over from the ISP. All the wire at 125% of the load. ~40k in equipment cost. But this is for ISP-style stuff; I am guessing you can do a lot less.
3) Air conditioning (wall mini-split) or a dedicated unit. Temperature probes and sensors for the batteries and between the equipment. Try to maintain 1RU between mounted equipment that you expect to run hot. Hot air and cold air typically flow through the device, so cold air in through the intake aisle and hot air out (it should be sucked out / vented out). You won't need to worry much about the electrical equipment and wire. Mainly, the batteries will deteriorate and fail faster if the room gets too hot, you will end up with higher voltage than you want, etc.
4) No. You will be fine. Just oversize everything to the load. If you get too close to it, like a reduced neutral in the feeder or no spare inverters inserted in the power unit, then you will probably run into some equipment issues or reduced life.
 
Computer power supplies are typically overrated. The nameplate will be for a server with all the CPUs it can take, max RAM, and max disk drives. Even if this server has everything maxed, suppliers typically oversize the power supply. So a server with a 1000W power supply probably has a max draw of 700 to 800 watts. If it has dual power supplies, say two 1000W supplies, power draw will be no more than one supply could provide, because it is designed to run on one (so you can swap out a bad one). If both are working, power is split between them.

Computers are typically not a continuous load. Yes, they can run 24/7, but they don't draw max power all the time. Load varies with processing and disk drive usage. I had many servers that would draw 2 to 3 amps on startup, settle down to 1 to 2 amps, and hit maybe 4 to 5 amps, varying quite a bit, while processing long jobs.

NEC 645 is an optional article. If you design the room to meet all its requirements, then you get some allowances that you normally can't have. For example, with a 645-compliant room, you can run DP-rated power cords under a raised floor. To me, the benefits of 645 were never worth all the HVAC and other requirements, so I never used it. Plus, I could never find power cords marked DP.

So no, don't add anything for inrush, as the breakers can ride through that, and I wouldn't even apply a 125% factor. If you size per the power supply rating, that will be plenty. You could use a PDU with a built-in ammeter to watch it; I know I was surprised when I built server racks with those.
 
NEC 645 is an optional article. If you design the room to meet all its requirements, then you get some allowances that you normally can't have. For example, with a 645-compliant room, you can run DP-rated power cords under a raised floor. To me, the benefits of 645 were never worth all the HVAC and other requirements, so I never used it. Plus, I could never find power cords marked DP.
Agreed, definitely do not use 645.
A dedicated circuit for a 1600W power supply is a good idea, as you will never know how much of that rating will be drawn upon.
 
2) Yes. Put each on a separate DC branch circuit. So, for example, I did one that was a 600A inverter at 48V. Each pluggable inverter added 50A. IT was a stack. We only put in half fill but I sized the wire to the full fill. The branch circuit feeding it was 150A 208V 2 pole. Then the DC bus for the batteries was 100A each string. The loads were 20A 48V DC breakers. Or you can do small fuses. I didn't need to fully load out the inverter system.
FYI, some server manufacturers offer 48V server PSUs, so you can omit the inverters.
 
It's not the wire size or type you should be worrying about; it's the connections and terminations. Use hospital-grade duplex receptacles. Solder ring terminals onto the wires. Trust me, I've done tons of server data connections.
 