Computer rooms - EPOs

bhsrnd

Senior Member
Location
Fort Worth, TX
I'm sure this will end up like some other threads do so I'll throw this out....

Check with your local AHJ, fire codes, building codes, etc... They will give you a definitive answer on what you should do in order to keep the hospital (and yourself) out of hot water.

'Cause I'm sure, like water and electricity... hot water and electricians don't mix. :D
 

mshields

Senior Member
Location
Boston, MA
We've kind of come full circle on this thread

Your advice is good, and it's certainly essential that you coordinate with the AHJ. But be prepared to be told that you need to have the EPO. At least in MA, 9 times out of 10 if not more often, they require it. They really are not interested in your arguments that Article 645 does not require it. They see it as a simple safety matter: the fire department wants to be able to de-energize the equipment at the entrance to the data center. They don't want to be messing with the trickery of a UPS.

But if you've got a client who is going to lose millions for one outage, you need to be prepared to go to the mattresses for that client, and you need to know what is and is not required by code. If you look at some of the earlier threads, you'll see examples of inspectors/fire marshals saying you need an EPO and then an appeal to the state successfully turning that requirement around.

Mike
 

tallgirl

Senior Member
Location
Great White North
Occupation
Controls Systems firmware engineer
jjs said:
High-density data centers are not able to be cooled by underfloor cooling alone. In-rack cooling is required. Blade servers generate way too much heat per SF to cool properly with in-floor cooling only. You get hot spots in the rack itself. Data center design actually goes in cycles, so something in that 1990 document will eventually come back into the mainstream.

Blade servers generate too much heat per cubic foot -- forget square foot; I have blade servers that are 5 kW or more in what I think is something on the order of a 15 or 20U package. I requested 3 new 208 V, 30 A circuits today for several half-full racks I'm being shipped. Emphasis on half-full.
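For context, here is a quick sketch (Python) of what one of those 208 V, 30 A circuits can actually carry; the single-phase assumption and the 80% continuous-load factor are mine, not something stated in the post:

```python
# Rough back-of-the-envelope check: usable continuous load on an assumed
# single-phase 208 V, 30 A branch circuit, compared against a ~5 kW chassis.

circuit_volts = 208.0
circuit_amps = 30.0
continuous_factor = 0.8          # assumed 80% limit for continuous loads

usable_watts = circuit_volts * circuit_amps * continuous_factor
print(f"Usable per circuit: {usable_watts:.0f} W")   # ~4992 W

chassis_watts = 5000.0           # ~5 kW blade chassis from the post
ratio = chassis_watts / usable_watts
print(f"Circuits per chassis: {ratio:.2f}")          # just over one circuit each
```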

Also, liquid-cooled computers, such as the old mainframes were, are becoming more popular as power densities go up.

That's where the trend is, but there is also a lot of resistance to liquid cooling. Look for data centers to become a lot windier in the future through the use of even higher capacity fans and "vectored cooling", where the servers are designed to route the cooling air more intentionally through the inside of the server, rather than just in one side and out the other.
 

tmillard

Member
EPOs again

I am still going to have an EPO in my room but it won't shut off the telemetry rack.

Tom
 
zbang

I rather like the idea of an EPO, but at a minimum it should be a flush button, not a mushroom, and it had better have a plastic cover over it. Even better is two buttons next to each other wired in parallel, then a single button failure or accidental press won't bring everything down. Don't know how most AHJs feel about this, though.

Regarding cooling, we're now seeing 10+ kW per rack, and it's still going up.
 

tallgirl

Senior Member
Location
Great White North
Occupation
Controls Systems firmware engineer
zbang said:
I rather like the idea of an EPO, but at a minimum it should be a flush button, not a mushroom, and it had better have a plastic cover over it. Even better is two buttons next to each other wired in parallel, then a single button failure or accidental press won't bring everything down. Don't know how most AHJs feel about this, though.

Good ideas, all.

Regarding cooling, we're now seeing 10+ kW per rack, and it's still going up.

I could easily put 15kW in a blade server rack if I wanted to. I might be wrong, but I think the larger non-blade servers I use run about 500 W per U, which means with a 40U rack -- just under 6' -- that's 20 kW. Ignoring cooling for a moment, I have a much harder time with wiring availability than anything else.
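The arithmetic behind that 20 kW figure, as a minimal sketch using the numbers quoted above:

```python
# Per-rack density, using the figures from the post above.
watts_per_u = 500        # ~500 W per U for the bigger non-blade servers
rack_units = 40          # a standard 40U rack

rack_watts = watts_per_u * rack_units
print(f"{rack_watts / 1000:.0f} kW per fully loaded rack")   # 20 kW

# Rack height check: 40U * 1.75 in per U = 70 in, i.e. just under 6 feet.
print(f"Rack height: {rack_units * 1.75:.0f} in")
```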

FWIW, I mentioned the power insanity here when I first arrived on the forum and one of the posters felt I was a lying sack of bovine fertilizer. Nice to see someone else who plays with racks is seeing the same thing :cool:
 

bwyllie

Senior Member
Location
MA
The 20 kW per rack: is that measured or nameplate? I would be interested in what the actual measured load of a blade server is compared to its nameplate data.
 

tallgirl

Senior Member
Location
Great White North
Occupation
Controls Systems firmware engineer
bwyllie said:
The 20 kW per rack: is that measured or nameplate? I would be interested in what the actual measured load of a blade server is compared to its nameplate data.

That depends. I have a blade server that is 1/3 full and it typically runs at 1/2 the nameplate of 5kW. The servers that operate at about 500w per U are 1,500w 3U multiprocessor boxen with more memory and disk than anyone has any business using.

A lot of servers have a much higher nameplate than what they will consume as shipped. For example, a server with slots for 4 or 8 processors might ship with a single processor, a single bank of RAM, and a single hard drive. That might be somewhere in the neighborhood of 300 W or less, fully utilized. But put in another 3 or 7 processors, 24 GB of DDR2 or XDR RAM, and another 750 GB of U320 SCSI disks, and the power requirements go up dramatically -- all in a 3U package.
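A rough illustration of that buildup; the per-component wattages below are illustrative assumptions, not measurements from any particular server:

```python
# Illustrative only -- component wattages here are assumptions. The point is
# how a minimal "as shipped" config compares to the same 3U chassis loaded up.

def config_watts(cpus, ram_banks, disks,
                 w_cpu=95, w_ram_bank=15, w_disk=12, w_base=120):
    """Rough chassis draw: base (fans, planar, PSU losses) + per-component."""
    return w_base + cpus * w_cpu + ram_banks * w_ram_bank + disks * w_disk

shipped = config_watts(cpus=1, ram_banks=1, disks=1)     # minimal as delivered
loaded  = config_watts(cpus=8, ram_banks=8, disks=6)     # fully populated

print(f"Shipped config : ~{shipped} W")    # a few hundred watts
print(f"Loaded config  : ~{loaded} W")     # roughly 4x the shipped draw
```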
 

iwire

Moderator
Staff member
Location
Massachusetts
tmillard said:
I am still going to have an EPO in my room but it won't shut off the telemetry rack.

Tom

Tom, that sounds like a very bad idea.

You will have a button that everyone else will assume kills all the power; when they push it they will see power go out, but there will still be power feeding this 'telemetry rack'.

IMO, at the least you should run that by the AHJ or the fire department.

Either have a fully functional EPO or have none; halfway is looking for trouble.

One other thing: if the EPO does not kill that rack, then you do not have an Article 645 room and all the wiring will have to comply with Chapter 3, so the 'halfway' EPO gains you nothing.
 

bhsrnd

Senior Member
Location
Fort Worth, TX
tmillard said:
I am still going to have an EPO in my room but it won't shut off the telemetry rack.

Tom

To add to iwire's post, think of the legal implications you could place on yourself and the hospital. If someone is shocked to death (God forbid) in that one rack and a passerby thinks that EPO button will shed power and it doesn't... I wouldn't want to be named in that lawsuit.

I agree with the "all or nothing" approach with regard to an EPO in any particular area.
 

bhsrnd

Senior Member
Location
Fort Worth, TX
jjs said:
Also, liquid-cooled computers, such as the old mainframes were, are becoming more popular as power densities go up.

There may be a slight trend toward this technology, but one has to consider that liquid-cooled racks also come with a slew of additional single points of failure. In the world of five nines (although I saw a six-nines advertisement the other day) that we live in, it's not always feasible for the business to use a liquid-cooled solution.
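For reference, a quick sketch of what those availability figures translate to in allowable downtime per year (my arithmetic, not from the post):

```python
# Allowable downtime per year for a few availability targets.
minutes_per_year = 365.25 * 24 * 60

for label, availability in [("three nines", 0.999),
                            ("five nines", 0.99999),
                            ("six nines", 0.999999)]:
    downtime_min = minutes_per_year * (1 - availability)
    print(f"{label:>12}: {downtime_min:8.2f} minutes of downtime/year")
# five nines allows only ~5.3 minutes/year; six nines roughly 32 seconds/year
```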

However, it could be a good solution for an existing data center that cannot be "retroed" to accommodate larger CRAC units and the like, or for the large co-location sites with extremely high-density computing in a 42U rack.

Also, I agree with what tallgirl said. In most environments with blade chassis, the likelihood of a chassis being 100% pegged for utilization is slim. Most data centers with blade chassis may see 50% utilization per chassis (even when fully populated), which means you won't come close to the potential nameplate kW.
 

tallgirl

Senior Member
Location
Great White North
Occupation
Controls Systems firmware engineer
bhsrnd said:
Also, I agree with what tallgirl said. In most environments with blade chassis, the likelihood of a chassis being 100% pegged for utilization is slim. Most data centers with blade chassis may see 50% utilization per chassis (even when fully populated), which means you won't come close to the potential nameplate kW.

Just a minor note on the subject of server utilization: there is a trend in the biz toward server virtualization, which will drive utilization rates and server sizes higher. So, whereas a blade server might be installed with only half its blades today, in the future it will not only be installed with its full complement of blades, each with its full complement of goodies, but it's also likely that each blade will be virtualized into several more servers to make up for any lack of utilization.

I've written here in the past about the server I have at home. It typically consumes well below its peak, but there are times when it approaches the full capacity of certain power buses in the power supply (850 W, provided as +3.3, +5, +12, and -12 volts). Its predecessor eventually got to the point that it was using too much of its power supply (640 W, as I recall) before blowing up.

I've never sat down and compared power consumption, but there are mainframe-like servers with power requirements that make racks look like toys. Not sure how much I can disclose, but I'll say I saw a room being put together where the power cables going to each of the machines were about 2" in diameter. Other machines I've seen have DC bus bars that look to be 1" wide by 1/4" thick going to a processor complex with heat sinks about the same size as the cylinder head on a small motorcycle. I have a nice photo of me in front of one, but as it accentuates my double chin I'm not going to be sharing it with you :cool:
 
bwyllie said:
The 20 kW per rack: is that measured or nameplate? I would be interested in what the actual measured load of a blade server is compared to its nameplate data.

In the server world, nameplate is usually pretty close to measured max. Or, if you don't like the electrical load, use the rated heat load and work back. :grin:

I measured the servers I was working on last year, rated at about 5 kVA per unit, and they were reasonably close to that.
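Working back from heat load to electrical load is just unit arithmetic, since essentially all of the electrical input ends up as heat. A minimal sketch, using the roughly 5 kVA figure above and assuming a power factor near 1:

```python
# Convert an electrical load to its equivalent heat load and cooling capacity.
# 1 W = 3.412 BTU/hr; 1 ton of cooling = 12,000 BTU/hr.

def heat_btu_per_hr(kw):
    return kw * 1000 * 3.412

def cooling_tons(kw):
    return heat_btu_per_hr(kw) / 12000

rack_kw = 5.0    # ~5 kVA server from above, assuming power factor ~1
print(f"{rack_kw} kW -> {heat_btu_per_hr(rack_kw):,.0f} BTU/hr "
      f"-> {cooling_tons(rack_kw):.1f} tons of cooling")   # ~17,060 BTU/hr, ~1.4 tons
```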
 

dbuckley

Senior Member
tallgirl said:
I could easily put 15kW in a blade server rack if I wanted to.

Yep, easy these days. My eyebrows went up when the words "rack mount servers" and "three phase" started being used in the same sentence by normal server people who previously wouldn't have known a phase from an apartment building.

If you have a few blade servers in a rack then you need additional cooling rather than just letting the rack get on with it. I know this isn't news to you, but maybe it is to those who are a bit behind the curve on server technology.

There are a few popular methods of achieving extra cooling, some very scary and impressive, but the next data centre I'm involved in is using specialist fans (Uniflair active panels) with just standard process cooling; it's a moderate-size facility (2 x 200 kVA UPS). Just gotta make sure those floor fans are on UPS power, not just the essential supply, or when the juice goes the blades will be toast before the genset is up :)

I'm also concerned (in the general case) that when someone hits the EPO the power will go off, and so will the cooling; my experience from some years ago with high-heat-output computers is that even when off, if you stop cooling them, they get hotter for a while...
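A very rough sketch of why that matters; every number here is an assumption for illustration (room size, IT load), and it ignores the thermal mass of the equipment and building, which slows the real rise:

```python
# Rough estimate of how fast room air heats up if the IT load rides through
# on UPS but the cooling does not. Assumed values; equipment thermal mass is
# ignored, so the real rise is slower -- but it shows why cooling (or at least
# the fans) belongs on the UPS, not just the generator.

it_load_w   = 100_000          # assumed 100 kW of IT load still running
room_vol_m3 = 500.0            # assumed room volume
air_density = 1.2              # kg/m^3
air_cp      = 1005.0           # J/(kg*K), specific heat of air

air_mass_kg = room_vol_m3 * air_density
rise_c_per_min = it_load_w * 60 / (air_mass_kg * air_cp)
print(f"~{rise_c_per_min:.1f} degC per minute with no cooling")   # ~10 degC/min
```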

Fortunately on this job the design engineers have volunteered up front to not have EPO, so this time I don't have a battle to fight. Yay.

Data centres are getting fun again :)
 

bhsrnd

Senior Member
Location
Fort Worth, TX
tallgirl said:
Just a minor note on the subject of server utilization: there is a trend in the biz toward server virtualization, which will drive utilization rates and server sizes higher. So, whereas a blade server might be installed with only half its blades today, in the future it will not only be installed with its full complement of blades, each with its full complement of goodies, but it's also likely that each blade will be virtualized into several more servers to make up for any lack of utilization.

You are correct, and that was one aspect I didn't think of. We have several virtual server installations running; however, our network team has chosen "beefed up" 3U servers to handle the VM instances versus using our blade centers.

I promise not to ask about the pic. :)
 

tallgirl

Senior Member
Location
Great White North
Occupation
Controls Systems firmware engineer
bhsrnd said:
You are correct, and that was one aspect I didn't think of. We have several virtual server installations running; however, our network team has chosen "beefed up" 3U servers to handle the VM instances versus using our blade centers.

I promise not to ask about the pic. :)

That's what I'm using for my virtualization servers -- fully laden 3U Xeon SMP boxen.

My son is handy with Photoshop. If I can get him to give me a facelift I might share the server pic with y'all.

Here's one of me standing by the back wall in my office. The two plaques on the ends are patents I've been awarded. The middle one is from the NSA, back when I was doing computer based spookery.

[Attached image: BackWall.jpg]


(The double chin is less noticeable, but once again I forgot that putting on makeup for flash photography isn't like putting on makeup for regular light...)
 