Small data center

Status
Not open for further replies.

nickelec

Senior Member
Location
US
Hi all, is there a demand factor for CPUs? I have a customer who is requesting to fill his place with 100 CPUs running approximately 5 A each. Is there a demand factor for the service, or do I actually need to count each one as a continuous load at 125%?
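For scale, the no-diversity worst case works out like this (a quick sketch; the 120 V single-phase figure is my assumption, since no voltage is stated):

```python
# Worst case with no diversity: every unit counted as a continuous load at 125%.
# The 120 V single-phase assumption is mine; the question doesn't state a voltage.
units = 100
amps_each = 5.0                                # ~5 A per unit, per the question
connected_amps = units * amps_each             # 500 A connected
continuous_amps = connected_amps * 1.25        # 625 A counted at 125%
connected_kva = connected_amps * 120 / 1000    # 60 kVA at 120 V
print(f"Connected: {connected_amps:.0f} A ({connected_kva:.0f} kVA at 120 V)")
print(f"Counted as continuous (125%): {continuous_amps:.0f} A")
```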

Sent from my SM-G950U using Tapatalk
 

Ponchik

Senior Member
Location
CA
Occupation
Electronologist
If they are going to be running longer than 3 hours, it is considered a continuous load. I don't think the NEC has a demand factor for CPUs.
 

gadfly56

Senior Member
Location
New Jersey
Occupation
Professional Engineer, Fire & Life Safety
???
isn't that the main chip in a computer? a computer? a server? if one of those, I would say it runs 24x7, so continuous

In one sense yes, but in another sense, no. If you turn on the monitoring software for your computer, you can see that your CPU isn't loaded to 100% all the time. Its power draw is proportional to its loading. I've got Task Manager open as I'm typing this, and CPU usage is running about 11%. Memory is dead steady at 44%. For the OP's servers, it all depends on what they are used for. If they are running a general circulation model for global climate, they are probably pedal-to-the-metal. Point-of-sale transactions, maybe not so much.
 
Usually we refer to servers or server chassis (some can contain 8 or more CPU chips), but 5 A for a server chassis isn't very big. Is that from a measurement or from the nameplate? It also depends on the type and number of drives in the chassis.

As for demand, it really depends on what they're doing: lots of systems sit at 10% load all the time and don't draw anywhere near the nameplate rating, but if someone is doing graphics rendering, bitcoin mining, or other CPU-intensive workloads, the processors will be running full-tilt most of the time. The drives and fans will also be running all the time. If the workloads are heavy, then I'd consider most of the systems as continuous loads, but use a measurement if you can, not the size of the power supply (a 1-proc, 2-drive system will not be using much of an 1100 W power supply).
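As a rough sketch of that idea (not an NEC method, and the idle/peak watts are made-up illustrative figures), server draw is often approximated as a linear ramp from idle to peak with utilization:

```python
# Rough linear power model: P = P_idle + (P_peak - P_idle) * utilization.
# The 200 W idle / 600 W peak figures are illustrative, not from any nameplate.
def est_watts(utilization, p_idle=200.0, p_peak=600.0):
    """Estimate draw in watts from CPU utilization in [0, 1]."""
    return p_idle + (p_peak - p_idle) * utilization

for u in (0.10, 0.50, 1.00):
    print(f"{u:4.0%} load -> ~{est_watts(u):.0f} W")
```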
 

Ingenieur

Senior Member
Location
Earth
In one sense yes, but in another sense, no. If you turn on the monitoring software for your computer, you can see that your CPU isn't loaded to 100% all the time. Its power draw is proportional to its loading. I've got Task Manager open as I'm typing this, and CPU usage is running about 11%. Memory is dead steady at 44%. For the OP's servers, it all depends on what they are used for. If they are running a general circulation model for global climate, they are probably pedal-to-the-metal. Point-of-sale transactions, maybe not so much.

I'm not sure processing duty cycle is proportional to power.
Anyway, wouldn't load be based on nameplate?
And if it's on all the time, would it not be continuous?

Moot, though: the real power sinks are the aux devices: monitor, drives, fan, etc.
The fan is likely the largest, by an order of magnitude.
 

Fulthrotl

~Autocorrect is My Worst Enema.~
Hi all, is there a demand factor for CPUs? I have a customer who is requesting to fill his place with 100 CPUs running approximately 5 A each. Is there a demand factor for the service, or do I actually need to count each one as a continuous load at 125%?


well, at 120 V that comes out to 600 watts per cpu. seems a bit high to me.

so, i went looking for an app.... ;-)

this macbook, typing this reply, is drawing 37 watts... i refreshed the clone on the hard drive, writing from an SSD to an SSD, and it pulled 88 watts.

let me see if i can load it down with a benchmark.... running geekbench 4,
it pulled 79 watts peak. cpu core is 157 degrees, thunderbolt port 133 degrees.

this isn't a server or anything, just a laptop. who came up with the 5 amps?
 
I'm not sure processing duty cycle is proportional to power
anyways wouldn't load be based on nameplate?

Power consumed is very much dependent on processing load: an idle system might consume less than a third of its full-bore draw.

For nameplates, it's kind of like lampholders: you can put a 15 W lamp in a 660 W socket. (I have some chassis that contain two 1100 W supplies (redundant), but with only one proc, one drive, and not much memory, they're probably not pulling 300 W.)


moot, the real power sinks are aux devices, monitor, drives, fan, etc
the fan likely the largest by an order of magnitude

A back-of-the-envelope calc for a fairly decent server (tallied in the script at the end of this post):
one Intel Haswell-series processor (E5-2690 v3, 2.6 GHz, 30 MB cache, 12 cores, 135 W*)
Motherboard, estimated at 100 W
Memory, 256 GB @ about 6 W/16 GB = 96 W
four hard drives @ 11 W each (operating) = 44 W (idle is more like 6 W)
toss in 20 W for misc interface boards
= 395 W
a 4" fan is a couple of watts, but the chassis could have more but smaller fans, so let's assume 25 W for fans
= 420 W + power supply inefficiency

* "Thermal Design Power (TDP) represents the average power, in watts, the processor dissipates when operating at Base Frequency with all cores active under an Intel-defined, high-complexity workload. Refer to Datasheet for thermal solution requirements." (copied from Intel)

A monitor might be 100 W, but you only need one :D.
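Here's the same tally as a quick script; the component figures are the estimates above, and the 90% supply efficiency is my assumption for the "power supply inefficiency" line:

```python
# Itemized estimate from the post above; the 0.90 supply efficiency is an assumption.
parts = {
    "CPU (E5-2690 v3, 135 W TDP)":      135,
    "Motherboard (est.)":               100,
    "Memory (256 GB @ ~6 W/16 GB)":      96,
    "4 hard drives @ 11 W (operating)":  44,
    "Misc interface boards":             20,
    "Fans (est.)":                       25,
}
dc_watts = sum(parts.values())   # 420 W at the components
wall_watts = dc_watts / 0.90     # ~467 W from the wall at an assumed 90% efficiency
print(f"Component total: {dc_watts} W; ~{wall_watts:.0f} W at the wall")
```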
 

steve66

Senior Member
Location
Illinois
Occupation
Engineer
I definitely would not consider these continuous. But deciding the correct load to use takes some engineering work and judgement. Sometimes clients like this have a pretty good idea what their peak load will actually be.

Determining the correct amount of cooling (and the electricity required to run that) will be just as important.
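One rough way to bound the cooling electricity (a sketch; the PUE value is an assumption, not a measurement) is to scale the IT load by a power usage effectiveness (PUE) factor:

```python
# Cooling/overhead estimate via PUE; 1.6 is a typical-ish assumption, not measured.
it_load_kw = 40.0   # e.g., 100 servers at an estimated 400 W each
pue = 1.6           # assumed facility PUE (total facility power / IT power)
facility_kw = it_load_kw * pue
cooling_and_overhead_kw = facility_kw - it_load_kw
print(f"IT: {it_load_kw:.0f} kW, facility: {facility_kw:.0f} kW, "
      f"cooling/overhead: ~{cooling_and_overhead_kw:.0f} kW")
```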
 

ron

Senior Member
I definitely would not consider these continuous.

With (100) servers being a relatively small count, though larger than some, the load of a data center is pretty flat.

The loads fluctuate, but effectively, once connected, the load is a good base load that utilities like.

That being said, the nameplate is always overstated, but once you pick a number you feel is real per server, there is no NEC load diversity factor to apply. It is what it is.

I do DCs most of the time, and due to equipment changeouts, we often establish a max per rack and an average per square foot (or per rack), and size the feeders / UPS based on that (sketched below).
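As a sketch of that per-rack budgeting (all numbers below are illustrative assumptions, not from any project):

```python
# Per-rack budgeting as described above; all figures are illustrative assumptions.
racks = 4
kw_per_rack_max = 8.0     # assumed design maximum per rack
avg_kw_per_rack = 5.0     # assumed running average per rack
design_kw = racks * kw_per_rack_max    # size feeders / UPS to this
expected_kw = racks * avg_kw_per_rack  # what the meter will likely see
print(f"Design to {design_kw:.0f} kW; expect ~{expected_kw:.0f} kW typical")
```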
 
they get upgraded every few years. did you keep this in mind?

Some do and some don't. I've seen a lot of 5-7 year old systems purring along in data centers; equipment is only changed out when business reasons dictate.

Also, let's clarify the term "data center": to some it means one room (or more rooms in a building) with hundreds of equipment racks, and to others it's a single room that'll only hold 4-5.

A group of 100 servers is likely to be only 3-4 racks, and that would fit in a large walk-in closet. I'd just call this a server room, but YMMV.
 