I'm trying to understand the kW/kVA/PF relationship in the real world, and I'm hoping you guys can give me a hand. I understand PF = kW/kVA. Here's the scenario:
I'm working on an addition to an existing 120/240V, single-phase service. I used an IDEAL 400A AC clamp meter and got the following:
Panel A = 62A
Panel B = 3A (almost entirely receptacle loads)
Panel C = 69A
Panel D = 22A
(each reading is the larger of the two line currents feeding the panel).
With the panels totaling 156A: 156A x 240V = 37,440 (W or VA? Shouldn't it be W, since that's the real power?). (I don't want to get into the nitty-gritty details; I know there are other factors to consider, such as loads out of phase, etc. I don't care about the exact calculations, I just want to understand what's going on.)
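Just to lay out how I'm doing the math, here's a quick sketch (the 0.9 power factor is a made-up number for illustration, not something I measured):

```python
# Sketch of the relationship as I understand it; the PF here is assumed, not measured.
panel_currents = [62, 3, 69, 22]   # larger line current per panel, in amps
voltage = 240                      # line-to-line voltage

apparent_power_va = sum(panel_currents) * voltage   # V x A -> volt-amperes (apparent power)
assumed_pf = 0.9                                    # hypothetical power factor
real_power_w = apparent_power_va * assumed_pf       # kW = kVA x PF

print(f"Apparent power: {apparent_power_va / 1000:.2f} kVA")            # 37.44 kVA
print(f"Real power at PF {assumed_pf}: {real_power_w / 1000:.2f} kW")   # 33.70 kW
```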
The utility meter had a current demand reading of 42 kW. I'm trying to determine whether the difference between the utility reading and the panel readings is due to PF, or possibly just loads switching on and off.
If the utility meter and the panel readings (37,440) are both in W, where do you get the VA to determine the power factor? (I hope I'm not getting ahead of myself.)
Do you even need to consider the PF in this type of situation? What is the acceptable tolerance for treating the PF as negligible (e.g., +/- 0.05) for calculation purposes?
With equipment nameplate ratings, are the manufacturers assuming the PF is close enough to 1 to be negligible? I would expect a PF < 1 under test conditions.
With kW = kVA x PF, is it better to calculate the loads, services, etc. using kVA, since it will be larger than (or equal to) kW?
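Again with made-up numbers, just to check that I have the direction of the relationship right:

```python
# Rearranging kW = kVA x PF gives kVA = kW / PF, so kVA >= kW whenever PF <= 1.
real_power_kw = 33.7    # hypothetical real power
assumed_pf = 0.9        # hypothetical power factor

apparent_power_kva = real_power_kw / assumed_pf
print(f"{apparent_power_kva:.1f} kVA")   # ~37.4 kVA -- the larger figure to size against
```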
I'm sure I'm brain-f***ing this, but I want to make sure I understand these relationships and what's going on. Thanks in advance!!
-solarEI