How does improving power factor save electricity in industries?

Status
Not open for further replies.

panthripu

Member
I am trying to understand how we can save electrical energy by improving power factor. I understand that the electricity bill we pay is based on kW, and I agree that by improving the power factor we can reduce line losses, so lower-rated conductors can be used. Also, more kW can be drawn from the available MVA. For example, if a 10 kVA transformer is supplying a load at a power factor of 0.8, the active power supplied by the transformer is 8 kW. If I improve the power factor to 0.9, I have 1 kW of additional capacity available to supply other loads. Correct?
But how will the kWh reduce for a fixed load, if I neglect the line losses due to the higher current and the utility company does not charge for the reactive power consumed?
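A quick sketch of the arithmetic in the question (this assumes a 10 kVA transformer, which the "8 kW at 0.8 PF" figure implies; it is not stated in the post):

```python
# Freed-up real-power capacity from PF improvement on a fixed-kVA source.
# Assumes a 10 kVA transformer (implied by "8 kW at 0.8 PF" in the question).

def real_power_kw(kva, pf):
    """Real power (kW) a source of the given kVA can deliver at a given PF."""
    return kva * pf

kva = 10.0
before = real_power_kw(kva, 0.8)  # 8.0 kW
after = real_power_kw(kva, 0.9)   # 9.0 kW
print(f"Extra real power made available: {after - before:.1f} kW")  # 1.0 kW
```

Note this frees up *capacity* on the transformer; it says nothing about the kWh the existing load consumes, which is the point of the question.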
 

suemarkp

Senior Member
Location
Kent, WA
Occupation
Retired Engineer
With all the things you just tossed out, it doesn't. The savings are in electrical generation (more capacity becomes available if PF is improved). If the power company cares about that, you'll have a meter that measures reactive power, and that reactive power charge is where you'll save money if you fix the PF. There is a minimal savings in line losses if you improve the PF of utilization equipment.

What are you asking with "But how does the kWh reduce for a fixed load"? kWh doesn't reduce; VA does when you improve the PF of a given device. In a residential setting there are no reactive power measurements, so you will not save any money by improving PF. That is probably true for commercial and light industrial too.
 

G._S._Ohm

Senior Member
Location
DC area
As it was explained to me:
when you get a fridge delivered, you pay for shipping both the fridge and the crate it comes in.
If your PF is not equal to 1.0, the difference is what you pay to ship the crate.
 

Electric-Light

Senior Member
Another thing is phantom loss. Transformers need to be sized for the maximum load, yet they can't simply be disconnected when the load level is low. Real transformers don't behave like the ideal ones on paper, so even in the idle state these large transformers dissipate on the order of kilowatts, just as cars consume gas while idling without moving.

Some utilities provide a deduction for idle loss, along with conversion efficiency, through their rate schedule if the customer provides their own transformer and the metering is on the distribution side.

http://www.seattle.gov/light/accounts/rates/docs/2010/Oct/2010Oct_mdb.pdf

Now, if a customer receives 4160 V and steps it down to 480/277 V for their own use, but has to use a 480 to 208/120 V transformer sized at 3 VA per load VA to accommodate the harmonic loads of its computer equipment, that transformer will burn off more energy at idle than one sized at 1/3 the VA rating.

By minimizing reactive power, you reduce both the resistive losses in wiring and the higher standby losses associated with the oversized transformers needed to accommodate full load.
 

Electric-Light

Senior Member
With all the things you just tossed out, it doesn't. The savings are in electrical generation (more capacity becomes available if PF is improved). If the power company cares about that, you'll have a meter that measures reactive power, and that reactive power charge is where you'll save money if you fix the PF. There is a minimal savings in line losses if you improve the PF of utilization equipment.

What are you asking with "But how does the kWh reduce for a fixed load"? kWh doesn't reduce; VA does when you improve the PF of a given device. In a residential setting there are no reactive power measurements, so you will not save any money by improving PF. That is probably true for commercial and light industrial too.

The situation is different now than it was 50 years ago. Back then, power factor was pretty much a matter of cos-phi displacement, so you could easily counteract it with capacitance or inductance.

These days, loads with a rectifier and bulk capacitors that pull in power only during the peaks of the wave cycle make up a significant portion of the load (CFLs, consumer ballasts, and computers).

When you've got many customers on the same bus, these loads pollute the power by flat-topping the voltage waveform. Power companies can't really do much about it.

In Europe, computer power supplies are legally required to have power factor correction to prevent imposing non-linear loading on the grid. This additional front end adds cost to the power supply, increases power use (a few watts), and adds a point of failure, yet it is a necessary evil.
 

Jraef

Moderator, OTD
Staff member
Location
San Francisco Bay Area, CA, USA
Occupation
Electrical Engineer
I am trying to understand how we can save electrical energy by improving power factor. I understand that the electricity bill we pay is based on kW, and I agree that by improving the power factor we can reduce line losses, so lower-rated conductors can be used. Also, more kW can be drawn from the available MVA. For example, if a 10 kVA transformer is supplying a load at a power factor of 0.8, the active power supplied by the transformer is 8 kW. If I improve the power factor to 0.9, I have 1 kW of additional capacity available to supply other loads. Correct?
But how will the kWh reduce for a fixed load, if I neglect the line losses due to the higher current and the utility company does not charge for the reactive power consumed?

"...how can we save electrical energy by improving power factor." There's the fly in the ointment. This concept is promoted very heavily by people trying to sell power factor correction devices, but that does not make it true, because it isn't. Your observations are correct: improving power factor does NOT save energy, other than a small amount in I2R losses in the conductors, and only if the PF correction takes place at the point of inductive power use (which flies in the face of all the "single connection point" PF energy-saver scams). Industrial users and some others who are PENALIZED for having poor power factor may save MONEY through PF correction, but not in the form of ENERGY reduction, only in the form of PENALTY reduction. If you are not penalized for poor PF, then correcting it does almost nothing for you.
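The small I2R effect mentioned here can be put in rough numbers. The 480 V three-phase feeder, 50 kW load, and 0.05 ohm per phase below are made-up illustrative values, and `line_loss_w` is a hypothetical helper, not a sizing tool:

```python
# Line-loss change from PF correction at the load: for fixed real power P and
# voltage V, the line current scales as 1/PF, so the I^2*R loss scales as 1/PF^2.
# The 480 V feeder and 0.05 ohm/phase are made-up illustrative values.
import math

def line_loss_w(p_kw, pf, v_ll=480.0, r_ohm=0.05):
    """Three-phase I^2*R loss (W) for a feeder with r_ohm resistance per phase."""
    i = p_kw * 1000.0 / (math.sqrt(3) * v_ll * pf)  # line current, A
    return 3.0 * i**2 * r_ohm

loss_080 = line_loss_w(50.0, 0.80)
loss_095 = line_loss_w(50.0, 0.95)
print(f"Loss at 0.80 PF: {loss_080:.0f} W; at 0.95 PF: {loss_095:.0f} W")
print(f"Saving: {loss_080 - loss_095:.0f} W on a 50 kW load (well under 1%)")
```

The saving works out to a fraction of a percent of the load, which is consistent with calling it "a small amount."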
 

Electric-Light

Senior Member
The question was about saving electrical energy. If you specify active-PFC power supplies for all the computers in the classrooms and eliminate the need for oversized transformers, it increases the utilization efficiency of the kWh received (kWh per month used by the load vs. kWh going into the transformer, factoring in the time the loads aren't used).
 

suemarkp

Senior Member
Location
Kent, WA
Occupation
Retired Engineer
The situation is different now than it was 50 years ago. Back then, power factor was pretty much a matter of cos-phi displacement, so you could easily counteract it with capacitance or inductance. These days, loads with a rectifier and bulk capacitors that pull in power only during the peaks of the wave cycle make up a significant portion of the load (CFLs, consumer ballasts, and computers). When you've got many customers on the same bus, these loads pollute the power by flat-topping the voltage waveform. Power companies can't really do much about it.

In Europe, computer power supplies are legally required to have power factor correction to prevent imposing non-linear loading on the grid. This additional front end adds cost to the power supply, increases power use (a few watts), and adds a point of failure, yet it is a necessary evil.

I wasn't really thinking of non-linear loading as a power factor issue. Does a switch mode power supply have a non-unity power factor? It would seem its current waveform is in phase with the voltage, but it certainly would not look like a sine wave -- it would have a large current peak near the voltage peak. Isn't adding a capacitor technically delaying the current waveform (thereby making the power factor "worse")? But it should spread out the current waveform, perhaps doing more good than harm.

I agree that this certainly can cause issues with utility power service and require excessive capacity. But are power companies penalizing people for non-linear loads?
 

gar

Senior Member
Location
Ann Arbor, Michigan
Occupation
EE
121126-2400 EST

suemarkp:

A non-linear load such as a normal switching regulator, or any capacitor-input filter, only draws current near the voltage waveform peak. However, the result at the output of the capacitor-input filter is a nearly steady value because of the energy storage capability of the capacitor. This steady value and its load current into a resistive load represent an approximately constant flow of electrical energy supplied from the capacitor (at least if the capacitor is large enough). But going into the capacitor are rather short pulses of current. The average input current has to equal the average output current.

A steady DC current has the minimum RMS value for the power delivered to a resistive load. The average output current has to equal the average input current. The short-pulse input current has a much higher RMS value than the steady DC output current. The sharper the peak, the greater the ratio.

The power into this circuit, excluding diode and other component losses, is equal to the DC output power. But the input VA (Vrms*Irms) is substantially greater than the input power, and therefore the power factor is less than 1.00.
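This argument can be checked numerically. The sketch below uses an idealized flat-topped current pulse centred on the voltage peaks and in phase with the voltage (not a real rectifier model; real rectifier currents are peaked, which lowers the PF further), and computes the true PF as P / (Vrms * Irms):

```python
# True PF of a load that draws short, flat current pulses centred on the
# voltage peaks, in phase with the voltage. Idealized waveform, not a
# circuit model; real capacitor-input filters draw peaked current pulses.
import math

N = 10000
VRMS = 120.0
vpk = math.sqrt(2) * VRMS
v = [vpk * math.sin(2 * math.pi * k / N) for k in range(N)]

# Conduct only while |v| exceeds 90% of the peak, at a fixed 5 A magnitude.
i = [5.0 * (1.0 if x > 0 else -1.0) if abs(x) > 0.9 * vpk else 0.0 for x in v]

p_avg = sum(vk * ik for vk, ik in zip(v, i)) / N       # true power, W
v_rms = math.sqrt(sum(x * x for x in v) / N)
i_rms = math.sqrt(sum(x * x for x in i) / N)
pf = p_avg / (v_rms * i_rms)
print(f"True power factor with zero phase shift: {pf:.2f}")
```

Even with no phase shift at all, the short conduction window drives the RMS current well above the DC-equivalent value, so the true PF lands noticeably below 1.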

Look at the flattened tops of power company sine waves today. This largely results from all the capacitor-input filters hung across the power company supply.

You can see this flattening slightly in my photo P-5 at http://beta-a2.com/EE-photos.html .

.
 

Electric-Light

Senior Member
I wasn't really thinking of non-linear loading as a power factor issue. Does a switch mode power supply have a non-unity power factor?
PF = cos(theta) only works if the current and voltage are both sine waves; with a distorted current waveform, the true power factor is lower than cos(theta).

I agree that this certainly can cause issues with utility power service and require excessive capacity. But are power companies penalizing people for non-linear loads?
Absolutely. The definition of power factor is true power divided by VA.
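That definition can be factored into displacement and distortion terms; here is a small sketch with hypothetical numbers (`true_pf` is an illustrative helper, not a standard function):

```python
# True PF = P / (Vrms * Irms) = displacement factor * distortion factor.
# The distortion factor follows from the current THD; numbers are hypothetical.
import math

def true_pf(cos_phi, thd):
    """True power factor from displacement factor and current THD (fraction)."""
    distortion = 1.0 / math.sqrt(1.0 + thd ** 2)
    return cos_phi * distortion

# No phase shift at all, but 100% current THD, still gives PF ~ 0.71:
print(f"{true_pf(1.0, 1.0):.2f}")  # 0.71
```

This is why a rectifier load can read unity on a cos-phi meter yet still draw far more VA than watts.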
 

JoeStillman

Senior Member
Location
West Chester, PA
There are times when power factor correction can save significant energy. If the savings were negligible, the utility wouldn't be penalizing customers.

Almost every time I've been commissioned to retrofit a facility for low power factor, the solution was to install capacitors near the service entrance. When you do that, the only I2R loss reduction is on the utility side. The only incentive for a customer in this situation is to make the penalty go away.

Only once did I have a client with a facility large enough that the loss reduction warranted distributed capacitors. We could have installed a 3,300 kVAR bank at the service entrance and eliminated the penalties, but the distributed solution passed the savings on to the customer. We calculated a 35 kW reduction in I2R losses in feeders and transformers, or about 102,000 kWh per year.
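As a sanity check, the operating hours implied by those two figures can be recovered directly:

```python
# Back-checking the figures above: a 35 kW loss reduction worth ~102,000 kWh/yr
# implies a certain number of loaded hours per year, which we can recover.
hours_per_year = 102_000 / 35.0         # ~2914 h/yr
hours_per_day = hours_per_year / 365.0  # ~8 h/day, i.e. a single-shift facility
print(f"{hours_per_year:.0f} h/yr, about {hours_per_day:.1f} h/day")
```

The numbers are internally consistent with a facility running roughly one shift per day.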
 

Electric-Light

Senior Member
Let's talk more about non-linear loads.

Consider technology classrooms and a server room consisting mainly of computer loads.
The loads are fed from 480 V building power through a 480 to 208/120 V transformer.

75 kVA transformer at 80% load = 60 kVA.
60 kVA x 0.60 PF = 36 kW:
150 workstations @ 200 W each = 30 kW
server room @ 6 kW

You're looking at a transformer loss of about 4.75 kW.
36 kW / 40.75 kW = 88.3% efficiency.

If it were loaded only with fully power-factor-corrected equipment, then at 80% load the transformer could provide almost 60 kW and would lose only about 2 kW.

60 kW / 62 kW = 96.77% efficiency.

That's 2.4 times the loss while delivering 0.6 times the energy, which comes out to four times the loss for every watt of rectifier-capacitor load compared to a watt of linear load.

This translates to about 10,000 kWh of energy wasted per year from the added transformer loss due to the non-linear load, assuming 12 hrs/day and 25 days per month (servers are 24/7/365).

So that's $800-$1,200 per year wasted at kWh rates for a small 75 kVA transformer, simply due to the additional transformer loss.
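The arithmetic in this post can be reproduced directly (the 4.75 kW and 2 kW loss figures are the poster's estimates for harmonic-rich vs. PF-corrected loading, not nameplate data):

```python
# Reproducing the transformer-loss arithmetic above. The loss values are the
# poster's estimates, not measured or nameplate figures.
load_kva = 60.0                      # 75 kVA transformer at 80% load
p_out = load_kva * 0.60              # 36 kW delivered at 0.60 PF
loss_nonlinear = 4.75                # kW, harmonic-rich load
eff_nonlinear = p_out / (p_out + loss_nonlinear)     # ~88.3%

p_out_pfc = load_kva * 1.0           # ~60 kW if fully PF-corrected
loss_pfc = 2.0                       # kW
eff_pfc = p_out_pfc / (p_out_pfc + loss_pfc)         # ~96.8%

extra_loss_kw = loss_nonlinear - loss_pfc
hours_per_year = 12 * 25 * 12        # 12 h/day, 25 days/month
wasted_kwh = extra_loss_kw * hours_per_year          # ~9,900 kWh/yr
print(f"{eff_nonlinear:.1%} vs {eff_pfc:.1%}; ~{wasted_kwh:,.0f} kWh/yr wasted")
```

The 2.75 kW of extra loss over roughly 3,600 operating hours lands at just under 10,000 kWh/yr, matching the figure in the post.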

These calculations apply information gathered from this reference:
http://ecmweb.com/contractor/overcoming-transformer-losses

Losses incurred in customer-owned low-voltage transformers (i.e., 480 to 208/120 V) are paid for by the user.

This is why 80 Plus certified computer power supplies are gaining traction (0.9+ PF and 80%+ efficiency).
 

kwired

Electron manager
Location
NE Nebraska
There are times when power factor correction can save significant energy. If the savings were negligible, the utility wouldn't be penalizing customers.

Almost every time I've been commissioned to retrofit a facility for low power factor, the solution was to install capacitors near the service entrance. When you do that, the only I2R loss reduction is on the utility side. The only incentive for a customer in this situation is to make the penalty go away.

Only once did I have a client with a facility large enough that the loss reduction warranted distributed capacitors. We could have installed a 3,300 kVAR bank at the service entrance and eliminated the penalties, but the distributed solution passed the savings on to the customer. We calculated a 35 kW reduction in I2R losses in feeders and transformers, or about 102,000 kWh per year.

When it does save energy, it is because of I2R losses in the conductors, not because the energy used by the load has changed. When a customer has a high power factor, the POCO also does not need to spend money on correction if it wants correction to happen. The POCO does put correction on its lines at times, but that is to offset the roughly average power factor that is constantly on the lines, coming from homes and small businesses. Larger facilities have more variance in PF, and the POCO does not want to switch correction in and out as the load changes - so they charge many of the larger users a penalty, which those users are more than welcome to correct if they don't wish to pay the penalty.
 