I have recently been running fault current calculations/studies and have been wondering what effect the utility X/R ratio has on the results.
I'll start by asking how a modern software package converts the entered utility information for use in calculations with the rest of the system. Once the system model is built, it has a specified combined (Thevenin) impedance and X/R ratio, which is then combined with the utility fault contribution for the short-circuit analysis. Most of the time the utility contribution is entered as an available fault current value with an associated X/R ratio. Does the software take this current and X/R value and, using the utility voltage, convert the current to a per-unit impedance on the MVA base entered for the system? Does it then combine the utility and system impedance values for use in any fault current calculation on the system?
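To make concrete what I think is happening, here is a minimal sketch of that conversion in Python. The helper name and the numbers are mine, assuming a three-phase source with the available fault current given at the utility's line-to-line voltage:

```python
import math

def utility_to_pu_impedance(i_fault_a, xr_ratio, v_ll_kv, s_base_mva):
    """Convert an available 3-phase fault current (A) and X/R ratio at a
    line-to-line voltage (kV) into per-unit R and X on an MVA base.
    (Hypothetical helper; not from any particular package.)"""
    sc_mva = math.sqrt(3) * v_ll_kv * i_fault_a / 1000.0   # short-circuit MVA
    z_pu = s_base_mva / sc_mva                             # |Z| in per unit
    theta = math.atan(xr_ratio)                            # impedance angle
    return z_pu * math.cos(theta), z_pu * math.sin(theta)  # (R_pu, X_pu)

# e.g. 10 kA available at 13.8 kV with X/R = 15, on a 100 MVA base
r_pu, x_pu = utility_to_pu_impedance(10_000, 15, 13.8, 100)
print(f"R = {r_pu:.5f} pu, X = {x_pu:.5f} pu")
```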
Assuming what I said above is true, I am looking at the effect the X/R ratio has on the utility contribution. For instance, say we are analysing a system that has a specified impedance and X/R value, and we add a utility fault contribution specified in amps with an associated X/R ratio. Now say we keep the utility fault current value the same but raise the X/R ratio. Would this represent a worse case?
From what I can tell, for a given utility fault current, raising the utility X/R ratio will make the combined utility-and-system impedance lower, and therefore result in higher fault magnitudes for faults anywhere in the system at a fixed voltage.
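To sanity-check this, here is a quick sweep with made-up per-unit numbers, assuming the utility and system impedances add as complex phasors and that the utility impedance magnitude is fixed by the available fault current:

```python
import cmath, math

def fault_current_pu(z_util_mag, xr_util, z_system):
    """Hold |Z_utility| fixed (set by the available fault current), point it
    at the angle implied by X/R, add the system Thevenin impedance as a
    phasor, and return the symmetrical fault current at 1.0 pu voltage."""
    z_util = cmath.rect(z_util_mag, math.atan(xr_util))
    return 1.0 / abs(z_util + z_system)

# Made-up numbers: |Z_utility| = 0.05 pu, system Z = 0.02 + j0.20 pu
z_sys = complex(0.02, 0.20)
for xr in (2, 5, 10, 20, 40):
    print(f"utility X/R = {xr:>2}: I_fault = {fault_current_pu(0.05, xr, z_sys):.3f} pu")
```

Sweeping X/R this way shows how the alignment between the utility and system impedance angles drives the combined magnitude, which is exactly what I am trying to pin down.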
Is it correct that raising the utility X/R ratio will result in worst-case faults?