I'm trying to wrap my mind around this concept to make sure I fully understand it. The scenario is based on a calculation example in the 2011 Handbook, but because I've been studying the NEC 2008 code book for some time now, any code references below are from the 2008 edition. Also, please bear with me here; this scenario is somewhat complex:
OCPD terminations rated at 75 C
125A noncontinuous + 200A continuous loads
Therefore, per 215.3: OCPD = 125 + 1.25 * 200 = 375A; per 240.6, the next standard size gives OCPD = 400A.
3-phase, 4-wire feeder with a nonlinear load; therefore, per 310.15(B)(4)(c), the neutral is treated as current-carrying, so all four conductors in the same raceway are subject to the adjustment factor from 310.15(B)(2)(a), or 80%.
per 215.2(A)(1), Min Feeder Conductor Ampacity = 125 + 1.25 * 200 = 375A
So let's assume a 600kcmil conductor @ 90C (rated 475A) is chosen. This means:
600kcmil @ 90C Ampacity = 475A * 80% = 380A, which can be protected by the 400A OCPD (next standard size up, permitted per 240.4(B)), so we're good here.
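Just to keep the arithmetic straight, here's everything up to this point as a quick Python sketch (nothing new; the 475A figure is the 600kcmil 90C value from Table 310.16, and the standard-size list is only a partial excerpt from 240.6(A)):

    # Loads from the Handbook example
    noncontinuous = 125  # A
    continuous = 200     # A

    # 215.3 and 215.2(A)(1): 100% of noncontinuous + 125% of continuous
    min_ocpd_rating = noncontinuous + 1.25 * continuous       # 375 A
    min_feeder_ampacity = noncontinuous + 1.25 * continuous   # 375 A

    # 240.6: round up to the next standard OCPD size (partial list only)
    standard_sizes = [300, 350, 400, 450, 500]
    ocpd = next(s for s in standard_sizes if s >= min_ocpd_rating)  # 400 A

    # 310.15(B)(2)(a): 80% adjustment for four current-carrying conductors
    ampacity_600kcmil_90C = 475 * 0.80  # 380 A, protectable by the 400A OCPD
    print(min_feeder_ampacity, ocpd, ampacity_600kcmil_90C)  # 375.0 400 380.0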
However, 110.14(C) states that when a conductor with a higher temperature rating is terminated on a 75C termination, the ampacity used at that termination must be based on the lower temperature (i.e., the 75C column). So the question is, how does this ampacity get treated?
Option 1:
My original assumption is that the adjustment factor exists to derate the conductor ampacity based on the number of current-carrying conductors in the same raceway, which doesn't change. So the 80% still applies:
600kcmil @ 75C = 420A * 80% = 336A
This also means that the load current cannot exceed this value. And since the 125% continuous-load factor is there to account for the additional heat of continuous operation, I'll neglect that factor here, as that heat is already accounted for by running the 90C insulation at the lower 75C ampacity. Therefore:
Load Current = 125 + 200 = 325A < 336A so we're good.
If I hadn't neglected the 125% factor, the check would have failed; in other words:
125 + 1.25 * 200 = 375A > 336A (the conductor ampacity at 75C after adjustment), therefore the selection is not good.
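Put another way, here's Option 1 as the same kind of quick sketch (just restating the numbers above):

    # Option 1: apply the 80% adjustment to the 75C table value, then
    # compare against the actual load (not the 125%-adjusted figure)
    ampacity_75C_adjusted = 420 * 0.80  # 336 A
    actual_load = 125 + 200             # 325 A
    load_at_125pct = 125 + 1.25 * 200   # 375 A

    print(actual_load <= ampacity_75C_adjusted)     # True  -> passes
    print(load_at_125pct <= ampacity_75C_adjusted)  # False -> would fail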
Alternatively I could've said:
Option 2:
But since the NEC doesn't seem to be clear on this, I wondered if it could be treated with the 75C requirement from 110.14(C) without adjusting the ampacity:
600kcmil @ 75C = 420A
which means that the min feeder ampacity requirement of 375A (using the 125% adjustment for continuous load) is met by the 420A ampacity.
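And Option 2 as a sketch:

    # Option 2: use the unadjusted 75C table value at the termination and
    # compare it against the 125%-adjusted minimum feeder ampacity
    ampacity_75C = 420                      # 600 kcmil @ 75C, no 80% adjustment
    min_feeder_ampacity = 125 + 1.25 * 200  # 375 A
    print(min_feeder_ampacity <= ampacity_75C)  # True -> passes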
This doesn't seem right to me because we are neglecting the 80% adjustment for multiple current-carrying conductors in the same raceway.
I know from the Handbook example calculation that they state the 600kcmil @ 90C is a fit for this application and that the 75C value should be compared to the load current. What they don't say is how, and it isn't exactly clear to me from the code itself which approach is correct. Any help would be appreciated.