Validated against 105,000+ test cases to get IEEE 1584-2018 right — here's what took me 3 weeks to figure out

Aespinosa · Member · Location: Canada · Occupation: Senior Design Engineer
So I went a little overboard.

I decided to implement IEEE 1584-2018 from scratch and validate it properly. Not just against the handful of examples in the standard — I'm talking about the full IEEE test vector dataset. 105,615 cases. Every combination of voltage, current, electrode configuration, gap, working distance.

Got to 100% conformity eventually. But one thing held me up for almost 3 weeks and I never found anyone discussing it online, so maybe this helps someone.
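
For anyone curious, the validation harness itself is nothing exotic. Here's a rough sketch of the loop, assuming the test vectors sit in a CSV; the column names and the `calc_incident_energy()` function are placeholders for whatever your own implementation uses:

```python
import csv

TOLERANCE = 0.001  # accept anything within ±0.1% of the published value

def validate(vector_file, calc_incident_energy):
    """Run every IEEE test vector through an implementation and collect
    the cases whose incident energy falls outside the tolerance band."""
    failures = []
    with open(vector_file, newline="") as f:
        for row in csv.DictReader(f):
            expected = float(row["incident_energy"])  # hypothetical column name
            inputs = {k: float(v) for k, v in row.items() if k != "incident_energy"}
            result = calc_incident_energy(**inputs)
            if abs(result - expected) > TOLERANCE * abs(expected):
                failures.append((row, result))
    return failures
```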

The intermediate arcing current interpolation between 600V and 2700V isn't spelled out clearly in the standard. I assumed linear interpolation. Wrong. It's log-space interpolation. The math looks similar but the results drift 2-3% — and that cascades through the rest of the calculations. Spent way too long chasing that one.
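
If the distinction isn't obvious, here's the difference in code. This is purely an illustration of linear versus log-space interpolation between two anchor points, with made-up numbers, not the standard's actual equations or coefficients:

```python
import math

def interp_linear(v, v_lo, v_hi, i_lo, i_hi):
    # Interpolate the arcing current directly between the two anchors.
    frac = (v - v_lo) / (v_hi - v_lo)
    return i_lo + frac * (i_hi - i_lo)

def interp_log(v, v_lo, v_hi, i_lo, i_hi):
    # Interpolate log10 of the arcing current, then convert back.
    frac = (v - v_lo) / (v_hi - v_lo)
    return 10 ** (math.log10(i_lo) + frac * (math.log10(i_hi) - math.log10(i_lo)))

# Illustrative anchors at 600 V and 2700 V, evaluated at 1000 V:
print(interp_linear(1000, 600, 2700, 25.0, 32.0))  # direct linear result
print(interp_log(1000, 600, 2700, 25.0, 32.0))     # slightly different result
```

The two functions give slightly different answers, and in a full 1584 calculation that small drift in arcing current propagates into the incident energy and arc flash boundary.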

Other stuff that wasn't obvious:

Enclosure correction factors in Table 9 can swing your incident energy 15-20%. The "typical" vs "shallow" choice isn't always clear in the field, but it matters a lot.

The five electrode configurations don't map cleanly to real equipment. I've seen studies where engineers just default to VCB for everything. Sometimes that's conservative, sometimes it's not.

After all this, my numbers match ETAP within 0.3% on incident energy. Which is reassuring, but also makes me wonder how many people actually verify their software results against the IEEE vectors.

Do you? Genuinely curious. And if your equipment doesn't fit the standard enclosure sizes, how do you handle it?
 
The largest source of uncertainty I have in arc flash studies is not knowing how sensitive the results are to specific parameters. It would be nice to know whether, for example, bus spacing makes the result swing wildly, so it's important to emphasize to the field data gathering team to get it right, or whether it doesn't really affect the results much, so just give it your best guess.

It's a question of economics. Of course I want my study to be as accurate as possible, but it doesn't matter how accurate my study methods are if no client will pay for it. Which parameters give the most "bang for the buck", and which are less important? The enclosure correction factor swing you mentioned is a valuable data point to that end. Of course, I could just muck about with the parameters in ETAP to see what the swings are, but that takes a lot of time, and it's not like I have a lot of spare time to fill. It's a suggestion I've made to my employer that we investigate, but so far it hasn't made the priority list.
 
Now that you did it, I don't have to check ETAP. Can you do SKM and EasyPower? :)
Ha! I haven't run a direct comparison against SKM or EasyPower yet — only had access to ETAP results from published case studies.

If anyone has SKM or EasyPower outputs for a specific scenario they'd be willing to share, I'd be happy to run the same inputs through my implementation and compare. Could be an interesting data point.
 
Great question — I actually ran a quick sensitivity analysis to give you real numbers instead of guesses.

Using a base case (480V, 20kA, VCB, 32mm gap, 610mm working distance, 0.1s clearing time), here's how much each parameter moves incident energy:

Parameter | Max % Swing | Notes
------------------------|-------------|---------------------------
Clearing time | 899% | 0.05s→1.0s (biggest factor)
Working distance | 202% | Near inverse-square (distance exponent)
Bolted fault current | 153% | 10kA→65kA
Electrode config | 75% | VCB→HCB
Voltage | 41% | 208V→600V
Enclosure size | 36% | Depends on depth classification
Gap distance | 17% | Within valid LV range

I cross-checked these results with two independent implementations, and they agree exactly (0.00% difference, to the precision I tracked) for every parameter.

The takeaway for field data collection: clearing time and working distance matter way more than most people think. Getting the protective device coordination right is critical. Meanwhile, being off by ±50V on your voltage measurement barely moves the needle.
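
If you want to reproduce this against your own tool, the sweep itself is simple. A minimal sketch below; `incident_energy` is a stand-in for whatever your implementation (or a scripted run of your commercial package) exposes:

```python
def max_swing(incident_energy, base_case, param, values):
    """Hold the base case fixed, sweep a single parameter over a range of
    values, and report the maximum percentage swing in incident energy."""
    energies = [incident_energy(**{**base_case, param: v}) for v in values]
    lo, hi = min(energies), max(energies)
    return 100.0 * (hi - lo) / lo

# Example call (parameter names are whatever your function uses):
# max_swing(my_calc, base, "clearing_time_s", [0.05, 0.1, 0.2, 0.5, 1.0])
```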

Happy to share the full dataset if useful.
 
The major software programs directly use the IEEE 1584 equations and have been extensively tested and validated. As far as comparing the commercial software output with the equations from IEEE 1584, I would expect very little difference. If you want to challenge the validity of the IEEE methodology itself, that's another issue. There's plenty to criticize and some organizations do use variations of the IEEE 1584 methodology. It's definitely not an exact calculation by any means. But IEEE 1584 is a recognized consensus guide for calculation of incident energy and most people want results calculated strictly per IEEE 1584 using conservative assumptions.

Variations between programs would most likely be due to how they determine the arc time and what the default settings are for the various parameters in the IEEE equations, especially working distance.
 
Good points. To be clear — I'm not questioning IEEE 1584 methodology at all. The standard is what it is, and for good reason.

My goal was simpler: verify that my own implementation matches the IEEE equations correctly. The 0.3% match against commercial software was reassuring, but the real validation was running all 105k IEEE test vectors and getting 100% conformity within ±0.1% tolerance.

You're absolutely right that the practical variations come from arc time determination and default assumptions. That's actually what prompted the sensitivity analysis above — understanding which parameters have the biggest impact helps prioritize what to get right in the field.

Appreciate the perspective.
 
Are you willing to share the insights you've gained from this exercise? Or write a book? I will buy the book. This is important information.
Ha! No book yet, but happy to share.

The big insight from the analysis: most field data collection focuses on getting voltage and fault current right. But clearing time and working distance have 4-6x more impact on the final number.

Practical takeaways:
- Clearing time is THE critical factor — get your TCC coordination right
- Working distance — measure actual working position, not just assume "18 inches"
- Electrode configuration matters more than most people think — look at actual bus arrangement
- Voltage and gap? Least sensitive — nominal values are usually fine

If there's interest, I could write up a more detailed guide on practical IEEE 1584 implementation — the gotchas, the edge cases, what the standard doesn't spell out clearly. Let me know what would be most useful.
 
This is a central concern I have with arc flash analysis, so there is definitely interest. And I think it's important for our entire field, not just me. I encourage you to go through with this effort. And as I was insinuating before, I am more than willing to pay to have the result. You should be compensated for your time.
 
Really appreciate that — means a lot coming from someone doing this work in the field.

I'll put something together. Thinking a practical guide covering:
- Parameter sensitivity (what to prioritize in field data collection)
- Implementation gotchas (like the log-space interpolation issue)
- Edge cases the standard doesn't address clearly
- Common assumptions and when they break down

I'll post here when it's ready. Thanks for the encouragement.
 