So what I'm gathering from this is that there is no requirement to use the smallest conductor size
possible based on the feeder rating, so there is a little room for cheating by using #2 instead of
#3 CCCs as the basis for the calculations. Is this correct and generally acceptable?
In NEC 2014 and NEC 2017, the language in 250.122(B) reads "the minimum size that has sufficient ampacity for the intended installation".
In other words, your starting point is the minimum size you could use if circuit length were not a factor. So pretend the circuit is only 20 ft long: what size could you use with that quantity of wires, that wire type, conduit, load, circuit breaker/fuse, and environment? Considering local factors alone, what is the minimum size?
So if adjustment and correction factors such as bundling require #4/0 on a 200 A circuit, then #4/0 is your starting point for this rule, even though #3/0 would otherwise be enough if the bundling adjustment didn't apply. You forget about #3/0, because it does not have sufficient ampacity for the intended installation. So #4/0 with a #6 EGC would be the starting point, and when you upsize to curtail voltage drop, you scale the two sizes together proportionally by circular-mil (kcmil) area to get the corresponding new EGC.
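To make that scaling concrete, here is a minimal sketch in Python (my own illustration, not anything from the NEC or this thread). The circular-mil areas come from NEC Chapter 9, Table 8, abbreviated to common sizes, and the function name scaled_egc is made up for this example:

```python
# Conductor areas in circular mils, per NEC Chapter 9, Table 8 (abbreviated).
AREA_CMIL = {
    "14": 4110, "12": 6530, "10": 10380, "8": 16510,
    "6": 26240, "4": 41740, "3": 52620, "2": 66360,
    "1": 83690, "1/0": 105600, "2/0": 133100, "3/0": 167800,
    "4/0": 211600, "250": 250000, "300": 300000, "350": 350000,
}

def scaled_egc(local_min: str, upsized: str, base_egc: str) -> str:
    """Return the smallest listed size whose area keeps the EGC
    proportional to the ungrounded-conductor upsizing."""
    # How much bigger the ungrounded conductors got, by area.
    ratio = AREA_CMIL[upsized] / AREA_CMIL[local_min]
    # The EGC area must grow by at least the same ratio.
    required = AREA_CMIL[base_egc] * ratio
    for size, area in sorted(AREA_CMIL.items(), key=lambda kv: kv[1]):
        if area >= required:
            return size
    raise ValueError("required EGC exceeds the sizes listed")

# Example from the post: bundling makes #4/0 the local minimum on a 200 A
# circuit, with a #6 EGC. Upsizing to 250 kcmil for voltage drop gives a
# ratio of 250000/211600 ~ 1.18, so the EGC needs ~31,000 CM of copper,
# which pushes it up to #4.
print(scaled_egc("4/0", "250", "6"))  # -> 4
```

The point of doing it this way is that the comparison is by circular-mil area, not by counting AWG steps, so the new EGC can land a different number of size jumps away than the ungrounded conductors did.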