In my opinion, you are up against two somewhat competing design philosophies: best practice for reliability, a concept that has prevailed for as long as the Industrial Age itself, versus best practice for efficiency, a push that dates from the first 1970s "energy crisis".
From an efficiency standpoint, an AC induction motor operates at its highest efficiency when it is nearest to being fully loaded. That's because there is a fixed energy "burden" in making that collection of iron and copper into a motor in the first place. Most of that burden stays the same regardless of the load on the motor, so when the motor is loaded close to its maximum capability, the fixed burden represents a smaller piece of the total energy consumed to accomplish the task. So if your load requires 7 HP, using a 7.5 HP motor that is 95% efficient at nearly full load is more efficient than using a 10 HP motor at 70% load, because the fixed energy used to create flux in the 10 HP motor is going to be higher than the fixed energy used to create flux in the 7.5 HP motor.
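To make that concrete, here is a rough numeric sketch. The loss model (a fixed loss plus copper losses that scale with the square of the load fraction) and every figure in it are illustrative assumptions I've picked, not catalog data for any real motor; the point is only to show how higher fixed losses in the bigger frame can tip the comparison.

```python
# Illustrative sketch only: the loss figures are made-up round numbers,
# not data for any real motor. The point is the shape of the math.

def efficiency(p_out_w, p_fixed_w, p_copper_rated_w, p_rated_w):
    """Efficiency with a simple two-part loss model:
    fixed (core/friction) losses plus copper losses scaling with load^2."""
    load_fraction = p_out_w / p_rated_w
    p_copper = p_copper_rated_w * load_fraction ** 2
    p_in = p_out_w + p_fixed_w + p_copper
    return p_out_w / p_in

HP = 746.0          # watts per horsepower
load = 7 * HP       # the driven load needs 7 HP

# Assumed loss budgets (hypothetical): the 10 HP frame carries more iron,
# so its fixed losses are taken to be higher.
eff_7p5 = efficiency(load, p_fixed_w=150, p_copper_rated_w=300, p_rated_w=7.5 * HP)
eff_10  = efficiency(load, p_fixed_w=250, p_copper_rated_w=400, p_rated_w=10 * HP)

print(f"7.5 HP motor at ~93% load: {eff_7p5:.1%}")   # ~92.7%
print(f"10  HP motor at  70% load: {eff_10:.1%}")    # ~92.1%
```

With these assumed numbers the gap is only a fraction of a percent, but it always lands in favor of the motor running nearer its rating, which is the efficiency argument in a nutshell.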
But the long-established practice of applying a 20-25% "fudge factor" when sizing motors is based on over a century of empirical evidence that you get maximum reliability from equipment that is not pushed to its limits all the time. A 7.5 HP motor is not called that because it ceases to deliver power above that level; it's called that because if it is asked to deliver more than that continuously, it fails sooner. Heat × time = failure.
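As a rough illustration of the heat × time point, a common rule of thumb is that winding insulation life roughly halves for every ~10 °C of sustained over-temperature. The sketch below just evaluates that rule of thumb; it is not a life prediction for any particular machine, and the 10 °C halving interval is itself only an approximation.

```python
# Rule-of-thumb sketch: insulation life roughly halves for each ~10 degC
# of sustained operation above its temperature rating.

def relative_life(overtemp_c, halving_interval_c=10.0):
    """Fraction of nominal insulation life at a sustained over-temperature."""
    return 0.5 ** (overtemp_c / halving_interval_c)

for overtemp in (0, 10, 20, 30):
    print(f"{overtemp:>2} degC over rating -> ~{relative_life(overtemp):.0%} of nominal life")
```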
So a machinery OEM that wants to impress a buyer with good efficiency numbers will use the 7.5 HP motor, knowing it will likely at least outlast the warranty. But an end user, recognizing that one hour of unscheduled downtime could wipe out a year's worth of savings from a few percent difference in energy use, will often opt for the 10 HP motor and run it for longer without problems.
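A quick back-of-the-envelope comparison shows why the end user thinks that way. Every number below (duty hours, energy price, downtime cost, the two efficiencies) is an assumption I've picked purely for illustration; plug in your own plant's figures.

```python
# Back-of-envelope comparison, with assumed numbers: yearly cost of a ~1%
# efficiency difference vs. the cost of one hour of unscheduled downtime.

HP = 746.0
load_kw = 7 * HP / 1000.0          # ~5.2 kW shaft load
hours_per_year = 6000              # assumed duty
energy_price = 0.12                # assumed $/kWh

eff_small, eff_large = 0.93, 0.92  # assumed efficiencies at this load point
input_small = load_kw / eff_small * hours_per_year   # kWh/year, smaller motor
input_large = load_kw / eff_large * hours_per_year   # kWh/year, larger motor
yearly_savings = (input_large - input_small) * energy_price

downtime_cost_per_hour = 5000      # assumed; varies wildly by plant

print(f"Yearly energy saving from the smaller motor: ~${yearly_savings:,.0f}")
print(f"Cost of one hour of unscheduled downtime:    ~${downtime_cost_per_hour:,.0f}")
```

With those assumptions the energy saving is on the order of tens of dollars a year, while a single hour of lost production costs far more, which is exactly the trade the end user is weighing.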