This question is not about what the Open-Closed Principle (OCP) is. And I am not looking for simplistic answers, either.
So, here is why I ask this. The OCP was first described by Bertrand Meyer in the late 80s, and it reflects the thinking and context of that time. The concern was that changing source code to add or modify functionality, after the code had already been tested and put into production, would be too risky and costly. So the idea was to avoid changing existing source files as much as possible, and to add to the codebase only in the form of subclasses (extensions), as sketched below.
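To make that classic reading concrete, here is a minimal sketch in Java; the class names (`InvoicePrinter`, `TaxedInvoicePrinter`) are hypothetical, invented purely for illustration:

```java
// Existing, tested class: under a strict OCP reading, this file is never edited again.
class InvoicePrinter {
    public String print(double amount) {
        return "Invoice: " + amount;
    }
}

// A new requirement (say, printing with tax) is delivered purely by extension.
class TaxedInvoicePrinter extends InvoicePrinter {
    private final double taxRate;

    TaxedInvoicePrinter(double taxRate) {
        this.taxRate = taxRate;
    }

    @Override
    public String print(double amount) {
        return super.print(amount * (1 + taxRate));
    }
}
```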
I may be wrong, but my impression is that network-based version control systems (VCS) were not widely used back then. My point is that a VCS is essential for managing source code changes: it makes every change reviewable and reversible.
The idea of refactoring is much more recent. The sophisticated IDEs that enable automated refactoring operations certainly did not exist back then. Even today, many developers don't use the best refactoring tools available. The point here is that such modern tools let a developer safely change thousands of lines of code in a few seconds.
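As a rough illustration of what I mean (again with made-up class names), a single automated "extract method" operation produces a before/after like this, rewriting every call site in one compile-checked step:

```java
// Before: the discount logic is buried inline.
class OrderBefore {
    double total(double price, int quantity) {
        return price * quantity - price * quantity * 0.05;
    }
}

// After a single automated "extract method" operation: the IDE creates the
// new method and rewrites every call site for us, without manual editing.
class OrderAfter {
    double total(double price, int quantity) {
        return applyDiscount(price * quantity);
    }

    private double applyDiscount(double gross) {
        return gross - gross * 0.05;
    }
}
```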
Lastly, the idea of automated developer testing (unit and integration tests) is widespread today, and many free, sophisticated tools support it. But what good is creating and maintaining a large automated test suite if we rarely or never change existing code? New code, as the OCP prescribes, will only require new tests.
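For instance, a regression test like the following sketch (JUnit 5, reusing the hypothetical `InvoicePrinter` from above) only earns its keep when we are allowed to modify the code it guards:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class InvoicePrinterTest {
    @Test
    void printsPlainInvoice() {
        // Pins down current behavior so the class can be modified safely later.
        assertEquals("Invoice: 100.0", new InvoicePrinter().print(100.0));
    }
}
```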
So, does the OCP really make sense today? I don't think so. Instead, I would prefer to change existing code when adding new functionality, if the new functionality does not require new classes. Doing so keeps the codebase simpler, smaller, and much easier to read and understand. The risk of breaking existing functionality is managed through a VCS, refactoring tools, and an automated test suite.
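To illustrate the alternative I am arguing for, here is the same hypothetical `InvoicePrinter` again, this time modified in place rather than subclassed:

```java
// The same InvoicePrinter as in the first sketch, modified in place
// instead of subclassed.
class InvoicePrinter {
    public String print(double amount) {
        return "Invoice: " + amount;
    }

    // New functionality added directly to the existing, tested class.
    public String printWithTax(double amount, double taxRate) {
        return print(amount * (1 + taxRate));
    }
}
```

The change shows up in the VCS history and the old behavior stays covered by the existing test, so the risk the OCP worries about is contained without adding a new class.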