Update: See this question too.
I can only answer some parts here:
Is it okay to break all dependencies using interfaces just to make a class testable? It involves significant runtime overhead because of the many virtual calls instead of plain method invocations.
If performance will suffer too much because of it, no (benchmark!). If development will suffer too much, no (estimate the extra effort). If it seems like the overhead won't matter much and the interfaces will improve quality in the long run, yes.
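As a rough illustration of what "breaking a dependency with an interface" means here, a minimal C++ sketch (all names like IDataStore and Processor are hypothetical, not from your code): the class takes an abstract interface instead of a concrete dependency, which is what makes the calls virtual.

```cpp
#include <string>

// Interface introduced only so tests can substitute a fake implementation.
struct IDataStore {
    virtual ~IDataStore() = default;
    virtual std::string load(int id) = 0;   // virtual call instead of a plain invocation
};

// The real dependency, used in production.
class RealDataStore : public IDataStore {
public:
    std::string load(int id) override { return "row-" + std::to_string(id); }
};

// The class under test no longer depends on RealDataStore directly.
class Processor {
public:
    explicit Processor(IDataStore& store) : store_(store) {}   // dependency injected
    std::string process(int id) { return store_.load(id) + "!"; }
private:
    IDataStore& store_;
};

// A test can pass in a FakeDataStore that implements IDataStore,
// at the price of dynamic dispatch on every load() call.
```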
You could always 'friend' your test classes, or a TestAccessor object through which your tests can inspect the internals. That avoids making everything dynamically dispatchable just for testing. (It does sound like quite a bit of work, though.)
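A minimal sketch of that friend/TestAccessor idea, with hypothetical names (Parser, ParserTestAccessor): the class stays non-virtual, and only the accessor is granted access to its innards.

```cpp
class Parser {
public:
    void feed(char c) { if (c == '\n') ++lines_; }
private:
    friend struct ParserTestAccessor;   // only the test accessor can see the internals
    int lines_ = 0;
};

// Lives in test code only; production code never needs to touch it.
struct ParserTestAccessor {
    static int lines(const Parser& p) { return p.lines_; }
};

// In a test, internal state can then be checked without any virtual dispatch:
//   Parser p;
//   p.feed('\n');
//   assert(ParserTestAccessor::lines(p) == 1);
```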
Designing testable interfaces isn't easy. Sometimes you have to add extra methods that expose the innards just for testing. It makes you cringe a bit, but it's good to have them, and more often than not those functions turn out to be useful in the actual application as well, sooner or later.
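For example (again hypothetical names, just to show the kind of method I mean): a small read-only inspection method added for the tests, which later proves handy in the application itself.

```cpp
#include <cstddef>
#include <vector>

class JobQueue {
public:
    void push(int job) { jobs_.push_back(job); }

    // Added originally so tests could verify the queue state; later it turns
    // out to be useful for things like showing progress in the application.
    std::size_t pending() const { return jobs_.size(); }

private:
    std::vector<int> jobs_;
};
```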
When I do a refactoring, it very often happens that I have to completely rewrite the unit tests because of massive logic changes. My changes very often alter the fundamental logic of how the data is processed. I do not see a way to write unit tests that do not have to change during a large refactoring.
Large refactorings by definition change a lot, including the tests. Be happy you have them, as they will keep testing the code after the refactoring too.
If you spend more time refactoring than writing new features, perhaps you should think a bit more before coding to find better interfaces that can withstand more change. Also, writing unit tests before the interfaces are stable is a pain, no matter what you do.
The more code you have written against an interface that changes a lot, the more code you will have to change each time. I think that is where your problem lies. I've managed to keep sufficiently stable interfaces in most places and only refactor parts now and then.
Hope it helps.