I think you're misunderstanding what the cast operator does for reference conversions.
Suppose you have a reference to an instance of C on the stack. That's a certain set of bits. You cast the thing on the stack to A. Do the bits change? No. Nothing changes. It's the same reference to the same object. Now you cast it to I. Do the bits change this time? No. Same bits. Same reference. Same object.
A cast like this performs an implicit reference conversion; it produces no code at runtime, and simply tells the compiler to use different rules when figuring out at compile time which method to call.
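You can see that for yourself with something like this; I'm guessing at the shapes of I, A and C here, and the exact members don't matter for this point:

```csharp
using System;

C c = new C();
A a = (A)c;       // an implicit reference conversion: same bits, same object
I i = (I)(A)c;    // still the same bits, still the same object

Console.WriteLine(object.ReferenceEquals(c, a)); // True
Console.WriteLine(object.ReferenceEquals(c, i)); // True

interface I { void m1(); }
class A : I { public void m1() { } }
class C : A { }
```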
So the casts to "A" are completely irrelevant; the compiler ignores them. All the compiler knows or cares about is that you have an expression of type I and you're calling a method on it. The compiler generates a call that says "at runtime, look at the object reference that is on the stack and invoke whatever is in the I.m1 slot of the object".
The way to figure this stuff out is to think about the slots. Every interface and class defines a certain number of "slots". At runtime, every instance of a class has those slots filled in with references to methods. The compiler generates code that says "invoke whatever is in slot 3 of that object", and that's what the runtime does -- looks in the slot, calls what's there.
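For instance -- ISpeaker, Dog and Speak here are names I'm making up just to show the mechanism, not from your code:

```csharp
using System;

ISpeaker s = new Dog();
// The compiler only knows that s has an ISpeaker.Speak slot, so it emits
// "invoke whatever is in the ISpeaker.Speak slot of the object s refers to".
// At runtime that slot of a Dog holds Dog.Speak, so this prints "Woof".
s.Speak();

interface ISpeaker { void Speak(); }

class Dog : ISpeaker
{
    public void Speak() { Console.WriteLine("Woof"); }
}
```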
In your example there are all kinds of slots. The interface requires three slots, the base class provides more, and the "new" methods of the derived class provide two more. When an instance of the derived class is constructed, all of those slots are filled in, and, understandably, the slots associated with I are filled in with the matching members of the derived class.
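I don't have your exact declarations in front of me, but I'm guessing the shape is something like this; the method bodies are mine, just for illustration:

```csharp
using System;

C c = new C();
// The cast to A doesn't change the reference; the expression has type I,
// so the compiler emits "invoke whatever is in the I.m1 slot of that object".
// Because C re-lists I, its "new" m1 fills that slot, and this prints "C.m1".
((I)(A)c).m1();
((I)c).m1();      // exactly the same call: prints "C.m1"
((I)c).m3();      // the m3 slot is still filled by the base class: prints "A.m3"

interface I { void m1(); void m2(); void m3(); }

class A : I
{
    public void m1() { Console.WriteLine("A.m1"); }
    public void m2() { Console.WriteLine("A.m2"); }
    public void m3() { Console.WriteLine("A.m3"); }
}

// C re-implements I: its "new" methods fill the I.m1 and I.m2 slots,
// and the inherited A.m3 fills the I.m3 slot.
class C : A, I
{
    public new void m1() { Console.WriteLine("C.m1"); }
    public new void m2() { Console.WriteLine("C.m2"); }
}
```

If that guess is wrong and your C does not re-list I in its base class list, then the I slots stay mapped to A's methods and the "new" methods never get into them.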
Does that make sense?