First up: my understanding is that CType(b, Int16) isn't the same as (Int16)b. One is a type conversion (CType), the other is a cast: (Int16)b equates to DirectCast(b, Int16) rather than CType(b, Int16).
The difference between the two (as noted on MSDN) is that CType succeeds as long as there is a valid conversion. DirectCast, however, requires the run-time type of the object to already be the target type, so all you're doing is telling the compiler at design time that the object is of that type, rather than asking it to convert the value to that type.
See: http://msdn.microsoft.com/en-us/library/7k6y2h6x(VS.71).aspx
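A quick sketch of the difference (this assumes b is declared As Object and holds a boxed Integer, which isn't spelled out in the question):

    Dim b As Object = 123                     ' boxed as an Integer at run time

    ' CType performs a conversion, so this succeeds and yields 123 as a Short.
    Dim viaCType As Int16 = CType(b, Int16)

    ' DirectCast doesn't convert; the run-time type here is Integer, not Int16,
    ' so uncommenting this line would throw an InvalidCastException.
    ' Dim viaDirectCast As Int16 = DirectCast(b, Int16)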
The underlying problem, though, is that you're trying to convert a 32-bit integer into a 16-bit integer, which is a narrowing (potentially lossy) conversion. Converting from 16-bit to 32-bit is allowed because it's lossless; converting from 32-bit down to 16-bit isn't guaranteed to fit and can overflow. For why it works in C# you can see @Roman's answer - it comes down to the fact that C# doesn't check for overflow by default.
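To illustrate the widening/narrowing distinction, here's a rough sketch (assuming Option Strict On and VB.NET's default integer overflow checking; the variable names are made up):

    Dim sixteen As Int16 = 1234S
    Dim thirtyTwo As Integer = sixteen            ' widening 16 -> 32 bit: implicit, always lossless

    Dim tooBig As Integer = 40000
    ' Dim backDown As Int16 = tooBig              ' narrowing 32 -> 16 bit: compile error with Option Strict On
    Dim backDown As Int16 = CType(tooBig, Int16)  ' compiles, but throws OverflowException at run time
                                                  ' because 40000 > Int16.MaxValue (32767)

The C# equivalent, (short)tooBig, compiles and silently wraps around to a negative number, because C# only checks for overflow when you opt in with checked.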
The value of &H7FFFFFFF And &HFFFF is 65535, which is UInt16.MaxValue. A UInt16 runs from 0 to 65535, but you're trying to cram that into an Int16, which runs from -32768 to 32767, so as you can see it isn't going to fit. Also, the fact that this particular value would fit into a UInt16 is coincidental: adding two 32-bit integers and trying to cram the result into a 16-bit integer (a Short) will frequently overflow, so I'd call this an inherently dangerous operation.
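To put concrete numbers on that (again a sketch, with made-up variable names and VB.NET's default overflow checking assumed):

    Dim masked As Integer = &H7FFFFFFF And &HFFFF     ' = &HFFFF = 65535 = UInt16.MaxValue
    Dim asUnsigned As UInt16 = CType(masked, UInt16)  ' fine: 65535 fits a UInt16 exactly
    Dim asSigned As Int16 = CType(masked, Int16)      ' OverflowException: 65535 > Int16.MaxValue (32767)

In C#, (short)masked compiles and quietly yields -1 (the same 16 bits reinterpreted as a signed value), which is the unchecked behaviour described above.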