What happens when you define the mapping model yourself instead of using an inferred model? It sounds like inferring the mapping model at runtime is causing a performance hit that a mapping model defined explicitly and bundled in your project would avoid.
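To make the suggestion concrete, here is a minimal Swift sketch of a manual migration that prefers an explicit mapping model from the bundle and only falls back to inferring one at runtime. The function name, parameters, and SQLite store type are illustrative assumptions, not details from the question; the `NSMappingModel` and `NSMigrationManager` calls are the standard Core Data APIs.

```swift
import CoreData

// Hypothetical helper: migrate a SQLite store from sourceModel to
// destinationModel, using a bundled mapping model when one exists.
func migrate(sourceModel: NSManagedObjectModel,
             destinationModel: NSManagedObjectModel,
             from sourceURL: URL,
             to destinationURL: URL) throws {
    // Look for an explicit mapping model in the app's bundles first
    // (e.g. one generated in Xcode); infer one only if none is found.
    let mapping = try NSMappingModel(from: nil,
                                     forSourceModel: sourceModel,
                                     destinationModel: destinationModel)
        ?? NSMappingModel.inferredMappingModel(forSourceModel: sourceModel,
                                               destinationModel: destinationModel)

    let manager = NSMigrationManager(sourceModel: sourceModel,
                                     destinationModel: destinationModel)
    try manager.migrateStore(from: sourceURL,
                             sourceType: NSSQLiteStoreType,
                             options: nil,
                             with: mapping,
                             toDestinationURL: destinationURL,
                             destinationType: NSSQLiteStoreType,
                             destinationOptions: nil)
}
```

Timing each branch of the `mapping` lookup separately would show whether model inference (as opposed to the migration pass itself) is where the time is going.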
Update:
I already tried that strategy: a mapping model generated in Xcode results in approximately the same processing time as the model inferred at runtime. The only real difference is that loading the model from the bundle is slightly quicker than inferring it at runtime. Furthermore, once a mapping model is bundled in the app, the automatic migration ceases to be lightweight; I assume it is using the bundled model. Removing the mapping model from the target brings the processing time back to ~4 seconds for automatic lightweight migration.
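The behaviour described above matches how Core Data resolves the automatic-migration options: when both options below are set, Core Data first searches the app's bundles for an explicit mapping model matching the source and destination models, and only infers a (lightweight) mapping if none is found. So bundling a mapping model silently switches "automatic" migration from the lightweight path to a full manual-style migration. The store URL and coordinator setup here are an illustrative sketch, not code from the question.

```swift
import CoreData

// The usual automatic-migration options. With both set, Core Data
// checks the bundle for an explicit mapping model FIRST; a bundled
// mapping model therefore disables the lightweight (inferred) path.
let options: [AnyHashable: Any] = [
    NSMigratePersistentStoresAutomaticallyOption: true,
    NSInferMappingModelAutomaticallyOption: true,
]

// Hypothetical setup showing where the options are applied.
func openStore(model: NSManagedObjectModel, at storeURL: URL) throws
    -> NSPersistentStoreCoordinator {
    let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)
    try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                       configurationName: nil,
                                       at: storeURL,
                                       options: options)
    return coordinator
}
```

Removing the mapping model from the target, as you observed, makes the bundle search fail and restores the inferred, lightweight behaviour.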
That is certainly counter-intuitive. Is your project simple enough to post as an example of this inefficiency, or do you have a test project that isolates the issue? Either way it would be very helpful to take a look at it, so that we can A) hopefully solve the mystery, or B) file it as a rather large bug with Apple, since the reverse should certainly be the case.
How large is the data set you are working with?