I have a little bit to add: I made some comments here about LLVM and VCC, each of which I was evaluating to some extent.
I do think VCC is nice (it is built on CCI), however I also believe the claims (hah) with respect to its utility in the Hyper-V audit or implementation may be a bit more marketing sleight of hand than a practical reality. No question the tools are helping save time and money on that project, but I have serious doubts that they are the primary means of improvement in that code base.
I was able to formulate a single-function test, possibly ill-conceived, which validated (to the best of my knowledge); the scope was "453/0 inst, 0 lits, 1 children [rec:33, 13 inst/cnfl]", but all of the conflicts came from VccPrelude.bpl and not my test case. So from what I can tell (at least from looking at the Z3 axiom output), it did not detect a fault in my code... I believe it would not even have generated the bpl/mx files, etc., if it had.
Yet this exact code, when compiled natively, is subtly flawed and can be used to gain control of the $PC register quite easily.
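For flavor, here is a minimal sketch of the kind of bug I mean. This is hypothetical code, not my actual test case (parse_record, payload, and the sizes are all made up): the length field *is* validated, just against the wrong limit, so the check bounds the read from the packet but not the write into the stack buffer.

    #include <stddef.h>
    #include <string.h>

    /* Sketch only: "claimed" is checked against the packet size
     * rather than against sizeof payload, so a large packet lets
     * an attacker copy up to 255 of their own bytes over a 64-byte
     * stack buffer. On a typical native stack layout with no
     * protector, that runs over the saved registers and the return
     * address, i.e. direct control of $PC. */
    static void parse_record(const unsigned char *pkt, size_t pktlen)
    {
        char payload[64];
        size_t claimed;

        if (pktlen < 1)
            return;
        claimed = pkt[0];            /* attacker-supplied length   */

        if (claimed > pktlen - 1)    /* bounds the read from pkt,  */
            return;                  /* not the write to payload[] */

        memcpy(payload, pkt + 1, claimed);
        (void)payload;               /* ... process payload ...    */
    }

A packet with a length byte of, say, 200 sails past the check and the attacker's own bytes land on the return address; the code still reads as if it were bounds-checked.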
It will be interesting to see how these projects unfold; I can say that LLVM fared much worse in this comparison (not to just bag on cci/vcc/boogie/...).
I like the managed PDB/PE reader-writer functionality and the level of polish overall.
I can say also, similarly to the other post where I refer to WG23, that I am very pleased with the overall direction in these circles; for years, fundamental flaws (e.g. scanf()) have aided and abetted some of the worst offending bits of code, keeping them from ever being cleaned up.
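To make the scanf() point concrete, here is the canonical footgun (a toy sketch; the buffer name and size are arbitrary): "%s" takes no bound at all, while adding a width to the same call makes it safe.

    #include <stdio.h>

    int main(void)
    {
        char name[16];

        /* flawed: "%s" is unbounded, so any input longer than
         * 15 bytes writes past the end of name[] */
        /* scanf("%s", name); */

        /* bounded: the width limits the write to 15 chars + NUL */
        if (scanf("%15s", name) == 1)
            printf("hello, %s\n", name);
        return 0;
    }

The fix has been one character away for decades, which is exactly why the WG23-style cataloguing of these flaws is overdue.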
CCI/LLVM are, for sure, orders of magnitude more effective than anything most developers are used to being able to wield to fix bugs (lint, anybody? ;)
To speak specifically to your question about metadata: it is much better than reflection, both performance-wise and in ease of use (there are huge numbers of samples that manipulate any commonly known metadata).