I think there are too many variables involved to come up with a simple complexity metric unless you make a lot of assumptions.
A simple SAX-style parser should be linear in time with respect to document size and roughly constant in memory.
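For instance, here's a minimal sketch in Python using the built-in `xml.sax` module (the file name `books.xml` is just a placeholder): it counts elements in a single streaming pass without ever holding the whole tree in memory.

```python
import xml.sax

class ElementCounter(xml.sax.ContentHandler):
    """Counts start tags without building a tree, so memory stays flat."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def startElement(self, name, attrs):
        self.count += 1

handler = ElementCounter()
# One streaming pass: time grows with document size, memory does not.
xml.sax.parse("books.xml", handler)
print(handler.count)
```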
Something like XPath can't be described in terms of the input document alone, since the complexity of the XPath expression itself plays a huge role.
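As a rough illustration, assuming `lxml` is installed and a document made of `book`/`author`/`title` elements (all placeholder names), the two expressions below do very different amounts of work on the same input:

```python
from lxml import etree  # third-party: pip install lxml

tree = etree.parse("books.xml")  # placeholder input file

# Roughly one pass over the document's elements:
cheap = tree.xpath("//book/title")

# For every book, the predicate re-scans the preceding axis looking for
# another book with the same author, so the work can grow roughly
# quadratically with document size:
expensive = tree.xpath("//book[author = preceding::book/author]")
```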
Likewise for schema validation: a large but simple schema may well validate in linear time, whereas a smaller schema with a much more complex structure can show far worse runtime performance.
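If you want to experiment with that, a minimal validation sketch with `lxml` looks like this (`books.xsd` and `books.xml` are placeholders); the interesting part is timing it against schemas of different shapes rather than the call itself:

```python
from lxml import etree  # third-party: pip install lxml

schema = etree.XMLSchema(etree.parse("books.xsd"))
doc = etree.parse("books.xml")

# Returns True/False; details end up in schema.error_log on failure.
print(schema.validate(doc))
```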
As with most performance questions, the only way to get accurate answers is to measure it and see what happens!
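Something along these lines is usually enough to get a feel for the scaling, here using the no-op default `ContentHandler` and placeholder file names of increasing size:

```python
import time
import xml.sax

def time_parse(path):
    """Time a single streaming parse of the given file."""
    handler = xml.sax.ContentHandler()  # no-op handler
    start = time.perf_counter()
    xml.sax.parse(path, handler)
    return time.perf_counter() - start

# Parse documents of increasing size and check whether the
# runtime grows roughly linearly.
for path in ["small.xml", "medium.xml", "large.xml"]:
    print(path, f"{time_parse(path):.3f}s")
```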