(This is not an authoritative answer; just trying to trawl my memory.)
In a sense, any time you code a 'state monad' in a language, you're using the type system as a makeshift effect system. "State" and "IO" in Haskell capture this notion (IO captures a whole lot of other effects as well). I vaguely remember reading papers about various languages that use advanced type systems, including things like dependent types, for finer-grained management of effects, so that, for instance, the type/effect system could track which memory locations a given computation would modify. This is useful because it provides a way to let two functions that modify mutually exclusive bits of state "commute". Monads don't typically commute, and different monads don't always compose well with one another, which often makes it hard to type (read: assign a static type to) 'reasonable' programs...
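For concreteness, here's a minimal hand-rolled sketch of the "state monad as effect annotation" idea in TypeScript (all the names here are mine, invented for illustration, not from any particular library). The point is just that the state type shows up in each function's signature, so the type system is recording the effect:

```typescript
// A State<S, A> computation is a function that takes a state of type S
// and returns a result of type A plus the updated state. The S in the
// type is what advertises the effect.
type State<S, A> = (s: S) => [A, S];

// Lift a pure value into a stateful computation (no effect on the state).
const pure = <S, A>(a: A): State<S, A> => (s) => [a, s];

// Sequence two stateful computations, threading the state through.
const bind = <S, A, B>(
  m: State<S, A>,
  k: (a: A) => State<S, B>
): State<S, B> =>
  (s) => {
    const [a, s2] = m(s);
    return k(a)(s2);
  };

// Read and write the state.
const get = <S>(): State<S, S> => (s) => [s, s];
const put = <S>(s: S): State<S, void> => (_) => [undefined, s];

// tick's type says it reads and writes a number-valued state:
// it returns the current counter and increments it.
const tick: State<number, number> = bind(get<number>(), (n) =>
  bind(put(n + 1), () => pure(n))
);

// Run two ticks starting from 0: yields the pair [1, 2]
// (second tick's result, final state).
console.log(bind(tick, () => tick)(0));
```

Nothing here is specific to TypeScript; it's the same shape as Haskell's `State s a`, which desugars to `s -> (a, s)`.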
An analogy, at a very hand-wavy level, is Java's checked exceptions. You express extra information in the type system about certain effects (you can think of an exception as an 'effect' for the purposes of the analogy), but these 'effects' typically leak out all over your program and don't compose well in practice: you end up with a million 'throws' clauses, or else you resort to lots of unchecked runtime exception types.
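To make the "exception as effect" analogy concrete without Java boilerplate, here's a rough TypeScript sketch (the names `safeDiv`, `average`, and `DivByZero` are invented for illustration) where a `Result` type plays the role of a 'throws' clause. Notice the leakage the analogy describes: once `safeDiv` carries the annotation, every caller's signature has to carry it too:

```typescript
// A Result<E, A> either succeeds with an A or fails with an E.
// Putting E in the signature is the moral equivalent of 'throws E'.
type Result<E, A> = { ok: true; value: A } | { ok: false; error: E };

type DivByZero = "DivByZero";

// The signature declares the possible 'effect': division can fail.
function safeDiv(a: number, b: number): Result<DivByZero, number> {
  return b === 0
    ? { ok: false, error: "DivByZero" }
    : { ok: true, value: Math.trunc(a / b) };
}

// The annotation propagates: average calls safeDiv, so DivByZero
// leaks into its signature as well, just like a chain of 'throws'
// clauses spreading up a Java call stack.
function average(xs: number[]): Result<DivByZero, number> {
  return safeDiv(xs.reduce((a, b) => a + b, 0), xs.length);
}
```

This is essentially what an `Either`-style error monad does in Haskell, and it illustrates both halves of the analogy: the effect is visible in the type, but it infects every signature on the call path.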
I think a lot of research is being done in this area, for both research-y and mainstream-y languages, since the ability to annotate functions with effect information can unlock a number of compiler optimizations, can help with concurrency (e.g., the compiler could safely reorder or parallelize calls whose effects are provably disjoint), and can do great things for various program analyses and tooling. I don't personally have high hopes for it any time soon, though; lots of smart people have been working on this for a long time, and there's still very little to show for it.