I think the ratio is wholly meaningless. I might structure, say, a compiler by throwing compilation errors as exceptions and using a single try to log the error and abort compilation. In that case, one try catches every possible compilation error, which may be an exceedingly large number of throw sites (e.g., a C++ compiler/linker/preprocessor) or very few (my Brainfuck interpreter).
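A minimal sketch of what I mean (the phases and error messages here are hypothetical, just to illustrate the shape): many independent throw sites across the compiler, funneled into one try in the driver.

```cpp
#include <cstdio>
#include <stdexcept>
#include <string>

// A toy "compiler": each phase throws CompileError on failure.
struct CompileError : std::runtime_error {
    using std::runtime_error::runtime_error;
};

void lex(const std::string& src) {
    if (src.find('$') != std::string::npos)
        throw CompileError("lex: unexpected '$'");    // throw site #1
}

void parse(const std::string& src) {
    if (src.find('(') != std::string::npos &&
        src.find(')') == std::string::npos)
        throw CompileError("parse: unbalanced '('");  // throw site #2
}

bool compile(const std::string& src) {
    try {              // the ONE try in the driver...
        lex(src);
        parse(src);
        return true;
    } catch (const CompileError& e) {  // ...catches every throw site above
        std::fprintf(stderr, "compilation aborted: %s\n", e.what());
        return false;
    }
}
```

A real compiler would have hundreds of such throw sites feeding the same catch, so the throw:try ratio here is bounded only by how many diagnostics the language needs.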
The quantity of situations that can be handled identically is entirely application dependent.
Fundamentally, since one try may catch an arbitrary number of throws, you would need contextual information to even argue that any given ratio is good, bad, or anything else; again, it's application dependent.
In addition, grep will not demonstrate this to you. Many throws exist even though, realistically, if such a thing occurs, you won't be able to recover. For example, you could point out that operator new might throw a std::bad_alloc on Windows if you hit the virtual address limit, and count that as a throw. In reality, however, I would never catch such an exception: I don't know of any meaningful way to recover from it. You would have to check every throw and its corresponding catch.
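To make that concrete, here's a hedged sketch of the kind of throw site grep would dutifully count but that nobody meaningfully handles; forcing an absurd allocation makes operator new throw std::bad_alloc, and the "recovery" amounts to reporting failure and giving up:

```cpp
#include <cstdio>
#include <new>

// Request an allocation (~1 EiB) that no realistic machine can satisfy.
// grep counts the implicit throw inside operator new; the catch below
// can't actually recover, only report and bail out.
bool allocate_huge() {
    try {
        char* p = new char[1ULL << 60];
        delete[] p;
        return true;
    } catch (const std::bad_alloc&) {
        std::puts("std::bad_alloc: nothing sensible to do but give up");
        return false;
    }
}
```

Counting this throw/catch pair toward some ratio tells you nothing about the quality of the error handling.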