I've been through this, helping to take a company's automated testing from scattergun and patchy to largely useful and of much higher quality.
I find that trying to apply metrics as the primary driver of quality misses the point. First and foremost, it's a people problem. You can't make someone believe in something without giving them a reason, just as you can't magically make them good at it without support.
The short and difficult answer is that, to get test quality up, you need to treat test code as a first-class citizen. People won't do a good job of automated testing unless they can be sold on it and are given the support to improve their skills. Many people skirt around the issue, but the fact is that automated testing is hard to do well, and a lot of developers will not 'get it' or accept that it is a skill to be learned; even worse, many will silently struggle and refuse to ask for help.
Failing to prove its benefits results in lacklustre testing, which in turn convinces developers that testing is useless and finds no bugs. If a developer treats testing as a chore and phones it in, they are already in the mindset that it is useless -- it becomes a self-fulfilling prophecy and a total drudge. I know from experience that there is pretty much nothing worse than writing your code first and then writing all of your tests afterwards just to hit a magical coverage target. By that point, not only is the code untestable, but the exercise is akin to doing all of your school homework on a Sunday night -- it's no fun.
You need to build awareness of why unit testing can help and of what a good, correct, understandable, maintainable unit test looks like. You can do this through education, presentations, pair programming, code reviews and so forth. If you just set a hard limit and tell people you expect them to meet it, they will probably resent it and then game the system. Note: this is not an easy thing to do! It takes a lot of time to get suspicious developers to see the value in it and start writing non-crazy tests. There is no single 'aha' moment you can bank on. Studies have shown that automated testing can be a valuable development tool, but a lot of developers are dogmatic. Some will just never come around to the idea.
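To make 'what a good unit test looks like' concrete, here is a minimal sketch in Java with JUnit 5. The `ShoppingCart` and `LineItem` classes are hypothetical stand-ins, just enough to show one behaviour per test, a name that describes that behaviour, and an obvious given/when/then layout:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;

// Hypothetical production classes, just enough for the example to compile.
record LineItem(String name, BigDecimal price) {}

class ShoppingCart {
    private final List<LineItem> items = new ArrayList<>();

    void add(LineItem item) {
        items.add(item);
    }

    BigDecimal total() {
        return items.stream()
                .map(LineItem::price)
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}

class ShoppingCartTest {

    // One behaviour per test, named for what it verifies, laid out in
    // clear given/when/then steps with no magic values or hidden setup.
    @Test
    void totalIsTheSumOfEveryLineItem() {
        // Given a cart with two items
        ShoppingCart cart = new ShoppingCart();
        cart.add(new LineItem("book", new BigDecimal("12.50")));
        cart.add(new LineItem("pen", new BigDecimal("1.25")));

        // When we ask for the total
        BigDecimal total = cart.total();

        // Then it is exactly the sum of the line items
        assertEquals(new BigDecimal("13.75"), total);
    }
}
```

The point to sell is that a test like this reads as documentation: if it fails, the name alone tells you which behaviour broke.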
I find pair programming works pretty well. Find an isolated component that can be easily tested, and write some tests with them. Go through the process of writing tests, making them pass, then refactoring the tests and the production code to remove problems and make them more readable. Over time, build up their skills by showing them the most common techniques from the testing toolbox: good naming practices, named constants, factory methods, test data builders, BDD-style 'fixture as context', and so on. Show them how to prove a bug exists by writing a failing test before fixing the bug. Emphasise the most important tenets of creating good tests!
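As an example of one of those techniques, here is a minimal sketch of a test data builder (the `Order` class and its fields are hypothetical):

```java
// Hypothetical production class; the fields are stand-ins.
record Order(String customer, int quantity, boolean express) {}

// A minimal test data builder: every field has a safe default, so each
// test spells out only the details it actually cares about, which keeps
// the test readable and resilient to constructor changes.
class OrderBuilder {
    private String customer = "any-customer";
    private int quantity = 1;
    private boolean express = false;

    static OrderBuilder anOrder() {
        return new OrderBuilder();
    }

    OrderBuilder forCustomer(String customer) {
        this.customer = customer;
        return this;
    }

    OrderBuilder withQuantity(int quantity) {
        this.quantity = quantity;
        return this;
    }

    OrderBuilder withExpressDelivery() {
        this.express = true;
        return this;
    }

    Order build() {
        return new Order(customer, quantity, express);
    }
}
```

In a test, `OrderBuilder.anOrder().withQuantity(10).build()` reads almost like the scenario description, and only the quantity stands out as the detail that matters.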
The eventual goal should be that all code teams agree on some rules of thumb, such as "To get sign-off, all stories must be adequately tested and pass a code review." If automated testing is not valuable or feasible for a given piece of work (e.g. a prototype), that's 100% fine, but it should be the exception, not the rule.
Having respected code leads who will work with their teams to make this happen is of paramount importance. If you cannot get buy-in from all of the leads, then you have a major problem.
You can augment your approach with code metrics tools like NDepend (or the equivalent for your language of choice), which offer features like listing the most complex or most frequently used code that lacks good test coverage, or the poorly covered areas that have changed the most between check-ins.
Good luck.