Thinking about this question in a language-agnostic, framework-agnostic manner, what you are asking for is something of a conundrum:
The test tool has no idea how long any of the unit tests will take until they are actually run, because execution time depends not just on the test tool and the tests themselves, but also on the application under test. A stop-gap solution would be to set a time limit per test, but that raises the question: when a test times out, should it pass, fail, or fall into some other (third) category? ... Conundrum!
To avoid this, I suggest adopting a different strategy, where you as the developer decide which subsets of the entire set of tests to run in different situations. For example:
- A set of smoke tests;
- i.e. tests that you would want to run first, every time. If any of these fail, then you don't want to bother executing any of the remaining tests. Put only the really fundamental tests in this group.
- A minimal set of tests;
- For your specific requirement, this would be the set containing all of the "quick" or "fast" tests; you determine which ones those are.
- A comprehensive set of tests;
- The tests that do not belong to either of the other categories. For your specific requirement, these would be the "slow" or "long-running" tests.
When running your tests, you can then choose which of these subsets to run, perhaps configured via some form of script.
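As an illustration only (your question is framework-agnostic, so take this as a sketch assuming Python with pytest; the same idea applies to any framework that supports test tagging, such as JUnit tags or NUnit categories), grouping the tests could look like this. The test names and marker names are my own invention:

```python
import pytest

# The markers would be registered in pytest.ini / pyproject.toml to avoid warnings:
#   markers = smoke, minimal, comprehensive

@pytest.mark.smoke
def test_config_is_readable():
    # Smoke: a really fundamental check; this placeholder assertion stands in
    # for e.g. "the application configuration loads at all".
    assert {"debug": False}.get("debug") is False

@pytest.mark.minimal
def test_quick_arithmetic():
    # Minimal / "fast": runs in milliseconds, touches no external resources.
    assert sum(range(10)) == 45

@pytest.mark.comprehensive
def test_long_running_scenario():
    # Comprehensive / "slow": stands in for an end-to-end or bulk-data test.
    assert sorted(range(100_000))[0] == 0
```

With markers like these in place, a single command such as `pytest -m "smoke or minimal"` selects only the quick subset.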
I use this approach to great effect in automated testing (integrated into a continuous integration system), via a script that, depending on its input parameters, decides whether to execute just the smoke tests plus the minimal tests, or the smoke tests, the minimal tests and the comprehensive tests (i.e. all of them).
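A minimal sketch of what such a driver script could look like, again assuming Python and pytest; the `quick`/`full` parameter names and the file name are purely illustrative:

```python
# run_tests.py - hypothetical CI entry point; parameter names are illustrative.
import sys
import pytest

def main() -> int:
    level = sys.argv[1] if len(sys.argv) > 1 else "full"

    # Always run the smoke tests first; if any fail, stop immediately.
    smoke_result = pytest.main(["-m", "smoke"])
    if smoke_result != 0:
        return int(smoke_result)

    if level == "quick":
        # Smoke tests passed; now run only the fast tests.
        return int(pytest.main(["-m", "minimal"]))

    # "full": run everything that is left, fast and slow.
    return int(pytest.main(["-m", "minimal or comprehensive"]))

if __name__ == "__main__":
    sys.exit(main())
```

In practice you might prefer to shell out to the test runner rather than invoke it in-process, but the selection logic stays the same.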
HTH