It sounds like statistical sampling. When you buy a product, there's a good chance that not every single product coming off the "assembly line" has been checked for quality.
Statistical sampling calls for inspecting a certain percentage of products to gain statistical confidence that (nearly) all of them are problem-free.
It minimizes inspection effort at the risk of letting some defects slip through.
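As a rough illustration, here's a minimal sketch of the idea in Python; the `Unit` class, the 1% defect rate, and the 2% sample rate are all made up for the example:

```python
import random
from dataclasses import dataclass

@dataclass
class Unit:
    serial: int
    defective: bool  # in reality you'd only learn this by inspecting

def inspect_batch(units, sample_rate=0.02, seed=None):
    """Inspect a random fraction of the batch; return the defects found.

    Units outside the sample ship uninspected -- that's the accepted
    risk that keeps inspection cost down.
    """
    rng = random.Random(seed)
    sample_size = max(1, round(len(units) * sample_rate))
    sample = rng.sample(units, sample_size)
    return [u for u in sample if u.defective]

# A batch of 1,000 units with a ~1% defect rate: inspecting only 2%
# will sometimes catch a defect and sometimes miss all of them.
batch = [Unit(i, random.random() < 0.01) for i in range(1000)]
print(inspect_batch(batch, seed=42))
```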
To be honest, unless you're checking every execution path and every possible input value, you're already doing this in your testing. For all but the most trivial systems, the effort required to test everything exhaustively isn't worth it; the extra cost would make your product uncompetitive.
Note that statistical sampling doesn't just mean testing every 100th unit. There are ways to target the sampling to improve the odds of catching problems. For example, if historical data suggests most errors are introduced during a specific phase, concentrate on that phase. If one of your developers produces more defects than the others, review their work more closely.
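Targeted sampling can be as simple as weighting a fixed audit budget by historical defect rates rather than spreading it uniformly. A minimal sketch, with invented phase names and rates:

```python
import random
from collections import Counter

# Hypothetical historical defect rates per phase -- the names and
# numbers are made up purely for illustration.
defect_history = {"assembly": 0.08, "painting": 0.01, "packaging": 0.03}

def allocate_audits(history, budget=100, seed=None):
    """Spend a fixed audit budget in proportion to each phase's
    historical defect rate, instead of auditing uniformly."""
    rng = random.Random(seed)
    phases = list(history)
    weights = [history[p] for p in phases]
    return Counter(rng.choices(phases, weights=weights, k=budget))

# Most of the audits land on the phase with the worst track record.
print(allocate_audits(defect_history, seed=1))
```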
From a cursory glance at some research papers, statistical debugging appears to be just that: targeting areas based on their history of problems.
I know we already do this for our software. Any bug fix must pass unit and system tests that replicate the problem (and our TDD process says those tests should be written before attempting the fix), and those tests are then automatically added to the regression test suite. The areas that cause more problems therefore naturally get tested more often in the future.
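For illustration, here's roughly what such a bug-reproducing test looks like in pytest; the `compute_discount` function and the double-discount bug are invented for the example:

```python
import pytest

def compute_discount(price, repeat_customer):
    """Fixed implementation: apply the 10% repeat-customer discount
    exactly once (the original, buggy code applied it twice)."""
    return price * 0.9 if repeat_customer else price

def test_discount_not_applied_twice():
    # Written to reproduce the bug report before fixing it: the buggy
    # code returned 81.0 (10% off, twice) instead of 90.0. Now that it
    # passes, it lives in the regression suite and re-checks this
    # trouble spot on every run.
    assert compute_discount(100.0, repeat_customer=True) == pytest.approx(90.0)
```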