Just speculating here, but one application I can imagine is database query optimization.
A database query in a declarative language such as SQL has to be translated into a step-by-step program called an "execution plan". A single SQL query can typically be translated into many such execution plans, which all give the same result but whose performance can vary wildly. The query optimizer has to find the fastest one, or at least one that is reasonably fast.
Cost-based query optimizers have a "cost function", which they use to estimate the execution time of a given plan. Exhaustive optimizers go through all possible plans (for some value of "all possible") and select the one with the lowest estimated cost. For complicated queries the number of possible plans may be prohibitively large, leading to overly long optimization times (and that's time spent before the query even starts executing!), so there are also non-exhaustive optimizers. These only look at some of the plans, perhaps with a random element in choosing which ones. This works because there are usually plenty of "good" plans, and finding the absolute best one isn't that important: it's probably better to settle for a 5-second plan than to spend several minutes of optimization finding the optimal 2-second plan.
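To make that concrete, here's a toy sketch (the table sizes, selectivity, and cost function are completely made up; real optimizers use catalog statistics and far more detailed cost models) contrasting an exhaustive join-order search with a non-exhaustive, randomized one:

```python
import itertools
import random

# Hypothetical table sizes -- a real optimizer would read these from catalog statistics.
TABLES = {"orders": 1_000_000, "customers": 50_000, "items": 200_000, "regions": 100}
SELECTIVITY = 1e-5  # made-up join selectivity, the same for every join

def cost(join_order):
    """Toy cost function: sum of estimated intermediate result sizes."""
    rows = TABLES[join_order[0]]
    total = 0.0
    for table in join_order[1:]:
        rows = rows * TABLES[table] * SELECTIVITY
        total += rows
    return total

def exhaustive(tables):
    """Cost every join order (n! of them) and keep the cheapest."""
    return min(itertools.permutations(tables), key=cost)

def sampled(tables, tries=20):
    """Non-exhaustive: cost only a handful of random orders and keep the best one seen."""
    candidates = []
    for _ in range(tries):
        order = list(tables)
        random.shuffle(order)
        candidates.append(tuple(order))
    return min(candidates, key=cost)

names = list(TABLES)
print("exhaustive:", exhaustive(names), cost(exhaustive(names)))
print("sampled:   ", sampled(names), cost(sampled(names)))
```

With four tables the exhaustive search only has to cost 24 orders, but with 15 tables it would be over a trillion, which is where sampling (or dynamic programming, or other heuristics) becomes attractive.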
Some optimization algorithms use a sorted queue of "promising" (partial) plans. If it doesn't really matter whether you find the absolute best plan, maybe you could use an almost-sorted queue?
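Something like this, maybe (a minimal sketch; the "pop one of the k cheapest" relaxation here is just one arbitrary way to fake an almost-sorted queue, not a reference to any particular data structure):

```python
import heapq
import random

class RelaxedQueue:
    """An 'almost-sorted' priority queue: pop() returns one of the k cheapest
    entries rather than strictly the cheapest. A genuinely relaxed or concurrent
    structure could be swapped in behind the same interface."""

    def __init__(self, k=4):
        self.k = k
        self._heap = []

    def push(self, cost, item):
        heapq.heappush(self._heap, (cost, item))

    def pop(self):
        if not self._heap:
            raise IndexError("pop from empty queue")
        top = [heapq.heappop(self._heap) for _ in range(min(self.k, len(self._heap)))]
        chosen = random.choice(top)
        for entry in top:
            if entry is not chosen:
                heapq.heappush(self._heap, entry)
        return chosen

q = RelaxedQueue(k=3)
for est_cost, plan in [(120, "A join B join C"), (95, "B join A join C"),
                       (400, "C join A join B"), (97, "B join C join A")]:
    q.push(est_cost, plan)
print(q.pop())  # one of the three cheapest plans, not necessarily *the* cheapest
```

You lose the guarantee of always expanding the cheapest partial plan next, but if most of the plans near the top are "good enough" anyway, that may be a fine trade for a cheaper (or more concurrency-friendly) queue.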
Another idea (and I'm still just speculating) is a scheduler for processes or threads in a time-sharing system, where it might not matter if a process or thread gets its timeslice a few milliseconds later than strict priority order would dictate.
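Again purely as a sketch (the priority convention and the slack parameter are made up), "good enough" scheduling might just mean picking any runnable thread close enough to the top:

```python
import random
from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    priority: int  # higher number = more urgent (hypothetical convention)

def pick_next(run_queue, slack=1):
    """Pick the next thread to run: anything within `slack` priority levels of
    the most urgent runnable thread counts as 'good enough'."""
    best = max(t.priority for t in run_queue)
    candidates = [t for t in run_queue if t.priority >= best - slack]
    return random.choice(candidates)

run_queue = [Thread("ui", 10), Thread("network", 9), Thread("backup", 2)]
print(pick_next(run_queue).name)  # "ui" or "network"; strict priority order would always say "ui"
```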