Warning: nostalgia post.
Many years ago, I worked in a mainframe environment. My job was to run ad-hoc SQL reports to evaluate the effectiveness of the Christmas sales campaigns. "How is line X doing, against line Y, over the last Z weeks, against the same weeks last year"... and similar.
It was exciting: ad-hoc queries, directly against the live database, marketing decisions being made in the afternoon based on the morning's questions from the buying department. No SQL errors permitted, no testing, no development environment: it was right first time, or nothing.
I left for lunch, mad fool that I was, having crafted a multi-table join that would give the desired results, albeit after a 30-minute query, secure in the knowledge that no locks were being taken that would prevent the transactional ordering system from saving new orders. It was mid December, at the height of the Christmas ordering season.
I returned to find the company in chaos. The whole ordering system was down, everything was timing out, and no one knew why.
Except that CICS couldn't get connections. The pool was exhausted, so new transactions just failed.
It was my report. It took a ton of CPU. It ran a long time. Everything else ran just that little bit longer. So the connection pool ran out of connections, and everything else died for a while.
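The mechanism is easy to reproduce. Here's a toy sketch (not the actual mainframe setup, obviously; the pool size, timings, and names are all made up for illustration) where a fixed-size pool, one slow "report", and a burst of short transactions are enough to make the short ones time out:

```python
import threading
import time

# Toy fixed-size connection pool: acquiring a "connection" is just
# acquiring a semaphore slot, with a timeout as the real pool would have.
POOL_SIZE = 3
pool = threading.BoundedSemaphore(POOL_SIZE)
failures = []

def transaction(name, work_seconds, timeout=0.2):
    # Try to get a connection from the pool; give up after `timeout`.
    if not pool.acquire(timeout=timeout):
        failures.append(name)  # pool exhausted: the transaction fails
        return
    try:
        time.sleep(work_seconds)  # hold the connection while "working"
    finally:
        pool.release()

# One long report ties up a connection for far longer than anything else...
threads = [threading.Thread(target=transaction, args=("report", 2.0))]
# ...and a burst of short order transactions competes for what's left.
# The slots free up too slowly, so later arrivals time out waiting.
threads += [threading.Thread(target=transaction, args=(f"order-{i}", 0.5))
            for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(failures)} transactions timed out waiting for a connection")
```

Nothing here deadlocks and nothing holds a database lock; the orders fail purely because connections are held "just that little bit longer" than the pool can absorb.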
My query completed. It gave the right results. The buying department was happy, since they could make the right decisions for the next day. Orders started getting saved again, and guess what...
I was horrified and embarrassed, but amazingly, I was temporarily a local hero for providing the crucial report just in time. The half-hour crisis was a catastrophe at the time, but a day later, just a memory.
If there's a moral here, it's this:
There's no single right answer to this sort of question.
If you can, find out why you're running out of connections, since it might be an indication of something serious.
But it might not, and tomorrow, there might be a different problem.