This is my answer copied over from the closed question. It was closed while I was writing the answer, so I hope it still fits here.
Hmm, this is a heavy question, and a heavy one to answer. The truth lies somewhere in the middle: no abstraction leads to spaghetti code that is hard to understand and difficult to debug, but too much abstraction raises complexity and is just as hard to understand and probably just as difficult to debug. From the perspective of a programmer who wants to use your code, the two extremes are roughly equivalent: either way, the code is hard to use.
From the OOP perspective you can't gain much if you don't abstract things to some extent. My general approach is to focus first on design patterns to structure my code (I stick to the GoF patterns). That gives a basic structure, and if you name the classes/functions after the pattern, your intentions are easy for others to grasp.
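To make that concrete, here is a minimal sketch of what I mean by naming after the pattern, using the GoF Strategy pattern. All class names here are my own illustrative inventions, not from any real codebase; the point is only that a reader who knows the pattern vocabulary immediately sees the roles.

```python
import zlib

class CompressionStrategy:
    """Strategy interface: callers depend on this, not on a concrete algorithm."""
    def compress(self, data: bytes) -> bytes:
        raise NotImplementedError

class ZlibCompressionStrategy(CompressionStrategy):
    """Concrete strategy backed by zlib."""
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)

class NullCompressionStrategy(CompressionStrategy):
    """A do-nothing strategy, handy for tests or debugging."""
    def compress(self, data: bytes) -> bytes:
        return data

class Archiver:
    """The 'context' in Strategy terms: it is configured with a strategy
    and never needs to know which concrete algorithm it holds."""
    def __init__(self, strategy: CompressionStrategy):
        self._strategy = strategy

    def store(self, data: bytes) -> bytes:
        return self._strategy.compress(data)

archiver = Archiver(NullCompressionStrategy())
print(archiver.store(b"hello"))  # the null strategy passes data through unchanged
```

The names `CompressionStrategy` and `Archiver` do half the documentation work: anyone familiar with the pattern knows that swapping algorithms means passing a different strategy object, without reading any implementation.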
If you start to add more and more abstractions, you need a stop sign so you don't overreact :) Mine back then was the YAGNI principle from the XP folks. YAGNI stands for "You aren't gonna need it": don't implement features that someone in the far future might maybe find useful. Concentrate on your actual problem. Any given problem already comes with parameters that define it; maybe it needs to be parameterized, exchangeable, transparent, or whatever. If you focus on finding a nice (not the perfect) abstraction, you are probably on a good path already. If requirements change (and they do), you will need to redo some of the work anyway. So take it easy!
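As a hedged illustration of YAGNI (the scenario and names are hypothetical): suppose today's whole requirement is "read the price column from a CSV". The temptation is to build a plugin registry, a format-detection layer, and a config object for formats nobody has asked for. YAGNI says write only this:

```python
import csv
import io

def read_prices(source) -> list[float]:
    """Read the 'price' column from a CSV source.
    Accepts any file-like object; that is the entire requirement today."""
    return [float(row["price"]) for row in csv.DictReader(source)]

data = io.StringIO("price\n1.5\n2.0\n")
print(read_prices(data))  # [1.5, 2.0]
```

When a second format actually shows up, refactor then; with a real second use case in hand, the right abstraction is much easier to see than anything guessed up front.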
So, to summarize: think about abstractions, but don't take them too far. The best rule of thumb is that you can't find a good abstraction without having solved the problem more than once. If you try to design it a priori, you will find at some later point that your abstraction doesn't fit, and that is worse than having less abstraction in the first place.
This text is about as abstract as the topic itself, but I hope it gives some helpful insights.