Developers should understand the performance implications of their coding choices.
It's not terribly difficult to write an algorithm with non-linear performance - polynomial, exponential, or worse. If you don't understand, at least to some extent, how the language, compiler, and libraries support your algorithm, you can fall into a trap that no amount of processing power will dig you out of. Algorithms whose runtime or memory usage grows exponentially can quickly exceed the ability of any hardware to finish in a reasonable time.
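To make that concrete, here is a minimal sketch (class and method names are mine) contrasting an exponential algorithm with a linear one for the same problem. The naive recursive Fibonacci is the classic case: each call spawns two more, so no hardware upgrade saves you once n gets large.

```java
public class FibDemo {
    // O(2^n): the call tree doubles at every level, so fibNaive(50)
    // makes tens of billions of calls and takes minutes to run.
    static long fibNaive(int n) {
        return n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2);
    }

    // O(n): the same result computed with a simple loop.
    static long fibIterative(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(fibIterative(50)); // effectively instant
        System.out.println(fibNaive(50));     // minutes on typical hardware
    }
}
```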
Assuming that hardware can scale to a poorly designed algorithm or coding choice is a bad idea. Take for example a loop that concatenates 100,000 small strings together (say, into an XML message). This is not an uncommon situation - but when implemented using individual string concatenations (rather than a StringBuffer), it produces 99,999 intermediate strings of increasing size that the garbage collector has to dispose of. This can easily make the operation fail if there isn't enough memory - or, at best, take forever to run.
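Here's a rough sketch of that trap (the class name and the XML-ish payload are just for illustration; StringBuilder is the unsynchronized successor to StringBuffer). The `+=` version copies the entire accumulated string on every pass, making it O(n^2) in time and allocation, while the buffered version appends in place in O(n):

```java
public class ConcatDemo {
    static String buildNaive(int n) {
        String msg = "";
        for (int i = 0; i < n; i++) {
            msg += "<item>" + i + "</item>"; // copies the whole string each pass
        }
        return msg;
    }

    static String buildBuffered(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append("<item>").append(i).append("</item>"); // appends in place
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 100_000;
        long t0 = System.nanoTime();
        buildBuffered(n);
        System.out.println("StringBuilder: " + (System.nanoTime() - t0) / 1_000_000 + " ms");

        t0 = System.nanoTime();
        buildNaive(n); // expect orders of magnitude slower; lower n if impatient
        System.out.println("Naive +=:      " + (System.nanoTime() - t0) / 1_000_000 + " ms");
    }
}
```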
Now in the above example, some Java compilers can rewrite the concatenation to use a StringBuilder behind the scenes - but only in limited cases (a single expression, not across loop iterations), so this is the exception, not the rule. In many situations the compiler simply cannot infer the developer's intent, and it becomes the developer's responsibility to write efficient code.
One last comment - writing efficient code does not mean spending all your time hunting for micro-optimizations. Premature optimization is the enemy of good code. However, don't confuse premature optimization with understanding the O() behavior of an algorithm in terms of time and storage, and making good choices about which algorithm or design to use in a given situation.
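That kind of choice is usually about data structures rather than clever tricks. A small sketch of what I mean (names are mine): counting duplicates with a List means a linear scan per lookup, O(n^2) overall, while a HashSet lookup averages O(1), keeping the whole pass O(n) - same answer, very different scaling.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LookupDemo {
    static int countDuplicatesQuadratic(int[] data) {
        List<Integer> seen = new ArrayList<>();
        int dups = 0;
        for (int x : data) {
            if (seen.contains(x)) dups++; // linear scan on every element
            else seen.add(x);
        }
        return dups;
    }

    static int countDuplicatesLinear(int[] data) {
        Set<Integer> seen = new HashSet<>();
        int dups = 0;
        for (int x : data) {
            if (!seen.add(x)) dups++; // add() reports prior membership in O(1) average
        }
        return dups;
    }

    public static void main(String[] args) {
        int[] data = new int[50_000];
        for (int i = 0; i < data.length; i++) data[i] = i % 40_000; // 10,000 dups
        System.out.println(countDuplicatesLinear(data));    // fast
        System.out.println(countDuplicatesQuadratic(data)); // noticeably slower
    }
}
```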
As a developer, you cannot ignore this level of knowledge and simply assume you can always throw more hardware at the problem.