I believe that a holistic answer to this question, which is not subjective at all, must explore the different types of bugs and their causes.
I wholeheartedly agree with most previous answers here, but I think they are focused on, and limited to, the coding process (i.e. the process between receiving the specification for a component on one end and integration on the other) and on improving quality there. It is not difficult to see that a better process for writing code leads to better code: best practices for structuring and designing the code, unit testing, and releasing all reduce the chance of error on the developer's side. Avoidance and early-detection strategies are part of this.
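To make the early-detection point concrete, here is a minimal sketch of a unit test in Python with pytest. The `apply_discount` function and its rules are invented for illustration, not taken from any particular project; the value is that the boundary-condition test surfaces misuse at development time, long before integration.

```python
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount.

    Hypothetical example function; the validation rule below is an
    assumption made for this sketch.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_typical_discount():
    assert apply_discount(100.0, 25) == 75.0


def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(19.99, 0) == 19.99


def test_invalid_percent_is_rejected():
    # The boundary check fails loudly here, in a test run,
    # rather than silently in production.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```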
However, in my experience, coding errors make up a relatively small portion of the total set of defects. In a full-lifecycle software project, between scoping and delivery of the final product, bugs that stem from incomplete or incorrect specifications are much harder to detect and avoid. Developers can argue that such defects are not really bugs, but from the user's or customer's perspective they certainly are. As developers, though, we have less control over this type of bug.
Even the highest-quality code cannot substitute for quality in the QA process. A strict separation of roles between developer and tester improves the detection rate. Testers should work with subject-matter experts to develop test plans that cover all use cases. All parties involved in the project need to share the view that QA is a critical component of success, that it takes time, and that shortcuts will lead to trouble.
As trivial as it sounds, frequent and formal communication between developers, testers, and the customer/end users throughout the project is probably the most effective strategy to detect and avoid bugs that are not a result of coding errors.