To me, complexity does not necessarily mean more or fewer lines of code.
The perfect system is never built the first time. All you can do is try not to make too many complex decisions that tie you into doing things one way.
For that reason I like to keep complexity low during the initial versions of any project. If you make it as complex as possible, fewer people will understand it at the beginning, and the reason you built it in the first place (new flexibility) is severely impacted. That could be a good or a bad thing.
If you make it too simple (which can mean 50 to 70% more code), it may have performance issues.
As a system ages and matures, the complexity tends to come in through refactoring. By then you can reach a point where some code may never be touched again, and if you ever do, the overall cost of understanding that complexity stays low because you touch it so infrequently.
I like solving complex problems with simple steps. Where that isn't possible, the complexity increases accordingly. There was a point in another question about knowing when it's "good enough". Sometimes a bit more code (5-20%) can significantly offset complexity that would otherwise be expensive for someone to relearn or understand later, as in the sketch below.
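To make that concrete, here is a small, hypothetical Python sketch (not from the original answer): both functions do the same job, but the second spends a few extra lines to stay obvious to the next reader.

```python
from collections import defaultdict
from functools import reduce

orders = [("alice", 30), ("bob", 20), ("alice", 15), ("bob", 5)]

# Terse version: correct, but the reduce/lambda combination takes effort to relearn.
def totals_terse(rows):
    return dict(reduce(
        lambda acc, r: {**acc, r[0]: acc.get(r[0], 0) + r[1]}, rows, {}))

# Slightly longer version: a few more lines, but readable at a glance months later.
def totals_plain(rows):
    totals = defaultdict(int)
    for customer, amount in rows:
        totals[customer] += amount
    return dict(totals)

assert totals_terse(orders) == totals_plain(orders) == {"alice": 45, "bob": 25}
```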
Needing a better algorithm is usually a good problem, because it means your code is being used and there are new demands to deal with.
This is the same sort of complexity that applies to Database Abstraction for me. You have to know when to make it more flexible and when to keep it simple, and that's best learnt by building it, and scrapping it, a lot before you write a single line of anything. The sketch below shows the trade-off.
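For illustration only, here is a hypothetical Python/SQLite sketch of that trade-off (the table name and both helpers are made up for this example): the first function is dead simple but hard-coded, the second is flexible but asks every reader to understand the query builder.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
conn.executemany("INSERT INTO users (name, active) VALUES (?, ?)",
                 [("alice", 1), ("bob", 0)])

# Simple version: one purpose, trivially understood, tied to this exact query.
def active_user_names(db):
    return [row[0] for row in db.execute("SELECT name FROM users WHERE active = 1")]

# Flexible version: builds WHERE clauses from keyword filters.
# It covers more cases, but every reader now has to understand the builder.
def select(db, table, columns, **filters):
    where = " AND ".join(f"{col} = ?" for col in filters) or "1 = 1"
    sql = f"SELECT {', '.join(columns)} FROM {table} WHERE {where}"
    return db.execute(sql, tuple(filters.values())).fetchall()

print(active_user_names(conn))                     # ['alice']
print(select(conn, "users", ["name"], active=1))   # [('alice',)]
```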