I remember reading a law (well, maybe not exactly a law) of software design: giving the user a lot of control without offering a choice between a Basic and an Advanced mode can backfire, and the user ends up not using any of the options at all because there are simply too many of them. Did I remember this correctly? If so, can someone point me to a more formal source?
I think this sums it up nicely:
http://stuffthathappens.com/blog/2008/03/05/simplicity/
But here's more reading on the subject:
http://www.joelonsoftware.com/uibook/chapters/fog0000000059.html
Sounds a bit like the law of Sensible Defaults to me: in many cases the user doesn't really care about the little details, so just give them a sensible choice up front.
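To make that concrete, here is a small sketch of what sensible defaults can look like in code. The function and its parameters are invented for illustration, not taken from any real library: the casual user just calls it, and the picky user can override any detail.

```python
# Hypothetical example of sensible defaults: every knob exists,
# but the common case needs none of them.
def export_image(path, format="png", quality=90, dpi=72, embed_metadata=True):
    # A real application would encode and write the image here;
    # this stub just reports what it would have done.
    print(f"Exporting {path} as {format} "
          f"(quality={quality}, dpi={dpi}, metadata={embed_metadata})")

# Basic user: the defaults do the right thing.
export_image("holiday.png")

# Advanced user: full control is still there when needed.
export_image("holiday.jpg", format="jpeg", quality=75, dpi=300, embed_metadata=False)
```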
There's also a Joel Spolsky article that may be relevant - JoelOnSoftware - Choices = Headaches
I'm not sure I understood it exactly, but the point is that you can't make an interface so complicated that the basic user can't use it. When a user first starts an application, if he sees too many options he won't find the basic thing he came there to do in the first place. That's why advanced features are usually hidden. Advanced users will know how to find them (they will learn; after all, they're advanced :-P), and basic users won't be scared away.
I found the quote I was looking for in the slides of a course I took on interfaces. It's by Jef Raskin, an HCI expert who started the Macintosh project at Apple: "I reject the idea that computers are difficult to use because what we do with them has become irretrievably complicated. No matter how complex the task a product is trying to accomplish, the simple parts of the task should remain simple."
The answer to "when?" would then be: when the simple tasks start to get complicated. They should remain simple.
A fundamental principle of good user interface (including API and programming language) design is to make simple things simple and complicated things possible. Excessive control increases the learning curve for simple tasks, where the user probably doesn't care about, and doesn't want to specify, every small detail. However, the options still need to be available, or your program will run out of steam when it reaches more advanced users.
The solution is to provide multiple levels of interface. If you're designing an application, there should be a basic and an advanced mode. If you're designing an API, there should be a high-level API that "just works" for 90% of cases, and a lower-level API that gets the job done, with more complexity, for the remaining 10%.
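As a rough sketch of that kind of layering (all names here are invented for illustration, not taken from any particular library): a high-level call with no decisions to make, sitting on top of a lower-level one that exposes the knobs.

```python
import csv

# Lower-level API: every parsing detail is explicit, for the 10% of
# files with unusual delimiters, encodings, or extra header junk.
def read_rows(path, delimiter=",", encoding="utf-8", skip_rows=0):
    with open(path, newline="", encoding=encoding) as f:
        for _ in range(skip_rows):
            next(f)
        yield from csv.DictReader(f, delimiter=delimiter)

# High-level API: "just works" for the common case of a plain CSV
# file with a header row; it is only a thin wrapper over the layer above.
def load_table(path):
    return list(read_rows(path))

# 90% case: one call, no options to think about.
# rows = load_table("contacts.csv")

# 10% case: drop down a level when the file is messier.
# rows = list(read_rows("export.tsv", delimiter="\t",
#                       encoding="latin-1", skip_rows=2))
```

The point of the layering is that nobody pays for the advanced options until they actually need them.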
This is definitely true in games. A game should have a practically unlimited number of states (e.g., a game of pool has 15 balls, each of which can be anywhere on a table of around 4.5 m², to an accuracy of maybe a millimetre, so the total number of states is huge!) but limited control over those states. If the players could place all the balls exactly where they wanted, or send them in a specific direction at a precisely chosen velocity, the fun would be gone. Only by limiting the way the player interacts with the game can it be fun and entertaining.
Depending on the program you are writing, this is either true or false. The stereotypical Mac application is very limited and simple, while Windows applications are complex and powerful. Spotify compared to WinAmp is a good example of simple vs. complex, but they are both great apps.
In my opinion, a complex app with limited control is something to strive for. Applications like Picasa, Spotify, and Paint.NET all manage to be very complex and useful, yet offer limited control. Always try to limit what the user can do to the most common tasks, instead of enabling every possible task.