Most battle-tested, real-world software contains extensive error checking and logging. We frequently employ complex logging systems to help diagnose, and occasionally predict, failures before they happen. We generally focus on reporting catastrophic failures on the server side.
These failures are important, of course, but I think there is another class of errors that is overlooked and yet perhaps equally important. Whether you are using an iPhone, BlackBerry, laptop, desktop, or point-of-sale touch screen, user interaction is typically processed as discrete events. I suspect that identifying patterns in UI events can expose areas where the user is having difficulty interacting efficiently with the application.

I found an interesting academic paper on this subject here. I think the ideas presented in the paper are great, but perhaps other, simpler techniques might yield nice results. What are your ideas and experiences in this area?

A: 

You might want to Google usability testing. I've never heard of it being done by having a program recognize patterns of events, but rather by having humans watch humans use the program.

John Saunders
Did you bother to scan the academic paper mentioned?
Todd Stout
I didn't notice it. I'd recommend you edit the question to draw more attention to the fact that "here" is a link. You might also throw in some blank space between the paragraphs, for readability.
John Saunders
Ok, now I've scanned it. Did _you_ read it? A survey from 1999? Not even original work? I recommend you find something a bit fresher, and more recent than 10 years old. It would be interesting to see what original work is being done in this area, and what the industry thinks about it. After all, it's been ten years.
John Saunders
I have searched. I did not find anything publicly available more recent than 1999. This is the only reason I am submitting this question to SO.
Todd Stout
Ok. It will be interesting. Perhaps there's a reason there's nothing since 1999. Did you try to track down the authors?
John Saunders
I suspect there are great algorithms for this, but they are constrained by IP issues.
Todd Stout
Alternatively, it's possible that a decade has proven that this was more difficult than previously thought, or that humans do the job better. If IP were the only issue, then you'd still find more recent academic papers.
John Saunders
Good point about tracking down the authors. I have not made any attempt to contact them.
Todd Stout
I didn't so much mean track them down to contact them. I meant search to see if they've written other papers on the subject. Also notice this article is a survey of the work of others. Search for those others. Maybe some of these have continued in this field, maybe they've all given it up as a bad idea.
John Saunders
+2  A: 

Interesting paper. One of the things I get out of it is that it's not easy to make sense out of user event logs unless you have a very specific hypothesis you are trying to test. They can be very useful, for example, if you know it's taking users too long to complete Task X or they're failing to complete it altogether. It's clearly a whole different ballgame to try to analyze sequences without any other supporting information and make sense out of them (though it can be done if you use the sophisticated techniques mentioned in the paper).

One simpler method would be to measure the total time to complete a given task that you know is common and important. If it's a shopping application, for example, the time to complete the check-out on a purchase would probably be useful to collect. It's not quite that simple, though, because you'd have to at least account for interruptions (e.g., the user's boss came into the room and he abandoned his shopping for actual work--not that I've ever done this :-P). You could have a simple rule that says: if no events were logged for X seconds, assume the user is not paying attention to the screen.
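A rough sketch of that idle-gap rule, assuming events arrive as (timestamp, name) pairs; the threshold value and event names here are invented for illustration:

```python
# Estimate active time spent on a task from a raw event log,
# discounting long idle gaps (user probably stepped away).

IDLE_THRESHOLD = 30.0  # seconds with no events => assume user isn't engaged

def active_task_time(events, idle_threshold=IDLE_THRESHOLD):
    """events: list of (timestamp_seconds, event_name), sorted by time.
    Returns total elapsed time, excluding gaps longer than idle_threshold."""
    total = 0.0
    for (t_prev, _), (t_cur, _) in zip(events, events[1:]):
        gap = t_cur - t_prev
        if gap <= idle_threshold:
            total += gap
    return total

# Example: checkout took 100s wall-clock, including a 60s interruption.
log = [(0, "checkout_start"), (5, "enter_address"),
       (10, "enter_card"), (70, "confirm"), (100, "checkout_done")]
# The 10 -> 70 gap (60s) exceeds the threshold and is dropped: 5 + 5 + 30
print(active_task_time(log))  # 40.0
```

In practice you'd tune the threshold per task; 30 seconds of silence means something different in a form-filling screen than in a long reading view.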

Another simple thing you could do is check for obvious signs of errors, such as a user employing the "undo" facility, or entering information into an input box in a web app that triggers a validation check (e.g., failing to enter required information, or putting information in the wrong format). If certain input boxes produce a high number of errors, it might be a sign that you should be more flexible in allowing different formats (e.g., allow users to enter a date as "6/28/09", "6-28-09", or "June 28, 2009" instead of requiring a single format).

One other idea: if your application has contextual help, certainly count how many times people use it for each page/section/module of your application.

I doubt anything I'm saying is earth shattering, but maybe it will give you some ideas.

-Dan

DanM
I was thinking along these lines...thanks for the input!
Todd Stout
+1  A: 

I once wrote an app that tracked, for each command, whether it was accessed via the menu or a command-key equivalent; it gave us pretty good insight into which key-equivalents were expendable. We didn't have toolbars at the time, but the same kind of logging and analysis could of course apply to them, or to context-sensitive menus: anywhere you want to provide a limited set of valuable options.
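The kind of logging described above can be sketched in a few lines; the class, command names, and source labels here are made up for illustration, not the original app's code:

```python
# Track, per command, whether it was invoked via the menu or via its
# key equivalent, then report shortcuts that nobody actually uses.
from collections import defaultdict

class CommandUsage:
    def __init__(self):
        # command name -> {"menu": count, "key": count}
        self.counts = defaultdict(lambda: {"menu": 0, "key": 0})

    def record(self, command, source):
        """source is "menu" or "key"."""
        self.counts[command][source] += 1

    def expendable_shortcuts(self):
        """Commands whose key equivalent was never used."""
        return [cmd for cmd, c in self.counts.items() if c["key"] == 0]

usage = CommandUsage()
usage.record("save", "key")
usage.record("save", "menu")
usage.record("print", "menu")
print(usage.expendable_shortcuts())  # ['print']
```

The same tally generalizes to toolbars or context menus by adding more source labels.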

Carl Manaster