Let's say you have a program that allows access to some sort of media. This media can be damaged, and users can only tell whether their copy is damaged after they use the service and receive it. So, to keep your users happy, you want your program to let them turn the media back in for a refund. However, malicious users will obviously try to game this system by asking for refunds on perfectly good media.

The question is: what would be a good algorithm for deciding whether or not to trust a given user? How should users build trust? How should trust be spent?

I imagine there must be some academic research on how to construct 'trust' values for known users and so on. Does anyone have links to papers or other research? I'd even be happy to read informal thoughts on the problem, but I'm more interested in actual papers.

A: 

If you are referring to physical media, the first analogy is buying a CD, DVD or video game from a store. If you return it, they won't give you a refund, but they will give you a non-defective copy if that was your problem.

If the first copy was bad, there's no reason for a user to suddenly decide they don't want the media at all when they can easily get a second, non-defective copy for free.

Ed Marty
A: 

To clarify:

There are no humans involved in this process. Users approach the service, use it, and attempt to consume the media; there is some chance of a problem. If there is a problem, the user wants a refund. The question is how to build up data over time about the trustworthiness of a given user.

The bad-user scenario would go something like this: the user consumes the media successfully, lies to the service, and asks for a refund.

kazakdogofspace
If no humans are involved, what are the users?
Marcin
You need to provide more information; there may be limiting aspects to this. For instance, the user may need to provide more information to get a refund, so you can track abuse. Alternatively (or additionally), you could issue a credit rather than a refund.
Tim
+1  A: 

This is very programming-related; it would be describing an algorithm.

Although I've never seen a paper on the scenario you are discussing, it seems like it should be pretty straightforward. I would track two axes, by media and by user, in a pretty simple, linear fashion.

First of all, at some point the return rate (returns divided by sales) should by itself indicate that you need to pull the item; that'd be my first line of defense!

If a user asks for a refund, I'd check the item's return rate; if it's not very low, there is a good chance the media is bad. In that case I'd allow the refund (and increment the user's trust).

If the return rate is very low AND the total number of sales is low, I'd check the user's trust stat; if it's high, I'd allow the return and adjust the stats (but I wouldn't increment the trust stat except in the case above, because there the claim correlates with other users' returns).

If the number of sales is high and the number of returns is low AND the user has a low "trust" stat, then I'd deny it.
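Here's a minimal sketch of those three rules in Python; every name and threshold (LOW_RATE, LOW_SALES, HIGH_TRUST, the trust deltas) is a placeholder I've made up for illustration and would need tuning against real data:

```python
# Hypothetical decision sketch; all thresholds below are invented
# placeholders, not values from the answer above.
LOW_RATE = 0.02    # a return rate below this counts as "very low"
LOW_SALES = 50     # fewer sales than this means too little signal
HIGH_TRUST = 5     # a trust score at or above this counts as "high"

def decide_refund(sales, returns, user_trust):
    """Return (approve, trust_delta) for a single refund request."""
    return_rate = returns / sales if sales else 0.0

    # Rule 1: the return rate is not very low, so the media is
    # probably bad; approve and reward the (likely honest) report.
    if return_rate >= LOW_RATE:
        return True, +1

    # Rule 2: very low return rate but few total sales, so there
    # isn't enough signal; fall back on the user's accumulated trust.
    if sales < LOW_SALES:
        return (user_trust >= HIGH_TRUST), 0

    # Rule 3: many sales, few returns, and a low-trust user: deny.
    if user_trust < HIGH_TRUST:
        return False, -1

    # Not covered by the rules above: a high-trust user on well-sold
    # media; assumed here to get the benefit of the doubt.
    return True, 0
```

Each user's trust stat would then just be the running sum of the returned deltas.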

edit:

Also, I'd track all refunds individually (exactly who returned what) rather than just keeping a simple counter as my post implies. That way, if your algorithm turns out to be insufficient, you could implement a new algorithm and recalculate over your existing data on the fly.
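For example, a sketch of that kind of log using SQLite (the table layout here is just one possible shape, not anything prescribed above):

```python
import sqlite3

# One row per refund instead of a bare counter, so a future
# algorithm can be re-run over the complete history.
db = sqlite3.connect("refunds.db")
db.execute("""CREATE TABLE IF NOT EXISTS refunds (
                  user_id  TEXT NOT NULL,
                  media_id TEXT NOT NULL,
                  at       TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP)""")

def record_refund(user_id, media_id):
    db.execute("INSERT INTO refunds (user_id, media_id) VALUES (?, ?)",
               (user_id, media_id))
    db.commit()

# The old per-media counter becomes a derived query:
def returns_for(media_id):
    row = db.execute("SELECT COUNT(*) FROM refunds WHERE media_id = ?",
                     (media_id,)).fetchone()
    return row[0]
```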

It could also be used to evaluate patterns of abuse; in other words, if you identify a pattern someone has been using to scam the system, you could create a new pattern detector and run it to find other accounts that have been using the same pattern, then show them goatse or something the next time they make a request.
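As a toy example of such a retroactive detector, here's one that flags accounts refunding more than half of their purchases; the "pattern" and the cutoff are purely illustrative:

```python
from collections import Counter

def find_abusers(refund_log, purchases_by_user, cutoff=0.5):
    """refund_log: iterable of (user_id, media_id) refund events.
    purchases_by_user: dict mapping user_id -> total purchases.
    Flags accounts that refunded more than `cutoff` of purchases."""
    refunds = Counter(user for user, _ in refund_log)
    return [user for user, n in refunds.items()
            if purchases_by_user.get(user, 0) > 0
            and n / purchases_by_user[user] > cutoff]
```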

Bill K