Hi,

I have a set of training data consisting of 20 multiple-choice questions (A/B/C/D) answered by a hundred respondents. The answers are purely categorical and cannot be scaled to numerical values. 50 of these respondents were selected for a free product trial; the selection process is not known. What interesting knowledge can be mined from this information?

The following is a list of what I have come up with so far:

  • A study of percentages (example: the percentage of people who answered B on Q5 and got selected for the free product trial)
  • Conditional probabilities (example: the probability that a person gets selected for the free product trial given that they answered B on Q5)
  • A naive Bayes classifier (this can be used to predict whether a person will be selected, given the answers to any subset of questions).
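A naive Bayes classifier on purely categorical answers is straightforward to sketch. Here is a minimal, stdlib-only version with Laplace smoothing; the four-respondent, two-question dataset at the bottom is entirely hypothetical and just illustrates the data shape:

```python
import math
from collections import defaultdict

def train_nb(rows, labels, alpha=1.0):
    """Fit a categorical naive Bayes model from answer tuples and 0/1 labels."""
    counts = defaultdict(lambda: defaultdict(lambda: defaultdict(float)))
    priors = defaultdict(float)
    for row, y in zip(rows, labels):
        priors[y] += 1
        for q, ans in enumerate(row):
            counts[y][q][ans] += 1
    n = len(labels)
    return {"priors": {y: c / n for y, c in priors.items()},
            "counts": counts,
            "class_totals": dict(priors),
            "alpha": alpha}

def predict_nb(model, row, choices="ABCD"):
    """Return the most probable label (0 or 1) for one row of answers."""
    best, best_score = None, float("-inf")
    for y, prior in model["priors"].items():
        score = math.log(prior)  # log P(class)
        for q, ans in enumerate(row):
            # Laplace-smoothed P(answer | class)
            num = model["counts"][y][q][ans] + model["alpha"]
            den = model["class_totals"][y] + model["alpha"] * len(choices)
            score += math.log(num / den)
        if score > best_score:
            best, best_score = y, score
    return best

# Tiny hypothetical example: 4 respondents, 2 questions, 1 = selected for trial.
rows = [("B", "A"), ("B", "C"), ("A", "C"), ("A", "A")]
labels = [1, 1, 0, 0]
model = train_nb(rows, labels)
```

The same `counts` tables also give you the conditional probabilities from the second bullet directly, so one pass over the data covers both ideas.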

Can you think of any other interesting analysis or data-mining activities that can be performed?

The usual suspects like correlation can be eliminated, as the responses are not quantifiable or scoreable.

Is my approach correct?

+2  A: 

This is a kind of reverse-engineering problem.

For each respondent, you have 20 answers and one label, which indicates whether that respondent got the product trial or not.

You want to know which of the 20 questions are critical to the trial/no-trial decision. I'd suggest you first build a decision tree model on the training data, then study the tree carefully for insights; for example, the decision nodes nearest the root split on the most discriminant questions.
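The quantity a decision tree typically uses to pick those splits is information gain, and you can compute it per question without building the full tree. A minimal sketch, using stdlib Python and made-up toy data (one perfectly predictive question, one pure-noise question):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(answers, labels):
    """Entropy reduction from splitting the labels by one question's answers."""
    base = entropy(labels)
    n = len(labels)
    by_answer = {}
    for a, y in zip(answers, labels):
        by_answer.setdefault(a, []).append(y)
    return base - sum(len(ys) / n * entropy(ys) for ys in by_answer.values())

# Hypothetical data: q1 perfectly predicts selection, q2 is noise.
labels = [1, 1, 0, 0]
q1 = ["B", "B", "A", "A"]
q2 = ["C", "D", "C", "D"]
print(information_gain(q1, labels))  # -> 1.0 (perfect split)
print(information_gain(q2, labels))  # -> 0.0 (uninformative)
```

Ranking all 20 questions by this score gives a quick shortlist of the discriminant ones, even before inspecting the tree itself.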

Yin Zhu
+1  A: 

The answers can be made numeric for analysis purposes by one-hot (dummy) encoding them, for example:

RespondentID  IsSelected  Q1AnsA  Q1AnsB  Q1AnsC  Q1AnsD  Q2AnsA...
12345         1           0       0       1       0       0
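Producing rows in that shape is a few lines of stdlib Python. A minimal sketch (the two-question call at the bottom is just a hypothetical illustration):

```python
def one_hot(answers, questions=20, choices="ABCD"):
    """Expand a list of A-D answers into 0/1 indicator columns,
    one column per (question, choice) pair, as in the table above."""
    row = []
    for q in range(questions):
        for c in choices:
            row.append(1 if answers[q] == c else 0)
    return row

# Hypothetical respondent answering 'C' on Q1 and 'A' on Q2:
one_hot(["C", "A"], questions=2)  # -> [0, 0, 1, 0, 1, 0, 0, 0]
```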
  1. Use association analysis to see if there are patterns in the answers, e.g. rules of the form:

Q3AnsC + Q8AnsB -> IsSelected

  2. Use classification (such as logistic regression or a decision tree) to model how respondents are selected.

  3. Use clustering. Are there distinct groups of respondents? In what ways are they different? Use the "elbow" or scree method to determine the number of clusters.

  4. Do you have other info about the respondents, such as demographics? A pivot table would be useful in that case.

  5. Is there missing data? Are there patterns in the way people skipped questions?
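For a rule like `Q3AnsC + Q8AnsB -> IsSelected`, the two standard association-analysis measures are support and confidence. A minimal stdlib sketch over dummy-coded rows; the column names and four-row dataset are hypothetical, matching the encoding shown above:

```python
def support_confidence(rows, antecedent, consequent="IsSelected"):
    """Support and confidence of the rule: antecedent columns -> consequent.

    rows       : list of dicts mapping column name -> 0/1
    antecedent : column names that must all equal 1
    """
    matches = [r for r in rows if all(r[a] == 1 for a in antecedent)]
    hits = [r for r in matches if r[consequent] == 1]
    support = len(hits) / len(rows)                        # P(antecedent & consequent)
    confidence = len(hits) / len(matches) if matches else 0.0  # P(consequent | antecedent)
    return support, confidence

# Hypothetical dummy-coded data for the rule Q3AnsC + Q8AnsB -> IsSelected:
rows = [
    {"Q3AnsC": 1, "Q8AnsB": 1, "IsSelected": 1},
    {"Q3AnsC": 1, "Q8AnsB": 1, "IsSelected": 1},
    {"Q3AnsC": 1, "Q8AnsB": 0, "IsSelected": 0},
    {"Q3AnsC": 0, "Q8AnsB": 1, "IsSelected": 1},
]
support_confidence(rows, ["Q3AnsC", "Q8AnsB"])  # -> (0.5, 1.0)
```

With only 20 questions, you could even enumerate all one- and two-column antecedents exhaustively rather than reaching for a full Apriori implementation.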

el chief