This week’s reading is ridiculous. Not only does it trash statistics to an unbearable extent, it also justifies its wrongdoing shamelessly. One quote from the book says it all – “Because product design is not research but engineering, we are not concerned with getting at scientific “truth”; our is more practical and less business. Our evaluation drives our engineering judgement, which is also based on hunches and intuition that are, in turn, based on skill and experience.” It’s this kind of attitude that leads to designs that kill people.

List of sins against statistics:

1. “It may help to include standard deviation values, for example, to indicate something about the rough level of confidence you should have in data.”

A standard deviation and a level of confidence are very different things. A confidence level expresses how confident we are that the true value (the population mean) falls within an interval called the confidence interval (e.g., we are 95% confident that the average time it takes a user to print a report in the system is between 2 and 5 minutes). Our confidence is in our method, not in our data.
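To make the distinction concrete, here is a minimal sketch of computing a 95% confidence interval for a mean, using only the Python standard library; the task times are invented for illustration:

```python
import statistics

# Hypothetical task times (minutes) from 12 usability-test participants.
times = [2.1, 4.8, 3.2, 2.9, 5.0, 3.7, 2.4, 4.1, 3.3, 2.8, 4.5, 3.6]

n = len(times)
mean = statistics.mean(times)
sem = statistics.stdev(times) / n ** 0.5  # standard error of the mean

# 95% z-interval. (For a sample this small a t critical value is more
# accurate, e.g. scipy.stats.t.ppf(0.975, n - 1), but the idea is the same.)
z = statistics.NormalDist().inv_cdf(0.975)  # about 1.96
low, high = mean - z * sem, mean + z * sem

print(f"mean = {mean:.2f} min, 95% CI = ({low:.2f}, {high:.2f})")
```

The interval is built from the standard deviation, but the 95% is a statement about the procedure: if we repeated the study many times, about 95% of the intervals built this way would contain the true mean.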

2. “Sometime it can mean that you should try to run a few more participants. ”

The number of participants should be decided in advance, based on what you need from the data (for example, to get a margin of error of about 3%, you need roughly a thousand participants). You can’t fix your result by adding more participants after the fact.
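The arithmetic behind that sample-size rule fits in a few lines, assuming a 95% confidence level and the worst-case proportion p = 0.5:

```python
import math

def sample_size(margin, p=0.5, z=1.96):
    """Required n for a CI on a proportion: n = z^2 * p * (1 - p) / E^2."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size(0.03))   # a 3% margin needs roughly a thousand people
print(sample_size(0.003))  # a 0.3% margin needs about 100x more
```

Halving the margin of error quadruples the required sample, which is exactly why "run a few more participants" is not a principled way to rescue a study.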

3. What are the UX goals based on?

The book says nothing about how the UX goals are actually set. They may well be set too high, or even be unrealistic in some cases. How can you tell?

4. “The quantitative data analysis for informal summative evaluation does not include inferential statistical analysis.”

So you have the data; why not run inferential statistical analysis on it? In fact, there are free programs for doing exactly that. It can be done in minutes, and it costs nothing.
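As one example of such a free program, the SciPy library runs a two-sample t-test in a single call. A hypothetical sketch, with invented task times for an old and a new design:

```python
from scipy import stats

# Time (seconds) to complete the same task with each design (made-up data).
old_design = [48, 52, 61, 55, 49, 58, 63, 51]
new_design = [41, 44, 50, 39, 46, 43, 48, 45]

t, p = stats.ttest_ind(old_design, new_design)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p (conventionally < 0.05) means the observed speed-up is unlikely
# to be chance alone.
```

A few lines, a few minutes, no cost – there is no practical excuse for skipping it once the data is already collected.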

5. Why is this the only way to identify UX problems?

Statistical analysis can show association too. In statistics we have both quantitative and categorical data. Say a survey question asks “What do you think is the best way to get to the XXXX screen?” and there are 4 answers. That data is categorical, not quantitative. We can definitely run a test on it to see whether there is an association between the chosen answer and the average time spent doing a task. On the other hand, it makes sense: since the book tells us not to do statistical analysis, I guess qualitative analysis is the only way left.
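A sketch of what such a test could look like: a one-way ANOVA with the four survey answers as the categorical factor and task time as the quantitative response. All data below are invented for illustration.

```python
from scipy import stats

# Task time (seconds), grouped by which answer the participant chose.
route_a = [30, 34, 29, 33, 31]
route_b = [45, 41, 48, 44, 46]
route_c = [32, 35, 30, 33, 34]
route_d = [40, 38, 43, 39, 42]

f, p = stats.f_oneway(route_a, route_b, route_c, route_d)
print(f"F = {f:.2f}, p = {p:.4f}")
# A small p suggests the chosen route and the task time are associated.
```

So mixed categorical/quantitative data is not an obstacle to inference at all; it just calls for a different test than a plain comparison of two means.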

I’m not disputing the value of qualitative data analysis. I think it’s great for finding UX issues. It’s just that the book is unfair in how it describes the power of statistics. If I want to make a very high-cost decision, I want to know the probability that I will be right. I can’t just put my trust in the honesty of the participants and the ability of the evaluators to interpret the data.

There are questions that qualitative analysis can never answer. One I can think of is “Pepsi or Coke, which one is better?”. You can never convince me with a conclusion drawn from qualitative research. Exploring the goals and emotions behind each brand won’t tell you anything!

Dylan

This is a good example of what a reflection should be – engaging with the material.

More like ranting at the author….

I was doing stat homework…

I was in STAT mode…

Then I went crazy. I didn’t know what I wrote…

It is very interesting, because I was thinking the opposite myself. I think the book tries to be more scientific than necessary, since this is design, not research. So I hold exactly the opposite view of the same phenomenon. I roughly summarized my idea here in case you are interested. (It looks like an advertisement.)

Oops. I forgot the link. http://intuinno.wordpress.com/2012/11/03/research-and-design/

These different perspectives would make for an interesting conversation!