After our eighth session

Only two sessions remaining!

For Thursday:
We'll be focusing on data collection and analysis. Sophia will share some of her work on the Speechome project, and we'll try out an analysis activity with some interview transcripts. I've posted the readings - please post your questions/comments by 3pm on Thursday!

For Monday:
We've been working hard for the past several weeks, so I thought we should celebrate with a seminar/dinner combination. I'll provide dinner - you bring your reflections (and maybe a dessert?) to share.

Please RSVP by email by Friday, so I know how many people to plan for. Based on the meetings I had with people who are writing, let's extend class by 30 minutes to 7:30pm on Monday, so that we have enough time for sharing and some wrap-up.

Kvale, Chapter 15, Ten standard objections to interview research:

- maybe like truthy, we should have sciency?
- is science not common sense? (ie, is it /not/ science when it's common sense?) (Really?)
- I am not sure the objective/subjective issue is even a real issue. what's objective?
- how do you sufficiently acknowledge bias? what if you get it wrong? how can a reader consider that validity?
- the leading question point is either reiteration of bias, or elevation of bad research to the status of good research...
- to what extent do the answers have to match for the results to be equivalent? is what is being said the same as what is being meant?
- flip side: how do you know what's being meant?
- if the hypothesis one is not troll-bait, I don't know what is... are these points still valid?
- it's qualitative: Noooo... Really? on the other hand, I don't see a non-dogmatic defense to this one...
- while objectivity might not be in fashion anymore, it does bring forth the question of "what is a valid interview"

Kvale, Chapter 11, Methods of analysis:

- is sending back a type of triangulation? to what extent is interviewing participatory?
- how many passes are needed to do categorization? what's strongest/weakest? how do you decide? is it objective?
- the ad-hoc methods seem to get a bit of a short run here...
- how do you know which of the techniques would work without testing their efficacies? do you pick after the fact?
- what's the category discovery system? how do you know it applies? what is sufficient similarity and coverage?
- are interviews being used as a debiasing system?
- to what extent must interviewers define themselves when doing interpretive work?
- how do you prevent yourself from "reading in"? or is that expected?
- without interpretation, can an interview be reduced?
- why isn't there commentary on recording the internal state of the interviewer?
- is quantification feasible in any meaningfully-triangulated way?
- how do you run "clean-room" tests with multiple coders?
- how much explication is enough?
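The questions above about multiple coders and quantification have at least one concrete answer in standard practice: compare two coders' independent categorizations of the same passages and correct their raw agreement for chance. A minimal sketch, using Cohen's kappa; the coders, category labels, and passages below are invented for illustration, not drawn from the reading.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' category labels on the same passages."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: fraction of passages coded identically.
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected chance agreement, from each coder's marginal category rates.
    count_a, count_b = Counter(codes_a), Counter(codes_b)
    p_expected = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Two hypothetical coders labeling the same ten transcript passages.
coder1 = ["theme", "aside", "theme", "theme", "aside",
          "theme", "quote", "quote", "theme", "aside"]
coder2 = ["theme", "aside", "theme", "quote", "aside",
          "theme", "quote", "theme", "theme", "aside"]
print(round(cohens_kappa(coder1, coder2), 3))  # → 0.677
```

Here the coders agree on 8 of 10 passages (0.8 raw), but since both use "theme" half the time, quite a bit of that agreement is expected by chance, so kappa lands noticeably lower. How high is "high enough" is, of course, exactly the kind of judgment call the chapter leaves open.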

Kvale
Isn't it interesting that we're still relying on the researcher for the breakthroughs on how ideas are put together, no matter how rigorous, mechanized, or detailed the analysis tool is: the human is the instrument.

Has anyone thought to perform quantitative analysis on interview research methods? It strikes me that a few interviews, appropriately structured, would cover quite a bit of the space of key terms, events, and other factors that would emerge from the analytical steps of narrative structuring, categorization, and perhaps condensation. Perhaps I'm showing my quant bias, but it seems that there is something akin to a sampling theorem that would indicate the value of multiple subject interviews, or multiple interviewers, in establishing coverage or accuracy. This is only a small part of the space of valid studies, but could serve as a bridge from narrative research to more conventional research modalities.
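The sampling-theorem intuition above can be made concrete with a toy saturation curve: track how many new key terms each additional interview contributes, and stop when the curve flattens. A hedged sketch only; the interviews and term sets are invented, and in real work the "key terms" would come out of the categorization step itself.

```python
def coverage_curve(interviews):
    """Cumulative count of distinct key terms after each interview."""
    seen, curve = set(), []
    for terms in interviews:
        seen.update(terms)       # add any terms not yet encountered
        curve.append(len(seen))  # running total of distinct terms
    return curve

# Hypothetical key-term sets extracted from five interviews.
interviews = [
    {"routine", "family", "commute"},
    {"routine", "media", "family"},
    {"media", "commute", "privacy"},
    {"privacy", "routine"},
    {"family", "media"},
]
print(coverage_curve(interviews))  # → [3, 4, 5, 5, 5]
```

The flattening tail is the quantitative signal that additional subjects are adding little new coverage, which is roughly what qualitative researchers call saturation; the open question in the paragraph above is whether anything stronger than this heuristic can be established.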

I really liked the analogy to talk therapy interviews. From an action research perspective, narrative interviewing has significant value in systematically identifying key conceptual and linguistic structures that emerge in a given context, which form a central part of the toolkit for deciding where, how, and when to intervene. It may be that such intervention is highly context-specific, like talk therapy, and thus highly useful even if not directly subject to generalization.

The review of critiques was interesting, particularly in contrast to classical survey-based research. Almost all of the complaints outlined for interviews apply equally to classical surveys: questions can be leading, and the writer can have a significant bias in the choice and style of questions and target 'variables'. A survey is almost by definition easier to generalize, since you are asking focused, highly diluted questions that attempt to smooth over contextual differences by sampling larger populations. To get this generalizability/reproducibility, researchers throw away a great deal of potential context, which may be relevant, confounding, or otherwise.

The related observation is that the models that motivate the design of a quantitative survey are themselves not subjected to quantitative analysis. Qualitative methods would be highly informative and would help debug these models, but in much of the so-called hard social sciences this part of the research is done in a highly ad-hoc manner.

"Methods of Analysis"

Hella useful!
The explanation of ad hoc methods seemed a bit thin (not entirely surprising), but the others were nicely detailed.

On pg. 203 "In the social sciences a hermeneutics of suspicion is pronounced in psychoanalysis and Marxism, where the interpreter looks for meanings behind or beneath what is directly expressed."
Interesting--made me wonder what other lenses or fields might encourage this suspicion.

It's implied that researchers use one method of analysis, but might they use multiple methods?
I suppose the subject determines that.