Author Archives: nickhopwood

About nickhopwood

I’m a Senior Research Fellow at the University of Technology, Sydney (UTS). I am interested in learning and pedagogy, ethnography, and practice theory (especially in relation to times, spaces, bodies, and things). I also blog about academic work, and about research perspectives, methodology and design. Follow me on Twitter @NHopUTS

Anxiety in academic work

Hi everyone

This is a short blog post to accompany a YouTube video I posted recently, about anxiety in academic work and particularly among research students. It’s a fairly simple video in which I talk mainly about my own personal history and experiences of anxiety, and what I’ve learned about it along the way. No flashy data, no promises of solutions. Just an honest sharing of experience that puts anxiety out there as something that happens and is okay to talk about.

Why did I write it? Because of the work I do, I come into contact with students from lots of different universities and countries.  I got an email from a student who had experienced anxiety in relation to her studies. Part of what she wrote was:

It is a learning process, right? I’m still figuring out what works for me, like walking for long time is really good. But just recognizing that this anxiety is a problem, like a broken finger, for example, and that it needs some time, maybe medicine, to heal, has been a big step. And I know it goes away. Just being able to put a name on it, has helped me a lot. And what also help is to talk to people who experience such things, and realizing that it is so normal. For me, I’m having the ups and downs, and I have had some therapy. But I now somewhat accept this part of me, and that is why I want to make it normal for people to talk about.

This made me think. Anxiety is out there among research students. And I agree with her about how helpful it can be to recognise it and talk about it with others. I also agree with her about how unhelpful it is to sweep things like anxiety under the carpet and hide them away.

So, I wanted to make a video about anxiety. But it’s not my area of expertise, either in terms of the research I’ve done about doctoral students or in any medical or clinical sense. So I have to be careful. I thought it might at least be useful to reflect on my own anxiety, and lay out publicly what happened, what I tried to do in response, what worked, what didn’t, and how I view it all now.

If you want to follow up with a serious academic paper on this topic, I would recommend this as a good place to start: Wisker & Robinson (2018) In sickness and in health, and a ‘duty of care’: PhD student health, stress and wellbeing issues and supervisory experiences. It is a chapter in a book called Spaces, journeys and new horizons for postgraduate supervision, published by SUN Academic Press.

 

 


When coding doesn’t work, or doesn’t make sense: Synoptic units in qualitative data analysis

You can download a full pdf of this blog post including the three examples here. Please feel free to share with others, though preferably direct them to this page to download it!

 

How do you analyse qualitative data? You code it, right? Not always. And even if you do, chances are coding has only taken you a few steps in the long journey to your most important analytical insights.

I’m not dismissing coding altogether. I’ve done it many times and blogged about it, and expect I will code again. But there are times when coding doesn’t work, or when it doesn’t make sense to code at all. Problems with coding are increasingly being recognised (see this paper by St Pierre and Jackson 2014).

I am often asked: if not coding, then what? This blog post offers a concrete answer to that in terms of a logic and principles, and the full pdf gives examples from three studies.

Whatever you do in qualitative analysis is fine, as long as you’re finding it helpful. I’m far more concerned with reaching new insights, seeing new possible meanings, making new connections, exploring new juxtapositions, and hearing silences I’d missed in the noise of busy-work than I am with following rules, procedures, or methodological dogma.

I’m not the only one saying this. Pat Thomson wrote beautifully about how we can feel compelled into ‘technique-led’ analysis, avoiding anything that might feel ‘dodgy’. Her advocacy for ‘data play’ brings us into the deliciously messy and murky realms where standard techniques might go out of the window: she suggests random associations, redactions, scatter gun, and side by side approaches.

 

An approach where you are a strength, not a hazard

The best qualitative analyses are the ones where the unique qualities, interests, insights, hunches, understandings, and creativity of the analyst come to the fore. Yes, that’s right: it’s all about what humans can do and what a robot or algorithm can’t. And yes, it’s about what you can do that perhaps no-one else can.

Sound extreme? I’m not throwing all ideas of rigour out of the window. In fact, the first example below shows how the approach I’m advocating can work really well in a team scenario where we seek confirmation among analysts (akin to inter-rater reliability). I’m not saying ‘anything goes’. I am saying: let’s seek the analysis where the best of us shines through, and where the output isn’t just what is in the data, but reflects an interaction between us and the data – where that ‘us’ is a very human, subjective, insightful one. Otherwise we are not analysing, we are just reporting. My video on ‘the, any or an analysis’ says more about this.

You can also check out an #openaccess paper I wrote with Prachi Srivastava that highlights reflexivity in analysis by asking: (1) What are the data telling me? (2) What do I want to know? And (3) What is the changing relationship between 1 and 2? [There is a video about this paper too]

The process I am about to describe is one in which the analyst is not cast out in the search for objectivity. We work with ‘things’ that increasingly reflect the interaction between the data and the analyst, rather than the raw data alone.

 

An alternative to coding

The approach I’ve ended up using many times is outlined below. I don’t call it a technique because it can’t be mechanically applied from one study to another. It is more a logic that follows a series of principles and implies a progressive flow in analysis.

The essence is this:

  1. Get into the data – systematically and playfully (in the way that Pat Thomson means).
  2. Systematically construct synoptic units – summaries of how certain bits of data relate to something you’re interested in. These are not selections or extracts of data; they are written in your own words. (You can keep track of juicy quotations or vignettes you might want to use later, but the point is that this is your writing.)
  3. Work with the synoptic units. Now instead of being faced with all the raw data, you’ve got these lovely new blocks to work and play seriously with. You could:
    1. Look for patterns – commonalities, contrasts, connections
    2. Juxtapose what seems to be odd, different, uncomfortable
    3. Look again for silences
    4. Look for a priori concepts or theoretical ideas
    5. Use a priori concepts or theoretical ideas to see similarity where on the surface things look different, to see difference where on the surface things look the same, or to see significance where on the surface things seem unimportant
    6. Ask ‘What do these units tell me? What do I want to know?’
    7. Make a mess and defamiliarize yourself by looking again in a different order, with a different question in mind etc.
  4. Do more data play and keep producing artefacts as you go. This might be:
    1. Freewriting after a session with the synoptic units
    2. Concept mapping key points and their relationships
    3. An outline view of an argument (e.g. using PowerPoint)
    4. Anything that you find helpful!

 

In some cases you might create another layer of synoptic units to work at a greater analytical distance from the data. One of the examples below illustrates this.

The key is that we enable ourselves to reach new insights not by letting go of the data completely, but by creating things to work with that reflect both the data and our insights, our determinations of relevance, and so on. We can be systematic as we go through all the data in producing the synoptic units. We remain rigorous in our ‘intellectual hygiene’ (confronting what doesn’t fit, what is less clear, our analytical doubts, etc.). We do not close off opportunities for serious data play – rather, we expand them.
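
For those who keep this kind of work in software rather than on paper, here is a minimal sketch of what a synoptic unit might look like as a simple data structure. It is purely illustrative – the field names and the made-up entries are assumptions, not a prescribed template or any particular package – but it shows the core idea: each unit carries your own-words summary plus a link back to the data, so you can shuffle, juxtapose and retrieve as part of data play.

```python
# Illustrative sketch only: one way of keeping synoptic units linked to their
# source data. Field names and entries are made up, not a prescribed schema.
from dataclasses import dataclass, field
import random

@dataclass
class SynopticUnit:
    source: str        # where the unit draws from, e.g. "Interview 3, school B"
    focus: str         # what you were interested in when you wrote it
    summary: str       # your own words, not a copy-paste of the transcript
    quotations: list = field(default_factory=list)  # juicy quotes to retrieve later

units = [
    SynopticUnit(
        source="Interview 1",
        focus="how anxiety is talked about",
        summary="Talks around the topic; anxiety only named when prompted.",
        quotations=["'I just call it a rough patch'"],
    ),
    SynopticUnit(
        source="Interview 2",
        focus="how anxiety is talked about",
        summary="Names anxiety directly and links it to supervision meetings.",
    ),
]

# Data play: meet the units in a different order, or pair them up to look
# for contrasts, connections and silences.
random.shuffle(units)
for unit in units:
    print(f"[{unit.source}] {unit.summary}")
```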

If you’d like to read more, including three examples from real, published research, download the full pdf.

Selective lowlights from my many rejections

This post relates to my rejection wall blog, tweet, and series of videos about rejection in academia.

My rejection wall is there for all to see – ‘all’ being people who happen to come to my office.

While there is fun to be poked at the ridiculousness of academic rejection, there is a serious point to it, so this blog post makes the details of my rejection wall available for all to see, and adds some commentary. This isn’t all my rejections, just some of the juicier bits – selected lowlights if you like – that I think might be the most rewarding for others to read. There are other examples of people doing this in blogs and doing something similar in lab meetings (both of which I think are awesome!).

Maybe reading it just makes you feel a bit better by seeing how crap I’ve had it on several occasions with grumpy journal reviewers and months of grant-writing that came to nothing.

Maybe others can learn from my mistakes (and there have been plenty) – but I can’t promise that reading what follows will avert the inevitable misfortune of an unwarranted or nastily delivered academic rejection.

I have divided it up into rejected research proposals and rejected journal articles.

Some research rejections

Check out my video on why research grant applications get rejected, and then spot lots of those reasons in my failures below 🙂

[Screenshot: reviewer comments from a rejected grant proposal]

It was clear that these reviewers really didn’t think our project should be funded. The language needs to be glowing to even stand a chance. Looking back at the proposal now, I can see where the reviewers were coming from. I think we suffered a bit from not having met regularly as a team to discuss things, and also a bit of group-think: we tended to see and say what we liked about what we were doing. There was no ‘red team’ asking the awkward questions. And I agree (now) that we framed the project around an issue that wasn’t obviously worth caring about in itself. And we didn’t make a good enough case for alignment in what we were proposing.

 

[Screenshot: reviewer comments from another rejected grant proposal]

This was another big team effort. I think part of our problem was that the proposal swelled and got more complex as we fed in bits that represented what each of us offered (read: wanted to do so we all felt important). The team was very diverse – and we all felt we needed each other. None of us, nor any subset of us, could have made a case alone. But somehow it all became too much. Hence the relationship between parts being weak. The point about not offering clear understanding reflects this general problem, plus a second major weakness: we were not in the bullseye of what this particular funding round was about. We were not giving the funders what they wanted.

[Screenshot: reviewer comments from a rejected proposal on women’s safety]

This was a proposal to a funding body specifically about women’s safety. To this day I think our basic idea was a good one: to do a kind of action research. However, with reviewer comments like this, our proposal flew pretty wide of the target. We went too heavy on unfamiliar theory and they couldn’t see how it would work in practice. They also couldn’t see how it would generalise. Lacking content was a big no-no – too much methodology. At the same time we didn’t give enough detail about the methods in terms of the sites. And then we fell foul of the feasibility hurdle. So we misfired on multiple fronts.

[Screenshot: two reviewer comments from a rejected grant proposal]

Several more months of work down the tubes with this one! Among the many issues the reviewers found, the two above were the most catastrophic. Being ambitious is okay if your project looks really feasible and the reviewers don’t get lost in complexity. In this case we failed on both counts. And then we failed to make the case for adding to knowledge. Who in their right mind would fund something that wasn’t going to find anything new? I still think the project as it existed in my mind would have been original and great. But what was in my mind clearly didn’t reach the reviewers. Finally, the ‘reality check’ was a vicious blow! But it pointed to how wrong we got it. The reviewer felt we had an over-inflated budget, to produce a measly evidence base that wasn’t going to reveal anything new. Brutal.

[Screenshot: reviewer comments on missing methodological detail]

Ah – not giving concrete details about the methodology. That old chestnut. Old it might be, but a disaster for this funding proposal! I realise no-one is going to give out money – even for a study on a really important topic by brilliant people – if there isn’t a clear, feasible and persuasive business plan for how it is going to be pulled off (on time, on budget). The methodology section is key to this.

[Screenshot: reviewer comments on analysis detail and Field of Research]

Again falling foul of methodological detail – in this case not explaining how we would do the analysis. The Field of Research point is really important – this is how these applications get directed to reviewers and bigger panels. We badged it as education but the readers didn’t see themselves or their field in what we proposed. I speak about getting the ‘wrong’ reviewer in the video about funding rejections.

 

And now some really lovely journal rejections

I’ve picked these to illustrate different reasons for getting rejected from academic journals – these connect with the reasons I talk about in a video focused on exactly this!

[Screenshot: a desk-rejection email from the journal editors]

Ouch! This definitely wasn’t ‘a good paper went to the wrong journal’. The editors hated it and couldn’t even bring themselves to waste reviewers’ time by asking them to look at it. There was no invitation to come back with something better. Just a ‘get lost’. In the end the paper was published somewhere else.

[Screenshot: a rejection email from a journal editor]

This fell foul of the half-baked problem. The editor thought I was only halfway through. My bad for leaving him with that impression. The paper was published somewhere else without any further data collection or analysis, but with a much stronger argument about what the contribution was.

[Screenshot: reviewer comments from a journal rejection]

This was living proof for me that writing with ‘big names’ doesn’t protect you from crappy reviews. The profs I was writing with really were at the leading edge of theory in the area, and so we really did think it added something new. This paper was rejected from two journals before we finally got it published.

[Screenshot: an extended set of hostile reviewer comments]

This is one of my favourites! This reviewer really hated my paper by the time she finished reading it. The problem was I got off on the wrong foot by writing as if the UK was the same as the whole world. My bad. Really not okay. But then things got worse because she didn’t see herself and her buddies in my lit review. All the studies she said were missing were ones I’d read. They weren’t relevant, but I now know to doff my cap to the biggies in the field anyway. How dare a reviewer question my ethics this way (the students involved asked to keep doing the logs as they found them so useful). How dare a reviewer tell me what theory I need to use? And what possible relevance do the names of her grand-daughter’s classmates have to my paper and its worthiness for publication?! Finally, on the issue of split infinitives, I checked (there was precedent in the journal). When this was published (eventually as a book chapter) I made sure there were plenty still in there. A classic case of annoying a reviewer who started with a valid point, then tried to ghost-write my paper the way she wanted it, and ended up flinging all sorts of mud at me.

 

[Screenshot: a list of journals that have rejected my papers]

The only thing I can say with certainty about this list is: it will get longer! I’ve published in quite a few of these journals too – showing that a rejection doesn’t mean the end of the road for you and a particular journal.

More to follow (inevitably!)

 

Video on activist and change methodologies

I have been working with Ilaria Vanni on a new Designing Research Lab. We are approaching research methods teaching by conceptualising research approaches in terms of: deep dive, place-based, textual, and activist/change methodologies.

I made a short video summarising some key points. There are many different ways of doing activist or change-based research, and I don’t try to cover them all. After explaining some key ideas, the video focuses on action research, change laboratories, and practice-based approaches. These were chosen to illustrate different ways of going about research that can deliver change and create new possibilities for change.

You can download the PowerPoint used in the video here. Many of the images and speech bubbles point to other useful resources or original sources.

Some of the highlights in terms of framing ideas include these quotations:

“There is no necessary contradiction between active political commitment to resolving a problem, and rigorous scholarly research on that problem.” (Hale 2001)

“Activist research is about using or doing research so that it changes material conditions for people or places. It is different than cultural critique, where texts are written with political conviction, but no concrete changes are made on the ground.” (Hale 2001)

“We are all participating in, and contributing to, the making of history and of our common future, bearing responsibility for the events unfolding today and, therefore, for what is to come tomorrow. The social structures and practices exist before we enter them… yet it is our action (or inaction), including our work of understanding and knowing, that helps maintain them in their status quo or, alternatively, to transform and transcend them” (Stetsenko 2017)

Let me know in the comments below: are you involved in activist research? What other approaches are you using to deliver concrete change in the world?

The great wall of rejection

The saga of my #rejectionwall continues! The tweet that spawned 250,000 impressions and 1,000 retweets is still making new things happen.

The latest installment is here – from UTS’ U: Magazine.

It was preceded by my shadow CV and followed up with a post on another blog (It’s all pretty funny), a post as part of the ‘How I Fail’ series, a re-post on an Indian website and two different posts on the UTS Futures Blog (one is a video interview, and the other a written reflection), a piece in Failure Magazine and then the interview and video for U: Magazine – produced by UTS, where I work.

The write-up brings together a few bits of the rejection story so far, and weaves them into the long and protracted story of rejections and failures in my career. There are some new thoughts in there too.

What I really liked about it was that they got me and other academics to read out our rejections on camera. A bit like the mean tweets thing where celebrities read out insulting tweets about them, I found it really helped to step back, laugh, but also ‘own’ the rejection in a productive way. Kind of what the rejection wall did for me, but more so.

I love the idea of a YouTube channel just full of academics reading out their rejections, commenting on how unreasonable (or badly written, or unethical etc.) they are.

Anyone interested in making this a thing?!

How do I know I’m coding well in qualitative analysis?

Coding. Yay. Eek. Ugh.

Let’s face it, coding is a biggie. You don’t get far in the qualitative data analysis literature without seeing some mention of it. To be clear, this post does not assume coding is necessary in all qualitative data analysis. Nor does it assume coding amounts to qualitative analysis in the sense that coding is all you need to do. There is always an analytical residue – more interpretive work post-coding; in fact coding is often only a small part of qualitative analysis. Lots of analyses I’ve done haven’t used coding at all.

But coding can be incredibly valuable to us as qualitative data analysts. The problem is, it’s really easy to be busy coding but not to be doing so well. In this post I’m trying to spell out what it might mean to code well, and how you might know if you’re doing so.

 

Why code in the first place?

If you’re coding without knowing why, and without having made a deliberate choice to do so (rather than feeling you have to), it’s not a good start. Coding potentially serves lots of purposes, including but not limited to:

  1. Enabling you to retrieve chunks of data, or particular phrases, quotations etc., later when you need raw data in your writing, or if you want to check ideas that come up (there’s a small sketch of this right after the list).
  2. Helping you ‘be with’ the data in a particular way, getting you up-close to the text.
    1. Maybe you might notice things in it that you haven’t seen before
    2. Maybe you might notice things important to the participants (but not originally to you)
  3. Lifting your ‘being with’ the data up a level to notice distinctions and associations (ie similarities and differences) at a high level of resolution
  4. Lifting your ‘being with’ the data up a level to notice where concepts or theoretical ideas might be manifest in concrete instances in your data
  5. Helping you develop codes, categories, or themes that can become building blocks for subsequent analysis. You might compare or contrast these within or across cases, for example, or employ frequency counts.
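
To make the first of these purposes concrete, here is a minimal sketch of a codebook as a retrieval structure, written in plain Python rather than any particular QDA package – the code names and data segments are made up for illustration. The point is simply that each code points back to the chunks of data it was attached to, so you can pull them out later when writing or checking an emerging idea.

```python
# Illustrative sketch only: a codebook as a retrieval structure. Code names
# and data segments are invented; no particular QDA package is implied.
from collections import defaultdict

codebook = defaultdict(list)  # code name -> list of (source, segment) pairs

def code(segment: str, source: str, *codes: str) -> None:
    """Attach one or more codes to a chunk of data so it can be retrieved later."""
    for c in codes:
        codebook[c].append((source, segment))

code("I stopped telling my supervisor when I was struggling.",
     "Interview 4", "disclosure", "supervision")
code("Walking for a long time really helps.",
     "Interview 5", "coping")

# Later: pull back everything coded 'supervision' to check an emerging idea.
for source, segment in codebook["supervision"]:
    print(f"{source}: {segment}")
```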

 

What does coding well look like?

Coding is a slippery slope. I slid a long way down it several times, landing with a bump when I realised I’d been busy but uselessly so for several weeks. I forgot to keep these things in mind:

  1. Good coding relates to how hard you’re thinking (and helps you think harder). If you’re finding coding easy, or you’re not constantly having to make difficult decisions about what to name codes, how to code pieces of data, how big those pieces should be etc, chances are you’re not coding well.
  2. Good coding means you are seeing new things in the data and these new insights are progressive. Progression might mean enriching your argument (answer to research questions), or sharpening it (fixing in on what is essential, for example). These ‘new things’ could be new codes or categories or themes, but they could also be patterns, distinctions, associations, forms of significance, why things matter etc.
  3. Good coding settles towards a parsimonious set of codes/categories/themes. The best coding system is not the one with the most codes in it. An analyst who has created 10,000 categories has not done work that is 1,000 times better than the analyst who has created 10 categories. Chances are, the latter has been thinking much harder than the former as she goes. By parsimonious I mean striking the optimal balance between power of explanation (persuasive, novel argument and insights) and complexity (number of ideas or building blocks in the argument). We can expect diminishing returns: adding five more codes or categories to a system that already has 50 probably doesn’t add as much value as adding five to a system that only has two or three.
  4. Good coding opens up as much as it closes off: coding rarely, if ever, provides the answers to your questions. Rather it creates building blocks or thinking tools (and retrieval systems) that allow you to get closer to those answers. So good coding might open up by:
    1. Making new connections between parts of the data possible
    2. Making new distinctions between parts of the data possible
    3. Leading you to frame new questions that might specify how you will arrive at the answer to your big research questions
    4. Giving you units of data, concepts, ideas (and their inter-relationships) that you put to work in the next analytical stage.

But good coding isn’t purely expansive and generative. It also has to have boundaries and bring focus. So good coding might close off by:

  1. Helping you decide what data or concepts or categories to focus on, and which to set aside
  2. Consolidating what used to seem disparate or unconnected into coherent units that you can work with in whatever follows.

And this leads to my final point: good coding is a process that enables you to take further steps in analysis that wouldn’t have been possible without having done the coding. The codes are not the outcome (unless your research findings are going to be simply a matter of describing themes that come up in interviews, for example, which sounds terribly dull). If you can do the next step without more coding, perhaps it’s time to move on. If you can do it without the coding at all, why are you coding?

I would love to hear your experiences of coding – why do you code? When and how do you choose not to code, or to stop and move on to other analytical processes? How do you know you are coding well? Have you had experiences (like I have) when you’ve spent ages coding only to realise it hasn’t got you where you wanted to be?

Enhancing 1:1 research interviews: the secret power of the third thing

The one to one interview is a widely used means of generating data in qualitative research. It is a chance for a researcher to spend time exploring a participant’s experiences, practices, perceptions, stories, in detail.

I’d like you to imagine what this might look like. Perhaps you’ve done some interviewing yourself. Perhaps you are planning to do so. What will the set-up be? Here are some images I found on Google that capture the sorts of practices I’m referring to.

The point here is the interview is constructed and conducted as a dialogue – a to and fro between two people. Generally one person (the researcher) is asking questions about the other (the participant). It is a dyadic interaction.

My argument is that interviews are better conceived and done as triadic interactions: between the researcher, the participant, and something else.

[Image: a typical one-to-one interview set-up – researcher and participant]

Becomes

[Diagram: a triadic set-up – researcher, participant, and a third thing, with an X marking the interaction]

Before I go further I need to lay out two assumptions:

  1. Interviews give direct evidence of what someone says in response to questions they are asked in particular circumstances. Nothing else. They are not a magical process that gives direct evidence of what people think or feel. Who is asking them, what they are being asked, and where and when this is happening all contribute to the way in which responses (i.e. data) are constructed within the context of situated social interaction.
  2. Good interviews help people construct useful answers. Useful has lots of dimensions – an element of ‘truth’ or at least genuineness is important, but also detail, relevance, clarity etc. We are not in the business of discovering what is in people’s heads here (at least not in the way I’m thinking about research interviews).

So the X on the diagram above is there to show that data come from interaction between you, the interviewee and a third thing.

This third thing, when used in particular ways, has amazing magical powers.

What am I talking about? What is this third thing?

The third thing can be an object or idea (ie concrete or abstract), but to have these magical powers it has to change the structure and function of the interaction.

A list of questions that the researcher has in her hands does not count. A digital audio recorder does not count. A cup of tea for each of you does not count. These things are useful but they do nothing in themselves to shift from a dyadic to triadic way of conducting the interview.

The objects or ideas that work as magical third things do so by changing the scenario from one in which the researcher asks the interviewee about herself to one in which the researcher asks the interviewee about the third thing.

If we wanted to know what a teacher thinks about, say, teaching in schools, we could ask “What do you think about ensuring accessibility for all learners?”. The question is aimed at the person, directly.

Instead, we could show a photograph of a classroom, or a video, or have one of the teacher’s lesson plans or some examples of resources she has used in her lessons. Then we could ask the interviewee to comment on those things. The interviewee can look at them, perhaps even pick them up.

We have changed from a question that follows a path directly from the researcher to the interviewee and back, to one that goes via a third thing.

Other examples could be diaries, concept maps, small cards with different words or pictures on them, computers or tablet devices, other relevant documents, artefacts people have produced or used – the list is pretty much endless. The magic is not in the thing itself, but in how it is used.

As I mentioned, some of these third things might not be concrete, tangible objects. They could be ideas that are invoked through the way the question is put together. Let me explain with an example.

I’ve been interviewing a lot of parents recently, people who have been experiencing significant difficulties, often for reasons beyond their control. When I was piloting, I found that the question “What do you think your strengths are as a parent?” didn’t produce very good answers. Duh. Why would it? These were vulnerable people who were often judging themselves as failing.

Instead I started asking questions like these: “If I asked your partner what he thinks your strengths are as a parent, what would he say?” or “If I asked someone who knows what you’ve been through and knows you really well, what would she say has enabled you to get through it all?”. These questions still take an indirect route (i.e. via the top of the triangle on the diagram above), but this time the third thing is invoked in an imaginary way.

Why is this indirect, triadic way so valuable?

  1. It helps participants pause before answering. Other things (like cups of tea, biscuits etc.) do too, but the point is that, at least when the third thing is a physical object, people can comfortably entertain silence while they think for a moment. I’ve found even the intangible versions work similarly: if you ask someone about themselves, there’s an expectation that they should know the answer. If you ask them about someone else, it seems more permissible to take some time to think. And generally in interviews, silence is golden! (The other important silence is the one you, the researcher, leave after the answer, but that’s probably for another blog post!)
  2. It is less confronting. Not X asking about Y, but X asks Y about Z. A very useful shift, particularly when we are talking about sensitive issues.
  3. It helps construct responses that are closely tied to concrete examples, giving rich empirical detail not generalised, vague abstractions.
  4. It is based on Vygotsky’s theory of learning (the principle of mediation of activity through tools and signs). There are shelves of books on this. I’m not going to explain it here, other than to note that the idea has a sound theoretical basis.
  5. It helps balance pre-determined structure and emergence in the interview: the third things can shape (suggest direction, boundaries) but not fix the way the interview goes.

I’m not claiming these ideas are particularly new. People use them all the time. But I haven’t read much about them in the textbooks, at least not explicitly framed this way.

Finally, here are some brief comments from Sally Irvine-Smith, a wonderful doctoral student here at UTS. She has been working with interviews on these principles and kindly offered to share some of her experiences. Thanks Sally!

I am adopting a practice approach which focuses very strongly on what people ‘do’. My study is about decision makers in the local sphere. One group of participants were involved in a community panel to decide how to spend a certain amount of money for a local council. They were given a large folder of documentation and I asked them to bring that along to the interview where we jointly examined it. Interestingly, although they all (but one) had quite an affection for the folder as a memento of their time on the panel, it was not particularly important to them as a source of information. The remainder of my participants were elected members or council officers. I did something a little different in their case: I asked them to think of a decision they were making and I treated that decision as the object in the interview. I encouraged the participants to examine their decision as if it had material form, to discuss its genesis and outcome, to describe who helped to shape it, and how it transformed and developed over time. I haven’t written up any of my results yet, but my data analysis indicates that the results from this technique are rich and provide an authentic picture of what my participants actually do when getting information for their decisions.