
Some comments on Ben Goldacre’s paper about the role of RCTs and evidence-based practice in education, and the response to it

Ben Goldacre has recently authored a paper, Building Evidence into Education, published through the UK’s Department for Education (which invited him to do so). It has been extensively (mis)reported in the press and social media. A video of a related speech is available on YouTube. I am intrigued by Goldacre’s arguments, and fascinated by the response his paper has provoked; see

1. The Guardian website, following an article that bears a curious subtitle (which Goldacre corrects);

2. Goldacre’s own website, with the paper and responses;

3. Geoff Whitty’s (Director Emeritus of the Institute of Education) ‘guarded welcome’ to the paper, on the Centre for Education Research & Policy website, which asserts the need for a degree of realism;

4. Prof Mary James’ (President of the British Educational Research Association, and University of Cambridge) response on the BERA website, which raises crucial critical questions and doubts;

5. Twitter, where, for example, @MarkRPriestly refers Goldacre to literature about the need to treat RCTs with caution in the social sciences.

A bit of context

For those who may be less familiar with science policy and media in the UK, Ben Goldacre is a key public figure who, among other things, holds scientists to account in terms of quality research, and the media to account for questionable reporting of research findings. He has recently moved out of the medicine / pharmaceutical area and written on educational research. An easy reaction is perhaps to position Goldacre as an outsider – neither teacher nor educational researcher. ‘Get off our lawns’ is generally not a very helpful response in my experience. I think educational researchers have benefitted enormously from insights from outside – historians, anthropologists, sociologists, economists: why not medics, too?

I wish to lay out clearly what I understand Goldacre is and is not arguing.

What Ben Goldacre is arguing

1. That we can improve outcomes for children and increase professional independence of school teachers by collecting better evidence about what works best, and establishing a culture where this evidence is used as a matter of routine.

2. That education can reap these benefits by replicating the successes that have been enjoyed in medicine, including the emphasis on randomised controlled trial (RCT) research, information architectures, and cultures of being research literate and research engaged.

3. That RCTs are the best process we have for comparing one treatment or intervention against another, for answering questions of whether something works better than something else.

I do not disagree with any of the aims Ben Goldacre is trying to achieve – who would argue for worse outcomes for children or reduced professional independence? (NB. I am not ignoring widespread views that teacher professional autonomy has been undermined in recent years; this is an important, but tangential issue for now).

Neither do I disagree with any of the claims I’ve identified above (a selection from many arguments Goldacre makes). I do have some caveats that I would append to his arguments, and some questions and comments that might explain aspects of the response that Goldacre says surprised him.

What Goldacre is not arguing

My reading of the paper, and of Goldacre’s rejoinders to comments, leads me to the following understanding:

1. He does not suggest that RCTs should replace qualitative or other quantitative approaches to research in education. He does suggest the balance needs to swing to extend the number of RCTs.

2. He does not suggest that RCTs are a kind of gold standard for all educational research. He is generally very careful in his wording, arguing that RCTs are the best we have for answering questions of what works. He explicitly acknowledges the limits of RCTs (for example they can’t tell us why something works), and acknowledges a valuable role for qualitative research (and presumably other quantitative methods, such as quasi-experiments where interventions are applied to groups not themselves created through random allocation).

Why I agree that RCTs are important and there should be more of them in (British) educational research

RCTs are an incredibly powerful research tool. Through random allocation of participants to either an intervention group or a control group, RCTs can control for a whole host of complicating factors that make research involving human beings (whether medical, educational, development-based etc) difficult. As Goldacre argues, they really are the fairest test for whether something works better than something else. Note that they only ever give a comparative answer, not an absolute one: they don’t provide a complete recipe for ‘what works’. They tell us that something works differently (and we interpret that difference as better/worse) than something else. Goldacre is clear on this.
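The comparative logic of a two-arm trial can be sketched as a toy simulation. Every number here (sample size, effect size, noise level) is invented purely for illustration; a real trial would involve pre-registration, power calculations, and a proper significance test:

```python
import random
import statistics

random.seed(42)

def run_toy_rct(n_pupils=200, baseline=50.0, boost=3.0, noise=10.0):
    """Toy two-arm trial: randomly allocate pupils, then compare group means.

    All parameters are hypothetical. The point is the design: random
    allocation balances unmeasured pupil-level differences (in expectation)
    across both arms, so the difference in means reflects the intervention.
    """
    pupils = list(range(n_pupils))
    random.shuffle(pupils)                       # random allocation is the key step
    half = n_pupils // 2
    intervention, control = pupils[:half], pupils[half:]

    # Simulated test scores: unmeasured variation between pupils shows up
    # as noise, shared equally by both arms thanks to the randomisation.
    score = lambda extra: baseline + extra + random.gauss(0, noise)
    scores_i = [score(boost) for _ in intervention]
    scores_c = [score(0.0) for _ in control]

    # The trial yields a *comparative* answer: a difference in means,
    # not an absolute statement that X 'works'.
    return statistics.mean(scores_i) - statistics.mean(scores_c)

print(f"Estimated effect of X over Y: {run_toy_rct():+.2f} points")
```

The output is only ever 'X scored this much higher/lower than Y', which is exactly the comparative (not absolute) character of RCT evidence described above.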

They are amazing, and on top of the examples given by Goldacre, I’d add the High/Scope Perry project, that continues to build an incredible evidence base relating to the difference that a particular approach to nursery education can make to people throughout their lives. RCTs really are like Heineken: they can reach parts of understanding that other methods cannot. As long as your question is ‘Does X work better than Y?’.

The limits to RCTs

Ask any other kind of question, and the value of an RCT quickly diminishes. Goldacre acknowledges this, too. Ask ‘what should we do?’, and until you have a new idea or intervention to try against something else, an RCT is useless. Often challenges in practice and problems in research start from a more open question – the luxury of having things to compare comes later. And other research approaches, as well as pioneering practitioners, ideological commitments to social justice, technological advances, social changes etc. all have a role to play in getting us to the point of being able to try something new out and exploit the power of RCTs to give us causally robust comparisons of X and Y. We don’t always need an RCT to make important changes, either. We no longer allow corporal punishment in schools. It didn’t (nor should it) take an RCT to show that schools are better places without the cane.

RCTs are not perfect, nor are they the best method in all circumstances. I am in no dispute with Goldacre on this point.

There are lots of important questions where RCTs might never figure. RCTs cannot tell us what it is (morally) right to do, or what is just. Other approaches are often better placed to identify social inequalities that might remain hidden (for example, around gender or racial differences in educational outcomes, where results from national tests are very useful). RCTs are not necessarily blind to questions of social justice (depending on the outcome measures involved). I’m reinforcing the simple point here that there are lots of questions where RCTs are not the best approach.

Goldacre is not arguing that we should ignore these questions, but there is a risk in the way he presents his arguments, that questions of ‘what works’ are heralded as the most important. Paying less attention to other kinds of questions serves the argument for more and better RCTs, but could (unintentionally) lead to a narrowing of the conceived role of research in education.

RCTs as a valuable contributor to evidence that can contribute to evidence-informed practice

It might be possible to read Goldacre’s paper and to (wrongly in my view) equate the evidence used in evidence-based practice with outcomes from RCTs. Goldacre doesn’t say this, but he doesn’t talk in any detail about other kinds of evidence, or other sorts of relationship between evidence and practice. We might, hypothetically, start seriously looking at literacy teaching practices based on evidence that current curricular and testing regimes disadvantage certain students and reproduce social inequalities. If the only evidence that could influence practice was that coming from RCTs, we would have to ignore the other evidence that the status quo is grossly unfair.

Evidence is valuable to practitioners in helping point to ‘what works’ but is also valuable in other ways, and these alternatives are played down in Goldacre’s paper, in his construction of an argument that seeks to redress a perceived imbalance and neglect of RCTs. Such diverse evidence-practice connections apply in medicine, too – there are lots of things that doctors advise (eg suggesting you give up smoking) or do (eg heart transplants) based on other kinds of evidence, not just RCT outcomes.

So I agree with Goldacre – we do need more, better, and more joined-up RCTs, because they are so powerful and the best tools for comparing two or more approaches against each other. But to avoid possible over-extensions of this argument, it is important to be very clear about the important role of other kinds of evidence. It’s not that other research is useful for other things, and only RCTs can be used in evidence-based practice. Evidence-informed practice (a phrase I prefer because it points better to the requirement for professional judgement in interpreting evidence, which Goldacre mentions), can be enriched by all kinds of evidence.

What is educational research for?

I think Peter Mortimore (2000, p. 18) captured some of this in his writing on the role of educational research:

“Who else but independent researchers would risk making themselves unpopular by questioning the wisdom of hasty or incoherent policy? Who else could challenge inspection evidence and offer a reasoned argument as to how empirical flaws had led to erroneous conclusions? Who else would dare say ‘the King has no clothes’? Who else would work with teachers and others in the system in order to look below the surface:

  • to notice the unfairness suffered by those who are young for their school year yet for whom no adjustment is made to their assessment scores;
  • to count, and to identify variations in, the numbers of minority pupils excluded from school;
  • to point out that many of the supermarket shelf-fillers are our further education students trying to get by financially;
  • to investigate whether adult learners need the same or a different pedagogy from pupils;
  • to make fair comparisons of schools, as opposed to the travesty of league tables;
  • to tease out why poverty is associated with failure in a competitive system, in which only so many can succeed, rather than just being an excuse for low expectations or poor teaching;
  • to monitor trends and changes in educational aspirations, attitudes and attainments.”

On the relevance of the medical model

Goldacre argues that education has much to learn from medicine in terms of the way research is conducted, and in particular the way practitioners are involved in research and the way doctors are trained and expected to be research literate.

As an educational researcher who has in the past done considerable work based in schools, I can say there is no greater reward than thinking that what you have found out has made a difference to the lives of teachers and/or pupils. There is no greater insult, to me at least, than to find one’s research accrues value only through citations by other researchers and makes no connection to the ground. As Goldacre rightly suggests, this connection is not only a property of the quality of evidence. It is also a question of the (perceived) relevance of that evidence, and of the user/reader’s ability to make good sense of it. The relationship between evidence and practice isn’t as simple as finding out X is better than Y and then making sure all teachers read the relevant paper.

So I would be quick to welcome and celebrate a shift in school teaching cultures that brought teachers into a different relationship, both more routine and more critical (as Goldacre advocates), with research. I would add this should be with the full richness of the evidence base that educational research has to offer, not just outcomes from RCTs (and I don’t think I’m contradicting Goldacre here either).

Medicine and education may not be that different in some respects

Goldacre’s paper, and the responses and rejoinders online, do raise questions about how valid or relevant comparisons with medical research, forms of evidence, and practice are. Many (including Goldacre) have rightly shot down crude arguments that medicine is about ‘physical stuff like cells’ and education is about ‘people’ or that all diseases/patients are treated the same, that medicine is devoid of the kinds of social complexity that pervade education. One only has to look at the complex deliberations underpinning the development of NICE guidelines to understand that what happens in hospitals and doctors’ surgeries is not simply a question of knowing ‘what works’. Questions of what can be afforded, what is politically acceptable (remember headlines about postcode lotteries in breast cancer treatment?), what is practical – these all have a bearing too.

It’s easy to police boundaries and protect education from medical experts by screwing our eyes shut and shouting ‘but classrooms are different!’ as loud as we can. There are presumably some very important things to consider about the nature of medical and educational research and practices, and whether elements from one system might inform those of another (and shock, horror, this might even involve medicine learning from education!). I don’t think Goldacre offers an adequate account of these issues, but at least he acknowledges them. My critique is not of Goldacre’s oversight (there’s only so much one can say in a paper or a 20 minute talk), but of the risks that others simply elevate medicine as an ideal type and naively expect education (and other systems) to follow.

I am, however, curious to learn more about the sorts of trials Goldacre argues can be done cheaply, efficiently, and effectively in education. As he points out, medical trials have developed into complex designs involving treatments that are not just based on one pill or quantifiable medication regimen versus another – trials of psychotherapeutic interventions, for example. My understanding is that steps must be taken in RCTs in education to ensure compliance – that what is being done in classrooms is actually what the trial is supposed to be testing (as happens in some medical trials). This often requires teams of researchers to check and observe what is being done, which is very expensive, or places a burden of documentation on teachers. I’m not saying it’s not possible. I’m saying that I’d like to see Goldacre’s vision of cheap, robust RCTs (which involves all sorts of considerations about levels of randomisation and their relationship to levels of outcome measurement) explained in more detail.

Why medicine over other models?

What Goldacre doesn’t address, except perhaps indirectly in his introduction where he attributes the leaps forward made in medicine to RCTs, is why medicine should be held up as a model to replicate rather than other systems. Sure, medicine has come a long way in recent decades. So have other aspects of life, too. Why are educational practices and evidence best approached in a medicine-like fashion? Why not sociological? Anthropological? Why not arts-based? Why are medical models better than approaches that might acknowledge things that RCTs can’t, like morality, or justice? Answers like ‘because those aren’t objective’ or ‘can’t be tested fairly’ miss the point (it should be obvious why). I’d love to see Goldacre develop more sophisticated arguments as to why medicine should trump other conceptions of evidence and other notions of evidence-practice relationships. I am guessing there is more to this than the mere fact that Goldacre knows medicine best, but we would benefit from further explanation on such issues. I’m not defending educational research against outside influences. On the contrary, I strongly believe we are and will be better off for these. But we should understand this as a complex choice, with significant implications depending on which perspectives get left out of arguments.

On the defensive reaction by some qualitative educational researchers

Goldacre replies to some of the responses to the Guardian article as follows:

“It is very odd, I think we’ve seen some rather peculiar protectionism here from qualitative researchers working in education. I’ve not seen this attitude among the very good multidisciplinary teams working on mixed methods approaches to medical research, where quantitative and qualitative research is done harmoniously with mutual respect, in my experience at any rate. It may be a peculiarity of the qualitative research community in education, or it may be that we are seeing only bad apples in this thread. I don’t think they do their profession any favours”.

What might lie beneath ‘protectionism’? Why might qualitative educational researchers react differently from their medical colleagues in mixed-methods teams? Why would we expect them to react in the same way?

Notwithstanding the histories of marginalisation that many qualitative researchers would argue they have suffered at the hands of pseudo-scientific dominance in educational research, I think part of the explanation lies in some of the ways in which Goldacre’s language might be interpreted, and the genuine sense of threat that such interpretations could pose to some scholars’ values, ethical commitments, and livelihoods.

Why might qualitative educational researchers (of which I am one), react differently from medical researchers in mixed-methods teams? Maybe because many of us are not in mixed-methods teams (for better or worse), but instead collaborate in other ways, for example working with teachers and schools in solely qualitative paradigms. Arguments that the pendulum should swing back to re-emphasise RCTs can be interpreted as a move that will diminish the place of other approaches. This was not Goldacre’s intention, but this is what many perceive has happened in the US as a result of the way federal funding for educational research has been allocated. Protectionism seems quite understandable, as part of a professional ethos that preserves mutual respect and place for different kinds of research (an ethos that Goldacre himself subscribes to). What surprises me is that Goldacre was surprised by this reaction.

On the representation of qualitative research

Goldacre writes:

“‘Qualitative’ research – such as asking people open questions about their experiences – can help give a better understanding of how and why things worked, or failed, on the ground. This kind of research can also be useful for generating new questions about what works best, to be answered with trials. But qualitative research is very bad for finding out whether an intervention has worked… The trick is to ensure that the right method is used to answer the right questions.” (p. 13)

I agree wholly with the point that the right method should be used to answer the right questions. In my view Goldacre’s paper does not adequately capture the range or value of qualitative approaches, and risks them being positioned as subservient to trials. I do not follow qualitative researchers who campaign against RCTs. I think we should have more of them. But this should not be at the expense of other approaches, and certainly not based on accounts of qualitative research that convey a potentially misleading and diminished view of what the alternatives are and what they offer. Goldacre does not clarify the extent to which he thinks ‘what works’ questions should trump other questions. Protectionism may reflect concerns that others may take Goldacre’s arguments as a basis for a narrowing of the kind of question (and by implication the kind of research that is valued) in educational research.

Indeed, Goldacre makes the very good point, I think, that educational research (or at least that which focuses on teaching and learning in schools), could be enhanced by pursuing agendas and questions from the ground up – i.e. those identified as priorities by teachers. This would be very welcome, although I would always seek to preserve space for outsiders to pose questions, too, for they can often challenge assumptions and see possibilities that are difficult to imagine from the inside. But the bigger problem here is if RCTs become a blanket preferential mode of enquiry (which is not what Goldacre advocates, but is not implausible). Rather than opening up the possibility for teachers to lead the direction of research, this would close it down by limiting the kind of questions that teachers can ask to those of a ‘what works’ variety. There are myriad other important kinds of question that teachers want to ask, too.

The overlooked value of locally-based, locally-relevant research

There’s something curious about Goldacre’s critique of piecemeal individual projects that are oriented to figuring out what works locally, and his open admission that RCTs don’t often generalise: there is rarely going to be a ‘what works’ solution that applies to all schools, age groups, subjects etc. I agree that isolated pockets of poorly supported research that never leave the boundaries of a particular institution aren’t a great set-up. So yes, we need more joined-up infrastructure, for research and for disseminating and sharing evidence. But could not such local projects also be ideal ways to test out, empirically, and in an evidence-based way, how local conditions shape the meaning of RCT outcomes developed elsewhere? Might not some of these projects into which teachers pour their heart and soul, which Goldacre criticises for turning out to be too small, lacking robust design (p. 17), in fact be avenues for translating distant evidence into locally relevant forms?

A related point concerns the critical research literacies mentioned by Goldacre. These are of course important, and if teaching is to benefit from any kind of research evidence, there must be critical appraisals of that evidence. But that critique cannot be limited just to understanding RCTs and ‘what works?’ kinds of research. Such critical skills should also involve understanding different approaches and their value. Goldacre doesn’t close off on the kind of critical understanding he’s advocating for, but I think it’s important to be really clear that a narrow RCT-focused literacy will not suffice.

On the perils of misinterpretation

The critique just mentioned offers an example of how some of the language used might have (perhaps unintentionally) provoked the strong reaction from some qualitative researchers. There is potential for readers to infer from Goldacre’s wording an equation of ‘small’ with lacking robustness (and such readings are readily apparent in the comments on the Guardian webpage). If big sample = better research, then we should look away from RCTs and more towards statistical analyses of existing datasets, and the use of various regression models to figure out which schools are performing best. Sample size is a poor proxy for research quality. Goldacre knows this, but not all of his readers appear to notice this point.

There are other moments, too, for example when Goldacre mentions the risk of pilot studies ‘misleading’ on benefits and harms. I agree such risks are real and important. But it is not the pilot itself that poses the risk. It is the flawed interpretation or application of findings that poses the risks. Qualitative researchers might be forgiven for interpreting what was written as laying the problem at the door of qualitative research itself, rather than at the door of those who mis-use or abuse its outcomes. Yes, RCTs are the only true ‘fair test’, but this doesn’t make other approaches ‘unfair’ provided they’re not attempting that kind of test. Goldacre knows this. Many of his readers may miss this point.

Then there is the different language used in the 20 minute talk, which was less precise in its wording and thus more open to misinterpretation. Goldacre spoke of people being ‘horribly misled by weaker forms of evidence’. Any evidence, from an RCT or otherwise, has the potential to horribly mislead. Any evidence, from an RCT or otherwise, may be strong or weak depending on the question. The care Goldacre took in his written paper to manage these issues was less evident in the speech, in which listeners could easily be led into equating RCT with strong evidence, and other approaches as misleading and weak. This only applies to ‘what works’ questions. This is reinforced by phrasing that links good quality evidence with RCTs, again without explicitly placing a caveat of ‘only if we are asking is X better than Y’. And again in talk of swinging the balance towards more robust quantitative research. More robust than what? The potential for listeners to interpret this as a slight against qualitative research, or as a suggestion that qualitative evidence by definition lacks robustness, is clear.

As an aside, Goldacre also contrasts ‘nerdy academics’ with ‘teachers on the ground’ – setting up another potentially damaging binary. In particular this kind of talk fuels the Govian anti-academic rhetoric and misleads the public into outdated conceptions of ivory tower academics (see Pat Thomson’s blog on this). Many educational researchers are in schools week in, week out, working with new and experienced teachers. They are ‘on the ground’. They are also ‘on the ground’ because most academics are also teachers, themselves. This applies not only, but particularly, to educational researchers. If having a scholarly or theoretical interest in learning and pedagogy makes us nerds, then I’ll wear the nerd badge with pride. But I do take issue with characterisations that reinforce notions of aloof nerdiness against on the ground realism.

Another binary set up in Goldacre’s talk is between evidence-based practice on the one hand, and leaving everything to individual professional judgement. I’m convinced Goldacre has a more sophisticated view of practice than this – his writing about the need for critical appraisal of research suggests so – but again this kind of phrase can provoke defensive reactions, and risks being taken up in unhelpful ways if not set in a wider context.

On the risk of over-promising

Finally, there are some other very real risks that Goldacre himself acknowledges. He rightly says that evidence-based practice isn’t about telling teachers what to do. As if evidence (from RCTs or otherwise) could ever be so prescriptive. Goldacre imagines a greater role for RCTs and networked participation of teachers in research, supported by experts, and feeding into two-way information architectures that could set the profession free from governments. Forgive me if I don’t hold my breath. For the simple reason that even if we were to achieve everything Goldacre sets out, it would offer few guarantees for children’s outcomes or teachers’ professional independence. Goldacre does not imply otherwise, but does not engage adequately with other features of the political-practice landscape.

Many teachers and educational researchers share a view that the education system in the US, which Goldacre notes funds far more RCTs in educational research than the UK, is straining – with many school buildings in urgent need of renewal, and high-stakes testing policies exerting a significant influence on practice. Outcomes from the What Works Clearinghouse are undoubtedly valuable, but do not land in a tabula rasa. And of course, not all RCTs change practice.

Even with more, better RCTs, and research cultures and information architectures of the sort Goldacre imagines, without stability in other aspects of the education system (for example in curriculum content, examinations, accountability structures, inspection regimes), any knowledge of ‘what works’ seems likely to be reduced in value either through short lifespan (it worked in the old system, but not the one now), or by simply failing to register on the radar in a profession that is straining from incessant change. I agree in theory, what Goldacre proposes might play an important role in emancipating the profession from ‘the odd spectacle of governments telling teachers how to teach’ (p. 19). I wonder how likely the promise of this is to hold true.

Some of the protectionism that Goldacre is surprised to see, and so strongly puts down as reflecting poorly on our profession, may in fact be understood as people with passionate commitments to precisely the same aims and improvements that Goldacre wishes to see, differing in their confidence in the whole of his vision becoming a reality, and clear in their understanding that even if it were to be realised, it would not be enough to secure the kind of conditions they feel best serve children and teachers.

I do educational research (and I’ll confess, it is of a qualitative kind most of the time) because I think research-based evidence has a lot to offer teaching and learning. Like Goldacre I don’t think we have exclusive rights to this kind of influence, nor do I think there is no space for political ideology either. I’m all for more evidence, better evidence, greater research literacies, more joined up research, and weaker divides between academe and schools. But please let us treat visions such as that set out by Goldacre with the careful and critical reading to which we should subject research.


Mortimore, P. 2000, ‘Does educational research matter?’, British Educational Research Journal, vol. 26, no. 1, pp. 5-24.

Martyn Hammersley’s framework for critical reading of (ethnographic) research: why I like it

This is just a short blog post to accompany a linked podcast, video, and prezi that go into these issues and the framework in more depth.

I’m often involved in teaching students about critical appraisal of educational / social science research.  I’m not convinced by arguments that we should judge research only by the criteria that apply within a particular perspective or paradigm. Notwithstanding my prior post, based on Schatzki’s arguments, about why ontology is important and how it changes the game in terms of judging research, I do believe that there are some dimensions of research that can be subject to a broader-based critique.

This refers to a framework presented by Martyn Hammersley in chapter 2 of Reading ethnographic research: a critical guide, published by Longman (e.g. the 1998 second edition).

I think Hammersley’s framework (originally written with a focus on ethnographic research) provides a sound basis for precisely such an approach. The content does not overly prescribe what good research is, nor does it replace rules, conventions and quality criteria associated with particular perspectives or approaches.

But, as I say in the podcast, I’ve yet to come across a piece of social science research where asking probing questions about the focus, empirical context / case, methods, claims [and their links to the case] and conclusions [and their links to the focus] has not been useful as a means of assessing research quality.

Reader-listeners will detect my strong attachment to the idea of ‘evidence’ in educational / social science research. I doubt everyone shares this, and I’d be surprised if everyone agrees with the views expressed in the podcast.

Part of my motivation for the podcast was a reaction to constructions of Hammersley (and others like him) as rather old-fashioned empiricists. I hope the podcast shows how a concern for evidence, quality of evidence, and relationships between claims and evidence does not automatically position one as a naive realist who’s never heard of the crisis of representation etc.

I conclude the podcast by arguing that the aesthetic dimension of research (something I’ve blogged about elsewhere, too), is something that is not excluded from Hammersley’s framework, but isn’t given the emphasis that it might deserve. I suggest that incorporating aesthetics into assessments of research quality (inspired by Silvia Gherardi, Antonio Strati and others), follows through on the original spirit of Hammersley’s framework. Hammersley is very careful in setting up a position that rejects a doctrine of immaculate perception, and has an explicit role for modes of writing, relationships between researchers and participants, and varying degrees of insight, inference and so on. I simply suggest that highlighting these complements and enriches a focus on claims and evidence.

In summary: a lot can be achieved in terms of critical appraisal of educational or social science research by thinking about:

1. The Focus (wider topic), its articulation (scope, boundaries), importance, relevance

2. The Case(s) studied [note that not all research is a case study] – the spatially and temporally limited aspect of the wider focus that is the actual subject of empirical research

3. Methods – including processes through which data are generated [and not collected: see Pat Thomson’s blog for more on this], relationships between researchers and participants, analytic techniques etc.

4. Claims made about the case – different kinds of claim and the different kinds of evidence that would warrant them

5. Conclusions drawn – not letting go of evidence completely, but saying something about the wider focus, moving beyond the specific case (eg. via theoretical inference, empirical generalisation).

If we concern ourselves with these questions, and relationships between focus, case, methods, claims, and conclusions, while keeping a close eye on evidence (whatever that may look like), we can’t go far wrong. And if we are sensitive to aesthetic dimensions when we do this, too, so much the better!

How to keep up to date with research in your field (particularly in the social sciences)

I was asked recently by the lovely people at UTS Library (who happen to have an excellent blog), to speak to doctoral students and other early career researchers about what I do to keep up to date with research in my field. This provoked me into thinking…

What does it mean to be ‘up to date’?

Being a social scientist, my first instinct is to debunk the question, to challenge the assumptions underpinning it. Let’s begin with up-to-date-ness. I don’t regard knowledge in my field (education / social sciences) as being updated in the sense that what comes later replaces what came before. Old knowledge is rarely obsolete, and new does not necessarily mean better. Furthermore, there are fashions and trends which have interesting relationships with temporal trajectories of knowledge: education, for example, is sometimes seen as developing obsessions with thinkers and writers who have long since been left behind, or fallen from the limelight, in other social science fields.

For me, being up to date includes the conventional sense of having my finger on the pulse in terms of what some of the most recent outcomes of research are (note I don’t use the term ‘findings’). But this also involves maintaining a sense of the changes in the landscape in terms of groups of researchers and what kinds of work they are doing. Not just a retrospective ‘what has come out in journals’, but a contemporary ‘what are the key people in my field up to’. Being up to date also involves anticipating what is coming: there’s an ace team in [insert your university of preference here] that are the ones to watch; what’s going to happen now [superstar Prof or ECR] has moved to [wherever they have gone]. Up-to-date-ness involves projecting what will be in vogue and novel in the coming months and years. For many of us this anticipation requires looking outside the field (to changes in policy, practice, social contexts), and within it (who’s emerging, who’s taken over influential editorships etc).

And what is ‘my field’?

Not as easy a question to answer as it might seem. I do work based on pedagogy and practice in child and family health settings. Is this my ‘field’? I publish in journals relating to continuing education and workplace learning: maybe this is my ‘field’? I draw on sociomaterial and practice theories. Do writings by researchers using these (in philosophy, organization studies, education etc) constitute my ‘field’? As an educational researcher, don’t I also have a professional responsibility to know what’s going on in other areas: school education, higher education? Isn’t this my ‘field’?

Well, I think the answer to all of those questions is ‘yes’. One’s location or position in intellectual communities is not singular. Just as those communities have a texture created through questions of conceptual scale, disciplinary boundaries, and historical changes, so our position in those communities becomes a textured one: positions [plural], maybe.

And this means we must have a textured approach to keeping up to date. Not all dimensions of our field are equally important to us. For me, I use Table of Contents (ToC) alerts to keep tabs on who is publishing on what topics in the general field of education, reading the odd abstract I find interesting. For specific areas where I’m publishing and contributing to advances in knowledge, the approach is much more in-depth. But even then it’s not that simple. What if I want to publish in a very general journal like the British Educational Research Journal? It’s no good just having a cursory sense of what’s going on in my field and in the broader conversations that ‘big’ journals like BERJ support and publish. If I’m going to take that conversation forward, or in new directions, I’ve got to be more than a (legitimate) peripheral participant in it [those of you who’ve come across the Communities of Practice literature will get the poke here].

What is a field made of? Is it findings? I’ve already questioned that notion. Is it ideas? Concepts? Studies? People? Research centres? Theoretical ‘turns’? My answer [no surprises for guessing]: all of the above.

And now, some strategies I use for keeping up to date with my field (whatever that means)

Live on Ramsay Street

Forgive the reference to TV soap opera Neighbours but there’s an important point here, relating to the complexity and texture I discussed above. We have to know who our research ‘neighbours’ are: who is next door, doing the work that relates most closely to mine? Who is down the street, doing similar stuff, maybe in a slightly different way or with a different angle? Who comprises my suburb or neighbourhood – people with whom I share a broader affiliation, but who as a collective still mark ourselves as distinct from the field at large? And how much does my city sprawl – who are the people to watch in the broad discipline or field (for me, education)? And of course, we might be flying over to other cities (fields) from time to time, too: who are our best friends there?

Accrue air miles

Air miles rock (though I have to admit the environmental consequences of rewarding pollution with yet more pollution seem troubling). Not only because you get access to business lounges, free upgrades, and a sense of superiority when you tread the red carpet at check-in, beat the queues through security and immigration, and board before everybody else.

Air miles rock as an outcome of important ‘keeping up to date’ activity. Like it or not, intellectual work doesn’t happen in a single place (note how I’m deliberately upsetting the Ramsay Street metaphor I used above, bringing out the need to jump on a plane every now and then). This was true when I worked in the UK, where there were dozens of universities I could visit easily by car or train within a day, and is even more true now I’m in Australia, where the density of higher education institutions is much lower.

But how many universities there are within 200km isn’t the issue. Chances are no matter where you are, some of the best people in your field are 1000s of km away. Being friends with them, knowing what they’re doing, what they’re about to do, and what they think is coming up next is crucial.

I admit I’ve been very fortunate in receiving generous support for international travel through the positions I’ve held, and I recognise not everyone will be flying often enough to get to gold status. The air miles thing is me being flippant. What’s important is not being parochial in the contacts we make (and twitter, skype, email etc are all useful). And being strategic in how we plan and make use of international travel. Tempting as it may be to find conferences in the more glamorous locations and to travel widely, I’m increasingly of the view that going back to the same place(s) again and again is of more value. This means choosing a couple of conferences that you’re going to make an ongoing commitment to. You want to be walking into the room and recognise, and be recognised by, a good proportion of the people there. It also means doing things like doubling up conferences with institutional visits. The Researching Work and Learning conference creates a temporary Ramsay Street for me, when most of my buddies from that part of my field actually do come together for a few days and inhabit the same (geographical as well as intellectual) space. It’s happening in Stirling in 2013, and I’ve arranged to spend a month there in the run-up to the conference. One air-ticket, but a whole new level of richness in terms of my engagement with overseas colleagues. I’m flying less and making each truckload of carbon dumped in the atmosphere worth more. Visiting institutions is becoming increasingly important to me, and sometimes even replacing conference attendance.

The Oedipus technique

Obviously you all think your own work is brilliant, amazeballs, the best. Other researchers who think the same might be worth tracking down: they clearly are the wise ones who know what good research looks like and what the important issues are. Maybe your next door neighbour is a bit of a quiet, introverted type, and without you knowing they’ve been devouring your papers and citing you left, right and centre. Google Scholar is a great way of finding out who is reading your stuff – just click on the number of citations and you get a list of where you’ve been cited. It’s also good to see which of your publications is the intellectual equivalent (in terms of popularity, not quality, of course) of Harry Potter, or Fifty Shades of Grey, and which is less widely read (you may wish to regard these as ‘niche’ or ‘challenging’).

In further egotistical adoration of seeing my reflection ripple across the pond that is my field (another metaphor? seriously?!), I also keep tabs on who has been contacting me and asking for papers etc. Not only is this useful when you have to demonstrate ‘impact’, but it’s another way of figuring out who to be friends with, and a way of instigating contact. You can encourage this by not putting papers online, but instead posting the details and asking people to contact you for a copy. It’s particularly useful with papers published in journals with copyright restrictions, which you can’t post freely on your own web page.

Old doesn’t (always) mean gold

No offence here to my more established readers, but you’re not going to be here forever. There is constant talk of demographic crises in social sciences: many fields are quite top heavy – lots of academics with decades of experience, not so many in the younger / earlier career ranks. At some point, our current profs will no longer be occupying those prime real-estate offices and editing the big journals. Someone else is going to have to take over. Anticipating who that’s going to be is crucial.

At conferences (particularly the big education conferences in the USA), I have seen well-published professors followed round by a flotilla of admiring doctoral students and early career researchers. Celebrity or guru academics pack out rooms. Great. Many of them probably deserve it and are doing brilliant stuff. But two caveats: sometimes the most established people in the field can also be the most conservative and resistant to change, policing values of the old school. These may be values worth policing (I have my own gurus who do such policing, and I thoroughly intend to continue it myself in some areas). But what happens when they’ve moved on?

Building relationships with other doctoral students, early career researchers, new lecturers etc not only brings different kinds of friendship and joy to research. It is also an investment in your future and the future of your field. Attempts to keep up to date might be well served by following the top professors; they might not. I’ve yet to meet a doctoral student who wasn’t shockingly up to date with what is going on.

Up to date 2.0

Nothing revolutionary here (nor surprising to you since you’re reading this on a blog): social media are great. Twitter, blogs, podcasts, academia.edu – all great. My advice: don’t lurk, be active. But don’t kid yourself into thinking you can tweet or blog yourself into thesis completion or that next journal paper. These are means to something else. But a valuable one.

Sit back, relax, and let others do the work for you

I have self-confessed to adopting a laziness-based approach to keeping up to date with my field. I’m exhausted after having written the above, let alone actually done it all (keeping logs, jet lag, making friends, predicting the future etc). Luckily there are lots of other people who (more or less intentionally) are willing to do some of this work for you. Read book reviews. Use automatic email lists for selected journals and authors. (BUT! And I learned this from a panel member at UTS last month: do this selectively. If you find yourself automatically deleting or ignoring the automatic emails, you’re giving yourself information overload and need to cut it down). Join Special Interest Group (SIG) lists. Follow interesting and relevant people on twitter. Read blogs.

And double-up on your own work. Reviewing articles for journals is great. Not only do you develop your own writing skills, and make a contribution to your discipline (an ethical obligation in my view), but you also get an early scoop on what is coming out. I often ask to see the other reviews of papers I referee, so I can see if other reviewers identify literature or ideas that I’ve missed. See, other people are doing the work for me again! And like I said, I reckon doctoral students are some of the most up to date researchers around. Reading their lit reviews is great, and a bonus of being a supervisor. Laziness is not total work avoidance, but recognising the multiple benefits of work you are already doing. And avoiding work where it is distracting, irrelevant, or where other people are already doing it for you.

From Land of Hope & Glory to Lasagne: ontology, epistemology and social research!

A student (Samantha Thomas) posted this response to my podcast about music, ontology and epistemology.

I think the way she takes the idea of the metaphor and applies a new one is great. Thanks Samantha! I have also put a reply from another student underneath!

This podcast was great and really got me thinking about the different ways that we can unpack an idea.  I’m not sure how helpful this will be as a metaphor, but given I listened to this podcast over dinner I thought I would try and relate it to my meal – beef lasagne.

So, if I were to think about beef lasagne in terms of ontology (a very strange thing to do) and ask ‘what is it?’ then I could describe it in a very scientific way in terms of the exact ingredients, measurements of those ingredients, cooking time, cooking temperature, method of cooking etc.  Basically talk about it in terms that a recipe would, 500g of mince, 1 onion, 2 cloves of garlic etc, baked in an oven at 180 degrees for 45 minutes.  This answer assumes that there is a single reality about what I ate and is therefore a positivist perspective.  And if the ontology of positivism says that there is a single reality that is undeniable, then it follows that the epistemology is about uncovering the truth/answer that already exists (finding the recipe in this case).
However, I could look at my lasagne as a collection of ingredients that were put together by a chef (or a very bad cook), interpreting and following a recipe and using the equipment and utensils at hand.  The meal is a result of not only the ingredients, but the way they were assembled, the quantities used, the skill of the chef in following the recipe (what is a dash anyway?) and the type of equipment used (electric vs gas oven etc).  Not only that, but each person’s taste buds are different, and so the answer to the question ‘what is it?’ is also influenced by the individual who ate it as well as the individual who made it.  And so, with all of these variables, the ontological question ‘what is lasagne?’ could have a number of different explanations and therefore realities.  This is an interpretive perspective which accepts that there are multiple possible realities at any one time as the reality has essentially been constructed.  So, if the ontology allows for multiple answers to the question ‘what is it?’, then the epistemology has to be concerned with considering all of these factors, weighing up their importance, and providing an answer to the question based on this interpretation.
If you add in further human meaning to this somewhat ridiculous question ‘what is lasagne?’ and look at it from a vegetarian’s perspective, then their answer to that question might be that it is an inhumane meal that should not be eaten.  And if I were Italian, my understanding of ‘what is lasagne?’ might be very different from my own Australian perspective.
Anyway, I’ve got way too carried away thinking about food, and I’m not sure if it is at all relevant to be discussing lasagne in terms of ontology and epistemology anyway…
And then we got this reply, which I also like:

I really like the metaphor, however, your recipe for lasagne needs some ‘chilli’. Where are the radicals in your approach? There needs to be someone who disputes the name and origins of the meal. I am sure the Greeks were making food like this before the Italians. Actually, there is possibly some ‘undiscovered’ tribal group in Central America cooking up a meal just like this, and they would cook it on an open fire – none of this new age electricity!!!!

Great analogy in your comments

Theodore Schatzki on why ontology matters in educational & social research

I was inspired to write this, and base much of the content on:

Schatzki T R (2003) A new societist social ontology. Philosophy of the Social Sciences 33(2), 174-202.

Schatzki considers whether social [and educational] researchers should simply implement methodological strategies for investigating social affairs and avoid ontology altogether—for ontologies are nothing but unnecessary and empirically unconfirmable presumptions.

In my words, what he is saying here is: it could be argued that ontology is an abstract philosophical concept best left to philosophers, while social scientists get on with rigorous empirical enquiry.

When Schatzki talks about the ‘social’ he does not mean social in terms of the opposite of anti-social (ie sociable). The social refers to things pertaining to human existence. All educational research is thus social research on these terms.

Schatzki: Types of ontology, however, have implications for method.

Schatzki tells us that ontology affects:

  1. The choice and use of particular methods
  2. The inferences that are made from observations and measurements to statements about social matters [ie. how we interpret evidence]
  3. The formulation of these statements [ie. The kinds of knowledge claims we make, and the degree of certainty and universality that apply to them]
  4. What of the social is thought to be directly experiencable, observable, and measurable [ie. Which phenomena we assume we can directly see or measure. This gets messy quite quickly when, as in educational research, we are making claims about social phenomena such as learning, educational achievement, boredom, interest, motivation, emotions etc. Can we see these things? What does evidence of these look like?]

Schatzki says: these matters will vary depending on whether a researcher is an individualist [someone who believes individual free will or agency control the world] or believes in social structures [things like class and race as primary influences on human life]… or social facts distinct from facts about individuals (and maybe their relations).

He then considers: An investigator might proceed oblivious to this dependency and simply carry on research. How he proceeds, however, will implicate stands on these issues that collectively affirm at least some general type of ontology (e.g., individualism).

What Schatzki is saying here is that like it or not, and whether we are aware of it or not, as soon as we embark on research, we are making ontological assumptions. They are not optional extras. They are inevitably, always already part of the process. You don’t choose to have an ontology or not. (You can choose which kind of ontology you work with).

Schatzki, as a philosopher, then tells us of the benefits to social research of being explicit about our ontologies, and exploring differences between them. He calls this ‘the advantage of ontological self-consciousness and choice when studying the social world’.

When I was a student I was often sceptical of the value of such self-consciousness, which seemed at times like navel-gazing: abstract introspection laden with technical terms designed to confuse and frustrate (see another of my posts for more on my love/hate relationship with research perspectives as a set of concepts).

Schatzki then considers how a ‘methodologist objector’ (someone who is less convinced of the advantage Schatzki speaks of) might reply:

OK, a variety of ontologies might inform social research. There is still no reason to argue for and against particular ontologies. Justification in social science is empirical validity, and ontologies cannot be empirically tested.

What is being argued here is that the advantage of one ontology over the other is not something that can be put to empirical test. (NB. This is not the same as the advantage of ontological self-consciousness, which is being aware and explicit about ontology).

As social scientists we care about empirical evidence in leading us to make some claims about the world and to reject or question others. If we can’t test our ontological assumptions this way, then they are just arbitrary choices. Not very ‘scientific’ at all.

Schatzki considers one way of testing ontologies: to see which is the most successful research, and then go with whatever ontology that research is based on.

Great. Easy.

Too easy (as they say in Australia 😉 )

But. Big BUT…

Schatzki says: There are at least two problems with this response. First, unanimity does not exist in social science about what counts as a ‘successful’ research program.

In other words, not all social research projects have the same success criteria. They have many different aims. So establishing which is the most successful research isn’t going to be easy. Or fair.

Schatzki again: Second, and more specifically, what counts as success often reflects ontology.

What good research looks like (and I’ve written about this before) depends on the ontological assumptions upon which it is based, not just our aims (though these are linked).

The person who says ‘identify the best research and follow whatever ontology it uses’ is asking us to identify what is ‘best’, but ‘best’ cannot be judged independently of ontology. It’s a circular argument that gets us nowhere.

Schatzki then offers us a genius phrase: So empirical validity is not ontologically innocent.

He adds: And because ontology is tied up with social research, there is room and need in the overall enterprise of social research for ontological disputation.

For me at least, he has won the argument. Even as a researcher who really cares and worries about data and evidence (in a way that some would regard as old fashioned), I simply can’t escape the need to be thoughtful and clear about questions of ontology.

Don’t like philosophy? Don’t like long words that end in –ology or –ism? Want to just get on and do some proper social research? Tough sh!t – ontology is coming your way whether you like it or not. In fact, it’s already there.

Why the idea of research perspectives is brilliant and annoying at the same time

What are research perspectives?

This is a term that is often used to describe different approaches to research. These differences are generally understood as being rooted not simply in the focus or methodology, but in deeper ontological and epistemological foundations of research. Ontology concerns assumptions we make about reality. Epistemology is our theory of knowledge, and how what we come to know relates to that reality (or those realities). Methodological approaches and study foci can in some ways be seen to flow from these deeper (philosophical) points of view.


Brilliance lies in the fact that, in acknowledging different research perspectives (others may use the term paradigm, though it’s not quite the same thing), we are forced into a number of important realisations:

  1. Not all researchers understand this thing called ‘knowledge’ in the same way
  2. So… the enterprise of doing research in order to advance knowledge is understood very differently. Some are looking to discover knowledge that gets close to a single truth. Others are looking to create knowledge that provides different possible answers to the same questions.
  3. By implication, what it means to do research well changes according to the kind of research we are doing. I like metaphors, so let’s think of this in terms of Olympic runners. We can compare the running style of a sprinter and that of a long distance runner. Who is the better runner? The sprinter moves quicker, and develops and uses her body to effectively cover short distances. The marathon runner develops and uses her body differently. It’s not fair to say one is better than the other because they are trying to do different things. What ‘good’ running is depends on which race you’re running in. What ‘good’ research looks like depends on the perspective taken. Bing!
  4. Finally brilliance in the idea of perspectives lies in the fact it forces us not to take knowledge, evidence, methods, data, and truth for granted. If we are in the business of producing new knowledge we need to take these things seriously, not brush them under the carpet.


But we have to be clever in the way we work with these concepts. Why?

  1. These are conceptual categories that have a mixed, sometimes quite problematic relationship with actual research practices. Many studies don’t fit neatly into one or other category.
  2. Categories tend to turn messy, blurred boundaries into neat, separate entities. Most people who write about research perspectives acknowledge this – we need the concepts as sign posts and to give us some clarity of thought; but at the same time we need to be flexible and hold them loosely. Aargh!
  3. People can develop a security in applying long words that end in …ism as a kind of badge or label that fits their research, or even themselves as a researcher. But like many things in the social world, research isn’t a stable activity, and projects may evolve, researchers may change their views, or hold contradictory views at the same time. Badges have their limitations.
  4. Badges or terms like ‘interpretivism’ also turn into chunks what might be better conceived as a continuum, or even a big set of splodges and squiggles (like a modern art painting maybe). Many books write of positivism, post-positivism, interpretivism, feminism, critical approaches, poststructuralism etc (and the terms used to describe the same, or near-same things vary; that’s another issue!). But however long the list, there’s always more. Practice theory has a ‘site’ ontology (see the annotated bibliography of Schatzki). Actor-Network Theory makes other ontological assumptions again. But they have some points of overlap.
  5. I’ve often been asked, what about a big quantitative study that studies gender inequality in schools? The quantitative stuff perhaps signals a post-positivist perspective. But the gender inequality might be redolent of critical or feminist approaches. Which is it? To return to our metaphor: it might not be clear which race is being run: Is there a sprinter warming up on the starting line for the marathon? Or is there some hybrid or complex combination going on? (Here’s where the metaphor runs dry, [excuse the pun]). That’s the difficulty when we set up categories like all the …isms. But at least the categories have been useful in getting us to think about the assumptions made by the researchers and their purposes.

So caveat emptor – buyer beware: use these concepts cleverly, and with caution.

PS. There is nothing new here. I’m by no means the first person to write about these issues.

Ethnographic fieldwork: transparency, uncertainty, and what is going on here?

On Tuesday 19th February, the Ethnographic UTS group met once again, this time for a themed event focused on fieldwork and data. We had a lively discussion and exchange of ideas among research students and staff from several Faculties. As is typical of our meetings, we found that members have very different perspectives, theoretically and methodologically. Marie Manidis, Deborah Nixon, Paul Thambar and Sarah Stewart all provided us with engaging entry points into their research worlds. Here are some reflections on the issues that came up during the afternoon.



“Transparency is about what is not there”. Marie introduced us to this quote from Silvia Gherardi, prompting us to think about just what you can see or observe as ethnographers. This thread was woven into our subsequent discussions, and links were made by some participants to related qualitative traditions of oral history interview methods.


What can we see as ethnographers? What do we interpret or infer on the basis of this? How can you observe a practice? How can you observe learning? What did Silvia Gherardi mean when she referred to transparency?


In a way I think Silvia’s statement is quite useful. I believe (others may disagree) that ethnographic fieldnotes should be dull. They document material artefacts, social doings and sayings, spatial arrangements etc. The meaning and significance of these comes later; the act of observing is in many ways a mundane (and of course selective) documentation of quite boring things. This really struck me in my own fieldwork in a health setting when a nurse read my notes and remarked how boring they were.


So what are we seeing here? Not the higher level concepts that we are interested in, such as practices, learning, business strategy, collective memory. Perhaps our mundane seeing (through observation) and listening (through interviews) can become transparent, rather like a pane of glass can provide us with a view through to a world beyond. We do not see that world directly; we have to look through the glass, and it filters what we see.


I like the metaphor of the window also because it captures something about what is strange in ethnographic fieldwork. In everyday life, we look through windows as if they weren’t there. The ethnographer makes the familiar strange, notices things that are normally taken for granted. It’s as if we turn our gaze from what lies beyond to the glass itself. We notice its thickness, features, cracks, specks of dirt. That’s what is often hard in observation and interviewing – getting and maintaining that focus on what is normally so obvious it becomes invisible; or transparent.


Trust in uncertainty

All four presenters described uncertainty in terms of what they were looking for in their fieldwork, particularly in early stages of the research. How do we live with ignorance of what we are looking for? Ethnography is often touted as valuable because it offers a degree of holism that goes beyond what other approaches can achieve. But we know that our observations and other methods are always selective. How, then, to be selective in a state of ignorance or uncertainty?


One answer is perhaps simply to trust: to trust in oneself that the data you are generating are highly likely, on the whole, to be useful in one way or another; to trust in the world that it is fascinating enough to let you follow where it takes you, confident of arriving somewhere interesting. But blind trust is unwise, so what are the checks that balance our faith? It’s unlikely that after a few field visits or interviews, an answer to your research questions will emerge. In my experience this kind of creeps up on you, semi-consciously, as you become embedded in the field, immersed in your data, and develop a sense, often intuitively, of what is going on here. Yes, we then subject this sense to rigorous analysis and theoretical interpretation, but I think it often has soft and hard-to-pinpoint origins in our extended time in the field.


Fine – but looking for answers to our research questions early on isn't going to work as a counter to misplaced trust, is it? Sharing fieldnotes or transcripts with peers and supervisors can help us see where the gaps are, identify what other people might have been looking at or for, and what might be needed to create a fuller vicarious window (here comes the metaphor again!) into what was happening. But I come back again to trust: diligent researchers, with well thought-through questions, elegant designs and theoretical fluency, seem pretty likely to be on the right track, or not far off. Of course what the right track is may change, and those qualities can all help us achieve the flexibility and responsiveness that is a hallmark of good ethnography.


So what is going on here?

I return now almost to where the first theme left off – how do we move from our boring fieldnotes to beautiful and fascinating insights about the world? As we look at the everyday in its magnificently dull detail, how do we see the bigger picture? This is where our ethnographic sensibility of noticing changes register – from the noticing that underpins our fieldwork, to a noticing that provides foundations for analysis and interpretation. Just as concepts and theories may help us focus or filter our fieldwork, so they provide crutches in our analysis. In my own work, I rely heavily on conceptual understandings of what learning is and how it is brought about in order to make claims that quite mundane actions are in fact instances of learning. I don't claim to see learning, but rather to see certain conditions that tick the boxes my theoretical approach tells me are required for learning to take place.


But we also have creative insight, a-ha moments, and intuition at our disposal. Just because they may not fit elegantly into pseudo-scientific accounts of rigorous, systematic analytic techniques doesn't mean they don't happen or aren't important. An analytic idea or interpretation may result from highly accomplished technical procedures but ultimately be weak, dull, and far from offering any new or meaningful insight. One born of intuition, or something fuzzier, could be brilliant, radically changing how we understand our data, and thus offering something new and interesting to say about the world. Of course we want to subject that brilliant idea to rigorous testing, throwing various things at it (doubt, the data, our peers, supervisors, reviewers of journal manuscripts) to see if it holds.


Two things leapt out at me from the presentations that touch upon the issue of ‘what is going on here’ in fascinating ways. Deborah’s description of photo elicitation techniques with elderly people who had experienced partition in India was fascinating. Rather than taking a photo as a snapshot record of an instant and asking ‘what was going on there and then?’, she used photographs to open up a much more temporally and spatially fluid, and affectively rich, set of responses. The stimulus of the photo was taken up by her participants such that the question ‘what is going on here?’ took us through memories and sentiments, to times and places far from those depicted in the image.


As I sat listening to Deborah and looking at her photographs, I realised that images created as part of my own data (line drawings based on photographs) were doing similar work. Rather than just being visual representations of moments, they were helping me ask 'what is going on here' in a different way, inviting me to play with temporality and spatiality, and to make connections between bodies, things and practices that I hadn't made before. The drawings are (I admit) quite uninteresting: their real value lies in their function as a window (here we go again!) into something else.


Finally, Sarah’s description of her fieldwork approach introduced a lovely idea about complementing different techniques. She is using observation and interviews in a fluid and responsive relationship to each other, in order to maximise the light that can be shed upon a particular event or situation. This does not mean forgetting that these are different windows (!) onto the world reflecting different processes of selection and production. But it does keep a nice focus on our purpose: drawing on all the resources and sources at our disposal to arrive at the best sense of what is going on here. What ‘the best’ means… well that would be a whole new blog post.


Postscript – some amuse-bouches for posts to come…

  1. How do you observe a meeting?
  2. What can we do when participants are (overly) generous in their treatment of us? Don't we sometimes get too worried about being a burden, about compensation or reward, when really the best we can do, and our primary obligation, is to go ahead and do our research to the best of our ability?





Quality, parsimony and beauty in educational research

So… my first post aimed at supporting students studying research perspectives (UTS 013952), a subject which covers issues about quality in educational research, philosophy, what it means to produce new knowledge, and so on. At the heart of this is learning to be critical – not taking what you read for granted or at face value.

One thing I have often noticed is that people leap quickly onto the critical part of critique (i.e. picking holes, identifying limitations or shortcomings), and forget the equally important part: giving credit where it is due and identifying strengths. Think about it in terms of food shopping – we could go round the supermarket giving reasons why all the products are rubbish, but without a sense of what good food is, and an ability to know it when we see it, our trolley would always be empty.

One really useful text for getting started on these issues is (chapter 1 in particular):

Yates, L. (2004). What does good education research look like? Situating a field and its practices. Maidenhead: Open University Press.

In the opening chapter, Yates notes that many people give one or more of three responses when asked what good research is:

1. It is technically good – systematic, tight, well designed etc.

2. It makes a contribution to knowledge – shows something we didn't know before.

3. It achieves something that matters – which in education people often take to mean making a difference to teaching and learning in classrooms.

Yates (quite rightly in my view) debunks each of these. When I’ve asked students in the past about good research, many have used words like ‘objective’, or ‘unbiased’. These point to the technical theme (1) from Yates. Research should be done well. We can’t cut corners, be sloppy with our concepts or methods. We have to think carefully about things like samples, the tools we use to generate data, and processes for analysis. But I would like to complicate ideas about what technical quality might look like. Can something be subjective and still technically good? In some circumstances, yes! It depends on your perspective – what ontology and epistemology you are working with.

Couldn’t a piece of research be technically good but still rubbish? Let’s think about designing something, maybe a car. It could have perfect components, a finely tuned engine, but be shockingly ugly, too wide for roads, too long for car parking spaces, too high to pass under bridges. No beauty. No utility.

Beauty? Utility? In research? Well the utility part links to the 3rd response above – making a difference to something that matters. I agree – there is an infinite number of questions we could ask about education, and I don’t think all of them are equally worthy of our attention as researchers (and the money of the people who fund research, who are often taxpayers!). But research can be useful in many ways, not just identifying ‘what works’ (here I am poking at a major preference in the USA for a particular kind of research that promises this kind of outcome; see here for more info). As we continue in class and in this blog, we’ll think more richly about what utility might mean.

What of beauty? I think good research does have an aesthetic quality that is often overlooked. A kind of elegance that comes from a great question that cuts through to the nub of an issue; snappy, tight concepts that give us something to work with without over-complicating (I’ve read hundreds of studies that tell me X or Y issue is more complex than we thought. Yawn); a neat design (size and scale aren’t everything); and a focused, insightful analysis. Statisticians don’t just apply rigid mathematical formulae; they make judgements when they build models of the world, and one of them can be framed in terms of parsimony – striking a balance between explanatory power and complexity. I think parsimony is a quality of all good research. But it’s not a 1 or a 0 kind of concept. More a question of grey areas than black and white: judgement; aesthetics; beauty.
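The statisticians' trade-off mentioned above can be made concrete with a toy sketch (my illustration, not from the post): fitting polynomials of increasing degree to noisy linear data and scoring each fit with the Akaike Information Criterion (AIC), which rewards explanatory power (a smaller residual sum of squares) but charges a penalty for every extra parameter.

```python
# Toy illustration of parsimony in model selection: polynomials of
# increasing degree fit noisy linear data ever more closely, but AIC
# penalises the extra coefficients that buy that improvement.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)  # truly linear data

def aic(degree):
    # Fit a polynomial of the given degree, then return its AIC:
    # n * log(RSS / n) + 2k, where k is the number of fitted coefficients.
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    rss = float(np.sum(residuals ** 2))
    n, k = x.size, degree + 1
    return n * np.log(rss / n) + 2 * k

scores = {d: aic(d) for d in range(1, 6)}
best = min(scores, key=scores.get)  # lower AIC is better
print(scores)
print(best)
```

Higher-degree models always reduce the residuals a little, so "fit" alone would never stop us adding terms; the penalty term is what expresses the judgement that simpler is better, other things being roughly equal.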

The quick-witted amongst you will have noticed I have ignored the 2nd response, that of making a contribution to knowledge. But have I? When I said not all of the infinite research questions are equally worthy of our attention, didn't I imply that not all new knowledge is equally valuable? Research that doesn't lead to new knowledge isn't (by my definition) research, let alone good research. But that doesn't mean all contributions to new knowledge are good research. What if I 'found out' that students do best if 100% of their classes are 1:1 with teachers who have PhDs in their subject area? It might be 'true'; based on a technically competent (even parsimonious) study; and, as far as I know, no-one has shown this to be the case before. There's my novelty. But so what? What's the point of arriving at a conclusion so disengaged from the realities of politics, budgets etc.?

Now I'm going to contradict myself, and leave you with a question: maybe good research questions and challenges the status quo, including dominant political ideas, assumptions about money and funding etc. Maybe something of the beauty in educational research is precisely the ability to take evidence and use it to imagine new possibilities, new ways of facilitating learning, to provoke new dreams of justice and equality? How else will we break persistent cycles of inequity if we don't use research to do this? Adding a brick to the brick wall of existing ways of thinking is fair enough. But maybe good (beautiful) research lobs bricks through it, knocking a hole in the opaque edifice and giving us a glimpse of what might lie beyond?