the appraisal and rejection of conference abstracts

I had an email recently from an early career researcher who’d just had an abstract for a conference knocked back. When they asked for feedback, they were shocked by what they read. Presumably assuming that the writer would never see what was said, the reviewers had made some very rude remarks, including a statement that the researcher should turn their hand to writing “unsubstantiated pamphlets”.

The early career researcher was very taken aback. As they began to think about the implications of this feedback, they wondered whether the discussion of the marketisation of higher education was actually the problem. Was it just politically too difficult for the conference reviewers? And if so, was it worthwhile trying to raise such questions at all in this particular scholarly community?

What did I think?

Well, of course, neither the ECR nor I will ever actually know why the abstract was knocked back. However, the line about unsubstantiated work does suggest one possibility, and this was what I raised in my reply.

This incident brought to mind another occasion where a very senior scholar in my field was rejected for the key annual national conference on the basis of his abstract. He was told to find a method. Now, his area of expertise – and of his multiple, widely used and referenced publications – was research methods. Not surprisingly, he wrote a very cutting response about the grounds on which he was rejected. He suggested that, in an effort to create some criteria for the selection of abstracts, the conference had privileged a particular kind of research – that which most closely conformed to a science-like empirical study. In asking for an abstract which discussed aims, methods, theoretical framework and results, the conference had ruled out anything which did other kinds of work – philosophy, history, textual analysis, think pieces. His work was a think piece which didn’t conform to this kind of format.

I’d had a somewhat similar experience, but with a different conference. The same set of abstract rules applied – address the aims, methods, theoretical framework and results. I did do this, but the problem was that the method I was using was action research, and one of the reviewers clearly hadn’t heard of it and didn’t recognise it as a methodology. So the comments I got back from that reviewer were that this wasn’t research and I needed to find a method.

Well, as the Editor of an action research journal, you can imagine my response. Who is this idiot? Who is this person who doesn’t know that this actually is a methodology, one written about in most basic research methods texts in our field? What are they doing reviewing at all? Who gets to be a reviewer for this conference, and on what basis? Fortunately the other reviewer had heard of AR, and the two conflicting reviews meant that someone had to mediate. The result was that I did present at the conference, but the experience led me to a very particular decision.

I realised that I can’t always trust conference reviewers. This doesn’t apply to abstracts submitted to a specific content-focused special interest group (a SIG) of which I’m a member, where I know the kinds of people who will be reviewing and the way that they will make judgements. It also doesn’t apply to small discrete conferences where the emphasis is on building a community based on interest in the topic and stimulating debate. But it does apply to the big generic conference with a random set of reviewers.

But while it’s possible to make particular decisions about particular conferences in this way, there do seem to be two more general issues. The first is a very narrow view of what counts as research and scholarly activity. At a time when interdisciplinary research is being actively promoted, conferences really ought to be thinking about how to encourage, not eliminate, people who bring different disciplinary traditions and different intentions to their submissions. In the case of the ECR who emailed me, they came from an arts and humanities background, and it seems pretty likely to me that they were done in and done over by someone with a doctrinaire application of an extremely narrow and empiricist view of social science.

The second issue concerns thoughtless reviewers. Is it possible that conference reviewing is even more arbitrary and less considered than that offered for papers? My experience is that conference abstract comments are generally shorter than those for papers. Indeed, sometimes just a simple score is given, with little justification for the allocation. This is not adequate for those who are rejected and who really do want to know why. And when feedback comments are simply caustic, as with those received by the early career researcher who emailed me, it can be pretty discouraging. At the very least, the ECR probably won’t try that conference again. And is this really what the conference organisers want?


About pat thomson

Pat Thomson is Professor of Education in the School of Education, The University of Nottingham, UK
This entry was posted in abstracts, conference papers, early career researchers, peer review, rejection, reviewing.

6 Responses to the appraisal and rejection of conference abstracts

  1. random says:

    So disappointed to read this. Really, it is dinosaur thinking to assert that qualitative research methods are less rigorous and have less impact than empirical research. I would really question the credentials of a reviewer who makes such comments, and I think it behoves conference convenors to choose reviewers who have wider experience and knowledge of the methods available to their field.

    • Greta Hawthorne says:

I agree with this first comment. Too many scholars subscribe to the narrowest of views, expecting pigeon-holing of almost any fresh perspectives offered. There must be an audience who, like yourself, values qualitative research. I, for one, am among them.

  2. Kath McNiff says:

    Hi Pat – thanks for this interesting perspective. It seems to me that the formulaic assessment process leads to a pretty boring experience for the conference attendees. All the papers end up being vanilla and there is nothing much to be inspired by. Makes you wonder about the point of some conferences :)

  3. Pingback: Developing a mentoring plan for doctoral student reviewers | Doctoral Writing SIG

  4. maelorin says:

    This myopia is not confined to reviewers.

  5. Pingback: Friday links: Conference applications
