Scott Compton

I was speaking with a research scientist the other day who teaches in our Brain & Behaviour course.  She was saying that she would like to become even more involved in the medical education program of the school.  I was delighted to hear this!  While brainstorming possibilities, I asked if she was interested in conducting educational research.  She shrugged slightly, looked away, and with her voice trailing off said, "I don't know... I've heard that the quality of educational research is... kind of... low".  I immediately had visions of my amygdala glowing blindingly white as my "fight" instinct had me lacing up my academic boxing gloves.  Fortunately, some form of evolutionary reaction intervened on behalf of my professionalism, and I muttered something like, "Well, we won't do that kind of research".  It was as convincing a response as I could muster in the moment.

Unfortunately, this wasn't the first time I had heard this type of comment.  In fact, throughout my career, I have heard it numerous times, and I remember responding to it by giving my first talk on "raising the bar" for medical education research back in 2007.  Now, here we are, 10 years after that talk, and I do not think we have raised the bar even 2.54 centimeters (I'm a recovering American).  In fact, we may have even lowered it.  In short, I find myself agreeing with my research scientist colleague.

Why?  For the same two reasons that have existed for a long time:  First, there are too many medical education journals for the amount of high-quality educational research that exists.  Second, medical educators are required to publish educational research in order to rise in academic rank, which results in more people attempting one-off, poorly conceived, and even more poorly conducted "science".

And here is the scary part... I believe the problem is getting worse.  We now have people being selected as peer reviewers and even editors for medical education journals because they have a publication history - or merely a speaking history - but no specific training or knowledge in social science research methods.  When I submit an article to a journal and receive comments from reviewers, I find it all too common to feel as though I need to cite the most fundamental background sources to explain to the reviewer (and the editor) why their criticism is unfounded.  I've recently received the following comments from reviewers:

    • "Surveys are the lowest level of evidence" - made in reference to a study on the prevalence of educators' perceptions of a particular topic
    • "The meta-analysis graph is inappropriate" - made in reference to a box plot
    • "The sample size is inadequate" - made without reference to precision, effect size, or variance... even though the results were statistically significant!
    • "The study design does not take into account learning styles" - oh my god
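To make the sample-size point concrete: "adequate" means nothing until you state the effect size you hope to detect.  A minimal sketch (my own illustration, using the standard normal-approximation formula for a two-group comparison of means, not anything from the reviews themselves):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison
    of means (normal approximation). effect_size is Cohen's d."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided test
    z_beta = z(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# The "right" n varies enormously with the assumed effect:
print(n_per_group(0.8))  # large effect  -> 25 per group
print(n_per_group(0.5))  # medium effect -> 63 per group
print(n_per_group(0.2))  # small effect  -> 393 per group
```

The same study is over- or under-powered depending entirely on that assumption, which is why "the sample size is inadequate", offered with no effect size, precision, or variance in view, is not a reviewable criticism.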

I'm not completely sure how we fix this problem.  One small step is to drop the curtain on the charade of calling everything "research" and call most of it what it is: a "story".  I give a lot of credit to the Duke-NUS/SingHealth Education Conference (EDUCON) for allowing this category of submissions.  There is a lot to learn from each other's stories (hey, that's why I write a blog).  But there seems to be a lot of harm to our profession (and our students, schools, and stakeholders) when we label things as "science" or "research" when they do not reach the bar for that description.

