How strong is your belief in the efficacy of IPE interventions? You may respond by saying “pretty much” if you believe it works, or “nil” if you don’t. You may also say “I don’t know, I’m not convinced; maybe it works, maybe it doesn’t” or “it depends”. Behind all these answers is a probabilistic statement, an assessment of how likely you believe IPE works.
You can actually quantify your belief. Reverend Thomas Bayes (learn more about Bayes and Bayes’ theorem at: http://en.wikipedia.org/wiki/Thomas_Bayes#Bayes.27_theorem) called this the prior: the amount of belief you start with when confronting an event or a situation. Expressed as a probability between 0 (certain it is not so) and 1 (certain it is) or as a percentage, this can be your subjective assessment of how likely something is to happen or to be the case, e.g., how effective IPE is.
Once the event or situation happens, you can say “see I told you” if what happens agrees with what you thought would happen. If it doesn’t, you may say “well, I’m not an oracle; obviously, I can’t predict the future!”.
Sometimes, what happens is only partly what you thought would happen, and this gives you some confidence in your predictive abilities; you might then say “Hmm! I was partly right,” and the next time you are asked about your belief in what will happen, you will feel more confident in your assessment.
If you do that, it means that you have changed your prior belief in the event or situation you are considering. You would have updated your belief based on the new information provided by what happened.
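This updating step is exactly what Bayes’ theorem formalizes. Here is a minimal sketch in Python; the function is the standard two-hypothesis form of the theorem, and all the numbers are invented for illustration (they are not taken from any study):

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Suppose your prior belief that IPE works is 0.3 (a skeptic's prior),
# and a positive study result is four times as likely if IPE works (0.8)
# as if it does not (0.2). These numbers are purely hypothetical.
posterior = update(prior=0.3, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(round(posterior, 3))  # prints 0.632
```

Notice that the evidence does not have to make you certain; it just shifts your probability, and the size of the shift depends on how much more likely the evidence is under one hypothesis than under the other.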
Can we update our belief in IPE interventions? Do we have some new information that can change our prior belief (probabilistic assessment) in IPE? We do.
In the new issue of JRIPE (www.jripe.org), Packard et al. published a study that tested a specific hypothesis: students’ performance in working up a case, and their perceptions of interprofessional skills, would improve if they were given modeled examples of interprofessional communication and a team reasoning framework.
Eighteen students from dentistry, medicine, nursing, occupational therapy, pharmacy, and physical therapy were randomized to teams of six and were videotaped while completing a patient case. Team 1 (control) received only the case; team 2 received the case plus the framework; and team 3 received the case, the framework, and videotaped examples of interprofessional interactions.
The authors hypothesized that use of the framework would be associated with better student perceptions about working as part of a team and would also correlate with better student performance in working up the patient case.
Comparing the three groups, Packard et al. found that students’ perceptions of team skills improved significantly in teams 2 and 3 but not in team 1. Students’ performance on the case, as assessed by blinded faculty, was significantly better in team 3 than in teams 1 and 2.
What does this mean? To answer this question, let’s just review what the authors did and what we can infer from it.
First, there was a random allocation of students to three different groups. Which team a student ended up in did not depend on any characteristic of the student or on the researchers’ choice.
Second, the researchers assessed the students’ perception of team skills before and after the study completion. They also compared the three groups of students for their performance of the case.
Third, and this is a slightly different way of seeing the second point: the students went through a process of change. They went from point A to point B, where A is one way of seeing team skills and B another; they also reached a level of performance on the case that they would not have reached had they not been through the experiment. Those who changed most were those who received all the components of the intervention (the case, the framework, and the videotaped examples of interprofessional interactions).
The central question is whether these changes and the differences between the three groups could have happened by chance. The analysis showed that the differences were statistically significant, meaning that chance alone is a very unlikely explanation; there is one most likely explanation for these changes. What is it?
The most likely explanation is the compound effect of the intervention. The three groups were comparable on all other factors; the only thing that differentiated them was that one group received all the components of the intervention while the other two received only some of the components, or none. In other words, the three groups are comparable on every other factor that might explain the differences observed after the study, and they differ only in the dosage of the intervention. There is a kind of dose-response gradient: going from nothing, to the framework, to the framework plus videotaped examples of interprofessional interactions.
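The “could it have happened by chance?” question is what a significance test answers, and its logic can be illustrated with a toy permutation test. All the scores below are invented (they are not the study’s data): we shuffle the group labels many times and count how often a difference as large as the observed one arises from random relabeling alone.

```python
import random

# Invented performance scores for two hypothetical teams of six.
team_1 = [55, 60, 58, 62, 57, 59]   # case only
team_3 = [72, 75, 70, 78, 74, 73]   # case + framework + video examples

observed_diff = sum(team_3) / 6 - sum(team_1) / 6

random.seed(42)
pooled = team_1 + team_3
n_extreme = 0
n_trials = 10_000
for _ in range(n_trials):
    random.shuffle(pooled)               # relabel the 12 scores at random
    fake_1, fake_3 = pooled[:6], pooled[6:]
    diff = sum(fake_3) / 6 - sum(fake_1) / 6
    if diff >= observed_diff:            # as large as what we observed
        n_extreme += 1

p_value = n_extreme / n_trials           # fraction explainable by chance alone
print(f"observed difference: {observed_diff:.1f}, p ~ {p_value:.4f}")
```

A small p-value does not prove chance played no role; it says that chance alone is an unlikely explanation, which is why we then look for the most likely one.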
So the take-home message is that the most likely explanation for the differences between the three groups is the compound effect of case + framework + videotaped examples of interprofessional interactions.
What alternative explanations can we have? Here are a few:
- The students in the third group were actually good to begin with;
- The change in perception of team skills was just a chance event;
- Those who evaluated the students’ performance were biased; they misjudged what they were looking at.
The randomized, blinded nature of the study allows us to say that these explanations of the differences between the groups are the least likely.
So by now, you can update your belief in similar IPE interventions. You can update your probabilistic assessment of the efficacy of IPE interventions (similar to the one the authors used) based on the new information that this study provides.
What did the authors leave us with? They left us with three things:
1. A working hypothesis that can be tested in future research. The authors propose “that because the framework is comprehensive and represents issues of context in case management, it provides enough distributed intelligence to support interprofessional teamwork.”
Experiments could be carried out and the results analysed to confirm, refute or refine the hypothesis. Future studies would define what is meant by distributed intelligence, how it can be measured and how it would be linked to using the framework.
2. They anticipated that others might want to use the framework and devised a way to optimize its use in teaching an interprofessional course: they created a website with sample cases and tools for teaching the framework.
3. Finally, they plan in the near future to determine the efficacy of the framework in another context: that of interprofessional Team Observed Structured Clinical Encounters (TOSCEs).
So kudos to the authors. I, for now, have updated my belief that IPE can work. My earlier update was in 2010, when another randomized study, also published in JRIPE, showed how IPE can work.
Have you updated your beliefs in IPE?
References (pdf available at http://www.jripe.org):
1. Packard, K., Chehal, H., Maio, A., Doll, J., Furze, J., Huggett, K., Jensen, G., Jorgensen, D., Wilken, M., & Qi, Y. (2012). Interprofessional team reasoning framework as a tool for case study analysis with health professions students: A randomized study. Journal of Research in Interprofessional Practice and Education, 2(3), 250-263.
2. Just, J.M., Schnell, M.W., Bongartz, M., & Schulz, C. (2010). Exploring effects of interprofessional education on undergraduate students’ behaviour: A randomized controlled trial. Journal of Research in Interprofessional Practice and Education, 1(3), 182-199.