Training activity information

Details

Compare competing clinical or operational measurements using appropriate statistical techniques and present the results and conclusions

Type

Developmental training activity (DTA)

Evidence requirements

Evidence that the activity has been undertaken by the trainee.

Reflection on the activity at one or more time points after the event, including learning from the activity and/or areas of the trainee's practice identified for development.

An action plan to implement learning and/or to address skills or knowledge gaps identified.

Considerations

  • Gold standard
  • Bland-Altman diagrams
    • Level of agreement
    • Bias limits
    • Systematic error
    • Data transformation
  • Precision
    • Intra-sessional variance
    • Inter-sessional variance
    • Inter-assessor variance
    • Standard error of measurement/intra-class correlation
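
The core Bland-Altman quantities listed above (bias, limits of agreement) can be sketched in a few lines of Python. The data below is made up purely for illustration, not real clinical measurements:

```python
import numpy as np

# Hypothetical paired readings of the same quantity by two methods.
method_a = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.0, 9.5, 12.4])
method_b = np.array([10.5, 11.2, 10.1, 12.5, 10.6, 11.3, 9.9, 12.0])

diffs = method_a - method_b          # per-subject differences (y-axis of the plot)
means = (method_a + method_b) / 2    # per-subject means (x-axis of the plot)

bias = diffs.mean()                  # systematic error (mean difference)
sd = diffs.std(ddof=1)               # sample SD of the differences
loa_lower = bias - 1.96 * sd         # lower 95% limit of agreement
loa_upper = bias + 1.96 * sd         # upper 95% limit of agreement

print(f"bias = {bias:.3f}")
print(f"95% limits of agreement: [{loa_lower:.3f}, {loa_upper:.3f}]")
```

In a full analysis these values would be drawn on a scatter plot of `diffs` against `means`, with horizontal lines at the bias and the two limits of agreement.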

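For the precision considerations above, the intra-class correlation (ICC) and standard error of measurement can also be sketched from variance components. This uses the one-way random-effects ICC(1,1) formula on made-up repeated measurements; the SEM line uses the common pooled-SD approximation:

```python
import numpy as np

# Hypothetical repeated measurements: rows = subjects, columns = sessions.
ratings = np.array([
    [10.1, 10.4],
    [12.3, 12.0],
    [ 9.7,  9.9],
    [11.5, 11.8],
    [10.9, 10.6],
])
n, k = ratings.shape

grand_mean = ratings.mean()
subject_means = ratings.mean(axis=1)

# One-way random-effects ICC(1,1): between-subject vs within-subject variance.
ss_between = k * ((subject_means - grand_mean) ** 2).sum()
ss_within = ((ratings - subject_means[:, None]) ** 2).sum()
ms_between = ss_between / (n - 1)
ms_within = ss_within / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Standard error of measurement from the overall SD and the ICC
# (an approximation to the pooled SD, used here for simplicity).
sem = ratings.std(ddof=1) * np.sqrt(1 - icc)

print(f"ICC(1,1) = {icc:.3f}, SEM = {sem:.3f}")
```
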
Reflective practice guidance

The questions below are provided to support reflection at different time points for this training activity. They are intended as guidance, not a mandatory checklist; trainees are not expected to provide answers to every question listed.

Before action

  • What are the competing clinical or operational measurements you will be comparing?
    • What is the context of this comparison?
  • What statistical techniques are suitable for comparing these measurements (e.g., paired t-tests, ANOVA, Bland-Altman analysis)?
    • What are the assumptions of these tests?
  • What data will you be using for this comparison?
    • How was this data collected?
  • How will you present the results and conclusions of your statistical comparison, including any limitations?
  • What challenges might arise in selecting the correct statistical tests, ensuring the data meets the assumptions of the tests, and interpreting the statistical significance of the results?
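
The test-selection and assumption-checking questions above can be sketched in Python. The paired data here is invented for illustration, and the 0.05 threshold is a conventional, not mandatory, cut-off:

```python
import numpy as np
from scipy import stats

# Hypothetical paired readings of the same quantity by two methods.
method_a = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.0, 9.5, 12.4])
method_b = np.array([10.5, 11.2, 10.1, 12.5, 10.6, 11.3, 9.9, 12.0])
diffs = method_a - method_b

# Shapiro-Wilk tests the normality assumption behind the paired t-test.
shapiro_stat, shapiro_p = stats.shapiro(diffs)

if shapiro_p > 0.05:
    # Differences look approximately normal: a paired t-test is reasonable.
    stat, p = stats.ttest_rel(method_a, method_b)
    test_used = "paired t-test"
else:
    # Normality is doubtful: fall back to the non-parametric Wilcoxon test.
    stat, p = stats.wilcoxon(method_a, method_b)
    test_used = "Wilcoxon signed-rank"

print(f"{test_used}: statistic = {stat:.3f}, p = {p:.3f}")
```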

In action

  • As you apply statistical tests to compare the measurements, are you checking if the data meets the assumptions of the tests?
    • What are you doing if the assumptions are violated?
  • Are the results of the statistical comparison clear and interpretable? If not, are you re-evaluating your choice of test or data preparation?
  • Are you considering the clinical or operational significance of the statistical differences (or lack thereof)?
    • Is this influencing how you interpret the results?
  • When preparing to present your conclusions, are you anticipating potential questions about your methodology or findings?
    • How are you structuring your presentation to address these proactively?
  • Are you reflecting on the robustness of your conclusions based on the data and the statistical methods used?
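
One common response when an assumption is violated is the data transformation noted under Considerations. For instance, if the difference between methods grows with the size of the measurement (proportional error), a log transformation can restore a roughly constant level of agreement, with the limits of agreement then describing ratios rather than absolute differences. A minimal sketch on made-up data:

```python
import numpy as np

# Hypothetical readings where method B runs ~7-8% high across the whole range.
method_a = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
method_b = np.array([5.5, 10.8, 21.5, 43.0, 86.5, 172.0])

log_diffs = np.log(method_a) - np.log(method_b)  # differences on the log scale
bias = log_diffs.mean()
sd = log_diffs.std(ddof=1)

# Back-transform: mean ratio of method A to method B with 95% limits.
mean_ratio = np.exp(bias)
ratio_lower = np.exp(bias - 1.96 * sd)
ratio_upper = np.exp(bias + 1.96 * sd)

print(f"mean ratio A/B = {mean_ratio:.3f} "
      f"(95% limits {ratio_lower:.3f} to {ratio_upper:.3f})")
```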

On action

  • Summarise the clinical or operational measurements you compared, the statistical techniques you used, and the key results and conclusions of your comparison.
  • What did you learn about using statistical techniques to compare different measurements?
    • Did you improve your understanding of hypothesis testing and statistical significance?
    • How effective were you in presenting the results and drawing meaningful conclusions?
    • How did your choices of statistical tests and your interpretation of p-values during the activity shape your conclusions?
    • How does the ability to compare measurements inform decision-making in healthcare?
    • Were there any surprising differences or similarities between the measurements?
    • What did you learn from these?
    • Were there any challenges in choosing the right statistical test or interpreting the results in a clinically or operationally relevant way?
  • What specific statistical tests for comparison do you need to understand better?
    • How can you improve your ability to interpret statistical significance in a practical context?
    • What are your next steps in refining your skills in comparative statistical analysis?
    • What resources (e.g., statistical guidelines, example analyses) might be beneficial?

Beyond action

  • Have you compared different measurements using statistical techniques in other contexts or encountered different comparative statistical tests since this DTA?
    • How has your understanding of the nuances of comparing different types of data evolved?
    • Have you discussed your comparative analyses with colleagues to validate your approach and interpretation?
  • How has this experience influenced your ability to critically evaluate studies that compare different clinical or operational measurements?
    • Have you applied comparative statistical techniques to inform decision-making regarding measurement selection or operational improvements?
    • Has your appreciation for the importance of robust statistical comparison in healthcare grown?
  • What transferable skills (e.g., statistical analysis, critical appraisal, decision support) did you develop that will be valuable in research, technology assessment, or guideline development?
    • How has this experience informed your understanding of evidence synthesis in healthcare?
    • What clear actions for continued development in comparative statistical methods or interpretation have been identified?

Relevant learning outcomes

2. Select, perform and critique statistical analyses and interpretations on clinical and operational datasets.