2023-2024 Workbook for Principles of Research, Weeks 1-7

Full Transcript


CKX23 Physiotherapy, MODULE PP6005 Principles of Research, Workbook Week 1 to Week 9, YEAR 1
This workbook is linked to the Week 1 to Week 9 lectures.

Week 1: Introduction to Research Methods

Please read:
Galvin R, Cusack T, O'Grady E, Murphy TB, Stokes E. Family-mediated exercise intervention (FAME): evaluation of a novel form of exercise delivery after stroke. Stroke. 2011 Mar;42(3):681-6. doi: 10.1161/STROKEAHA.110.594689. Epub 2011 Jan 13. PMID: 21233462. https://pubmed.ncbi.nlm.nih.gov/21233462/
Hanratty CE, Kerr DP, Wilson IM, McCracken M, Sim J, Basford JR, McVeigh JG. Physical Therapists' Perceptions and Use of Exercise in the Management of Subacromial Shoulder Impingement Syndrome: Focus Group Study. Phys Ther. 2016 Sep;96(9):1354-63. doi: 10.2522/ptj.20150427. https://pubmed.ncbi.nlm.nih.gov/27013575/

Study Design
For each study (Galvin et al 2011; Hanratty et al 2016), consider:
Why do you think each study is qualitative or quantitative?
How many participants were there? How were they selected?
What data are being collected?
Is there an intervention? Is there a control?
Use PICO / PICo to structure the research question.

Bias: Systematic Deviation from the Truth
In quantitative studies, we aim to reduce bias in order to improve internal validity. This simply means that we limit factors other than the intervention (the independent variable) that might distort the results. By controlling these other factors, we are more likely to believe that the intervention (or independent variable) was the factor that led to the results of the study. The results of a credible, well-designed study may, in turn, lead to changes in clinical practice. Bias is one important component of the scientific appraisal of quantitative studies.

Galvin et al (2011)
Galvin et al (2011) found significant improvements in impairments and activity. What, other than the intervention, may have helped to bring about that finding?
Was there any way that the study personnel influenced the outcomes? For example, by:
manipulating the allocation of the participants to each group;
giving more than just the intervention to the intervention group (e.g. time, attention, follow-up);
manipulating the results in favour of the intervention group at outcome assessment?
Was there any way the participants influenced the outcomes? Could they have been more invested in the programme? Could they manipulate the results in any way?
Was there any way that the results were reported to make the intervention look more effective? Were the results presented, or the participants' results grouped, in a way that makes the effectiveness look "better"?

Trustworthiness in Qualitative Studies
In qualitative studies, we aim to reduce bias to improve trustworthiness. As with quantitative studies, we are then more likely to believe the results of the study, which in turn may lead to changes in clinical practice. In qualitative studies, researchers collect data on experiences, perceptions, satisfaction and so on. As with any conversation, there is a chance that the researcher can steer the discussion, or only hear what he/she is biased towards. In other words, could the researcher be a factor that distorts the results? Researchers must stay true to the information and message that the participants are conveying. In qualitative studies, we look for strategies that limit the researchers' inherent bias on the findings when collecting and analysing the data.

Hanratty et al (2016)
Do you believe their findings? Was the study completed in a way that limited bias as much as possible?
Dependability
Was the data collected without the influence of the researchers? This is described as the dependability of the research methods. Dependability can be established if the research process is logical (i.e. the methods are suitable to answer the research question and are in line with the chosen methodology), traceable and clearly documented.
Did the researchers design the study in the best way to answer the research question?
Did they influence the recruitment of participants?
Did they plan the interview structure before they met/knew the participants?
How many researchers, and who, were involved in planning the questions? Did they attempt to limit their biases in the questions?
Did they allow the participant to lead the interview?
Did they attempt to limit their influence on the interpretation of the results?
Were the participants given an opportunity to verify the data collected (member checking)?

Reflexivity
Reflexivity is a process whereby the researchers reflect on their potential influence on the data. This is a recognised strategy to improve dependability, and would include the following:
A statement locating the researcher culturally or theoretically.
A reflection on the influence the researcher had on the research and vice versa. For example, did they write up field notes and reflect on their thoughts as the research progressed (reflexivity: reflecting on how they may have shaped the data or interfered with the interviews)?

Credibility
Was the data analysed without the researchers' influence? Are the findings true to the data collected? Has the data been interpreted differently from the message the participants were trying to share? This is described as the credibility of the research findings. Credibility evaluates whether there is a 'fit' between the authors' interpretation and the original source data.
Was the data analysed by several researchers in the team, with frequent debriefing sessions?
Did the researchers use data on non-verbal cues (such as video footage, or a second researcher recording observations during the interviews) in their data analysis? This further strengthens the findings.
Do you feel the data match the emergent themes? In other words, can you see the link between the themes and the quotations provided? Can you see quotes that illustrate the findings/themes?

Transferability
How much did the context make a difference? This cannot be changed, but it will influence the findings. A good researcher will reflect on this in the discussion, and link it to the transferability of the findings.
Where did the focus groups take place?
What was the work/home/leisure environment of the participants? Did this differ across participants? Was this considered in the data analysis?

Confirmability
Finally, we should consider how open and transparent the research process was. The researchers should clearly report the step-by-step process. Can you see clearly what was done? Could you repeat the study? This is called confirmability. Are all the steps clearly documented, highlighting how the researchers maintained high standards of dependability? Are the methods of coding the data and identifying patterns clearly documented and repeatable?

How to critically appraise the internal validity / trustworthiness of research studies
Review the following information. This will help prepare for the critical appraisal of the literature. We use the acronym PICO / PICo a lot. It helps to structure a research question. Try to use it when possible.

P: Population (quantitative and qualitative)
Description of the participant groups?
Who / how were they recruited? How would these factors affect credibility?

I: Intervention (quantitative)
Consider how the intervention was described. Could you now repeat the intervention from its description? What information is missing? (Think FITT: frequency, intensity, type and time; and whether it was group, tailored or supervised exercise, for example.)
I: Interest (qualitative)
Consider the phenomenon of interest. Is it a defined event or experience? How well is it described? Do all the participants understand this phenomenon in the same way (e.g. "exercise": some may consider this to be structured exercise in a gym, while others may consider it to be mindfulness)?

C: Comparator (quantitative)
The experimental intervention is compared to a more recognised/routine intervention (i.e. the control, or comparator, group), and the differences in the outcomes are measured. It is therefore important to note the comparator's intervention. Also consider contamination: if the control group find out about the experimental intervention and start doing it, how would that affect the results?
Co: Context (qualitative)
Consider the setting of the phenomenon: how would that impact the findings? For example, exploring peer learning in university in the context of Covid, or of remote learning. Consider each participant's context, the similarities and differences, and how these would impact the findings.

O: Outcomes (quantitative and qualitative)
Results can only be found in what is measured. However, if you measure too much, the participants will fatigue. Consider the value/meaningfulness of the measurements.

In quantitative studies, bias can arise mostly from the sources listed below. How could you reduce each source of bias?

Sources of bias / How to reduce bias
Bias arising from the randomisation process (recruitment and allocation): computer-generated randomisation (see the illustrative sketch below).
Bias due to deviations from intended interventions:
Bias due to missing outcome data:
Bias in measurement of the outcome:
Bias in the selection of the reported results:

Conclusion
Just note here that bias suggests poorer credibility. Less rigour makes "significant" results easier to produce, whether or not they are real. Poor credibility suggests that the researchers may have manipulated the results of the study; the results may be biased. So, when you critically appraise literature, remember that stricter, more credible studies allow less bias, and it is therefore harder to manipulate the results towards the significant changes the researchers would like. Good researchers go into a study with an open mind and a genuine interest in finding out what happens at the end, rather than trying to prove a hunch!

Week 2: Qualitative Research

Read:
Pope C, Mays N. Qualitative Research: Reaching the parts other methods cannot reach: an introduction to qualitative methods in health and health services research. BMJ 1995;311:42. doi: 10.1136/bmj.311.6996.42 (available on Canvas)
Kitzinger J. Qualitative Research: Introducing focus groups. BMJ 1995;311:299. doi: 10.1136/bmj.311.7000.299 (available on Canvas)
Guest G, Bunce A, Johnson L. How Many Interviews Are Enough? An Experiment with Data Saturation and Variability. Field Methods 2006;18(1):59-82. https://doi.org/10.1177/1525822X05279903 (available on Canvas)

Section 1: Interpretation of the presentation
Why use qualitative research (slide 5)? Think about the different types of research questions it may be able to answer.
Sampling: discuss the advantages and disadvantages of random sampling when conducting qualitative research.
What are the advantages of unstructured interviews when compared to structured interviews?
What are the advantages of focus groups when compared to one-to-one interviews?
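Referring back to the bias-reduction table in the Week 1 notes above: the sketch below is a minimal, hypothetical illustration (in Python, not taken from any of the set papers) of what computer-generated block randomisation can look like, and of why a pre-generated, concealed allocation list stops study personnel from steering which participants receive the intervention. The group labels, block size and seed are assumptions chosen purely for illustration.

import random

def block_randomisation(n_participants, block_size=4, seed=2023):
    """Return a pre-generated allocation list ('intervention'/'control'),
    balanced within every block of block_size participants."""
    rng = random.Random(seed)  # fixed seed: the list is reproducible and auditable
    block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
    allocations = []
    while len(allocations) < n_participants:
        shuffled = block[:]
        rng.shuffle(shuffled)  # the order within each block is random
        allocations.extend(shuffled)
    return allocations[:n_participants]

# The full list is produced before recruitment starts and is held by someone
# independent of recruitment (allocation concealment), so study personnel
# cannot predict or steer the next assignment.
print(block_randomisation(12))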
Discuss the concepts of the following. What are they? How are they different? How can you recognise each?
Trustworthiness
Dependability
Credibility
Confirmability
Transferability

"The focus groups took place within the Belfast and Northern Health and Social Care Trusts in Northern Ireland and at a central location in Dublin, where therapists from 3 hospitals/clinics convened."
Does this statement increase the credibility, dependability, confirmability, or transferability of the findings? Explain.

"During each of the six stages, a meeting was convened with a third researcher to discuss and review the interview coding, the formation of preliminary themes, and the decision of the final themes."
Does this statement increase the credibility, dependability, confirmability, or transferability of the findings? Explain.

"All interviews were audio recorded."
Does this statement increase the credibility, dependability, confirmability, or transferability of the findings? Explain.

"Participants were purposively sampled from participants taking part in the RCT and to ensure the views of people from different age groups, gender and social backgrounds were captured."
Does this statement increase the credibility, dependability, confirmability, or transferability of the findings? Explain.

"………………….. was assured by the non-judgmental atmosphere during the interviews, using an iterative process, and discussion of the coding phases and resulting model among the team members."
Fill in the missing word in the sentence above. Your four options are credibility, dependability, confirmability, and transferability. Explain the reason for your choice.

Week 3: Critical Appraisal of a Qualitative Paper

Read:
Littlewood C, Malliaras P, Mawson S, May S, Walters S. Patients with rotator cuff tendinopathy can successfully self-manage, but with certain caveats: a qualitative study. Physiotherapy. 2014 Mar;100(1):80-5. doi: 10.1016/j.physio.2013.08.003. Epub 2013 Nov 13. PMID: 24238700.
Johnson JL, Adkins D, Chauvin S. A Review of the Quality Indicators of Rigor in Qualitative Research. Am J Pharm Educ. 2020 Jan;84(1):7120. doi: 10.5688/ajpe7120. PMID: 32292186; PMCID: PMC7055404.
Shenton A. Strategies for Ensuring Trustworthiness in Qualitative Research Projects. Education for Information. 2004;22:63-75. doi: 10.3233/EFI-2004-22201.

Introduction
If you were going to present a critical appraisal of this paper (Littlewood et al, 2014), what information would you give in the introduction? (Note the details in this box rather than the headings/concepts.) Think about the structure of the introduction that will help orientate the reader; in other words, one that will lead the reader clearly to the main body of the report, with details of the anticipated structure. Clearly state the aim.

How would you structure the main body of your report? What headings/subheadings would you use? What would each subsection cover? What would be the topic of each paragraph?

Conclusion
This section will briefly describe what you found and your overall conclusions of your appraisal.

Week 4: Quantitative Research Design
The aim of this week is to gain an understanding of different quantitative research designs. We will focus on RCTs and observational studies.

Read:
[WIH] McCullagh R, Dillon C, Dahly D, Horgan NF, Timmons S. Walking in hospital is associated with a shorter length of stay in older medical inpatients. Physiological Measurement. 2016 Sep 21;37(10):1872.
https://iopscience.iop.org/article/10.1088/0967-3334/37/10/1872/meta
[APEP] McCullagh R, O'Connell E, O'Meara S, et al. Augmented exercise in hospital improves physical performance and reduces negative post hospitalization events: a randomized controlled trial. BMC Geriatr. 2020;20:46. https://doi.org/10.1186/s12877-020-1436-0
https://bmcgeriatr.biomedcentral.com/articles/10.1186/s12877-020-1436-0

What were the study questions, or aims?
APEP: To measure the effects of an augmented prescribed exercise programme versus usual care on physical performance, quality of life and healthcare utilisation for frail older medical patients in the acute setting.
WIH: To measure the association between average daily step-count in hospital and (1) length of stay and (2) end-of-study physical performance.

What were the study designs?
APEP: Prospective, sham-intervention controlled, randomised trial, with blinded randomisation and outcome measurement.
WIH: Cross-sectional, observational study.

In your opinion, were they appropriate?
APEP: A controlled randomised trial reduces researcher bias.
WIH: Objectively recorded step counts are more reliable than self- or proxy-reported activity.

Identify where the results of the study may not be applicable (think of the context/setting, the population, and the intervention/phenomenon).
APEP: The results may not apply to more robust, physically able patients.
WIH: This study was limited to one centre. While all patients recruited were typical general medical or geriatric medicine patients, the results may not apply to other hospitals or patient cohorts.

APEP: Identify if/when deviations from the intended intervention occurred. Did the authors attempt to limit this occurrence? How? Can you suggest any other ways to limit the occurrence? Where can you find information about the planned intervention?
Accelerometer-recorded walking activity was collected on a considerably lower number of patients than planned. The trial was terminated early, due to a change in discharge procedures, with 190 of the planned 220 patients included. A phone follow-up assessment was introduced for patients unable to attend a face-to-face assessment.

Methods
List three potential confounders in the studies. How could they distort the relationship between the dependent variable (DV) and the independent variable (IV)? How would you know what potential confounders exist? (An illustrative sketch of confounding and adjustment is given below.)
Baseline level of the patients (see Table 2, Baseline Characteristics of the APEP Participants).

What did the adjusted analysis tell us about the relationship between WIH and physical performance?
There was no association between step count and physical performance.

Were power calculations conducted for the studies? Did the researchers reach them? What outcome measure were the power calculations based on (this is routinely the primary outcome measure)? Did they reach the numbers intended? What impact did that make on the results?
APEP: Yes; the calculation was based on LOS. They did not reach the numbers intended, so the required P value was not reached and significance could not be determined.
WIH: No.

APEP: Can you find any clues to tell you how variable the LOS was?

For both APEP and WIH:
P value of the primary outcomes: what does that mean (in English!)?
Magnitude of effect: what does that mean?
95% CI: what does that mean?
(A worked illustration of these quantities is given below.)

Week 6: Critical appraisal of a randomised controlled trial (Galvin et al 2011)
The link to the CASP checklists: https://casp-uk.net/casp-tools-checklists/
The link to Galvin et al (2011): https://www.ahajournals.org/doi/10.1161/STROKEAHA.110.594689?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%20%200pubmed
We will appraise the paper in class; please be familiar with the paper.
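To make the ideas of confounding and an "adjusted analysis" more concrete, here is a minimal, hypothetical sketch in Python. The data are invented and this is not the analysis used in the WIH paper; it simply shows how an apparent association between step count and length of stay can change once a confounder such as baseline physical performance is adjusted for.

import numpy as np

rng = np.random.default_rng(0)
n = 150
# Invented data: fitter patients (higher baseline performance) walk more and go home sooner.
baseline_performance = rng.normal(8, 2, n)                              # hypothetical baseline score
steps = 500 + 300 * baseline_performance + rng.normal(0, 400, n)        # daily step count
length_of_stay = 14 - 0.8 * baseline_performance + rng.normal(0, 2, n)  # days in hospital

# Unadjusted model: length of stay ~ steps
X_unadj = np.column_stack([np.ones(n), steps])
b_unadj, *_ = np.linalg.lstsq(X_unadj, length_of_stay, rcond=None)

# Adjusted model: length of stay ~ steps + baseline performance (the confounder)
X_adj = np.column_stack([np.ones(n), steps, baseline_performance])
b_adj, *_ = np.linalg.lstsq(X_adj, length_of_stay, rcond=None)

print(f"Unadjusted step-count coefficient: {b_unadj[1]:.4f}")
print(f"Adjusted step-count coefficient:   {b_adj[1]:.4f}")
# If the coefficient shrinks towards zero after adjustment, much of the apparent
# "effect" of walking was really explained by baseline physical performance.

In this simulated example the step-count coefficient shrinks once the confounder is added, which is exactly how a confounder can distort the relationship between the DV and the IV.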
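The questions above ask what a P value, the magnitude of effect and a 95% CI mean. Below is a minimal, hypothetical worked example in Python (invented data, not the trial data): it computes each quantity for a simple two-group comparison and finishes with the kind of a priori sample-size (power) calculation referred to above. The group sizes, effect size, alpha and power values are assumptions chosen only for illustration.

import numpy as np
from scipy import stats

# Invented data for two groups of 60 patients (e.g. change in a performance score).
rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=4.0, size=60)
intervention = rng.normal(loc=12.0, scale=4.0, size=60)
n1, n2 = len(intervention), len(control)

# P value: how likely a difference at least this large would be if the groups truly did not differ.
t_stat, p_value = stats.ttest_ind(intervention, control)

# Magnitude of effect: the raw mean difference, plus a standardised effect size (Cohen's d).
mean_diff = intervention.mean() - control.mean()
pooled_sd = np.sqrt((intervention.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = mean_diff / pooled_sd

# 95% CI for the mean difference: the range of true differences compatible with the data.
se_diff = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
df = n1 + n2 - 2
half_width = stats.t.ppf(0.975, df) * se_diff
ci_low, ci_high = mean_diff - half_width, mean_diff + half_width

print(f"p = {p_value:.3f}; mean difference = {mean_diff:.2f} "
      f"(95% CI {ci_low:.2f} to {ci_high:.2f}); Cohen's d = {cohens_d:.2f}")

# A priori sample size (normal approximation): patients per group needed to detect a
# standardised effect of 0.5 with 80% power at a two-sided alpha of 0.05.
alpha, power, target_d = 0.05, 0.80, 0.5
z_alpha, z_beta = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
n_per_group = 2 * ((z_alpha + z_beta) / target_d) ** 2
print(f"Approximately {int(np.ceil(n_per_group))} participants per group needed")

Reading the output: the P value tells you how surprising the observed difference would be under "no true difference"; the mean difference and its 95% CI describe the magnitude of effect and the range of plausible true effects; and the final line shows roughly how many participants per group a study needs before it is adequately powered for the assumed effect size.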
Week 7: Ethics

Introduction
The aim of this workshop is to explore, discuss and identify the key ethical considerations for people who consent to take part in a study. We will consider what participants need to know in order to give (written) informed consent, and why. For this workbook, it is best to consider yourself as a person who has been approached to participate in the study. What would you need to know before you consent to take part? This will inform your ethics application. You will also need to consider what the research ethics committee will need to know. Note that the ethics committee does not review the methodological robustness of the study; its job is to protect the participants. However, some reviewers might suggest simple changes that will improve the study quality, but do not expect this in their responses.

Each study that collects data on people or animals needs to gain ethical approval from the local research ethics committee. Full ethical approval must be given before recruitment begins. The ethics committee is made up of clinicians and researchers who review the applications. They can (1) grant full approval, (2) grant approval subject to changes to the project, or (3) refuse approval.

There are two ethics committees for research collecting data on people:
The Social Research Ethics Committee (SREC) deals with data collection on "people" (community-dwellers) where the research is not linked to their clinical/health care. This means that clinicians cannot be involved in the study in any way, data cannot be collected in a healthcare setting, and the data being collected is not health data but more social, wellbeing or occupational data. We have a local subcommittee in the School of Clinical Therapies that reviews applications from Physiotherapy (CT-SREC). Approval can be gained within about 2 to 4 weeks.
On the other hand, the Clinical Research Ethics Committee (CREC) deals with ethics applications for clinical research. This includes data collection in a healthcare setting, involving the person's healthcare and health data. People under healthcare are in a vulnerable position, and ethical examination needs to protect the individual in this more vulnerable situation. Approval can take considerably longer from this committee and often needs amendments before full approval is granted.

Please read:
Galvin R, Cusack T, O'Grady E, Murphy TB, Stokes E. Family-mediated exercise intervention (FAME): evaluation of a novel form of exercise delivery after stroke. Stroke. 2011 Mar;42(3):681-6. doi: 10.1161/STROKEAHA.110.594689. Epub 2011 Jan 13. PMID: 21233462. https://pubmed.ncbi.nlm.nih.gov/21233462/

Brainstorm in your group: what do the people (participants) need to know before they consent (high level)? Think about each stage: recruitment, intervention, assessments and their data.

In the class we will complete the SREC application for this study. Please note that a clinical study (which this is!) should seek ethical approval through the CREC committee. However, as most of you will need to apply to the SREC committee, we will use this form. Also note that systematic reviews and qualitative evidence syntheses do not need ethical approval.

Code of Research Conduct
https://www.ucc.ie/en/media/research/researchatucc/researchsupports/researchintegrity/UCCCodeofResearchConductV2.4-approved14thSeptember2021.pdf
This code applies to all researchers, both students and supervisors. Please take time to become familiar with the code.
The principles of good research practice underpin the ethical considerations of research. Read Sections 6 to 9 carefully (Research Integrity; Respect for the Rights and Dignity of Research Participants; Records and Data Management; and Dissemination; pages 12-19).

4: Ethics application (we will discuss this in the class)
Download an SREC ethics form: https://www.ucc.ie/en/research/support/ethics/socialresearch/applicationformtemplates/. Download the Participant Information Sheet and Consent Form also. Helpful guidelines are available through hyperlinks, and the videos are very helpful.
In your groups, consider the following for the Galvin et al (2011) study:
Data protection: what would you need to report to the ethics committee in your application?
Patient information and consent: can you adapt the Participant Information Sheet and Consent Form?
