# Evidence-Based Practice in Rehabilitation
This textbook introduces evidence-based practice in rehabilitation. It explains the importance of research, including sources of external scientific evidence, practitioner experience, and client situation/values, in clinical decision-making. The book details how to formulate, evaluate, and implement evidence-based clinical decisions.
# Defining Evidence-Based Practice

## Why Practitioners Need to Understand Research

### Chapter Outline

- What Is Evidence-Based Practice?
- External Scientific Evidence
- Practitioner Experience
- Client Situation and Values
- Why Evidence-Based Practice?
- The Process of Evidence-Based Practice
- Formulate a Question Based on a Clinical Problem
- Identify the Relevant Evidence
- Evaluate the Evidence
- Implement Useful Findings
- Evaluate the Outcomes
- Writing an Evidence-Based Question
- Questions on Efficacy of an Intervention
- Research Designs for Efficacy Questions and Levels of Evidence
- Questions for Usefulness of an Assessment
- Research Designs Used in Assessment Studies
- Questions for Description of a Condition
- Research Designs Used in Descriptive Studies
- Questions for Prediction of an Outcome
- Research Designs Used in Predictive Studies
- Questions About the Client’s Lived Experience
- Research Designs Addressing the Client’s Lived Experience
- Critically Appraised Topics and Critically Appraised Papers
- Review Questions
- Answers
- References

## Learning Outcomes

1. Identify the three components, or sources, of evidence that contribute to evidence-based decision making.
2. Apply an evidence-based practice hierarchy to determine the level of evidence of a specific research study.
3. Describe the different types of research questions and the clinical information that each type of question elicits for therapists.
4. Explain the purposes of a critically appraised topic and critically appraised paper.
## Key Terms

- Client-centered practice
- Control
- Critically appraised paper
- Critically appraised topic
- Cross-sectional research
- Evidence-based practice
- Incidence
- Internal validity
- Levels of evidence
- Longitudinal research
- PICO question format
- Prevalence
- Random assignment
- Randomized controlled trial
- Reflective practitioner
- Reliability
- Replication
- Scientific method
- Sensitivity
- Shared decision making
- Specificity
- Systematic review
- Validity

## Evidence-Based Practice in Rehabilitation

Many of us have heard, and some of us even try to achieve, the recommendation of walking 10,000 steps a day. Is this widely accepted recommendation based on scientific evidence? Lee et al. (2019) conducted a study to determine the benefits of walking 10,000 steps for older women. When examining the background for the recommendation, the researchers learned that the number was not based on evidence but instead was used as a marketing device to sell pedometers. In the study, Lee and colleagues found that sedentary older women tend to walk only 2,700 steps a day and that, when this number was increased to 4,400, there was a 41% decrease in mortality. However, the benefits tended to level off at 7,500 steps. This amount of walking may be a more reasonable goal for older women — and it is based on the evidence.

The 10,000-step recommendation is an example of practice that is not supported by research or “evidence.” Such practices even creep into our professions. No doubt there are practices that rehabilitation professionals have adopted and accepted as fact that, although they are not as well-known as the 10,000-step adage, are also ingrained in practice — even though they are not supported by evidence. Let’s look at an example: For decades, the recommended treatment for acute low back pain was bedrest, typically for 2 days with no movement other than toileting and eating.
A Finnish study examined this recommendation in a well-designed randomized controlled trial that compared 2 days of bedrest with back extension exercises and with ordinary activity (Malmivaara et al, 1995). The study found the best results with ordinary activity. Subsequent research confirmed this finding, or at least found that staying active was as effective as bedrest for treating low back pain and had the obvious advantage of less disruption of daily life (Dahm, Brurberg, Jamtvedt, & Hagen, 2010). Not only did this research change practice, but it also helped move practitioners toward recognizing the importance of evidence in the clinical reasoning process. Without the research evidence, the recommendation for bedrest may have been difficult to challenge; bedrest did eventually ameliorate low back pain, so clinical and client experience suggested a positive outcome. Only through testing of alternatives was the accepted standard challenged.

Questioning what we do every day as health-care practitioners and making clinical decisions grounded in science is what *evidence-based practice (EBP)* is all about. However, the use of scientific evidence is limited by the fact that clinical decisions are made within the context of a clinician’s experience and an individual client’s situation. No single profession will ever have enough relevant studies with adequate reliability and validity to answer all practice questions. However, the process of science is a powerful, self-correcting resource. With the accumulation of research, clinicians can continually update their practice knowledge and make better clinical decisions so that clients are more likely to achieve positive results. Evidence-based practitioners are reflective and able to articulate what they are doing to help a client and why. In evidence-based practice, decisions are not based on hunches, “the way it has always been done,” or what is easiest or most expedient.
Rather, in evidence-based practice, the therapist’s clinical decisions and instructions can be explained along with their rationale. Evidence-based practice is explicit by nature.

This chapter introduces evidence-based practice. Topics such as sources of evidence, the research process, and levels of evidence are discussed so that the reader can understand the larger context in which evidence-based practice takes place. Detailed guidance is provided concerning how to craft evidence-based questions to answer clinical questions. The chapter also introduces readers to critically appraised topics and papers, as well as the process for developing them. These topics are then explored in greater detail in subsequent chapters. This chapter focuses on the what, why, and how of evidence-based practice: What is evidence-based practice? Why is evidence-based practice a “best practice”? How do practitioners integrate evidence into their practice?

## What Is Evidence-Based Practice?

Evidence-based practice in rehabilitation stems from evidence-based medicine. David Sackett, a pioneer of evidence-based medicine, and his colleagues provided the following widely cited definition: “*Evidence-based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients*” (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996, p. 71). Evidence-based practice requires an active exchange between researchers and clinicians (Ben-David, Jonson-Reid, & Tompkins, 2017). Researchers produce findings with clinical relevance and disseminate those findings through presentations and publications. Clinicians then use this information in relation to their specific practice situations. The researchers’ findings may be consistent or inconsistent with the clinician’s experience and understanding of a particular practice question.
Reflective practitioners capitalize on the tension that exists between research findings and clinical experience to expand their knowledge. In addition, from a client-centered practice perspective, the values and experience of the clients, caregivers, and family are essential considerations in the decision-making process. Thus, evidence-based practice is a multifaceted endeavor comprising three components, or sources, of evidence:

1. The external scientific evidence
2. The practitioner’s experience
3. Client/family situation and values

**Each source of evidence provides information for clinical decision making. The best decisions occur when all three sources are considered.**

## External Scientific Evidence

External scientific evidence is a component of evidence-based practice that arises from research. Therapists typically obtain scientific evidence from research articles published in scientific journals. The scientific evidence provides clinicians with objective information that can be applied to clinical problems. In the development of this source of evidence, researchers use the *scientific method* to attempt to remove bias by designing well-controlled studies, objectively collecting data, and using sound statistical analysis to answer research questions. The steps of the scientific method include:

1. Asking a question
2. Gathering information about that question
3. Formulating a hypothesis
4. Testing the hypothesis
5. Examining and reporting the evidence

**From the Evidence 1-1** provides an abstract from a randomized controlled trial that investigates the efficacy of a technology-assisted enriched environment for rehabilitation patients (Amatya et al, 2020). The researchers had questions about the amount of time that patients spend inactive and isolated outside of their therapy sessions. Their hypothesis was that an enriched environment would not only increase activity but that the additional practice would also improve upper arm function.
The research findings supported their hypothesis. When compared with the control group, the intervention group that received environmental enrichment increased the amount of time they spent in activity and exhibited greater improvements in upper arm function.

**From the Evidence 1-1: Example of External Scientific Evidence**

- Amatya, B., Khan, F., Windle, I., Lowe, M., & Galea, M. P. (2020). Evaluation of a technology-assisted enriched environmental activities programme for upper limb function: A randomized controlled trial. Journal of Rehabilitation Medicine, 52, jrm00003. doi: 10.2340/16501977-2625

**FTE 1-1 Question:** How might this external scientific evidence influence your practice?

Although this example provides evidence from a single study that used a strong randomized controlled trial design (Amatya et al, 2020) (described in Chapter 8), all studies have limitations. The results of a single study should never be accepted as proof that a specific intervention is effective. Science is speculative, and findings from research are not final. This concept speaks to another important characteristic of the scientific method: *replication*. The scientific method is a gradual process that is based on the accumulation of results. When multiple studies produce similar findings, as a practitioner you can have more confidence that the results are accurate or true. Later in this chapter, the hierarchical levels of scientific evidence and the limitations of this approach are presented. Subsequent chapters describe in greater detail the research process that is followed to create evidence to address different types of research questions.

## Practitioner Experience

Initial models of evidence-based practice were criticized for ignoring the important contribution of practitioner experience in the clinical decision-making process.
From early on, Sackett and colleagues (1996) argued against “cookbook” approaches and submitted that best practices integrate scientific evidence with clinical expertise. Originally, clinical experience was held in less regard than research evidence; however, today’s researchers and practitioners take a more nuanced perspective. *Practice knowledge* is as important as, and complementary to, research evidence (Paez, 2018). In addition, clinical experience is essential when the scientific evidence is insufficient for making clinical decisions and for translating research into real-world clinical settings. Research can never keep up with clinical practice, nor can it answer all of the specific questions that therapists face every day, given the diversity of clients, the different settings in which therapists practice, and the pragmatic constraints of the real world. There may be studies indicating that a particular approach is effective, but it is much less common to find evidence related to frequency and intensity, or to how to apply an intervention to a complicated client with multiple comorbidities, as typically seen in practice. Practitioners will always need to base some of their decisions on expertise gathered through professional education, interaction with colleagues and mentors, and accumulation of knowledge from their own experience and practice.

Practitioner expertise is enriched by *reflection*. The *reflective practitioner* takes the time to consider their experience with a client, how it turned out, and how to make things even better. The reflective practitioner also incorporates theory into the decision-making process. Theories provide the practitioner with an explanation of how an intervention is believed to work. Eventually, a specific theory-driven intervention will optimally have research evidence to support its efficacy, but it takes time for the research to accumulate.
Again, it is reflection that makes knowledge more explicit and easier to communicate to others. Reflection becomes even more explicit and methodical when therapists use program evaluation methods. Collecting data on your own clients and evaluating their responses to treatment will provide important information for enhancing the overall effectiveness of your services.

An example of integrating research evidence and practice experience is illustrated by the Lee Silverman Voice Treatment (LSVT) BIG approach for individuals with Parkinson’s disease. The intervention is based on the theory that, because Parkinson’s disease results in slowness of movement, a reduction in arm swing, and smaller steps, repetitive large movements will cause a recalibration of the proprioceptive system, leading to normalized function (Farley & Koshland, 2005). This theory led to the development of the LSVT approach, and therapists used it even before a large body of research evidence had accumulated. Now, enough studies exist that a systematic review has been conducted (McDonnell et al, 2017). This systematic review indicates that individuals receiving LSVT BIG treatment are more likely than individuals in control conditions to improve motor performance, including walking speed and coordination. Yet, when working with an individual client who has Parkinson’s disease, the practitioner still faces many questions: At what stage in the disease process is the intervention most effective? Is the intervention effective for clients who also experience depression or dementia? Is the intervention more or less effective for individuals receiving deep brain stimulation? The intervention is typically provided in sixteen 60-minute sessions over a 1-month period. Can a more or less intensive schedule be used for specific clients? What about the long-term effects of the treatment? Because Parkinson’s disease is progressive in nature, will there be maintenance issues?
This is where clinical reasoning comes into play. You will use your practice experience to make decisions about whether to implement the approach with a specific client and how the intervention should be implemented. If you do implement LSVT BIG, you will reflect upon whether or not it is working and what modifications might be warranted.

## Client Situation and Values

Interestingly, client-centered practice moved into the forefront at the same time that evidence-based practice gained traction. *Client-centered practice* emphasizes client choice and an appreciation for the client’s expertise in their life situation. A client’s situation should always be considered in the treatment planning process. An intervention is unlikely to result in successful outcomes if a client cannot carry it out due to life circumstances. Some very intensive therapies may be ineffective if the individual does not have the financial resources, endurance/motivation, or social support necessary to carry them out. For example, a therapist should consider the issues involved when providing a single working mother of three with an intensive home program for her child with autism. Client preferences and values also play an important part in the decision-making process. For example, an athlete who is eager to return to their sport and is already accustomed to intensive training is more likely to respond favorably to a home exercise program than a client who views exercise as painful and tedious. Rarely is there a single option in the treatment planning process. When *shared decision making* occurs between clients and health-care providers, clients increase their knowledge, are more confident in the intervention they are receiving, and are more likely to adhere to the recommended therapy (Stacey et al, 2017).
*Shared decision making* is a collaborative process in which the clinician shares information from research and clinical experience, and the client shares information about personal values and experiences. Different options are presented, with the goal of arriving at an agreement regarding treatment. From a client-centered practice perspective, the client is the ultimate decision maker, and the professional is a trusted advisor and facilitator. The accessibility of Internet resources has increased clients’ involvement in the treatment planning process. Today, clients are more likely to seek health information on the Internet first before seeing a health-care provider (Jacobs, Amuta, & Jeon, 2017). The practitioner can help the client understand and interpret the evidence in light of the client’s own situation (Box 1-1). In addition, a practitioner who is well versed in the research literature may be able to supplement the client’s search with further evidence on the topic of interest and help evaluate the sources the client has already located. Chapter 14 provides additional information about the process of integrating practitioner experience, client values, and research evidence.

**Exercise 1-1: Strategizing When Client Values and Preferences Conflict With the External Research Evidence and/or the Practitioner’s Experience** (LO 1-1)

The example in Box 1-1 describes an experience in which there was conflict between the mother’s preference, the research evidence, and the orthopedic surgeon’s experience. There will likely be situations in your own practice when similar conflicts emerge.

**Question:** Identify three strategies that you might use to address a conflict such as this while still honoring the client’s values and autonomy to make decisions.

**Box 1-1 An Example of Client Inclusion in Decision Making**

This personal example illustrates the inclusion (or lack of inclusion) of the client in the decision-making process.
When the author’s daughter was in elementary school, she broke her arm in the early part of the summer; as a result, a typical summer activity of spending time at the pool was disturbed. We expected the cast to be removed on the scheduled follow-up visit, and plans were made to hit the pool as soon as the appointment was over. However, after the cast was removed and an x-ray was taken, the orthopedic surgeon explained that, although the bone was healing well, a small line where the break had occurred indicated that the bone was vulnerable to refracturing. The orthopedic surgeon was ready to replace the cast and explained that the patient would need to wear it for several more weeks to strengthen the bone. The orthopedic surgeon was likely making recommendations based on the research evidence and his practice experience. He was interested in keeping the patient from reinjuring herself. However, his recommendation was not consistent with the values and interests of the patient and her mother at that time. The mother instead requested a splint that the patient could wear when she was not swimming. The mother was willing to assume the risk after weighing the pros and cons of the situation. The orthopedic surgeon complied with the request, yet made it clear that he thought the decision was wrong and included this in his progress note. As a health-care practitioner, it is easy to appreciate the orthopedic surgeon’s dilemma. Furthermore, it is natural to want our expertise to be valued. In this case, the orthopedic surgeon may have felt that his expertise was being discounted, but the family situation and the opinion of the parent were important as well. If the health-care professional (in this case, the orthopedic surgeon) had approached the situation from a shared decision-making perspective, the values of the child and parent would have been determined and considered from the beginning. 
Best practice occurs when the decision-making process is collaborative from the outset and the perspectives of all parties are appreciated.

## Evidence in the Real World

**Applying Evidence-Based Practice to Early Psychosis Programming**

**Natalie Jones, OTS at Midwestern University – Glendale**

*Photo: a laptop displaying a presentation slide that reads, “Pathway to Independence. Natalie Jones, OTS.” Natalie Jones’ online early psychosis employment program was developed using evidence-based practice principles.*

The roots of occupational therapy stem from mental health, and presently, a growing area of practice for occupational therapists is early psychosis programs. Early psychosis programs work with individuals who are experiencing the first indication of psychotic symptoms or who have been recently diagnosed with a severe mental illness. The typical age range of these individuals is 16–26 years. This period of life is when many people are in the process of discovering their identity and making the transition into adulthood; occupational therapists can have a strong impact on increasing their independence during this transition. I was lucky to have the opportunity not only to work with the early psychosis population during my level II fieldwork but also to create a program for this group. One of my goals for this program was to encompass the principles of evidence-based practice. Evidence-based practice is a multidimensional process that takes not only external scientific evidence into account but also the practitioner’s experience and the client’s situation/values (Sackett et al, 1996). I incorporated all three components into the development of my program by conducting a literature review, interviewing different disciplines within the program, and leading a focus group with clients.
I began by conducting a literature review to determine where the need was greatest for occupational therapy services within this population and incorporated this external scientific evidence into my program. Research indicated that one of the biggest areas of need was preparing individuals for adulthood (McCay et al, 2020). Looking further into what preparation for adulthood consisted of, I found a need for more support in maintaining work and/or education, social skills/maintaining healthy relationships, money management, and understanding government benefits (Brown, 2011; Hensel et al, 2016; McCay et al, 2020). With this information, I was able to better guide the focus of my questions to the interdisciplinary team members and clients.

Before interviewing the team and clients, I was planning to create an in-person group focused on teaching specific, concrete independent living skills. After completing the interviews with staff and the focus groups with clients, I realized I was missing a key consideration: each individual is at a different stage of recovery. Therefore, the program needed to be more flexible in terms of meeting clients where they were. This realization helped me alter the format of my program from in-person to online, so clients could access resources at the best time for them. I also found staff and clients mentioning themes that were echoed in my literature review. For example, the desire for, and support needed in, employment and career exploration was a prominent theme. Clients emphasized their interest in learning how to network and transition into the workforce. Practitioners highlighted the importance of maintaining employment and incorporating work-life balance. With this information, I created my program focused on employment. This program was offered through an online platform that provided recorded modules to learn from, along with resources to help initiate the application process.
My contact information was provided in case clients had additional questions. My program focuses on different aspects of employment, such as the importance of a work-life balance, transitioning into the workforce, and tips on networking. It also provides a variety of resources, including questions to ask yourself to identify your strengths and interests, an example e-mail for networking, and considerations when disclosing mental health conditions at work. Through the process of considering all three components of evidence-based practice when developing my program, I was able to identify the most relevant topics to target and the best method for delivering the content. As an evidence-based occupational therapist, I could maximize my expertise and the outcomes for clients.

- Brown, J. A. (2011). Talking about life after early psychosis: The impact on occupational performance. Canadian Journal of Occupational Therapy, 78(3), 156–163. doi: 10.2182/cjot.2011.78.3.3
- Hensel, J. M., Banayan, D. J., Cheng, C., Langley, J., & Dewa, C. S. (2016). Client and key worker ratings of need in first-episode psychosis early intervention programmes. Early Intervention in Psychiatry, 10(3), 246–251. doi: 10.1111/eip.12171
- McCay, E., Tibbo, P., Conrad, G., Crocker, C., Langley, J., Kirwan, N., ... & Sheasgreen, C. (2020). Prepared for transition? A cross-sectional descriptive study of the gains attained in early psychosis programs. Early Intervention in Psychiatry. doi: 10.1111/eip.12916

## Why Evidence-Based Practice?

In the past, practitioners may have been comfortable operating exclusively from experience and expert opinion, but best practice in today’s health-care environment requires the implementation of evidence-based practice that incorporates the research evidence as well as the values of the client. Evidence-based practice is expected and, in many instances, required.
The official documents of professional organizations speak to the importance of evidence-based practice. For example, the Occupational Therapy Code of Ethics and Ethics Standards (American Occupational Therapy Association [AOTA], 2015) includes this statement in the section addressing beneficence: “*Use, to the extent possible, evaluation, planning, intervention techniques, assessments, and therapeutic equipment that are evidence based, current, and within the recognized scope of occupational therapy practice.*” The Position Statement from the American Speech-Language-Hearing Association’s (ASHA’s) Committee on Evidence-Based Practice includes the following: “*It is the position of the American Speech-Language-Hearing Association that audiologists and speech-language pathologists incorporate the principles of evidence-based practice in their clinical decision making to provide high quality care*” (ASHA, 2005). The World Confederation for Physical Therapy’s policy statement on evidence-based practice maintains that: “*Physical therapists have a responsibility to ensure that the management of patients/clients, carers, and communities is based on the best available evidence. They also have a responsibility not to use techniques and technologies that have been shown to be ineffective or unsafe*” (WCPT, 2017).

Clinical decisions carry more weight and influence when they are supported with appropriate evidence. Imagine participating in a team meeting and being asked to justify your use of mirror therapy for a client recovering from stroke. You respond by telling the team that, not only is your client responding favorably to the treatment, but a Cochrane review of 57 randomized controlled trials also found that mirror therapy was effective for reducing pain and improving upper extremity motor function and activities of daily living (Thieme et al, 2018).
Use of evidence can increase the confidence of both your colleagues and your client that an intervention is valid. Likewise, payers are more likely to reimburse your services if they are evidence-based. Evidence-based practice also facilitates communication with colleagues, agencies, and clients. As clinical decision making becomes more explicit, the practitioner can support choices with the source(s) of evidence that were used and explain those choices to other practitioners, clients, and family members. Ultimately, the most important reason to implement evidence-based practice is that it improves the quality of the services you provide. An intervention decision that is justified by scientific evidence, grounded in clinical expertise, and valued by the client will, in the end, be more likely to result in positive outcomes than a decision based on habits or expediency.

## The Process of Evidence-Based Practice

The process of evidence-based practice mirrors the steps of the scientific method (Fig. 1-2). However, rather than creating new evidence by collecting data, the evidence-based practitioner uses existing evidence to answer a question. It is a cyclical process that includes the following steps:

1. Formulate a question based on a clinical problem.
2. Identify the relevant evidence.
3. Evaluate the evidence.
4. Implement useful findings.
5. Evaluate the outcomes.

**Figure 1-2** The cycle of evidence-based practice.

## Formulate a Question Based on a Clinical Problem

The first step in evidence-based practice involves identification of a clinical problem and formulation of a question to narrow the focus. First, the problem is identified, and then a question is formulated. The formulation of a specific evidence-based question is important because it provides the parameters for the next step of searching the literature. Questions can be formulated to address several areas of practice. The most common types of questions address the following clinical concerns:

1. Efficacy of an intervention
2. Usefulness of an assessment
3. Description of a condition
4. Prediction of an outcome
5. Lived experience of a client

Each type of question will lead the practitioner to different types of research. The process of writing a research question is discussed in more detail later in this chapter.

## Identify the Relevant Evidence

After the question is formulated, the next step is to find relevant evidence to help answer it. Evidence can include information from the research literature, practice knowledge, and client experience and values. Searching the literature for evidence takes skill and practice on the part of practitioners and students. Development of this skill is the focus of Chapter 2. However, as mentioned previously, the research evidence is only one component of evidence-based practice. Therapists should always consider research evidence in light of their previous experience, as well as information gathered about the client and their situation.

## Evaluate the Evidence

Once evidence is found, evidence-based practitioners must critically appraise that evidence. The design of the study, size of the sample, outcome measures used, and many other factors all play a role in determining the strength of a particular study and the validity of its conclusions. In addition, practitioners need to evaluate the applicability of a particular study to their practice situation and client life circumstances. Much of this textbook focuses on evaluating different types of research.

## Implement Useful Findings

Clinical decision making may focus on an intervention or assessment approach, use evidence to better understand a diagnosis or an individual’s experience, and/or predict an outcome. Once the evidence has been collected, analyzed, and presented to the client, the practitioner and client use a collaborative approach and, through shared decision making, apply the gathered evidence to practice.
Chapter 14 provides more information on clinical decision making, presenting evidence to clients and families, and using an interdisciplinary approach to evidence-based practice.

## Evaluate the Outcomes

The process of evidence-based practice is recursive; that is, the process draws upon itself. When a practitioner evaluates the outcomes of implementing evidence-based practice, the evaluation process contributes to practice knowledge. The practitioner determines whether the evidence-based practice resulted in the intended outcomes. For example, did the intervention help the client achieve established goals? Did the assessment provide the therapist with the desired information? Was the prediction from the research evidence consistent with the results of clients seen by the therapist? Did the client's lived experience resonate with the research evidence? Evidence-based practitioners reflect on the experience as well as gather information directly from their clients to evaluate outcomes. Evaluating the outcomes helps the practitioner make clinical decisions in the future and ask new questions that begin the evidence-based process over again. Box 1-2 provides an example of the steps in evidence-based practice.

## Writing an Evidence-Based Question

This section helps you develop the skills of an evidence-based practitioner by teaching you to write an evidence-based question. As mentioned previously, there are different types of questions; the appropriate type depends on the information you are seeking. The five types of questions relevant to this discussion are:

1. Efficacy of an intervention
2. Usefulness of an assessment
3. Description of a condition
4. Prediction of an outcome
5. Lived experience of a client

Table 1-1 provides examples of questions for each category and identifies the research designs that correspond to each question type. Subsequent chapters describe the research designs in much greater detail.
## Questions on Efficacy of an Intervention

Questions related to the efficacy of an intervention are intended to help therapists make clinical decisions about implementing interventions. Efficacy questions are often structured using the PICO format (Table 1-2):

- P = population
- I = intervention or exposure
- C = comparison or control condition
- O = outcome

The following is an example of a PICO question: "*For individuals with schizophrenia (population), is supported employment (intervention) more effective than transitional employment (comparison) for work placement, retention, and income (outcomes)?*" The order of the wording is less important than inclusion of all four components. PICO questions are useful when you are familiar with the available approaches and have specific questions about a particular approach. However, it may be necessary to start with a more general question that explores intervention options. For example, you might ask, "*Which approaches are most effective for increasing adherence to home exercise programs?*" Searching for answers to this question may involve weeding through a substantial amount of literature; however, identifying the possible interventions is your best starting place.

- **Box 1-2 An Example of Steps in Evidence-Based Practice**

The following example illustrates all of the steps in the process of evidence-based practice. You are working with a 4-year-old boy, Sam, who has a diagnosis of autism. During therapy, Sam's parents begin discussing issues related to sleep. Sam frequently awakens in the night and then, when encouraged to go back to sleep, typically becomes very upset, sometimes throwing temper tantrums. The parents explain that, when Sam awakens, one of them typically stays in his room until he eventually falls asleep again. They are unhappy with this tactic but have not found a more effective technique.
You explain to the parents that you will help them with this concern, but first you would like to examine the evidence. First, you formulate the question: "*Which interventions are most effective for reducing sleep problems (specifically nighttime awakening) in children with autism?*"

Second, you conduct a search of the literature and identify relevant evidence in the form of a systematic review by Cuomo et al. (2017). (A systematic review provides a summary of many studies on the same topic.) You also talk with the parents about which approaches they have tried in the past.

Third, you evaluate the Cuomo et al. (2017) systematic review. Although a systematic review is considered a high level of evidence, this review finds that the studies addressing sleep issues for children with autism are limited. The review discusses several approaches, including extinction, sleep hygiene, positive reinforcement, sleep restriction, and stimulus fading. It suggests that no single approach has more evidence than the others, but there is some support for all of them. You summarize the findings in a way the parents can understand and present the evidence to them.

Sam's parents decide to implement these useful findings and try standard extinction. This technique can be challenging for parents to implement because, in the short term, it is likely to result in an increase in tantrums and agitation. However, the parents are desperate and willing to give it a try. You provide them with basic instruction, and together you develop a plan. The parents decide to start the intervention on a long weekend so that they will have time to adjust to the new routine before returning to work. After the initial weekend trial, you talk to the parents about the extinction process and evaluate the outcomes for Sam.
They report that, although it was initially very difficult, after 1 week they are already seeing a significant reduction in their son's night awakenings and an improvement in his ability to self-settle and get back to sleep.

## Research Designs for Efficacy Questions and Levels of Evidence

Evidence-based practitioners need a fundamental understanding of which research designs provide the strongest evidence. An introduction to designs used to answer efficacy questions is provided here so that you can begin to make basic distinctions; these designs are discussed in greater detail in Chapter 8.

The concept of *levels of evidence* uses a hierarchical system to evaluate the strength of the evidence for research that addresses efficacy questions. Determining whether a specific approach is effective implies a cause-and-effect relationship; that is, the intervention resulted in, or caused, a particular outcome. Certain research designs are better suited for determining cause and effect. Hence, knowing that a researcher used an appropriate type of study design means the practitioner can have more confidence in the results. There is no universally accepted hierarchy of levels of evidence; several exist in the literature. However, all hierarchies are based on principles reflecting strong internal validity. Controlled studies with random assignment provide the highest level of evidence for a single study. Table 1-3 gives examples of evidence hierarchies and their references.

- **Table 1-1 Examples of Evidence-Based Clinical Questions and Corresponding Research Designs**
- **Table 1-2 PICO Format for Efficacy Questions**
- **Table 1-3 Examples of Evidence Hierarchies and Supporting References**
- **Table 1-4 Example of Standard Levels-of-Evidence Hierarchy**

For a study to be deemed a *randomized controlled trial*, three conditions must be met:

1. The study must have at least two groups: an experimental condition and a control, or comparison, condition.
2. The participants in the study must be randomly assigned to the conditions.
3. An intervention (which serves as the manipulation) must be applied to the experimental group.

Stronger than a single study is a *systematic review*, which identifies, appraises, and analyzes (synthesizes) the results of multiple randomized controlled trials on a single topic using a rigorous set of guidelines. Other factors taken into consideration in some levels-of-evidence hierarchies include sample size, confidence intervals, and blinding (these topics are addressed in later chapters). As a result, different hierarchies include varying numbers of levels. Table 1-4 outlines an example of a standard levels-of-evidence hierarchy that can be used to evaluate studies examining the efficacy of an intervention. Because different hierarchies exist, it is important to recognize that a Level II as described in this table may differ from a Level II in another hierarchy.

In the hierarchy shown in Table 1-4, the highest level of evidence is a *systematic review of randomized controlled trials*. Because a systematic review involves analysis of an accumulation of studies, this level-of-evidence hierarchy supports the value of replication. Although this is the highest level of evidence, the mere existence of a systematic review does not mean the practice is supported. At all levels, the research may or may not yield statistically significant findings to support the conclusion that the intervention caused a positive outcome. In other words, it is possible that a systematic review may find strong evidence that the intervention of interest is *not* effective. It is also important to consider the studies included in the review: a systematic review may not provide the highest level of evidence if the studies in the review are not randomized controlled trials.
If randomized controlled trials have not been conducted in the area under study and therefore could not be included in the review, the systematic review would not meet the criteria for Level I evidence. Systematic reviews are described in more detail in Chapter 13.

Level II evidence comes from *randomized controlled trials*. The strength of the randomized controlled trial lies in its ability to indicate that the intervention, rather than another influence or factor, caused the outcome. This quality of a study is also known as *internal validity*. Factors that contribute to internal validity include the use of