Summary

These are notes on nursing research covering topics such as evidence-informed decision making, research trends, qualitative and quantitative research methods, and statistical analysis. The notes also cover critiquing research articles and developing evidence-informed practice, and include an overview of key research concepts.

Full Transcript

Class 1 Chapter 1: Evidence-Informed Decision Making:  continuous interactive process involving the explicit, conscientious and judicious consideration of the best available evidence to provide care.  problem-solving approach to the delivery of healthcare from well-designed studies and patient care data and combine it with clinical expertise and patient preferences and values. ETP Link: Education, Theory, Practice Research and Trends:  health disparity in indigenous  underservice community  vulnerable population  palliative care and MAID  increase life-threatening illness because life-sustaining technology.  More mental illness, older people, chronic illness, covid issues, maternal issues Nurse’s role  Consumer  Generator of clinical question  Investigator/participant  Protector or participant 19th century:  Formal discipline  Nightingale: systematic collection and exploration of data to support health promotion.  Start nursing school 20th century  1900-1940: prepare of nurses for practice  1950-1999: o Master program in Canada o First federally funded nursing research o Nursing research fund 21st century  2000-2012: RPN program and master International perspectives:  Global research community  Cross-cultural and cross-national studies  Network, database, website CASN research priorities  Indigenous or vulnerable community  Chronic disease management  Home/Primary healthcare  Older adults CFHI Health Challenge:  Shared concern among provincial, federal and territorial government  Improve access to addiction, mental, home, community care. What is research?  Inquiry: systematic investigation  Conduct: data collection  Reported: evaluated and replicated Class 1 Chapter 4: Research Question, Hypothesis, Clinical Question Clinical Question:  Specific, focused  In clinical setting  Address aspect of patient care  Guide decision making by seeking evidence-based answers  Diagnosis, treatment, prognosis, healthcare delivery  What happens when, what is the exp of, what are characteristics, effect of x on pt.  PICOT o Population o Intervention/focus o Comparison o Outcome o Time  Framing o Situation: patient/population o Intervention/comparison o Outcome: effect of treatment: does it make difference o Example: does use of pain diary in the palliative care of patient with cancer lead to improved pain control:  Situation: patient with cancer with palliative care  Intervention: pain diary  Outcome: decrease pain Research Question  If no answer to the clinical question after searching available evidence, the research question may be developed that would lead to study.  Guide investigation process  Provide direction and purpose  Align with aim  Justify resources invested in research and position out work within large text  Concise, interrogative statement written in the present tense and including 1+ variable and concepts  Focus on: o Describe variable o Specifying the population being studied o Examining testable relationship among variables  Characteristics o Variables under consideration are clearly defined o Population being investigated is specified o Possibility of empirical testing is implied  Variables o Attribute or property in which organisms vary (people, event, object) o X factor:  independent variable: presumed effect on dependent variable.  Can be manipulated o Y factor:  dependent variable: presumed effect that varies with change in independent variable. 
 Not manipulated, and is the outcome  Population o Well-defined o Specified or implied in research question o Initial idea of the composition of the study population  Critique Research Question o Is there relationship between variables o Does it specify nature of the study population o Does it imply possibility of empirical testing  Study purpose o Aim or goal o Suggest type of design o Level of evidence to be obtained  Purpose statement o Suggest how the researcher seek to study the question o Purpose is to…..  Hypothesis o Formal statement of expected relationships between 2+ variables in population that suggest answer o the research question. o Educated guess propose relationships o Sharpen focus of investigation o Framework establish o Collect relevant data. o Wording:  Variable, population, design, outcome  relationship statement o causal: cause and effect vs associative o simple: relationship between variables o complex: relationship between 3+ variables  type of hypothesis: o directional: state the direction o non-directional: state existence not direction o null: statistical o research: alternative hypothesis  testability o second characteristic of hypothesis o variable lend themselves to observation, measurement and analysis o hypothesis supported or not after data analysis o predicted outcome will be congruent with/not with outcome  feasibility 可行性 o time, money, expertise, access to subject, facility/equipment, ethicality Critique hypothesis:  evaluating wording for o clarity of statement o implication of testability o congruent with theory o appropriate with design Class 2 Chapter 2: Theoretical Framework Knowledge Development  Gap: nurse as question need answers from experts, no theoretical knowledge  Generation: research question devised, qua/quan method used to answer  Distribution: shared with profession formally (report) or informal (media)  Adoption: new knowledge for practice and develop policy  Review and revision: old knowledge revised, new questions prompt new. Nursing knowledge:  Personal: inner experience  Experience: repeated exposure  Ethical: moral component, what is right vs responsible  Aesthetic: art of nursing, intuitive and creative  Sociopolitical: culture, society, beyond nurse-pt relationship  Empirical: scientific, observational, quan/qua research Theoretical(理论)/empirical (经验) knowledge  Scientific knowledge  Guide EIP  Develop/test theory/idea that nurse researcher have about how world operate  Informed by empirical knowing, which involve observation of reality Paradigm/worldview: (模范)  Greek word for pattern  Asset of beliefs and practice, shared by researchers that guide the knowledge development process. 
Philosophical Term:  Ontology: study of being  Methodology: discipline-specific principles, rules  Aim of inquiry: objective of research  Epistemology: issue of truth  Context: personal, social, political environment Nursing Paradigm:  how philosophical question are asked and answered depend on nursing paradigm used to guide the research process  Post-positivist, constructivist, critical social o Influence methodology and question o Determine how findings interpreted and applied Qualitative Research:  Personal meanings and context of experience, culture, human pattern  Emphasis on capture and interpreting participants perceptions  Data are words or text  Process: o Identify research purpose and question o Select group of people who has experience o Conduct interview o Analysis data and look for recurring theme o Conduct further interview and observation until no new theme o Summarize finding o Review literature between 1-2, 4-5, 5-6 Quantitative research:  Used to explore research questions or test hypotheses that describe phenomena, test relationships, assess differences and try to explain c-e interaction in variables  Data usually numbers and stats  Precise and controlled measurement techniques  Process: o Identify research purpose o Review literature to see what is known o Framework that explain concept o Decide on suitable rigorous design o Select sample and measure interest o Analyze data and report validity of hypotheses Approach to Science:  Inductive reasoning: start with detail of experience and move to a general picture, small to big o Used by qualitative: observation, pattern, hypothesis, theory  Deductive reasoning: start with general picture and move to specific, with 2+ concept: used by quantitative: theory, hypothesis, observation, confirmation Variable  Property being studied/something that changes  Often conduct studies to understand how changes in 1 variable affect other. Concept:  Major component of theory and convey abstract ideas within  Variable=defined concept, a property that takes on different value (gender) Theory:  Set of interrelated concepts that serve the purpose of explaining/predicting  A blueprint or written depiction of both concept that compose of theory Conceptual Framework:  Structure of concepts, theories, or both that is used to construct a map for the study  Present theory that explains why the phenomenon being studied exists  Constructed from review of literature or qualitative.  Theoretical Framework  May also be structure of concept or theory that construct map for study  Based on philosophical/theorized belief of why phenomenon exists Difference between theo vs conc: origin, purpose, scope, application, flexibility Framework function  Clarify concept  Identify and state underlying assumptions  Specifies relationship  Visual symbolic representation of the concepts in a framework Example:  What is relationship between first occurrence of ambulation in patient after open heart surgery and the length of their stay  Conceptual: abstract meaning of concept  Operational: how concept will be measured in study  Conceptual: time during a person registered patient in hospital  Operational: sum of days as patient beginning with admission and end w/d.c.  Definition of ambulation o Conceptual: walk from place to place, move about o Operational: taking four steps without assistance Evaluating a Framework:  is it clearly identified, consistent with nursing perspective, appropriate with topic? 
 Is concept and variable clearly defined  Did study present literature to support concept  Is there logical, consistent link between framework, concept, and methods?  Study findings relation to framework. Conceptual: define relationship within study, specific/focused lens, images of research Theoretical: framework from existing theory to explain phenomenon, broder lens Class 2 Chapter 3: Critical Appraisal Strategies: Reading Research Critical thinking  Rational examination of ideas, inferences, assumptions, principles, arguments, conclusions, issues, beliefs, statements and actions  Involves disciplined, self-directed thinking  Includes the display of mastered intellectual skills and abilities (critique criteria) Critical Reading  Active, intellectually engaging process in which reader participates in inner dialogue with the writer  Critical reader can enter the POV of writer  Critical thinking and reading are developed by learning the research process. Critical Reading Process  Preliminary: Familiarize yourself with the content o skim article, identify concepts, clarify unfamiliar terms  Comprehensive: understand the researcher’s purpose or intent o Identify main theme, steps of the research design, clarify unfamiliar terms  Analytical: understand the parts of the study and being developing a critique o Assess the study value for your need, evaluate validity and applicability.  Synthesis: understand article and how it fits in larger body of knowledge o Understand article and research process, use own words, identify articles strengths and weaknesses. Steps to EIP  Critical reading, critical thinking, read widely, understand scientific principles, be an intelligent consumer of knowledge, develop EI interventions Level of Evidence:  1. Meta-analysis, systematic review of several randomized controlled trails  2. 1+ randomized controlled trails  3. Quasi experimental study  4. Non-experimental study  5. Evidence from descriptive or qualitative studies  6. Evidence from single descriptive or qualitative study  7. Evidence from the opinion of authorities or expert committee reports Research articles  Follows steps of research process  Formatting and space allocation are controlled by journal guidelines Format and style  Journal: need to identify purpose, space limitations, author guideline, type of study o Author: info and brief bio o Abstract: synopsis of study placed at beginning of article Component of Research Report:  Title, Abstract (summary), Intro (background), Lit review, Purpose (aim/obj)  Question/Hypothesis, Theoretical/Conceptual Framework  Methods (design, sample, procedure, instrument, ethics)  Results (data analysis), discussion, conclusion, implication, reference Communicating results:  Dissemination of results is important  Can publish article in journal  Present study at conference  Apply findings to EIP like: care plans, protocol guideline, practice standards. Systematic review:  Analysis of available lit (evidence) and a judgment of the effectiveness or otherwise of a practice, involving a set of complex steps  Meta-analysis, integrative review, scoping reviews, meta-syntheses. Clinical guidelines  Systematically developed statement served as a guide to bridge gap research and practice  Assessed prior to implementation for appropriateness To synthesize research article, what question need to answer:  What is purpose? What methods used, were they appropriate? What are main findings? How are they significant? 
What are implication of findings? Limitation? Class 2 Chapter 5: Finding and Appraising the Literature Literature review:  Systematic summary and critical evaluation of scholarly literature on a topic  Clearly, concisely, adequately represents positive and negative findings  Include adequate number of resources and is a synthesis of literature.  Between theory, practice, research, education  Purpose: o Discover what is known, follow clinical question, knowledge building, reading to learn. o Known vs unknown, gaps, discover unanswered question or frameworks o Generate research question/hypotheses o Help to narrow design and method/determine need for replication.  Purpose as a consumer o Known vs unknown, gaps, traditions, strength and weaknesses o Develop EIP uncover a new practice that can be used, tested or revised. PICOT to guide literature review:  Does regular exercise prevent osteoporosis for post menopause women who had osteopenia?  P: Post menopausal women with osteopenia  I: regular exercise  C: no regular exercise  O: prevention of osteoporosis  T: after one year Overview of Literature Review (framework, primary/secondary source, research question and hypothesis, design and method, outcome of analysis)  Theoretical or conceptual framework o a structure of concepts or theories that provides the basis for development of research questions or hypotheses. o A concept is an image or a symbolic representation of an abstract idea o A theory is a set of interrelated concepts, definitions and propositions that convey a systematic view of phenomena.  Quantitative Relationship: o Review literature->problem/need, question/hypothesis, design, findings, implications, recommendations, theoretical frameworks  Qualitative Research Purpose: o Phenomenological: compare study findings with lit to enhance knowledge o Grounded study: constant comparison of study findings with lit o Ethnographic: lit concepts provide a framework for the study o Historical: lit is data source  Types of literature o Data based: research lit, studies found in journals (aka empirical, scientific) o Conceptual: reports of theories or reviews. 
 Systematic Reviews: o Special kind of lit review uses rigorous methods to identify, critically appraise and synthesize primary studies o Provide summary on what is known on the topic o Help with identifying in the knowledge o Considered a research study  Sources: o Primary: data-based, theory, research (published research study) o Secondary: summary of material, critique, analysis of theory, topic, practice  (article about an analysis of a clinical practice)  Journals: o Peer-reviewed o Blind reviewed by external reviewers o Judged using a set of criteria  Class 3 Chapter 8: Introduction to Qualitative Research Qualitative Research:  Human experience  Conducted in natural settings, and uses data that are words or text, rather than numerical, in order to describe the experience that being studied  Naturalistic settings, people live  Researchers believe reality is socially constructed and context dependent, the meaning of observation is defined by circumstance or environment  Steps in Process: o review literature, study design, sample, setting (recruitment and data collection), data collection, data analysis, findings, conclusions  Designs: o grounded theory, case studies, historical, ethnography, phenomenology, participatory action, narrative, action research, historical  Lit review o Quick review of literature o Some conduct limited reviews because they want to be open and bot biased about phenomena in study.  Sample/Setting o Purposive sampling: a group consisting of particular people who can shed light on the phenomena under study. o Researchers will set inclusion and exclusion criteria. o No set sample size: continue to gather data until there is no new information emerging, referred so as data saturation o Occurs in naturalistic setting  Data collection o Interview: often recorded o Focus group o Consent o Interview guide: broad question  Data Analysis o Description of how raw data was handled o Use of computer software o Process of distilling the data into broader, more abstract, overarching categories of meaning.  Findings: o Through description of the findings o Quotations to help the reader become immersed in the findings (thick descriptions) o Conclusion: summary of the findings and compare to exist lit o Discuss transferability to other settings o Describe limitations and further areas  Ethical issue o Naturalistic setting: may skip consent o Emergent nature of the design: may alter method OT, affecting how informed participants really are. o Researcher-Participant interaction: relations influence focus of study o Only human  Triangulation o The expansion of research strategies in a single study or multiple studies to (use multiple sources, methods to investigate phenomenon)  Enhance diversity, enrich understanding, accomplish certain goals o Data triangulation: variety of data sources (at different times, from different settings and groups) o Investigator triangulation: use different researchers with divergent backgrounds o Theory triangulation: use of multiple perspectives during data interpretation o Methodological triangulation: multimethod used to study single topic. 
o Interdisciplinary triangulation: use 1+ discipline to study topic  Mixed methods: o Combines qualitative and quantitative approaches o Inductive: primarily qualitative: observation, generalization, theory o Deductive: primarily quantitative: theory, production, experiment  Meta-synthesis o Systematic review of qualitative research o Uses comparative analysis and interpretative synthesis of findings o Seeks to retain essence and unique contribution of study o Critical mass of qualitative research evidence that is relevant to practice Class 3 Chapter 9: Qualitative Approach to research Constructivist lens:  Applied in qualitative research  Attribute meaning to experience, which evolves from life context  Life context: matrix of human-health-environment relationships emerging over course of living. Qualitative nursing science  Study human experience of health, central concern  Understand complexity of human health experiences: guide practice  Help create instrument, develop theory Qualitative research methods  Guide nursing practice by using personal stories to help understand everyday health experiences.  Contribute to instrument development by using voice of participants evaluate exist instrument or create new ones  Develop nursing theory by enable systematic structure of ideas that emerge from people who are experts through life experience.  Identify phenomenon  Structuring study  Gathering data  Analyzing data  Describing findings  Phenomenological method (understand human exp of phenomenon) o Learning and constructing the meaning of human exp through intensive dialogue with persons who are living the exp o Goal: understand meaning of exp as it is lived o Research question: focused on exp o Bracketing: research identifies personal biases about the phenomenon and sets them aside (brackets them) when engaged with participants. o Sample selection: purposive, people living exp o Data: interview transcripts, notes memos o Data analysis: thematic analysis reveal essence of phenomenon through exp of participants. o Results: thematic narrative  Grounded Theory Method: o Inductive approach involves systematic set of procedures to arrive at theory about basic social processes o Used to construct theory where no theory exists, or existing theory fails to explain a set of circumstances o Goal: discover underlying social forces that shape human behaviour. o Research Question: focus on basic social process o Researcher: develop theory in data, reflect context value, not his/her value o Sample selection: purposive, people exp basic social process o Data: interview transcript, observation note, memo o Data analysis: constant comparative analysis done during collection and analysis in cyclical pattern, along with theoretical sampling o Result: theory grounded in data.  Ethnographic method o Focus on scientific description of interpretation of social group/system o Goal: understand participant POV of their world=emic view  Contrasted with the outsider’s view=etic view o Research question: focused on cultural patterns or lifeways o Bracketing: researcher identify personal biases and brackets them when engaged with study participants o Sample selection: purposive: people living in culture being studied o Data: informal interview, notes, artifact, video o Analysis: collect and analysis o Result: cultural narrative  Case study method o Contemporary phenomenon OT to provide an in-depth description of essential dimensions and processes of the phenomenon. 
o Research question: evolve OT and re-create itself with progress o Researcher: perspective reflected in questions o Sample: purposive: common cases or unusual cases o Data: interview, observation, document review o Analysis: done simultaneously in cycles = iterative process o Result: thematic case-specific narrative  Historical research method o Understanding the past through collection, organization, appraisal facts o Goal: shed light on past so it guide present and future. o Research question: in description of the phenomenon studied o Researcher: interpretation free of bias o Sample: primary and secondary data source o Data: record, book, document, artifact o Analysis: authenticity, pattern o Result: historical narrative  Participatory action research o Combine exploration, reflection, action on social and health problem o Systematic access voice of community to plan action o Goal: change within community with participant involved in steps o Community-based: accessed (look, think, act) o Focused on who is affected or has effect on problem o Researcher: consultant, not expert o Sample: purposive: people in community with perspectives, exp or BG. o Data: interview, group session, observation, document, material o Analysis: all data->set of ideas, themes to plan action o Result: report, narrative, presentations  Critiquing criteria o Identify phenomena o Research question o Researcher perspective o Sample select o Data gather o Data analysis o Describe finding Class 4 Chapter 16: Evaluating Qualitative Research Qualitative Data Analysis (no one set of criteria to evaluate qualitative research)  Look for insight, meaning, understanding, patterns of knowledge, intent, action  Each study is unique and reliant on creativity, intellect, style, exp of researcher Component of Data analysis:  Data-collection period, data reduction, data display, conclusion  Data reduction o Ongoing process as data are collected o Process of select, focus, simplify, abstract, transform data o Organize data into clusters (themes) o Thematic analysis: recognizing and recovering the emergent theme o Memo used to help organize data, note to self o Data coded: given tag or label from theme (lists by codebook) Phases of Data Analysis  Phase 1: Data generation (Coding) o 1. Organize and prepare the data (participant A or B)  Transcribe and label the data for easier handling  Organize data chronologically or by participant o 2. Familiarize self with data  read transcript to identify recurring ideas/tones  highlighting keywords that suggest theme o 3. Code the data  break data into smaller pieces and assign code names to describe key concept  Nurse A “quote”: code “team support” o 4. Identify categories and themes  Group related codes into preliminary categories or themes  “Team support” and “mindfulness” group to “coping strategies” o 5. Review and refine themes  Review data to ensure theme reflect participant experiences  Can create subthemes like (internal, external)  Phase 2: Data analysis (interpretation) o 6. Interpret the data  Develop insight and connect findings to the research question  interpretation: connecting dots from the data; insight: solution o 7. Validate findings  Confirm credibility using participant feedback or triangulation  Share the findings with the participants and adjust if necessary o 8. Present the findings  Organize result into narrative with quotes  Nurse A: x strategy: quote o 9. 
Reflect on the analysis process  Assess rigor, biases, limitations  Final Thematic Summary (theme, subtheme, groups) Data display  Organized, compressed assembly of information that permits conclusion drawing and action  Visual display: graph, flow chart, matrix, model Conclusion Drawing and verification  Challenge for the researcher is to stay open to new idea, theme, concept.  Conclusion drawing: description of relationship between themes  Verification: occur as data collection Phenomenological analysis:  Immersion in data: read and reread  Extract significant statement  Determine relationship and describe themes  Synthesize theme into consistent description Ethnographic analysis:  Immerse in data, identify themes, cultural inventory, interpret, compare findings. Grounded Theory analysis:  Divide data to discrete parts  Compare data for similarity and differences and with other data collected (constant comparative method)  Cluster into categories and develop them  Determine relationships among categories Case study analysis:  Identify unit of analysis  Code continuously with data collection  Find themes, analyze field notes  Review and identify pattern and connection Data management  Computer assisted qualitative data analysis software (CAQDAS) o Code and retrieve (data collector), theory build (QSR), o Conceptual network builder (inspiration) Criteria for Judging Scientific Rigor  Credibility/Truth Value o Truth finding as judged by participants and others within discipline o Researcher must ensure participants identified/described accurately. o Seek out experiences “that don’t fit” until all cases fit (don’t ignore)  Check with participants on accuracy (keep outliers tho) o Example: understanding life-threatening illnesses (told by doctor)  Auditability/dependability/consistency o Findings are same if using same sample for phenomenon o Judged by adequacy of info, and able to lead reader through process o Multiple methods of data collection, audit trail, peer review  Fittingness/transferability/applicability o Degree that study findings are applicable outside study situation o How meaningful the result to people not in the study o Thick description of data presented o Context of study can be determined o Transferability to other context may be possible if similarity  Confirmability/Neutrality o Data linked to sources for reader to think interpretation are from them o Finding reflects previous 3 o Reflective journal, search for outliers  Trustworthiness o is the data analysis fair? Is there anything you notice? Transferable? o Achieve trustworthiness  Clarify researcher bias, triangulation, long engagement, more observation, peer review/debrief, member check, thick description, audit trail, negative case analysis Types of Triangulation  Methodological: multiple methods (interview, observation)  Data source: collect data from different source or context (group)  Investigator: multiple researchers collect, analyze, interpret  Theory: multiple theoretical perspectives to interpret  Environmental: multiple settings to understand influence of different settings Peer review/debrief:  Review independently by member of team, then monthly at team meeting for discussion with previous data  One member assign as primary analyst for each interview and lead discussion  Conducted independently. Audit trail:  Organization of data (digital, audio, notes, memos, coding list)  Compile audit trail of decisions made by research team Critique:  Method clear? 
Good for this study? Able to follow? Accurate? Credible?  Class 5 Chapter 10: Intro to Quantitative Research Purpose of design  Provide plan: solve problem, answer question, test hypothesis  Design involves: plan, structure, strategy  With different level of control (independent variable affects dependent?)  Plan control (consideration) o Objectivity, accuracy, feasibility, control, homogeneous sample, constancy, manipulation, intervention, randomization Quantitative process (hypothesis, theoretical framework, lit review, problem state, design) Element of design:  Participant (who)  Observation (what)  Measure of time (when)  Select of participant (where)  Role of investigator Internal validity (asks if it is IV that caused the change in the DV) (within study)  A study accurately establish casual relationship between IV and DV.  Whether observed effects in a study are genuine due to manipulated IV and not due to confounded by other factors or extraneous variables  Threats to internal validity o Ambiguous temporal precedence, selection, history, maturation, regression to the mean, testing, instrumentation, attrition, additive effect. External Validity (conditions under findings can be generalized, ability to generalize the findings outside the study) (can it be applied to real world)  Threat to: o Interactions of causal relationships with unit (causal with one not other) o Interactions of causal relationships with setting o Context-dependent relationships o Interactions of causal relationships with outcome/treatment Critiquing criteria:  Appropriate design, control measure match design, feasible, flows well, control of threat to Internal validity and external, Class 5 Chapter 11: Experimental and Quasi experimental design Purpose of research design  Provide plan for testing hypothesis about IV and DV (experimental, quasi, non)  Exp and quasi differ from non, since researcher actively seek to bring about the desired effect and does not passively observe behaviour and action. Feasibility of conducting study  Use study that: appropriate to question, max control, hold constant condition, specific sample criteria, max level of evidence, ethical. 
Maximizing control:  Rule out extraneous variable (not studied, can affect) o Homogenous sampling o Constancy in data collection o Manipulating IV o Randomization  Control: increase probability that study results are accurate o Good design: exercise control of condition that threaten validity Experimental designs provide level 2 evidence:  Evaluate outcome in term of efficacy and cost effectiveness  True exp design, Solomon four-group design, after-only design  Causality: true experiment (strongest), descriptive design (weakest) o Criteria:  Temporal sequence (exposure precede outcome)  Experimental evidence (is there randomized controlled trial)  Feature: o randomized of participants to control or treatment group o control: IV->DV o manipulation of IV Hierarchy of research study design (hypothesis to causality)  Case report, ecologic study, case-control study, randomized control trial Notation: O (observation/DV), X (intervention/manipulation/IV), R (randomization) Solomon 4 group allow researcher to assess impact of pretest on outcome of study Test designs:  R X O: only control  R Xa1b2 O: experimental (manipulate 2+ variable)  R O X O: o Quasi-experiment when true experiment not possible o 1 group exposed to intervention, and observed at 2 different times  R O X O; R O O: pretest-post test control group  R O X O; R O O; R X O; R O: Solomon 4 group (effect of pretest on how participant react to IV)  Pretest is when DV is measured before IV is manipulated  Posttest is after  https://quizlet.com/876816717/research-design-notation-rxo-flash-cards/  Experimental Design:  Advantage: test c-e relationship, highest level of evidence for studies  Disadvantage: participant mortality, Hawthorne effect, difficult logistics in field settings, not all research questions are amenable to experiment manipulation Quasi experimental:  Evaluate the effect when random assignment of participants to group is not possible  Non-equivalent control group, one-group design, after-only non-equivalent control group design, time series design.   Advantages and disadvantages: o Practical and more feasible, especially in clinical setting o Some generalizability o Hard to make c-e statement, and may not be randomized. Critique Criteria:  What design? Is it quasi? Is there c-e? is the method appropriate? Is it suit for setting  Research evaluation: o Specific problem being evaluated? o Are outcomes identified? o Is the problem analyzed? Is the solution described? o Are measurement of change identified? Are outcome relate to other activities  Experimental o What exp design? Appropriate? Randomization, control manipulation? o Report threat to validity? o Are there reasons to believe that alternative explanations exist for findings?  Quasi: o Design? Common threat to validity? Alternative explanations? o threat to validity? Limitations? Class 5 Chapter 12: Non-experimental Design Non-experimental Design (is used in studies to)  construct a picture of a phenomenon at point or over a period of time  people, places, events, situations as they naturally occur  test relationship and difference among variables  survey or relationship studies. 
Survey studies: descriptive, exploratory, comparative Relationship or difference studies:  correlational, developmental, x-sectional, longitudinal/prospective, retrospective/ex post facto   Observation: assess knowledge  Descriptive/correlation: examine relationships between x and y  X-sectional: among one group of student  Explanatory, correlational: association between  Retrospective: existing data, relationship between exposure and outcome  Prospective: looking forward, predict.  Case-control: condition to without (case to control)  Ex post facto: causal-comparative, exam condition to determine cause Non-experimental design advantage and disadvantage  Hard to explain c-e  Important to develop knowledge base on phenomenon  Useful in forecasting or making prediction  Important designs when randomization, control, manipulation are not possible  Useful in testing theoretical models of how variable work in a group in setting. Methodological research:  Develop and evaluate data collection instruments, scales, techniques  Psychometric: theory and development of measurement instruments (survey or questionnaire) and measurement techniques. Meta-analysis (quantitative studies):  Strict scientific process  Synthesize findings from separate studies in a specific area.  Statistically summarize findings to obtain precise measure of the effect. Secondary analysis:  Reanalysis the data from a (non/exp) study for different purpose Systemic review:  Study with similar designs  Quantitative  Identify most reliable and current evidence for practice  Focus on specific question, inclusion/exclusion criteria, search strategies Epidemiological study:  Examine factor affect health and illness of population relative to environment  Investigate distribution, determinant, dynamics of health and disease  Prevalence or incidence focused Critique criteria:  Why this design? Does it congruent with study purpose? Is it appropriate? Is it suited for data collection?  Is finding congruent with study design?  Researcher theorize beyond relational parameters of findings and infer c-e relationship between variables?  Alternative explanations? How was threat discussed? Limitations? Class 5 Design and analysis for Non-experimental Concepts:  Causality o Process of deriving c-e conclusion by reasoning from knowledge. o Three conditions needed  Temporal precedence: cause before effect  X and y need to be related  Relationship must not be because confounding variable or others  Multi-causality o Recognizing interrelating variable can be involved in causing an effect o One theory or study is unlikely to identify every variable involved in causing a particular phenomenon.  Probability o Extent to which something is likely to happen  Bias (systematic error) o Error: may get result that are not representative or population o How to minimize bias and know how they are introduced  Manipulation: o Move around or to control movement of something o Manipulation of the IV under study  Population (random)->group 1 (new)/group 2 (control)->outcome  Control o Direct or manipulate factors to achieve a desired outcome o Control where, when, what form intervention is delivered, who receives it, when and how effect is measured o Rule out alternatives for the result  Additional concept: o Exposure: is there an exposure?  Condition of being subjected to something (disease, meds)  Hypothesis: Vitamin C prevent cold o Unit of observation: what level of data are available? 
 Cigarette smoking is related to lung cancer (personal? Ecological?) o Time: data collected over time?  does exercise prevent diabetes? (Longitudinal? X-section?) o Sampling: select of subject based on exposure (cohort) or disease (case control)  Cigarette smoking causes lung cancer (cohort? Case-control?) Types of research designs: Non-experimental research Non-experimental research steps:  Determine problem and hypothesis to be tests  Select variable and measures to be used  Identify data source or collect data  Select appropriate stat test and distribution of data  Analyze the data  Interpret result IV in Non-experimental:  NE study lack manipulation of IV by researcher  Researcher studies what naturally occurs or has already occurred  Categorical IV: gender, ethnicity, geographic location, personality, drug use  Continuous IV: age, weight, BP, depression level, self-esteem Descriptive studies: provide description of status of characteristics of situation  Examine or describe an issue  Especially effective when without evidence or issue was not studied before  To estimate (univariate), nurse fatigue (what), in sample group of nurses (who), in US (where), in 1991 (when) (univariate: 1 varible) Comparative/Difference Studies:  Demonstrate whether there are differences between groups on DV  To compare (bivariate: ANOVA, chi-square), nurse fatigue (what: DV), by the nurses’ geographic location (rural vs urban): one IV (relationship of 2 variables) Relationship/inferential study: Multivariable (linear regression)  Degree of differences in one variable are related to difference in another variable  Predict occurrence of phenomena on basis of 1+ characteristic which can inform further investigation of variable of interest.  To analyze relative importance of (association), nurses’ communication/mental health/demographic (2+ IV or control variables), nurse fatigue (what, DV) Predictive: predict status of 1+ DV Explanatory: explain how phenomenon operates or relates to other variables Class 6 Chapter 13: Sampling Population: well-defined set that has certain properties: people, animals, events  Population descriptor: inclusion/exclusion criteria->sample selection o Ethnicity, education, health status, diagnosis, comorbidities o Example: inclusion: women who can English, has damage to their X o Example: exclusion: maternal allergy to lanolin, infant with abnormality  Target vs accessible population: o Broader group (generalize) vs subset reachable for study o Larger vs smaller size o Focus: theoretical and ideal vs practical and realistic o Purpose: define scope of generalization vs defines sampling and recruit Sample: subset of population (goal: to represent a population of interest)  Sampling: process of selecting a portion of the designated population to represent the entire population. 
 A sample is a set of elements that makes up the population  An element is the most basic unit about which information is collected (people, places, objects)  Theoretical population: who you want to generalize to  Study population: what population we can access  Sampling frame: how can you get access to them  Sample: who is in the study  Representativeness: key characteristics closely approximate those of population Types of sampling strategies  Non-probability: non-random  Probability: random of sample, more likely to be representative of population  Simple random sampling: equal probability sample o Every individual in population has equal chance of being selected  List of entire sample, assign number, random selection o Ensure unbiased representation o Advantage:  Applicable when population small, homogenous, available  Estimates easy to calculate o Disadvantage  Need sample list, not practical for large sample frame  Minority subgroups may not be present in big number  Systematic sampling o Participants are selected at regular intervals from ordered population list o Random choose number (4), start at 4 and take every 5th unit o Advantage: Can use when there is no list/sample frame: do as subject come in Sample evenly spread over entire population  Better than simple random (because less time consuming) o Disadvantages  Pattern can cause bias, periodicity, order in the list  Start must be random  Stratified sampling o Population divided into subgroups (strata) based on a characteristic, and a random sample are taken from each stratum proportionally. o To ensure representativeness of sample and strata  Maximize homogeneity of unit on variable of interest (more precise)  Allow sufficient case in subgroup to make inference o Advantage:  Every unit has the same chance of selection, more representative o Disadvantage Need sample list plus extra info for stratify, need separate frame Stratifying variable can relate to some, not other variables, reducing utility of the strata  Need larger sample than other methods  Cluster Sampling: o The population is divided into clusters (group) clusters are randomly selected o Advantage: Good for widely geographically dispersed sample impractical to list whole sample or to sample all 50 states or when permission needed (school) o Disadvantages  Analysis must account for clustering design effect  Goal: diversity within cluster, across cluster composition similar, but hard to achieve  Can have high error rate, because within cluster homogeneity  1-stage: all cases within randomly selected clusters included as sample.  2-stage: subset of elements within selected clusters are randomly selected for inclusion in the sample. Randomly selected schools, then classes within schools, those classes are samples.  Allocation options: o Proportional: select proportional to size of subgroup in population o Equal allocation: same number for each stratum o Oversampling by choice: Asians 1/5 vs others 1/10  Strata: subset of population defined by key characteristics (age), strive for homogeneity (on key variable) (being the same)  Cluster: subset of population defined by space or social, strive for clusters to be heterogenous (variations) Types of Non-random sampling:  Convenience: participants selected based on availability and willingness to participate, without randomization. 
 Purposive: participants chosen based on specific characteristics or purpose relevant to the study o Validation of a scale or test with a known-groups technique o Collection of exploratory data in relation to an unusual or highly specific population, particularly when total target population is still unknown. o Collection of descriptive data (qualitative) that seek to describe lived experience of a phenomenon (depression, sexual abuse, experiences) o Study population relates to a specific diagnosis (TDDM, MS) or condition (blindness, terminal illness) or demographic characteristics (twins)  Snowball: existing participants recruit future participants, commonly used in hard-to reach populations  Quota: selected to meet predefined quotas for specific subgroups in population. Sample representativeness: Sample procedure, sample size, response size Factors influencing sample size:  Type of design used, sampling procedure used, formula used to estimate sample size, degree of precision, heterogeneity of attributes, relative frequency of occurrence of phenomenon of interest, projected cost Sampling procedure:  Identify target population, delineate the accessible population, develop sampling plan (entire population, proportional stratified, population, sample randomized Sampling bias  Bias: systematic error that can prejudice or distort findings  Sampling bias: error that arises due to sample selection o Survey for high school students, measure use of illegal drugs  Miss home schooled kids or dropouts  Certain under/over-represented relative o Can happen any time  Data you collect may not be accurate or represent the missed group.  Probability sampling limits but does not eliminate it. o Compare characteristics of respondents in sample to what’s known in population o Demographic characteristics that might be related to variables o Response bias: who did not participate and how do they compare to those who did. Getting best response rate possible Critiquing criteria:  Sample selection? Appropriate? Any bias? Appropriate sample size? Is there rights of participants? How is it replicated? Use entire population when its small. Increase subjects by 30% to ensure different group detect, Class 6 Chapter 14: Data Collection Methods Importance: success of a study depends on the quality of data collect method. Research study definition:  Conceptual definition: derived from lit, accepted definition of a concept  Operational definition: translates the conceptual definition into behaviour or verbalizations that can be measured. Goodness of Fit (Between)  Purpose, design, research question/hypothesis, conceptual and operational definition, data collection method. Data consistency:  Measuring the data in the same manner for each participant  Data collection protocol needed to ensure intervention fidelity  Co-investigators and assistants need to be trained to follow same protocol  Ensure interrater reliability. Data Collection must be:  Objective: data must not be influenced by person who collect information  Systematic: data must be collected in the same method by each person involved in the collection procedure. Quantitative data collection method:  Physiological or biological measurement (indicator of health) o BP, HR, HgbA1C, T3, T4, breast milk, bglu  Observational o Consistent with study objective and theoretical framework o Standardized and systematic plan for observation and recording o Training of data collector o All observations are checked and controlled. 
 Scientific Observation Methods (concealment means hiding) o Concealment with/without intervention o No concealment with or without intervention  Interview and questionnaire (likert-type, closed ended) o Open-ended questions o Close-ended questions (likert-type, T/F, semantic differential) o Likert type scale:  How satisfied are you with X (1-very satisfied, 5-not satisfied)  A 1 to 5 scale or not a all to very much  Constructing new instruments o Define construct to be measured o Formulate item or question o Assess item for content validity o Write instruction, test new, assess validity  Records or available data (hospital records, historical documents, audio, video)  Qualitative data collection methods:  Observation, interview and questionnaire, focus group, photovoice, story telling Critique:  Framework identified? Data collection clear? Instrument? Appropriate method? Is it all the same for all participants? Replicable? Precaution? (validity measures whether the instrument is doing its job) Class 7 Chapter 15: Rigor in research Rigor in research 可审计  Ideally, research results are transferable and generalizable (need rigor)  The quality, believability and trustworthiness of the study findings.  Qualitative rigor: shown by credibility, auditability, fittingness  Quantitative rigor: psychometric measures used to ensure reliability and validity Measurement: Measures used to test validity and reliability Reliability: the extent to which the instrument yields the same results on repeated measures.  Proportion of the variance in the measurement scores due to differences in the true scores rather than random error.  Stability: consistent results over time  Consistency: all items measure the same concept  Accuracy: measures what it is intended to measure  Equivalence: different versions or form of the instrument or between multiple raters Reliability coefficients:  Expresses the relationship between: error variance, true variance, observed score  RC can range from 0 to 1: 0 no relationship, 1 perfect relationship, higher=stronger Test-retest coefficient:  Pearson: continuous scales (ratio, likert scale)  Spearman: ordinal or non-normally distributed scale  Phi: dichotomous (categorical) Types of reliability: equivalence, stability, internal consistency  Stability: instrument considered stable when repeated instrument administration yields the same results. o To estimate stability: test-retest reliability, parallel or alternate form  Homogeneity/internal consistency assess by: o Item-to-total correlation, split-half reliability, cronbach’s alpha o Kuder-richardson (KR-20) coefficient.  Internal consistency: Coefficient alpha o Each item is correlated with every other item. o Take average of individual item-to-item correlation and adjusting for the number of items o Represent the extent to which performance on any one item is a good indicator of performance on any other item in the same instrument  More interrelated the items are, the greater reliability.  Equivalence: o Consistency among observers using the same measurement tool, or agreement among alternative forms of a tool. o Can be tested by parallel form, interrater reliability. 
Ex of Association and Agreement:  scores by observer 1 are exactly 2 points above observer 2 o correlation would be perfect 1.0 o agreement is poor because no agreement (interaction)  reliability depends on true score variance o reliability is a group-level statistic o reliability = 1- (error variance) o reliability of 0.7 being 30% of variance in observed score is due to error Importance of reliability  necessary for validity (not sufficient) o low reliability attenuate correlation with other variables o may concluded 2 variables are not related when they are  greater reliability = greater power o more reliable your scales, the smaller sample size you need to detect  reliability often poorer in lower SES/low literacy groups because more random error due to o reading problem, hard to understand problem, not familiar with survey Validity:  whether a measurement instrument accurately measures what it supposes to do  precision, accuracy, consistency. Contrast reliability and validity  test’s reliability is the degree to which differences in test scores reflect real differences among people in their levels of the trait affecting test scores  don’t have to be aware of interpretation of score  validity linked to interpretation of score and what it supposes to measure  validity need reliability. External and internal validity  external: extent to which results are generalizable or transferable  internal: rigor that study was conducted (design, measurement, what is measured) o how much researcher thought about alternative explanation for causal rxn.  What is measurement validity: o Accuracy of a measure o Valid when it measure what it supposed to do  Internal validity o Content validity: evidence based on test content: Content of the measure represent the universe of content or the  domain of a given behaviour  Instrument appears valid to expert  Expert who knows content area well:  Rate extent to which item reflects the theoretical definition  Rate extent to which entire set represent the construct  Evaluate wording of items and response format for cultural and gender bias, clarity of directions and formatting.  Health-related QoL (sleep problem), with participants who have missing sleep problems. o Construct validity: convergent, discriminant, known group Test measures theoretical construct and attempt to validate theory underlying measurement and test hypotheses relationship. 
 Hypothesis test, converge/diverge, contrast group, factor analysis o Criterion-related validity: predictive and concurrent  how much performance on measurement tool and actual behaviour are related  concurrent validity: instrument’s ability to distinguish individuals who differ on a present criterion (SAT score to HS GPA) predictive validity: instrument's ability to distinguish people whose performance differ on a future criterion (SAT score to Uni GPA) o Factor analysis/item response theory: evidence on internal structure Observed score (X0) = true variance (Xt) (hypo) (variable in the population such as intelligence) + error variance (Xe) (random/measurement error) Weak to good validity: face, content, criterion (predictive), construct (int: discriminant and convergent /ext or nomological) Class 6 Chapter 17: Quantitative data analysis Descriptive statistics:  Description and/or summary of sample data  Let research to arrange data visually to show meaning and understanding in the sample characteristics and variable  Maybe the only result sought  Check data, missing, outliers, normality (table 1 describing population)  Explaining high-level summaries of set of information  Include: o Central tendency: mean, median, mode o Variability, dispersion: stddev, var, min., max., kurtosis, skewness o Frequency distribution: count, percentage, cumulative % o Pie chart, bar chart, scatter plot, histogram, box  Result: o Text for less number, table for more number, figures for complex o Table is self-contained, white space row and column o Informative table heading o Graph: present, summarize data, understand result o Box plot for x-sectional, spaghetti plot for longitudinal  Levels of measurement o Categorization of precision with which an event can be measured o Nominal  Classify objects or events into categories  May be dichotomous or categorical (dichotomous=different)  Gender, marital status, religious affiliation o Ordinal: Show relative ranking of objects, number assigned to each categories can be compared and member of higher categories is said to have more certain attribute than one in a lower category  Intervals not necessarily equal  Examples: class ranking, likert scale o Interval:  Show ranking on a scale with equal interval between numbers  Zero point is arbitrary  Example: temperature scale, depression inventory o Ratio  Ranking of event on scale with equal interval and absolute zeros  Highest level of measurement (usually in science)  Example: weight, BP, height Frequency distribution  Basic way to organize data (Kurtosis: peakedness of data)  Summarize occurrence of event, tallies frequency  Cohort group can be created to investigate frequency of data Central tendency  Summarize middle of group, an average  Each measure has specific use and appropriate to select type of distribution Normal distribution  Theoretical concept that observe interval or ratio data group about a midpoint in a distribution closely approximates the normal curve (68, 95, 97%)  Tested for outcome o Central limit theorem: N>30: distribution is normal o P-P plot, histogram, value of skew/kurtosis o Kolmogorov-Smirnov: test is data differ from normal Non-normality  Leave it and do non-parametric test  Leave it and do parametric test if big sample size  Do robust stats (SEM, MLR)  Transform the outcome, Log, Square Root  Positive skew: lower range mean  Negative skew: higher range mean  Variability: level of dispersion, spread of data, homogeneity/heterogeneity Range:  Most 
Percentile:
 The percentage of cases a given score exceeds (median = 50th percentile; a score at the 90th percentile is exceeded by only 10% of cases)
Standard deviation:
 Most frequently used measure of variability
 The average deviation from the mean; always reported together with the mean
Inferential stats:
 Combine mathematics and logic; test hypotheses using data from probability or non-probability samples
Descriptive vs. inferential
 Descriptive: summarization and organization of the data and the sample
 Inferential: statements about a population; draw broader conclusions
o Predict: characteristics based on the sample, anticipate future trends
o Test hypotheses: whether results occurred by chance or reflect true characteristics
 How much of the result is due to chance? How strong is the correlation?
 Scientific/experimental/alternative hypothesis (H1): what the researcher believes the outcome will be, i.e., how the variables are related
 Null hypothesis (H0): the hypothesis tested by statistical methods; states that no difference exists between the groups under study
o Generalization depends on the population, the data collected, and the statistics used
Test statistic:
 A statistic for which the frequency of particular values is known
 Observed values can be used to test hypotheses
 Test = signal/noise = variance explained/variance not explained = effect/error
Probability
 Support for H1 is obtained by rejecting H0 through probability theory
 An event's long-run relative frequency in repeated trials under similar conditions
 Type I error: false positive; rejecting H0 when it is true (saying yes when the answer is no)
 Type II error: accepting H0 when it is false (failing to see a connection that is there)
Level of significance (alpha level)
 The probability of making a Type I error; conventionally 0.05
 The researcher accepts that if the study were done 100 times, rejecting H0 would be wrong 5 times out of 100
 Can be set at 0.01 for a smaller risk of wrongly rejecting H0 (1 in 100)
 The alpha level selected depends on how important it is not to make an error
Practical vs. statistical significance
 Statistical significance: the finding is unlikely to have occurred by chance
 The magnitude of the finding is also important to the interpretation of the data analysis
Steps in analysis:
 Information → data, codebook, inspect data, transform, univariable statistics, bivariable statistics, multivariable inferential statistics, post-hoc interpretive statistics
Variable roles:
 Dependent: the outcome of interest
 Independent: the variable being manipulated, which can affect the outcome
 Confounder: a variable affecting both the IV and the DV
 Mediator: a variable explaining the relationship between the IV and the DV
 Moderator: a variable affecting the strength/direction of the relationship between the IV and the DV
 Bivariable: x → y; multivariable: many x → 1 y; multivariate: x → many y
Research variables determine the analytic approach:
 Level/scale of the variable
o Categorical: nominal (unordered), ordinal (ordered), counts (whole numbers)
o Continuous: interval (e.g., temperature) and ratio (e.g., height, with an absolute zero point); take many values
 Role: dependent, independent, confounder, mediator, moderator
 Nature: whether the variable meets the required assumptions
Measurement hierarchy: interval > ordinal > nominal
 If an ordinal variable has more than 5 levels, it can often be treated as continuous
 Count variables take whole numbers (Poisson, binomial) and can violate the assumptions of parametric tests
Tests of differences (see the sketch below):
 Chi-square: for nominal data; determines whether the frequencies in each group differ from what would be expected by chance (non-parametric)
 t-test: tests whether the means of two groups are different
 ANOVA: tests variation between and within multiple groups
 ANCOVA: tests differences among group means on an important variable while controlling for another variable
 MANOVA: tests differences in group means, controlling for another variable, when there is more than one dependent variable
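A hedged sketch of the tests of differences above (Python with scipy; all group values and frequencies are made up for illustration):

```python
# Illustrative only: fabricated group data, not from any study.
import numpy as np
from scipy import stats

control   = [4, 5, 6, 5, 7, 6, 5]
treatment = [6, 7, 8, 7, 9, 8, 7]
group3    = [5, 6, 6, 7, 6, 5, 6]

# t-test: are the means of two groups different?
t, p_t = stats.ttest_ind(treatment, control)

# One-way ANOVA: variation between vs. within three or more groups
f, p_f = stats.f_oneway(control, treatment, group3)

# Chi-square: do observed frequencies (nominal data) differ from chance?
observed = np.array([[30, 20],    # e.g., improved / not improved in treatment
                     [18, 32]])   # improved / not improved in control
chi2, p_c, dof, expected = stats.chi2_contingency(observed)

print(p_t, p_f, p_c)  # compare each p-value with alpha = .05
```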
Tests of relationship
 Explore relationships between 2+ variables measured at the interval level
 Determine the correlation, i.e., the degree of association (−1 to +1)
 Most common: the correlation coefficient, r; +1/−1 = perfect, − = negative, + = positive
 Nominal and ordinal data tests: phi coefficient, Spearman's rho, Kendall's tau, point-biserial correlation
Relationships among 2+ variables
 Multiple regression: 1 DV with multiple IVs
 Used to determine which variables contribute to change in the DV, and by how much
 Types of MR: forward, backward, stepwise, hierarchical solutions
Confidence intervals:
 Estimated range of values that provides a measure of certainty about the finding
 Most common is the 95% level (95% of the time the finding will fall within the range)
Odds ratio (also used in meta-analyses):
 In harm studies, used to estimate whether participants were harmed by exposure to an event
 Divide the odds in the treatment or exposed group by the odds in the control group
Parametric test assumptions
 Normality, linearity of the association, homogeneity of variance, independence
 Randomness, and absence of multicollinearity (t-test, ANOVA, multiple regression, correlation)
 Non-parametric tests:
o Kruskal–Wallis → requires randomness (independent, non-matched groups)
o One-sample vs. two-sample versions
Multicollinearity issues (M-C: 2+ IVs in a regression are highly correlated):
 Correlated predictors, large standard deviations/errors, unstable estimates; the model may fit well overall yet show no significant individual variables; detect by checking correlations and collinearity diagnostics (Pearson correlation, chi-square)
 Assess by examining tolerance and the variance inflation factor (VIF)
o Tolerance below about 0.1 or VIF above about 10, large standard errors, and non-significant coefficients signal a problem
 To address it:
o Create a composite index measure, delete a variable, or use factor/cluster analysis
Homogeneity of variance
 Equal variance of the DV across all levels of the IV
 Violation affects validity and reliability
 Tests of equality of variance: Levene, Bartlett, Brown–Forsythe, box plots (see the sketch below)
 A p-value below .05 on these tests suggests the variances differ (assumption violated)
Appraising the findings (critiquing questions): Do the strengths outweigh the weaknesses? Are there any risks or benefits? How does the finding apply to nursing? Is it similar to other studies? Is it feasible? Is it replicable?
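To close the Chapter 17 notes, a compact sketch (Python with numpy/scipy; every number is fabricated and variable names such as x1 and x2 are illustrative only) of correlation, multiple regression by least squares, a VIF check for multicollinearity, Levene's test for homogeneity of variance, and an odds ratio from a 2×2 table:

```python
# Illustrative only: fabricated data; numpy and scipy assumed available.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x1 = rng.normal(50, 10, 100)             # e.g., anxiety score (IV)
x2 = 0.9 * x1 + rng.normal(0, 2, 100)    # deliberately collinear with x1
y  = 0.5 * x1 + rng.normal(0, 5, 100)    # outcome (DV)

# Correlation: degree of association between two interval variables (-1 to +1)
r, p_r = stats.pearsonr(x1, y)

# Multiple regression (1 DV, several IVs) fit by ordinary least squares
X = np.column_stack([np.ones_like(x1), x1, x2])   # intercept + predictors
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# VIF for x1 = 1 / (1 - R^2) from regressing x1 on the other predictor
r12 = stats.pearsonr(x1, x2)[0]
vif_x1 = 1 / (1 - r12 ** 2)              # VIF > 10 signals multicollinearity

# Levene's test: are variances equal across two groups?
w, p_lev = stats.levene(y[:50], y[50:])  # p < .05 -> variances differ

# Odds ratio from a 2x2 table: odds in exposed group / odds in control group
a, b, c, d = 20, 30, 10, 40              # exposed harmed/not, control harmed/not
odds_ratio = (a / b) / (c / d)

print(r, coefs, round(vif_x1, 1), round(p_lev, 3), odds_ratio)
```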
Class 11 Chapter 21: Developing evidence-informed practice
Evidence-informed decision making
 A continuous, interactive process involving the explicit, conscientious, and judicious consideration of the best available evidence to provide care
 Evidence is information acquired through research and scientific practice
 Comes from:
o External evidence from research, theories, opinion leaders, experts
o Clinical expertise (internal evidence from outcomes, projects, assessments)
o Patient preferences and values
Evidence-based practice: improves patient outcomes, reduces cost, empowers clinicians
Bridging the gap from research to practice (discovery and practice)
 Key points:
o Clinicians face a large volume of information
o Pre-appraised, synthesized evidence and evidence-informed guidelines are needed
o Guidelines alone are insufficient for knowledge uptake by care providers
o A targeted approach to implementation is required
Addressing barriers to EIP
 Barriers:
o Perception of time constraints
o Limited knowledge and skills in EIP
o Focus on research over application in education
o Organizational and leadership resistance
o Lack of mentors and resources
 Solutions:
o Active facilitation: model successful adopters
o Cultural support: build a culture that values evidence
o Localization: adapt evidence to the setting
Dissemination: communication of research findings (spread)
 Modes of dissemination: publications, conferences, consultation, training programs
EIP vs. research utilization: EIP encompasses research utilization in conjunction with case reports and expert opinion; research utilization is the process of using research findings to improve care
Foundations of EIP
 Best evidence sources: systematic reviews, randomized controlled trials, qualitative research, descriptive research
 Guidance for practice: research evidence + clinical expertise + patient values
 When research is limited: expert opinion, scientific principles, evidence-based theories, local practice projects
 Ongoing process: integrate new research evidence as it becomes available
Forms of using research evidence in practice
 Conceptual knowledge use:
o Influences thinking (not action)
o Results in knowledge incorporation, theory formation, hypothesis generation
o Also called knowledge creep or cognitive application
o Example: integrating research into critical thinking
 Decision-driven knowledge use:
o Apply evidence in a new practice, policy, procedure, or intervention
o Critical decision to maintain or change practice based on an evidence review
o Use EBP to promote quality of care
EIP models include:
 Synthesis of evidence, implementation of evidence, evaluation of the impact on care
 Consideration of the context in which the evidence is implemented
Iowa model of EBP:
 Overview:
o A model guiding healthcare decisions
o Incorporates research evidence and other types of evidence
 Key process:
o Knowledge- or problem-focused triggers → question current practice
o Staff review the research and critique the studies
o If there are not enough studies: conduct a study
 Implementation:
o Develop an evidence-informed guideline combining the evidence
o Test changes with small groups, then refine and expand
o Monitor patient and staff outcomes
 Success factor: organizational and administrative support
Ottawa Model of Research Use (OMRU)
 Purpose: a framework for increasing research use in healthcare
 Components: practice environment, potential adopters, the evidence-informed innovation, transfer strategies, adoption, health and other outcomes
 Approach: continuously assess, monitor, and evaluate; address barriers to enhance uptake
EIP process steps:
 Select a topic: problem-focused or knowledge-focused trigger
 Form a team: formulate the EIP question using PICOT
 Retrieve, rate, and critique the evidence → synthesize the findings → evaluate
EIP in organizational settings:
 If individual nurses each decided what evidence to use in practice, potential harm could arise from conflicting practices
 In organizations, adoption of a practice should be systematic to ensure consistency in care
Stakeholders: resisters and facilitators
 High support, high influence (adoption = use of the new method)
o Positively affect dissemination and adoption; need information to gain buy-in
o Collaborate, involve them in the program, provide feedback, empower
 High support, low influence
o Positively affect dissemination and adoption if given the chance; need attention to secure buy-in and prevent mixed feelings from developing
o Collaborate, provide feedback, support through status and participation
 Low support, high influence
o Negatively affect both; need more attention/information to maintain at least neutral buy-in
o Build consensus and relationships, detail the benefits, involve them in the team, monitor
 Low support, low influence
o Least influence on both, but can have a negative impact; give attention to bring them to neutrality or buy-in
o Build consensus and relationships, involve them in the team
Read articles in this order:
 Clinical articles: understand the state of practice
 Theory articles: various theoretical points of view and concepts that may help in critiquing
 Systematic reviews: article and synthesis reports; understand the state of the science
 EIP guidelines: evidence reports
 Research articles: meta-analyses
EIP rating systems:
 ABC rating scale, hierarchy of evidence models, quality of research, strength of the body of evidence
Research utilization
 Depends on the interest, commitment, and expertise of nurses
 Proactive, deliberate, systematic; addresses the process of adopting an innovation
 Includes identifying the problem, critically reviewing the literature, translating findings into practice, implementation, and evaluation
 Barriers:
o Communication problems, inability to locate relevant studies, cost, time constraints
o Poorly presented studies with few implications for practice
o Negative attitudes toward research, no time to read
Translation science
 The investigation of methods, interventions, and variables that influence the adoption of EIP by individuals and organizations to improve clinical practice
 Includes testing the effect of interventions in promoting and sustaining the adoption of EIP
Knowledge translation:
 The synthesis, dissemination, exchange, and ethically sound application of knowledge to improve people's health and provide effective health services and products
Nurses have a professional responsibility to integrate research into practice:
 Direct care nurses: identify areas for improvement and support practice change
 Advanced practice nurses: conduct research where evidence is lacking
 Nurses in education: master EIP to foster improvement
 Doctor of Nursing Practice: lead EIP across the organization
 Education and accountability: nursing education emphasizes research use