
Document Details


Uploaded by HandierMesa

CCNM

Dr. Monique Aucoin ND MSc

Tags

evidence-based medicine, research methods, clinical practice, naturopathic medicine

Summary

These notes cover research methods in evidence-based medicine, including various study types and analysis techniques. Information on arguments for and against evidence-based practices is also included.

Full Transcript


ART AND PRACTICE: RESEARCH, WEEK 1
DR. MONIQUE AUCOIN ND MSc, ANM 100, Week 12

SOME RESEARCH HUMOR
John Oliver clip about research communication in the news

OUTCOMES
- Explore the underlying concepts of Evidence-Based Medicine, including its strengths and limitations.
- Review the scientific method and its value and application in clinical practice.
- Explore and evaluate the need for critical appraisal of published health research.
- Describe the principles of causation and correlation.
- Describe and identify bias, chance, confounding, and association, and know how these can influence the validity and reliability of research findings.
- Identify examples of bias, confounding, correlation, and causation in sample research studies.

WHAT IS EVIDENCE-INFORMED PRACTICE?
What does it mean to you?

HISTORY & PURPOSE
- 1972, Archie Cochrane: most treatment decisions are not based on a systematic review of the evidence.
- The term "Evidence-Based Medicine" was introduced in 1992.
- Purpose: shift decision making from "intuition, unsystematic clinical experience, and pathophysiologic rationale" to increased use of scientific, clinically relevant research.
- Research/practice gap: 17 years.

EVIDENCE-INFORMED PRACTICE

A ROSE BY ANY OTHER NAME…
Evidence-Based Medicine, Evidence-Based Practice, Evidence-Informed Practice

WHAT ARE THE BENEFITS OF EBP? WHAT ARE THE DOWNSIDES OF EBP?

ARGUMENTS FOR
- Avoid (or decrease) the biases of clinical experience alone: false attribution, lack of follow-up, small sample size, rose-coloured glasses.
- Use the vast amount of literature that exists; use more credible sources (vs "just google it").
- Efficient use of resources.
- Improved clinical care (which treatment is most effective, safest).
References: Shaughnessy et al. Clinical Jazz: Harmonizing Clinical Experience and EBM. 1998. Ooi SL, Rae J, Pak SC. Implementation of evidence-based practice: A naturopath perspective. Complementary Therapies in Clinical Practice. 2016 Feb 1;22:24-8.
ARGUMENTS FOR
- Stop ineffective practices (treatments, diagnostic tools).
- Consistency within/across professions; communication and collaboration.
- Promotes inquiry and continual improvement (we can't possibly be taught everything!).
Links for the above examples: The Star and The Globe and Mail.

ARGUMENTS AGAINST
- "The practitioner uses only modalities or treatments that have been proven effective by empirical means" – a misconception.
- May reduce treatment options (under-studied modalities).
- Challenging to study complex clinical situations and complex interventions.
- Excluded factors: can't apply results to complex clinical situations.
- Concerns about undermining naturopathic philosophy (less individualization, a 'lost art').
References: Greenhalgh. How to Read a Paper. 2001. Steel A et al. The role and influence of traditional and scientific knowledge in naturopathic education: A qualitative study. The Journal of Alternative and Complementary Medicine. 2019 Feb 1;25(2):196-201.

ARGUMENTS AGAINST / LIMITATIONS OF USING EVIDENCE ALONE
- Studies show that improvement was seen on average; that doesn't mean your patient will benefit.
- Doesn't capture significance and meaning to the patient.
- Gold-standard studies are expensive and don't always exist (undermines other types of evidence).
- Reduced emphasis on professional judgement and creativity.

DOING EIP
1. Formulate an answerable research question (Ask)
2. Find the best available evidence (Acquire)
3. Critically appraise/evaluate the evidence (Appraise)
4. Apply the evidence by integrating it with clinical expertise and the patient's values (Apply)
5. Evaluate performance (Assess)

CRITICAL APPRAISAL
- Essential to understand and critically evaluate research in order to apply it properly.
- Conclusions from research studies may not reflect the truth.
- All research is open to bias.
- Presentation in the media is aimed at generating attention and interest rather than accuracy.

FUNNY, BUT ALSO NOT FUNNY
Exciting headline: "Study reveals that smelling your partner's farts is the secret to a longer life." (The actual finding: a synthetic hydrogen sulphide donor chemical protects mitochondria from oxidative damage.)

ALSO, FUNDING
Bhandari M, Busse JW, Jackowski D, et al. Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. CMAJ. 2004 Feb 17;170(4):477-80.
BACKGROUND: In this study, we examine the association between industry funding and the statistical significance of results in recently published medical and surgical trials.
RESULTS: Among the 332 randomized trials, there were 158 drug trials, 87 surgical trials and 87 trials of other therapies. An unadjusted analysis of this sample of trials revealed that industry funding was associated with a statistically significant result in favour of the new industry product (odds ratio [OR] 1.9, 95% confidence interval [CI] 1.3-3.5). The association remained significant after adjustment for study quality and sample size (adjusted OR 1.8, 95% CI 1.1-3.0). There was a nonsignificant difference between surgical trials (OR 8.0, 95% CI 1.1-53.2) and drug trials (OR 1.6, 95% CI 1.1-2.8), both of which were likely to have a pro-industry result.
INTERPRETATION: Industry-funded trials are more likely (almost twice as likely) to be associated with statistically significant pro-industry findings in medical trials.

SCIENCE, RESEARCH, SOME BASIC CONCEPTS

SCIENCE
Systematic study of the structure and behaviour of the physical and natural world through observation and experimentation.
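The odds ratios in the funding abstract above come from 2×2 tables of trial counts. A minimal sketch of the calculation (the counts below are invented for illustration only, not the study's actual data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
    a = industry-funded trials with a pro-industry result
    b = industry-funded trials without one
    c = non-industry trials with a pro-industry result
    d = non-industry trials without one
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, for illustration only:
or_, lo, hi = odds_ratio_ci(60, 40, 45, 60)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR = 2.00, 95% CI 1.15-3.49
```

An OR above 1 with a CI that excludes 1 is what the abstract means by a "statistically significant" association.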
Empirical method of acquiring knowledge (accessible to sense experience or experimental procedures).

SCIENTIFIC METHOD

CORRELATION (VS CAUSATION)
Correlation: a measurement of the size and direction of the relationship between 2 or more variables.
- Example of positive correlation: height and weight; taller people tend to be heavier.
- Example of negative correlation: mountain altitude and temperature; as you climb higher, it gets colder.

CORRELATION – VISUALLY
As margarine consumption decreased, divorce rate also decreased. Random chance??

SUGAR INTAKE AND RATES OF DIABETES IN THE UK

SOME MORE EXAMPLES
- Studies show that people who have more birthdays live longer. (Reverse causality.)
- Days with higher ice cream sales have more cases of drowning. (Confounding factor: warm weather, swimming.)
- People with depression eat a lower quality diet than people without depression. (??)

CONFOUNDING
An additional variable causes the change in the dependent variable.
- More ice cream sales → more drowning (nice weather/swimming).
- Overweight → life expectancy (heart disease/diabetes).

CAUSATION
A relationship where one variable (the independent variable) CAUSES (is responsible for the occurrence of) the other (the dependent variable). Ex. decapitation causes death. It is generally very difficult to prove a causal relationship.

NOT ALL ASSOCIATIONS ARE CAUSAL
Associations may APPEAR causal due to:
- Confounding
- Chance: ever-present randomness. Ex. flip a coin 100 times; you may get 58 heads and 42 tails.
- Bias

BIAS
Anything that systematically influences the conclusion or distorts comparisons. Bias can impact any kind of research study.

TYPES OF BIAS: SELECTION BIAS
Systematic differences between groups, likely due to inadequate randomization.
Ex. A research study with 2 locations: one location is in an upscale neighbourhood, and these participants get the treatment; the other location is in an inner-city neighbourhood, and these participants get the placebo.
Ex 2. A survey of the naturopathic profession about evidence-based practice.
100 NDs respond; overall, largely favourable views of EBP. (NDs with favourable views of EBP may have been more likely to respond.)

TYPES OF BIAS: PERFORMANCE BIAS
Systematic differences in the care provided apart from the intervention being assessed.
Ex. Participants in the treatment group spend 10 hours with the researchers; the control group spends 1 hour.

TYPES OF BIAS: ATTRITION BIAS
Systematic differences in withdrawals from the trial.
Ex. Participants who have a negative reaction to (or no benefit from) the study treatment drop out more often than the people who find the treatment helpful.

TYPES OF BIAS: DETECTION BIAS
Systematic differences in outcome assessment.
Ex. A study of the effects of working with radioactive material on skin cancer risk: more cases of skin cancer are discovered in patients who report working with radioactive material.
Ex 2. A researcher genuinely believes that the study drug will help psoriasis. If they know who is receiving the real drug, they may underestimate when measuring the psoriasis skin lesions.

TYPES OF BIAS: OBSERVATION BIAS
When participants are aware of being observed, they alter their behaviour.
Ex. DIET DIARY!

TYPES OF BIAS: PUBLICATION BIAS
Studies with negative findings are less likely to be submitted and published.

TYPES OF BIAS: RECALL BIAS
When asked about things in the past, people may have difficulty remembering and respond inaccurately.
Ex. What did you eat for breakfast 10 years ago?

BIAS – THE KEY IS "SYSTEMATIC"
There will always be random factors:
- One day the researcher is tired and less observant, and notices fewer cancer lesions (variation in detection).
- Some people in a study group will move away, lose interest, etc. (attrition).
- Some people over- or under-estimate their vegetable servings.

PRINCIPLES OF CAUSATION
Temporality, Strength, Dose-response, Reversibility, Consistency, Biological plausibility, Specificity, Analogy

TEMPORALITY
The cause came before the effect. Some study types are limited in their ability to detect this (cross-sectional, case-control).
Ex. You may survey men who currently have prostate cancer and find higher fish intake – does dietary fish cause prostate cancer? VS measuring fish intake and following over time to see who develops cancer.

STRENGTH OF ASSOCIATION
A stronger association is better evidence of a cause/effect relationship.

DOSE-RESPONSE RELATIONSHIP
Varying amounts of the cause result in varying amounts of the effect. A dose-response relationship is good evidence of a cause/effect relationship.
Ex. number of cigarettes smoked per day and lung cancer risk. BUT there is a risk of confounding: heavy smokers are more likely to consume more alcohol.

REVERSIBILITY
The association between the cause and the effect is reversible.
Ex. people who quit smoking have a lower risk of cancer. STILL think about confounding: people who quit may start other healthy lifestyle behaviours too!

CONSISTENCY
Several studies conducted at different times, in different settings, and with different kinds of patients all come to the same conclusions. Some inconsistency does not invalidate other trials – look at trial design and quality.

BIOLOGICAL PLAUSIBILITY
The relationship between cause and effect is consistent with our current knowledge of mechanisms of disease. When present, this strengthens the case for cause/effect.
Ex. cigarette ingredients cause cancer in cell cultures and animal models. Challenges with homeopathy, energy medicine.

SPECIFICITY
One cause → one effect (A only causes B). Ex. vitamin C deficiency → scurvy. Absence of specificity is weak evidence against cause; e.g. smoking causes cancer, bronchitis, and periodontal disease.

ANALOGY
A cause/effect relationship is strengthened if there are examples of well-established causes that are analogous to the one in question.
Ex. if we know a virus can cause chronic, degenerative CNS disease (Subacute Sclerosing Panencephalitis), it is easier to accept that another virus might cause degeneration of the immunologic system (e.g. HIV and AIDS). Analogy is weak evidence for cause.

GROUP ACTIVITY
A few sample observational studies: think about possible confounding factors, assess the studies based on the principles of causation, and consider the types of conclusions that can be drawn.

WRAP UP AND QUESTIONS
ANY THOUGHTS OR INSIGHTS OR NEW PERSPECTIVE FROM TODAY'S MATERIAL?

ART AND PRACTICE: RESEARCH, WEEK 2
DR. MONIQUE AUCOIN ND MSc, ANM 100, Week 13

QUESTION: WHAT DO WE DO WHEN WE HAVE CONFLICTING INFORMATION FROM DIFFERENT TYPES OF RESEARCH?

OUTCOMES
- Identify and summarize the basic types of clinical epidemiological studies, including: randomized controlled trials, cohort studies, case-control studies, systematic reviews, case reports, n-of-1 studies, and whole systems research.
- Identify inherent strengths and weaknesses of each type of research design and how to assess quality based on their design and relevance to practice/individual patients and public health.

RESEARCH STUDY METHODOLOGIES

HIERARCHY OF EVIDENCE

EXPERIMENTAL/INTERVENTION STUDIES
DO something to the patient, OBSERVE what happens. Does the treatment change the likelihood of the outcome?
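In an intervention study, each participant must be assigned to an arm, and a computer-generated random sequence is one standard way to do it. A minimal sketch using permuted blocks (the arm labels and numbers are illustrative, not from the lecture):

```python
import random

def block_randomize(n_participants, block_size=4, arms=("Treatment", "Control"), seed=42):
    """Permuted-block randomization: each block contains equal numbers of
    each arm, shuffled, so group sizes stay balanced over time while the
    next assignment remains unpredictable."""
    per_arm = block_size // len(arms)
    rng = random.Random(seed)  # fixed seed only so this example is reproducible
    sequence = []
    while len(sequence) < n_participants:
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

seq = block_randomize(12)
print(seq)
print(seq.count("Treatment"), seq.count("Control"))  # 6 6
```

Because every block of 4 holds exactly 2 of each arm, the groups end up the same size regardless of how the shuffles fall, which is one reason permuted blocks are preferred over a plain coin flip.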
RANDOMIZED CONTROLLED TRIAL: CHARACTERISTICS
- Defined population (inclusion/exclusion criteria)
- 2(+) groups: treatment arm + comparison arm
- Prospective: look forward in time
- Randomized: equal chance of being assigned to the intervention or control group
- Control group: accounts for the natural course of illness, the placebo effect, and confounding factors
- May have blinding: minimizes expectation effects
RCT use: **Best design for confirming cause/effect**

RANDOMIZATION
For the comparison to be useful, assignment to treatment/comparison must be truly RANDOM.
- Inadequate: people who visit the clinic on a weekday vs the weekend; people who visit clinic A vs clinic B; the judgement of the clinician; flip of a coin; alternating.
- Better: a computer-generated sequence, sequentially numbered sealed opaque envelopes, 3rd-party allocation.
The study should describe HOW the randomization was done, and you should check that it worked: how similar were the groups? (Risk: selection bias.)

TABLE 1
Sample confounding factors: age, gender, family history, comorbidities, BMI, socioeconomic status, marital status, education, exercise, diet – ANYTHING, known or unknown.

BLINDING
Remove expectation. Who was blind?
- Participants ('single blind')
- The person delivering the intervention ('double blind')
- The person assessing the outcome ('triple blind')
The study should describe how blinding was achieved. (Risk: bias in measurement of outcomes.)

CLEARLY DEFINED POPULATION
Participants are a SAMPLE of some population; inclusion/exclusion criteria help define it. Is the sample similar enough to your patient? ("generalizability")
- More or less ill?
- Different ethnicity or geographic location?
- Pregnancy, smoking, OCPs, non-English speaking, unable to read the consent form, elderly, comorbidities, taking other medications: often excluded from clinical trials.

RECRUITMENT
What was the method? It can influence the sample.
Ex. a newspaper ad reaches people who read the newspaper and are motivated to respond.
Ex. every person who presents at a clinic with a particular condition.
(Risk: selection bias.)

REPORTING OF RESULTS
- Power calculation
- Are the things they planned to measure (methods section) reported in the results section? Any missing data?
- Were p values reported?
(Risk: reporting bias.)

COMPLIANCE
Should be assessed.
- Simple methods: diary, pill count, phone calls, questioning.
- Biological methods: blood/urine levels (costly).

WITHDRAWALS
How many participants enrolled, completed, dropped out (and why?), and were analyzed? Accounting for missing data:
- Per-protocol analysis: analyse only the people in each group who completed the protocol.
- Intention-to-treat analysis: analyse people in the group they were assigned to.

WITHDRAWALS: RISK OF PER-PROTOCOL ANALYSIS
(Diagram: in an extremely hard course, students with low EIP skill drop out, so the mean skill of those who complete the course is higher than the mean skill of those who enrolled.)
A systematic factor impacts who drops out. Other examples: side effects, acceptability, impact on the outcome.

INTENTION-TO-TREAT ANALYSIS
Analyze everyone in the group they were randomized to, even if they did not complete the study or ended up in the other group.
- Techniques: last observation carried forward, statistical approaches.
- The best way to minimize bias from drop-outs.
- Cautious: tends to under-estimate the effect. (Per-protocol: tends to over-estimate.)
(Diagram: the same hard-course example analyzed by intention to treat.)

RCT LIMITATIONS
- Difficult! (recruitment, funding, time)
- The sample may not be representative
- Not good for rare or distant outcomes
- Application to non-pill interventions can be challenging (control, blinding)
- Ethics of treating only some participants

ASSESSING THE QUALITY OF RCTS
- Jadad
- Cochrane risk of bias – often used in SRs
- CASP checklist: https://casp-uk.net/casp-tools-checklists/
- Assess quality of REPORTING: CONSORT
- CAM-specific reporting standards: Non-pharm CONSORT, Herbal Medicine CONSORT, REDHOT (homeopathy), STRICTA (acupuncture)

JADAD SCALE
Assesses clinical trials; score from 0 to 5: 2 points for randomization, 2 points for blinding, 1 point for drop-outs.
Halpern, S.H. and Douglas, M.J., 2005. Jadad scale for reporting randomized controlled trials.

OTHER KINDS OF INTERVENTION STUDIES
Cross-over: everyone gets the intervention AND the comparison.
- Pro: minimizes confounding (participants are their own controls)
- Con: must be a chronic illness, and the treatment effect must wash out
Image source: Cochrane

OTHER KINDS OF INTERVENTION STUDIES
- Preference (controlled, not randomized): participants choose between different arms (ex. cancer survivors pick MBT or Tai Chi).
- Open-label, pre/post: everyone gets the intervention (and knows it); assess changes before and after the intervention.

DESIGN A STUDY!
Question: Does a maternal diet deficient in B12 impact fetal brain development? What is the appropriate study design? A clinical trial? A case report?
Vandenbroucke JP. When are observational studies as credible as randomised trials? The Lancet. 2004 May 22;363(9422):1728-31.

OBSERVATIONAL STUDIES
The exposure is NOT controlled by the researcher. They ask: is there a relationship between a risk factor (or health factor) and an outcome (harm or benefit)?
Ex. Is high intake of blueberries associated with a lower risk of cancer? Is increased stress associated with an increased risk of a heart attack?
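Observational questions like these are answered by comparing outcome rates between exposure groups. A minimal sketch computing a risk ratio from a hypothetical cohort (all counts invented for illustration):

```python
def risk_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio (relative risk): incidence in the exposed group divided
    by incidence in the unexposed group. RR > 1 suggests the exposure is
    associated with more of the outcome; RR < 1, with less."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Invented cohort: high-stress vs low-stress participants, heart attacks during follow-up
rr = risk_ratio(30, 1000, 15, 1000)
print(rr)  # 2.0 → heart attacks twice as common in the high-stress group
```

Note that an RR of 2.0 here is only an association; as the lecture stresses, confounding, chance, and bias all remain possible explanations in a non-randomized comparison.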
Strength: can study any question (a harmful exposure, the lack of a beneficial exposure).

OBSERVATIONAL STUDY DESIGNS
(Diagram: cohort study — follow exposed vs unexposed groups over time and compare incidence; case-control — compare prior exposure between diseased and non-diseased groups; cross-sectional — compare current disease status and current exposure at one time point.)

COHORT STUDY
- Recruit the cohort (the outcome is NOT present)
- Assess risk/health factors (this creates the comparison groups)
- Follow over time and see who develops the outcome
- "Longitudinal", "prospective"

COHORT EXAMPLE: SATURATED FAT AND CVD
Recruit people without CVD and ask about saturated fat consumption (high vs low sat fat diet). Follow over time: who developed CVD? Is there a difference between the groups?

COHORT STUDIES
Strengths:
- Can look at any exposure (even harm)
- Confident that the exposure came before the outcome
- Can assess multiple outcomes
Weaknesses:
- Assignment to comparison group is NOT random → risk of confounding
- Time consuming
- Inefficient for rare outcomes

CASE-CONTROL
The outcome is PRESENT at the beginning of the study; look backwards in time for the exposure (how much meat did you eat 10 years ago?).

CASE-CONTROL STUDIES
Strengths:
- Can look at rare outcomes
- Faster (no waiting time, minimal loss of participants)
Weaknesses:
- Assignment to comparison group is NOT random
- Hard to assess temporality (ex. recall bias)
- Only assesses one outcome

CASE-CONTROL EXAMPLE
Find people with AND without CVD; ask them to think about the past: high or low saturated fat diet? Is there a difference?

CROSS-SECTIONAL STUDIES
The outcome is PRESENT at the beginning of the study; assess exposure and outcome at ONE time point.
Ex. patients with CVD and healthy controls; ask about CURRENT meat intake.

CROSS-SECTIONAL EXAMPLE
Find people with AND without CVD and ask about saturated fat in the diet: is there a difference?

CROSS-SECTIONAL STUDIES
Strengths:
- Can study rare outcomes
- Faster
- No recall bias
Weaknesses:
- Assignment to comparison group is NOT random
- No assessment of temporality (which came first?)
- Only assesses one outcome

OBSERVATIONAL STUDIES: STRENGTHS
- Can study any question (a harmful exposure, the lack of a beneficial exposure)
- Can be less expensive or faster than intervention studies

OBSERVATIONAL STUDIES: LIMITATIONS
- Participants are not randomly assigned to exposure groups
- Investigate correlation, not necessarily causation

APPRAISING THE QUALITY OF OBSERVATIONAL STUDIES
- Recruitment: do the participants reflect the population of interest? (Selection bias; generalizability of findings)
- Assessment of exposure: accurate? Subjective or objective? Validated? (Measurement or classification bias)
- Did they look for confounding factors?

CASE REPORTS, CASE SERIES
- Report previously undocumented events (a success, an adverse reaction); may lead to further action
- Real patients and real clinical approaches, BUT concerns about bias and generalizability

N-OF-1 STUDY
Randomized, double-blind, multiple crossover comparisons in an individual patient: an "individualized RCT". Compare the patient to themselves while taking the real treatment vs the comparison (ex. a patient with hypertension: magnesium vs placebo or blood pressure medication).

N-OF-1 TRIAL DESIGN

N-OF-1 STRENGTHS
- Looks at real-world use of an intervention
- Allows for individualization, root-cause style treatment, complex health conditions, multi-modal treatments
- Can compare naturopathic and conventional treatments
- Could justify further research
- Consistent population (the same person! Same genetics, family history, other risk factors)

N-OF-1 LIMITATIONS
- Doesn't work if the condition is curable or self-limiting; symptoms must relapse during washout
- Findings may not be generalizable (low external validity)
- Ethics: needs ethical approval; is it ethical to experiment in a clinical setting?
- High cost (manufacturing of a placebo, administration of the study with blinding)

PRECLINICAL STUDIES
- In vitro (outside the body): cell lines, organs
- In vivo (in a non-disease model): healthy humans to study pharmacokinetics (absorption, elimination); animal models
- The base of the EBM hierarchy: basic knowledge to build on; allows for innovation; highly controlled; 'ethical'; studies mechanisms of action

PRECLINICAL STRENGTHS
- Allow for creativity and innovation
- Background for future research in humans
- Investigate mechanism of action
- Study possible adverse events or interactions
- High level of control (ex. the exact intake of fiber in a mouse diet)
- Ethical (?)

PRECLINICAL LIMITATIONS
- May not be clinically applicable to humans (lack of generalizability)
- Highly controlled
- One isolated part of the story

CONSIDER: Is pre-clinical evidence sufficient to guide clinical recommendations?
Eg. The in vitro anti-fungal effects of garlic are well known – does this mean that systemic fungal infections (highly dangerous) can/should be treated with garlic? At what dose?
Eg. Soy phytoestrogen inhibits MCF-7 human breast cell growth in vitro – what does this mean for the prevention/treatment of breast cancer?
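The N-of-1 design described earlier (randomized, multiple crossovers in a single patient) amounts to an allocation schedule for one person. A minimal sketch generating such a schedule, using the lecture's hypothetical magnesium-vs-placebo hypertension example (pair count and washout handling are illustrative assumptions):

```python
import random

def n_of_1_schedule(n_pairs=3, seed=7):
    """N-of-1 sketch: several treatment/placebo pairs, with the order
    within each pair randomized, and a washout period between periods
    so the previous treatment's effect can fade."""
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    schedule = []
    for _ in range(n_pairs):
        pair = ["magnesium", "placebo"]  # hypothetical arms from the lecture example
        rng.shuffle(pair)
        for arm in pair:
            schedule.append(arm)
            schedule.append("washout")
    return schedule[:-1]  # no washout needed after the final period

print(n_of_1_schedule())
```

Randomizing the order within each pair, rather than alternating, is what keeps the single patient (and ideally the clinician) blind to which period is which.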
MIGHT CONSIDER
- Informed consent (document risks, benefits, level of evidence)
- Consider alternatives
- Monitor the patient (response, safety)
- Alter treatment if needed

SYNTHESIS RESEARCH
Why? TIME!! Incorporate the results of individual studies together and draw bigger conclusions.

NARRATIVE REVIEWS
- A researcher combines some of the research on a topic and reports on the collection of evidence
- Often does NOT describe how they searched and how they decided to include certain studies
- High risk of bias – results are often consistent with the author's hypothesis

SYSTEMATIC REVIEWS
Explicit and rigorous methods to:
1. Identify (2+ databases, specific inclusion/exclusion criteria)
2. Critically appraise
3. Synthesize (combine)
A scientific investigation with a pre-planned methodology; enormous effort to minimize bias.

META-ANALYSES
Statistically combine the results of the studies in a systematic review. Ex. 5 studies with 20 participants each → 1 analysis with 100 participants. Visual representation of the studies: the Forest Plot.

SYSTEMATIC REVIEW STRENGTHS
- Capture the big picture of the evidence on a topic
- Meta-analysis allows for the creation of a larger sample size (helps with stats: stay tuned for next semester)

SYSTEMATIC REVIEW LIMITATIONS
- Only as good as the available (findable) studies → publication bias, lack of research on a topic
- Can't replace good clinical reasoning

WHAT MAKES A STRONG SR?
- A clear question?
- Did they look for the right type of studies? (relevant to the question, appropriate design – RCTs if assessing an intervention)
- A comprehensive search? (databases, reference lists, unpublished studies, contacting experts, non-English studies)
- Assessment of study quality?

WHOLE PRACTICE RESEARCH
Need: Do the results of RCTs apply to the real clinical practice of naturopathic medicine?
Issues: RCTs often use one intervention to treat one disease in a uniform patient population; naturopathic medicine often uses complex interventions, prescribed in an individualized way, for patients with complex health conditions.

WHOLE PRACTICE/SYSTEMS RESEARCH
Assess an entire system of care vs individual treatments or modalities.
Ex. individualized acupuncture treatments vs testing 3 set acupuncture points.
Ex. individualized homeopathy vs a single remedy.
Ex. naturopathic medicine as a whole.

WSR TRIAL DESIGN
Goal: accurately study what is actually done in the real world. Modified RCT design: use an entire system of medicine vs an individual therapy.

WHAT'S THE METHODOLOGY?
a) Systematic Review
b) Intervention study
c) Cohort Study
d) Case-Control Study

GROUP WORK

WRAP UP AND QUESTIONS
ANY THOUGHTS OR INSIGHTS OR NEW PERSPECTIVE FROM TODAY'S MATERIAL?
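As a small illustration of the meta-analysis idea described above (statistically combining the results of several studies into one estimate), here is a fixed-effect inverse-variance pooling sketch; the effect sizes and standard errors below are invented for illustration:

```python
import math

def inverse_variance_pool(effects_ses):
    """Fixed-effect meta-analysis sketch: pool study effect sizes by
    weighting each with the inverse of its variance, so more precise
    studies count more. Returns the pooled effect and its standard error."""
    weights = [1 / se**2 for _, se in effects_ses]
    pooled = sum(w * e for (e, _), w in zip(effects_ses, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, se_pooled

# Hypothetical small trials: (effect size, standard error)
studies = [(0.4, 0.30), (0.6, 0.25), (0.3, 0.40), (0.5, 0.20), (0.45, 0.35)]
effect, se = inverse_variance_pool(studies)
print(f"pooled effect {effect:.2f} ± {se:.2f}")
```

The pooled standard error comes out smaller than any single study's, which is the "5 studies with 20 participants → 1 analysis with 100 participants" gain the META-ANALYSES slide describes; real meta-analyses also check heterogeneity and may use random-effects models instead.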
