Research MA TEFL IUT PDF

Summary

This document provides an overview of research methods in social sciences, specifically touching upon types of questions, variables, and research project elements. It also includes a section on philosophical issues and common approaches to educational research.

Full Transcript


1. Types of questions

Think of different ways of asking the same question. Some questions can be asked in different ways and therefore need different statistical techniques:
a) Is there a relationship between age and level of optimism?
b) Are older people more optimistic than younger people?
Which question is more suitable? It depends on the nature of the data you have collected.

Types of items and scales (data)

This is crucial in determining which analytical technique to employ. CONSIDER THE ANALYSIS THAT YOU WANT TO USE WHEN FIRST DESIGNING YOUR STUDY. When thinking of the analytical technique, go back to your questionnaire/test items and your codebook; identify each variable, how it is measured, how many response options there were, and the possible range of scores. Whatever the nature of the study, make sure you know how each variable was measured.

Characteristics of a Variable

Variable: an attribute which changes from person to person, object to object, place to place, and time to time.
- Concrete variables: objectively measurable
- Abstract variables: measured indirectly
- Discrete variables: all or nothing
- Continuous (parametric) variables: range from zero to infinity
The number and nature of the variables influence the narrowing down of the topic and its manageability. Quantitative research distinguishes theoretical and operational definitions of variables.

Types of variables

Dependent or independent variable: this information comes not from the data but from the topic, relevant theories, and previous research. Some analyses (e.g. correlation) do not need dependent/independent specifications, while others (e.g. ANOVA) treat this as crucial. Different statistics are needed for categorical and continuous variables:
- Categorical (also known as nominal) variables, like sex: male/female
- Ordinal variables (rankings: 1st, 2nd, 3rd, etc.)
- Continuous (also known as interval) variables, like age in years, scores on tests, etc.
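As a minimal sketch of these three levels of measurement, assuming pandas is available (the dataset, column names, and values below are invented for illustration):

```python
import pandas as pd

# Hypothetical mini-dataset with one variable at each level of measurement.
df = pd.DataFrame({
    "sex":   ["male", "female", "female", "male"],   # categorical (nominal)
    "grade": ["2nd", "1st", "3rd", "1st"],           # ordinal (ranked)
    "score": [64.0, 71.5, 58.0, 80.0],               # continuous (interval)
})

# Nominal: categories with no order; only counts/frequencies make sense.
df["sex"] = pd.Categorical(df["sex"])

# Ordinal: declare an explicit ranking so comparisons become meaningful.
df["grade"] = pd.Categorical(
    df["grade"], categories=["1st", "2nd", "3rd"], ordered=True
)

print(df["sex"].value_counts().to_dict())  # frequencies for nominal data
print(df["grade"].min())                   # ordering is defined for ordinal data
print(df["score"].mean())                  # arithmetic only for continuous data
```

Recording each variable's level of measurement in the codebook this way makes the later choice of analytical technique mechanical rather than ad hoc.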
Summarising your data, or lumping people together, is a possible technique, but there are some disadvantages. YOU NEED TO WEIGH UP THE BENEFITS AND DISADVANTAGES CAREFULLY.

Main elements of a research project

- Define area of interest
- Read relevant literature
- Clarify aims and objectives
- Formulate research questions
- Develop conceptual framework
- Decide on research approach/strategy
- Decide on sample and methods for data collection
- Decide on methods for data analysis
- Identify any ethical issues
- Carry out pilot study
- Carry out main study
- Write up
- Plan next piece of research

Major philosophical issues in social sciences

Research in the social sciences is like walking on shifting sand:
- It takes considerable effort to move in any direction
- It is difficult to see where you have been
This is what most people experience when they start, but the reward is that the period of disorientation and intellectual difficulty is usually followed by the ability to communicate more clearly and an openness to new ways of seeing the world. Clarity and creativity are essential qualities in any research.

Social and natural sciences

Three resolutions to their relationship:
1. Social sciences are subsumed within natural sciences
2. Natural and social sciences are distinct categories within the same system of classification
3. Natural sciences are subsumed within social sciences (Carr, 1995:87)

Can those who carry out educational research safely ignore that part of their subject (philosophy) which underlies their own investigation? For if we do so, we cannot claim to be educationalists but must be content with being … laboratory technicians … If we are merely technicians, we cannot claim to be able to criticise the educational foundations and implications of our work. This means simply that we cannot claim to know what we are doing.

Some key words

Discourse: ‘a site of discussion’.
It refers ‘beyond language to sets of organised meanings (which can include images as well as words) on a given theme’.
Epistemology: What distinguishes different kinds of knowledge claims; what distinguishes knowledge from non-knowledge.
Ontology: Concerns what exists: what is the nature of the world? What is reality?
Paradigm: The entire constellation of beliefs, values, and techniques shared by members of a given scientific community.

The rise of Positivist/Empiricist epistemology

- A reaction to the mediaeval approach to knowledge founded on theology and classical idealism
- Positivism in sociology (Comte, 1798-1857) and Empiricism in psychology (Mill, 1806-1873)
- Rooted in the Enlightenment; a distinctive approach adopted at the University of Vienna (the Vienna Circle) in the 1920s
- The Vienna Circle emphasised logic and mathematics as the basis of ‘logical positivism’

Logical positivism is founded on the verifiability principle, which states that all rational discourse consists of either:
- Logical statements that are true by definition, or
- Empirical statements whose validity can be determined
The main task for the social sciences was making causal explanations and predicting future behaviour on the basis of the study of present behaviour.

Some assumptions in Positivism

- The world is objective: phenomena are lawful and orderly, which makes it possible to explain, predict, and control events.
- There is a clear distinction between objects (the world) and subjects (the knowers), and between facts (the world) and values (the knower/researcher). No interference should be made.
- Inter-subjective replicability and validation are the most significant indicators of objectivity.
- The social world is very much like the natural world: there is order and reason; social life is patterned and has a cause-and-effect form.
- The natural and social sciences share a common logic and methodology of enquiry.
- Epistemological critique of the research process is pointless.
Positivism in the 20th century

- Methodological monism: the primacy of inductive natural science as the model for enquiry and knowledge
- Empiricism: the conception of science which emphasises method, logical argument, and the testing of ideas against what is observable
- Objectivism: the belief that science can only produce factual knowledge, and cannot (and should not be used to) generate evaluation or prescription

Challenges to Positivism

Challenges started in the 1950s.
- Popper rejected the theory of verification in favour of falsification (1968). The historical position of science, that it can by its methods converge on the real ‘truth’, is sharply called into question.
- Wittgenstein: scientific knowledge is not a logical model for all knowledge; rather, it is just one ‘language game’ among many others. It has its rules but is not superior.
- Gadamer: there is no method by which universally valid knowledge can be produced.
- Dilthey: the Humanities have their own logic of research. While the Natural Sciences try to explain phenomena, the Humanities try to understand them. There are also two kinds of psychology: one which attempts to generalise and predict, and one which tries to understand the unique individual in his/her entire, concrete setting.
- Foucault: universally valid knowledge is not possible; knowledge is the product of desire or power.
- Habermas: all knowledge is linked to social interests and has social consequences.

Progress in science itself cast doubt on the orderly nature of the world: quantum physics suggests the world is not independent, mechanistic, and orderly, but is better represented as holistic, indivisible, and in flux.

Hermeneutic/Interpretive epistemology

The idealised and universal logic of scientific research is inappropriate. In social research, knowledge is concerned not with generalisation, prediction, and control, but with interpretation, meaning, and illumination.
It takes all human actions to be meaningful and tries to understand and interpret the motives behind them in the context of social practices.
‘Double Hermeneutic’: the process of double sense-making. In the social sciences both the subject (the researcher) and the object (the people) share the characteristic of being interpreters and sense-seekers.
By trying to understand human beings as individuals in their entirety, it avoids the fragmentation caused by the positivist/empiricist approach, which takes out a small slice and subjects it to close scrutiny.
‘Hermeneutic Circle’: the interpretation of the part depends on interpreting the whole, but the interpretation of the whole depends on an interpretation of the part.
The fusion of horizons: a horizon is one’s standpoint or situatedness (in time, place, culture, gender, ethnicity, etc.). This constitutes a standard of objectivity which can function as objectivity does in positivist/empiricist epistemology. It is the outcome of inter-subjective agreement (dialogue between researchers) when conflicting interpretations are harmonised. Consensus can be achieved despite difference, indeed because of difference.

Writing a Research Proposal

What to include:
1. Description of the problem for study
2. The problem’s background and its relationship to existing scholarship
3. Definition and discussion of central concepts
4. Research aims, questions, and hypotheses
5. Sources of data and sampling procedures (if any)
6. Techniques and procedures for analysis and/or data collection and processing
7. Extended summary of progress and/or preliminary results and interpretation
8. An indication of why you believe the work is suitable for research
9. Proposal for timing and content of remaining work
10. Ethical issues

Review of Literature

Goals:
1. To put the research topic within a scientific perspective
2. To avoid duplication of previous studies
3. To avoid inadequacies of previous research
Focus:
1. Theory
2. Method
3. Data analysis

Conceptual framework

- A way of representing the main concepts, variables, themes, or ideas which underpin the research
- Provides a simplified model of the micro-world to be investigated
- Can provide limits and boundaries
- Can often be usefully represented as a diagram
- Is often helpful in formulating research questions, choosing methods, designing instruments, and analysing data
- Is used in both quantitative and qualitative research
- Can be modified as research progresses

Research Design

The process through which interesting research questions are turned into reliable, valid, and appropriate research procedures which will produce answers to the research questions. Research designs are ‘plans and procedures for research that span the decisions from broad assumptions to detailed methods of data collection and analysis. … Informing the decision should be the worldview assumptions the researcher brings to the study, procedures of inquiry (called strategies), and specific methods of data collection, analysis and interpretation’ (Creswell, 2009, p. 3).

Common Approaches to Educational Research

- Survey
- Experimental
- Case study
- Action research
- Ethnography

Survey

Purpose: To provide descriptive or correlational information about a particular state of affairs (or sample) at a particular point in time.
Sample: Usually large and representative of some population.
Typical data collection methods: Questionnaire, telephone interview, structured observation.
Typical data analysis methods: Descriptive statistics, correlation of variables.

Survey: Typical example

Research question: What is the incidence of disruptive behaviour in HK classrooms?
Sample and access: A random sample of HK schools; all teachers within each school.
Data collection methods: Teacher questionnaire.
Data analysis methods: Frequency distribution of disruptive behaviour; comparison by age, teacher experience, socio-economic status, etc.
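The survey analysis described here, frequency distributions and comparisons across teacher groups, can be sketched with pandas (the responses below are invented for illustration):

```python
import pandas as pd

# Hypothetical questionnaire responses: one row per teacher.
responses = pd.DataFrame({
    "experience": ["novice", "novice", "veteran", "veteran", "novice", "veteran"],
    "disruption": ["high", "low", "low", "low", "high", "high"],
})

# Frequency distribution of reported disruptive behaviour.
freq = responses["disruption"].value_counts()

# Cross-tabulation: disruption level broken down by teacher experience.
table = pd.crosstab(responses["experience"], responses["disruption"])

print(freq.to_dict())
print(table)
```

With real survey data, the same two operations scale directly: `value_counts` gives the descriptive picture, and `crosstab` supports the group comparisons (by age, experience, socio-economic status) the example mentions.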
Sampling

- Random sampling
- Stratified random sampling
- Cluster random sampling
- Quota sampling

Experiment

Purpose: To explore the effect of one (independent) variable on another (dependent) variable by manipulating the independent variable.
Sample: Usually two or more comparable groups, chosen at random or matched in some way.
Typical data collection methods: Measures of attainment, attitude, or behaviour.
Typical data analysis methods: Comparison of experimental and control groups.

Experiment: Typical example

Research question: What effect does reducing class size have on disruptive behaviour?
Sample: 20 matched classrooms (10 normal, 10 small).
Data collection methods: Observation, teacher questionnaire.
Data analysis methods: Comparison of ‘small’ with ‘normal’ classrooms.

Experimental Research: Principles

- Randomisation
- Pre-testing
- Experimental and control groups
- Treatment
- Post-testing

Experimental Research: Types

1. True experimental: all requirements are met
2. Pre-experimental: one or two requirements are deliberately ignored
3. Quasi-experimental: violations of certain principles are compensated for

Experimental Research: True Experimental Method

- Randomisation
- Experimental vs. control group. Why a control group?
  1. To make sure that there is only one variable causing the outcome, and
  2. To make sure that the variable under investigation, and not any other variable, caused the outcome.
- Treatment vs. placebo (ineffective treatment)
- Pre-test (to ensure the equality of the groups)
- Post-test (to observe the effect of the treatment)

Experimental Research: Pre-experimental Methods

- One-shot case study (no control group, no pre-test): X T
- One-group pre-test/post-test study (no control group): T1 X T2
- Intact group study (no randomisation): G1 X T / G2 O T

Experimental Research: Quasi-experimental Method

A compromise between the true experiment and the nature of human beings.
- Time-series study (several pre- and post-tests): T1 T2 T3 X T4 T5 T6
- Equivalent time-series method (treatment is introduced and re-introduced between sets of pre- and post-tests): T1 X T2 / T3 0 T4 / T5 X T6 / T7 0 T8

Experimental Research: Validity

Findings should be verifiable and applicable.
- Verifiability: replication brings about the same findings
- Applicability: findings should be usable in situations similar to that of the experiment

Experimental Research: Internal vs. External Validity

Internal validity: the extent to which the findings are due to the manipulation of the independent variable and not other factors.
Threats to internal validity:
1. History effect (what happens outside the experimental environment)
2. Maturation effect (systematic change over time)
3. Test effect
4. Selection effect
5. Mortality (attrition) effect
External validity: the extent to which the research findings are applicable to other similar situations, i.e. generalisability.

Case study

Purpose: To obtain as full an understanding as possible of one particular case.
Sample: One or more cases (comparative case study).
Typical data collection methods: Wide-ranging: interviews, observations, document analysis.
Typical data analysis methods: Wide-ranging, aimed at building up a full understanding of the particular case.

Case study: Typical example

Research question: What are the causes of disruptive behaviour in Mrs Mok’s classroom?
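The one-group pre-test/post-test design (T1 X T2) above lends itself to a paired comparison, since the same participants are measured before and after the treatment. A minimal sketch with scipy, using invented scores:

```python
from scipy import stats

# Hypothetical T1 (pre-test) and T2 (post-test) scores for the same
# eight learners; the treatment X happens between the two measurements.
pre  = [52, 60, 48, 55, 63, 50, 58, 61]
post = [58, 64, 51, 60, 70, 55, 62, 66]

# Paired-samples t-test: tests whether the mean pre/post difference is zero.
t_stat, p_value = stats.ttest_rel(pre, post)

print(t_stat, p_value)
```

Note that a significant result here cannot rule out history, maturation, or test effects, which is exactly why this design is classed as pre-experimental: without a control group, internal validity remains weak.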
Sample: Mrs Mok’s classroom.
Data collection method: Systematic classroom observation.
Data analysis method: Correlation of classroom events with disruptive behaviour.

Action Research

Purpose: To bring about change in a situation or practice and examine the effects of the change.
Sample: Usually small; often a single practitioner-researcher.
Typical data collection methods: Varied, including observation and researcher diaries.
Typical data analysis methods: A continuous cycle of planning, action, evaluation, and reflection.

Action research: Typical example

Research question: How can Mrs Mok improve behaviour in her classroom?
Sample: All the pupils in Mrs Mok’s classroom.
Data collection methods: Teacher diary.
Data analysis methods: Analysis of the diary for patterns, possible causes and effects.

Ethnography

Purpose: To describe and understand a context or culture from the point of view of its participants.
Sample: Usually focuses on a group or community in its natural setting, rather than particular individuals.
Typical data collection methods: Varied, including participant observation and collection of cultural artefacts.
Typical data analysis methods: Varied, including grounded theory.

Ethnography: Typical example

Research question: What meanings do teachers and pupils attach to disruptive behaviour in different classrooms?
Sample: A small number of contrasting classrooms.
Data collection methods: Observations; interviews with teachers and pupils.
Data analysis methods: Grounded theory.

Data analysis: Decision-making process

Consider the following:
1. The type of your research question
2. The types of items and scales included in your questionnaire
3. The nature of the data for each variable
4. The assumptions that must be met for each analytical technique

Normality

A normality check (i) indicates whether the variables are distributed normally across the research instruments, and (ii) informs the researcher in taking appropriate action in case of any significant violation of normality.
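The normality screening discussed in this section, checking skewness and kurtosis against a rule-of-thumb band, can be sketched with scipy (the scores are randomly generated for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented test scores, drawn from a roughly normal distribution.
scores = rng.normal(loc=60, scale=10, size=200)

skewness = stats.skew(scores)     # symmetry of the distribution
kurt = stats.kurtosis(scores)     # peakedness (excess kurtosis: 0 if normal)

# Rule-of-thumb screen: values between -2 and +2 are commonly taken
# as acceptably normal (Bachman 2003, cited below).
acceptable = (-2 < skewness < 2) and (-2 < kurt < 2)

print(skewness, kurt, acceptable)
```

Note that `scipy.stats.kurtosis` reports excess kurtosis (0 for a perfect normal distribution), which is the convention the -2/+2 rule of thumb assumes.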
Tabachnick and Fidell (1996) suggest that a non-normal distribution of variables degrades the results of a study. Similarly, Bradley (1982) asserts that as distributions depart from normality, statistical inferences become less and less robust.

The normal distribution of variables is expressed in terms of ‘skewness’ (the symmetry of the distribution) and ‘kurtosis’ (the peakedness of the distribution). A common rule of thumb is that values between -2 and +2 for skewness and kurtosis indicate a normal distribution (Bachman, 2003). However, Kunnan (1995) refers to kurtosis values of 4.57 and 5.21 as ‘reasonably normal’, and Tabachnick and Fidell (1996) treat skewness and kurtosis between -4 and +4 as a normal distribution. As the sample size gets larger (100 or more), the effect of an abnormal distribution diminishes. Tabachnick and Fidell (1996:73): “In a large sample, a variable with statistically significant skewness often does not deviate enough from normality to make a substantive difference in the analysis”. It is also believed that the underestimation of variance due to high positive kurtosis disappears when the sample is large (200 or more) (op. cit.).

Parametric/Non-parametric statistics

‘Parametric’ comes from ‘parameter’, a characteristic of a population. Parametric tests (e.g. t-tests, ANOVA) make assumptions about the population from which the sample has been drawn; this is what licenses generalisability.

Basic quantitative techniques

- Frequency distributions
- Means and standard deviations
- Cross-tabulation and contingency tables
- Comparison of groups on one dependent variable (t-tests, ANOVA, MANOVA)
- Correlation and regression
- Factor analysis
- Structural equation modelling

Choosing the right statistic

When analysing statistically you need to look at:
1. The type of research question,
2. Which variable(s) you want to analyse, and
3. The nature of the data.
Statistical techniques explore:
1. Relationships among variables,
2. Differences between groups,
3. Causal relationships between/among variables, and
4. Identification of underlying structure (i.e. traits).

Exploring relationships

Correlation studies:
- Show the strength and direction of a linear relationship between two variables. The Pearson product-moment coefficient is designed for continuous variables (it can also be used with one continuous and one dichotomous variable).
Partial correlation:
- An extension of Pearson correlation; it allows the researcher to control for the possible effects of confounding variables.
Multiple regression:
- A more sophisticated extension of correlation, used to explore the predictive ability of a set of independent variables on one continuous dependent variable.
Factor analysis:
- Used to condense a large set of variables or scale items into a smaller, more manageable number of factors.
Chi-square:
- Used with categorical variables to test their relatedness or independence.

Exploring differences

Such statistics are used when the researcher wants to find out whether there is a meaningful (i.e. statistically significant) difference between/among groups. This involves comparing the mean score of each group on one or more dependent variables.
T-test: compares the mean scores of two groups on some continuous variable:
- Paired-samples t-test
- Independent-samples t-test
One-way analysis of variance (ANOVA):
- Is similar to the t-test
- Compares the mean scores of two or more groups on a continuous variable
- Is ‘one-way’ because the impact of only one independent variable on the dependent variable is examined
- Shows whether the groups differ but does not tell the researcher where the difference lies; post-hoc tests (or ‘planned comparisons’) are used for that purpose
Types of one-way ANOVA:
- Repeated measures
- Between groups
Two-way analysis of variance:
- Looks at the impact of two independent variables on one dependent variable
- Allows a test for an ‘interaction effect’ (i.e. one independent variable affects the other)
- Allows tests for ‘main effects’ (i.e. the overall effect of each independent variable)
Types of two-way ANOVA:
- Between groups
- Repeated measures
- Combined between-groups and repeated measures (mixed between-within design, or split-plot)
Multivariate analysis of variance (MANOVA):
- Allows comparison of groups on a number of different but related dependent variables
- Can be used with one-way, two-way, and higher factorial designs involving one, two, or more independent variables
Analysis of covariance (ANCOVA):
- Allows statistical control for the possible effects of an additional confounding variable (covariate)
- Statistically removes the effect of the covariate
- Can be used as part of a one-way, two-way, or multivariate design
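The decision logic above — relationship between two continuous variables, difference between two groups, difference among three or more groups — maps directly onto three scipy calls. A minimal sketch with invented data (the variables echo the age/optimism and group-comparison examples from earlier in the document):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented data: age and optimism with a built-in linear link,
# plus scores for three hypothetical proficiency groups.
age      = rng.integers(18, 60, size=90).astype(float)
optimism = 0.1 * age + rng.normal(0, 2, size=90)

low  = rng.normal(50, 8, size=30)
mid  = rng.normal(55, 8, size=30)
high = rng.normal(62, 8, size=30)

# Relationship between two continuous variables -> Pearson correlation.
r, p_corr = stats.pearsonr(age, optimism)

# Difference between two groups on a continuous variable -> independent t-test.
t, p_t = stats.ttest_ind(low, high)

# Difference among three or more groups -> one-way (between-groups) ANOVA.
f, p_f = stats.f_oneway(low, mid, high)

print(r, t, f)
```

A significant F here would still need post-hoc tests (as noted above) to locate which pairs of groups differ; `f_oneway` alone only says that some difference exists.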
