Psychology 253: Statistical Analyses - PDF

Summary

These are course notes for a psychology course on statistical analyses. The document provides the basic definitions of descriptive and inferential statistics.

Full Transcript

Psychology 253 Ch3: Statistical Analyses

Understanding data
You need a very clear idea of what you want to find out from the data. When we look at a dataset, the analyses we conduct fall into 2 different types:
- descriptive statistics: allow us to simply describe our data
- inferential statistics: identify significant differences or relationships
One of the most important things to remember when running and interpreting statistical analyses is that descriptive and inferential statistics always need to be used in combination to understand the dataset.

Descriptive statistics: how can I summarize my data?
Used to provide a single number that summarizes a set of continuous data in some way. They fall into 2 categories: measures of central tendency and measures of dispersion.

Measures of central tendency
Attempt to describe where the center of the dataset is with a single number; more commonly described as the average.
- mode = most frequently occurring score
- median = middle score when data points are arranged from smallest to largest
- mean = sum of all data points divided by the number of data points

Measures of dispersion
Measures of central tendency should never be interpreted on their own.
- The simplest way to look at dispersion is the range: the difference between the largest and smallest data points.
- A similar measure is the interquartile range: split the range into 4 equal-sized quarters; it gives you the smallest and largest values of the middle 2 quartiles.
- More often we use the standard deviation (SD) or variance, which are very closely related: SD = square root of the variance. A small SD shows that the data are tightly clustered around the mean. SD and variance are calculated by looking at how far each data point is from the mean.
When analysing a variable that is measured with continuous
data, report the mean and the SD.

Why statistical analyses are like cake
Understanding the variability in datasets is fundamental to most of the statistical analyses we conduct in psychology research.
- Between-groups variance (experimental variance or model variance): variance the researcher caused, e.g. by giving one group a placebo and the other a treatment. This is the "good" variance.
- Within-groups variance (random variance or error variance): the spread of the data within each group. With this type, we have no idea why some scores are higher and some are lower.
Ideally we want most of the variance to be between-groups variance.

Inferential statistics: what is significant?
'Inferential' tells us what we aim to do whenever we run an analysis of statistical significance. There is a large number of tests and options available; the first thing to look at is the methodological approach, as each is associated with different statistics:

Quantitative research design → Experimental design → t tests / ANOVA
Quantitative research design → Correlational design → Correlation / Regression

Hypothesis testing
Significance is all based on hypothesis testing, called Null Hypothesis Significance Testing (NHST). With statistical significance tests we want to know whether we can be confident that our findings in the sample reflect what would be found in the real world. Imagine 4 outcomes in a research study. The ideal outcome = finding something in the sample that exists in the population = a correct decision. There are 2 possible errors you could make:
- Type I error: finding an effect in the sample that doesn't exist in the population, meaning the findings don't accurately reflect the population. This is the type of mistake you most want to avoid.
- Type II error: failing to find anything in the sample when there is something to be found in the population.

What is significance testing?
Significance is denoted by a p value: the probability that you committed a Type I error. Smaller p values are good. How small does the p value have to be?
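The descriptive measures defined above (mode, median, mean, range, and SD) can be sketched with Python's standard statistics module. Python is used here only as an illustration, since the course itself works in SPSS, and the scores below are invented example data.

```python
# Sketch of the descriptive statistics described above, using Python's
# standard library. The scores are invented example data.
import statistics

scores = [4, 5, 5, 6, 7, 8, 8, 8, 10]

mode = statistics.mode(scores)          # most frequently occurring score
median = statistics.median(scores)      # middle score when sorted
mean = statistics.mean(scores)          # sum of scores / number of scores
data_range = max(scores) - min(scores)  # largest minus smallest

# Sample variance and SD: based on how far each data point is from
# the mean; the SD is the square root of the variance.
variance = statistics.variance(scores)
sd = statistics.stdev(scores)

print(mode, median, data_range)  # → 8 7 6
print(abs(sd ** 2 - variance) < 1e-9)  # → True (SD squared is the variance)
```

Note that statistics.variance and statistics.stdev compute the sample (n − 1) versions, which is what you would report for data from a sample.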
We call this criterion the alpha level: alpha = .050, which means there is a 5% chance of committing a Type I error. Interpret any p value on the basis of this.

Significance is like a cake
All statistical analyses essentially do the same thing: attempt to explain variance. They work by looking at the ratio between the amount of between-groups variance and within-groups variance.

Degrees of freedom (df)
Significance is affected by sample size, and this is seen in the df. A difference of a certain size between conditions can lead to different significance findings depending on sample size: more participants = easier to find a significant result.

Assumptions of parametric analysis (PA)
When we talk about inferential statistics, we are usually talking about PA. PA represent the gold standard of the various methods and are seen as preferable, but they depend on certain assumptions about the methodology being met. There are 4 assumptions that are relevant across analyses:
- Independence of observations: confidence that each individual participant's data does not influence others'. We need each participant to have a fair chance of providing data that reflects their own performance. Usually pretty easy to ensure.
- Interval- or ratio-level data: PA are only robust with data at the interval or ratio level. When you design a study using quantitative methodology, aim to collect interval or ratio data.
- Normal distribution: when data are normally distributed, you expect most participants to get scores around the middle of the range, with lower and higher scores symmetrically distributed around the mean. If the data are not normally distributed, that is a problem. It could be skewness, where the 'peak' in frequencies falls toward the lower or higher end of the scores: positively skewed = the peak falls at the lower end, with a tail toward higher scores; negatively skewed = the peak falls at the higher end. It could also be kurtosis: in a leptokurtic distribution the vast majority of the dataset falls in the middle with very little around it; in a platykurtic distribution the data points are widely distributed, showing a flatter and broader distribution.
- Homogeneity of variance: the assumption that the variance in the dataset will be similar
across all conditions. It doesn't matter whether there is lots of variance or little variance, as long as it is similar.

PA vs non-PA
Ideally, use PA. They are not appropriate if the assumptions have been violated. If the assumptions have been violated, there are 2 options:
1. use the non-parametric equivalent analyses
2. use statistical corrections in SPSS when running the PA

Ch4: Intro to SPSS
Chapter example: social anxiety in a sample of 50 teenagers.

Navigating SPSS
'Data view' = where all the raw data we collect belongs; the place numbers are typed into.
'Variable view' = where we tell SPSS about the variables.

Data entry golden rules
Each row is 1 participant: all their data should be in the same row, none of their data should be anywhere else, and no data from another participant should be in their row.
Each column is 1 variable: all of the data for a variable should be in one column, none of it should be elsewhere, and no data from another variable should be in the column.

3 steps for data entry
1. list all the variables that you collected data for in your study
2. name and define each of your variables in 'Variable view'
3. enter the raw data that you have collected into the SPSS 'Data view'

Variable view settings
- Name: the name in the column heading in 'Data view'; should be short, can't contain spaces or start with a number.
- Type: defines the kind of data you will input; the default is 'numeric', but you may want to change this.
- Width: defines the number of characters you can enter for your data; does not normally need to be changed.
- Decimals: changes the number of decimals shown.
- Label: a more detailed label that will be shown in any output you create; can contain spaces and start with a number.
- Values: give word labels to any numbers that represent categorical value codes, e.g.
No = 0; Yes = 1.
- Missing: if you have missing data, create impossible values and tell SPSS to ignore them.
- Columns: changes the physical width of the column in Data view; will not affect the data you enter.
- Align: changes the alignment of how you see the data.
- Measure: what type of data was collected.
When defining variables in Variable view, the best way is to work through each variable in turn and then work through all the options.

Running simple analyses
There are 3 main ones: frequencies, descriptive statistics, and tests of normal distribution. The basic dialogue boxes are all the same.

Frequencies
How many participants are in certain categories?
💡 Analyse → Descriptive Statistics → Frequencies
The left side lists the variables; when you move them to the right side, you are telling SPSS you want to analyse them. SPSS creates a new output file to show you the result of the analyses.
Output table:
- Frequency: the number of participants in each group; also tells you if there is a missing datapoint.
- Percent: the percentage of the sample in each category.
- Valid percent: the percentage excluding missing datapoints.
- Cumulative percent: each frequency is added to the next one as you go.
You usually report the frequency and the percentage.

Descriptive statistics
Can be used in correlational or experimental designs. Provides a summary of any continuous
variable in the study: typically the mean and SD, but also the minimum and maximum scores.
💡 Analyse → Descriptive Statistics → Descriptives
All variables will be on the left. There will be 1 row in the output table for each variable you analysed. The way one would present and talk about this differs depending on the type of analysis used.

Testing for normal distribution
💡 Analyse → Nonparametric Tests → Legacy Dialogs → 1-Sample K-S
This runs a Kolmogorov-Smirnov (K-S) test, which tests the distribution of scores in a continuous variable. To see if the data are normally distributed, we really only need to look at the bottom row of the output table, 'Asymp. Sig. (2-tailed)', which gives us the p value for the significance of the K-S test. If the data are normally distributed, the K-S test will not be significant, i.e. p > .050.

Creating graphs
The vast majority are created in the 'Chart Builder'.
Making a histogram: the x-axis shows all possible scores in the dataset, from smallest to largest; the y-axis shows the number of participants with each score. This lets us visibly see the distribution of our dataset.
💡 Graphs → Chart Builder
Left = variables; lower left = different chart types. You can also ask SPSS to show the normal distribution curve on the histogram. You can't really use SPSS-generated graphs in a research report as they are; you will have to edit and improve them to match APA standards. To edit in SPSS, double-click on the graph to open the editor.

Tips and tricks

Computing new variables
You can easily create new variables from existing ones.
💡 Transform → Compute Variable
On the left of the dialogue box is a 'Target Variable' setting, where you name the new variable. On the right is a 'Numeric Expression' setting, where you tell SPSS how to calculate the variable.

Splitting data files
Useful for looking at different 'types' of participants within a dataset.
💡 Data → Split File
The first step is to click 'Compare groups' to tell SPSS that you want to run analyses separately. Next, move the variable you want to use to split the
file across to 'Groups Based on'. It will initially look like nothing happened, but now that the split is applied you can run analyses. The split will remain on until you tell SPSS to turn it off:
💡 Data → Split File → Analyse All Cases, Do Not Create Groups

Selecting participants
How to select particular participants for analyses whilst ignoring all others:
💡 Data → Select Cases → If Condition is Satisfied → If
You need to tell SPSS what criteria to use: either type it in, or move variables across from the list and use the calculator buttons. You can see what SPSS did in Data view.

Ch6: t tests
There are 3 different types of t test:
- independent measures t test: significant differences between 2 independent groups of participants
- repeated measures t test: compare scores within 1 group of participants across 2 conditions, where all participants take part in both
- one-sample t test: used when we have 1 set of participants who each provide only 1 data point, and we want to compare these to a single reference point
Understanding experimental data is about understanding how best to explain the variability.

Independent measures t test

Structure of a dataset
The important thing to notice about the structure of the dataset is how to set up the IV variable so it defines which condition each participant is in. All data analysed needs to be analysed as numbers, so we need to know the values you gave to each condition when we run the t test.

Running an independent measures t test
💡 Analyse → Compare Means → Independent-Samples T Test
On the left are all the variables in the dataset; move the variables to be analysed to the right. The DV goes in the 'Test Variable(s)' box; the IV goes in the 'Grouping Variable' box. SPSS adds '(?,?)' to ask you to define the 2 groups you're comparing ('Define Groups').

Interpreting the SPSS output
The first thing SPSS produces is the 'Group Statistics' table: the number of participants, mean, SD, and standard error in each condition. SPSS gives you descriptive statistics first, but you shouldn't report them first. The next output
= the 'Independent Samples Test' table.
2 columns labelled 'Levene's Test for Equality of Variances' determine whether the assumption of homogeneity of variance has been met (the variability should be similar in each condition). They contain 2 statistics: the F statistic (the test statistic) and the p value, which tells us whether it is significant. Write it in a report as: F = 4.52, p = .040.
3 columns hold the t test statistics: t = the t statistic, df = degrees of freedom, Sig. = the p value. There are also 2 rows: the top one = equal variances assumed; the bottom one = equal variances not assumed. The difference between them is the df: if the assumption has been violated, the df are adjusted and made smaller, making the test more conservative. These 3 columns are everything needed to write up a t test, but remember to look at Levene's test to check which row to use: if Levene's test is significant, use the lower row, 'equal variances not assumed'.
What it means if you get a negative t value: the sign simply tells you which condition had the higher or lower scores.

Writing up
Important to follow accurately: report the t statistic alongside the df.
💡 t(df) = XX.XX, p < .XXX
t tells you which statistic SPSS calculated; df tells you how many degrees of freedom there are (calculated from the number of participants); XX.XX is the calculated test statistic value; the p value tells you the significance.

Effect size
Presented along with the p value. For a t test, the most commonly used effect size is Cohen's d, which is not available in SPSS.
Effect size interpretation:
d ≥ 0.2  small effect
d ≥ 0.5  medium effect
d ≥ 0.8  large effect

Non-parametric equivalent: Mann-Whitney U
If homogeneity is violated, stick with the df correction SPSS gives you. If the DV is not normally distributed, use the Mann-Whitney U.
💡 Analyse → Nonparametric Tests → Legacy Dialogs → 2 Independent Samples
You still get 2 bits of output: 'Ranks' gives the descriptives ('mean rank'); 'Test Statistics' gives the U statistic (equivalent to the t statistic) and the p value. The test works by converting the raw DV scores into rank scores: the lowest score gets a
rank of 1, and the rank scores are used for the analysis. The Mann-Whitney U doesn't require the df or N to be reported, though.

Repeated measures t test
Works similarly to the independent version, but the data need to be structured in a different way.

Structure of the dataset
Organised differently to the independent version. There is no test of homogeneity for repeated measures; instead it is sphericity, which can't be computed if there are only 2 conditions.

Running the t test
💡 Analyse → Compare Means → Paired-Samples T Test
You need to ask it to compare 2 columns of data, so select both.

Interpreting the output
3 separate tables, but you only need to look at 2. 'Paired Samples Correlations' tends not to be reported. 'Paired Samples Statistics' gives you the descriptive statistics (participants = 'N'). 'Paired Samples Test' shows the t value, df, and p value. If p comes back as .000, that is impossible; SPSS only shows the first 3 decimal places, so report it as p < .001.

Writing up
Same as the independent version.

Effect size
SPSS doesn't offer Cohen's d for repeated measures; see the table above for interpretation.

Non-parametric equivalent
If the DV is ordinal, or either condition's DV scores aren't normally distributed, run the Wilcoxon test.
💡 Analyse → Nonparametric Tests → Legacy Dialogs → 2 Related Samples
Select both variables and use the arrow to move them across; SPSS will select Wilcoxon by default. It gives 2 tables. 'Ranks' holds the descriptive statistics ('negative ranks' and 'positive ranks'): SPSS calculates a difference score for each participant, then looks at the number of participants whose score increases (positive ranks) and decreases (negative ranks). 'Test Statistics' gives the z score (the test statistic) and the p value.

One-sample t test
The last type differs from the independent and repeated versions: there is only 1 set of data, which we compare to a reference value, e.g. measured IQ against a standardized IQ score of 100.
Structure of the dataset: 2 separate sets of participants (e.g.)
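Stepping back to the independent-measures t test above: the same workflow can be sketched outside SPSS. This is a minimal Python illustration assuming SciPy is available; the group scores are invented example data, Levene's test is used to choose between the 'equal variances assumed/not assumed' forms as the notes describe, and Cohen's d (which SPSS does not provide) is computed by hand from the pooled SD.

```python
# Sketch of an independent-measures t test plus Cohen's d, mirroring the
# SPSS workflow described above. Group scores are invented example data.
import math
from scipy import stats  # assumes SciPy is installed

group_a = [12.0, 14.0, 11.0, 15.0, 13.0, 12.0, 16.0, 14.0]
group_b = [10.0, 9.0, 12.0, 8.0, 11.0, 10.0, 9.0, 11.0]

# Levene's test: a significant p would mean homogeneity is violated,
# i.e. SPSS's "equal variances not assumed" row would apply.
lev_stat, lev_p = stats.levene(group_a, group_b)
equal_var = lev_p > .05

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=equal_var)

# Cohen's d from the pooled standard deviation (not given by SPSS).
n_a, n_b = len(group_a), len(group_b)
var_a, var_b = stats.tvar(group_a), stats.tvar(group_b)  # sample variances
pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
d = (sum(group_a) / n_a - sum(group_b) / n_b) / pooled_sd

print(f"t({n_a + n_b - 2}) = {t_stat:.2f}, p = {p_value:.3f}, d = {d:.2f}")
```

With these invented scores the t value is positive because group_a has the higher mean, matching the point above that the sign of t only tells you which condition scored higher.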
Before we can run the t tests, we need to split the file (not necessary if there is just 1 group of participants).
💡 Data → Split File
Split by group (either 1 or 2).

Running the analysis
💡 Analyse → Compare Means → One-Sample T Test
Move the score for the research question to the 'Test Variable(s)' box, then look at the 'Test Value' box: this is where you define the reference value, the value you want to compare your data to.

Interpreting the SPSS output
2 tables; each has 2 rows, one for each condition (if you haven't run a split file, you'll only have 1 row). The first table holds the descriptive statistics; the second holds the t test statistics: the t value, df, and p value.

Writing up
Same as above.

Graphing
You don't need a graph if there is only 1 condition. You can add a reference line for your reference value:
💡 Options → Y axis line

Ch7: One-way independent measures ANOVA

Overview
ANOVA is the next step up from the t test: it allows you to compare scores from more than two conditions. There are two types of ANOVA:
- independent ANOVA: compare scores across different groups of independent participants
- repeated ANOVA: compare scores from just one group of participants who repeat the study multiple times under different conditions

ANOVA and the variance cake
ANOVA works by dividing the variance in the data you collect into two different types: experimental variance and random variance. Experimental variance tells you how much of a difference there is between the conditions; you generally want to find a big difference, so you want lots of this kind of variability. When you run an ANOVA, the statistic that is calculated is an F ratio, calculated from 2 values: the mean square for the experimental variance and the mean square for the random variance. SPSS calls the mean square for the experimental variance the 'condition' and the mean square for the random variance the 'error'. These two MS
values tell you how large the two slices of cake are. The larger the F ratio, the more of the variance cake is explained by the experimental variance. An F ratio of one would mean there is exactly the same amount of experimental and random variance. The larger the F ratio, the more likely it is that your findings will be significant.

Developing hypotheses
It is important to think carefully about how you develop your hypotheses. With ANOVA it is more complicated: we are now comparing 3 or more conditions, so there are two stages to our hypothesis and our analysis. The first stage is seeing whether there is an overall difference between the conditions; this is called the main effect. It doesn't tell us exactly which conditions might differ from each other.

Statistically breaking down a main effect
With the main effect determined, you now need to think about how you will break down a significant main effect; the main effect only tells you whether there is a difference across conditions. You may be thinking a possible alternative is to run a few t tests to work this out; this is not sensible because of familywise error.
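The familywise-error problem just described can be quantified directly. A small sketch (assuming independent tests, which is a simplification) shows where figures like the roughly 15% for three t tests, and the Bonferroni-corrected alpha of .017 used later in these notes, come from:

```python
# Familywise error rate for k independent tests at significance level alpha:
# the chance of at least one Type I error is 1 - (1 - alpha)^k.
def familywise_error(alpha: float, k: int) -> float:
    return 1 - (1 - alpha) ** k

# Three pairwise t tests between 3 conditions, each at alpha = .05:
print(round(familywise_error(0.05, 3), 3))  # → 0.143, i.e. roughly a 15% chance

# The Bonferroni correction divides alpha by the number of tests:
print(round(0.05 / 3, 3))  # → 0.017, the corrected alpha used for the
                           #   three Mann-Whitney U follow-up tests
```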
With three tests, this would result in an overall ~15% chance of having committed a Type I error. The solution is to use either planned contrasts or post hoc contrasts.
- If you have a one-tailed hypothesis you must run planned contrasts. These work by making very specific sets of comparisons that break down the main effect in a very specific way.
- If you have a two-tailed hypothesis you need to run post hoc contrasts. These work by running every pairwise comparison in the dataset, similar to t tests. Because every comparison is made, you can pick up on any differences regardless of where they occur.
Post hoc contrasts tend to be more conservative than planned contrasts: some differences could be significant with a planned contrast but not with a post hoc contrast.
There are four important things to remember about selecting planned or post hoc contrasts:
1. Only report the statistical breakdown of a main effect if the main effect is significant; if it is significant, you will need to run further analysis to see where the significance came from.
2. Which type of contrast you use should be entirely based on the previous research that you review.
3. If the previous research is contradictory and there is no consistency, then a two-tailed hypothesis is necessary.
4. You must pick one type of contrast only: develop your hypothesis and test it appropriately using just one.
When you run planned or post hoc contrasts within the ANOVA, the statistics will automatically be corrected for familywise error.

Is Levene's test significant?
Yes → Games-Howell post hoc test. No → Bonferroni post hoc test.

Post hoc contrasts for a one-way independent ANOVA
If you have a two-tailed hypothesis you will need to select a post hoc test in SPSS when you run your ANOVA. When you run the analysis, ask SPSS to run both of the two different types of post hoc test. This is because post hoc tests can be quite sensitive to the assumption of homogeneity being violated, and when you run the ANOVA and ask for post hoc tests you won't yet know whether the assumption has been violated, because you won't have seen any output. It is simplest to select two different post hoc tests:
- Bonferroni post hoc test: used if the assumption has not been violated
- Games-Howell test: used if the assumption has been violated
You will only need to interpret and write up one of these, depending on whether the assumption has been met.

Contrast types (for 3 conditions):
                 Contrast 1       Contrast 2
Deviation        2 vs. 1, 2, 3    3 vs. 1, 2, 3
Simple (first)   1 vs. 2          1 vs. 3
Difference       3 vs. 2, 1       2 vs. 1
Helmert          1 vs. 2, 3       2 vs. 3
Repeated         1 vs. 2          2 vs. 3

Planned contrasts for a one-way independent ANOVA
Picking the right planned contrast is more complicated because there are lots of different planned contrasts you could pick; this decision-making can be the most difficult part of doing ANOVA. The easiest way to work out which type of planned contrast you need is to clearly write out your directional hypothesis, number each condition, and then work out which comparisons you need to make. You will always have one fewer contrast than the number of conditions.
- A deviation contrast takes each condition except the first and compares it to the overall effect of all the conditions combined.
- A simple contrast is great if you have a control condition you want to compare to each of the other conditions; make sure your control condition is numbered first in the condition coding.
- A difference contrast works by taking the last condition and comparing it to all previous conditions.
- A Helmert contrast works in the opposite direction to the difference contrast.
- A repeated contrast starts with the first condition and compares it to the next.
- There is also a polynomial contrast, sometimes called a trend analysis, which can only be used with a very particular kind of experimental design, depending on the conditions of your IV: it looks at a trend of changing scores across the conditions of your IV, which only makes sense if the conditions exist in a sensible and predetermined order.

Running a one-way independent ANOVA in SPSS
When you run the ANOVA you should select either planned or post hoc contrasts, and just one planned contrast if you have a one-tailed hypothesis.
💡 Analyse → General Linear Model → Univariate
Then click Options on the right-hand side:
💡 Options → Descriptive statistics + Homogeneity tests
Descriptive statistics = means and standard deviations for each condition; you need these to interpret the direction of findings. Homogeneity tests = run Levene's
test, so you can see whether the assumption of homogeneity of variance has been met.

Selecting post hoc contrasts in SPSS
You should only pick one way to break down a significant main effect. In the main SPSS ANOVA box, click the 'Post Hoc' button and move the IV across from 'Factors' to 'Post Hoc Tests for', so that SPSS knows you want to run post hoc tests on the conditions within this IV. The tests are split into 2 types: ones suitable if you have 'Equal Variances Assumed', meaning Levene's test is not significant; and ones suitable if you have 'Equal Variances Not Assumed', meaning Levene's test is significant and homogeneity has been violated. Select one from each section.

Selecting planned contrasts in SPSS
Click 'Contrasts'. It will say 'condition(none)' in the 'Factors' box, which indicates no planned contrasts have been selected.
💡 Change Contrast → select from the drop-down
The simple contrast has an extra step: because you are comparing each condition to the control, you can tell SPSS whether the control is first or last. You can only run 1 planned contrast at a time.

Graphing a one-way independent ANOVA
When presenting the findings from an ANOVA, you also need to present the mean and dispersion. This can be done in APA-format tables or in graphs, either through the Chart Builder or when running the ANOVA:
💡 Plots → move the variable to Horizontal Axis → show the different conditions across the x axis → select Line or Bar → exclude error bars

Interpreting SPSS output for a one-way independent ANOVA
'Descriptive Statistics' gives you the mean, standard deviation, and number of participants in each condition. The next box is Levene's test, which tells us whether the assumption of homogeneity has been met: we want the variances in each condition to be roughly similar, so if the assumption is met Levene's test should not be significant. Levene's is actually an F statistic, so you can report it in APA format. You need to acknowledge it if the assumption has not been met, as ANOVA is not a suitable method of analysis in that case. The next output is the main ANOVA box, called
'Tests of Between-Subjects Effects'. The main information comes from the 'condition' and 'Error' rows. It also gives you the mean square values, which can be helpful for understanding the dataset: the F statistic is calculated by dividing the condition mean square by the error mean square.
When looking at the main ANOVA finding, you need 2 degrees of freedom to present the analysis:
- The df for the main effect of condition is based on the number of conditions in the analysis, calculated as k − 1 (number of conditions minus 1).
- The error df comes from the number of participants: take the number of participants in each condition, subtract 1 from each condition, and add the results together.

Breaking down the main effect: post hoc contrasts (method 1)
Break down a significant main effect using post hoc contrasts: the Bonferroni test is used when homogeneity was assumed and Levene's test was not significant; the Games-Howell test is used when homogeneity was violated and Levene's test was significant. When looking at post hoc tests, you want to look at pairs of conditions and see whether the DV scores differ significantly between them. There is only one p value for each pair, so there is no additional statistic to present. If a pair differs significantly, you need to look to see which of the two conditions has the significantly higher score.

Breaking down the main effect: planned contrasts (method 2)
A planned contrast does not test every single pair of conditions, but instead a smaller and more specific set of contrasts. You only need to look at the 'Sig.' row in the output, as it tells you the p value. Look at the p value alongside the descriptives to see whether the contrast is significant, and to interpret which group has the significantly higher score.

Breaking down the main effect: planned polynomial contrast (method 3)
This type of contrast looks at the trend across the increasing levels of an IV; it only works if the conditions exist on a continuum. Interpreting this is different from reporting any other kind of contrast
because you only need to report one of the findings: the type of trend that best represents your data. You should get a contrast analysis for a linear trend and for a quadratic trend; the data can only be described as one of these, so you need to see which one fits better. Do this by looking at the 'Contrast Estimate' row for each type of trend: the larger this value, the better the data are described by that type of trend.

Writing up a one-way independent ANOVA
There is a simple structure that you should follow. First report on the assumptions of your analysis, then report the main effect. If the main effect is significant, report the planned or post hoc contrasts and use descriptive statistics to interpret them. If the main effect is not significant, you don't need to report any contrasts, but you should still report descriptive statistics without interpreting them.

Effect size for a one-way independent ANOVA
It is very simple to add effect sizes: SPSS can compute partial η², and there are conventions for what defines a small, medium, and large effect size. When you run the ANOVA and go into the Options box, you'll see that one of the options is 'Estimates of effect size'. It can be reported just after the main ANOVA statistics, using a colon; include in your write-up whether the effect is small, medium, or large.

Non-parametric equivalent
If you violated one of the parametric assumptions, it would be more appropriate to analyse your data using the Kruskal-Wallis test.
💡 Analyse → Nonparametric Tests → Legacy Dialogs → K Independent Samples
Move the DV to 'Test Variable List' and the IV to 'Grouping Variable'. There are two key tables in the output to interpret: the 'Ranks' table gives you the mean rank for each condition in the final column (SPSS orders all of the data from smallest to largest), and the 'Test Statistics' table gives the test result. You then need to break down the main effect using three Mann-Whitney U tests; you can't use the planned or post hoc contrast options here. If we run multiple statistical tests we increase the chance of
committing a Type I error, so we need to Bonferroni-correct the alpha level: run the Mann-Whitney U tests so that the results are only counted as significant if the p value is .017 or smaller.

Ch15: Correlations

Overview
The primary aim of any correlation is to explore the linear relationship between 2 continuous variables. Pearson's correlation is the parametric analysis used to analyse these relationships. The statistic that is calculated is an r value, also called a correlation coefficient; it ranges from −1 to +1, and 0 means there is no relationship at all.

Pearson's correlation
The basic parametric analysis.
💡 Analyse → Correlate → Bivariate
Simply select the two variables you wish to analyse in the SPSS dialog box. The output is quite simple (you get all the information twice). SPSS gives you three different values: the r value (Pearson's correlation), the p value, and the number of participants. Asterisks alongside the r values show you whether the correlations are significant.

Reporting a Pearson's correlation
It is important to report the statistic using the APA standard. Explicitly state whether the correlation is significant or not; for a significant correlation, also say whether the relationship is positive or negative. You also need to report the degrees of freedom, but SPSS calculates this for you.

Graphing a Pearson's correlation
You should graph any significant correlations with a scatter plot, with the line of best fit included to represent the strength of the relationship.
💡 Graphs → Chart Builder → Scatter/Dot → Simple Scatter
Drag and drop the variables onto the x and y axes. Also make sure you tick 'Total' in 'Linear Fit Lines'. The line of best fit will be straight, which gives a clear visual representation of the strength of the relationship between the two variables. You can also make some changes to the aesthetics.

Effect sizes for a Pearson's correlation
The r value can itself be interpreted as an effect size. If you have a negative correlation you can look at the values
in the same way, just add a negative sign before the value
Spearman's non-parametric correlation
the nonparametric version of Pearson's correlation
follow the same prompts as when running the Pearson's correlation, with one difference
simply change Pearson to Spearman
the output is the same and you can interpret and write up the correlation in the same way
just be clear that it was a nonparametric correlation and specify that it was a Spearman's correlation
Partial Correlations
if we collect data for a confounding variable we can statistically control for it by running a partial correlation
we can look at the correlation between the two variables we're interested in after having taken away any effects of the control variable
can only control for continuous or binary control variables
Analyze > correlate > partial
move across the variables you want to correlate, then move across the control variable
two differences in the output when looking at partial correlations vs. normal correlations
reminded which control variables were included
presented with a df rather than an N [number of participants]
a partial correlation takes into account 3 variables
the two conditions and one control
df = n - 3
reporting partial correlations is done the same way as for a standard Pearson's correlation
be very explicit that the control variable's variance was removed first
Statistically comparing correlations between different groups of participants
the final thing is to look at the correlations between variables separately for different groups
need to tell SPSS to split the data file according to the variable that defines the different groups you want to see separately
Data > split file > compare groups
it will look as if nothing has happened
the split file function does not affect any of the analyses you've already run
can interpret and write about these correlations in the usual way; ensure that you're clear which group you are talking about when reporting the correlations
consider where correlations for each
group are similar or different
possible to show 2 separate correlations on the same scatter plot, but you need to turn off split file first
split file > analyze all cases, do not create groups
next choose scatter/dot in chart builder
can also calculate the magnitude of the correlations and how they differ using online calculators
when you statistically compare correlations you will get a Z score
tells you how different the correlations are from each other
the bigger the Z score, the bigger the difference
Running multiple correlations within a dataset
if you have continuous variables within a dataset that you want to analyze using multiple correlation analyses, you need to remember the issue of family-wise error
you can Bonferroni-correct or incorporate all the variables within a single multiple regression analysis to fix this
Ch23 : Content analysis and Thematic analysis
to begin : factual coding vs. referential coding
qualitative methods can be daunting
most use the term 'coding' at some point during the description of the stages of data analysis
qualitative analysis involves familiarization with the data, followed by the researcher making some initial notes on the transcription
codes are then developed based on patterns in the initial notes, and a number of codes may be grouped together to form a theme
themes may be collated to develop labels that reflect the overarching themes
common to find most qualitative papers include a number of master themes with their own subordinate themes
note that the type of initial notes developed by the researcher will depend on the epistemological and philosophical roots of the method of analysis
remember that the main point of coding is to help the researcher organize and analyze the data by applying labels
recognize that coding systems will generally fall into one of two overarching camps: factual or referential
Factual codes = help categorize data on the basis of specific
attributes
involves less interpretation and is considered somewhat more likely to lead to clear inter-rater reliability
Referential coding = focuses on the meaning in the text rather than just nominal differences
may be influenced by theory
involves more initial decision making on the part of the researcher
treat the data heuristically
coding is the result of your attempt to organize, reduce or analyse the data
codes and themes do not just emerge from the data as though they were waiting to be discovered; this requires decision making on the part of the researcher
one of the fundamental differences in each qualitative analysis is the intention of the coding process, based on whether you're taking a positivist or more interpretivist stance
Content analysis
sometimes referred to as qualitative content analysis
involves identifying categories or themes within the data using factual coding frameworks
often used to generate frequency data
typically used within quantitative rather than qualitative designs
the main focus is to search for incidences of a predefined set of things within the data
ultimately leads to the ability to produce a numerical frequency
the source of the data is sometimes referred to as an 'artifact' and may be interview transcripts, images, TV programs
often relies on a clearly defined coding framework, which will most likely be driven by theory or a psychological model that provides a description of what you may be looking for
Types of research questions
content analysis is most suited to research questions using existing theoretical frameworks as a basis to identify categories or themes
these tend to describe something and can be used to generate quantitative data from qualitative data that can be used to test hypotheses within a quantitative design
the scope and topic choice of a content analysis research question can be extremely varied, but the single homogeneous thread is that they all clearly outline what it is they're looking for at the
start of the research process
Types of data and coding methods
the aim of content analysis is to quantify what starts off as qualitative data
participant-generated texts are also commonplace in papers using content analysis
there are a number of guidelines as to how best to conduct content analysis
the most important thing is transparency in writing up what you did
recognize that variations in content analysis are usually based on the type of coding system used to analyze the data
conventional coding systems = the researcher observes the data and then identifies codes from it during the analytical process
summative coding systems = involve the development of a manifest code before the analysis
directed content analysis = involves the identification of a coding system before analysis begins
1. Define research question and identify suitable body of material
research aims and questions are developed in much the same way as in quantitative studies
it is best to choose a specific, predefined research question when doing a content analysis rather than an exploratory one
the next stage in content analysis is to identify a body of material for analysis
the material will often be determined by the specifics of your research question
2.
Decide on recording unit and categories
involves the meticulous process of identifying how you will code and categorize the content you plan to search for
this stage is of vital importance in the content analysis process, as poorly coded categories will ultimately lead to lower inter-rater reliability
firstly, we need to identify the way in which we're going to code the data
achieved by defining the unit of measurement we'll use
the recording unit may be the manifest content of the data, and may typically be as simple as the search for an individual word
this step is essentially asking what constitutes evidence of the presence of a particular category or subcategory
more complex content analyses may use recording units that are naturally more specific to the research question
the next step involves the identification of clear and distinct categories
categories may be theory driven [top-down] or they can be formulated from an initial review of the data [bottom-up]
3. Pilot the search and check for inter-rater reliability
the researcher can conduct a small-scale pilot search of one of the data pieces
there are lots of ways to record the information; a simple tally sheet is recommended
the researcher will also be concerned with intra- and inter-rater reliability
the key to reliability in a content analysis is ensuring that the coding categories and recording units are clearly defined
intra-rater reliability = the extent to which the researcher is consistent in their own coding process across materials
inter-rater reliability = the extent to which two coders using the same coding framework end up with the same frequencies at the end of their data analysis process
can be assessed by asking a set of independent coders to analyse the sample material in the pilot stage
the data can then be compared, potentially using correlation as a statistical measure of inter-rater reliability
a difference between the frequencies in each category might mean that the categories were not
sufficiently clearly defined
4. Conduct the main analysis and write up results
the researcher can move on to the main body of the analysis
can now search the entire corpus of materials available to you that was predefined at the start of the study
the quest for inter-rater reliability should continue by ensuring that there are multiple coders taking part in the process
once frequency data is collected, it is common to explore the findings in relation to additional variables that may have formed secondary hypotheses as part of the research question
the write-up can follow standard APA formats for reporting statistics
Other considerations
several computer packages can be used to analyze qualitative data and turn it into a quantitative summary
CAQDAS provides a more robust way of ensuring reliability, because once the parameters of the codes and categories are defined, the computer searches (without the issue of human bias and individual differences) for the same terms
there are concerns about the use of software packages for some types of qualitative analysis, mostly relating to the issue of context
content analysis is seen as a middle ground between qualitative and quantitative analysis, in which the quantitative researcher recognizes the wealth of data available and uses content analysis as a mechanism for transforming it into quantitative output
Thematic analysis
considered a foundational qualitative technique and has been used as a generic term across a variety of disciplines
used to describe the way in which researchers code and search for themes in qualitative data, and is a method of analysis in its own right
involves identifying and analyzing themes within qualitative data and is frequently used with a variety of methods of data collection
suitable to use with deductive or inductive research projects
analysis of the themes involves an element of interpretation by the researchers
Thematic analysis
provides a catch-all approach, and this is by no means a disadvantage
thematic analysis is an easier solution compared to having to learn the philosophical backgrounds of IPA or discourse analysis
probably the best starting point for those wanting to do qualitative analysis for the first time, as it will provide the core skills needed for more complex analyses in the future
Types of research questions
appropriate both for research questions where there's an existing framework and where the thematic areas are developed within the analysis process
can be used to address research questions related to identifying themes within both specific small participant groups and larger or more general subpopulations
Sample size
in qualitative research, sample size is not always simply defined by the number of participants
for example, when we describe a large sample we could in fact be referring to only a few participants who provided substantial amounts of data
easier to think of sample size as the overall size of the corpus of data rather than as the number of individuals who've provided it
defining an a priori sample size and sticking to it is less common in qualitative methods
may be asked to specify sample sizes for your qualitative study; it is suggested that you write an answer that provides a suggested sample size based on previous similar studies
Phases of thematic analysis
a number of different takes on how to conduct a thematic analysis
you can begin the six-stage approach to conducting a thematic analysis once you have decided on your question and data collection method and have collected your data
Familiarization with the dataset
one of the most important stages
the extent to which you're familiar with the data at the start of the analysis stage may depend on the type of data you're using and how it was collected
may also have to transcribe verbal data into written form
you should ensure you read the material at least once in its entirety as a minimum
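The point above, that qualitative "sample size" is better measured by the overall size of the corpus than by the number of participants, can be made concrete with a short sketch. This is a toy illustration only; the transcripts and the function name are hypothetical and not from the course materials:

```python
# Toy illustration: "sample size" as corpus size in words, not participant count.
# The transcript texts below are hypothetical stand-ins for real interview data.
transcripts = {
    "participant_1": "I felt overwhelmed at first but it got easier over time",
    "participant_2": "Honestly it was fine",
}

def corpus_word_count(data: dict) -> int:
    """Total words across all transcripts: one way to view qualitative sample size."""
    return sum(len(text.split()) for text in data.values())

print(len(transcripts))                # participants: 2
print(corpus_word_count(transcripts))  # corpus size in words: 15
```

Here two participants yield a fifteen-word corpus; in a real study, a handful of interviewees providing rich accounts can produce a corpus of many thousands of words, which is why counting participants alone can be misleading.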
important that you're not tempted to skip bits and that you fully read each data source from start to finish
transcription can be done in a variety of ways depending on what the researcher is trying to capture
Coding individual data items
codes may be manifest or latent, but ultimately they should describe features of the data that are relevant to the research question
if the researcher is taking a theory-driven approach then codes may be derived from particular aspects of the theory
if the researcher is taking a more data-driven approach, the codes and ultimately the themes will depend on the data
codes can be interpretive rather than just a semantic description of a word used
Searching for themes
at this stage, the codes that have been recorded are compared and collated as we begin to try and map how the codes might be grouped or related to each other
many techniques can be used at this stage
the aim of this stage is to produce a number of candidate themes
it is also common to create a thematic map, which is a visual representation of the broader themes with each code attached to them and related to the other codes
Reviewing themes
this phase is about taking stock of what we've done so far and essentially asking the question "do my themes make sense?", specifically in relation to the entire dataset
it may be at this stage that there's not enough data to justify a theme being a theme, and it may be broken down and placed in another theme or perhaps discarded
this is a 2-stage process
firstly, the researchers should review all the initial codes and their associated data extracts to ensure they fit the theme
secondly, the researchers should then look at the wider dataset to see if the developed thematic map makes sense in the context of the dataset
this stage may then end with a revised version of the thematic map, one that should accurately reflect each individual theme but which also considers how each theme relates to the
other as a whole
Naming and defining themes
focuses on the final refinement of themes and should involve the researcher aiming to identify the essence of what each theme is about
you will also need to clearly define whether each theme has any subthemes
the names of the themes should encapsulate the overarching meaning of the theme and not simply be a one-word name
Writing up
the final stage in the analytic process
need to ensure the reader is provided with a clear and valid representation of the dataset
for the most part, a thematic analysis write-up will contain all the core aspects of your psychology report
given that thematic analysis has been described as a flexible method of analysis that does not adhere to any specific epistemological position, it's important that your write-up also clearly outlines the epistemological background that informs the design and analysis
different analytical methods may do things differently
Intro to analysis
reiterate the type of analysis and provide a short paragraph detailing any specific transcription symbols
Analytical overview
an overview of themes and subthemes
do not use the term 'emergent themes', on the basis that it implies the themes were already there to be discovered rather than produced through the analytical process
state how many themes were found and then list them
the order is not really relevant but should make sense
can also include a thematic map at this stage, which shows all themes and subthemes and may indicate relationships between them
Presentation of analysis
get into the detail of your analysis
start by giving an overview of the first theme
outline the theme and maybe reiterate the subthemes within it
then go on to outline the first subtheme, and so on
presenting quotes can be done in several ways, but it is better to indent them to the center and separate them from the main text
if you're working on a more descriptive analysis it may be enough just to provide the quotes as evidence with a brief
description
most qualitative analyses require you to provide some interpretive commentary after each quote
you can also evaluate the commentary in relation to the literature at this point, or save this until the discussion section
Key considerations : ensuring themes are grounded
content analysis takes qualitative data and allows us to transform it into quantitative data via the process of systematically and exhaustively defining recording units and categories
it collects and analyzes frequency data and strives for inter-rater reliability
thematic analysis is a foundational qualitative method of analysis that is flexible in its approach but ultimately acknowledges the importance of context
when engaging in thematic analysis, we are adopting a paradigm shift away from the hypothetico-deductive models typical of quantitative designs to adopt the perspective that people are social beings, and that their version of reality is a byproduct of social constructionism
qualitative research needs to account for context
when analyzing qualitative data we must consider the individual circumstances of the participants
demographics, past experiences, relationships
when creating codes and ultimately themes for our research
Ch24 : Grounded Theory
An intro to the philosophy of grounded theory : generating rather than testing theory
a qualitative technique which involves developing theory from the data
not just a single specific method, but typically seen as an overarching approach to study design in which the researcher fundamentally aims to generate new theory
inductive, and aims to develop theory about a phenomenon in a particular context
when we talk about the development of theory in this type of analysis it is substantive, rather than serving to develop general laws or aiming to produce an overarching theory that can be generalized to all populations
the philosophy takes the approach of trying to develop a theory based on the specific data for the specific cases
for a specific phenomenon
many different variations
early versions attempted to position the analysis in positivism
Types of research questions
grounded theory is especially appropriate for research questions on topics that are little understood or under-researched
there are lots of variations in the subject matter covered by grounded theory studies
the topic will focus on some type of psychosocial phenomenon
the aim will be explanatory rather than confirmatory
Getting to grips with key features and terms
there are a number of variations in grounded theory, but there are overarching commonalities between them all
key features:
concurrent data collection and analysis
focus on psychosocial processes
a coding process that produces codes and categories that are developed from the data
memos to aid the analysis
inductive production of abstract categories
use of theoretical sampling
incorporation of categories into an overarching theoretical framework
a continuous process of collection and analysis
range of different epistemological and philosophical variations of grounded theory; the focus here is on a general procedure that draws on some of the general grounded theory features
See table on pg.
501
Sampling and collecting data
grounded theory usually relies on purposive sampling
purposive sampling searches for information-rich cases of the phenomenon
decisions on who to include are subjective and made by the researcher
data will most likely be gathered using interviews or focus groups
open-ended or semi-structured interviews are commonly used in order to ensure that participants can provide detailed accounts of the phenomenon in question
only need to transcribe the relevant bits of data
common to use the Cornell note-taking method when transcribing
Producing memos
analysis starts immediately, both in terms of coding the first case and by producing memos from the very start of the research
memos should be used right from the start of the data collection phase to record your thoughts on potential codes and interpretations
can be kept in a separate document
a central part of the analytical process
using memos is what allows you to reflect on and analyze the data to develop a theory
Analysing the data : different types of coding and the development of categories
grounded theory analysis has lots of technical terms
we identify and develop codes in the data, which group together to represent concepts
we group concepts to form categories
a theory is developed by describing the relationship between categories
Coding
there are three levels of coding that take place during a grounded theory analysis :
open
axial
selective
Open coding
initial breaking down of the transcripts to generate initial codes and concepts
made up of words, sentences, even paragraphs of the participant's transcript that describe a particular feature of the phenomenon
there is no specific length of text a code should be
the most important thing is that you can make sense of it in the context of the wider data
this level of coding is conceptualizing the data at the first level of abstraction [descriptive]
an initial open code should not only describe the
data but must also attempt to provide some analysis
theoretical sensitivity is important
two types:
in vivo codes = use the specific words of the participant as a label; used when you may not be clear on what the code is conceptually telling you
constructed codes = more conceptual
Grouping open codes as concepts
involves taking these open codes and grouping them together to identify and categorize concepts, labeling them, leaving the researcher with an initial set of conceptual categories
can be at a higher level of abstraction
aiming to summarize all related codes as a single concept, and then label groups of concepts as a category
Constant comparison
grounded theory analysis is also referred to as the constant comparative method
constant comparisons of codes and categories with the data should take place
should be comparing and contrasting chunks of the text with categories that have already been developed, to identify similarities and differences
comparisons can also be made using the flip-flop technique
the researcher takes a specific concept from the data and compares it with its hypothetical opposite
constant comparison means that throughout this stage of analysis, codes and categories may be revised and relabeled
theoretical saturation is the point at which new codes generated by theoretical sampling consistently fit into pre-existing categories and no new categories are formed
the researcher must continue constant comparison up until theoretical saturation occurs
Abstract definitions
the formal definitions of each category and associated subcategory at an abstract level
these should reflect the characteristics of each category, including the defining psychological constructs
including answering questions about when things occur, how they manifest, and the consequences of them happening
Theoretical sampling at every stage
refers to the process by which we collect data for the purpose of generating theory, with the researcher collecting data, coding and analyzing the data, then
making decisions about what data to collect next
never too early to start this
at the beginning of a grounded theory project, the researcher identifies the research question and identifies a specific group of people based on a purposive sample
purposive sample = a group of people based on a particular set of criteria
researchers should then actively seek participants who can confirm categories that are being developed
this is an iterative process that should not be left to the end
Axial coding
involves looking for relationships between categories
axial codes are the connections that join categories of things together
we recognize each code is unique, but can also describe the axial codes that make them similar or different to each other
achieved by using a coding paradigm
the paradigm is used as a model to stimulate hypotheses and questions
also leads to a refining and reduction of what may have started as a potentially extensive list of categories
Selective coding and identifying a 'core category'
first need to identify a core category and relate it to the other main categories before beginning to formulate an overarching theory
very similar to axial coding, but we are now progressing to explore relationships not only at the conceptual level, but also at the property and dimensional level
the core category is described as the essential concept which brings together all the other categories
purist grounded theory methodologists generally choose only 1 core category
likely to be abstract, but must remain grounded in the data
Theoretical integration
involves producing a storyline of the theory from those categories
the storyline is a representation of how the theory sits together
Grounding the theory and filling in the gaps
the first attempt at a theory is now represented by a combination of theoretical memos and the integrative diagram of the relationship between core categories and other main categories
validate the theory against the data by
taking each statement and individually reviewing them to assess if they provide support
once this process is fully complete we can begin the write-up stage
the write-up is fairly standard
some variations depending on whether individual authors have adhered to more of a purist version or have simply used some of the methods to take a grounded theory approach to their data collection
the results section is likely to be the most substantial part
abbreviated grounded theory = the researcher tends to be confined to coding and constant comparative analysis; the other stages are not conducted
Ch25 : Interpretative Phenomenological Analysis
Intro
concerned with the detailed examination of core structures of individual experience and how individuals make sense of that experience
emphasis is placed on the meanings that people hold about their experiences
hermeneutics = a philosophy that focuses on how we interpret our world
phenomenology = a branch of philosophy that focuses on understanding and making sense of our experiences
idiography = represents an analytical commitment to in-depth exploration of the individual
Interpretation and hermeneutics
the aim of hermeneutics was to understand the lived experience and to ask questions about interpretation
it's not about truth checking or validating accounts, but rather the perception of the experience
the analyst should offer interpretations of the text which the participant may not be able to
argues that we are all interpretive beings and we will naturally try to make sense of the things we experience
Descriptive vs interpretative phenomenology
can differentiate between two overarching types of phenomenological approaches
the interpretive type is additionally concerned with the sense-making activity
interpreting how the participant makes sense of their experience of the phenomenon of interest
the descriptive type's focus is on exploring the essence of the experience and identifying its essential structure
Descriptive phenomenology and Husserl
essentially developed from Husserl's perspective on how the scientific study of an individual's experience should be conducted
argues that subjective experiences are worthy of attention
Husserl believed it was possible to identify common core concepts of the lived experiences of a phenomenon specific to a homogeneous group
the main aim is to explore and provide a detailed description of lived experiences
Interpretative phenomenology and Heidegger
Heidegger argued that we should go beyond just the description
instead we should search for meaning embedded in common life practices
interpretive inquiry may be inductive or deductive in its approach to using theory to guide and shape the initial planning of research
the final analysis is derived from a combination of the meanings articulated by the participant and the researcher
called a double hermeneutic cycle
Commitment to idiography
IPA demonstrates a commitment to the idiographic approach in two ways
firstly, it focuses on detailed in-depth analysis
secondly, it focuses on a commitment to understand how particular experiential phenomena have been understood
one should ensure a fully worked-up analysis of the first case prior to moving on to the next
interested in quality over quantity
Research questions and data types
the research question should always involve the exploration of some form of experientially based event
questions tend to be open-ended, meaning IPA papers opt for a more general statement of exploratory focus, allowing for much more detail
the focus on experiences makes IPA highly suited to research exploring unique examples
the variety of research topics is as diverse as psychology itself
Data collection methods
usually semi-structured interviews
can also be creative and use methods such as photo elicitation
focus groups can also be used, as well as secondary sources
Sample size
the choice and size of sample must stay true to the phenomenological roots of IPA
aim of ensuring each participant is able to offer
an insight
therefore small sample sizes are perfectly acceptable
the sample should be homogeneous, but participants do not need to have the same demographic profile
Interview schedules + conducting interviews
the interview phase involves the researcher trying to put themselves in the shoes of their participant
should allow the participant to give their own account
encourages a flexible use of the interview schedule, eliciting experiential accounts with probing questions that may not have formed part of the original schedule
important to foster an empathetic and sensitive manner
should always aim to ask questions that initially ask participants to describe the experience, and then follow up with exploratory questions
don't want to bluntly stop participants discussing things that may lead to a narrative that's relevant to the research
vital that the line of questioning does what it says
it's often used in IPA as a method of analysis with sensitive topics
Stages of analysis
the most important aspect in qualitative analysis is adherence to the epistemological and philosophical roots
OK to adapt and deviate, but be careful to be consistent
Transcribing data
recommended that this is done as soon as possible
verbatim transcription is usually used
should be viewed as the first stage of analysis: the more involved you are, the more likely you will feel immersed in the data
Stage 1 : reading and re-reading
gives it a personal feel
start to make initial interpretations of the experiences you're reading about
actively aim to go through the transcription slowly to consider the words and phrases used
Stage 2 : initial noting
formally, you move on to examining the transcript to develop initial codes
focused on a model of free text analysis
should be an honest and instinctive commentary about exactly what came to your head as you become more and more familiar with the transcription
should be an evident phenomenological focus
Stage 3 : developing initial themes
themes should be
formed from clusters of discrete chunks of the transcript and your associated commentary Can be useful to create a table of these initial themes Stage 4 : searching for connections across initial themes One way of doing this is to print out a list of the themes, cut them up into separate strips, try to sort them into piles Stage 5 : moving to the next case aim to bracket off what you have interpreted and abstracted from the previous case by repeating stages one to four on this new case Stage 6 : looking for patterns across cases Take the tables of initial themes and look to see what patterns are apparent across cases These final larger groups of themes or grouped together to form a master theme Reflexivity pivotal idea Psychology 253 60 Role is obvious when we consider the fact that we are also interpretive beings Important to involve the reader in your reflexive process Ch26 : Discourse Analysis – Samuel Fairlamb What is discourse analysis Approach that has interdisciplinary roots and has been adopted in various academic disciplines Has various different meanings Concerns analysing the language, not at a mechanical level though Focuses on how people use language Discourse analysis is embedded in a social constructionist approach suggests that humans take on an active role in constructing knowledge and meaning that is consensually validated between humans by using language Discourse is the way of speaking about a topic and how it constructs meaning What is discursive psychology refers to one specific approach that can be considered to fall under the umbrella of discourse analysis Action orientation discursive psychology keeps in line with the social constructionist ethos of discourse analysis by examining how people use language to construct something Questions the extent to which we can infer what people actually think based on what they say Psychology 253 61 Focuses on what people say within interactions as having an action orientation Discursive psychology aims 
to highlight the actions ingrained within language also views the context in which the interaction is produced as important in understanding what language is doing Stakes the ways in which people talk about things can vary according to its context referring to people's stakes may attempt to discredit certain versions of events Stake = having desires, beliefs, loyalties Discursive psychologists are interested in how people manage these within social interactions Interpretive repertoires refers to a coherent way of speaking about something Interpretive repertoires are the habitual lines of argumentation that may rely on familiar cliches Refer to things that everyone knows Tension may also exist between these different positions that people may take up about a subject may be referred to as ideological dilemmas as they represent how people may shift between opposing viewpoints or ideologies part of discursive psychology might be to examine how people negotiate dilemmas within everyday interaction When to use discursive psychology Discursive psychology seeks to explore how people construct their experiences Emphasis is not only on what is being said, but also how it is said and what purpose it serves Psychology 253 62 Research questions appropriate or ones that concern what people are doing with their language Researchers will often narrow their focu
