Descriptive Statistics and Inferential Statistics Past Paper
Mark Van M. Buladaco
Summary
This module reviews descriptive statistics and sampling techniques, and includes a discussion of hypothesis testing and inferential statistics. It's intended for undergraduate-level learners studying statistics.
Full Transcript
Mark Van M. Buladaco, MIT, PhilNITS (JITSE) FE Quantitative Methods

MODULE 2: DESCRIPTIVE STATISTICS AND INFERENTIAL STATISTICS

OVERVIEW
This module reviews the different tools and techniques used in descriptive statistics from your previous Statistics or Research course. Descriptive statistics is the term given to the analysis of data that helps describe, show, or summarize data in a meaningful way so that, for example, patterns might emerge from the data. This module also discusses different sampling techniques and how to determine the appropriate sampling technique for a given problem. It likewise includes a discussion and demonstration of hypothesis testing and the process of solving statistical problems using inferential statistics. Inferential statistics are techniques that allow us to use samples to generalize about the populations from which the samples were drawn.

MODULE OBJECTIVES
At the end of this module, the student is expected to be able to:
1. Demonstrate the ability to analyze and solve problems using descriptive statistics and sampling.
2. Demonstrate the ability to analyze and solve problems using hypothesis testing.
3. Appreciate the process of solving statistical problems in descriptive and inferential statistics.

LESSONS IN THIS MODULE
Lesson 1: Review on Descriptive Statistics and Sampling Techniques
Lesson 2: Hypothesis Testing (T-Test and ANOVA)

This module can be redistributed but the contents of this module are solely owned by the author.

LESSON 1: Review on Descriptive Statistics and Sampling Techniques

LEARNING OUTCOMES
At the end of this lesson, the student is expected to be able to:
1. Understand the tools used in descriptive statistics.
2. Analyze how to select appropriate sampling techniques for a given problem.
3. Solve statistical problems using descriptive statistics and sampling techniques.
TIME FRAME
Week 4

INTRODUCTION
Welcome to Lesson 1 of Module 2: DESCRIPTIVE STATISTICS AND INFERENTIAL STATISTICS! Here you are going to learn about descriptive statistics and sampling techniques. You will review the tools used in descriptive statistics, especially in solving problems. Descriptive statistics provide information about our immediate group of data. For example, we could calculate the mean and standard deviation of the exam marks of 100 students, and this could provide valuable information about this group of 100 students. Sampling techniques are also included in this lesson: you will learn how to select the appropriate sampling technique for a given scenario or problem. What are you waiting for? Let us now discover the wonders of descriptive statistics and sampling techniques!

ACTIVITY: Hidden Objects
Find the objects indicated and put circles on the objects you find. You may copy the picture into a word document, a slide presentation, or any software you are comfortable with, and submit your output in the LMS.

ANALYSIS
1. Review the different charts and graphs used in Statistics. Describe each and give an example.
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________

ABSTRACTION
Knowledge of some basic statistical procedures is essential for researchers proposing to carry out quantitative research. They need statistics to analyze and interpret their data and to communicate their findings to others in education.
Researchers also need an understanding of statistics to read and evaluate published research in their fields.

SCALES OF MEASUREMENT
Data come in a wide range of formats. For example, a survey might ask questions about gender, race, or political affiliation, while other questions might be about age, income, or the distance you drive to work each day. Different types of questions produce different types of data to be collected and analyzed, and the type of data you have determines the type of descriptive statistics that can be computed and interpreted.

A fundamental step in the conduct of quantitative research is measurement: the process through which observations are translated into numbers. S. S. Stevens (1951) is well remembered for his definition: "In its broadest sense, measurement is the assignment of numerals to objects or events according to rules." Quantitative researchers first identify the variables they want to study; then they use rules to determine how to express these variables numerically. The variable programming language preference, for example, may be measured according to the numbers indicated by students who are asked to select among (1) Java, (2) C++, (3) PHP, (4) Visual C#, or (5) other. The variable weight may be measured as the numbers observed when subjects step on a scale. The nature of the measurement process that produces the numbers determines the interpretation that can be made from them and the statistical procedures that can be meaningfully used with them. The most widely quoted taxonomy of measurement procedures is Stevens' scales of measurement, in which he classifies measurement as nominal, ordinal, interval, and ratio.

Table 2.1.1 Scales of Measurement

Nominal Scale
The most primitive scale of measurement is the nominal scale.
Nominal measurement involves placing objects or individuals into mutually exclusive categories. Numbers are arbitrarily assigned to the categories for identification purposes only. The numbers do not indicate any value or amount; thus, one category does not represent "more or less" of a characteristic. School District 231 is not more or less of anything than School District 103. Examples of a nominal scale are using a "0" to represent males and a "1" to represent females, or assigning numbers to categories of religious preference.

Ordinal Scale
An ordinal scale ranks objects or individuals according to how much of an attribute they possess. Thus, the numbers in an ordinal scale indicate only the order of the categories. Neither the difference between the numbers nor their ratio has meaning. For example, in an untimed footrace, we know who came in first, second, and third, but we do not know how much faster one runner was than another. A ranking of students in a music contest is an ordinal scale: we would know who got first place, second place, and so on, but we would not know the extent of the difference between them. The essential requirement for measurement at this level is that the relationship be transitive: if object X is greater than object Y and object Y is greater than object Z, then object X is greater than object Z. Written symbolically: if (X > Y) and (Y > Z), then (X > Z). When appropriate, other wording may be substituted for "greater than," such as "stronger than," "precedes," and "has more of."

Interval Scale
An interval scale not only places objects or events in order but also is marked in equal intervals. Equal differences between the units of measurement represent equal differences in the attribute being measured. Fahrenheit and Celsius thermometers are examples of interval scales.
We can say that the difference between 60° and 70° is the same as the difference between 30° and 40°, but we cannot say that 60° is twice as warm as 30°, because there is no true zero on an interval scale. Zero on an interval scale is an arbitrary point and does not indicate an absence of the variable being measured. Zero on the Celsius scale is arbitrarily set at the temperature at which water freezes at sea level.

Ratio Scale
The ratio scale of measurement is the most informative scale. It is an interval scale with the additional property that its zero position indicates the absence of the quantity being measured. You can think of a ratio scale as the three earlier scales rolled into one. Like a nominal scale, it provides a name or category for each object (the numbers serve as labels). Like an ordinal scale, the objects are ordered (in terms of the ordering of the numbers). Like an interval scale, the same difference at two places on the scale has the same meaning. In addition, the same ratio at two places on the scale also carries the same meaning. The Fahrenheit scale for temperature has an arbitrary zero point and is therefore not a ratio scale. However, zero on the Kelvin scale is absolute zero; this makes the Kelvin scale a ratio scale. For example, if one temperature is twice as high as another as measured on the Kelvin scale, then it has twice the kinetic energy of the other temperature.

Figure 2.1.1 Determining Scales of Measurement

ORGANIZING RESEARCH DATA
Researchers typically collect a large amount of data. Before applying statistical procedures, the researcher must organize the data into a manageable form.
The most familiar ways of organizing data are (1) arranging the measures into frequency distributions and (2) presenting them in graphic form.

Frequency Distribution
A systematic arrangement of individual measures from highest to lowest is called a frequency distribution. The first step in preparing a frequency distribution is to list the scores in a column from highest at the top to lowest at the bottom. Include all possible intermediate scores even if no one scored them; otherwise, the distribution would appear more compact than it really is. Several identical scores often occur in a distribution. Instead of listing these scores separately, it saves time to add a second column in which the frequency (f) of each measure is recorded. See Table 2.1.2 to refresh your knowledge of frequency distributions.

Table 2.1.2 The Test Scores of 105 Students on a Programming 1 Test

Table 2.1.2 shows the test scores of a group of 105 students in a Programming 1 class. Part A of the table lists the scores in an unorganized form. Part B arranges these scores in a frequency distribution, with the f column showing how many students made each score. Now it is possible to examine the general "shape" of the distribution. With the scores so organized, you can determine their spread, whether they are distributed evenly or tend to cluster, and where clusters occur in the distribution. fX is the product of the scores and the frequencies, while cf is the cumulative frequency: the running total of a frequency and all frequencies so far in the distribution.

Graphic Presentations
It is often helpful and convenient to present research data in graphic form. Among the various types of graphs, the most widely used are the histogram and the frequency polygon. The initial steps in constructing the histogram and the frequency polygon are identical: 1.
Lay out the score points on a horizontal dimension (abscissa) from the lowest value on the left to the highest on the right. Leave enough space for an additional score at both ends of the distribution.
2. Lay out the frequencies of the scores (or intervals) on the vertical dimension (ordinate).
3. Place a dot above the center of each score at the level of the frequency of that score.

From this point you can construct either a histogram or a polygon. In constructing a histogram, draw through each dot a horizontal line equal to the width representing a score. To construct a polygon, connect the adjacent dots, and then connect the two ends of the resulting figure to the base (zero line) at the points representing 1 less than the lowest score and 1 more than the highest score. Histograms are preferred when a researcher wants to indicate the discrete nature of the data, such as when a nominal scale has been used. Polygons are preferred for data of a continuous nature.

Figure 2.1.2 Histogram of Programming 1 Test Scores from Table 2.1.2

Figure 2.1.2 Frequency Polygon of Programming 1 Test Scores from Table 2.1.2

MEASURES OF CENTRAL TENDENCY
A convenient way of summarizing data is to find a single index that can represent a whole set of measures. Finding a single score that gives an indication of the performance of a group of 300 individuals on an aptitude test would be useful for comparative purposes. In statistics, three indexes are available for such use. They are called measures of central tendency, or averages. To most laypeople, the term average means the sum of the scores divided by the number of scores. To a statistician, the average can be this measure, known as the mean, or one of the other two measures of central tendency, known as the median and the mode.
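The frequency-distribution columns (f, fX, cf) and the histogram idea described above can be sketched in a few lines of Python. The scores below are hypothetical, since Table 2.1.2's 105 actual scores are not reproduced in this text:

```python
from collections import Counter

# Hypothetical scores standing in for Table 2.1.2's data
scores = [18, 20, 19, 17, 20, 18, 19, 19, 16, 18, 18, 20]

freq = Counter(scores)    # f column: how many students made each score
cf = 0
print(" X  f  fX  cf")
for x in sorted(freq):    # cumulative frequency runs from the lowest score up
    f = freq[x]
    cf += f
    print(f"{x:>2} {f:>2} {x * f:>3} {cf:>3}")

# A crude text "histogram": one bar per score value; the tallest bar is the mode
for x in sorted(freq):
    print(f"{x:>2} | " + "*" * freq[x])
```

Note that a printed frequency table conventionally lists the highest score at the top; the loop above simply accumulates cf from the lowest score upward.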
Each of these three can serve as an index to represent a group.

The Mean
The most widely used measure of central tendency is the mean, or arithmetic average. It is the sum of all the scores in a distribution divided by the number of cases. In terms of a formula, it is

Mean = (X1 + X2 + ... + XN) / N

which is usually written as

X̄ = ΣX / N

where X̄ is the mean, ΣX is the sum of all the scores, and N is the number of cases.

Example 1: 15 students were assessed on their IQ, with the following scores:
110, 120, 109, 110, 111, 90, 95, 113, 112, 115, 110, 90, 99, 110, 99
Using the formula above, we find that the mean is 106.2. Note that in this computation the scores were not arranged in any particular order; ordering is unnecessary for calculation of the mean.

Some think of formulas as intimidating incantations. Actually, they are time savers. It is much easier to write X̄ = ΣX / N than to write "add all the scores in a distribution and divide by the number of cases to calculate the mean." Although it is not necessary to put the scores in order to calculate the mean, with larger sets of numbers it is usually convenient to start with a frequency distribution and multiply each score by its frequency. This is shown in column 3 (fX) of Table 2.1.3, Mr. Li's physics class exam scores. Adding the numbers in this column gives the sum of the scores, ΣfX, and dividing by N gives the mean, X̄ = ΣfX / N.

Table 2.1.3 Mr. Li's Physics Class Exam Scores

The mean of the physics exam scores is 20.

Think about it 2.1.1
Construct a histogram and a polygon of the scores on Mr. Li's first physics exam. Draw or write it on a clean sheet of paper, take a picture, and submit it in the LMS.

The Median
The median is defined as that point in a distribution of measures below which 50 percent of the cases lie (which means that the other 50 percent will lie above this point).
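The mean computation in Example 1, both directly and via a frequency distribution, can be checked in a few lines of Python:

```python
from collections import Counter

# Example 1's 15 IQ scores, in no particular order
iq = [110, 120, 109, 110, 111, 90, 95, 113, 112, 115, 110, 90, 99, 110, 99]
mean = sum(iq) / len(iq)    # X̄ = ΣX / N
print(mean)                 # 106.2

# With larger data sets, start from a frequency distribution: X̄ = ΣfX / N
freq = Counter(iq)
mean_from_freq = sum(x * f for x, f in freq.items()) / sum(freq.values())
print(mean_from_freq)       # 106.2, the same value
```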
Consider the following distribution of scores, where the median is 18:
14 15 16 17 18 19 20 21 22
In the following 10 scores, we seek the point below which 5 scores fall:
14 16 16 17 18 19 20 20 21 22
The point below which 5 scores, or 50 percent of the cases, fall is halfway between 18 and 19. Thus, the median of this distribution is 18.5.

The Mode
The mode is the value in a distribution that occurs most frequently. It is the simplest of the three measures of central tendency to find because it is determined by inspection rather than by computation. Given the distribution of scores
14 16 16 17 18 19 19 19 21 22
you can readily see that the mode of this distribution is 19 because it is the most frequent score. In a histogram or polygon, the mode is the score value of the highest point (the greatest frequency). Sometimes there is more than one mode in a distribution. For example, if the scores had been
14 16 16 16 18 19 19 19 21 22
you would have two modes: 16 and 19. This kind of distribution with two modes is called bimodal. Distributions with three modes are called trimodal, and those with more than three are called multimodal. The mode is the least useful indicator of central value in a distribution, for two reasons. First, it is unstable: two random samples drawn from the same population may have quite different modes. Second, a distribution may have more than one mode.

SHAPES OF DISTRIBUTION
Frequency distributions can have a variety of shapes. A distribution is symmetrical when the two halves are mirror images of each other. In a symmetrical distribution, the values of the mean and the median coincide. If such a distribution has a single mode, rather than two or more modes, the three indexes of central tendency will coincide, as shown in Figure 2.1.3.
Figure 2.1.3 Symmetrical Distribution

If a distribution is not symmetrical, it is described as skewed, pulled out to one end or the other by the presence of extreme scores. In skewed distributions, the values of the measures of central tendency differ. In such distributions, the value of the mean, because it is influenced by the size of extreme scores, is pulled toward the end of the distribution in which the extreme scores lie, as shown in Figures 2.1.4 and 2.1.5. The effect of extreme values is less on the median because this index is influenced not by the size of scores but by their position. Extreme values have no impact on the mode because this index has no relation to either end of the distribution. Skews are labeled according to where the extreme scores lie; a way to remember this is "the tail names the beast." Figure 2.1.4 shows a negatively skewed distribution, whereas Figure 2.1.5 shows a positively skewed distribution.

Figure 2.1.4 Negatively Skewed Distribution

Figure 2.1.5 Positively Skewed Distribution

MEASURES OF VARIABILITY
Although indexes of central tendency help researchers describe data in terms of average value or typical measure, they do not give the total picture of a distribution. The mean values of two distributions may be identical, while the degree of dispersion, or variability, of their scores is different. In one distribution, the scores might cluster around the central value; in the other, they might be scattered. For illustration, consider two distributions of scores, (a) and (b). The value of the mean in both distributions is 25, but the degree of scattering of the scores differs considerably: the scores in distribution (a) are much more homogeneous than those in distribution (b).
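The behavior of the three averages described above, including how an extreme score pulls the mean but not the median, can be checked with Python's statistics module. The scores are taken from the earlier examples; the skewed set is hypothetical:

```python
import statistics

# Median of the 10 scores from the example above: halfway between 18 and 19
scores = [14, 16, 16, 17, 18, 19, 20, 20, 21, 22]
median = statistics.median(scores)          # 18.5

# Mode: 19 is the most frequent score; multimode reports bimodal cases
mode = statistics.mode([14, 16, 16, 17, 18, 19, 19, 19, 21, 22])          # 19
modes = statistics.multimode([14, 16, 16, 16, 18, 19, 19, 19, 21, 22])    # [16, 19]

# Skew: one extreme score (hypothetical data) pulls the mean toward the
# tail, while the median, which depends only on position, stays put
skewed = [10, 12, 14, 16, 98]
print(statistics.mean(skewed), statistics.median(skewed))   # mean 30 vs median 14
```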
There is clearly a need for indexes that can describe distributions in terms of variation, spread, dispersion, heterogeneity, or scatter of scores. Three indexes are commonly used for this purpose: range, variance, and standard deviation.

Range
The simplest of all indexes of variability is the range. It is the difference between the upper real limit of the highest score and the lower real limit of the lowest score. In statistics, any score is thought of as representing an interval width from halfway between that score and the next lowest score (the lower real limit) up to halfway between that score and the next highest score (the upper real limit). It is simpler to use the formula

Range = H − L + 1

where H is the highest score and L is the lowest score.

Example: Find the range of the following data set: 2 10 11 12 13 14 16. You can find the range by subtracting 1.5 (the lower real limit of the lowest score, 2) from 16.5 (the upper real limit of the highest score, 16), or, using the formula above, by subtracting the lower number from the higher and adding 1: 16 − 2 + 1 = 15. In a frequency distribution, 1 is the most common interval width.

Variance and Standard Deviation
Variance and standard deviation are the most frequently used indexes of variability. They are both based on deviation scores: scores that show the difference between a raw score and the mean of the distribution. The formula for a deviation score is

x = X − X̄

Scores below the mean will have negative deviation scores, and scores above the mean will have positive deviation scores. For example, the mean in Mr. Li's physics exam is 20; thus, Ona's deviation score is x = 22 − 20 = 2, whereas Ted's deviation score is x = 16 − 20 = −4. By definition, the sum of the deviation scores in a distribution is always 0.
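Both claims above, the inclusive range formula and the zero sum of deviation scores, can be verified in a few lines of Python. The five-score set is hypothetical, built so that its mean is 20 like Mr. Li's exam:

```python
# Inclusive range from the worked example: H - L + 1
data = [2, 10, 11, 12, 13, 14, 16]
rng = max(data) - min(data) + 1             # 16 - 2 + 1 = 15

# Deviation scores x = X - mean always sum to zero
scores = [22, 16, 20, 18, 24]               # hypothetical, mean 20
mean = sum(scores) / len(scores)            # 20.0
deviations = [x - mean for x in scores]     # [2.0, -4.0, 0.0, -2.0, 4.0]
print(rng, sum(deviations))                 # 15 0.0
```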
Thus, to use deviation scores in calculating measures of variability, you must find a way to get around the fact that Σx = 0. The technique used is to square each deviation score so that they all become positive numbers. If you then sum the squared deviations and divide by the number of scores, you have the mean of the squared deviations from the mean, or the variance. In mathematical form, the variance is

σ² = Σx² / N

In column 4 of Table 2.1.4, we see the deviation scores, the differences between each score and the mean. Column 5 shows each deviation score squared (x²), and column 6 shows the frequency of each score from column 2 multiplied by x². Summing column 6 gives us the sum of the squared deviation scores, Σx² = 72. Dividing this by the number of scores gives us the mean of the squared deviation scores, the variance.

Table 2.1.4 Variance of Mr. Li's Physics Exam Scores

The variance is computed by dividing Σx² = 72 by the number of scores, N. Another formula for the variance works directly with raw scores:

σ² = (ΣX² − (ΣX)² / N) / N

Column 7 in Table 2.1.4 shows the squares of the raw scores. Column 8 shows these raw-score squares multiplied by frequency. Summing this fX² column gives us the sum of the squared raw scores, ΣX². Note that the result is the same as that obtained with the deviation-score formula.

Because each of the deviation scores is squared, the variance is necessarily expressed in units that are squares of the original units of measure. In most cases, researchers prefer an index that summarizes the data in the same unit of measurement as the original data. The standard deviation (σ), the positive square root of the variance, provides such an index. The standard deviation is the square root of the mean of the squared deviation scores. Rewriting this definition using symbols, you obtain

σ = √(Σx² / N)

For Mr. Li's physics exam scores, the standard deviation is the square root of the variance computed above. The standard deviation belongs to the same statistical family as the mean; that is, like the mean, it is an interval or ratio statistic, and its computation is based on the size of the individual scores in the distribution. It is by far the most frequently used measure of variability and is used in conjunction with the mean.

The formulas above are appropriate for calculating the variance and the standard deviation of a population. If scores from a finite group or sample are used to estimate the heterogeneity of a population from which that group was drawn, research has shown that these formulas more often underestimate the population variance and standard deviation than overestimate them. Mathematically, to get unbiased estimates, N − 1 rather than N is used as the denominator. The formulas for variance and standard deviation based on sample information are

s² = Σx² / (N − 1)
s = √(Σx² / (N − 1))

Following the general custom of using Greek letters for population parameters and Roman letters for sample statistics, the symbols for the variance and standard deviation calculated with N − 1 are s² and s, respectively. With the data in Table 2.1.4, s² and s are computed in the same way, with N − 1 in the denominator.

Spread, scatter, heterogeneity, dispersion, and volatility are measured by the standard deviation, in the same way that volume is measured by bushels and distance is measured by miles. A class with a standard deviation of 1.8 on reading grade level is more heterogeneous than a class with a standard deviation of 0.7. A month in which the daily Dow Jones Industrial Average has a standard deviation of 40 is more volatile than a month with a standard deviation of 25. A school where the teachers' monthly salary has a standard deviation of $900 has more salary disparity than a school where the standard deviation is $500.
SAMPLING TECHNIQUES
An important characteristic of inferential statistics is the process of going from the part to the whole. For example, you might study a randomly selected group of 500 students attending a university in order to make generalizations about the entire student body of that university. The small group that is observed is called a sample, and the larger group about which the generalization is made is called a population. A population is defined as all members of any well-defined class of people, events, or objects. For example, in a study in which students in American high schools constitute the population of interest, you could define this population as all boys and girls attending high school in the United States. A sample is a portion of a population. For example, the students of Washington High School in Indianapolis constitute a sample of American high school students.

Complete Enumeration
Complete enumeration, or census, is the complete count of every unit, everyone, or everything in a population; all members of the whole population are measured. A complete enumeration-based survey is often preferred for certain types of data, solely because it is expected to provide complete statistical coverage over space and time. Complete enumeration sometimes may be desirable but not attainable for operational reasons. An existing sampling programme can be progressively expanded to provide more reliable and robust estimates, if human and logistics resources allow such expansion in a sustainable manner; usually such progressive expansion is done in distinct phases. Complete enumeration is expensive and time-consuming. For example, suppose your research is to describe the current flexible learning technologies of higher education institutions in the Davao region.
Using complete enumeration, you would need to get responses from every higher education institution in the region.

Steps in Sampling
1. The first step in sampling is the identification of the target population: the large group to which the researcher wishes to generalize the results of the study.
2. We make a distinction between the target population and the accessible population, which is the population of subjects accessible to the researcher for drawing a sample. In most research, we deal with accessible populations.
3. Once we have identified the population, the next step is to select the sample. Two major types of sampling procedures are available to researchers: probability and nonprobability sampling.

Slovin's Formula for Sample Size
Slovin's formula is used to calculate the sample size (n) given the population size (N) and a margin of error (e). It is a random sampling formula used to estimate the sample size. If a sample is taken from a population, a formula must be used that takes into account confidence levels and margins of error. When taking statistical samples, sometimes a lot is known about a population, sometimes a little, and sometimes nothing at all. For example, we may know that a population is normally distributed (e.g., for heights, weights, or IQs), we may know that there is a bimodal distribution (as often happens with class grades in mathematics classes), or we may have no idea how a population is going to behave (such as when polling college students to get their opinions about quality of student life). Slovin's formula is used when nothing about the behavior of a population is known at all.

n = N / (1 + N * e²)

where
n – sample size
N – population size
e – desired margin of error

Example: To use the formula, first decide what you want your margin of error to be.
For example, you may be happy with a confidence level of 95 percent (giving a margin of error of 0.05), or you may require a tighter accuracy at a 98 percent confidence level (a margin of error of 0.02). Plug your population size and required margin of error into the formula; the result is the number of samples you need to take. For example, with N = 1000 and e = 0.05:

n = 1000 / (1 + 1000 * 0.05²)
n = 1000 / (1 + 2.5)
n = 285.7142

Rounding off to a whole number gives n = 286 samples.

Think about it 2.1.2
A researcher plans to conduct a survey in Panabo City. If the population of the city is 174,364, find the sample size if the margin of error is 15%.

PROBABILITY SAMPLING
Probability sampling is defined as the kind of sampling in which every element in the population has an equal chance of being selected. The possible inclusion of each population element in this kind of sampling takes place by chance and is attained through random selection. When probability sampling is used, inferential statistics enable researchers to estimate the extent to which the findings based on the sample are likely to differ from what they would have found by studying the whole population. The four types of probability sampling most frequently used in educational research are simple random sampling, stratified sampling, cluster sampling, and systematic sampling.

Simple Random Sampling
The best known of the probability sampling procedures is simple random sampling. Its basic characteristic is that all members of the population have an equal and independent chance of being included in the random sample. The steps in simple random sampling are the following:
1. Define the population.
2. List all members of the population.
3.
Select the sample by employing a procedure in which sheer chance determines which members on the list are drawn for the sample; Slovin's formula can be used to determine how many members to draw.

The first step in drawing a random sample from a population is to assign each member of the population a distinct identification number. The generally understood meaning of the word random is "without purpose or by accident." However, random sampling is purposeful and methodical. A sample selected randomly is not subject to the biases of the researcher. Rather, researchers commit themselves to selecting the sample in such a way that their biases are not permitted to operate; chance alone determines which elements in the population will be in the sample. When random sampling is used, the researcher can employ inferential statistics to estimate how much the population is likely to differ from the sample. Unfortunately, simple random sampling requires an enumeration of all individuals in a finite population before the sample can be drawn, a requirement that often presents a serious obstacle to the practical use of this method. Now let us look at other probability sampling methods that approximate simple random sampling and may be used as alternatives in certain situations.

Figure 2.1.6 Simple random sampling of a sample "n" of 3 from a population "N" of 12

Here is a basic example. A researcher wants to study the effects of virtual gaming on the academic performance of high school students in Panabo City. Say there are 25,000 high school students in the city. Using Slovin's formula at a 95 percent confidence level (e = 0.05) gives a sample size of 394 high school students. From there, you randomly select 394 students out of the 25,000.
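The virtual-gaming example above can be sketched in Python, with student IDs 0 to 24,999 standing in for the enrollment list:

```python
import random

def slovin(N: int, e: float) -> int:
    """Slovin's formula n = N / (1 + N * e²), rounded to a whole number."""
    return round(N / (1 + N * e * e))

print(slovin(1000, 0.05))    # 286, as in the earlier worked example
print(slovin(25000, 0.05))   # 394

# Simple random sampling: chance alone picks which 394 IDs are drawn
students = range(25000)      # one distinct ID per student
sample = random.sample(students, slovin(25000, 0.05))
print(len(sample))           # 394
```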
Stratified Sampling

When the population consists of a number of subgroups, or strata, that may differ in the characteristics being studied, it is often desirable to use a form of probability sampling called stratified sampling. For example, if you were conducting a poll designed to assess opinions on a certain political issue, it might be advisable to subdivide the population into subgroups on the basis of age, neighborhood, and occupation, because you would expect opinions to differ systematically among various ages, neighborhoods, and occupational groups. In stratified sampling, you first identify the strata of interest and then randomly draw a specified number of subjects from each stratum. The basis for stratification may be geographic or may involve characteristics of the population such as income, occupation, gender, age, year in college, or teaching level. In studying adolescents, for example, you might be interested not merely in surveying the attitudes of adolescents toward certain phenomena but also in comparing the attitudes of adolescents who reside in small towns with those who live in medium-size and large cities. In such a case, you would divide the adolescent population into three groups based on the size of the towns or cities in which they reside and then randomly select independent samples from each stratum.

Figure 2.1.7 Stratified Sampling Representation

You basically take a simple random sample within each stratum. These are the steps to follow in getting the sample using stratified random sampling.
Step 1: Divide the population into subpopulations (strata). Make a table representing your strata.
Step 2: From each stratum, obtain a simple random sample of size proportional to the size of the stratum.
Sample size for a stratum = (size of entire sample / population size) × stratum size
Step 3: Use all the members obtained in Step 2 as the sample.
Step 4: Perform simple or systematic random sampling within each stratum.

Example
You work for a small company of 1,000 people and want to find out how they are saving for retirement. Use stratified random sampling to obtain your sample. The population will be divided into strata by age group.

Table 2.1.5 Strata Table

We will now obtain the sample size using Slovin's formula with a 98% confidence level. The sample is 714 people. After obtaining the sample size, we compute the proportion of people from each group.

Table 2.1.5 Strata Table with proportion on each group
Age group | Number of people in Stratum | Number of People in Sample
20-29 | 160 | 714/1000 × 160 = 114.24 ≈ 114
30-39 | 220 | 714/1000 × 220 = 157.08 ≈ 157
40-49 | 240 | 714/1000 × 240 = 171.36 ≈ 171
50-59 | 200 | 714/1000 × 200 = 142.80 ≈ 143
60+ | 180 | 714/1000 × 180 = 128.52 ≈ 129

Note that all the individual results from the strata add up to your sample size of 714: 114 + 157 + 171 + 143 + 129 = 714. After determining the sample size for each stratum, you perform random sampling (e.g., simple random sampling) in each stratum to select your survey participants.

Cluster Sampling

As mentioned previously, it is very difficult, if not impossible, to list all the members of a target population and select the sample from among them. With cluster sampling, the researcher divides the population into separate groups, called clusters. Then, a simple random sample of clusters is selected from the population. The researcher conducts the analysis on data from the sampled clusters.
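Before continuing, the proportional allocation in Table 2.1.5 above can be checked with a short Python sketch. This rounds each stratum's share to the nearest whole person, as the table does; note that with other numbers the rounded shares may not sum exactly to the total sample, so a manual adjustment is sometimes needed:

```python
def proportional_allocation(strata: dict, sample_size: int, population: int) -> dict:
    """Allocate a total sample across strata in proportion to stratum size."""
    return {name: round(sample_size / population * size)
            for name, size in strata.items()}

# the age-group strata from Table 2.1.5
strata = {"20-29": 160, "30-39": 220, "40-49": 240, "50-59": 200, "60+": 180}
alloc = proportional_allocation(strata, sample_size=714, population=1000)
print(alloc)                 # per-stratum sample sizes: 114, 157, 171, 143, 129
print(sum(alloc.values()))   # 714, matching the table's total
```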
For example, a researcher might choose a number of schools randomly from a list of schools and then include all the students in those schools in the sample. This kind of probability sampling is referred to as cluster sampling because the unit chosen is not an individual but, rather, a group of individuals who are naturally together. These individuals constitute a cluster insofar as they are alike with respect to characteristics relevant to the variables of the study.

Figure 2.1.8 Cluster Sampling Representation

Cluster elements should be as heterogeneous as possible. In other words, the population should contain distinct subpopulations of different types, and each cluster should be a small representation of the entire population. Clusters should also be mutually exclusive; that is, no element should belong to more than one cluster.

Types of Cluster Sample
1. One-Stage Cluster Sample - A one-stage cluster sample occurs when the researcher includes the entire population of all the randomly selected clusters as the sample.
Example: A researcher divides the population of Davao del Norte into groups by municipality and city. He then selects a number of groups, and the population of those selected groups is the sample. Find the actual sample using one-stage cluster sampling.
First, determine the total population and the size of each cluster.

Table 2.1.6 Population of Davao del Norte by Municipality/City
Municipality/City | Population
Asuncion | 59,322
Braulio E. Dujali | 30,104
Carmen | 74,679
Kapalong | 76,334
New Corella | 54,844
Panabo | 184,599
Samal | 104,123
San Isidro | 26,651
Santo Tomas | 118,750
Tagum | 259,444
Talaingod | 27,482
Total | 1,016,332

Then, the researcher selects a number of clusters through simple or systematic random sampling, depending on his research. In this case, there are only 11 clusters of municipalities and cities in Davao del Norte, so the researcher can choose the number of clusters to select at his discretion. Let's say only 4 clusters will be selected. Through random sampling, 4 clusters are selected: Asuncion, Kapalong, New Corella, and Tagum City. All of the population of the selected clusters will be the sample: 59,322 + 76,334 + 54,844 + 259,444 = 449,944 is the sample size for the research.

2. Two-Stage Cluster Sample - A two-stage cluster sample is obtained when the researcher selects only a number of respondents from each cluster by using simple or systematic random sampling.
Example: We will take the example in one-stage cluster sampling. Once we have selected the clusters, we conduct random sampling on the population of each cluster using Slovin's formula with a 95% confidence level.

Table 2.1.6 Two-Stage Cluster Sample on Selected Clusters in Davao del Norte
Municipality/City | Population | Sample Size (Using Slovin's Formula)
Asuncion | 59,322 | 397
Kapalong | 76,334 | 398
New Corella | 54,844 | 397
Tagum | 259,444 | 399
TOTAL | 449,944 | 1,591

Systematic Sampling

Systematic sampling is a method of selecting every nth element (the skip) from the population. After the size of the sample has been determined, the selection follows.
1. First, decide how many subjects you want in the sample (n), using researcher discretion, the 50% technique, or Slovin's formula.
2. Because you know the total number of members in the population (N), simply divide N by n to determine the sampling interval (K) to apply to the list.
3. Select the first member randomly from the first K members of the list and then select every Kth member of the population for the sample.

For example, let us assume a total population of 500 subjects and a desired sample size of 50: K = N/n = 500/50 = 10. Start near the top of the list so that the first case can be randomly selected from the first 10 cases, and then select every tenth case thereafter. The researcher has the freedom to choose the sampling interval; it could be every 3rd, every 4th, and so on. Note that the choices are not independent: once the first case is chosen, all subsequent cases to be included in the sample are automatically determined. If the original population list is in random order, systematic sampling yields a sample that can be statistically considered a reasonable substitute for a random sample.

NONPROBABILITY SAMPLING

In nonprobability sampling, there is no assurance that every element in the population has a chance of being included. Its main advantages are convenience and economy. The major forms of nonprobability sampling are convenience sampling, purposive sampling, and quota sampling.

Table 2.1.7 Nonprobability Sampling Techniques
Nonprobability Sampling Technique | Description
Convenience Sampling | Convenience sampling, which is regarded as the weakest of all sampling procedures, involves using available cases for a study. Those participants present during the conduct of the research visit will be chosen as respondents. Example: Getting responses from those who are present in the school campus during a specific period for a mobile application friendliness survey. If you got 30 responses, then that is your sample.
Purposive Sampling | In purposive sampling, also referred to as judgment sampling, sample elements judged to be typical, or representative, are chosen from the population. The assumption is that errors of judgment in the selection will counterbalance one another.
Example: A team of researchers wanted to understand what the significance of white skin (whiteness) means to white people, so they asked white people about this.
Quota Sampling | Quota sampling involves selecting typical cases from diverse strata of a population. The quotas are based on known characteristics of the population to which you wish to generalize. Elements are drawn so that the resulting sample is a miniature approximation of the population with respect to the selected characteristics. Example: An interviewer may be told to sample 200 females and 300 males between the ages of 45 and 60. This means that individuals can put a demand on whom they want to sample (targeting).

Think about it 2.1.3
Determine the correct and appropriate sampling technique for each of the following:
1. Divide a sample of adults into subgroups by age, like 18-29, 30-39, 40-49, 50-59, and 60 and above. Conduct simple random sampling in each group.
2. A researcher divides the population of Davao del Norte into groups by municipality and city. He then selects a number of groups, and the population of those selected groups is the sample.
3. A local NGO is seeking to form a sample of 500 volunteers from a population of 5,000; they can select every 10th person in the population to build a sample systematically.
4. A politician asks his neighbors their opinions about a controversial issue.
5. You may be conducting a study on why high school students choose community college over university. You might canvass high school students, and your first question would be "Are you planning to attend college?" People who answer "No" would be excluded from the study.
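Before moving to the application, the systematic sampling steps described earlier can be sketched in Python. This is a minimal illustration using the module's example of a 500-member population and a sample of 50 (K = 10); the population list here is just the numbers 1 to 500:

```python
import random

def systematic_sample(population, n):
    """Pick a random start within the first K members, then take every Kth member."""
    K = len(population) // n          # sampling interval K = N / n
    start = random.randrange(K)       # random index among the first K cases
    return population[start::K][:n]   # every Kth case thereafter

members = list(range(1, 501))            # population of 500 subjects
sample = systematic_sample(members, 50)  # K = 500/50 = 10, so every 10th member
```

Because the start is chosen once and every later case follows at a fixed interval, the selections are not independent, exactly as noted in the text.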
APPLICATION

Solving a problem using descriptive statistics.

Instructions: This is an individual task. You will prepare two sets of answers: one handwritten and one in spreadsheet format (e.g., Excel or Google Sheets). For the handwritten set, write all of your answers on a set of clean papers (any paper type and size will do) and take picture(s) of your output (make sure they are clear and not blurred). Write your name(s) and section at the top of each paper. For the spreadsheet, you have the freedom to format your answers; just make sure they are in the right order. You may create multiple sheets in one file (only 1 file). The filename must be DescriptiveStatAssessment. Write your name(s) and section at the top of each sheet. Submit all the files (image and spreadsheet files) to the LMS. Only one person must submit the file.

Problem: Listed below are the scores of a group of 50 students on the final exam of a Programming 1 test. The test is composed of 70 items comprising multiple-choice and programming lab problems. Their programming instructor intends to determine whether the students are ready to take the Programming 2 subject next semester. The final scores are the following:
64, 27, 61, 56, 52, 51, 3, 15, 6, 34, 6, 17, 27, 17, 24, 64, 31, 29, 31, 29, 31, 29, 29, 31, 31, 29, 61, 59, 56, 34, 59, 51, 38, 38, 38, 38, 34, 36, 36, 34, 34, 36, 21, 21, 24, 25, 27, 27, 27, 63

Tasks:
A. Create a complete frequency distribution.
B. Draw a histogram and a frequency polygon based on the frequency distribution.
C. Find the mean, median, mode, and range for the above data.
D. Is this data skewed? Defend your answer.
E. Compute the variance and the standard deviation (you may prepare a table).
F. What does this information tell you about the students? Are they ready for the Programming 2 subject? Justify your answer.
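Python's statistics module can be used to cross-check the hand computations in tasks C and E. This sketch runs on a small illustrative list of scores, not the full 50-score assignment data, so it is a checking tool rather than a solution:

```python
import statistics as st

scores = [27, 29, 29, 31, 34]  # small illustrative list; substitute your own data

mean = st.mean(scores)                   # 30
median = st.median(scores)               # 29 (middle value of the sorted list)
mode = st.mode(scores)                   # 29 (the most frequent score)
score_range = max(scores) - min(scores)  # 34 - 27 = 7
variance = st.variance(scores)           # sample variance (divides by n - 1): 7
std_dev = st.stdev(scores)               # square root of the sample variance
```

Note that `st.variance`/`st.stdev` use the sample (n - 1) formulas; `st.pvariance`/`st.pstdev` give the population versions, so pick whichever your table uses.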
Congratulations! You finished Lesson 1 of Module 2. Should there be parts of the lesson for which you need clarification, we can have a virtual meeting. You may now proceed to Lesson 2, which discusses inferential statistics using the t-test and ANOVA.

LESSON 2 Hypothesis Testing (T-Test and ANOVA)

LEARNING OUTCOMES
At the end of this lesson, the student is expected to be able to:
Demonstrate ability to analyze and solve problems utilizing hypothesis testing (t-test and ANOVA).
Appreciate the process of solving statistical problems in inferential statistics.

TIME FRAME
Week 5

INTRODUCTION
Welcome to Lesson 2 of Module 2: Hypothesis Testing (t-test and ANOVA)! In this lesson, you will discover the process of hypothesis testing and learn about computing inferential statistics using the t-test and ANOVA. Inferential statistics is the science of making reasonable decisions with limited information. Researchers use what they observe in samples and what is known about sampling error to reach fallible but reasonable decisions about populations. The statistical procedures performed before these decisions are made are called tests of significance. There are many topics in hypothesis testing and inferential statistics, but we will only cover the independent samples t-test and ANOVA, as these will be needed for your major project.

ACTIVITY: Evaluate Filipino Movies
You will become a movie analyst. List at least 10 Filipino movies from 2018 to the present and give each a rating from 1 to 10 (10 as the highest and 1 as the lowest; decimal values are allowed). Answer the questions in the analysis part.

ANALYSIS
1. Explain why you gave your top movie the highest score, and likewise explain your rating for the movie with the lowest score.
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________

2. How does it relate to research writing?
_________________________________________________________________________________
_________________________________________________________________________________
_________________________________________________________________________________

ABSTRACTION

INDEPENDENT SAMPLES T TEST

The independent samples t test (also called the unpaired samples t test) is the most common form of the t test. It helps you to compare the means of two sets of data. For example, you could run a t test to see if the average test scores of males and females are different; the test answers the question, "Could these differences have occurred by random chance?" The two other types of t test are:
One sample t test: used to compare a result to an expected value. For example, do males score higher than the average of 70 on a test if their exam time is switched to 8 a.m.?
Paired t test (dependent samples): used to compare related observations. For example, do test scores differ significantly if the test is taken at 8 a.m. or at noon?
This test is extremely useful because, for the z test, you need to know facts about the population, like the population standard deviation. With the independent samples t test, you don't need to know this information. You should use this test when:
You do not know the population mean or standard deviation.
You have two independent, separate samples.
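The independent samples t test can be sketched in plain Python. This is a minimal implementation of the pooled-variance form of the test; the two feedback groups below are hypothetical data for illustration, not the module's archery results:

```python
import math

def independent_t(sample1, sample2):
    """Pooled-variance independent samples t statistic and its degrees of freedom."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # unbiased sample variances (divide by n - 1)
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    # pooled variance, then the standard error of the difference between means
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    return (m1 - m2) / se, df

frequent = [30, 28, 35, 33, 31]      # hypothetical scores, frequent feedback
infrequent = [24, 22, 27, 25, 23]    # hypothetical scores, infrequent feedback
t, df = independent_t(frequent, infrequent)
```

With the t statistic and df in hand, the decision rule is the one used in this lesson: compare the computed t against the two-tailed critical value for n1 + n2 − 2 degrees of freedom at your chosen α.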
In our math concepts example, the statistic is the difference between the mean of the group taught by method B and the mean of the group taught by method A. Through deductive logic, statisticians have determined the average difference between the means of two randomly assigned groups that would be expected through chance alone. This expected value (the error term) is derived from the variance within each of the two groups and the number of subjects in each of the two groups. It is called the standard error of the difference between two independent means. Its definition formula is

s(X̄1 − X̄2) = √(s1²/n1 + s2²/n2)

The standard error of the difference between two means is sometimes referred to as the "error term for the independent t test." The t test for independent samples is a straightforward ratio that divides the observed difference between the means by the difference expected through chance alone. In formula form,

t = (X̄1 − X̄2) / s(X̄1 − X̄2)

If this t ratio is equal to 1.00 or less, the observed difference between means is very probably due to chance alone: the observed difference is no greater than the difference expected by chance. Therefore, the null hypothesis is retained. There is not sufficient evidence to draw a tentative conclusion.

Example: A physical education teacher conducted an experiment to determine whether archery students perform better if they get frequent feedback concerning their performance or do better with infrequent feedback. She randomly divided her class into two groups of 15 and flipped a coin to determine which group got frequent feedback and which group got infrequent feedback. She set her α at .05 for a two-tailed test. At the end of her study, she administered a measure of archery performance.
Table 2.2.1 Archery Feedback Performance in 2 groups

We first need to set the hypotheses for our problem:
Null: There is no significant difference in the archery performance of students given frequent feedback versus infrequent feedback.
Ho: µfrequent = µinfrequent
Alternative: There is a significant difference in the archery performance of students given frequent feedback versus infrequent feedback.
Ha: µfrequent ≠ µinfrequent

The computation formula for the independent t test is

t = (X̄1 − X̄2) / √{ [(Σx1² + Σx2²) / (n1 + n2 − 2)] × (1/n1 + 1/n2) }

where Σx1² and Σx2² are the sums of squared deviations from each group's mean. Inserting the numbers from Table 2.2.1 into this formula gives us t = 4.14.

Here, we have an observed difference that is 4.14 times as large as the average difference expected by chance. Is it large enough to reject the null hypothesis? To answer this question, we must consider the t curves and degrees of freedom. The number of degrees of freedom (df) is the number of observations free to vary around a constant parameter. Follow these steps:
1. Find the degrees of freedom of each group using the formula df = n − 1, where n is the number of samples or responses in a group. With this formula, the df of each group in our problem is 15 − 1 = 14.
2. For two independent samples, combine the degrees of freedom of the two groups: df = df1 + df2 = 14 + 14 = 28.
3. Look up your degrees of freedom in the t table (Appendix A at the end of this module) for α = .05 in a two-tailed test. We find that at 28 degrees of freedom and an α level of 0.05, the critical value is 2.048. Our tcrit is now 2.048.
4. Then we compare our tcalc = 4.14 with tcrit = 2.048. If tcalc < tcrit, "ACCEPT" the null hypothesis. Otherwise, "REJECT" the null hypothesis and support the alternative hypothesis.
5. Since 4.14