Practical Research 2 – Quarter 2

Summary

This document outlines the types of quantitative research design — descriptive, correlational, ex post facto, quasi-experimental, and experimental — and covers sampling and sample size determination, instrument validity and reliability, research interventions, data collection, statistical analysis, and the presentation and interpretation of results.

Full Transcript

Lesson 1: Quantitative Research Design

Research Design

Research design is defined as the logical and coherent overall strategy that the researcher uses to integrate all the components of the research study (Barrot, 2017, p. 102). A step-by-step process will help you find meaning in the overall process of doing your research study. According to Fraenkel and Wallen (2007, p. 15), the research designs in quantitative research are mostly pre-established. Hence, with an appropriate research design, the researcher will have a clearer comprehension of what he or she is trying to analyze and interpret.

Types of Quantitative Research Design

Quantitative research designs have five general classifications: descriptive, correlational, ex post facto, quasi-experimental, and experimental.

A. Descriptive Research. The purpose of descriptive research is to answer questions such as who, what, where, when, and how much. This design is best used when the main objective of the study is simply to observe and report a phenomenon as it is happening.

B. Correlational Research. The main goal of this design is to determine whether one variable increases or decreases as another variable increases or decreases. Like descriptive research, it does not seek a cause-and-effect relationship; it measures variables as they occur. It has two major purposes: (a) to clarify the relationship between variables and (b) to predict the magnitude of the association.

C. Ex Post Facto. If the objective of the study is to measure a cause from a pre-existing effect, then the ex post facto research design is more appropriate. In this design, the researcher has no control over the variables in the research study. Thus, one cannot conclude that the changes measured happened during the actual conduct of the study.
The last two types of quantitative research design are identifiable by the existence of a treatment or intervention applied in the study. Intervention or treatment pertains to controlling or manipulating the independent variable in an experiment. It is assumed that changes in the dependent variable were caused by the independent variable. There are also two groups of subjects, participants, or respondents in quasi-experimental and experimental research: the treatment group is the group subjected to the treatment or intervention, while the group not subjected to it is called the control group.

D. Quasi-Experimental. The term quasi (pronounced kwahz-eye) means partly, partially, or almost. This research design aims to measure the causal relationship between variables, and the effect measured is considered to have occurred during the conduct of the current study. The partiality of the quasi-experimental design comes from how subjects, participants, or respondents are assigned to their groups: the groups are already established before the study, for example by age, educational background, or nationality. Since subjects, participants, or respondents are not randomly assigned to the experimental and control groups, the conclusions that can be drawn from the results are limited.

E. Experimental Research. This research design is based on the scientific method called an experiment, with a procedure of gathering data under a controlled or manipulated environment. It is also known as true experimental design, since it applies treatment and manipulation more extensively than the quasi-experimental design. Random assignment of subjects or participants into the treatment and control groups is done, increasing the validity of the study. Experimental research, therefore, attempts to affect a certain variable by directly manipulating the independent variable.
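The random assignment that separates true experimental from quasi-experimental design can be sketched in a few lines of Python. This is only an illustration: the participant IDs are hypothetical, and the seed is fixed so the example is reproducible.

```python
import random

# Hypothetical participant IDs (S01 ... S20); in practice, your actual subject list.
participants = [f"S{i:02d}" for i in range(1, 21)]

random.seed(42)              # fixed seed so this illustration is reproducible
random.shuffle(participants) # randomize the order of participants

half = len(participants) // 2
treatment_group = participants[:half]  # receives the intervention
control_group = participants[half:]    # does not receive the intervention

print(len(treatment_group), len(control_group))  # 10 10
```

Because group membership is decided by chance rather than by pre-existing categories, differences measured afterward can more credibly be attributed to the intervention.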
Lesson 2: Sampling

Population refers to a collection of individuals who share one or more noteworthy traits that are of interest to the researcher. The population may be all the individuals belonging to a specific category or a narrower subset within that larger group. A sample is a small portion of the population selected for observation and analysis. Sampling is the procedure of getting a small portion of the population for research.

Approaches in Identifying the Sample Size

A. The Heuristics approach refers to rules of thumb for sample size.
B. The Literature Review approach draws on literature and studies similar or related to your current research study.
C. Formulas have also been established for computing an acceptable sample size. The most common is Slovin's Formula:

n = N / (1 + Ne²)

where:
n is the sample size
N is the population size
e is the desired margin of error

D. The Power Analysis approach is founded on the principles of statistical power and effect size.

Probability Sampling Methods

A. Simple Random Sampling. All members of the population have an equal chance of being chosen as part of the sample. Techniques include the fishbowl technique, the roulette wheel, and the table of random numbers.
B. Stratified Random Sampling. The population is split into different groups (strata), and people from each group are randomly chosen to represent the whole population. Example: A population of 600 Junior High School students includes 180 Grade 7, 160 Grade 8, 150 Grade 9, and 110 Grade 10 students. If the computed sample size is 240, the proportionate allocation is 72 Grade 7, 64 Grade 8, 60 Grade 9, and 44 Grade 10 students.
C. Systematic Random Sampling. The sample is drawn by randomly selecting a starting point and then selecting every nth unit (for example, every 2nd or 5th) of the population until the desired sample size is reached.
D. Cluster/Area Sampling.
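The arithmetic behind Slovin's formula and proportionate stratified allocation from Lesson 2 can be sketched in Python. A 5% margin of error is assumed here, since it reproduces the 600-student example's computed sample size of 240.

```python
import math

# Slovin's formula: n = N / (1 + N * e^2)
N = 600   # population size
e = 0.05  # assumed margin of error
n = math.ceil(N / (1 + N * e**2))
print(n)  # 240

# Proportionate stratified allocation of the computed sample size
strata = {"Grade 7": 180, "Grade 8": 160, "Grade 9": 150, "Grade 10": 110}
allocation = {grade: round(n * size / N) for grade, size in strata.items()}
print(allocation)  # {'Grade 7': 72, 'Grade 8': 64, 'Grade 9': 60, 'Grade 10': 44}
```

Each stratum's share is its proportion of the population times the total sample size, so the four allocations sum back to 240.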
Districts or blocks of a municipality or city that are part of a cluster are randomly selected. In this method, the researcher forms multiple clusters of people from the chosen population so that each cluster has homogeneous characteristics.

Lesson 3: Validity and Reliability of Research Instruments

Characteristics of a Good Research Instrument

Concise. Have you tried answering a very long test and, because of its length, just picked answers without even reading the items? A good research instrument is concise in length yet able to elicit the needed data.

Sequential. Questions or items must be arranged well, preferably from the simplest to the most complex. In this way, the instrument is more favorable for the respondents to answer.

Valid and reliable. The instrument should pass the tests of validity and reliability to yield more appropriate and accurate information.

Easily tabulated. Since you will be constructing an instrument for quantitative research, this factor should be considered. Hence, before crafting the instrument, the researcher makes sure that the variables and research questions are established, as these are an important basis for making the items in the research instrument.

Ways of Developing a Research Instrument

adopting an existing instrument
modifying an existing instrument
making your own instrument

Common Scales Used in Quantitative Research

Likert Scale. This is the most common scale used in quantitative research. Respondents are asked to rate or rank statements according to the scale provided.

Semantic Differential. In this scale, respondents rate a series of bipolar adjectives. This scale is advantageous in that it is flexible and easy to construct.

Types of Validity of an Instrument

Face Validity. Also known as "logical validity," it calls for an intuitive judgment of the instrument as it appears: just by looking at the instrument, the researcher decides whether it is valid.
Content Validity. An instrument judged to have content validity meets the objectives of the study. This is done by checking whether the statements or questions elicit the needed information. Experts in the field of interest can also identify specific elements that the instrument should measure.

Construct Validity. This refers to whether the instrument corresponds to the theoretical construct of the study; it concerns how a specific measure relates to other measures.

Concurrent Validity. An instrument has concurrent validity when it can produce results similar to those of similar, already validated tests.

Predictive Validity. An instrument has predictive validity when it produces results similar to those of similar tests that will be employed in the future. This is particularly useful for aptitude tests.

Reliability of an Instrument

Test-retest Reliability. This is achieved by giving the same test to the same group of respondents twice and checking the consistency of the two sets of scores.

Equivalent Forms Reliability. This is established by administering two tests, identical except in wording, to the same group of respondents.

Internal Consistency Reliability. This determines how well the items measure the same construct: it is reasonable that a respondent who scores high on one item will also score high on similar items. There are three ways to measure internal consistency: the split-half coefficient, Cronbach's alpha, and the Kuder-Richardson formula.

Lesson 4: Research Intervention

Steps in Describing the Research Intervention Process

1. Write the Background Information. This is an introductory paragraph that explains the relevance of the intervention to the study. It also includes the context and duration of the treatment.

2. Describe the Differences and Similarities between the Experimental and Control Groups. State what will happen and what will not in both the experimental and control groups.
This will clearly illustrate the parameters of the research groups.

3. Describe the Procedures of the Intervention. In particular, describe how the experimental group will receive or experience the condition, including how the intervention will happen to achieve the desired result of the study. For example, how will the special tutorial program take place?

4. Explain the Basis of the Procedures. The reasons for choosing the intervention and its procedures should be clear and concrete. The researcher explains why the procedures are necessary. In addition, the theoretical and conceptual basis for choosing the procedures is presented to establish their validity.

Lesson 5: Planning the Data Collection Procedure

Techniques in Collecting Quantitative Data

Observation. This is gathering information about a certain condition by using the senses. The researcher records the observations as seen and heard, either through direct observation or through indirect observation with gadgets or apparatus. An observation checklist aids the researcher in recording the data gathered.

Survey. Data gathering is done through a questionnaire or an interview. A questionnaire uses a series of questions or statements that respondents answer in writing, usually by choosing from given choices; in an interview, you ask respondents to give their responses orally. Since you are doing quantitative research, responses are expected to have numerical value, whether nominal or ordinal in form.

Experiment. When your study uses an experimental design, as discussed in the previous lesson, it applies a treatment or intervention. After the chosen subjects, participants, or respondents have undergone the intervention, the effects of the treatment are measured.
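Since quantitative survey responses must carry numerical values, Likert-type answers are typically mapped to ordinal codes before analysis. A minimal sketch, using the 5-point verbal scale that appears later in Lesson 10 (the responses themselves are hypothetical):

```python
# Ordinal codes for a hypothetical 5-point Likert scale
LIKERT = {
    "strongly disagree": 1,
    "do not agree": 2,
    "undecided": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "undecided", "agree"]  # hypothetical answers
codes = [LIKERT[r] for r in responses]
print(codes)                    # [4, 5, 3, 4]
print(sum(codes) / len(codes))  # 4.0 -> falls in the "agree" band (3.41-4.20)
```

Coding responses this way is what makes tabulation and statistical treatment possible later in the study.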
Three Phases in Data Collection

Lesson 6: Planning Data Analysis

Data analysis in research is a process in which the gathered information is summarized in such a manner that it yields answers to the research questions. These numerical data are usually subjected to statistical treatment, depending on the nature of the data and the type of research problem presented. The statistical treatment makes explicit the different statistical methods and formulas needed to analyze the research data.

Planning Your Data Analysis

Descriptive Statistical Techniques provide a summary of the ordered or sequenced data from your research sample. Frequency distributions, measures of central tendency (mean, median, mode), and the standard deviation are the outputs of descriptive statistics.

Inferential Statistics is used when the research study focuses on making predictions; testing hypotheses; and drawing interpretations, generalizations, and conclusions. Since this statistical method is more complex and involves more advanced mathematical computations, you can use computer software to aid your analysis.
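The descriptive measures named above (mean, median, mode, and standard deviation) are all available in Python's standard statistics module. A minimal sketch, using hypothetical sample scores:

```python
import statistics

scores = [24, 25, 16, 11, 25]  # hypothetical sample data

print(statistics.mean(scores))    # 20.2
print(statistics.median(scores))  # 24
print(statistics.mode(scores))    # 25
print(statistics.stdev(scores))   # sample standard deviation (divides by n - 1)
print(statistics.pstdev(scores))  # population standard deviation (divides by n)
```

Note the distinction the module draws between the sample statistic (stdev) and the population parameter (pstdev), mirroring the s and σ notation used in Lesson 10.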
Types of Statistical Analysis by Number of Variables

Univariate Analysis – one variable
Bivariate Analysis – two variables
Multivariate Analysis – multiple relations between multiple variables

Lesson 7: Presenting the Research Methodology

Chapter 2 includes:

Research Design
Sampling (if applicable)
Materials – the materials and equipment needed in the research
Locale (Setting) – the place where the experiment or research is conducted (may include the localization of materials if significant)
Procedure – the step-by-step procedure of the research or experiment
Instruments (if applicable) – the research or data collection instrument(s)
Laboratory Sample Data Gathering – the step-by-step procedure of gathering data after the experiment (this can be the laboratory test applied to the product of the experiment)
Statistical Treatment – the statistic or parameter to be used to treat the collected data
Ethical Considerations – addressing possible hazards and the permission to conduct the experiment or research

Lesson 8: Data Collection Instruments

In collecting the data, the researcher must decide on the following questions: (1) Which data to collect? (2) How to collect the data? (3) Who will collect the data? (4) When to collect the data? (Barrot, 2018, p. 138).

Research Instruments

A. A questionnaire consists of a series of questions about a research topic used to gather data from the participants. It consists of indicators that are aligned with the research questions.

Three Structures of Questionnaires

Structured questionnaires employ closed-ended questions.
Unstructured questionnaires use open-ended questions, which research participants can answer freely in their own words.
Semi-structured questionnaires are combinations of the structured and unstructured types.

B. A quantitative interview is a method of collecting data about an individual's behaviors, opinions, values, emotions, and demographic characteristics using numerical data.
Difference between Quantitative and Qualitative Interviews

Distinction between an Interview and a Questionnaire

C. Tests are used for assessing various skills and types of behavior, as well as for describing some characteristics.

Two Types of Tests

A standardized test is scored uniformly across different areas and groups. It is usually administered by institutions to assess a wide range of groups, such as students and test-takers, and is considered more reliable and valid. Examples are achievement tests, university entrance exams, personality tests, and the like.

A non-standardized test may not be scored uniformly and is administered to a certain set of people.

Types of Test Questions

Recall Questions. These require participants to retrieve information from memory (e.g., fill-in-the-blank tests, identification tests, enumeration tests).

Recognition Questions. These require respondents to select the best or correct answer from given choices (e.g., multiple-choice tests, true-or-false tests, yes-or-no tests).

Open-ended Questions. These allow respondents more freedom in their responses, expressing their thoughts and insights (e.g., essay-writing tests and other performance-based tests).

D. Observation is another method of collecting data that is frequently used in qualitative research. However, it can be used in quantitative research when the observable characteristics are quantitative in nature (e.g., length, width, height, weight, volume, area, temperature, cost, level, age, time, and speed).

Forms of Observation

Controlled Observation. This is usually used in experimental research and is done under a standard procedure. It provides more reliable data, obtained through a structured and well-defined process; the procedure can be replicated, and the data are easier to analyze. The observer performs a non-participant role (i.e., does not interact with the participants).

Natural Observation. This is carried out in a non-controlled setting. It has greater ecological validity (i.e.
the capacity of the findings to be generalized to real-life contexts). It also responds to other areas of inquiry not initially intended by the researcher. Its major limitation is the difficulty of establishing a causal relationship, due to the presence of extraneous variables that can affect the behavior of the participants.

Participant Observation. This allows the observer to become a member of the group or community that the participants belong to. It can be performed covertly (i.e., participants are not aware of the purpose behind the observation) or overtly, wherein participants know the intention or objectives of the observation.

Different Roles of a Researcher during Participant Observation

Lesson 9: Data Presentation and Interpretation

Techniques in Data Processing

Editing is a process wherein the collected data are checked. At this stage, the data must be handled with honesty: when you edit, you are expected not to change, omit, or make up information just because the data you collected seem insufficient or do not meet your personal expectations. The main purpose of editing is to check the consistency, accuracy, organization, and clarity of the collected data. Editing can be done manually (for example, traditional tallying), with the assistance of a computer, or with a combination of both.

Coding is a process wherein the collected data are categorized and organized. It is usually done in qualitative research; in quantitative research, coding is done to assign numerical values to specific indicators, especially qualitative ones. These numerical values are useful when you analyze your data using a statistical tool. Just make sure that the categories created are aligned with your research questions.

Tabulation is a process of arranging data, and in many studies a table is used for this. Tabulation can be done manually or electronically using MS Excel.
Again, organize the data based on your research questions. Before inputting your data into a table, it helps to review, from your statistics class, how to arrange data according to the statistical techniques you will use. Take note that the digital tool you use also matters in how you tabulate your data; MS Excel, Minitab, and other digital tools have different ways of entering data. Correct arrangement of your data will be helpful during the actual data analysis.

Presentation and Interpretation of Data

A. A table helps summarize and categorize data using columns and rows. It contains headings that indicate the most important information about your study. To interpret a table, one needs to do the following:

1. Analyze the connections among the details of the headings.
2. Check for unusual patterns in the data and determine the reasons behind them.
3. Begin with the table number and the title.
4. Present the significant figures (overall results, high and low values, unusual patterns).
5. Refrain from simply repeating what is inside the table.
6. Support your findings with literature and studies that confirm or contrast your results.
7. Establish the practical implications of the results. This will add value to your research findings.
8. End with a brief generalization.

Example:

Interpretation: Table 1 shows the summary of the overall adjectival ratings, in frequency and percentage, of students in their pretest in Pre-calculus at Gulayan National High School for S.Y. 2019-2020. Results reveal that 66% of the students have a satisfactory rating, and only 5% have an outstanding rating. Overall, the data show that the students at Gulayan National High School have fair ratings based on their pretest scores. This implies that most of the students have no prior mastery of the concepts of this subject. Hence, the teacher is expected to apply teaching strategies that will strengthen the students' grasp of the subject.
This result is supported by Ignacio (2016), who found that pretest scores, especially from valid and reliable tests, reflect the learners' prior knowledge of the subject matter.

B. Graphs focus on how a change in one variable relates to another. Graphs use bars, lines, circles, and pictures to represent the data. Interpreting a graph follows the same process as interpreting a table.

A line graph illustrates trends and changes in data over time.

Example: Figure 2. Students' Quarterly Average Grade by Section in Elective Mathematics (S.Y. 2019-2020)

Interpretation: Figure 2 shows the changes in the average grade in Elective Mathematics of Grade 10-Max and Grade 10-Min from the first quarter to the fourth quarter of S.Y. 2019-2020. From the graph, it is evident that both sections are performing well, but Grade 10-Max consistently maintained a higher performance than Grade 10-Min every quarter. During the second quarter, there was a noticeably wide gap between the two sections. Overall, Grade 10-Max performed better in Elective Mathematics than Grade 10-Min.

A bar graph illustrates comparisons of amounts and quantities.

Example: Figure 1. GRSHS-X Canteen Lunch Menu

Interpretation: Figure 1 shows the canteen lunch menu of GRSHS-X. The graph reveals that rice is highly patronized by the students and teachers, with 150 cups sold daily. It can also be noted that the pork and chicken menus have a good number of buyers (315 servings/pieces). The vegetable menus cannot be undervalued, since several consumers (135 servings/pieces) also patronize them, while the seafood menus take the last spot (50 servings/pieces sold). Generally, the students and faculty of GRSHS-X prefer the meat (pork and chicken) menus next to rice.

A pie graph (circle graph) displays the relationship of parts to a whole.

Example: Figure 3. Dream Jobs of the Grade 7 Students of GRSHS-X

Interpretation: Figure 3 shows the results of the survey conducted among Grade 7 students when asked about their dream job.
From the graph, forty percent (40%) of the participants wanted to become a doctor and thirty percent (30%) an engineer, leaving thirty percent (30%) for other professions. Only about five percent (5%) wanted to become a teacher. From the data, about 70% of the Grade 7 students will likely pursue STEM-strand courses when they reach senior high school.

Lesson 10: Using Statistical Techniques to Analyze Data

Statistical Techniques

Percentage is any proportion from the whole:

PERCENTAGE (%) = (PART / WHOLE) × 100

Example: Here is the data gathered by the Purok A City High School administration regarding the number of Grade 7 parents who opted to receive digital copies of the learning modules.

Table 1: Percentage of Parents who Opted to Receive Digital Copies of Learning Modules

Mean or average is obtained by adding all the values and dividing the sum by the number of values:

x̄ or μ = Σx / n  (ungrouped data)
x̄ or μ = Σfx / n  (grouped data)

where:
x̄ is the sample mean
μ is the population mean
f is the frequency

Examples:

1. Ungrouped Data. Referring to Table 1 above, to get the mean or average number of parents who opted to receive digital copies of the learning modules:

x̄ = (24 + 25 + 16 + 11) / 4 = 76 / 4 = 19

2. Grouped Data. Here is the data gathered from the survey on study habits conducted by the Grade 12 students among the 150 Grade 7 students of Purok A City High School, with the following verbal descriptions:

1.00 to 1.80 represents "strongly disagree."
1.81 to 2.60 represents "do not agree."
2.61 to 3.40 represents "true to some extent" (undecided).
3.41 to 4.20 represents "agree."
4.21 to 5.00 represents "strongly agree."

Standard deviation shows the spread of the data around the mean:

σ or s = √( Σ(x − x̄)² / n )  (ungrouped data)
σ or s = √( Σfx² / n − (Σfx / n)² )  (grouped data)

where:
s is the sample standard deviation
σ is the population standard deviation

Hypothesis Testing
A hypothesis test helps you make a decision about some quantity under a given assumption. The outcome of the test tells you whether the assumption holds or whether it has been violated.

Types of Statistical Hypotheses

H₀ is the null hypothesis (the statement of no difference or no effect): H₀: μ = x
Hₐ is the alternative hypothesis (the opposite of the null hypothesis).

The statistical test uses the data obtained from a sample to decide whether the null hypothesis should be rejected.

In a one-tailed test (left-tailed or right-tailed), the null hypothesis is rejected when the test value falls in the critical region on one side of the mean:

Hₐ: μ < x or Hₐ: μ > x  (alternative hypothesis for a one-tailed test)

In a two-tailed test, the null hypothesis is rejected when the test value falls in either of the two critical regions:

Hₐ: μ ≠ x  (alternative hypothesis for a two-tailed test)

The Four Possibilities of Hypothesis Testing

The Basic Format for Hypothesis Testing

1. State the hypotheses and identify them.
2. Find the critical value(s).
3. Compute the test value.
4. Make the decision.
5. Summarize the result.

where:
α is the level of significance (default: 0.05 or 5%; other common values: 0.01 or 1%, 0.10 or 10%)
df is the degrees of freedom, computed as n − 1

How to Find the p-value from the t-statistic

If p-value ≤ significance level, then we reject H₀.
If p-value > significance level, then we fail to reject H₀.

Example (One-Sample t-test): A random sample of 10 Grade 7 students has grades in Math ranging from 90 (Good) to 98 (Excellent). The general average grade (Gen. Ave.) of all Grade 7 students over the last 5 years is 93. Is the Gen. Ave. of the 10 Grade 7 students different from the population's Gen. Ave.? Use the 0.05 level of significance.

Given: n = 10, α = 0.05, μ = 93, x̄ = 94, s = 2.68

Computational Procedure:

1. Define the null and alternative hypotheses.
H₀: There is no significant difference between the gen. ave.
of the 10 Grade 7 students and the population's gen. average of 93. (H₀: μ = 93)
Hₐ: There is a significant difference between the gen. ave. of the 10 Grade 7 students and the population's gen. average of 93. (Hₐ: μ ≠ 93, two-tailed)

2. State the alpha and the degrees of freedom.
α = 0.05
df = n − 1 = 10 − 1 = 9

3. State the decision rule.
If p-value ≤ significance level, then we reject H₀.
If p-value > significance level, then we fail to reject H₀.

4. Calculate the test statistic.
t = (x̄ − μ) / (s / √n) = (94 − 93) / (2.68 / √10) = 1.18

5. State the results (using the t-statistic table).
t-statistic: 1.18 ≡ p-value: 0.13412 (one-tailed) or 0.26825 (two-tailed)
p-value: 0.26825 > α: 0.05

6. Decision: Fail to reject H₀.

7. Conclusion: Therefore, the average grade of the 10 Grade 7 students is not significantly different from the population's average grade in Math, which is 93.

Lesson 11: Conclusion

Conclusions are precise statements that directly answer the stated research questions.

Lesson 12: Recommendation

Recommendations are suggestions regarding the best course of action to take as a result of your summary of findings and conclusions. The purpose of a recommendation is to provide a useful guide that will not only address certain problems but also lead to a successful outcome.
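The one-sample t-test worked out in Lesson 10 can be reproduced as a short script. This sketch uses only the summary statistics given in the example; instead of looking up the p-value, it compares |t| with the critical value 2.262 (taken from a standard t-table for df = 9, α = 0.05, two-tailed), which leads to the same decision.

```python
import math

# Summary statistics from the Lesson 10 example
n, alpha = 10, 0.05
mu = 93      # population general average
x_bar = 94   # sample mean
s = 2.68     # sample standard deviation
df = n - 1   # degrees of freedom = 9

# Test statistic: t = (x_bar - mu) / (s / sqrt(n))
t = (x_bar - mu) / (s / math.sqrt(n))
print(round(t, 2))  # 1.18

# Critical value t(0.025, df=9) from a standard t-table
t_crit = 2.262
decision = "reject H0" if abs(t) > t_crit else "fail to reject H0"
print(decision)  # fail to reject H0
```

Since 1.18 < 2.262, the test value does not fall in the critical region, matching the example's conclusion that the sample average is not significantly different from 93.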
