Essential Elements of Research Methodology

Summary

This document provides an overview of essential elements in research methodology. It details various research designs, including experimental and non-experimental approaches, and discusses core concepts such as validity, reliability, and threats to internal and external validity. The document also offers examples and potential applications.

Full Transcript


# Essential Elements of the Research Methodology

- **Research Design:** Describes the research mode: whether the study is qualitative or quantitative, or whether the researcher will use a specific research type, such as descriptive, survey, historical, case, or experimental.
- **Respondents of the Study:** Describes the target population and the sample frame.
- **Instrument of the Study:** Describes the specific type of research instrument that will be used, such as a questionnaire, checklist, questionnaire-checklist, interview schedule, teacher-made test, and the like.
- **Establishing Validity and Reliability:** The instrument must pass validity and reliability tests before it is utilized.
- **Statistical Treatment:** One of the many ways of establishing the objectivity of research findings is by subjecting the data to appropriate statistical formulas and processes.

## Quantitative Research Designs

The following table presents the two major designs in quantitative research, namely experimental and non-experimental designs.

| Experimental Designs | Non-experimental Designs |
|---|---|
| **True Experimental Designs** | Action Studies |
| Pretest-posttest control group | Comparative Studies |
| Posttest-only control group | Correlational Studies |
| Solomon four-group | Developmental Studies |
| **Quasi-Experimental Designs** | Evaluation Studies |
| Nonequivalent control group | Meta-Analysis Studies |
| Time series | Methodological Studies |
| | Needs Assessment Studies |
| **Pre-Experimental Designs** | Secondary Analysis Studies |
| One-shot case study | Survey Studies |
| One-group pretest-posttest | |

Source: Nieswiadomy, R. (2004). Foundations of Nursing Research, 4th edition. New Jersey: Prentice Hall, p. 127.

## Experimental Designs

Experimental research is concerned primarily with cause-and-effect relationships in studies that involve manipulation or control of the independent variables (causes) and measurement of the dependent variables (effects).
This design utilizes the principle of research known as the method of difference: the effect of a single variable applied to the situation can be assessed and the difference determined (Mill, as cited in Sevilla et al., 2003).

In experimental research, there are variables that are not part of the study but are believed to influence the outcomes. These are called intervening or extraneous variables, and they form part of the study's limitations. These extraneous or intervening variables are labeled threats to internal or external validity (Campbell & Stanley, as cited in Nieswiadomy, 2004). Internal validity is the degree to which changes in the dependent variable can be attributed to the independent variable rather than to extraneous variables. External validity, on the other hand, is the degree to which the results of the study can be generalized to other settings, populations, and conditions.

Since validity is defined as the ability of a tool to measure what it intends to measure, an experimental study is expected to produce accurate results. The accuracy of the results of experimental research, however, can be hindered both internally and externally.

## Threats to Internal Validity

- **Selection bias.** This results when the subjects or respondents of the study are not randomly selected. In this case, the requirement of objectivity is not met, since subjectivity enters the selection of subjects.
- **Maturation.** This happens when the experiment is conducted over a long period of time during which most of the subjects undergo physical, emotional, and/or psychological changes. Maturation is to be avoided if such changes are not desired.
- **History.** This threat occurs when an unusual event during the conduct of the study affects the result of the experiment.
- **Instrumentation change.** The instrument used in gathering the data must not be changed or replaced during the conduct of the study.
The same instrument must also be applied to all respondents or subjects.

- **Mortality.** There is a threat to validity when one or more subjects die, drop out, or transfer.
- **Testing.** This threat may occur when a pretest is given to subjects who have knowledge of baseline data. Testing bias is the influence of the pretest, or of knowledge of baseline data, on the posttest scores: subjects may remember the answers they put on the pretest and give the same answers on the posttest. The timing of the test should also be considered.

## Threats to External Validity

- **Experimenter effect.** This threat appears when the characteristics of the researcher affect the behavior of the subjects or respondents.
- **Hawthorne effect.** This occurs when the respondents or subjects respond artificially to the treatment because they know they are being observed as part of a research study.
- **Measurement effect.** Also called the reactive effect of the pretest, it occurs when subjects have been exposed to the treatment through taking the pretest, and this exposure affects the posttest results.

## Types of Experimental Research Designs

- **True experimental design:** A design is considered a true experiment when the following criteria are present:
  - the researcher manipulates the experimental variables, that is, the researcher has control over the independent variables as well as the treatment and the subjects;
  - there is one experimental group and one comparison or control group; and
  - the subjects are randomly assigned to either the comparison or the experimental group. The control group is the group that does not receive the treatment.
- **Quasi-experimental design:** A design in which either there is no control group or the subjects are not randomly assigned to groups.
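The random-assignment criterion above can be sketched in code. The following is a minimal illustration in Python; the subject labels and the seed are made-up values for the example, not from the source:

```python
import random

def randomly_assign(subjects, seed=None):
    """Randomly split subjects into an experimental and a control group,
    giving every subject an equal chance of landing in either group."""
    rng = random.Random(seed)
    shuffled = subjects[:]      # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

# Twenty hypothetical subjects, split into two groups of ten.
groups = randomly_assign([f"S{i}" for i in range(1, 21)], seed=42)
print(len(groups["experimental"]), len(groups["control"]))  # prints: 10 10
```

Because every ordering of the shuffled list is equally likely, each subject has the same chance of ending up in either group, which is what distinguishes a true experiment from a quasi-experiment.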
## Types of Experimental Designs

**True experimental designs**

- **Pretest-posttest control group design:**
  1. Subjects are randomly assigned to groups.
  2. A pretest is given to both groups.
  3. The experimental group receives the treatment while the control group does not.
  4. A posttest is given to both groups.

  The procedure is summarized below.

  R O1 X O2 (experimental group)
  R O1 O2 (control group)

  *Where: R stands for random assignment, O1 for the pretest, O2 for the posttest, and X for the intervention.*

- **Posttest-only control group design:**
  1. Subjects are randomly assigned to groups.
  2. The experimental group receives the treatment while the control group does not.
  3. A posttest is given to both groups.

  The procedure is summarized below.

  R X O2 (experimental group)
  R O2 (control group)

  *Where: R stands for random assignment, O2 for the posttest, and X for the intervention.*

- **Solomon four-group design:** This design is considered the most reliable and suitable experimental design, as it minimizes threats to both internal and external validity.
  1. Subjects are randomly assigned to four groups.
  2. Two of the groups (experimental group 1 and control group 1) are pretested.
  3. The two experimental groups receive the treatment; the two control groups do not.
  4. A posttest is given to all four groups.

  The procedure is summarized below.
R O1 X O2 (experimental group 1)
  R O1 O2 (control group 1)
  R X O2 (experimental group 2)
  R O2 (control group 2)

**Quasi-experimental designs:** A design in which either there is no control group or the subjects are not randomly assigned to groups.

- **Nonequivalent control group design:** This design is similar to the pretest-posttest control group design, except that there is no random assignment of subjects to the experimental and control groups. The procedure is summarized below.

  O1 X O2 (experimental group)
  O1 O2 (control group)

- **Time-series design:** The researcher periodically observes or measures the subjects. The procedure is summarized below.

  O1 O2 O3 X O4 O5 O6

  *Where: O1, O2, and O3 stand for the pretest (multiple observations); O4, O5, and O6 stand for the posttest (multiple observations); and X stands for the intervention.*

**Pre-experimental designs:** This experimental design is considered very weak because the researcher has little control over the research.

- **One-shot case study:** A single group is exposed to an experimental treatment and observed after the treatment. The procedure is summarized below.

  X O

- **One-group pretest-posttest design:** It provides a comparative description of a group of subjects before and after the experimental treatment. The procedure is summarized below.

  O1 X O2

## Non-Experimental Research Designs: Survey Studies

Non-experimental research designs include survey studies. In this type of research design, investigations are conducted through self-report: surveys generally ask respondents to report on their attitudes, opinions, perceptions, or behaviors. Thus, survey studies aim at describing characteristics, opinions, attitudes, and behaviors as they currently exist in a population (Wilson, 1990).
**Surveys can be categorized according to:**

- **Whom the data is collected from:**
  - Sample: a representative subset of the total population
  - Group: can be smaller than a mass
  - Mass: larger than a group
- **The methods used to collect the data:**
  - telephone
  - text messages
  - snail mail
  - e-mail or other social media modalities
  - face-to-face interaction
- **Time orientation:**
  - **Retrospective.** The dependent variable is identified in the present, and an attempt is made to determine the independent variable that occurred in the past.
  - **Cross-sectional.** The data are collected at a single point in time. The design requires subjects who are at different points, phases, or stages of an experience; the subjects are assumed to represent data collected from different time periods. For example, if the researcher wants to determine the psychological experience of students in different grade levels, he or she will gather data from a specific number of subjects in each grade level.
  - **Longitudinal.** Unlike in the cross-sectional survey, the researcher collects data from the same people at different times. In the same study on the psychological experience of students in different grade levels, the researcher will recruit enough subjects in the first grade level and observe them as they pass through the succeeding levels. Compared to the cross-sectional survey, this study is conducted over a longer period of time.
- **Purpose or objectives:**

- **Descriptive.** This design is utilized to accurately portray a population that has been chosen because of some specific characteristics. It is also used to determine the extent or direction of attitudes and behaviors, and aims to gather more information on certain characteristics within a particular field of study. The purpose is to provide a picture of a situation as it naturally happens.
It may be used to develop theories, identify problems with a current practice, justify current practices, aid in making professional judgments, or determine what other practitioners in similar situations are doing. No manipulation of variables is involved in a descriptive design.

- **Comparative.** This design is used to compare and contrast representative samples from two or more groups of subjects in relation to certain designated variables that occur in normal conditions. The results obtained from these analyses are frequently not generalizable to a population.
- **Correlational.** This design is used to investigate the direction and magnitude of relationships among variables in a particular population, and to study changes in one characteristic or phenomenon that correspond to changes in another. A wide range of variable scores is necessary to determine the existence of relationships; thus, the sample should reflect, as far as possible, the full range of scores on the variables being measured.
- **Evaluative.** This design involves making a judgment of worth or value. It allows the researcher to delineate, obtain, and provide information that is useful for judging decision alternatives when conducting a program or service. It can be formative (*process*) or summative (*outcome*).

## Research Design Example 1

**The Intrapersonal and Interpersonal Competencies of School Managers: Basis for Human Relation Intervention Program (Cristobal, 2003)**

This study used the survey approach, specifically descriptive survey and correlation procedures. The principal purpose of the researcher was to discover how the groups of respondents assessed the intrapersonal and interpersonal competencies of school managers and to find out the relationship of these competencies to the school's effectiveness.
The descriptive method was supplemented with documentary analysis of the school's, teachers', and students' performance as reflected in the Performance Appraisal for Secondary School Teachers (PAST), the documents available in the Division office on the performance indicators, and local documents available in the school on the awards received and the school's participation in the community.

- Gay (1976) defines descriptive research as research concerned with the current status of the subject of the study. This method is designed to gather information on conditions existing at a particular period. Similarly, Travers (1978) added that the descriptive method of research is used to describe the nature of a situation as it exists at the time of the study and to explore the causes of particular phenomena.
- A correlation approach was used to relate the competencies of the school managers to school performance. A correlation survey (Calmorin, 1998) is defined as a study that aims to determine the relationship of variables. It indicates the extent to which different variables are related to each other, and which variables are related to each other, in the target population; it also ascertains how much variation in one variable is associated with another. Measures of correlation determine the magnitude and direction of relationships.

## Participants of the Study

- This element of the research methodology discusses how the subjects or respondents of the study are selected and how an appropriate sampling method is chosen. In this part of the research, the subjects or respondents are introduced to the readers through their basic profiles.
- Subjects can be individuals or groups to which interventions or processes are applied. In some studies, the subjects are the respondents themselves, but in other studies, the subjects are not necessarily the respondents.
- **Participants or respondents** are individuals or groups of people that serve as the sources of information during data collection.
- **The population** is composed of persons or objects that possess some common characteristics that are of interest to the researcher. There are two groups within the population: the target population and the accessible population. The target population consists of the entire group of people or objects to which the findings of the study generally apply, while the accessible population is the specific study population.

**Ways to Determine the Sample Size**

An important task of the researcher is to determine the acceptable sample size. The larger the sample, the more reliable the results of the study; hence, it is advisable to have a sample large enough to yield reliable results.

**Factors to consider in determining the sample size:**

- **Homogeneity of the population.** The more homogeneous the population (i.e., the lower the degree of variation within it), the smaller the sample size that can be utilized.
- **Degree of precision desired by the researcher.** A larger sample size results in greater precision or accuracy of results.
- **Type of sampling procedure.** Probability sampling can utilize smaller sample sizes than non-probability sampling.
- **The use of formulas.**
  - **Slovin's formula:** It is used to compute the sample size (Sevilla et al., 2003). This formula is used when you have limited information about the characteristics of the population and are using a non-probability sampling procedure (Ellen, 2018).

    *n = N / (1 + Ne²)*

    Where: n = sample size, N = population size, e = desired margin of error. (Note: An acceptable margin of error used by most survey researchers typically falls between 2% and 8% at the 95% confidence level. It is affected by sample size, population size, and confidence level.)
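Slovin's formula translates directly into code. Below is a minimal sketch in Python; the population size and margin of error in the example are made-up values, not from the source:

```python
import math

def slovin(population_size: int, margin_of_error: float) -> int:
    """Sample size by Slovin's formula: n = N / (1 + N * e^2).
    The result is rounded up, since a sample cannot be fractional."""
    n = population_size / (1 + population_size * margin_of_error ** 2)
    return math.ceil(n)

# Example: a population of 1,000 with a 5% margin of error.
print(slovin(1000, 0.05))  # prints: 286
```

Rounding up is the usual convention here: 1000 / (1 + 1000 × 0.05²) = 285.7, and a fractional respondent cannot be surveyed.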
- **Calmorin's formula:** This is used when the population is more than 100 and the researcher decides to utilize scientific sampling (Calmorin & Calmorin, 2003).

    *Ss = [NV + (Se²)(1 − p)] / [NSe + (V²)(p)(1 − p)]*

    Where: Ss = sample size, N = population size, V = standard value (2.58) at the 1% level of probability with 0.99 reliability, Se = sampling error, p = the largest possible proportion.

**Kinds of Sampling**

- **Probability Sampling:** This is a type of sampling in which all members of the population are given a chance of being selected. It is also called scientific sampling.
- **Simple random sampling:** This is a method of choosing samples in which all members of the population are given an equal chance of being selected as respondents. It is an unbiased way of selection, as samples are drawn by chance. Ways of drawing samples through simple random sampling include the roulette wheel, the fishbowl technique, and the use of a table of random numbers.
- **Stratified random sampling:** The population is first divided into different strata, and then the sampling follows. Age, gender, and educational qualification are some of the criteria used in dividing the population into strata.
- **Cluster sampling:** This is used in large-scale studies in which the population is geographically spread out, making sampling procedures difficult and time-consuming.
- **Systematic sampling:** This is a method of selecting every *n*th element of the population (e.g., every fifth, eighth, ninth, or eleventh element). After the size of the sample has been determined, the selection of the sample follows.
- **Non-probability Sampling:** This is a process of selecting respondents in which the members of the entire population do not have an equal chance of being selected as samples. In some cases, certain members are given priority over others.
- **Convenience sampling:** It is also called accidental or incidental sampling.
- **Quota sampling:** It is somewhat similar to stratified sampling: the population is divided into homogeneous strata, and then sample elements are selected from each stratum.
- **Purposive sampling:** It involves the handpicking of subjects. It is also called judgmental sampling.

**In formulating the description of the respondents of the study, the following elements must be properly discussed:**

- the total population and its parameters
- the sample and its statistic
- the sampling method, with references to support it
- the explanation and discussion of the sampling method
- the explanation of how the sampling is done
- the enumeration of the qualifying criteria
- the profile of the respondents

## Data Collection

Data collection is impossible without instruments that allow the data to be collected. These instruments help the researcher gather valuable information, which serves as the basis of the results of the study; without reliable data, the study itself is invalid and unreliable. Gathering reliable data is possible through the use of valid, reliable, and effective research instruments, which must be designed carefully, with techniques for data collection planned accordingly.

## Most Frequently Used Data Collection Techniques

- **Documentary Analysis:** This technique is used to analyze primary and secondary sources that are available mostly in churches, schools, public or private offices, hospitals, and community, municipal, or city halls, among other institutions. Sometimes, data are not available or are difficult to locate in these places, and the information gathered tends to be incomplete, indefinite, or inconclusive.
- **Interview:** The instrument used in this method is the interview schedule. The skill of the interviewer determines whether the interviewee is able to express his or her thoughts clearly. Usually, an interview is conducted with a single person.
However, there are also times when it is conducted with a group of around five to ten people whose opinions and experiences are elicited simultaneously; this type is called a focus group discussion. Life histories, which are narratives or self-disclosures about an individual's life experiences, are also used in this area, with the interviewer guiding the respondents in narrating their accounts. Data obtained from an interview may be recorded on audiotape or videotape; today, smartphones can also be used as recording devices. Some researchers believe that writing down responses during the interview affects rapport, reduces spontaneity, and hinders eye contact.

- **Unstructured:** This interview can take the form of a normal conversation or a free-wheeling exchange of ideas. The researcher must be skilled in conducting the interview so that he or she can steer the course of the conversation, and must be knowledgeable on the subject or topic of concern.
- **Structured:** The questioning follows a particular sequence and has well-defined content. The interviewer does not ask questions that are not part of the interview schedule.
- **Semi-structured:** There is a specific set of questions, but additional probes may be made in an open-ended or close-ended manner. The researcher can gather additional data from a respondent to add depth and significance to the findings.
- **Observation:** This technique enables the researcher to participate actively in the conduct of the research. The instrument used in an observation is called the observation guide or observation checklist. Observation must be done in a quiet and inconspicuous manner so as to obtain realistic data. In nursing research, for instance, the observation method has broad applicability, particularly for clinical inquiries.
Nurses are in an advantageous position to observe the behaviors and activities of patients and their families as well as the healthcare staff. Observation can be used to gather information such as the characteristics and conditions of individuals, verbal communication, non-verbal communication, activities, and environmental conditions. The following dimensions should be taken into consideration: the focus of the observation; concealment, the condition wherein the subject of observation has no knowledge of being observed; duration; and the method of recording the observations.

- **Structured:** The researcher uses a checklist as a data collection tool. The checklist specifies the expected behaviors of interest, and the researcher records the frequency of their occurrence.
- **Unstructured:** The researcher observes things as they happen, without any preconceived ideas about what will be observed.
- **Physiological Measures:** This technique involves the collection of physical data from the subjects. It is considered more accurate and objective than other data collection methods; however, skill and expertise are needed to use and manipulate the measurement devices. Examples of instruments used to collect physiological measures include:
  - thermometer
  - stethoscope
  - weighing scale
- **Psychological Tests:** These include personality inventories and projective techniques. Personality inventories are self-report measures that assess differences in the personality traits, needs, or values of people. They involve gathering information from a person through questions or statements that require responses or reactions. Examples are the Minnesota Multiphasic Personality Inventory (MMPI) and the Edwards Personal Preference Schedule (EPPS).
Meanwhile, in projective techniques, the subject is presented with a stimulus designed to be ambiguous or vague in meaning and is asked to describe the stimulus or tell what it appears to represent. Common projective techniques are the Rorschach Inkblot Test and the Thematic Apperception Test. In the Rorschach Inkblot Test, subjects are presented with cards containing inkblot designs. The Thematic Apperception Test, meanwhile, consists of a set of pictures about which the subjects are asked to tell a story or say what they think is happening.

- **Questionnaire:** This is the most commonly used instrument in research. It is a list of questions about a particular topic, with spaces provided for the response to each question, intended to be answered by a number of persons (Good, 1984). It is less expensive than other methods, yields more honest responses, guarantees confidentiality, and minimizes biases based on question-phrasing modes. A questionnaire can be structured (possible answers are provided and respondents simply select from them) or unstructured (no options are provided and respondents are free to give whatever answers they want).

## Relationship of the Review of Related Literature to the Questionnaire

The review of related literature and studies must contain sufficient information and data to enable the researcher to understand thoroughly the variables being investigated in the study. The descriptive information gathered from different sources are called indicators of the specific variables, and they are used to ensure that the content of the questionnaire is valid. A valid indicator must be supported by previous studies done by experts.

## Types of Questions

- **Recognition type:** Alternative responses are already provided, and respondents simply choose among the given choices. It contains close-ended questions.
- **Completion type:** The respondents are asked to fill in the blanks with the necessary information. Questions are open-ended.
- **Coding type:** Numbers are assigned to names, choices, and other pertinent data. This entails knowledge of statistics on the part of the researcher, as the application of statistical formulas is necessary to arrive at the findings.
- **Subjective type:** The respondents are free to give their opinions about an issue of concern.
- **Combination type:** The questionnaire combines two or more types of questions.

## Wording of Questions

- State questions in an affirmative rather than a negative manner.
- Avoid ambiguous questions.
- Avoid double-negative questions.
- Avoid double-barreled questions.

## Scales Commonly Used in an Instrument

- **Likert scale:** A common scaling technique consisting of several declarative statements that express a viewpoint on a topic. Respondents are asked to indicate how much they agree or disagree with each statement.
- **Semantic differential scale:** Respondents are asked to rate concepts on a series of bipolar adjectives.

## Characteristics of a Good Data Collection Instrument

- It must be concise yet able to elicit the needed data.
- It seeks information that cannot be obtained from other sources.
- Questions must be arranged in sequence.
- It must be arranged according to the questions posed in the statement of the problem.
- It should pass validity and reliability tests.
- It must be easy to tabulate and interpret.

## Validity and Reliability of the Instrument

The accuracy and credibility of the results of a quantitative research study depend on the validity and reliability of the instruments used in the conduct of the research. Thus, before using an instrument to explore a research problem, there is a need to establish its validity and reliability.

## Validity

- **Validity:** The ability of an instrument to measure what it intends to measure.
- **Face validity:** Also known as logical validity, face validity involves an analysis of whether the instrument is using a valid scale.
- **Content validity:** This is determined by studying the questions to see whether they are able to elicit the necessary information. An instrument with high content validity meets the objectives of the research.
- **Construct validity:** This refers to whether the test corresponds to its theoretical construct.
- **Criterion-related validity (or equivalence test):** This type of validity is an expression of how well scores from the test correlate with an external criterion.

## Reliability

- **Reliability:** Refers to the consistency of results. A reliable instrument yields the same results for individuals who take the test more than once.

**Methods of establishing reliability:**

- **Test-retest (stability test):** The same test is given to a group of respondents twice, and the two sets of scores are compared.
- **Internal consistency:** If the test in question is designed to measure a single basic concept, it is reasonable to assume that a respondent who gets one item right is likely to answer another similar item correctly.

## Other Criteria for Assessing Quantitative Measures

- **Sensitivity:** The instrument should be able to identify a case correctly.
- **Specificity:** The instrument should be able to identify a non-case correctly.
- **Comprehensibility:** Subjects and researchers should be able to comprehend the behavior required to secure accurate and valid measurements.
- **Precision:** An instrument should discriminate between people who exhibit varying degrees of an attribute as precisely as possible.
- **Speed:** The researcher should not rush the measuring process, so that reliable measurements can be obtained.
- **Range:** The instrument should be capable of detecting values from the smallest expected value of the variable to the largest in order to obtain meaningful measurements.
- **Linearity:** A researcher normally strives to construct measures that are equally accurate and sensitive over the entire range of values.
- **Reactivity:** The instrument should, as much as possible, avoid affecting the attribute being measured.

## Planning the Collection of Data

In quantitative research, the researcher should clarify whether the type of data to be collected is textual or numerical, and, for practical purposes, should set a timeline for collecting the data from the subjects or respondents. When the collection of data is planned properly, it serves as a motivation to the researcher and helps save money, time, and effort. The plan should cover:

- **The People**
- **The Finances**
- **The Schedule**
- **Miscellaneous:** The researcher must consider what to wear during the data collection; what to do to ensure that the participants are safe; how to motivate and encourage participants to answer all the items in the instrument; and what to do to build rapport and gain the trust and cooperation of the participants.
