PR2 1ST TRIM MIDTERM Shortbond Landscape PDF
Angela Casimiro
Summary
This document provides an overview of qualitative research methods, focusing on data analysis, particularly thematic analysis. The text describes the process of drawing themes and patterns from research data and the use of coding and other techniques to identify these patterns.
Practical Research | 1T MT - Angela Casimiro, STEM 12 - BLOCK N

MODULE 1: ANALYZING THE MEANING OF DATA AND DRAWING CONCLUSIONS

"Data analysis is central to credible qualitative research. The qualitative researcher is often described as the research instrument, as his or her ability to understand, describe and interpret experiences and perceptions is the key to uncovering meaning in particular circumstances and contexts." (Maguire and Delahunt, 2017)

DRAWING THEMES AND PATTERNS

THEMES
- are features of participants' accounts characterizing particular perceptions and/or experiences that the researcher sees as relevant to the research question.
- come both from the data (an inductive approach) and from the investigator's prior theoretical understanding of the phenomenon under study (an a priori approach).
- A priori themes come from the characteristics of the phenomenon being studied; from professional definitions already agreed on in literature reviews; from local, common-sense constructs; and from researchers' values, theoretical orientations, and personal experiences.

CODING is the process of identifying themes in accounts and attaching labels (codes) to index them.

THEMATIC ANALYSIS is the process of identifying patterns or themes within qualitative data. Braun & Clarke (2006) suggest that it is the first qualitative method that should be learned, as "it provides core skills that will be useful for conducting many other kinds of analysis." It is much more than simply summarizing the data; a good thematic analysis interprets and makes sense of it. A common pitfall is to use the main interview questions as the themes (Clarke & Braun, 2013).

BRAUN & CLARKE (2006) SIX-PHASE GUIDE FOR CONDUCTING THEMATIC ANALYSIS
1. Become familiar with the data. - The first step in any qualitative analysis is reading, and re-reading, the transcripts.
2. Generate initial codes. - Start to organize your data in a meaningful and systematic way.
3. Search for themes. - As defined earlier, a theme is a pattern that captures something significant or interesting about the data and/or research question. A theme is characterized by its significance.
4. Review themes. - Modify and develop the preliminary themes that were identified. Do they make sense? It is useful to gather together all the data that is relevant to each theme.
5. Define themes. - This is the final refinement of the themes; the aim is to "identify the 'essence' of what each theme is about."
6. Writing up. (A minimal coding sketch follows this list.)
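To make phases 2 and 3 concrete, here is a minimal sketch of how coded excerpts can be organized into candidate themes. All excerpts, code labels, and theme names below are hypothetical, invented only to illustrate the mechanics; real coding is an interpretive act, not a mechanical one.

```python
# Minimal sketch of phases 2-3: coding excerpts, then grouping codes
# under candidate themes. All data below are hypothetical.
from collections import defaultdict

# Phase 2: attach a code (label) to each meaningful excerpt.
coded_excerpts = [
    ("I never have enough time after my shift to study", "time_pressure"),
    ("My tuition depends on how many hours I work", "financial_strain"),
    ("I fall asleep in my first class most days", "fatigue"),
    ("I skip meals to afford school supplies", "financial_strain"),
]

# Phase 3: group related codes under a candidate theme.
candidate_themes = {
    "Work-study conflict": {"time_pressure", "fatigue"},
    "Economic hardship": {"financial_strain"},
}

# Gather the evidence (excerpts) behind each candidate theme.
theme_evidence = defaultdict(list)
for excerpt, code in coded_excerpts:
    for theme, codes in candidate_themes.items():
        if code in codes:
            theme_evidence[theme].append(excerpt)

for theme, quotes in theme_evidence.items():
    print(f"{theme} ({len(quotes)} excerpts)")
    for q in quotes:
        print("  -", q)
```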
IDENTIFYING THEMES

A. Observational Techniques

1. Repetitions - One of the easiest ways to identify themes. Some of the most obvious themes in a corpus of data are those "topics that occur and reoccur." (See the scanning sketch after this list.)

2. Indigenous Typologies or Categories - Look for local terms that may sound unfamiliar or are used in unfamiliar ways.

3. Metaphors and Analogies - Lakoff and Johnson (1980) observed that people often represent their thoughts, behaviors, and experiences with analogies and metaphors.

4. Transitions - Naturally occurring shifts in content may be markers of themes. In written texts, new paragraphs may indicate shifts in topic. In speech, pauses, changes in voice tone, or the presence of particular phrases may indicate transitions.

5. Similarities and Differences - Glaser and Strauss (1967:101-16) described the "constant comparison method," which involves searching for similarities and differences by making systematic comparisons across units of data.
   a. Line-by-line analysis: Grounded theorists analyze data sentence by sentence, asking how each statement relates to others, keeping the focus on the data rather than on abstract theories.
   b. Comparing expressions: Researchers compare pairs of expressions (from the same or different informants) to determine how they are similar or different.
   c. Theme identification: Similarities and differences across data generate themes. If a theme appears in both expressions, researchers then analyze variations in how the theme is expressed. These variations can lead to the identification of subthemes based on the strength or degree of the theme's presence.

6. Linguistic Connectors - Researchers can identify relationships in data by paying attention to specific words and phrases:
   a. Causal relations: Words like "because," "since," and "as a result."
   b. Conditional relations: Phrases like "if," "then," "rather than," and "instead of."
   c. Time-oriented relations: Words like "before," "after," "then," and "next."
   d. Negative characteristics: Identified by words like "not," "no," "none," and prefixes like "un-," "in-," "il-," "im-."
   Additionally, Casagrande and Hale (1967) suggest looking for other types of relationships:
   a. Attributes: X is Y.
   b. Contingencies: If X, then Y.
   c. Functions: X affects Y.
   d. Spatial orientations: X is near Y.
   e. Operational definitions: X is a tool for doing Y.
   f. Examples: X is an instance of Y.
   g. Comparisons: X resembles Y.
   h. Class inclusions: X is a member of class Y.
   i. Synonyms: X is equivalent to Y.
   j. Antonyms: X is the opposite of Y.
   k. Provenience: X is the source of Y.
   l. Circularity: X is defined as X.
   Metaphors, transitions, and connectors are all part of a native speaker's ability to grasp meaning in a text. By making these features more explicit, we sharpen our ability to find themes.

7. Missing Data - This works in reverse from typical theme identification techniques. Instead of asking, "What is here?", you can ask, "What is missing?" Researchers have long recognized that much can be learned from qualitative data by what is not mentioned. Bogdan and Taylor (1975) suggested being "alert to topics that your subjects either intentionally or unintentionally avoid." Themes that are discovered in this manner need to be carefully scrutinized to ensure that investigators are not finding only what they are looking for or what they are expecting to find.

8. Theory-related Material - Researchers can also look for statements that are related to or support a theory. However, over-focusing on this can draw attention away from crucial, more obvious themes. That is why, again, this should be done after the "obvious" themes have already been identified.
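As a rough illustration of techniques 1 and 6, the sketch below counts repeated words and flags common connector words in a hypothetical answer. The connector lists are illustrative, not exhaustive.

```python
# Minimal sketch of two observational techniques: spotting repetitions
# (technique 1) and flagging linguistic connectors (technique 6).
# The sample answer text is hypothetical.
import re
from collections import Counter

text = ("I stopped going because the fare went up. Since the fare went up, "
        "I work instead of attending class. If the fare drops, then I will return.")

words = re.findall(r"[a-z']+", text.lower())

# Technique 1: repetitions - topics that occur and reoccur.
print(Counter(words).most_common(3))

# Technique 6: connector words that can mark relationships in the data.
connectors = {
    "causal": {"because", "since"},
    "conditional": {"if", "then", "instead"},
}
for relation, markers in connectors.items():
    hits = [w for w in words if w in markers]
    print(relation, "->", hits)
```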
B. Manipulative Techniques

1. Cutting and Sorting - After the initial pawing and marking of the text, cutting and sorting involves identifying quotes or expressions that seem somehow important and then arranging them into piles of things that go together (those with common implications). Each quote must be properly labeled or coded (who said it, in what part of the transcript it was taken, etc.).

2. Word Lists and Key Words in Context (KWIC) - If you want to understand what people are talking about, look closely at the words they use. To generate word lists, researchers first identify all the unique words in a text and then count the number of times each occurs. Computer programs perform this task effortlessly (ATLAS.ti, NVivo, Provalis Research Text Analytics Software, FreeQDA, QDA Miner Lite). (See the sketch after this list.)

3. Word Co-occurrence (Collocation) - This comes from linguistics and semantic network analysis and is based on the idea that a word's meaning is related to the concepts to which it is connected.

4. Metacoding - This examines the relationships among a priori themes to discover potentially new themes and overlapping themes. The technique requires a fixed set of data units (paragraphs, whole texts, pictures, etc.) and a fixed set of a priori themes. For each data unit, the investigator asks which themes are present and, possibly, the direction and valence of each theme. The data are recorded in a unit-by-theme matrix, which can then be analyzed statistically. It works best when applied to short, descriptive texts of one or two paragraphs.
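The word-list and co-occurrence ideas can be illustrated without the packages named above. The sketch below builds a simple word list (technique 2) and response-level co-occurrence counts (technique 3) from two hypothetical survey answers.

```python
# Minimal sketch of a word list (technique 2) and word co-occurrence
# counts (technique 3). The two sample responses are hypothetical.
from collections import Counter
from itertools import combinations
import re

responses = [
    "allowance is not enough for food and transport",
    "transport eats half of my allowance",
]

# Technique 2: word list - unique words and their frequencies.
tokens = [re.findall(r"[a-z]+", r.lower()) for r in responses]
word_list = Counter(w for toks in tokens for w in toks)
print(word_list.most_common(5))

# Technique 3: co-occurrence - word pairs appearing in the same response.
pairs = Counter()
for toks in tokens:
    for a, b in combinations(sorted(set(toks)), 2):
        pairs[(a, b)] += 1
print([p for p, n in pairs.items() if n > 1])  # pairs present in both responses
```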
IMPORTANT POINTS TO CONSIDER

What to consider when choosing appropriate techniques for identifying themes:

1. Kind of Data

2. Expertise - Not all techniques are available to all researchers. One needs to be truly fluent in the language of the text to use techniques that rely on metaphors, linguistic connectors, and indigenous typologies, or that require spotting subtle nuances such as missing data. Researchers who are not fluent in the language should rely on cutting and sorting and on the search for repetitions, transitions, similarities and differences, and etic categories (theory-related material). Word lists and co-occurrences, as well as metacoding, also require less language competence and so are easier to apply. However, co-occurrences and metacoding require skills in manipulating matrices.

3. Labor and Time - Today, computers have made counting words and co-occurrences of words much easier. Software has also made it easier to analyze larger corpora of texts. Still, some of the scrutiny-based techniques (searching for repetitions, indigenous typologies, metaphors, transitions, and linguistic connectors) are best done manually, by the human eye, and this can be quite time-consuming.

4. Number and Kinds of Themes - In theme discovery, more is better. This is not to say that all themes are equally important; investigators must eventually decide which themes are most salient and how themes are related to each other. But unless themes are first discovered, none of this additional analysis can take place.

5. Validity and Reliability - Theme identification does not produce a unique solution. As Dey (1993) noted, "there is no single set of categories [themes] waiting to be discovered. There are as many ways of 'seeing' the data as one can invent." The answer is that there is no ultimate demonstration of validity. The validity of a concept depends on the utility of the device that measures it and on the collective judgment of the scientific community that a construct and its measure are valid.

CORROBORATION
- Done to enhance the validity, reliability, authenticity, replicability, and accuracy of the research. The researcher uses several tools to achieve corroboration and to reduce faulty observations, biased analysis, and inaccurate conclusions. Corroboration is necessary to maintain standards in conducting surveys, data analysis, and interpretation.

"The purpose of corroboration is not to confirm whether people's perceptions are accurate or true reflections of a situation but rather to ensure that the research findings accurately reflect people's perceptions, whatever they may be. The purpose of corroboration is to help researchers increase their understanding of the probability that their findings will be seen as credible or worthy of consideration by others." (Stainback & Stainback, 1988)

APPROACHES TO CORROBORATION
1. Supporting documents/proofs - The researcher might ask the respondents to provide supporting documents or proofs where necessary. This gives the reader an idea about the authenticity of the research.
2. Other data sources - Books, government records, and other forms of recorded data, or reports that can be used to corroborate the findings of the interview or questionnaire. The researcher often learns about other corroborating sources through the interview itself; the interviewee can help the researcher know about the sources and where to locate them.
3. Consistency check - A very useful tool that researchers use in interviews and questionnaires. The researcher asks the respondents two similar questions with different wordings and phrasings; the aim is to know whether the answers are consistent. Researchers usually use this technique for important questions, as well as for questions that are sensitive or have personal meanings. It should be done with caution to produce genuine answers.
4. Comparing results to similar studies - There might be similar studies conducted by other researchers; if you are not sure about the validity and accuracy of your data and results, you can compare them with the results of such studies. Researchers can provide references to the useful and relevant sources in their own research; these references add more weight to the research outcomes.

MODULE 2: QUANTITATIVE DATA COLLECTION TECHNIQUES

Data collection is the process of gathering and measuring information on variables of interest, in an established, systematic fashion that enables one to answer stated research questions, test hypotheses, and evaluate outcomes.

MOST FREQUENTLY USED DATA COLLECTION TECHNIQUES
- Research instruments play a crucial role in collecting data. According to Yaya (2014), it is significant for every researcher to know what kind of data should be collected and what method should be used. The method that researchers use in collecting the desired data is called a measurement instrument.
1. Observation - Using the sense organs to gather facts or information about people, things, places, and events by watching and listening to them. To express sensory experience as quantitative data, researchers record observations with the use of numbers. Observations are then made of user behaviour, user processes, workflows, etc., either in a controlled situation (e.g., lab-based) or in a real-world situation (e.g., the workplace).
   - Direct observation – seeing, touching, and hearing the sources of data personally.
   - Indirect observation – with the use of technological and electronic gadgets like audiotapes, video recorders, and other recording devices used to capture earlier events, images, or sounds.
   Example: Watching STEM students lining up for enrolment - instead of centering your eyes on the looks of people, you focus your attention on the number of students, measurements of their height and weight, etc.

2. Survey - A technique to obtain facts or information about the subject or object of your research through the data-gathering instruments of interview and questionnaire.
   Questionnaire – a list of questions about a particular topic, with spaces provided for responses. Each question offers a number of probable answers from which the respondents, on the basis of their own judgment, will choose the best answer. Responses yielded by this instrument are given their numerical forms (numbers, fractions, percentages) and categories and are subjected to statistical analysis. It is less expensive, yields more honest responses, guarantees confidentiality, and minimizes biases.
   - Structured questionnaires – provide possible answers, and respondents just have to select from them.
   - Unstructured questionnaires – do not provide options, and the respondents are free to give whatever answer they want.
   Types of Questionnaires
   1. Test - A tool used to assess the knowledge of the respondents. In taking a test, there is usually a time limit. Examples: pre-test, post-test, aptitude test, etc.
   2. Checklist - A comprehensive list that allows the respondents to give multiple answers.
   3. Point-scale system - A tool used to determine the level of a specific measurement. The most commonly used scale is the 5-point scale, also known as the Likert scale. (A sketch of encoding Likert responses appears after this list of techniques.)

3. Interview – a conversation between two or more people (the interviewer and the interviewee) where questions are asked by the interviewer to obtain information from the interviewee. A more structured approach would be used to gather quantitative data.

4. Experiment - A situation in which variables are controlled and manipulated to establish cause-and-effect relationships.
   - A scientific method of collecting data whereby you give the subjects a sort of treatment or condition, then evaluate the results to find out the manner by which the treatment affected the subjects and to discover the reasons behind the effects of such treatment on the subjects.
   - Involves selection of subjects or participants, pre-testing the subjects prior to the application of any treatment or condition, and giving the subjects a post-test to determine the effects of the treatment on them. These components of an experiment operate in various ways.
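As a small illustration of giving responses "their numerical forms," the sketch below encodes hypothetical 5-point Likert answers as numbers and computes the item's mean. The labels and data are invented.

```python
# Minimal sketch of encoding 5-point Likert responses numerically.
# Labels and responses are hypothetical.
LIKERT = {
    "very dissatisfied": 1,
    "dissatisfied": 2,
    "neutral": 3,
    "satisfied": 4,
    "very satisfied": 5,
}

responses = ["satisfied", "very satisfied", "neutral", "satisfied"]
scores = [LIKERT[r] for r in responses]

# A simple summary statistic for the item: the weighted mean.
print(sum(scores) / len(scores))  # (4 + 5 + 3 + 4) / 4 = 4.0
```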
CHARACTERISTICS OF A GOOD DATA COLLECTION INSTRUMENT
1. It must be concise yet able to elicit the needed data.
2. It seeks information which cannot be obtained from other sources, like documents that are available at hand.
3. Questions must be arranged in sequence, from the simplest to the most complex.
4. It must also be arranged according to the questions posed in the statement of the problem.
5. It should pass validity and reliability.
6. It must be easily tabulated and interpreted.

VALIDITY AND RELIABILITY

Validity is a judgment or estimate of how well a test measures what it purports to measure in a particular context.

Types of Validity
1. Face Validity - This type of validity relates more to what a test appears to measure to the person being tested than to what the test actually measures. It is also known as logical validity.
2. Content Validity - This is a measure of validity based on an evaluation of the subjects, topics, or content covered by the items in the test.
3. Construct Validity - This is a judgment about the appropriateness of inferences drawn from the test score regarding an individual's standing on a variable called a construct. A construct is an informed, scientific idea developed or hypothesized to describe or explain behavior.
4. Criterion-related Validity - This is a judgment of how adequately a test score can be used to infer an individual's most probable standing on some measure of interest, the measure of interest being the criterion. A criterion is the standard against which the test score is judged.

Types of Criterion-related Validity
1. Concurrent Validity – the test score is checked against a criterion measure obtained at the same time, such as an already proven test.
2. Predictive Validity – measures the relationship between the test score and a criterion measure obtained at a future time, providing an indicator.

Reliability refers to consistency in measurement.

Types of Reliability
1. Test-Retest Reliability Estimates - An estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the same test.
2. Split-Half Reliability Estimates - Obtained by correlating two pairs of scores obtained from equivalent halves of a single test administered once.
3. Internal Consistency Reliability Estimates - Also known as inter-item consistency; this refers to the degree of correlation among all the items on a scale.

Types of internal consistency reliability estimates
1. Kuder-Richardson (KR-20 or KR-21) - KR-20 is only used for dichotomous items.
2. Cronbach Alpha - This can be used for non-dichotomous items. (See the sketch after this list.)
3. Average Proportional Distance - This measures the differences that exist between item scores.
4. Inter-Rater Consistency Reliability Estimate - Also known as inter-scorer consistency; this refers to the degree of agreement or consistency between two or more scorers with regard to a particular measure.
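As an illustration of an internal consistency estimate, the sketch below computes Cronbach's alpha from a hypothetical respondent-by-item score matrix, using the standard formula alpha = k/(k-1) x (1 - sum of item variances / variance of total scores).

```python
# Minimal sketch of Cronbach's alpha for a small set of scale items.
# The 4-respondent x 3-item score matrix is hypothetical.
from statistics import pvariance

scores = [          # rows = respondents, columns = Likert items
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 4],
    [2, 3, 3],
]

k = len(scores[0])                        # number of items
items = list(zip(*scores))                # column (item) view of the matrix
item_vars = [pvariance(col) for col in items]        # variance of each item
total_var = pvariance([sum(row) for row in scores])  # variance of total scores

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))  # about 0.927 for this toy matrix
```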
LEVELS OF MEASUREMENT SCALE - The type and scale of measurement that you use in your quantitative research is important because your measurement choices tell you the type of statistical analysis to use in your study.

A. Nominal scale – Also called categorical variables; it is used for labelling variables. The numbers assigned to the variables have no quantitative value. Examples: gender, religion, naming data for statistical purposes like "Male = 1" and "Female = 2".
B. Ordinal scale – Ranking or arranging the classified variables to determine who should be 1st, 2nd, 3rd, 4th, etc., in the group. Examples: order in the honor roll (first honor, second honor, third honor); Likert-scale questions (e.g., very dissatisfied to very satisfied).
C. Interval scale – Has equal units of measurement, thereby making it possible to interpret the order of scale scores and the distance between them. It has no true zero. Examples: scores on a test, differences in the temperatures between 10 am and 12 pm, etc.
D. Ratio scale – Considered the highest level of measurement. It has the characteristics of an interval scale, but it also has a true zero point. Examples: height, number of accidents in a month.

PRESENTATION OF DATA IN TABULAR AND GRAPHICAL FORM

The commonly used tools of data presentation in quantitative research are figures, tables, and graphs. These are tools to clearly and easily present one or more sets of data series to readers. Before the actual presentation of data, these non-prose forms must be properly introduced or described.

Here are some ways of introducing graphs or tables:
1. The pie graph presented in Figure 2 shows the total number of enrolled Grade 11 senior high school students for school year 2014-2015.
2. The bar graph in Figure 1 presents the level of performance of senior high school students in different subjects such as English, Mathematics, Social Science, and Management.
3. Table 9, entitled "Weighted Mean of the Responses of the Grade-VI Teachers Regarding Clinical Supervision during Post-Conference," appears on page 34.
4. Table 4 below shows the weighted mean of the level of validity of test papers in terms of the hierarchy of taxonomy.

COMMONLY USED TOOLS OF DATA PRESENTATION

A. Tables
- Provide exact values and illustrate results efficiently, as they enable the researcher to present a large amount of data in a small amount of space.
- Data, usually shown as specific numerical figures, are arranged in an orderly display of rows and columns to aid comparison.

A good table should include the following parts:
1. Table number and title - These are placed above the table. The title is usually written right after the table number.
2. Caption subhead - This refers to the columns and rows.
3. Body - It contains all the data under each subhead.
4. Source - It indicates whether the data is secondary, in which case the source should be acknowledged.
5. After the presentation of the table, there is a need for a written analysis.

Example: Elrod, Emily, and Joo Young Park. "A Comparison of Students' Quantitative Reasoning Skills in STEM and Non-STEM Math Pathways." Numeracy 13, Iss. 2 (2020): Article 3. DOI: https://doi.org/10.5038/1936-4660.13.2.1309

B. Graph
- Shows relations, comparisons, and distributions in a set of data, like absolute values, percentages, or index numbers.
- A graph or chart is a visual presentation of data using symbols such as lines, dots, bars, or slices. The x- and y-axes have headings, and units are included. The known value is plotted on the x-axis and the measured value is plotted on the y-axis.

Types of Graphs
1. Area Graph - This graph shows the relationship of different parts to a whole over time.
2. Bar Graph - This graph usually presents categorical and numeric variables grouped in class intervals. It consists of an axis and a series of labeled horizontal or vertical bars. The bars depict frequencies of different values of a variable, or simply the different values themselves.
3. Line Graph - This graph features values at different points in time. It is a visual comparison of how two variables, shown on the x- and y-axes, are related or vary with each other. It shows related information by drawing a continuous line between all the points on a grid.
4. Pie Graph - This type of chart is a circle divided into a series of segments. Each segment represents a particular category. The area of each segment is the same proportion of the circle as the category is of the total data set. It usually shows the component parts of a whole. (A plotting sketch follows this list.)
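Here is a minimal plotting sketch for two of the graph types above, using Python's matplotlib. The subject names and scores are hypothetical.

```python
# Minimal sketch of a bar graph and a pie graph with matplotlib.
# Subjects and mean scores are hypothetical.
import matplotlib.pyplot as plt

subjects = ["English", "Math", "Social Science", "Management"]
mean_scores = [86, 82, 88, 84]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Bar graph: categorical variable on the x-axis, measured value on the y-axis.
ax1.bar(subjects, mean_scores)
ax1.set_xlabel("Subject")
ax1.set_ylabel("Mean score")

# Pie graph: each segment's area is proportional to its share of the whole.
ax2.pie(mean_scores, labels=subjects, autopct="%1.0f%%")

plt.tight_layout()
plt.show()
```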
STATISTICAL TREATMENT
- Used to properly test the hypothesis, answer the research questions, and present the results of the study in a clear and understandable manner.
- Statistics – the body of knowledge and techniques used in collecting, organizing, presenting, analyzing, and interpreting data.

TYPES OF STATISTICAL DATA ANALYSIS
1. Univariate Analysis – analysis of one variable.
2. Bivariate Analysis – analysis of two variables (independent and dependent variables).
3. Multivariate Analysis – analysis of multiple relations between multiple variables.

BRANCHES OF STATISTICS
1. Descriptive statistics – Involves tabulating, depicting, and describing the collected data. The data are summarized to reveal overall data patterns and make them manageable.
2. Inferential statistics – Involves making generalizations about the population through a sample drawn from it. It also includes hypothesis testing and sampling. It is concerned with a higher degree of critical judgment and advanced mathematical models, such as parametric (interval and ratio scale) and non-parametric (nominal and ordinal) statistical tools.

COMMON STATISTICAL TOOLS

A. Descriptive Statistics

1. Measures of central tendency – Indicate where the center of a distribution tends to be located; a way to describe what is typical for a set of data.
   a. Mean – The average of a set of numbers. It is the most widely used measure of central tendency. It is equal to the sum of all scores divided by the number of cases.
      Example: Suppose you chose eight students who entered the campus with the following ages - what is the mean age of this sample?
   b. Mode – The most frequently occurring score in a distribution. It locates the point where the observation values occur with the greatest density. The mode of a sample is denoted by x̂ ("x-hat"). A data set can have one mode, more than one mode, or no mode:
      - Bimodal – two values occur with the same greatest frequency.
      - Multimodal – more than two data values occur with the same greatest frequency.
      - No mode – no data value is repeated.
      Examples:
      - 1, 2, 3, 4, 5, 6, 7, 8, 9 (no mode)
      - 12.5, 9.2, 11.4, 12.5, 8.6, 3.4, 12.5 (one mode: 12.5)
      - 98, 95, 93, 99, 92, 95, 97, 99 (bimodal: 95 and 99)
      - 4, 1, 2, 1, 3, 6, 5, 4, 2 (multimodal: 1, 2, and 4)
   c. Median – The middle value of a given set of measurements, provided that the values are arranged in increasing or decreasing order.
      Example: In a laboratory experiment, the students have gathered the following reaction times in seconds: 50, 54, 35, 49, 38, 43, 46. What is the median? Arranged in order (35, 38, 43, 46, 49, 50, 54), the middle value is 46.

2. Frequency distribution – A table that displays the frequency of various outcomes in a sample. Each entry in the table contains the count of the occurrences of values within a particular group or interval, and in this way the table summarizes the distribution of values in the sample.
   - The relative frequency of a class equals the fraction or proportion of the observations belonging to that class or category. Thus, the relative frequency can be computed as the class frequency divided by the total number of observations: relative frequency = f / n.

3. Standard deviation – The standard deviation (SD) is a measure of the spread or variation of data about the mean. It is computed from the average distance of the values from the mean and is used to measure the confidence in a statistical conclusion. For a sample of n values with mean x̄, the usual formula is SD = sqrt( Σ(x − x̄)² / (n − 1) ). (A sketch computing these descriptive measures follows this list.)
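As a quick check of the descriptive measures above, the sketch below uses Python's statistics module with the reaction-time data from the median example.

```python
# Minimal sketch computing central tendency and spread for the
# reaction-time example above (values taken from the text).
from statistics import mean, median, multimode, stdev

reaction_times = [50, 54, 35, 49, 38, 43, 46]

print(mean(reaction_times))       # 45
print(median(reaction_times))     # 46 (middle of the sorted values)
print(multimode(reaction_times))  # every value appears once, so no single mode
print(stdev(reaction_times))      # sample SD, about 6.78
```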
B. Inferential Statistics

1. Parametric tests – These tests require a normal distribution. The level of measurement must be either interval or ratio.
   a. T-test - This test is used to compare two means: the means of two independent samples or two independent groups, or the means of two correlated samples before and after treatment. It can be used for samples composed of fewer than 30 elements.
   b. Z-test - It is used to compare two means: the sample mean and the perceived population mean. It can be used when the sample has 30 or more elements.
   c. F-test - Also known as the analysis of variance (ANOVA), it is used when comparing the means of two or more independent groups. One-way ANOVA is used when there is one variable involved, and two-way ANOVA is used when there are two variables involved. The results of this statistical analysis are used to determine whether the difference in the means or averages of two categories of data is statistically significant. Example: whether the mean of the grades of students attending tutorial lessons is significantly different from the mean of the grades of students not attending tutorial lessons.
   d. Pearson product-moment coefficient of correlation - An index of the relationship between two variables. It measures the strength and direction of the linear relationship of two variables and of the association between interval and ordinal variables. Types of research questions a Pearson correlation can examine: Is there a statistically significant relationship between age, as measured in years, and height, measured in inches? Is there a relationship between temperature, measured in degrees Fahrenheit, and ice cream sales, measured by income?

2. Non-parametric tests – These do not require a normal distribution of scores. They can be utilized when the data are nominal or ordinal.
   a. Chi-square test - This is a test of the difference between the observed and the expected frequencies. It is the statistical test for bivariate analysis of nominal variables - specifically, to test the null hypothesis. It tests whether or not a relationship exists between or among variables and tells the probability that the relationship is caused by chance. It cannot in any way show the extent of the association between two variables.
   b. Spearman's Rank Order Correlation Coefficient - This is the non-parametric version of the Pearson product-moment correlation. It measures the strength and direction of the association between two ranked variables; the test measures the dependence of the dependent variable on the independent variable. (A sketch running these tests follows.)
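Here is a minimal sketch of how some of these tests are run in practice, using Python's scipy.stats. All sample data are hypothetical, chosen only to show the calls and their outputs.

```python
# Minimal sketch of the tests named above using scipy.stats.
# All sample data are hypothetical.
from scipy import stats

# T-test for two independent groups (e.g., tutored vs. not tutored grades).
tutored = [85, 88, 90, 84, 87, 91]
not_tutored = [80, 82, 79, 85, 81, 78]
t, p = stats.ttest_ind(tutored, not_tutored)
print("t-test:", round(t, 2), round(p, 4))

# Pearson correlation between two interval variables (age vs. height).
age = [12, 13, 14, 15, 16, 17]
height = [55, 57, 60, 62, 64, 65]  # inches
r, p = stats.pearsonr(age, height)
print("Pearson r:", round(r, 2), round(p, 4))

# Spearman rank-order correlation for ranked (ordinal) data.
rho, p = stats.spearmanr(age, height)
print("Spearman rho:", round(rho, 2), round(p, 4))

# Chi-square test on a 2x2 table of observed frequencies
# (e.g., passed/failed by attended/missed review).
observed = [[30, 10],
            [20, 20]]
chi2, p, dof, expected = stats.chi2_contingency(observed)
print("chi-square:", round(chi2, 2), round(p, 4))
```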
PRESENTATION AND INTERPRETATION OF RESULTS

In a research paper, the presentation, analysis, and interpretation of data are usually placed before the summary of findings, conclusions, and recommendations.

MAJOR ELEMENTS OF THE SECTION:
1. Presentation of data - This part features the data for easy understanding by the reader. The data are usually presented in charts, tables, or figures with textual interpretation.
2. Analysis - The intelligence and logic of the researcher are required in this part, in which important data are emphasized. The analysis will be the basis of the findings of the study.
3. Interpretation - Comprehensible statements are made after translating the statistical data.
4. Discussion - After the analysis and interpretation of the data, an explanation of the results or findings is needed to establish a more logical and empirical basis for the conclusion. In this part, the results or findings of the investigation are compared and contrasted with those of the reviewed literature and related studies.

ANALYSIS OF DATA
- Numbers or figures simply presented will not be easily comprehended, and their significance will not be determined, without a correct analysis. Analysis is the process of breaking a whole into parts. The researcher must be critical in looking at details to prove or disprove a certain theory or claim.

In analyzing the data, the following must be considered:
1. The highest numerical value, such as scores, weighted means, percentages, variability, etc.
2. The lowest numerical value, such as scores, weighted means, percentages, variability, etc.
3. The most common numerical values, like the mode or values that appear repeatedly.
4. The final numerical value, like the average weighted mean, total, index, etc.

INTERPRETATION OF DATA
- The following are the levels of interpretation which are considered in organizing the discussion of the results or findings (Ducut and Pangilinan, 2006):

DISCUSSION OF DATA
The following must be considered in the discussion of data:
1. The flow of the discussion of results or findings is based on how the problems are stated.
2. The manner or sequence of discussion should include the following:
   a. Discussion of the findings in relation to the results of previous studies cited in the review of related literature and studies
   b. Implications, inferences, and other important information

IMPORTANCE OF A GOOD DISCUSSION
1. This section is often considered the most important part of a research paper because it most effectively demonstrates your ability as a researcher to think critically about an issue, to develop creative solutions to problems based on the findings, and to formulate a deeper, more profound understanding of the research problem you are studying.
2. The discussion section is where you explore the underlying meaning of your research, its possible implications in other areas of study, and the possible improvements that can be made in order to further develop the concerns of your research.
3. This is the section where you need to present the importance of your study and how it may be able to contribute to and/or fill existing gaps in the field. If appropriate, the discussion section is also where you state how the findings from your study revealed new gaps in the literature that had not been previously exposed or adequately described.
4. This part of the paper is not strictly governed by objective reporting of information; rather, it is where you can engage in creative thinking about issues through evidence-based interpretation of findings. This is where you infuse your results with meaning.