Business Research Chapter 6 (PDF)
Summary
This document presents key points for presenting demographic profiles and descriptive findings in a business research study. It emphasizes the importance of a clear overview of the study participants, describing their socio-demographic characteristics, and using appropriate graphical and statistical representations of the data. It also introduces factor analysis (Chapter 7), covering exploratory and confirmatory factor analysis, sampling adequacy tests, total variance explained, communalities, scree plots, and rotated factor loadings.
Full Transcript
BUSINESS RESEARCH CHAPTER 6
Presentation of the Demographic Profile and the Descriptive Findings

When presenting the demographic profile and the descriptive findings of a research study, it is essential to provide a clear overview of the study participants. Here are some key points to consider:

1. **Demographic Profile**
   1. Describe the socio-demographic characteristics of the participants. These may include age, gender, ethnicity, education level, occupation, income, and marital status.
   2. Present the distribution of values for each demographic variable. [Descriptive statistics such as the mean, median, standard deviation, range, or inter-quartile range can be used to summarize the data^1^](https://academic.oup.com/ageing/article/46/4/576/3787761).
   3. Consider graphical displays (e.g., bar charts, histograms) to represent the demographic information visually.
2. **Descriptive Findings**
   1. Discuss the descriptive statistics for the main variables of interest. For example, if the study focuses on health outcomes, report the mean and standard deviation of the relevant measurements.
   2. Use appropriate statistical measures to convey central tendency (mean or median) and variability (standard deviation or range).
   3. Highlight any notable patterns or trends observed in the data.

Remember that the goal is to help readers understand the characteristics of the study sample and assess the generalizability of the findings to other contexts. Keep the presentation concise and reader-friendly. [If you have specific data to share, include the relevant tables or figures in your paper^1^](https://academic.oup.com/ageing/article/46/4/576/3787761). A short code sketch of a frequency table and summary statistics appears after the chapter summary below.

PRESENTATION OF DESCRIPTIVE RESEARCH OUTPUT

- The presentation of descriptive research output normally starts with the respondents' demographic profile, followed by the major findings of the study.
- To give context to the reader, the presentation is preceded by a discussion of how the data were gathered and of the supporting activities that made the data gathering successful.
- Present the data first (frequency table or pie chart) and provide the descriptive discussion or interpretation after.
- After the demographic profile of the respondents, the researcher discusses the other results of the data gathered.
- The sequence of the discussion and tables should follow the subproblems in the statement of the problem in Chapter 1.
- The findings should also be supported with the review of related literature (RRL), which may support or oppose the findings of the study.

OTHER DESCRIPTIVE FINDINGS

1. Preferences
2. Ranking
3. Level of interpretation
4. Research question

Research question 1: How do respondents assess organizational culture in terms of:

1. Power distance
2. Uncertainty avoidance
3. Collectivism
4. Masculinity
5. Confucian work dynamics

The standard deviation (SD) describes how dispersed a set of values is:

- High SD: high variability; there is a large variation in the data.
- Low SD: low variability; the data points are relatively consistent, with little variation.

Summary

- The presentation of descriptive research output normally starts with the respondents' demographic data, followed by the major findings of the study.
- To give context to the reader, the presentation is preceded by a discussion of how the data were gathered and of the supporting activities that made the data gathering successful.
- Present the data first (frequency table or pie chart) and provide the descriptive discussion or interpretation after.
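To make these points concrete, here is a minimal pandas sketch (the column names and values are hypothetical, not taken from any actual study) that builds a frequency-and-percentage table for a demographic variable and reports central tendency and variability for a main variable:

```python
# Minimal sketch (hypothetical data) of tabulating a demographic profile
# and descriptive findings with pandas.
import pandas as pd

# Hypothetical survey responses; in practice this would be loaded from the study data.
df = pd.DataFrame({
    "gender": ["Male", "Female", "Female", "Male", "Female"],
    "age": [24, 31, 28, 45, 37],
    "org_culture_score": [3.8, 4.2, 3.5, 4.0, 3.9],
})

# Demographic profile: frequency and percentage distribution (for a table or pie chart).
freq = df["gender"].value_counts()
pct = df["gender"].value_counts(normalize=True) * 100
profile = pd.DataFrame({"Frequency": freq, "Percent": pct.round(1)})
print(profile)

# Descriptive findings: central tendency and variability of the main variables.
print(df["age"].describe())                          # mean, std, min, quartiles, max
print(df["org_culture_score"].agg(["mean", "std"]))  # high std = high variability
```

In a paper, the resulting frequency table would be presented first, followed by the descriptive discussion or interpretation.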
CHAPTER 7
Multi-Variate Technique 1: Factor Analysis

**Factor analysis aims to simplify complex datasets by identifying a smaller set of unobserved factors that explain the variance in a larger number of observed variables. These latent factors are not directly measurable but play a crucial role in shaping the observed data.**

**Factor analysis (FA) is also known as exploratory factor analysis (EFA). It is called EFA to differentiate it from confirmatory factor analysis (CFA).**

- Factor analysis is a powerful statistical technique used to uncover underlying patterns and relationships among observed variables.

**Exploratory Factor Analysis (EFA)**

1. **What is EFA?**
   1. EFA stands for exploratory factor analysis. Researchers use EFA when they do not have a clear understanding of the underlying factors in their dataset.
   2. It is an exploratory technique that helps identify latent factors without preconceived hypotheses.
   3. Latent variables are variables that are measured indirectly through observable variables: rather than measuring things that cannot be quantified directly, we infer their values from variables we can quantify.
2. **How does EFA work?**
   1. EFA models the correlation structure among the observed variables.
   2. It assumes that each observed variable is a linear combination of the underlying factors plus an error term.
   3. The goal is to find a smaller set of factors that explain the common variance among the observed variables.
3. **Example: Socioeconomic status (SES)**
   1. Imagine we want to understand SES, which we cannot measure directly.
   2. Instead, we collect data on occupation, income, and education level (our indicators).
   3. EFA helps us identify the latent factor (SES) that drives the observed values of these indicators.
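As a rough illustration of the assumption that each observed variable is a linear combination of the latent factors plus an error term, the NumPy sketch below simulates three SES indicators from one hypothetical latent factor (all loadings and variable names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# One latent factor (a hypothetical "SES" score) that is never observed directly.
ses = rng.normal(size=n)

# Illustrative loadings: each indicator = loading * latent factor + unique error term.
loadings = {"occupation": 0.8, "income": 0.7, "education": 0.9}
indicators = {
    name: lam * ses + rng.normal(scale=np.sqrt(1 - lam ** 2), size=n)
    for name, lam in loadings.items()
}

# The indicators correlate with one another only through the shared latent factor.
X = np.column_stack(list(indicators.values()))
print(np.corrcoef(X, rowvar=False).round(2))
```

The printed correlation matrix shows the indicators correlating with one another only because they share the same latent factor, which is exactly the structure EFA tries to recover.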
**CFA is a technique based on covariance-based structural equation modeling (CB-SEM).**

**FA (or EFA) is a major technique in multivariate statistics. The foundations for CB-SEM are FA/EFA and multiple regression analysis (MRA); a student who wants to learn CB-SEM should first learn and understand FA and MRA.**

**FA and CFA are interdependence techniques, in which the variables are not referred to as independent, dependent, or moderating.**

**Confirmatory Factor Analysis (CFA)**

1. **What is CFA?**
   1. CFA is based on covariance-based structural equation modeling (CB-SEM).
   2. Unlike EFA, CFA tests pre-specified hypotheses about the factor structure.
   3. Researchers use it to validate existing theories or measurement instruments.
2. **Interdependence techniques**
   1. Both FA and CFA treat the variables as interdependent; there is no rigid distinction between independent, dependent, or moderating variables.
   2. They focus on understanding the shared variance among the variables.

**Conclusion**

- Factor analysis is a valuable tool in psychology, sociology, marketing, and machine learning. Remember that while the analysis identifies factors, naming them is up to the researchers.

**Factor analysis is processed by forming the variables into structures called *factors*.**

**These variables come from the item-indicators (item-questions) that describe the factor, which is initially called a *pseudo-factor*.**

Factor analysis (FA) is like assembling a jigsaw puzzle: we have several pieces (variables), and we want to find the underlying structure (factors) that connects them. Here is how it works:

1. **Variables and pseudo-factors**
   1. Imagine you are designing a survey with questions (item-indicators) about people's happiness.
   2. Each question (variable) asks about a different aspect: family, work, health, and so on.
   3. Initially, we call this structure a "pseudo-factor" because we have not figured out the real connections yet.
2. **Pseudo-factor: a temporary label for the variables (item-questions) to be grouped in the survey questionnaire.**
3. **Once the structure is formed through FA, the factors are renamed based on the attributes of the variables that comprise each factor.**
4. **Grouping variables**
   1. FA groups similar variables together; it is like putting related puzzle pieces side by side.
   2. For example, if questions about family and relationships are related, they form a factor.
5. **Naming the factors**
   1. Once we have assembled the puzzle (formed the structure), we rename the factors.
   2. The attributes of the variables within each factor guide the naming process.
   3. If a factor includes questions about family, love, and friendships, we might call it "Social Well-Being."
6. **Why is it useful?**
   - **Simplification:** FA simplifies complex data by revealing the hidden factors.
   - **Understanding:** it helps us understand what drives our observed variables.
   - **Research and surveys:** researchers use FA in psychology, marketing, and the social sciences.

Remember, FA is like detective work: finding the missing pieces that make sense of the whole picture.

Factor analysis offers two main extraction paths: principal component analysis (PCA) and common factor analysis. (A short code sketch contrasting the two follows the sample-size notes below.)

**1) Principal Component Analysis (PCA)**

- **PCA considers the total variance in the data.**
- **It is like finding the biggest gems: the factors that explain most of the variation.**
- **These factors may contain a bit of unique sparkle (error variance), and that is acceptable.**
- **Example: if we are studying happiness, PCA identifies factors related to overall well-being, such as family, work, and health.**

**2) Common Factor Analysis**

- **Common factor analysis focuses only on the shared (common) variance among the variables.**
- **It is like gathering the gems that shine together, ignoring the unique sparkles.**
- **We assume that the unique and error variance is not crucial for defining the structure.**
- **Example: if we are studying intelligence, common factor analysis looks at the shared factors (such as problem-solving skill) across different tests.**

**Sample Size Matters**

- **Ideal sample size:**
  - **For a valid FA, we need enough observations (people or data points).**
  - **Minimum: at least 5 times as many observations as variables.**
  - **Better: aim for a 10:1 ratio (10 observations per variable).**
- **Validation split:**
  - **If you split your data for validation, aim for a sample size of 200 or more.**
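To make the PCA versus common factor analysis contrast concrete, the sketch below uses scikit-learn on simulated data (the loadings, sample size, and single-factor choice are illustrative assumptions): PCA summarizes the total variance, while FactorAnalysis estimates common loadings plus a separate unique (error) variance per variable.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

# Toy data: three indicators driven by one latent factor (values are illustrative only).
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
X = latent @ np.array([[0.8, 0.7, 0.9]]) + rng.normal(scale=0.5, size=(500, 3))

# PCA works with the *total* variance of the indicators.
pca = PCA(n_components=1).fit(X)
print("Share of total variance kept by PCA:", pca.explained_variance_ratio_.round(2))

# Common factor analysis models only the *shared* variance and keeps a separate
# unique/error variance per variable (noise_variance_).
fa = FactorAnalysis(n_components=1).fit(X)
print("Common-factor loadings:", fa.components_.round(2))
print("Unique (error) variances:", fa.noise_variance_.round(2))
```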
Key outputs to check when evaluating EFA results:

1. **Measure of sampling adequacy (MSA), through the Kaiser-Meyer-Olkin (KMO) test and Bartlett's test of sphericity.**
2. **Total variance explained.**

**EFA Results: Unveiling the Clues**

**1. Measure of Sampling Adequacy (MSA)**

- Think of the MSA as our compass: it guides us through the forest of data.
- We have two tests:
  - **Kaiser-Meyer-Olkin (KMO) test:** it checks whether the data are good enough for factor analysis.
    - Guidelines for KMO values:
      - 0.90 or above: Marvelous
      - 0.80-0.89: Meritorious
      - 0.70-0.79: Middling
      - 0.60-0.69: Mediocre
      - 0.50-0.59: Miserable
      - Below 0.50: Unacceptable
  - **Bartlett's test of sphericity:** it checks that the variables stick together like puzzle pieces (i.e., that they are correlated).
    - We want a significant result (p-value ≤ 0.05) for a reliable factor analysis.

**2. Total Variance Explained**

- Imagine the clues are gems: we want the factors to explain at least 60% of the treasure (the variance).

**3. Communalities of the Variables**

- Each variable's communality is how much variance it shares with the common factors.
- If a variable's communality is below 0.50, it is like a shy gem hiding from the others.

**4. Scree Plot**

- Picture a mountain range: the peaks represent the factors.
- A factor is significant if its eigenvalue is greater than 1 (Kaiser's rule).
- It is better to use rotated factor loadings because they make interpretation easier. Factor loadings should be interpreted in terms of their significance, which is also linked to the sample size.
- **When checking which variables load significantly in the rotated output, note that a variable can cross-load on more than one factor; it should be assigned to the factor on which it loads the greatest.**

**5. Unrotated or Rotated Factor Loadings**

- Rotated factor loadings are like polished gems: they are easier to interpret.
- Factor loadings tell us which variables go with which factors.
- Sample size matters here: larger samples give clearer interpretations.

Remember, EFA is like deciphering ancient scrolls or following clues: each clue brings us closer to the treasure.

**1. Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy**

- Think of the KMO as our treasure-map quality check.
- It measures how well the data fit factor analysis.
- Values range from 0 to 1:
  - Above 0.5: acceptable
  - Above 0.8: good

**2. Bartlett's Test of Sphericity**

- Imagine Bartlett as our gatekeeper.
- His test checks whether the variables are related (correlated).
- A significant p-value (< 0.05) means the variables are likely correlated.

Example output:

- The KMO value is 0.832, which is above the recommended threshold of 0.8. This suggests that the data are well suited for factor analysis.
- The p-value for Bartlett's test is 0.000, which is less than 0.05. This indicates that the null hypothesis can be rejected and that the variables are likely correlated.
- Overall, the results of the KMO and Bartlett's tests suggest that the data are well suited for factor analysis.

**Interpreting the Results**

1. **KMO value (0.832):**
   1. It is above the recommended threshold (0.8).
   2. The data suit factor analysis well, like a well-paved treasure trail.
2. **Bartlett's test (p-value = 0.000):**
   1. The null hypothesis (that the variables are unrelated) can be rejected.
   2. The variables are likely correlated, which is essential for factor analysis.

**Why Does It Matter?**

- FA helps us uncover hidden factors (like secret chambers in a treasure cave).
- These factors explain why things happen (for example, why shoppers prefer certain malls).
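Here is a minimal sketch of these two adequacy checks, assuming the third-party factor_analyzer package is installed and using simulated item responses in place of the actual questionnaire data:

```python
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Hypothetical item responses; in practice this would be the survey data.
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))
items = pd.DataFrame(
    latent @ np.full((1, 6), 0.7) + rng.normal(scale=0.7, size=(300, 6)),
    columns=[f"item{i}" for i in range(1, 7)],
)

kmo_per_item, kmo_overall = calculate_kmo(items)
chi_square, p_value = calculate_bartlett_sphericity(items)

print(f"KMO = {kmo_overall:.3f}")  # 0.80+ is 'Meritorious'; below 0.50 is unacceptable
print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.4f}")  # want p <= 0.05
```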
- To interpret communalities, remember that a communality is a proportion of explained variance: a communality of 0.5 means that the common factors account for 50% of that variable's variance, while the remaining 50% is unique variance (specific variance plus error variance).
- A communality below 0.50 is interpreted as failing to share enough variance with the other variables, so the variable is a candidate for deletion, especially if it also has a low factor loading.

**What Are Communalities?**

- Think of communalities as the shared magic (variance) among the variables.
- They represent the proportion of a variable's variance explained by the common factors.
- A communality of 0.5 therefore means that 50% of the variable's variance is shared with the common factors; the rest is unique to the variable.

**The Hidden Factors**

- Imagine we are studying mall preferences.
- Our variables (such as cleanliness, discounts, and ambiance) have both common and specific variance.
- Common factors (such as "Atmospherics") explain part of that variance.

**Interpreting Communalities**

- If a variable has a high communality (say 0.8), it is like a gem shining brightly.
- If it is below 0.5, it is like a dimly lit gem, failing to share variance with the other variables.
- Low communality + low factor loading = candidate for removal. (A short code sketch of communalities, eigenvalues, and total variance explained follows the TVE discussion below.)

**Why Does It Matter?**

- Communalities guide us in selecting the right gems (variables) for the analysis.
- They help us decide which variables contribute significantly to the shared variance.

**TOTAL VARIANCE EXPLAINED**

- **Factor analysis is considered reliable when its total variance explained (TVE) is at least 60%. This means that the common factors should explain at least 60% of the variance in the observed variables.**
- In this study, the TVE is 69.855%, which is higher than 60%. Therefore, the FA result in this study is reliable.

**Understanding Total Variance Explained (TVE) in Factor Analysis**

- **Introduction**
  - Factor analysis is like unraveling a mystery: finding hidden patterns in data. Let's explore one crucial aspect, total variance explained (TVE).
- **What is TVE?**
  - TVE tells us how much of the data's variance is explained by the common factors.
  - Think of it as the treasure chest: the more variance we capture, the better.
- **Why does TVE matter?**
  - Imagine we are studying mall preferences.
  - We want to know how much of the overall variance in shoppers' choices is explained by our factors (such as cleanliness, discounts, and ambiance).
- **Interpreting TVE**
  1. **Minimum threshold: 60%**
     1. For a reliable FA, the TVE should be at least 60%.
     2. It is like saying, "At least 60% of the magic must be revealed."
  2. **Example: TVE = 69.855%**
     1. In this study, the common factors explain 69.855% of the variance.
     2. Since this is higher than 60%, the FA result is reliable.
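Continuing under the same assumptions (the third-party factor_analyzer package and simulated items rather than real survey data), this sketch pulls out the communalities, the eigenvalues used for Kaiser's rule, and the cumulative total variance explained:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Simulated item responses with two underlying factors (illustrative only).
rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 2))
pattern = np.array([[0.8, 0.0], [0.7, 0.1], [0.75, 0.0],
                    [0.0, 0.8], [0.1, 0.7], [0.0, 0.75]])
items = pd.DataFrame(latent @ pattern.T + rng.normal(scale=0.5, size=(300, 6)),
                     columns=[f"item{i}" for i in range(1, 7)])

fa = FactorAnalyzer(n_factors=2, rotation="varimax")
fa.fit(items)

communalities = fa.get_communalities()   # flag items below 0.50 as deletion candidates
eigenvalues, _ = fa.get_eigenvalues()    # Kaiser's rule: retain factors with eigenvalue > 1
_, _, cumulative = fa.get_factor_variance()

print("Communalities:", communalities.round(2))
print("Factors with eigenvalue > 1:", int((eigenvalues > 1).sum()))
print("Total variance explained: {:.1f}%".format(cumulative[-1] * 100))  # aim for >= 60%
```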
Scree plot and Kaiser's rule:

- A scree plot is a graphical representation of the eigenvalues of the factors extracted in a factor analysis.
- Kaiser's rule is a heuristic for determining the number of significant factors to retain in a factor analysis.
- To interpret a scree plot together with Kaiser's rule, look for the elbow in the plot. The factors before the elbow are generally considered significant, while the factors after the elbow are generally considered insignificant.
- Kaiser's rule states that only factors with eigenvalues greater than 1 should be considered significant.

**Understanding Scree Plots and Kaiser's Rule in Factor Analysis**

- **Introduction**
  - Factor analysis is like exploring a magical forest of data. Scree plots and Kaiser's rule are our compasses: they guide us to the right factors.
- **1. Scree plot: finding the elbow**
  - Imagine we are on a treasure hunt for significant factors.
  - A scree plot is our map: it shows the eigenvalues of the extracted factors.
  - Look for the "elbow" in the plot, the point where the curve levels off.
- **2. Interpreting the scree plot**
  - Factors before the elbow are like bright gems: they are significant.
  - Factors after the elbow are like dull stones: they are less important.
- **3. Kaiser's rule: eigenvalues matter**
  - Kaiser's rule is our ancient wisdom.
  - It says: "Only factors with eigenvalues greater than 1 are significant."
  - Eigenvalues measure the variance explained by each factor.
- **Why does it matter?**
  - Scree plots help us decide how many factors to keep.
  - Kaiser's rule ensures we focus on the brightest gems.

**Rotated Component Matrix**

- A rotated component matrix is a table of factor loadings that have been rotated to make them easier to interpret.
- The rotation is done in such a way that each variable's loadings are concentrated on a single factor.
- This makes it easier to identify which variables are most strongly associated with which factors.
- The rotated component matrix in the previous example shows nine factors.
- The factor loading for each variable is shaded to indicate the factor on which it loads the highest. (A short code sketch of a rotated loading matrix appears at the end of this section.)

**Understanding the Rotated Component Matrix in Factor Analysis**

- **Introduction**
  - Factor analysis is like deciphering ancient scrolls: it reveals hidden patterns in data. The rotated component matrix is our magical decoder ring.
- **1. What is it?**
  - Imagine we are studying mall preferences.
  - The rotated component matrix is our treasure map: it shows how strongly each variable is associated with specific factors.
  - These factors are like secret chambers in the mall, each containing its own variance.
- **2. How it works**
  - We start with the raw (unrotated) factor loadings, like uncut gems.
  - We then rotate them to make them easier to interpret.
  - The rotation aligns each variable with a single factor (like polishing the gems).
- **3. Interpreting the matrix**
  - Each cell in the matrix is a variable's loading on a factor.
  - The shading indicates the factor on which the variable loads the highest.
  - Bright shading = strong association; dim shading = weaker association.
- **4. Example: nine factors**
  - In the previous example there are nine factors.
  - The matrix helps us see which variables shine the brightest in each factor's light.
- **Why does it matter?**
  - The rotated component matrix simplifies complex data.

Factors identified in the mall example and the variables that load on them:

- **Mall Product Variety**
  - Product Variety DS
  - Product Assort GS
  - Product Assort DS
  - Product Variety GS
- **Mall Service**
  - Service Guards
  - Service Clerks
  - Service CAC
- **Mall Density** (shoppers notice crowding)
  - Atmos GS Crowded
  - Atmos DS Crowded
- **Mall Accessibility** (both public and private vehicles matter)
  - Acce Public trans
  - Access Location
  - Access Near Park
  - Access Private Park
  - Atmos CR Clean
- **Mall Smell** (even the scent matters)
  - Atmos Odor
  - Atmos Supermarket

Key points to remember:

- Factor analysis is an interdependence technique: the variables are not labeled as dependent, independent, or moderating.
- The relationship between the variables is shown with a line with two-headed arrows.
- Factor analysis is processed by forming the variables into structures called factors.
- The variables come from the item-indicators that describe the factors, which are initially called pseudo-factors.
- Once the structure is formed into factors, the factors are renamed based on the attributes of the variables that comprise each factor.
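As a final sketch under the same assumptions (factor_analyzer with varimax rotation, simulated two-factor data, made-up item names), this shows a rotated loading matrix and the rule of assigning each variable to the factor on which it loads the greatest:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Toy data with two underlying factors (item names and loadings are made up).
rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 2))
pattern = np.array([[0.8, 0.0], [0.7, 0.1], [0.75, 0.0],   # items 1-3 -> factor 1
                    [0.0, 0.8], [0.1, 0.7], [0.0, 0.75]])  # items 4-6 -> factor 2
items = pd.DataFrame(latent @ pattern.T + rng.normal(scale=0.5, size=(300, 6)),
                     columns=[f"item{i}" for i in range(1, 7)])

fa = FactorAnalyzer(n_factors=2, rotation="varimax")
fa.fit(items)

# Rotated loading matrix: each row is a variable, each column a factor.
loadings = pd.DataFrame(fa.loadings_.round(2), index=items.columns,
                        columns=["Factor1", "Factor2"])
print(loadings)

# Cross-loading rule from the notes: keep each variable with the factor
# on which its (absolute) loading is greatest.
print(loadings.abs().idxmax(axis=1))
```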
**Unveiling the Magic: Factor Analysis Made Simple**

- **Introduction**
  - Factor analysis is like deciphering ancient scrolls: it reveals hidden patterns in data. But fear not! We will demystify this powerful technique for students.
- **1. What is factor analysis?**
  - Imagine we are explorers in a data jungle.
  - Factor analysis helps us find the underlying factors that explain the variation in the observed variables.
  - It is like discovering the secret ingredients in a recipe.
- **2. The interdependence technique**
  - The variables are not labeled as dependent or independent.
  - Instead, they dance together, like partners in a waltz.
  - The relationships are shown with two-headed arrows.
- **3. Creating magical structures: factors**
  - We group the variables into structures called **factors**.
  - Think of them as mystical realms within our data.
  - Initially, they are "pseudo-factors," waiting to be named.
- **4. Renaming our factors**
  - Once the structure forms, we rename the factors.
  - Their names reflect the attributes of the variables they contain.
  - It is like giving magical lands their true names.
- **Why does it matter?**
  - Factor analysis reduces data complexity.