Questions and Answers
A researcher aims to assess the consistency of a newly developed survey instrument. To do so, the researcher divides the survey into two halves and calculates the correlation between the scores of each half. Which type of reliability is being assessed?
- Convergent validity
- Discriminant validity
- Test-retest reliability
- Split-half reliability (correct)
A researcher is developing a new measure of customer satisfaction. To ensure that the measure accurately reflects customer satisfaction, the researcher calculates correlations with other established measures of customer satisfaction. This process is most closely associated with:
- Discriminant validity
- Convergent validity (correct)
- Face validity
- Predictive validity
Discriminant validity is demonstrated when a measure shows a strong correlation with measures of unrelated constructs.
False (correct)
To assess test-retest reliability, you should administer the same questions over two points in _______ and calculate the correlation between the responses.
Nike uses customer satisfaction to see how likely consumers are to return to the store or recommend it to others. What kind of validity are they trying to measure?
Which of the following best describes systematic error in research?
A ratio scale has a meaningful zero point, allowing for calculations of percentages and ratios.
What type of measurement error occurs when a researcher measures something other than what they intended to measure?
__________ error is reduced by increasing sample size.
Which type of scale is most suitable when you want to determine the exact difference in preference between several options?
Ranking scales provide more detailed information than rating scales.
List the three components that influence responses in quantitative research.
Match the following scale types with their characteristics.
Which type of validity is primarily concerned with how well a measure forecasts future outcomes it should logically influence?
Reliability primarily focuses on the accuracy of a measurement, ensuring it reflects the true score of the concept being measured.
What type of measurement error occurs when an interviewer consciously or unconsciously influences respondents to provide a desired response?
A researcher designs a survey with questions that subtly encourage participants to agree with a particular viewpoint. This is an example of ______ instrument bias.
Match each type of validity with its description:
Which of the following scenarios is the MOST likely to introduce nonresponse bias into a survey?
A researcher is developing a questionnaire to assess customer satisfaction with a new product. Which activity would BEST ensure content validity?
A measure can be reliable without being valid, but it cannot be valid without being reliable.
Flashcards
Measurement Instrument Bias
Systematic problems with survey design that lead to biased responses.
Nonresponse Bias
Bias caused by systematic differences between respondents and non-respondents.
Response Bias
The tendency for respondents to answer inaccurately or falsely.
Interviewer Error
The interviewer consciously or unconsciously influences respondents to give a desired response.
Validity
How accurate a measurement is; how close it comes to the true score.
Reliability
How consistent a measurement is across situations, respondents, and time.
Face Validity
Whether, in a purely subjective sense, the measure appears to measure what it is supposed to.
Content Validity
Whether the measure represents the subject in its entirety.
Convergent Validity
The measure correlates strongly with other measures of the same construct.
Discriminant Validity
The measure correlates weakly with measures of unrelated constructs.
Test-Retest Reliability
Administering the same questions at two points in time and correlating the responses.
Split-Half Reliability
Dividing a survey into two halves and correlating the scores of each half.
Backward Marketing Research
Interval Scale
A scale where the distance between values is constant, e.g., rating disagreement with a statement.
Ratio Scale
A scale with a meaningful zero point, allowing calculation of percentages and ratios.
Ranking
Putting items in order using an ordinal scale.
Rating
Scoring items on an interval or near-interval scale.
True Score
The actual value of the concept being measured, free of error.
Random Error
Uncontrollable error in a measurement.
Systematic Error
Controllable error caused by flaws in the measurement process or sampling design.
Surrogate Information Error
Measuring something other than what you intended to measure.
Study Notes
- Marketing research projects involve defining a problem and formulating a research design.
Triggers for Research
- Research is often needed when evaluating alternatives. This includes determining the right message for a new product (e.g., a new iPhone).
- Opportunities, such as emerging technologies or government incentives for electric vehicles (EVs), can also trigger research. Companies should seize these opportunities and research them.
- Threats, like government regulations (e.g., a TikTok ban), require research to find alternative platforms for advertising.
- New competitors entering the market, such as BlueSky affecting X (formerly Twitter), necessitate research to understand user migration and retention strategies.
Defining the Problem
- It involves translating management decision problems into marketing research problems.
- Management decision problems (e.g., declining sales) require actionable solutions and focus on symptoms, asking "what should the decision-maker do?".
- Marketing research problems are information-oriented, focusing on root causes and asking "where can I find the info?".
Formulating the Research Design
- There is no single ideal research design.
- Exploratory (qualitative) and conclusive (quantitative) research designs both have pros and cons.
- Descriptive research aims to understand the state of a product or company by describing "how many, when, etc.".
- Causal research explores cause-and-effect relationships.
Exploratory vs. Conclusive Research
- Exploratory research is open-ended, uses small samples, unstructured data, and non-statistical analysis. It is broad and open-ended, providing deeper insights into motivations, attitudes, and beliefs.
- Exploratory research helps to understand consumers. It goes beyond functional benefits and helps in brand positioning and ideation.
- Conclusive research, by contrast, is structured, uses larger samples, and relies on quantitative, statistical analysis.
Focus Groups: Origins
- Focus groups originated as a way to sell the war to the American public more effectively by changing their opinions.
- Ernest Dichter introduced using the same procedure to sell products after the war, becoming the father of focus groups in marketing.
Focus Groups: Accomplishments
- Focus groups help in understanding how users use a product, gather feedback, and improve the product.
- Entire products can emerge from qualitative research, as seen with the Google Digital Wellbeing App.
Brand Management Using Focus Groups
- Focus groups can identify new segments of brand users and understand what drives brand loyalty.
- Reebok used research to understand that people liked the “vintage” look of their shoes, which helped them make targeted ads.
Designing Focus Groups
- Participants should be knowledgeable about the topic (6-10 people).
- Avoid professional research participants; seek homogeneity within groups and heterogeneity across groups. People might conform to others' decisions or want to speak on behalf of their demographic.
- The environment should be quiet, comfortable, and allow for recording during a 1-2 hour duration. The process should be carefully moderated.
Focus Groups: Pros
- Focus groups gather a higher volume of information from several people at once, and can be faster than in-depth interviews.
- Focus groups provide a better understanding of people's actual reactions. They are more convincing and impactful than quantitative data.
Focus Groups: Cons
- Groupthink and conforming is possible due to group setting.
- There is a risk of biases from the moderator who might ask leading questions.
- Focus groups may produce noisy data, and the views expressed may not represent the whole market.
- There is a temptation to dismiss any negative feedback.
In-Depth Interview (IDI)
- An IDI is a one-on-one conversation.
- IDIs aim to understand deeply what a person thinks about a product. Useful when discussing sensitive topics.
- The laddering technique helps uncover true motivations by going from concrete product traits to abstract benefits.
IDI Projective Techniques
- Projective techniques indirectly assess motivations and attitudes, useful when people are unaware, unwilling to share, or think it's irrelevant.
- Word association involves saying the first thing that comes to mind when a list is read.
- Sentence completion gives partial questions for people to fill in.
- Personification tries to understand brand image, which may show a unified or non-unified perception.
When NOT to Conduct Research
- Avoid research when there is disagreement about the problem, results are meaningless, resources are insufficient, or costs outweigh benefits.
- Also avoid research if the opportunity has passed, decisions are already made, or the needed information already exists.
Primary vs. Secondary Data
- Primary data is newly collected for a specific purpose. Used in exploratory, descriptive, and causal research, it’s tailored, actionable, and accurate.
- Secondary data is existing data gathered for a different purpose, used in exploratory and descriptive research, like Statista and Mintel.
Advantages of Secondary Data
- Advantages: faster and cheaper than primary collection methods. It also permits more flexibility and offers a wide range of free sources.
Disadvantages of Secondary Data
- Disadvantages: often insufficient, ambiguous, and outdated, and may not address the specific research question.
Evaluating Secondary Data
- Two main questions must be answered: How was the data created? How can it be used?
- To know how it was created: understand the original study's purpose, sponsorship, methodology, sampling, and timing.
- To know how it can be used: ask how relevant it is to the original question and to the company involved (same sector, competitor, different market).
Tools
- Statista: Provides industry and brand-level data in simple tables that summarize complex information.
- Social Listening: Monitoring social media channels (Reddit, TikTok, X, Instagram) to understand what consumers are saying about a brand, product, competitors, or industry.
- Google Trends: Useful for comparing brands, assessing interest over time, identifying seasonal trends, and analyzing regional differences.
- YouTube: Brand YouTube Channels and consumer generated videos
- Reddit: Unfiltered and organic reviews of products
Research Designs
- Descriptive research aims to describe and answer well-defined questions through conclusive, quantitative methods. It does not establish causality; surveys are a common method.
Measurements
- Measuring involves assigning numbers or categories to concepts of interest, allowing for statistical analysis and comparisons.
Steps in the Measurement Process
- Conceptualize: Define what you want to measure.
- Operationalize: Determine what you will observe.
- Measure: Specific survey questions.
- Evaluate: Assess the sources of measurement error.
Levels of Measurement
- Nominal Scale (lowest level): Distinguishes one from another, like listing streaming apps used.
- Ordinal Scale: Ranks preferences from high to low.
- Interval Scale: Rates disagreement with a statement.
- Ratio Scale: Measures something you can count.
Four Levels of Measurement
- Nominal: Can calculate percentages and the mode, e.g., which streaming apps do you use?
- Ordinal: Values can be ranked, and percentages, the mode, and the median can be calculated.
- Interval: The distance between values is constant, allowing for calculating percentages, the mode, the median, and standard deviation.
- Ratio: Has a meaningful zero, where percentages, mode, median, mean, and standard deviation can be calculated.
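The hierarchy above is cumulative: each level permits every statistic of the level below it plus something new. A small Python lookup makes that concrete (the names are illustrative; note that these notes list the mean as meaningful only at the ratio level):

```python
# Permissible statistics per measurement level, as listed in the notes above.
PERMISSIBLE_STATS = {
    "nominal":  {"percentage", "mode"},
    "ordinal":  {"percentage", "mode", "median"},
    "interval": {"percentage", "mode", "median", "std_dev"},
    "ratio":    {"percentage", "mode", "median", "std_dev", "mean"},
}

def allowed(scale, statistic):
    """True if the statistic is meaningful at the given measurement level."""
    return statistic in PERMISSIBLE_STATS[scale]

print(allowed("nominal", "mean"))  # False: averaging category labels is meaningless
print(allowed("ratio", "mean"))    # True: a meaningful zero permits means and ratios
```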
Ranking vs. Rating Scales
- Ranking puts things in order using an ordinal scale. Rating uses an interval or near-interval scale.
Random vs. Systematic Error
- Any measurement has: the true score, random error (uncontrollable), and systematic error (controllable).
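This decomposition (observed score = true score + systematic error + random error) can be illustrated with a short simulation; all numbers below are made up. It also shows why increasing the sample size reduces only the random component:

```python
import random

random.seed(0)

# Hypothetical numbers: a true satisfaction score of 70, a systematic
# error of +5 (e.g., a leading question), and random noise on top.
true_score = 70.0
systematic_error = 5.0
observed = [true_score + systematic_error + random.gauss(0, 2)
            for _ in range(1000)]

mean_obs = sum(observed) / len(observed)
# Averaging many measurements cancels the random error, but the
# systematic error remains: the mean sits near 75, not 70.
print(round(mean_obs, 1))
```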
Two Types of Systematic Error
- Measurement Error: Biases due to flaws in the actual measurement process.
- Sample Design Error: Biases due to flaws in the sampling design.
5 Measurement Errors
- Surrogate Information Error: Not measuring what you're supposed to be measuring.
- Measurement Instrument Bias: Problems with survey design and questions, e.g., leading questions.
- Nonresponse Bias: Participants who don't respond are systematically different than those who do respond.
- Response Bias: Tendency to respond inaccurately, consciously or subconsciously.
- Interviewer Error: The interviewer consciously or unconsciously influences respondents, e.g., by giving an incentive for a desired response.
Tests for Validity
- Face validity: Asks, in a purely subjective sense, whether you measure what you set out to measure.
- Content validity: Asks whether the measure represents the subject in its entirety.
- Predictive validity: Asks whether the measure can forecast future outcomes it should logically influence.
- Convergent validity: Measure correlates strongly with other measures of the same construct
- Discriminant validity: Measure correlates weakly with other measures of unrelated constructs
Validity and Reliability
- Validity: Asks more about accuracy and how close the measurement is to the true score.
- Reliability: Asks more about consistency and precision: is the measurement consistent across situations, respondents, and time?
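Both reliability checks in these notes boil down to a correlation: split-half correlates the two halves of one administration, while test-retest correlates two administrations over time. A minimal sketch with made-up response data:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: six respondents answering a 4-item survey (1-7 scale).
responses = [
    [5, 6, 5, 6],
    [2, 3, 2, 2],
    [7, 6, 7, 7],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
    [6, 7, 6, 6],
]

# Split-half reliability: correlate each respondent's total on the
# odd-numbered items with their total on the even-numbered items.
odd_half = [r[0] + r[2] for r in responses]
even_half = [r[1] + r[3] for r in responses]
split_half_r = pearson(odd_half, even_half)
print(f"split-half reliability: {split_half_r:.2f}")
```

A value near 1 indicates the two halves measure the same thing consistently; test-retest reliability would use the same correlation, but between responses collected at two points in time.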
Survey Flaws
- Response options that are not mutually exclusive, e.g., overlapping ranges like 1-3, 3-5, 5-6.
- Demographic questions all grouped together.
- Questions that start specific and end general (instead of moving from general to specific).