Marketing Research: Measurement, Reliability, and Validity

Questions and Answers

A researcher aims to assess the consistency of a newly developed survey instrument. To do so, the researcher divides the survey into two halves and calculates the correlation between the scores of each half. Which type of reliability is being assessed?

  • Convergent validity
  • Discriminant validity
  • Test-retest reliability
  • Split-half reliability (correct)

A researcher is developing a new measure of customer satisfaction. To ensure that the measure accurately reflects customer satisfaction, the researcher calculates correlations with other established measures of customer satisfaction. This process is most closely associated with:

  • Discriminant validity
  • Convergent validity (correct)
  • Face validity
  • Predictive validity

Discriminant validity is demonstrated when a measure shows a strong correlation with measures of unrelated constructs.

Answer: False

To assess test-retest reliability, you should administer the same questions over two points in _______ and calculate the correlation between the responses.

Answer: time

Nike uses customer satisfaction to see how likely consumers are to return to the store or recommend it to others. What kind of validity are they trying to measure?

Answer: Predictive validity

Which of the following best describes systematic error in research?

Answer: Error due to flawed research methods, leading to consistent bias.

A ratio scale has a meaningful zero point, allowing for calculations of percentages and ratios.

Answer: True

What type of measurement error occurs when a researcher measures something other than what they intended to measure?

Answer: Surrogate information error

__________ error is reduced by increasing sample size.

Answer: Random

Which type of scale is most suitable when you want to determine the exact difference in preference between several options?

Answer: Interval/Near-Interval Scale

Ranking scales provide more detailed information than rating scales.

Answer: False

List the three components that influence responses in quantitative research.

Answer: True score, random error, systematic error

Match the following scale types with their characteristics.

  • Nominal Scale = Categorical data with no inherent order
  • Ordinal Scale = Data with an order or rank, but intervals are not equal
  • Interval Scale = Equal intervals between values, but no true zero point
  • Ratio Scale = Equal intervals between values with a meaningful zero point

Which type of validity is primarily concerned with how well a measure forecasts future outcomes it should logically influence?

Answer: Predictive validity

Reliability primarily focuses on the accuracy of a measurement, ensuring it reflects the true score of the concept being measured.

Answer: False

What type of measurement error occurs when an interviewer consciously or unconsciously influences respondents to provide a desired response?

Answer: Interviewer error

A researcher designs a survey with questions that subtly encourage participants to agree with a particular viewpoint. This is an example of ______ instrument bias.

Answer: measurement

Match each type of validity with its description:

  • Face validity = Subjective assessment of whether a measure appears to measure what it intends to
  • Content validity = Assessment of whether a measure covers all aspects of the concept being measured
  • Predictive validity = Assessment of whether a measure can forecast future outcomes it should logically influence

Which of the following scenarios is the MOST likely to introduce nonresponse bias into a survey?

Answer: Only individuals with strong opinions about the survey topic choose to participate.

A researcher is developing a questionnaire to assess customer satisfaction with a new product. Which activity would BEST ensure content validity?

Answer: Including a variety of questions that cover all relevant aspects of customer satisfaction.

A measure can be reliable without being valid, but it cannot be valid without being reliable.

Answer: True

Flashcards

Measurement Instrument Bias

Systematic problems with survey design that lead to biased responses.

Nonresponse Bias

Bias caused by systematic differences between respondents and non-respondents.

Response Bias

The tendency for respondents to answer inaccurately or falsely.

Interviewer Error

Bias introduced by the interviewer's influence on respondents.

Validity

The accuracy of a measurement; whether it measures what it's intended to.

Reliability

The consistency and precision of a measurement across situations and time.

Face Validity

Subjective assessment whether a measure appears to measure the intended construct.

Content Validity

The extent to which a measure covers all aspects of the concept being measured.

Convergent Validity

The degree to which a measure correlates strongly with other measures of the same construct.

Discriminant Validity

The degree to which a measure does NOT correlate with measures of unrelated constructs.

Test-Retest Reliability

Assesses consistency of a measure over time by measuring the same people more than once.

Split-Half Reliability

Assesses internal consistency by dividing a measure into two groups and comparing the results.

Backward Marketing Research

Start with the end goal in mind; how will the survey data be used?

Interval Scale

Scale with a neutral point and opposite endpoints; responses have consistent intervals.

Ratio Scale

Values have a meaningful zero point, indicating the absence of the measured quantity.

Ranking

Arranging items in a specific order based on preference or magnitude.

Rating

Assigning a value to an item along a predefined scale.

True Score

The value you are truly trying to capture in your research.

Random Error

Unpredictable variations that cannot be attributed to any consistent factors.

Systematic Error

Consistent and repeatable errors caused by flaws in the research design.

Surrogate Information Error

Bias that arises because the measured data doesn't truly reflect the information of interest.

Study Notes

  • Marketing research projects involve defining a problem and formulating a research design.

Triggers for Research

  • Research is often needed when evaluating alternatives. This includes determining the right message for a new product (e.g., a new iPhone).
  • Opportunities, such as emerging technologies or government incentives for electric vehicles (EVs), can also trigger research. Companies should seize these opportunities and research them.
  • Threats, like government regulations (e.g., a TikTok ban), require research to find alternative platforms for advertising.
  • New competitors entering the market, such as BlueSky affecting X (formerly Twitter), necessitate research to understand user migration and retention strategies.

Defining the Problem

  • It involves translating management decision problems into marketing research problems.
  • Management decision problems (e.g., declining sales) require actionable solutions and focus on symptoms, asking "what should the decision-maker do?".
  • Marketing research problems are information-oriented, focusing on root causes and asking "where can I find the info?".

Formulating the Research Design

  • There is no single ideal research design.
  • Exploratory (qualitative) and conclusive (quantitative) research designs both have pros and cons.
  • Descriptive research aims to understand the state of a product or company by describing "how many, when, etc.".
  • Causal research explores cause-and-effect relationships.

Exploratory vs. Conclusive Research

  • Exploratory research is open-ended, uses small samples, unstructured data, and non-statistical analysis. It is broad and open-ended, providing deeper insights into motivations, attitudes, and beliefs.
  • Exploratory research helps to understand consumers. It goes beyond functional benefits and helps in brand positioning and ideation.
  • Conclusive research, by contrast, is structured, uses large samples and quantitative data, and relies on statistical analysis to verify insights and support decision-making.

Focus Groups: Origins

  • Focus groups originated during World War II as a way to sell the war effort to the American public more effectively by shaping opinions.
  • Ernest Dichter introduced using the same procedure to sell products after the war, becoming the father of focus groups in marketing.

Focus Groups: Accomplishments

  • Focus groups help in understanding how users use a product, gather feedback, and improve the product.
  • They can be a product of qualitative research, as seen with the Google Digital Wellbeing App.

Brand Management Using Focus Groups

  • Focus groups can identify new segments of brand users and understand what drives brand loyalty.
  • Reebok used research to understand that people liked the “vintage” look of their shoes, which helped them make targeted ads.

Designing Focus Groups

  • Groups should consist of 6-10 participants who are knowledgeable about the topic.
  • Avoid professional research participants; seek homogeneity within groups and heterogeneity across groups. People might conform to others' decisions or want to speak on behalf of their demographic.
  • The environment should be quiet, comfortable, and allow for recording during a 1-2 hour duration. The process should be carefully moderated.

Focus Groups: Pros

  • Focus groups gather a higher volume of information from several people at once, and can be faster than in-depth interviews.
  • Focus groups provide a better understanding of people's actual reactions. They are more convincing and impactful than quantitative data.

Focus Groups: Cons

  • Groupthink and conforming is possible due to group setting.
  • There is a risk of biases from the moderator who might ask leading questions.
  • Focus groups may produce noisy data, and the views expressed may not represent the whole market.
  • There is a temptation to dismiss any negative feedback.

In-Depth Interview (IDI)

  • An IDI is a one-on-one conversation.
  • IDIs aim to understand deeply what a person thinks about a product; they are useful when discussing sensitive topics.
  • The laddering technique helps uncover true motivations by going from concrete product traits to abstract benefits.

IDI Projective Techniques

  • Projective techniques indirectly assess motivations and attitudes, useful when people are unaware, unwilling to share, or think it's irrelevant.
  • Word association involves saying the first thing that comes to mind when a list is read.
  • Sentence completion gives partial questions for people to fill in.
  • Personification tries to understand brand image, which may show a unified or non-unified perception.

When NOT to Conduct Research

  • Avoid research when there is disagreement about the problem, results are meaningless, resources are insufficient, or costs outweigh benefits.
  • Research should also be avoided if the opportunity has passed, decisions are already made, or the needed information already exists.

Primary vs. Secondary Data

  • Primary data is newly collected for a specific purpose. Used in exploratory, descriptive, and causal research, it's tailored, actionable, and accurate.
  • Secondary data is existing data gathered for a different purpose, used in exploratory and descriptive research, like Statista and Mintel.

Advantages of Secondary Data

  • Advantages: faster and cheaper than primary collection methods. It also permits more flexibility and offers a wide range of free sources.

Disadvantages of Secondary Data

  • Disadvantages: often insufficient, ambiguous, and outdated, and may not address the specific research question.

Evaluating Secondary Data

  • Two main questions must be answered: How was the data created? How can it be used?
  • To know how it was created: understand the original study's purpose, sponsorship, methodology, sampling, and timing.
  • To know how it can be used: ask how relevant it is to the original question and the company involved (same sector, a competitor, a different market).

Tools

  • Statista: Provides industry and brand-level data in simple tables that summarize complex information.
  • Social Listening: Monitoring social media channels (Reddit, TikTok, X, Instagram) to understand what consumers are saying about a brand, product, competitors, or industry.
  • Google Trends: Useful for comparing brands, assessing interest over time, identifying seasonal trends, and analyzing regional differences.
  • YouTube: Brand YouTube Channels and consumer generated videos
  • Reddit: Unfiltered and organic reviews of products

Research Designs

  • Descriptive research describes and answers well-defined questions through conclusive, quantitative methods. It does not establish causality; surveys are a common method.

Measurements

  • Measuring involves assigning numbers or categories to concepts of interest, allowing for statistical analysis and comparisons.

Steps in the Measurement Process

  • Conceptualize: Define what you want to measure.
  • Operationalize: Determine what you will observe.
  • Measure: Specific survey questions.
  • Evaluate: Assess the sources of measurement error.

Levels of Measurement

  • Nominal Scale (lowest level): Distinguishes one from another, like listing streaming apps used.
  • Ordinal Scale: Ranks preferences from high to low.
  • Interval Scale: Rates disagreement with a statement.
  • Ratio Scale: Measures something you can count.

Four Levels of Measurement

  • Nominal: Can calculate percentages and the mode, e.g., which streaming apps do you use?
  • Ordinal: Values can be ranked, and percentages, the mode, and the median can be calculated.
  • Interval: The distance between values is constant, allowing for calculating percentages, the mode, the median, and standard deviation.
  • Ratio: Has a meaningful zero, where percentages, mode, median, mean, and standard deviation can be calculated.
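The hierarchy above can be illustrated with a short Python sketch (all data is invented for illustration); each level of measurement unlocks additional statistics:

```python
# Sketch: which summary statistics each measurement level supports,
# using illustrative (made-up) survey data.
from statistics import mode, median, mean, stdev

# Nominal: categories only -> mode and percentages are valid.
apps = ["Netflix", "Hulu", "Netflix", "Disney+", "Netflix"]
print(mode(apps))                # most common category

# Ordinal: ranked values -> the median also becomes meaningful.
ranks = [1, 3, 2, 2, 1]          # 1 = most preferred
print(median(ranks))

# Interval: equal spacing -> mean and standard deviation apply.
agreement = [2, 4, 5, 3, 4]      # 1-5 disagree/agree scale
print(mean(agreement), stdev(agreement))

# Ratio: meaningful zero -> ratios like "twice as many" are valid.
visits = [0, 2, 4, 1, 3]
print(mean(visits))              # zero literally means "no visits"
```

Each level inherits the statistics of the levels below it, which is why the ratio scale supports the full set.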

Ranking vs. Rating Scales

  • Ranking puts things in order using an ordinal scale. Rating uses an interval or near-interval scale.

Random vs. Systematic Error

  • Any measurement has: the true score, random error (uncontrollable), and systematic error (controllable).
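This decomposition can be simulated. In the sketch below (the true score, bias, and noise level are all assumed values for illustration), increasing the sample size shrinks the random error but leaves the systematic error untouched:

```python
# Sketch: observed score = true score + random error + systematic error.
# Random error averages out as the sample grows; systematic error does not.
import random

random.seed(42)
TRUE_SCORE = 7.0   # the value we are truly trying to capture (assumed)
BIAS = 0.5         # systematic error, e.g. from a leading question (assumed)

def observe() -> float:
    # One measurement: truth, plus constant bias, plus random noise.
    return TRUE_SCORE + BIAS + random.gauss(0, 1.5)

small = [observe() for _ in range(10)]
large = [observe() for _ in range(10_000)]

# The large-sample mean converges to TRUE_SCORE + BIAS, not TRUE_SCORE:
# more data removes random error but leaves the systematic bias intact.
print(sum(small) / len(small))
print(sum(large) / len(large))
```

This is why "random error is reduced by increasing sample size" while systematic error must be controlled through better research design.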

Two Types of Systematic Error

  • Measurement Error: Biases due to flaws in the actual measurement process.
  • Sample Design Error: Biases due to flaws in the sampling design.

5 Measurement Errors

  • Surrogate Information Error: Not measuring what you're supposed to be measuring.
  • Measurement Instrument Bias: Problems with survey design and questions, e.g., leading questions.
  • Nonresponse Bias: Participants who don't respond are systematically different than those who do respond.
  • Response Bias: Tendency to respond inaccurately, consciously or subconsciously.
  • Interviewer Error: The interviewer consciously or unconsciously influences respondents toward a desired response.

Tests for Validity

  • Face validity: Asks if you measure what you're set out to measure in a purely subjective sense.
  • Content validity: Asks if the measure represents the subject in its entirety.
  • Predictive validity: Asks whether the measure can forecast future outcomes.
  • Convergent validity: Measure correlates strongly with other measures of the same construct
  • Discriminant validity: Measure correlates weakly with other measures of unrelated constructs

Validity and Reliability

  • Validity: Asks more about accuracy and how close the measurement is to the true score.
  • Reliability: Asks more about consistency and precision; is the measurement consistent across situations, respondents, and time?

Survey Flaws

  • Overlapping response ranges (e.g., 1-3, 3-5, 5-6) are flawed because boundary values fit more than one option.
  • Response options that are not mutually exclusive.
  • Demographic questions grouped together.
  • Questions that start specific and end general, rather than moving from general to specific.
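The overlapping-range flaw (options like 1-3, 3-5, 5-6) can even be caught programmatically. A small sketch, where the helper function is hypothetical:

```python
# Sketch: detecting overlapping numeric response options, the flaw in
# ranges like "1-3", "3-5", "5-6" where boundary values fit two answers.
def ranges_overlap(options: list[tuple[int, int]]) -> bool:
    """Return True if any adjacent options share a boundary or overlap."""
    ordered = sorted(options)
    return any(prev_hi >= next_lo
               for (_, prev_hi), (next_lo, _) in zip(ordered, ordered[1:]))

bad  = [(1, 3), (3, 5), (5, 6)]   # 3 and 5 each belong to two options
good = [(1, 2), (3, 4), (5, 6)]   # mutually exclusive

print(ranges_overlap(bad))    # True
print(ranges_overlap(good))   # False
```

Mutually exclusive options mean every possible answer maps to exactly one choice, which is what the check above enforces.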

Related Documents

Marketing Research Project PDF
