Marketing Research Project
Uploaded by HumbleFauvism3147
University of Georgia
Summary
This document outlines the marketing research process, covering problem definition, research design, and evaluation methods. It explores qualitative and quantitative research approaches, including focus groups and surveys, providing insights into data collection, analysis, and the importance of sampling in market research. It is relevant to anyone studying or working in marketing and market research.
Full Transcript
**The Marketing Research Project**
1. Define the problem
2. Formulate the research design

**What Triggers the Need for Research**
- **Evaluating alternatives**: e.g., what message should we use to promote a new product?
  - The iPhone: a new iPhone comes out every year with a new message ("blast past fast," "big and bigger," "bigger and slimmer," etc.)
- **Opportunities**
  - New technology emerges: e.g., the Apple Watch hasn't crushed Swiss watches
  - Government incentives are introduced: e.g., discounts for people buying electric vehicles (EVs); a company should seize the opportunity and research it
- **Threats**
  - Government regulations are imposed: e.g., the threatened TikTok ban. An advertiser on that platform needs to research where to migrate to reach its target audience, and which other apps those users spend time on
  - A new competitor enters the market: e.g., Bluesky is a new Twitter-like app drawing users away from X, so X needs to figure out why people are leaving, how to attract new users, and how to maintain its retention rate

**1. Define the Problem**
- Translate the management decision problem into a marketing research problem
- Management decision problem: the ultimate decision you are trying to make about a clear, observable problem (e.g., "we have noticed a decline in sales")
  - Action oriented; focuses on symptoms; asks "what should the decision maker do?"
- Marketing research problem:
  - Information oriented; focuses on root causes; asks "where can I find the information?"
  - Defining the marketing research problem means identifying the underlying cause of apparent symptoms
- Example (Twitter graph):
  - Symptom: steady decline of monthly active users after 2022
  - Possible causes: poor management after the change in ownership (Elon Musk's takeover), unpopular policy changes, targeting the wrong market group as the platform became more political, the threat of alternatives
**2. Formulate the Research Design**
- There is no single ideal research design; each has pros and cons
- Exploratory (qualitative) research design
- Conclusive (quantitative) research design
  - Descriptive research: describes the state of a product or company ("how many," "when," etc.)
  - Causal research: looks at cause-and-effect relationships

**Exploratory vs. Conclusive Research**
- Exploratory: open ended, many possibilities, small samples, unstructured data, non-statistical analysis
  - Broad and open ended
  - Pros:
    - Deeper insight into motivations, attitudes, and beliefs; goes beyond the functional benefits of a product
    - Useful for brand positioning and ideation
    - Unexpected discoveries: when talking to consumers you learn things you didn't foresee and couldn't have found on your own
    - Sour cream example: research on how packaging could become more convenient in order to increase sales
- Conclusive: closed ended, structured data, larger samples, statistical analysis

**Focus Groups: Origins**
- Developed to sell the war to the American public more effectively: people were not thrilled the country was going to war, and the government wanted to change their opinions
- After the war ended, Ernest Dichter showed that the same procedure could be used to sell products; he is considered the father of focus groups in marketing

**What Can Focus Groups Accomplish?**
- Product feedback: a better understanding of how users use the product, what feedback they have, and how to improve it (in our project, improving and creating new products is important)
  - Google Digital Wellbeing app: a product of qualitative research
- **Brand management**: how people feel about the brand, identifying new segments of brand users (small groups that use the product in different ways), understanding what drives brand loyalty and what people associate your brand with
  - Reebok: research into its users revealed that people like the "vintage" look of their shoes, so Reebok made an ad highlighting the "timeless" look of their shoes

**Designing Focus Groups**
- Participants (6-10 people)
  - Should be knowledgeable about the topic
  - Avoid professional research participants: people paid to participate don't really care about the product, they care about making a buck
  - Want homogeneity within groups and heterogeneity across groups: people might conform to others' opinions or try to speak on behalf of their demographic
- Environment: quiet and comfortable, with the ability to record
- Duration: 1-2 hours
- Moderator

**Focus Groups: Pros and Cons**
- Pros:
  - Often faster than in-depth interviews because you talk to several people at once: a higher volume of information
  - A survey question may be misunderstood; in a group you can hand people the product and better understand their actual reaction
  - More convincing than quantitative data: seeing a reaction in person is more tangible than looking at a chart
- Cons:
  - Groupthink and conformity are possible in a group setting
  - Risk of moderator bias: asking questions in a non-neutral way or making telling expressions
  - Noisy data: responses are padded with verbal fluff rather than clean data
  - Not always representative: the group's views may not represent the whole market
  - Temptation to dismiss negative feedback

**In-Depth Interviews (IDI)**
- A one-on-one conversation aimed at deeply understanding what one person thinks about the product
- Can be useful for sensitive topics

**Laddering Technique**
- Goal: move from concrete product attributes to the abstract benefits people get from the product; helps you reach the true motivation behind a response
- 1. Start by eliciting specific product attributes
- 2. Follow up to understand the motivation behind preferences

**IDI Projective Techniques**
- Some motivations and attitudes are hard to assess because:
  - People are unaware of them (they exist at the subconscious level)
  - People are aware but unwilling to share them (too personal or conflicting)
  - People are aware but think they're irrelevant (e.g., liking the color)
- Projective techniques, an indirect form of questioning, can get around this
- Word association: people hear a list of items and say the first thing that comes to mind; patterns across responses reveal strong associations with particular brands
- Sentence completion: giving partial sentences for people to fill in
- Personification: used to understand brand image; if there's no clear answer, there's no unified perception of the brand

**When NOT to Conduct Research**
- Disagreement about the research problem: the research has to have a point
- The results would be managerially meaningless
- Insufficient resources
- Research costs exceed the benefits
- The opportunity has passed (the trend is no longer relevant)
- You have already made your decision (paying a research firm just to validate your reasoning)
- The needed information already exists (secondary data is available)

**Primary vs. Secondary Data**
- Primary data
  - New data collected to serve a specific purpose
  - Used in exploratory, descriptive, and causal research
  - Pros: tailored to your own purposes, actionable, accurate
  - Cons: expensive; takes more time and coordination to collect
  - Tools: SurveyMonkey, Qualtrics, etc.
- Secondary data
  - Existing data previously gathered for a different purpose that you can use
  - Used in exploratory and descriptive, but not causal, research
  - Sources: Statista, Mintel, Gale, IBIS, Reddit, etc.
  - Private companies create reports on specific companies, industries, trends, behaviors, etc.
**Advantages of Secondary Data** (exam question)
- Faster than primary data: finding the data and analyzing it are the longest parts
- Cheaper than primary data: reports can be affordable, and many free sources are available
- More flexible than primary data: not constrained by a specific hypothesis; can be repurposed for multiple research questions

**Disadvantages of Secondary Data**
- Often insufficient: not designed to answer YOUR question; only summary results
- Often ambiguous: not always clear how the data were collected, so accuracy can't be determined
- Often outdated: time and changing markets make information irrelevant; you can't generalize from outdated information

**Evaluating Secondary Data**
- Two main questions:
- How was it created?
  - What was the purpose of the original study? Who sponsored it, and is there a hidden agenda?
  - What was the methodology? What was the sampling method? When was the study conducted?
- How can we use it?
  - How relevant is the original question?
  - How relevant is the original company (same sector, competitor, different market)?
  - Can this data answer our question, or do we need more secondary data, or should we use it to guide our primary research?
- Camel example: an ad citing research the company itself conducted to market its product is not a trustworthy place to draw your data from

**Statista: Overview**
- A database you can access for free for industry-level and brand-level information: simple tables summarizing complex information
- Product category/industry: growth and forecasts, competitive landscape, sales channels, regional differences
- Focal brand and competitors: profit/revenue, time trends, sales channel (online vs. offline), regional differences
- Usage: time trends, demographic breakdowns, regional differences

**Social Listening**
- Social listening is the process of monitoring social media channels to understand what consumers are saying about your brand, product, competitors, and category/industry
- Channels: Reddit, TikTok, X, Instagram

**Google Trends: Overview**
- Useful for exploring a focal brand vs. competitors
- Awareness/interest: how has search interest changed over time, and how does it compare to other brands?
- Seasonality: are there predictable fluctuations in interest over time?
- Time of day: does search interest vary by time of day?
- Regional differences: national, metro, city
- Related search terms: what other concepts are associated with the brand?

**YouTube**
- Brand YouTube channels: how does the brand portray itself? What is the user imagery? How are consumers reacting to these videos (objective metrics: views, likes)?
- Consumer-generated videos (e.g., product reviews): what are consumers saying in them?

**Reddit**
- Gives more unfiltered, organic reviews of products

**Research Designs: Descriptive Research**
- Aims to describe
- What it does: conclusive, quantitative research that answers well-defined questions
- Doesn't establish cause-and-effect relationships; cannot establish causality
- Common methods: surveys

**What a Manager May Want to Describe**
- Imagine you started a clothing company; what are some basic things you would want to describe about your customers and the industry?
  - Demographics (age, gender), psychographics (lifestyle), competitors (what other brands are they shopping at)
  - Consumer characteristics: what % of customers are 18-24?
  - Purchase/consumption behaviors: average spending per visit
  - Market characteristics: do they purchase more online or in store?
- To describe these, you need to measure

**Measurement**
- The process of assigning numbers or categories to concepts of interest
- Why measure: it allows us to conduct statistical analysis and facilitates comparisons
- Four steps in the process:
  1. Conceptualize (define): define what you want to measure; clearly define the concept
  2. Operationalize (decide what you are observing): go from the abstract definition to something concrete; what can be reported or observed that reflects the concept? What can we observe or ask?
  3. Measure (write the specific survey question): how will you word the question or identify specific behaviors to track?
  4. Evaluate (assess): what are the sources of measurement error?

**Example: Measuring Brand Loyalty**
- 1. Conceptualize: the concept to measure is brand loyalty
- 2. Operationalize: if we lack behavioral data, we could operationalize brand loyalty using self-reported attitudinal data (thoughts, opinions, emotions)
  - What attitudinal information would reveal the level of brand loyalty?
    - Why did you pick this brand over that one?
    - Word of mouth: would you recommend it?
    - Commitment to the brand (what would it take for you to abandon it?)
    - Level of identification with the brand
    - Past purchase/usage frequency
    - Intentions to purchase the brand in the future
- 3. Measure: how will you word the questions and track observations?
  - Four levels of measurement:
    - Nominal scale (lowest level): distinguishes one thing from another
    - Ordinal scale: rank your preferences from high to low
    - Interval scale: rate your agreement or disagreement with a statement
    - Ratio scale: a question measuring something you can count
  - Each level of measurement dictates what type of statistics you can perform on the responses

**Four Levels of Measurement**
- Nominal
  - The most basic scale
  - Can calculate percentages and the mode (the most common response)
  - Example: Which of the following streaming apps do you use? (1) Spotify, (2) Apple Music, (3) Amazon Music
- Ordinal
  - Values reflect levels of magnitude: asking people to rank things or evaluate preferences
  - Examples: measuring social class, letter grades (A-F)
  - Can calculate percentages, mode, and median
  - Example: Rank the following music streaming services from least (1) to most (3) preferred; the number stands for something
- Interval (or near-interval)
  - The distances between adjacent values are constant, but zero is an arbitrary point
  - Examples: clock time (the difference between 1 and 2 is the same as between 4 and 5), temperature, GPA, attitudes, intentions
  - Can calculate percentages, mode, median, standard deviation, and addition/subtraction
  - Example: "On a typical day, at what time do you first start listening to music?" 1 (4 AM), 2 (5 AM), 3 (6 AM), 4 (7 AM)
  - Or: "Rate your level of satisfaction with ___": 1 (Very Dissatisfied), 2, 3, 4, 5, 6, 7 (Very Satisfied)
  - Any scale with a neutral midpoint and opposite endpoints is interval or near-interval
  - You CANNOT say one group was double or twice as high as another; without a true zero point, ratios have no meaning
- Ratio
  - Values have a meaningful zero point: zero means zero
  - Used for unit sales, price, age, weight, etc.
  - Can calculate percentages, mode, median, mean, standard deviation, addition/subtraction, and multiplication/division
  - Example: "During the past month, how many days did you use Spotify?"

**A Note on Ranking vs. Rating Scales**
- These are different question types:
  - Rank: putting options in order (ordinal scale)
  - Rating: can be ordinal or interval
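A minimal Python sketch of which summary statistics each of the four levels supports. All responses below are made-up illustrations, not real survey data:

```python
from statistics import mean, median, mode, stdev

# Nominal: "Which streaming app do you use?" -> categories.
# Only percentages and the mode (most common response) are meaningful.
apps = ["Spotify", "Apple Music", "Spotify", "Amazon Music", "Spotify"]
print(mode(apps))  # most common category

# Ordinal: rankings from 1 (least) to 3 (most preferred).
# Percentages, mode, and median are meaningful; means are not.
ranks = [3, 2, 3, 1, 2, 3]
print(median(ranks))

# Interval: 1-7 satisfaction scale. Differences between points are constant,
# so mean and standard deviation apply -- but "a 6 is twice a 3" is NOT a
# valid claim, because there is no true zero.
satisfaction = [5, 6, 4, 7, 5, 6]
print(mean(satisfaction), stdev(satisfaction))

# Ratio: days of Spotify use last month. A true zero exists, so
# multiplication and division are meaningful too.
days = [20, 10, 30, 0, 15]
print(mean(days) / 30)  # share of the month the average respondent listened
```

The point of the sketch is the "highest level of measurement that is feasible" rule: each step up the ladder (nominal, ordinal, interval, ratio) unlocks more statistics without losing the ones below it.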
- Rating questions (interval/near-interval) are more useful than ranking questions (ordinal/nominal)
- Ranking does not give exact information; it only shows the preference between options, making it hard to distinguish consumers from their rankings alone
- Always use the highest level of measurement that is feasible

**Random vs. Systematic Error**
- Any quantitative measurement is a function of the **true score** (the true attitude), **random error** (out of your control, like noise), and **systematic error** (within your control)
- True score: what you actually want to measure
- Random error: nothing you can do about it except get a bigger sample
- Systematic error: due to flawed research methods

**Two Types of Systematic Error**
- Measurement error: systematic bias due to flaws in the actual measurement process
- Sample design error

**Five Types of Measurement Error**
- **Surrogate information error**: not measuring what you're supposed to be measuring
  - New Coke: in taste tests people preferred the new recipe, but consumers rejected it at launch. Coke never asked "which Coke would you buy?" and failed to see that people buy Coke as a comfort purchase: a gap in information
- **Measurement instrument bias**: problems with the survey design and the questions themselves, e.g., making something sound good or bad to push agreement or disagreement
  - Seen often in the political domain: leading, non-neutral questions like "What disturbs you most about Donald Trump being president?" make it hard to disagree with the premise
- **Nonresponse bias**: people who don't respond are systematically different from those who do
  - People who go out of their way to answer a survey may feel strongly about the topic, so their responses differ significantly from the general public's
- **Response bias**: when respondents tend to answer in a certain inaccurate way.
  - Can be subconscious or conscious (usually lying to give the desirable answer)
  - When we ask people about desirable behaviors, they are motivated to respond favorably about themselves, which causes response bias; anonymous vs. in-person questioning changes this
- **Interviewer error**: the interviewer consciously or unconsciously influences respondents, e.g., by offering an incentive for a desired response
  - When an Uber driver asks for a 5-star rating from the front seat, they're not asking for an honest review
  - Giving candy, or free dessert at a restaurant, in exchange for a review

**Validity and Reliability**
- Validity: about accuracy
  - Are we measuring what we think we're measuring, and how close are we to the true score?
- Reliability: about consistency and precision
  - Is our measurement consistent across situations, respondents, and time?
  - Is our measurement process free from random error?

**Types of Validity**
- Face validity: purely subjective; on the surface, did you measure what you set out to measure?
- Content validity: do the items in your measure represent the entirety of the focal concept?
  - e.g., having questions about how clean the store is but missing questions covering other aspects of the experience
- Predictive validity: can the measure forecast future outcomes that it logically should influence?
  - To measure consumers' identification with the Nike brand, researchers asked people to indicate how many pairs of Nike shoes they own.
  - A month later, to evaluate the ___ of this measure, they calculated the correlation between responses to that question and how many Nike Instagram posts each respondent liked over the past month
  - Store experience example: the store experience changes whether I come back or recommend the store to other people; to test this, ask whether they would recommend it or the probability of returning
- Convergent validity: does the measure correlate strongly with other measures of the same construct?
  - The SAT and ACT should correlate strongly if both measure people accurately and consistently
  - The size of the tip left at a restaurant is another such indicator
- Discriminant validity: does the measure correlate weakly with measures of unrelated constructs?
- You want both convergent and discriminant validity

**Two Types of Reliability**
- Test-retest reliability: do we get the same values when we measure the same people more than once?
  - Tests consistency over time
- Split-half reliability: do we get the same values when we divide our measure into two halves and compare values for the same people?
  - Compares responses across two halves of the same content

**Test-Retest Reliability**
- Create an index (the average response to the same questions at each time point)
- To assess it, administer the same questions at two points in time and calculate the correlation between them

**Split-Half Reliability**
- Take the scale and split it into two halves
- Look at a single time point, split the items, and test the correlation between the two halves
- A test of internal consistency: how well the two halves correlate

**Basic Survey Design Procedure**
- Set your objective: what are you measuring?
- Determine the necessary analysis
  - Backward marketing research: begin with the end in mind (how will I use the information from the survey...
this dictates what and how the questions will be asked)
- Choose the format and delivery method: online, phone, mail, in person
- Construct the survey: introduction, questions, debriefing statement, payment
- Pretest and launch the survey

**Survey Introduction**
- Every survey should begin with:
  - Why: the purpose of the survey
  - Who is running the survey
  - How the information will be used
  - How long it will take
  - Whether the survey will be anonymous

**Survey Question Sequencing**
- Start with easy, non-threatening questions
- The survey should flow smoothly and logically between topics
- Group similar questions together; don't jump around
- Order questions from broad to specific (as in exploratory research)
- Demographics are generally collected last: asking early can influence responses because people want to answer consistently with their identity

**Question Order Bias (Strack et al. 1988)**
- Responses to one question can be influenced by a preceding question
- In one study, students were asked how happy they were with their life on a scale, and how many dates they typically go on in a month
- When the dating question came first, people inferred their happiness from the number of dates they went on

**Question Wording Guidelines**
- Focus on one issue/topic per question
  - Avoid double-barreled questions: in "How satisfied were you with the food and the drinks at the restaurant?", food and drinks shouldn't be combined
- Keep questions as brief and simple as possible; use simple vocabulary so everyone understands
- Make sure there is only one way to interpret the question; avoid ambiguous wording
- Avoid leading or biased questions
- Provide collectively exhaustive response options: all possible responses should be represented
- Provide mutually exclusive response options: a respondent should fall into only one category

**Scaling Considerations**
- What's the right kind of response scale for my survey question?
  - Nominal, ordinal, interval, or ratio
  - Open ended or closed ended
  - Number of categories
  - Should we include a neutral option?
  - Unipolar or bipolar numbering
  - Category and endpoint labels
  - Order from low to high or high to low

**Closed vs. Open-Ended Responses (Schuman and Scott 1987)**
- "What is the most important national or world event or change of the past 50 years?"
- When responses were open ended, only 37% (vs. 95%) had something specific to write about

**Pros/Cons of Open- and Closed-Ended Survey Questions**
- Open ended
  - Pros: easier to design; useful for exploratory purposes; helpful when there are too many options to list
  - Cons: higher potential for researcher bias; harder for respondents; harder to analyze the data
- Closed ended
  - Pros: easier for respondents; lower potential for researcher bias
  - Cons: harder to design

**Respondent Ambivalence and Ignorance**
- Ambivalence: if people are likely to have mixed feelings, provide a neutral response option
- Ignorance: if people may lack background knowledge, provide a "don't know" option
- People who are ambivalent or ignorant will otherwise choose randomly, which adds noise to your data

**Response Scale Bias**
- Responses to a question can be influenced by how the response scale categories are labeled
- Example: "On average, how many hours do you spend watching TV?"
  - If the scale presents your answer as an "extreme" option, you're more likely to lie about it; provide a wide range so people feel comfortable responding honestly

**Sample vs. Census**
- Census: a survey of everyone in the target population
  - Use if the population is small enough (e.g., B2B), or if it's easy or cheap enough to access the entire population (e.g., customer purchase history)
- Sample: a subset of the target population
  - Used in most survey research

**Sampling Process**
- 1. Define the target population: who do you need information about or from?
- 2. Identify the sampling frame: what list will you select members of the target population from?
- 3.
Choose the sampling method: how will you select the subset of the target population to survey?
- 4. Determine the sample size: how many people do you want to interview?

**Two Categories of Sampling Methods**
- Nonprobability sampling techniques: selection is not random; not everyone in the population has an equal chance of being selected
- Probability sampling techniques: selection is random, so everyone in the population has an equal chance of being selected

**Nonprobability Sampling Methods**

**Convenience Sampling**
- Based on who is easily accessible: friends, coworkers, roommates, contacts, passersby, etc.
- Pros:
  - Cheapest sampling method
  - Least time-consuming sampling method
  - Can be helpful for exploratory research
- Cons:
  - Risk of selection error (bias)
  - The sample does not represent the whole population
  - Inappropriate for conclusive research

**Judgement Sampling**
- An improvement on convenience sampling: participants are selected on the basis of your own expert judgement (whoever you think is qualified)
- Examples: test markets (a juice company testing a new flavor hand-selects a few stores in certain regions for specific reasons), retail experiments
- Pros:
  - Low cost and time investment
  - Can select informed respondents
  - A step up from convenience sampling because there are criteria for inclusion beyond whoever is available
- Cons:
  - Risk of selection error: no systematic selection process
  - The sample is not representative: who you've chosen may not reflect the entire population
  - Relies on subjectivity

**Quota Sampling**
- Participants are selected to match the population on key "control characteristics" (CCs) determined by the researcher
- Example: doing research for Spotify, you know 60% of users are on the free tier and 40% on premium, so you want the same 60/40 split in the sample.
- Ensuring the same ratio makes the sample represent the population:
  - Divide the population into subgroups based on the control characteristics
  - Determine the proportion of the population each subgroup accounts for
  - Use convenience or judgement sampling to fill each quota so the sample matches the population
- Pros:
  - The sample reflects the population on critical dimensions: a little more representative
- Cons:
  - Same risk of selection error
  - Can't ensure the sample is 100% representative

**Snowball Sampling**
- Useful when populations are hard to reach or identify (rare characteristics)
  - Example: researching Fortune 500 CEOs, there are few of them and they're hard to reach, so find one CEO, send the survey, and ask them to pass it on to others with the same characteristics
- 1. Select an initial group of respondents from the target population
- 2. Ask respondents to identify others they know who fit the target population
- 3. Ask the new respondents to identify others, and so on: the snowball effect
- Pros:
  - Low cost
  - Easier to locate people with rare characteristics
- Cons:
  - Risk of selection error
  - More time consuming than other methods because you're relying on others to get data back to you
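The quota-filling logic above can be sketched in Python. The respondent pool, tier labels, and selection order below are hypothetical illustrations; the 60/40 split mirrors the Spotify example:

```python
import random

random.seed(7)  # reproducible illustration

# Hypothetical pool of easily reachable respondents (a convenience pool),
# each tagged with the control characteristic: subscription tier.
pool = [{"id": i, "tier": "free" if i % 3 else "premium"} for i in range(300)]

# Target quotas mirror the known population split: 60% free, 40% premium.
sample_size = 50
quotas = {"free": int(sample_size * 0.6), "premium": int(sample_size * 0.4)}

sample, counts = [], {"free": 0, "premium": 0}
random.shuffle(pool)                 # interview whoever we reach, in any order
for person in pool:
    tier = person["tier"]
    if counts[tier] < quotas[tier]:  # accept only while that quota is open
        sample.append(person)
        counts[tier] += 1
    if counts == quotas:             # both quotas filled -> stop interviewing
        break

print(counts)  # {'free': 30, 'premium': 20}
```

Note what the sketch does NOT do: within each quota, respondents are still taken by convenience, which is exactly why quota sampling keeps the selection-error risk that stratified sampling (a probability method) avoids.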
**Probability Sampling Methods**

**Simple Random Sampling**
- Everyone in the population has an equal chance of being selected
- 1. Assign a number to each member of the population
- 2. Randomly select members until you reach the desired sample size
- The bigger the sample, the more representative it is
  - In a class of 40 people, a sample of 3 may not represent the rest of the class; a sample of 30 will be more representative
- Pros:
  - Easy to understand and implement
  - Results are more generalizable to the population
- Cons:
  - Difficult to construct the sampling frame
  - Cannot guarantee representativeness

**Systematic Sampling**
- Select participants using a SKIP INTERVAL, e.g., choosing every 4th person in a group
- 1. Decide the sample size
- 2. Assign a number to each member of the population
- 3. Choose a random starting point
- 4. Select every nth person in the sampling frame, where n = population size / sample size
- Example: for a group of 200 people and a desired sample of 20, divide 200 by 20 to get n = 10; pick a random starting point between 1 and 10, say 6, then ask every 10th person starting from person 6
- Pros:
  - Easy to implement
  - Results are more generalizable to the population
- Cons:
  - Difficult to construct the sampling frame
  - Cannot guarantee representativeness

**Stratified Sampling**
- Strata: subgroups defined within a population, e.g., generations, zip codes, etc.
- Steps:
  - 1. Divide the population into mutually exclusive strata
  - 2. Select a simple random sample from each stratum
- Helps ensure every subgroup is represented in the sample
- Pros:
  - Improves statistical precision
  - Ensures that all relevant subgroups are included
- Cons:
  - Requires more time and effort than simple random sampling
  - Not feasible to stratify on many variables
- How is it different from quota sampling?
  - Stratified sampling is a probability method: you choose at random within each stratum, so it's more representative
  - Both share the benefit of matching the population on a key characteristic
  - In quota sampling, the researcher divides the population manually and then chooses people by convenience

**Example of Stratified Sampling**
- A marketer planning its advertising media schedule wants to measure TV viewing habits in a county with this population:
  - Town A: 15,000 households
  - Town B: 6,000 households
  - Town C: 9,000 households
- How would you select a stratified sample of 1,000 households from this county?
  - Full population: 30,000 households
  - Town A is 50%: 500 households
  - Town B is 20%: 200 households
  - Town C is 30%: 300 households

**Cluster Sampling**
- Sample entire groups (clusters) rather than individuals
- 1. Divide the population into mutually exclusive and collectively exhaustive clusters
- 2. Each cluster should resemble the population as closely as possible
- 3. Select a random sample of clusters
- Think of it as creating multiple miniature versions of the population and choosing some of them at random
- Pros:
  - Easy to implement
  - Cost effective
- Cons:
  - Relatively imprecise samples
  - Difficult to form appropriate clusters
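The probability methods above can be sketched in Python. The 200-person population is a made-up illustration; the stratified allocation reproduces the county example's arithmetic (15,000 / 6,000 / 9,000 households, sample of 1,000):

```python
import random

random.seed(42)  # reproducible illustration

population = list(range(200))  # 200 numbered members

# Simple random sampling: every member has an equal chance of selection.
srs = random.sample(population, 20)

# Systematic sampling: skip interval n = population / sample = 200 / 20 = 10;
# pick a random start in the first interval, then take every 10th member.
n = len(population) // 20
start = random.randrange(n)
systematic = population[start::n]

# Stratified sampling: allocate the sample proportionally to stratum sizes,
# then draw a simple random sample inside each stratum.
strata = {"Town A": 15_000, "Town B": 6_000, "Town C": 9_000}
total = sum(strata.values())  # 30,000 households
allocation = {town: 1_000 * size // total for town, size in strata.items()}
print(allocation)  # {'Town A': 500, 'Town B': 200, 'Town C': 300}
```

The allocation step is what separates stratified from simple random sampling: the proportions are fixed in advance, so small strata cannot be missed by chance, while selection within each stratum stays random.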
**Two Types of Systematic Error**
- Measurement error: systematic bias due to flaws in the actual measurement process
- Sample design error: systematic bias due to flaws in the sampling design and selection process

**Three Types of Sample Design Error**
- **Population specification error**: incorrect definition of the population (i.e., sampled the wrong population)
- **Sampling frame error**: correct population, but the sampling frame is inaccurate or incomplete
- **Selection error**: a biased process for selecting participants for the sample
  - Can occur even when the population and sampling frame are accurate
- Think of it as a funnel: first, is there a population specification error (am I talking to the right people)? If not, keep going: is the sampling frame okay? If so, check for selection error (is the process of selecting participants biased)?

**Identify the Sample Design Error (Example Questions)**
- Kellogg's conducted a survey using a nationally representative sample of parents to get feedback about the packaging of a new children's cereal. When the product was released, sales were much lower than expected.
  - Population specification error
- Lyft conducted a survey to learn about consumers' fears regarding autonomous vehicles. To find respondents, they used a list of registered car owners provided by the DMV in various states.
  - Sampling frame error: the list leaves out everyone who doesn't own a car
- Bank of America is conducting interviews to understand the spending habits of adults in the US. All customers who visit bank branches on weekday afternoons are invited to participate.
  - Selection error: the people chosen are part of the population, but selecting only weekday afternoons is not representative of everyone, since some people work then
- BMW is trying to determine what type of imagery will be most effective in attracting Gen Z consumers.
To explore this, they sent an email survey to a random sample of BMW owners.
  - Population specification error (the target is Gen Z consumers, not current BMW owners)
- Furniture retailer Design Within Reach is trying to determine what new brands to carry in their brick-and-mortar stores and sent a survey to a random sample from a list of the last 1,000 customers who purchased anything from their website.
  - Sampling frame error
- AMC Theatres wants to redesign their Phipps Plaza location. They got the contact information of everyone who visited the theater in the past 3 years and sent a survey to all AMC app users on this list.
  - Selection error: right population and right sampling frame, but choosing only the app users off the list is a biased selection process

Exam 1 Review

Popular press readings (e.g., news articles) are fair game

- **NYT: Ozempic Could Crush the Junk Food Industry**
- **WSJ: The Dubious Management Fad Sweeping Corporate America**
- WSJ: How Companies Use a Popular, but Limited, Tool to Measure Customer Happiness
- NYT: How Can the Opinions of 1,000 People Possibly Represent the Entire Country
- NYT: How Polls Have Changed to Try to Avoid a 2020 Repeat
- FastCo: We're All Being Manipulated by A/B Testing All the Time
- Verge: What Instagram Really Learned from Hiding Like Counts

Types of Questions

- If you read the article, you'll be able to answer
- The first two are important

Marketing Research Process

- 1) Define the Problem or Opportunity
- 2) Formulate the Research Design
- 3) Select the Research Method for Data Collection
- 4) Select the Sample & Collect Data
- 5) Analyze and Interpret the Data
- 6) Prepare and Present the Results

Management Decision Problem: what course of action should we take

- Should we offer this product variant or the other?

Marketing Research Problem

- More information oriented, focusing on root causes rather than symptoms

Research Design

- Qualitative (exploratory)
  - In-depth interviews
  - Focus groups
- Quantitative (conclusive)
  - Descriptive (surveys)
  - Causal research

Most Projects
Design

- Start with an exploratory research design, then end with a conclusive design for the quantitative data

Question Examples -- Choose the best research design

Madewell wants to test whether sending an email with a discount code or an email simply mentioning a sale is more effective in driving website traffic.

- **Causal research (experiment)**: trying to establish a cause-and-effect relationship

Snapchat wants to break into the virtual reality industry and is trying to come up with ideas for different VR products and services.

- Qualitative: **exploratory** research for ideation

Tidal wants to understand how many of their subscribers also use other streaming services.

- Quantitative: **descriptive** research (a survey to measure "how many" of something)

Exploratory Research

- Focus groups can accomplish a lot for product design and brand management decisions
  - List PROS AND CONS of focus groups
- In-depth interviews
  - Give more depth of information rather than a larger range of information
  - Better for sensitive questions/topics
  - List PROS AND CONS of in-depth interviews

Primary vs. Secondary Data for Exploratory Research

- Primary: newly created
  - More specific to your topic
- Secondary:
  - Advantages: faster and cheaper
  - Disadvantages: may be insufficient, may be ambiguous, may be outdated
  - For secondary data, ASK:
    - HOW WAS IT CREATED: who conducted it, why, and is it valid?
    - HOW CAN WE USE IT: how relevant is it to our topic?

Measurement

- Measurement Process
  - Conceptualize: decide what you want to measure
  - Operationalize: decide what can be observed that reflects the question
  - Measure: decide how you will record it
  - Evaluate: assess the measure's reliability and validity

[Image: table of rules for assigning numbers]

- This is an example of INTERVAL/near-interval measurement
- We know it's interval because the endpoints are opposites and the scale has a midpoint
- "The average safety rating was 1.75 on a 1-5 scale.
Twice as many people rated autonomous vehicles as a 1 than as a 5 on the safety scale"
  - This is a valid claim: you can compare the number of people who chose one response with the number who chose another

[Image: close-up of a response scale]

- This is an example of a NOMINAL measurement scale
- We know this because the numbers have no value other than distinguishing the different options
- CAN'T claim an average between the numbers, because 2 is not an actual value; you CAN talk about the MODE, the most frequently chosen option
- An ordinal scale, because all the response options are ranges; the response scale is not precise

5 Measurement Errors

- Surrogate Information Error
  - Asking the wrong questions
- Measurement Instrument Bias
  - Bad questions (double-barreled questions, leading questions)
- Nonresponse Bias
  - Nonrespondents may be very different from responders
- Response Bias
  - The responses you get may be inaccurate due to pressure
- Interviewer Error
  - NEVER for self-administered survey questions; only a physical interviewer can cause this
  - The interviewer's PRESENCE can tempt respondents to change their answers

Validity

- Face Validity
  - On its face, is it measuring what it's meant to measure?
- Content Validity
  - As a set, are these questions capturing the construct?
- Predictive Validity
- Convergent Validity
  - SAT and ACT both measure math abilities, so scores should correlate highly
- Discriminant Validity
  - There should be NO correlation between unrelated constructs

2 Types of Reliability

- Test-Retest Reliability
  - Same question over time; there should be a high correlation
- Split-Half Reliability
  - Are we getting similar values if we divide the group and compare the halves AT ONE POINT in time?

Survey Design

- 1) Set your objective
- 2) Determine the necessary analysis
  - Backward marketing research: start with the end in mind
- 3) Choose format and delivery method
  - Online?
- 4) Write your questions and construct your survey
- 5) Pretest and launch the survey

Survey Question Sequencing

- Start with easy questions
- Similar questions should be grouped together
- Demographics at the end, so they don't influence responses

Question Wording Guidelines

- Avoid double-barreled questions: one topic per question
- Keep questions brief and simple
- Avoid ambiguous questions
- Avoid leading/biased questions
- Provide all possible responses
- Provide mutually exclusive response options
  - Ranges like 1-3, 3-5, 5-6 are BAD because the endpoints overlap

Identify Survey Flaws

1. Options are not mutually exclusive, and demographics belong at the end; also, if many respondents are under 20 you can't distinguish their ages, so consider making the question open-ended
2. Response options move from specific to general, ending with "life overall"; start with "life overall" instead, to avoid bias from the preceding questions
3. The open response is hard to answer, so the responses may not be accurate

1) Where do you get most of your information about current events in the nation?

Facebook
Online Newspapers
Magazines
Radio

- [Not collectively exhaustive, and not mutually exclusive]

2) College courses should provide more projects and fewer exams.

Strongly Disagree 1 -- 2 -- 3 -- 4 -- 5 -- 6 -- 7 Strongly Agree

- [Double-barreled question]

3) Would you rather have a more expensive Netflix account with more programs or a less expensive account with fewer programs?
Strongly Disagree 1 -- 2 -- 3 -- 4 -- 5 -- 6 -- 7 Strongly Agree

- [The response scale is ambiguous and inconsistent with the question]

Sampling

- Start with the target population
- Identify the sampling frame
- Choose a sampling method
  - Probability or nonprobability
- Determine the sample size

[Image: diagram of the sampling process]

KNOW HOW TO SET THESE UP & when each one is more appropriate

- Convenience: easy-to-access people
- Judgment: using our own knowledge
- Quota: choosing people by convenience to fill group quotas
- Snowball: for reaching hard-to-reach populations
- Simple Random Sampling: everyone has an equal chance of being included
- Systematic Sampling: skip interval of ___ people
- Stratified: similar to quota, BUT stratified chooses at random rather than by convenience
- Cluster: randomly sampling whole groups rather than individuals

Sample Design Error

- Population specification error? If yes, you can stop here; if no, continue to sampling frame error
- Sampling frame error?
  - If there is a list, it may be this error if the list was incomplete in some way; if there is no list, this cannot be it
- Selection error: biased process of selecting participants
  - Can occur even when the population and sampling frame are accurate

Example Questions:

Based on a survey of people who were in the office on Saturday, your boss decided to cancel the company retreat so people have more time to meet deadlines.

- Selection error: people in the office on Saturday are different from the typical employee, so there is likely a systematic difference between those included and those not
- They are employees, so they are the target population
- No list is mentioned, so move past sampling frame error

During midterm elections, a USA Today poll contacts a random sample of people from a list of federal employee registered voters to get their opinion on senatorial candidates.
- Sampling frame error: the poll should cover EVERY registered voter, but the list includes only federal employees
- The list is incomplete

The College Board wants to understand the different factors that people consider when making decisions about saving and paying for college. They send a survey to a random sample of undergraduate students throughout the country.

- Population specification error: most students don't pay for their own college, so the survey should target parents rather than students

The UGA Athletics Department wants to understand how to improve the spectator experience at sports events. They conduct interviews of randomly selected attendees at track and field meets throughout the season.

- Selection error: people who attend track and field meets may differ from spectators at other sports; no sampling frame is mentioned

To evaluate satisfaction with their brand, Athleta sent a short survey to a random sample selected from a **list** of people who signed up for their free in-store yoga class.

- Sampling frame error: those who signed up for the free yoga class don't represent the whole target population, so the LIST is incomplete

Amazon wants to understand what new grocery items their most loyal Amazon Fresh customers would like to see offered. Once Prime members sign into their account, they receive a pop-up survey.

- Population specification error: not every Prime member is an Amazon Fresh customer, let alone a loyal one
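The probability sampling methods from the review list above (simple random, systematic with a skip interval, and stratified) can be sketched with a toy example. This is a minimal illustration, assuming a made-up population of 100 person IDs and two hypothetical strata:

```python
import random

population = list(range(1, 101))   # hypothetical population of 100 person IDs

# Simple random sampling: everyone has an equal chance of being included.
srs = random.sample(population, 10)

# Systematic sampling: random start, then take every k-th person (skip interval k).
k = len(population) // 10          # skip interval of 10
start = random.randrange(k)        # random starting point within the first interval
systematic = population[start::k]

# Stratified sampling: divide into mutually exclusive strata, then draw a
# simple random sample from EACH stratum (unlike quota sampling, where
# people within each group are chosen by convenience).
strata = {"even": [p for p in population if p % 2 == 0],
          "odd":  [p for p in population if p % 2 == 1]}
stratified = [p for group in strata.values() for p in random.sample(group, 5)]
```

Each method yields a 10-person sample, but only stratified sampling guarantees that both subgroups are represented, which is the precision benefit noted above.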