Marketing Research Methods
Summary
This document provides an overview of marketing research methodologies, focusing on quantitative data collection techniques using questionnaires. The text details questionnaire design including the choice of questions and scales, as well as the importance of pretesting. It also touches upon collecting primary data through experimental research and qualitative research.
Full Transcript
understanding of how their usage of and preferences for automobiles differ from those of European drivers.⁶ Test markets are a useful, but costly, type of market research in which a company introduces a new product or service in a specific geographic market. Sometimes, test markets are also used to understand how consumers react to different marketing mix instruments, such as changes in pricing, distribution, or advertising and communication. Thus, test marketing is about changing the product or service offering in a real market and gauging consumers' reactions. While the results from such studies provide important insights into consumer behavior in a real-world setting, they are expensive and difficult to conduct. Some frequently used test markets include Hassloch in Germany, as well as Indianapolis and Nashville in the US.

⁶ See http://www.businessweek.com/autos/autobeat/archives/2006/01/there_have_been.html

Box 4.3 Using mystery shopping to improve customer service
https://www.youtube.com/watch?v=JK2p6GMhs0I

4.4.2 Collecting Quantitative Data: Designing Questionnaires

There is little doubt that questionnaires are the mainstay of primary market research. While it may seem easy to create a questionnaire (just ask what you want to know, right?), many issues can turn good intentions into bad results. In this section, we discuss the key design choices required to produce good surveys. A good survey requires at least six steps: first, determine the goal of the survey; next, determine the type of questionnaire and the method of administration; thereafter, decide on the questions, the scale, and the design of the questionnaire; and conclude by pretesting and administering the questionnaire. We show these steps in Fig. 4.5.

Fig. 4.5 Steps in designing questionnaires:
1. Set the goal of the survey (determine the analyses required for the study, the data required to do the analyses, and the type of information or advice you want to give based on the survey)
2. Determine the type of questionnaire and method of administration
3. Design the questions
4. Set the scale
5. Design the questionnaire
6. Pretest the questionnaire
7. Execution

4.4.2.1 Set the Goal of the Survey

Before you start designing the questionnaire, it is vital to consider the goal of the survey. Is it to collect quantitative data on the background of customers, to assess customer satisfaction, or do you want to understand why and how customers complain? These different goals influence the type of questions asked (such as open-ended or closed-ended questions), the method of administration (e.g., by mail or on the Web), and other design issues discussed below. We discuss three sub-aspects that are important to consider when designing surveys.

First, it is important to consider the analyses required for the study early in the design process. For example, if a study's goal is to determine market segments, you should most probably use cluster analysis (cluster analysis is discussed in Chap. 9). Similarly, if the study's goal is to develop a way to systematically measure customer satisfaction, you are likely to use factor analysis (see Chap. 8).

A second step is to consider what types of data these analyses require. Cluster analysis, for example, often requires equidistant data (see Chap. 3), meaning that researchers need to use a type of questionnaire that can produce this data.
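As an illustration of matching the analysis to the data, here is a minimal sketch of a cluster analysis run on equidistant rating data. The responses are invented, and the use of scikit-learn's KMeans is an assumption for illustration, not a prescription from the text:

```python
# Minimal sketch: segmenting respondents with cluster analysis.
# Assumes answers were collected on equidistant 7-point scales, so
# treating them as interval data is defensible. Data is invented.
import numpy as np
from sklearn.cluster import KMeans

# Rows = respondents, columns = survey items (importance ratings, 1-7).
responses = np.array([
    [7, 6, 2, 1],
    [6, 7, 1, 2],
    [2, 1, 7, 6],
    [1, 2, 6, 7],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(responses)
print(kmeans.labels_)  # segment membership for each respondent
```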
On the other hand, factor analysis usually requires data that includes different, but related, questions. If researchers use factor analysis to distinguish between the different aspects of consumer satisfaction, they need to design a survey that allows them to conduct factor analysis.

A third point to consider is the information or advice you want to give based on the study. Say you are asked to help understand check-in waiting times at an airport. If the specific question is to understand how many minutes travelers are willing to wait before becoming dissatisfied, you should be able to provide answers on how much travelers' satisfaction decreases as the waiting time increases. If, on the other hand, the specific question is to understand how people perceive waiting time (short or long), your questions should focus on how travelers perceive this time and, perhaps, what influences their perception. Thus, the information or advice you want to provide influences the questions that you should ask in a survey.

4.4.2.2 Determine the Type of Questionnaire and Method of Administration

After determining the goal of a survey, you need to decide on the type of questionnaire you should use and how it should be administered. There are four key ways to administer a survey:

– Personal interviews,
– Telephone interviews,
– Web surveys, and
– Mail surveys.

In some cases, researchers combine different ways of administering surveys. This is called a mixed mode.

Personal interviews (or face-to-face interviews) can obtain high response rates, since engagement with the respondents is maximized, allowing rich information (visual expressions, etc.) to be collected. Moreover, since people find it hard to walk away from interviews, it is possible to collect answers to a reasonably lengthy set of questions. Consequently, personal interviews can support long surveys. They are also the best type of data collection for open-ended responses. In situations where the respondent is initially unknown, this may be the only feasible data collection type. Personal interviews may thus be highly preferable, but they are also the most costly per respondent. This is less of a concern if only small samples are required (where personal interviewing could be the most efficient method). Other issues with personal interviews include the possibility of interviewer bias (i.e., a bias resulting from the interviewer's behavior, for example, in terms of his/her reactions or presentation of the questions), respondent bias to sensitive items, and the fact that data collection usually takes more time. Researchers normally use personal interviewing when they require an in-depth exploration of opinions. Such interviewing may also help if dropout is a key concern. For example, if researchers collect data from executives around the globe, using methods other than face-to-face interviewing may lead to excessive non-response in countries such as Russia or China, where face-to-face interviews are seen as a sign of respect and appreciation for the time used. A frequently used term in the context of personal interviewing is CAPI, which is an abbreviation of Computer-Assisted Personal Interviews. CAPI involves using computers during the interviewing process to, for example, route the interviewer through a series of questions, or to enter responses directly. Similarly, in CASI (Computer-Assisted Self Interviews), the respondent uses a computer to complete the survey questionnaire without an interviewer administering it.
Telephone interviewing allows researchers to collect data quickly. It also supports open-ended responses, though not as well as personal interviews do. Moreover, there is only moderate control of interviewer bias, since interviewers follow predetermined protocols, and the respondent's interactions with others during the interview are strongly controlled. Telephone interviewing can be a good compromise between the low cost of mail surveys and the richness of personal interviews. Similar to CAPI, CATI refers to Computer-Assisted Telephone Interviews.

Telephone surveys are an important method of administering surveys. In the 1990s, telephone interviews were typically conducted over fixed lines, but mobile phone usage has soared since. In many countries, mobile phone adoption rates are higher than landline adoption ever was (especially in African countries and India). This has caused market researchers to become increasingly interested in using mobile phones for survey purposes. The cost of calling mobile phones is still higher than that of calling landlines, but it is decreasing. Can market researchers simply switch to calling mobile phones? Research suggests not. In many countries, users of mobile phones differ from the country's general population in that they are younger, more educated, and represent smaller household sizes. Moreover, the process of surveying by calling mobile phones differs from using landlines. For example, the likelihood of fully completing a survey is higher when mobile phones are called, although completion takes around 10% longer (Vincente et al. 2008). Researchers should be aware that calling mobile phones differs from calling landlines and that those who use mobile phones are unlikely to represent the general population of a country.

Web surveys (sometimes referred to as CAWI, or Computer-Assisted Web Interviews) are often the least expensive to administer and can be fast in terms of data collection, particularly since they can be set up very quickly. Researchers can administer Web surveys to very large populations, even internationally, because, besides the fixed costs of setting up a survey, the marginal costs of administering additional Web surveys are relatively low. Many firms specializing in carrying out Web surveys will ask $0.30 (or more) to process every additional respondent, which is substantially lower than the cost of telephone interviews, personal interviews, or mail surveys. It is easy to obtain precise quotes quickly. For example, Qualtrics Panels (http://qualtrics.com/panel-management/) allows a specific type of respondent and a desired sample size to be chosen. For example, using Qualtrics's panels to survey 500 current car owners to measure their satisfaction costs $2,500 for a 10-minute survey. This cost increases sharply if samples are hard to access and/or need to be compensated for their time. For example, surveying 500 purchasing managers by means of a 10-minute survey costs approximately $19,500.

Web surveys also support complex survey designs with elaborate branching and skip patterns that depend on the responses. For example, Web surveys allow different surveys to be created for different types of products. Also, as Web surveys reveal questions progressively to the respondents, the option exists to channel respondents to the next question based on their earlier responses. This procedure is called adaptive questioning.
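To illustrate what such branching and skip logic look like in practice, here is a minimal sketch; the questions, flow, and helper function are invented for illustration:

```python
# Minimal sketch of adaptive questioning / skip logic in a Web survey.
# The questions and branching rules are invented for illustration.

def run_survey(ask):
    """Walk through a tiny questionnaire, skipping questions
    that do not apply based on earlier responses."""
    responses = {}
    responses["owns_ipad"] = ask("Do you own an Apple iPad? (yes/no) ")
    if responses["owns_ipad"] == "yes":
        # Only iPad owners see the satisfaction item.
        responses["ipad_satisfaction"] = ask(
            "How satisfied are you with your iPad? (1-5) ")
    responses["age_group"] = ask("What is your age group? ")
    return responses

# Usage: pass `input` for an interactive run, or a stub for testing.
print(run_survey(lambda prompt: "no"))  # {'owns_ipad': 'no', 'age_group': 'no'}
```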
In addition, Web surveys can be created that allow respondents to automatically skip questions if these do not apply. For example, if a respondent has no experience using Apple's iPad, researchers can create surveys that do not ask questions about this product. The central drawback of Web surveys is the difficulty of drawing random samples, since Web access is known to be biased with regard to income, race, gender, and age. Some firms, like Toluna (http://www.toluna.com) or Qualtrics (http://qualtrics.com), provide representative panels to address this concern. Another issue with Web surveys is that they impose similar burdens on the respondents as mail surveys do (see below). This makes administering long Web surveys difficult. Moreover, open-ended questions tend to be problematic, because few respondents are likely to provide answers, leading to low item response. There is evidence that properly conducted Web surveys lead to data as good as that obtained from mail surveys, and that they can provide better results than personal interviews due to the absence of an interviewer and the associated interviewer bias (Bronner and Ton 2007; Deutskens et al. 2006). In addition, in Web surveys, the respondents are less exposed to evaluation apprehension and are less inclined to respond in socially desirable ways.⁷

⁷ For a comparison between CASI, CAPI, and CATI with respect to differences in response behavior, see Bronner and Ton (2007).

Web surveys are also used when a quick "straw poll" is needed on a subject. It is important to distinguish between true Web-based surveys, which are used for collecting information on which marketing decisions will be based, and polls, or very short surveys on websites, which are used to increase interactivity. These polls/short surveys are used to attract and keep people interested in websites and are thus not part of market research. For example, USA Today (http://www.usatoday.com/), an American newspaper, regularly publishes short polls on its main website.

Mail surveys are paper-based surveys sent out to respondents. They are a more expensive type of survey research and are best used for sensitive items. As no interviewer is present, there is no interviewer bias. However, mail surveys are a poor choice for complex survey designs, such as those in which respondents need to skip a large number of questions depending on previously asked questions, as this means that the respondents need to correctly interpret the survey structure. Open-ended items are also problematic, because few people are likely to provide answers to such questions if the survey is administered on paper. Other problems include a lack of control over the environment in which the respondent fills out the survey and the fact that mail surveys take longer than telephone or Web surveys. However, in some situations, mail surveys are the only way to gather data. For example, while executives rarely respond to Web-based surveys, they are more likely to respond to paper-based surveys. Moreover, if the participants cannot easily access the Web (such as employees working in supermarkets, cashiers, etc.), handing out paper surveys is likely to be more successful.

Mixed mode approaches are increasingly used by market researchers. An example of a mixed mode survey is when potential respondents are first approached by telephone, asked to participate and to confirm their email addresses, after which they are given access to a Web survey.
Mixed mode approaches can also be used in cases where people are first sent a paper survey and are then called if they fail to respond to the survey.

Mixed mode approaches may help because they signal that the survey is important. They may also help response rates, as people who are more visually oriented prefer mail and Web surveys, whereas those who are aurally oriented prefer telephone surveys. By providing different modes, people can use the mode they prefer most. A downside of mixed mode surveys is that they are expensive and require a detailed address list (including a telephone number and matching email address). However, the most serious issue with mixed mode surveys is systematic (non)response issues. For example, when filling out mail surveys, the respondents have more time than when providing answers by telephone. If respondents need this time to think about their answers, the responses obtained through mail surveys may differ systematically from those obtained by means of telephone surveying.

4.4.2.3 Design the Questions

Designing questions (items) for a survey, whether it is for a personal interview, Web survey, or mail survey, requires a great deal of thought. Take, for example, the survey item in Fig. 4.6.

Fig. 4.6 Example of a bad survey item

It is unlikely that people are able to give meaningful answers to such a question. First, using negation ("not") in sentences makes questions hard to understand. Second, the reader may not have an iPhone, or may not even know what it is. Third, the answer categories are unequally spaced; that is, the difference from neutral to disagree is not the same as the distance from neutral to completely agree. These issues are likely to create difficulties in understanding and answering questions, which may, in turn, cause validity and reliability issues.

When designing survey questions, there are at least three essential rules you should keep in mind.

As a first rule, ask yourself whether everyone will be able to answer each question. If the question is, for example, about the quality of train transport and the respondent always travels by car, his or her answers will be meaningless. However, the framing of questions is important since, for example, questions about why that particular respondent does not use the train can yield meaningful answers.

As a second rule, you should check whether respondents are able to construct or recall an answer. If you require details about events that possibly occurred a long time ago (e.g., what information did the real estate agent provide when you bought/rented your current house?), the respondents may have to "make up" an answer, which could also lead to validity and reliability issues.

As a third rule, assess whether the respondents will be willing to answer the questions. If questions are considered sensitive (e.g., referring to sexuality, money, etc.), respondents may adjust their answers (e.g., by reporting higher or lower incomes than are actually true). They may also not answer such questions at all. You have to determine whether these questions are necessary to attain the research objective. If they are not, omit them from the survey. What comprises a sensitive question is subjective and differs across cultures, age categories, and other variables. Use your common sense and, if necessary, expert judgment to decide whether the questions are appropriate.
In addition, make sure you pretest the survey and ask the pretest participants whether they were reluctant to provide certain answers. In some cases, reformulating the question can avoid this issue. For example, instead of directly asking about respondents' disposable income, you can provide various answering categories, which might increase their willingness to answer this question.

A survey's length is another issue that may make respondents reluctant to answer questions. Many people are willing to provide answers to a short questionnaire but are reluctant to even start a lengthy survey. As the length of the survey increases, respondents' willingness and ability to complete it decrease.

Providing specifics on what is "too long" in the world of surveys is hard. The mode of surveying and the importance of the survey to the respondent help determine a maximum. Versta Research (http://www.verstaresearch.com/blog/rules-of-thumb-for-survey-length/) suggests about 20 minutes for telephone and Web-based surveys (5 minutes when respondents are contacted on mobile phones). Ten minutes appears to be the practical maximum for social media-based surveys. Personal interviews and mail surveys can be much longer, depending on the context. For example, surveys comprising personal interviews on topics that respondents find important could take up to 2 hours. However, when topics are less important, mail surveys and personal interviews need to be considerably shorter.

A final issue that has a significant bearing on respondents' willingness to answer specific questions is the question type: open-ended or closed-ended. Open-ended questions provide little or no structure for respondents' answers. Generally, the researcher asks a question and the respondent writes down his or her answer in a box. Open-ended questions (also called verbatim items in the market research world) are flexible and allow explanation, but the drawback is that respondents may feel reluctant to provide such detailed information. In addition, their interpretation requires substantial coding. This coding issue arises when respondents provide many different answers (such as "sometimes," "maybe," "occasionally," or "once in a while") and the researcher has to divide these into categories (such as very infrequently, infrequently, frequently, and very frequently) for further statistical analysis. This coding is very time-consuming, subjective, and difficult. Closed-ended questions, on the other hand, provide a few categories from which the respondent can choose by ticking an appropriate answer. When open-ended and closed-ended questions are compared, open-ended questions usually have much lower response rates.
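The coding step described above can be partially automated. Here is a minimal sketch, assuming an invented mapping from verbatim answers to frequency categories (real coding schemes are developed and cross-checked by the researchers):

```python
# Minimal sketch: coding open-ended (verbatim) frequency answers into
# closed categories. The mapping below is an invented example.
CODING_SCHEME = {
    "never": "very infrequently",
    "once in a while": "infrequently",
    "occasionally": "infrequently",
    "sometimes": "frequently",
    "often": "very frequently",
}

def code_verbatim(answer):
    """Return the coded category, or flag the answer for manual review."""
    return CODING_SCHEME.get(answer.strip().lower(), "NEEDS MANUAL CODING")

verbatims = ["Sometimes", "maybe", "Occasionally", "once in a while"]
print([code_verbatim(v) for v in verbatims])
# ['frequently', 'NEEDS MANUAL CODING', 'infrequently', 'infrequently']
```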
4.4.2.4 Set the Scale

When deciding on scales, two separate decisions need to be made. First, you need to decide on the type of scale. Second, you need to set the properties of the scale you chose.

Type of Scale

Marketing research and practice have provided a variety of scale types. In the following, we discuss the most important (and useful) ones:

– Likert scales,
– Semantic differential scales, and
– Rank order scales.

The type of scaling where all categories are named and respondents indicate the degree to which they (dis)agree is called a Likert scale. Likert scales are used to establish the degree of agreement with a specific statement. Such a statement could be "I am satisfied with my mortgage provider." The degree of agreement is usually set by scale endings ranging from completely disagree to completely agree. Likert scales are used very frequently and are relatively easy to administer. A caveat with Likert scales is that the statement should be phrased in such a way that complete agreement, or disagreement, is possible. For example, "completely agreeing" with the statement "I have never received a spam email" is almost impossible. If the statement is too positive or negative, it is unlikely that the endings of the scale will be used, thereby reducing the number of answer categories actually used.

The semantic differential scale uses an opposing pair of words (young/old, masculine/feminine), and respondents then indicate to what extent they agree with one or the other word. These scales are widely used in market research. As with Likert scales, 5 or 7 answer categories are commonly used (see the next section regarding the number of answer categories you should use). We provide an example of the semantic differential scale in Fig. 4.7, in which respondents can mark their perception of how important online evaluations are for those having to decide on a hotel. The advantages of semantic differential scales include their ability to profile respondents or objects (such as products or companies) in a simple way.

Fig. 4.7 Example of a 7-point semantic differential scale

Rank order scales are a unique type of scale, as they force respondents to compare alternatives. In its basic form, a rank order scale (see Fig. 4.8 for an example) asks the respondent to indicate which alternative they rank highest, which alternative they rank second-highest, etc. The respondents therefore need to balance their answers instead of merely stating that everything is important.

Fig. 4.8 Example of a rank order scale

In a more complicated form, rank order scales ask the respondent to allocate a certain total number of points (often 100) to a number of alternatives. This is called the constant sum scale (see the short sketch below for how such responses can be checked). Constant sum scales work well when a small number of answer categories is used (typically up to 5). Generally, respondents find constant sum scales with 6 or 7 answer categories somewhat challenging, while constant sum scales with 8 or more categories are very difficult to answer; the latter are thus best avoided.

In addition to these types of scaling, there are other types, such as graphic rating scales, which use pictures to indicate categories, and the MaxDiff scale, in which respondents indicate the most and least applicable items. We introduce the MaxDiff scale in the Web Appendix (→ Chap. 4).
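As referenced in the constant sum discussion above, here is a minimal sketch of validating such a response; the feature names and point allocations are invented:

```python
# Minimal sketch: validating a constant sum response, where a respondent
# allocates exactly 100 points across alternatives. Names are invented.
allocation = {"Camera": 40, "Music player": 10, "App store": 30, "Web browser": 20}

total = sum(allocation.values())
if total != 100:
    print(f"Invalid response: points sum to {total}, not 100.")
else:
    # Ranking implied by the allocation, from most to least important.
    ranking = sorted(allocation, key=allocation.get, reverse=True)
    print("Implied ranking:", ranking)
```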
Properties of the Scale

After deciding on the type of scale, it is time to set the properties of the scale. The properties that need to be set include the following:

– The number of answer categories,
– Whether or not to use an undecided option, and
– Whether or not to use a balanced scale.

Number of answer categories: When using closed-ended questions, the number of answer categories needs to be determined. In its simplest form, a survey could use just two answer categories (yes/no). Multiple categories (such as "completely disagree," "disagree," "neutral," "agree," "completely agree") are used more frequently to allow for more nuance. In determining how many scale categories to use, one has to balance having more variation in responses against asking too much of the respondents. There is some evidence that 7-point scales are better than 5-point scales in terms of obtaining more variation in responses. A study analyzing 259 academic marketing studies suggests that 5-point (24.2%) and 7-point (43.9%) scales are by far the most common (Peterson 1997). Ten-point scales are most commonly used in practical market research. However, scales with a large number of answer categories often confuse respondents, because the wording differences between the scale points become trivial. For example, the differences between "tend to agree" and "somewhat agree" are subtle, and respondents may not pick them up. Moreover, scales with many answer categories increase a survey's length.

You could, of course, also use 4-point or 6-point scales (by deleting the neutral choice). If you wish to force the respondents to be positive or negative, you should use such a forced-choice (Likert) scale. This could bias the answers, thereby leading to validity issues. By providing a "neutral" category choice, namely the free-choice (Likert) scale, the respondents are not forced to give a positive or negative answer. Many respondents feel more comfortable about participating in a survey when offered free-choice scales. Whatever you decide, provide descriptions of all the answer categories (such as "agree" or "neutral") to help the respondents answer.

When designing answer categories, use scale categories that are exclusive, so that answers do not overlap (e.g., age categories 0–4, 5–10, etc.). A related question is how to choose the spacing of the categories. For example, should we divide US household income into the categories $0–$9,999, $10,000–$19,999, and $20,000 and higher, or use some other way of setting categories? One suggestion is to use narrower categories if the respondent can easily recall the variable. A second suggestion is to space the categories such that we, as researchers, expect an approximately equal number of observations per category. In the example above, we may find that most households have an income of $20,000 or higher and that the categories $0–$9,999 and $10,000–$19,999 are used infrequently. It is better to choose categories in which equal percentages are expected, such as $0–$24,999, $25,000–$44,999, $45,000–$69,999, $70,000–$109,999, and $110,000 and higher. Although the range of each category differs, we can reasonably expect each category to hold about 20% of the responses if we randomly sample US households (see http://en.wikipedia.org/wiki/Household_income_in_the_United_States#Household_income_over_time). A sketch of deriving such equal-percentage categories follows at the end of this subsection.

Sometimes a scale with an infinite number of answer categories is used (i.e., a continuous line without discrete answer categories). This seems very precise, because the respondent can tick any point along the scale. In practice, however, such scales are imprecise, because respondents do not, for example, know where along the line "untrustworthy" falls. Another drawback of an infinite number of answer categories is that entering and analyzing the responses is time-consuming, because this involves measuring the distance from one side of the scale to the answer that the respondent ticked. We provide two examples of such a semantic differential scale in the Web Appendix (→ Chap. 4).
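As referenced above, here is a minimal sketch of spacing categories so that each is expected to hold a roughly equal percentage of respondents; the income values are invented, and the quintile cut points are computed with NumPy:

```python
# Minimal sketch: deriving income categories so that each category is
# expected to hold roughly the same share of respondents (here, quintiles).
# The income figures are invented for illustration.
import numpy as np

incomes = np.array([12_000, 18_500, 27_000, 34_000, 42_000,
                    55_000, 63_000, 78_000, 95_000, 140_000])

# Cut points at the 20th, 40th, 60th, and 80th percentiles.
cut_points = np.percentile(incomes, [20, 40, 60, 80])
print(cut_points)  # category boundaries; ~20% of respondents fall in each bin
```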
Undecided option: A related choice is whether to include an "undecided" category in the scaling. Using an undecided category allows the researcher to distinguish between those respondents who have a clear opinion and those who do not. Moreover, it may make answering the survey slightly easier for the respondents. While these are good reasons for including this category, the drawback is that there will then be missing observations. If a large number of respondents choose not to answer, this will substantially reduce the number of surveys that can be used for analysis. Generally, when designing surveys, you should include an "undecided" or "don't know" category only for questions whose answer the respondent might genuinely not know, for example, when factual questions are asked. For other types of questions (such as those on attitudes or preferences), the "undecided" or "don't know" category should not be included, as researchers are interested in respondents' perceptions regardless of their knowledge of the subject matter.

Balanced scale: A balanced scale has an equal number of positive and negative scale categories. For example, in a 5-point Likert scale, we may have two negative categories (completely disagree and disagree), a neutral option, and two positive categories (agree and completely agree). Besides this, the wording in a balanced scale should reflect equal distances between the scale items. If this is the case, we can claim that the scale is equidistant, which is a requirement for its use in many analysis techniques. Therefore, we strongly recommend using a balanced scale. A caveat of balanced scales is that many constructs cannot take negative values. For example, one can have some trust in a company or very little trust, but negative trust is highly unlikely. If a scale item cannot be negative, you will have to resort to an unbalanced scale, in which the endpoints of the scale are unlikely to be exact opposites.

Table 4.1 summarizes the key choices we have to make when designing surveys.

Table 4.1 A summary of some of the key choices when designing surveys

– Can all the respondents answer the questions asked? Ensure that all potential respondents can answer all the items. If they cannot, ask screener questions to direct them. If the respondents cannot answer questions, they should be able to skip them.
– Can the respondents construct/recall answers? If the answer is no, you should use other methods to obtain information (e.g., secondary data or observations). Moreover, you should ask the respondents about major aspects before zooming in on details to help them recall answers.
– Do the respondents want to answer each question? If the questions concern "sensitive" subjects, check whether they can be omitted. If not, stress the confidentiality of the answers and mention why these answers are useful for the researcher, the respondent, or society before introducing the questions.
– Should you use open-ended or closed-ended questions? Keep the subsequent coding in mind. If easy coding is possible beforehand, design a set of exhaustive answer categories. Further, remember that open-ended scale items have a much lower response rate than closed-ended items.
– What scaling categories should you use (closed-ended questions only)? Use Likert scales, semantic differential scales, or rank order scales.
– Should you use a forced-choice or free-choice scale? Respondents feel most comfortable with the free-choice scale. However, if an even number of scale categories is used, the forced-choice scale is most common, since the scaling can otherwise become uneven.
– Should you include an "undecided"/"don't know" category? Only for questions whose answer the respondent might genuinely not know. If included, place this category at the end of the scale.
– Should you use a balanced scale? Always use a balanced scale. There should be an equal number of positive and negative wordings in the scale items, and the words at the ends of the scale should be exact opposites.

Do's and Don'ts in Designing Survey Questions

When designing survey questions, there are a number of do's and don'ts, which we discuss in the following.

Always use simple words, and avoid jargon or slang if not all the respondents are likely to understand it. There is good evidence that short sentences work better than longer sentences because they are easier to understand (Holbrook et al. 2006). Thus, try to keep survey questions short and simple. Moreover, avoid using the words not or no where possible. This is particularly important when other words in the same sentence are negative, such as "unable" or "unhelpful," because sentences with two negatives (called a double negative) are hard to understand. For example, a question such as "I do not use the email function in my iPhone because it is unintuitive" is quite hard to follow.

Also avoid the use of vague quantifiers such as "frequent" or "occasionally" (Dillman 2007). Vague quantifiers make it difficult for respondents to answer questions (what exactly is meant by "occasionally"?). They also make comparing responses difficult. After all, what one person considers "occasionally" may be "frequent" for another. Instead, it is better to use frames that are precise ("once a week").

Never suggest an answer, for example, by asking "Company X has done very well, how do you rate it?" In addition, avoid double-barreled questions at all costs; these are questions in which a respondent can agree with one part of the question but not the other, or cannot answer without accepting a particular assumption. Examples of double-barreled questions include: "Is the sales personnel polite and responsive?" and "In general, are you satisfied with the products and services of company X?"

Lastly, when you run simultaneous surveys in different countries, make use of professional translators, as translation is a complex process. Functionally translating one language into another is quite easy, and many websites, such as Google Translate (http://translate.google.com/), can do this. However, translating surveys requires preserving the conceptual equivalence of whole sentences and paragraphs; current software applications and websites cannot ensure this. In addition, cultural differences may require changes to the entire instrument format or procedure. A technique to establish conceptual equivalence across languages is back-translation. Back-translation requires translating a survey instrument into another language, after which another translator translates the instrument back into the original language. After the back-translation, the original and back-translated instruments are compared and points of divergence are noted. The translation is then corrected to more accurately reflect the intent of the wording in the original language.

In Fig. 4.9, we provide a few examples of poorly designed survey questions, followed by better-worded equivalents.
These questions relate to how satisfied iPhone users are with their phone's performance, reliability, and after-sales service.

Fig. 4.9 Examples of good and bad practice in designing survey questions

Poor: The question is double-barreled; it asks about performance and reliability at the same time.
"I am satisfied with the performance and reliability of my Apple iPhone" (strongly disagree / somewhat disagree / neutral / somewhat agree / completely agree)

Better: Separate the question into two questions.
"I am satisfied with the performance of my Apple iPhone"
"I am satisfied with the reliability of my Apple iPhone"
(each rated strongly disagree / somewhat disagree / neutral / somewhat agree / completely agree)

Poor: The question cannot be answered by those who have not experienced after-sales service.
"I am satisfied with the after-sales service of my Apple iPhone" (strongly disagree / somewhat disagree / neutral / somewhat agree / completely agree)

Better: The question uses branching.
1.1: "Have you used Apple's after-sales service for your iPhone?" (no/yes) If you answered yes, please proceed to question 1.2; otherwise, skip question 1.2 and proceed to question 1.3.
1.2: "I am satisfied with the after-sales service that Apple provided for my iPhone" (strongly disagree / somewhat disagree / neutral / somewhat agree / completely agree)

Poor: The question design will likely cause inflated expectations ("everything is very important").
"Which of the following iPhone features do you find most important?" Camera / Music player / App store / Web browser / Mail client, each rated from not at all important to very important.

Better: Use a rank order scale.
"Rank the following iPhone features from most to least important. Begin by picking out the feature you think is most important and assign it the number 1. Then find the second most important and assign it the number 2, etc." Camera ___ / Music player ___ / App store ___ / Web browser ___ / Mail client ___

4.4.2.5 Design the Questionnaire

After determining the individual questions, you have to integrate these, together with other elements, to create the questionnaire. This involves the following elements:

– Designing the starting pages of the questionnaire,
– Choosing the order of the questions, and
– Designing the layout and format.

Starting pages of the questionnaire: At the beginning of each questionnaire, the importance and goal of the survey are usually described, it is stressed that the results will be treated confidentially, and it is mentioned what they will be used for. This is usually followed by an example question (and answer) to demonstrate how the survey should be filled out. If questions relate to a specific issue, moment, or transaction, you should indicate this clearly at the very beginning, for example: "Please provide answers to the following questions, keeping the purchase of product X in mind." If applicable, you should also point out that your survey is conducted in collaboration with a university, a recognized research institute, or a known charity, as this generally increases respondents' willingness to participate. Moreover, do not forget to provide a name and contact details for those participants who have questions, or in case technical problems arise. Consider including a picture of the research team, as this increases response rates.
Lastly, you should thank the respondents for their time and describe how the questionnaire should be returned (for mail surveys).

Order of the questions: Choosing the appropriate order of the questions is crucial, because it determines the questionnaire's logical flow and therefore contributes to high response rates. The order of questions is usually as follows:

1. Screener or classification questions come first. These questions determine what parts of the survey a respondent should fill out.
2. Next, insert the key variables of the study. This includes the dependent variables, followed by the independent variables.
3. Use a funnel approach. That is, ask questions that are more general first and then move on to details. This makes answering the questions easier, as the order helps the respondents recall. Make sure that sensitive questions are put at the very end of this section.
4. Demographics are placed last if they are not part of the screening questions. If you ask demographic questions, always check whether they are relevant to the research goal. In addition, check whether these demographics are likely to lead to non-response. Asking about demographics, like income, educational attainment, or health, may result in a substantial number of respondents refusing to answer. If such sensitive demographics are not necessary, omit them from the survey. Note that in certain countries, asking about a respondent's demographic characteristics means you have to abide by specific laws, such as the Data Protection Act 1998 in the UK.

If your questionnaire contains several sections (e.g., in the first section you ask about the respondents' buying attitudes and in the following section about their satisfaction with the company's services), you should make the changing context clear to the respondents.

Layout and format of the survey: The layout of both mail and Web-based surveys should be concise and should conserve space where possible. In Box 4.4, we discuss further design issues that should be considered when planning surveys.

Box 4.4 Design issues when planning surveys
Avoid using small and colored fonts, which reduce readability. For mail-based surveys, booklets work well, since postage is cheaper if surveys fit in standard envelopes. If this is not possible, single-sided stapled paper can also work. For Web-based surveys, it is good to have a counter letting the respondents know what percentage of the questions they have already filled out. This gives them some indication of how much time they still need to spend on completing the survey. Make sure the layout is simple and follows older and accepted Web standards. This allows respondents with older and/or non-standard browsers or computers to fill out the survey. In addition, take into consideration that many people access Web surveys through mobile phones and tablet computers. Using older and accepted Web standards will cause these respondents a minimum number of technical problems.

4.4.2.6 Pretest the Questionnaire

We have already mentioned the importance of pretesting the survey several times. Before any survey is sent out, you should pretest the questionnaire to enhance its clarity and to ensure the client's acceptance of the survey. Once the questionnaire is in the field, there is no way back! You can pretest questionnaires in two ways.
In its simplest form, you can ask a few experts (say, 3–6) to read the survey, fill it out, and comment on it. Many Web-based survey tools allow researchers to create a pretest version of their survey, in which there is a text box for comments behind every question. Experienced market researchers are able to spot most issues right away and should be employed to pretest surveys. If you aim for a very high-quality survey, you should also send out a set of preliminary (but proofread) questionnaires to a small sample of 50–100 respondents. The responses (or lack thereof) usually indicate possible problems, and the preliminary data may be analyzed to determine the potential results. Never skip pretesting due to time issues, since you are likely to run into problems later!

Box 4.5 Dillman's (2007) recommendations on how to increase response rates
It is becoming increasingly difficult to get people to fill out surveys. This may be due to over-surveying, dishonest firms that disguise sales as research, and a lack of time. In his book, Mail and Internet Surveys, Dillman (2007) discusses four steps to increase response rates:
1. Send out a pre-notice letter indicating the importance of the study and announcing that a survey will be sent out shortly.
2. Send out the survey with a sponsor letter, again indicating the importance of the study.
3. Follow up after 3–4 weeks with both a thank-you note (for those who responded) and a new survey plus a reminder (for those who did not respond).
4. Call or email those who have still not responded, and send out a thank-you note to those who replied in the second round.
Further, Dillman (2007) points out that names and addresses should be error free. Furthermore, he recommends using a respondent-friendly questionnaire in the form of a booklet, providing return envelopes, and personalizing correspondence.

An increasingly important aspect of survey research is to induce potential respondents to participate. In addition to Dillman's (2007) recommendations on how to increase response rates (Box 4.5), incentives are increasingly used. A simple example of such an incentive is to provide potential respondents with a cash reward. In the US, one-dollar bills are often used for this purpose. Respondents who participate in (online) research panels often receive points that can be exchanged for products and services. For example, Research Now, a market research company, provides its Canadian panel members with AirMiles that can be exchanged for, amongst others, free flights. A special type of incentive is to indicate that, for every returned survey, money will be donated to a charity. ESOMAR, the world organization for market and social research (see Chap. 10), suggests that incentives for interviews or surveys should "be kept to a minimum level proportionate to the amount of their time involved, and should not be more than the normal hourly fee charged by that person for their professional consultancy or advice."

Another incentive is to give the participants a chance to win a product or service. For example, you could randomly give away iPods or holidays to a number of participants. To have a chance to win, the participants need to disclose their name and address so that they can be reached. While this is not part of the research itself, some respondents may feel uncomfortable providing contact details, which could potentially reduce response rates.
Finally, a type of incentive that may help participation (particularly in professional settings) is reporting the findings to the participants. This can be done by providing a general report of the study and its findings, or by providing a customized report detailing the participant's responses and comparing them with all the other responses (e.g., www.selfsurvey.com). Obviously, anonymity needs to be assured so that the participants cannot compare their answers with those of other individual respondents.

4.5 Basic Qualitative Research

Qualitative research is mostly used to gain an understanding of why certain things happen. It can be used in an exploratory context to define problems in more detail, or to develop hypotheses to be tested in subsequent research. Qualitative research also allows researchers to learn about consumers' perspectives and vocabulary, especially when the context (e.g., the industry) is unknown to them. As such, qualitative research offers important guidance when little is known about consumers' attitudes and perceptions or about the market.

Qualitative research leads to the collection of qualitative data, as discussed in Chap. 3. One can collect qualitative data by explicitly informing the participants that you are doing research (directly observed qualitative data), or one can simply observe the participants' behavior without them being explicitly aware of the research goals (indirectly observed qualitative data). There are ethical issues associated with conducting research when the participants are not aware of the research purpose. Always check the regulations regarding what is and what is not allowed in your context. It is always advisable to brief the participants on their role and the goal of the research after the data has been collected.

The two key forms of directly observed qualitative data are depth interviews and focus groups. Together, they account for most of the qualitative market research conducted. First, we will discuss depth interviews, which are, as the term suggests, interviews conducted with one participant at a time, allowing for a high level of personal interaction between the interviewer and the respondent. Next, we will discuss projective techniques, a frequently used type of testing procedure in depth interviews. Lastly, we will introduce focus group discussions, which are conducted with multiple participants.

4.5.1 Depth Interviews

Depth interviews are qualitative conversations with participants about a specific topic. These participants are often consumers, but they may also be the decision-makers in a market research study, who are interviewed to gain an understanding of their clients' needs. They may also be government or company representatives. Interviews vary in their level of structure. In their simplest form, interviews are unstructured and the participants talk about a topic in general. This works well if you want to obtain insight into a topic, or as an initial step in a research process. Interviews can also be fully structured, meaning all questions and possible answer categories are decided in advance; this leads to the collection of quantitative data. However, most depth interviews for gathering qualitative data are semi-structured and contain a series of questions that need to be addressed, but have no specific format regarding what the answers should look like. The person interviewed can make additional remarks, or discuss somewhat related issues, but is not allowed to wander off too far.
In these types of interviews, the interviewer often asks questions like "that's interesting, could you explain?" or "how come...?" to probe further into the issue. In highly structured interviews, the interviewer has a fixed set of questions and often a fixed amount of time for each person's response. The goal of structured interviews is to maximize the comparability of the answers. Consequently, the set-up of the questions and the structure of the answers need to be similar.

Depth interviews are unique in that they allow for probing on a one-to-one basis, fostering interaction between the interviewer and interviewee. Depth interviews also work well when those being interviewed have very little time and when they do not want the information to be shared with the other study participants. This is, for example, likely to be the case when you discuss marketing strategy decisions with CEOs. The drawbacks of depth interviews include the amount of time the researcher needs to spend on the interview itself and on traveling (if the interview is conducted face-to-face and not via the telephone), as well as on transcribing the interview.

When conducting depth interviews, a set format is usually followed. First, the interview details are discussed, such as confidentiality issues, the topic of the interview, the structure, and the duration. Moreover, the interviewer should disclose whether the interview is being recorded and inform the interviewee that there is no right or wrong answer, just opinions on the subject. The interviewer should also try to be open and keep eye contact with the interviewee. Interviewers can end an interview by informing their respondents that they have reached the last question and thanking them for their time.

Interviews are often used to investigate means-end issues, in which researchers try to understand what ends consumers aim to satisfy and which means (consumption) they use to do so. A means-end approach involves first determining a product's attributes. These are the functional product features, such as the speed a car can reach or its acceleration. Subsequently, researchers look at the functional consequences that follow from these attributes; this could be driving fast. The psychosocial consequences, or personal benefits, are derived from the functional consequences and, in this example, could include an enhanced status, or being regarded as successful. Finally, the psychosocial consequences are linked to people's personal values or life goals, such as a desire for success or acceptance. Analyzing and identifying the relationships between these steps is called laddering.

4.5.2 Projective Techniques

Projective techniques describe a special type of testing procedure, usually used as part of depth interviews. They work by providing participants with a stimulus and gauging their responses. Although participants in projective techniques know that they are participating in a market research study, they may not be aware of the research's specific purpose. The stimuli provided in projective techniques are ambiguous and require a response from the participants. A key form of projective technique is sentence completion, for example:

An iPhone user is someone who: ……………………………
The Apple brand makes me think of: ……………………………
iPhones are most liked by: ……………………………

In this example, the respondents are asked to express their feelings, ideas, and opinions in a free format.
Projective techniques’ advantage is that they allow for responses when people are unlikely to respond if they were to know the exact purpose of the study. Thus, projective techniques can overcome self-censoring and allow expression and fantasy. In addition, they can change a participant’s perspective. Think of the previous example. If the participants are users of the Apple iPhone, the sentence completion example asks how they think other people regard them, not what they think of the Apple iPhone. A drawback is that projective techniques require the interpretation and coding of responses, which can be difficult. 4.5.3 Focus Groups Focus groups are interviews conducted among a number of respondents at the same time and led by a moderator. This moderator leads the interview, structures it, and [email protected] 80 4 Getting Data often plays a central role in transcribing the interview later. Focus groups are usually semi or highly structured. The group usually comprises between 4 and 6 people to allow for interaction between the participants and to ensure that all the participants have a say. The duration of a focus group interview varies, but is often between 30 and 90 minutes for focus groups of company employees and between 60 to 120 minutes for consumers. When focus groups are held with company employees, moderators usually travel to the company and conducts their focus group in a room. When consumers are involved, moderators often travel to a market research company, or hotel, where a conference room is used for the focus group. Market research companies often have special conference rooms with equipment like one-way mirrors, built-in microphones, and video recording devices. How are focus groups structured? They usually start with the moderator introducing the topic and discussing the background. Everyone is introduced to establish rapport. Subsequently, the moderator tries to get the members of the focus group to speak to one another, instead of asking the moderator for confirmation. Once the focus group members start discussing topics with one another, the moderator tries to stay in the background, merely ensuring that the discussions stay on-topic. Afterwards, the participants are briefed and the discussions are transcribed for further analysis. Table 4.2 Comparing focus groups and depth interviews Focus groups Depth interviews Group Group interaction is present. This may There is no group interaction. interactions stimulate new thoughts from Therefore, stimulation for new ideas respondents. from the respondents comes from the interviewer. Group/peer Group pressure and stimulation may In the absence of group pressure, the pressure clarify and challenge thinking. respondents’ thinking is not challenged. Peer pressure and role playing. With one respondent, role playing is minimized and there is no peer pressure. Respondent Respondents compete with one another Individuals are alone with the competition for time to talk. There is less time to interviewer and can express their obtain in-depth details from each thoughts in a non-competitive participant. environment. There is more time to obtain detailed information. Peer Responses in a group may be biased With one respondent, there is no influence by other group members’ opinions. potential of other respondents influencing this person. Subject If the subject is sensitive, respondents If the subject is sensitive, respondents sensitivity may be hesitant to talk freely in the may be more likely to talk. presence of other people. 
– Stimuli. Focus groups: The volume of stimulus materials that can be used is somewhat limited. Depth interviews: A fairly large amount of stimulus material can be used.
– Interviewer scheduling. Focus groups: It may be difficult to assemble 8 or 10 respondents if they are a difficult type to recruit (such as busy executives). Depth interviews: Individual interviews are easier to schedule.

Focus groups have distinct advantages: they are relatively cheap compared to depth interviews, and they work well for issues that are socially important or that require spontaneity. They are also useful for developing new ideas. On the downside, focus groups do not offer the same ability to probe as interviews do, and they run a greater risk of going off-topic. Moreover, a few focus group members may dominate the discussion and, especially in larger focus groups, "voting" behavior may occur, hindering real discussions and the development of new ideas. Table 4.2 summarizes the key differences between focus groups and depth interviews.

4.6 Collecting Primary Data Through Experimental Research

In Chap. 2, we discussed causal research and briefly introduced experiments as a means of conducting research. The goal of designing experiments is to control for as many influencing factors as possible in an effort to avoid unintended influences. Experiments are typically conducted by manipulating one variable, or a few, at a time. For example, we can change the price of a product, the type of product, or the package size to determine whether these changes affect important outcomes such as attitudes, satisfaction, or intentions. Often, simple field observations cannot establish these relationships, as inferring causality can be problematic. Imagine a company that wants to introduce a new type of soft drink aimed at health-conscious consumers. If the product were to fail, the managers would probably conclude that the consumers did not like the product. However, many (often unobserved) variables, such as price cuts by competitors, changing health concerns, and a lack of availability, can also influence a new product's success.

4.6.1 Principles of Experimental Research

An experiment deliberately imposes a treatment on a group of subjects in the interest of observing the response. In this way, experiments attempt to isolate how one particular change affects an outcome. The outcome(s) is (are) the dependent variable(s), and the independent variable(s) (also referred to as factors) are used to explain the outcomes. To examine the influence of the independent variable(s) on the dependent variable(s), treatments or stimuli are administered to the participants; these treatments manipulate the participants by putting them into different situations. A simple form of treatment could be an advertisement with and without humor. In this case, humor is the independent variable, which can take two levels (i.e., with or without humor). If we manipulate, for example, the price between low, medium, and high, we have three levels. When selecting independent variables, we typically include those that marketers care about and that are related to the marketing and design of products and services. Care should be taken not to include too many of these variables in order to keep the experiment manageable. An experiment that includes four independent variables, each of which has three levels, and which includes every possible combination of factor levels (a full factorial design) already requires 3⁴ = 81 different treatments.
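To see how quickly full factorial designs grow, here is a minimal sketch that enumerates every treatment combination; the factor names and levels are invented for illustration:

```python
# Minimal sketch: enumerating a full factorial experimental design.
# Factor names and levels are invented for illustration.
from itertools import product

factors = {
    "price": ["low", "medium", "high"],
    "package_size": ["small", "medium", "large"],
    "ad_humor": ["none", "mild", "strong"],
    "label": ["plain", "health claim", "eco claim"],
}

# Every possible combination of factor levels: 3 * 3 * 3 * 3 = 81 cells.
design = list(product(*factors.values()))
print(len(design))   # 81
print(design[0])     # ('low', 'small', 'none', 'plain')
```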