Research Methods (ASC415) – Actuarial Science
Summary
This document provides an overview of research methods, including various sampling techniques (probability and non-probability), survey design, data analysis, and reliability/validity concepts related to actuarial science.
Full Transcript
RESEARCH METHODS (ASC415) – ACTUARIAL SCIENCE

WEEK 5 COVERAGE
1. Population and sample
2. Probability sampling
3. Non-probability sampling
4. Survey questionnaires and interviews
5. Reliability and validity of instruments
6. Data analysis and interpretation of results

INTRODUCTION
Statistics is a group of methods used to collect, analyze, present, and interpret data and to make decisions.
Types of statistics:
1. Descriptive statistics
2. Inferential statistics

DESCRIPTIVE AND INFERENTIAL STATISTICS
Descriptive statistics consists of methods for organizing, displaying, and describing data by using tables, graphs, and summary measures.
Inferential statistics consists of methods that use sample results to help make decisions or predictions about a population.

POPULATION VS SAMPLE
Motivation: suppose we want to know
1. the prices of all houses in Chikanda, or
2. the percentage of all voters in a city who will vote for a particular candidate in the upcoming elections.
Population or target population: A population consists of all elements (individuals, items, or objects) whose characteristics are being studied. The target population is the population that is being studied.
Sample: A portion of the population selected for study.

CENSUS AND SAMPLE SURVEY
Census: A census is a survey that includes every member of the population.
Sample survey: A sample survey is a technique of collecting information from a portion of the population.

REPRESENTATIVE AND RANDOM SAMPLE
Representative sample: A sample that represents the characteristics of the population as closely as possible.
Random sample: A sample drawn in such a way that each element of the population has a chance of being selected.

PROBABILITY SAMPLING TECHNIQUES
Types of probability sampling techniques:
1. Simple random sampling
2. Systematic sampling
3. Stratified sampling
4. Cluster sampling

1. SIMPLE RANDOM SAMPLING
In simple random sampling, every element of the population has an equal chance of being selected for the sample. The sample is taken randomly from a sampling frame. Some common sampling frames include the telephone directory, a customer list and a student list.
Example: If we are to select 5 students from this class of 21, we write each of the 21 names on a separate piece of paper. Then we place all 21 slips in a box and mix them thoroughly. Finally, we randomly draw 5 slips from the box. The 5 names drawn give a random sample. On the other hand, if we arrange all 21 names alphabetically and then select the first 5 names on the list, it is a nonrandom sample because the students listed 6th to 21st have no chance of being included in the sample.

2. SYSTEMATIC SAMPLING
In systematic sampling, we divide the population size N by the sample size n to obtain the sampling interval (range) k:
k = N / n
Steps:
1. A sample of 5 is desired from the 21 students in the population. Thus, k = 21/5 = 4.2 ≈ 5.
2. Label the students from 1 to 21 and arrange them in order.
3. Count from 1 to 4 and select the 5th student as a random element. Then count from 6 to 9 and select the 10th student, and so on.
4. All the selected students will form the sample.

3. STRATIFIED SAMPLING
Stratified sampling involves dividing the population into homogeneous strata and then taking a random sample in each stratum:
1. Divide the population into homogeneous strata.
2. Take a random sample in each stratum using simple random sampling or systematic sampling.
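The three probability designs above are simple enough to automate. Below is a minimal Python sketch, not part of the original notes, showing simple random selection, systematic selection with interval k = N/n, and proportional allocation of a stratified sample. The class of 21 students and sample of 5 follow the lecture example; the student labels, the helper function proportional_allocation, and the stratum names are illustrative assumptions, while the stratum sizes anticipate the worked example that follows.

import math
import random

# Sampling frame of N = 21 students, as in the lecture example.
students = [f"student_{i}" for i in range(1, 22)]
N, n = len(students), 5

# 1. Simple random sampling: every student has an equal chance of selection.
simple_random = random.sample(students, k=n)

# 2. Systematic sampling: interval k = N / n, random start, then every k-th student.
k = math.ceil(N / n)             # 21 / 5 = 4.2, taken as 5 in the notes
start = random.randrange(k)      # random starting position in the first interval
systematic = students[start::k]  # may hold n or n - 1 students when N is not a multiple of k

# 3. Stratified sampling: allocate the sample in proportion to stratum size,
#    then draw a simple random (or systematic) sample within each stratum.
def proportional_allocation(stratum_sizes, sample_size):
    total = sum(stratum_sizes.values())
    return {name: round(size / total * sample_size)
            for name, size in stratum_sizes.items()}

# Stratum sizes taken from the UNIMA worked example below (years of study 1 to 5).
strata = {"year1": 2_500, "year2": 2_000, "year3": 3_000, "year4": 2_200, "year5": 500}
allocation = proportional_allocation(strata, 600)
# Plain rounding can leave the total one short of 600; the notes bump one
# stratum (year 3: 177 rather than 176) so the allocations sum exactly to 600.

print(simple_random)
print(systematic)
print(allocation)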
3. STRATIFIED SAMPLING
Example: An actuarial science student wants to assess student opinion on the co-curricular activities at UNIMA. S/he decides to survey only 600 students. To ensure a representative sample of students from all years of study, the student can use a stratified sampling technique.

Year of study   Population (N)   Sample (n)
1               2,500            2500/10200 × 600 = 147
2               2,000            2000/10200 × 600 = 118
3               3,000            3000/10200 × 600 = 177
4               2,200            2200/10200 × 600 = 129
5               500              500/10200 × 600 = 29
TOTAL           10,200           600

In this case, the strata were the 5 years of study. The student then selected a sample within each stratum; the students in each stratum were selected using simple random or systematic sampling.

4. CLUSTER SAMPLING
Cluster sampling divides the population into groups or clusters. A number of clusters are selected randomly to represent the total population. No units from non-selected clusters are included in the sample, because they are represented by those from the selected clusters. With stratification, we sample from each of the subgroups, but in cluster sampling we sample from the selected subgroups only. Cluster sampling is most applicable when the sampling frame is incomplete.
Example: The most common cluster used in research is a geographical cluster. For example, a researcher wants to survey the academic performance of high school students in State X.
1. Divide the entire population (the population of State X) into different clusters (towns).
2. Select a number of clusters through simple or systematic random sampling.
3. Then, from the selected clusters, either
   i. include all items as samples, or
   ii. select items from each cluster through simple or systematic random sampling.
(A short code sketch of this two-stage procedure appears a little further below.)

NON-PROBABILITY SAMPLING
Types of non-probability sampling:
1. Convenience sampling
2. Consecutive sampling
3. Judgmental sampling
4. Snowball sampling
5. Volunteer sampling
6. Quota sampling

1. CONVENIENCE SAMPLING
Convenience sampling is a non-probability sampling technique where samples are selected from the population only because they are conveniently available to the researcher. Researchers choose these samples simply because they are easy to recruit, without considering whether the sample represents the entire population. Ideally, research should test a sample that represents the population, but in some studies the population is too large to examine in its entirety. This is one reason why researchers rely on convenience sampling, which is the most common non-probability sampling method, because of its speed, cost-effectiveness, and the easy availability of the sample.

2. CONSECUTIVE SAMPLING
This non-probability sampling method is very similar to convenience sampling, with a slight variation. Here, the researcher picks a single person or a group as a sample, conducts research over a period, analyzes the results, and then moves on to another subject or group if needed. The consecutive sampling technique gives the researcher a chance to work with many subjects and to fine-tune the research by collecting results that offer vital insights.
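As flagged in the cluster sampling example above, here is a minimal Python sketch, an illustration rather than part of the notes, of the two-stage procedure: clusters (towns) are drawn at random first, and then either every unit in the chosen clusters is taken or a simple random sample is drawn within each chosen cluster. The town names and cluster sizes are hypothetical.

import random

# Hypothetical sampling frame grouped by cluster: town -> list of students.
clusters = {
    f"town_{t}": [f"town_{t}_student_{i}" for i in range(1, 41)]
    for t in range(1, 11)
}

# Stage 1: randomly select a number of clusters (towns) to represent the population.
selected_towns = random.sample(list(clusters), k=3)

# Stage 2, option (i): include every unit from the selected clusters.
one_stage_sample = [s for town in selected_towns for s in clusters[town]]

# Stage 2, option (ii): draw a simple random sample within each selected cluster.
two_stage_sample = [
    s for town in selected_towns for s in random.sample(clusters[town], k=10)
]

print(len(one_stage_sample), len(two_stage_sample))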
3. JUDGMENTAL OR PURPOSIVE SAMPLING
In the judgmental sampling method, researchers select the samples based purely on the researcher's knowledge and credibility. In other words, researchers choose only those people who they deem fit to participate in the research study. Judgmental sampling is not a scientific method of sampling, and the downside of this technique is that the preconceived notions of a researcher can influence the results. Thus, this technique involves a high amount of ambiguity.

4. SNOWBALL SAMPLING
Snowball sampling helps researchers find a sample when suitable subjects are difficult to locate. Researchers use this technique when the sample size is small and not easily available. This sampling system works like a referral program: once the researcher finds suitable subjects, he/she asks them for assistance in seeking similar subjects so as to build a sample of reasonable size.

5. VOLUNTEER SAMPLING
In this method, the respondents are all volunteers. Generally, volunteers must be screened so as to obtain a set of characteristics suitable for the purposes of the survey (e.g. individuals with a particular disease). This method can be subject to large selection biases, but is sometimes necessary. Another example of volunteer sampling is callers to a radio or television show, when an issue is discussed and listeners are invited to call in to express their opinions. Only the people who care strongly enough about the subject in one way or another tend to respond; the silent majority does not typically respond, resulting in a large selection bias. Volunteer sampling is often used to select individuals for focus groups or in-depth interviews (i.e. for qualitative testing, where no attempt is made to generalize to the whole population).

6. QUOTA SAMPLING
Sampling is done until a specific number of units (quotas) for various subpopulations has been selected. The quotas may be based on population proportions. For example, if there are 100 men and 100 women in the population and a sample of 20 is to be drawn, 10 men and 10 women may be interviewed. Quota sampling can be considered preferable to other forms of non-probability sampling (e.g. judgment sampling) because it forces the inclusion of members of different subpopulations. Quota sampling is somewhat similar to stratified sampling, which is probability sampling, in that similar units are grouped together. However, it differs in how the units are selected: in probability sampling the units are selected randomly, while in quota sampling a non-random method is used, usually leaving it up to the interviewer to decide who is sampled.

SURVEY QUESTIONNAIRES AND INTERVIEWS

SURVEY
A survey is the process of asking a group of people a series of questions to gather information and draw conclusions. Surveys are done for various reasons, for example to learn voters' opinions of candidates, or to study the potential market for a product before production. A survey can be conducted in many ways:
1. Over the telephone
2. Via mail
3. In person

SIMILARITIES AND DIFFERENCES BETWEEN QUESTIONNAIRE AND INTERVIEW
A questionnaire is a written set of questions that respondents answer independently, while an interview involves a direct conversation between the interviewer and the respondent. Read more on the differences and similarities.

CONSIDERATIONS WHEN DEVELOPING A QUESTIONNAIRE
Classes of people: the questionnaire is the main instrument of the survey and must be prepared keeping in mind the requirements of five classes of people.
A few indications of these needs are given below.
1. The client
2. The interviewer
3. The respondent
4. The coder
5. The analyst

1. THE CLIENT
Does the questionnaire ask about all the topics agreed by the surveyor and the client? Does it search out sufficient detail in each topic?

2. THE INTERVIEWER
Is it clear in what form answers are to be recorded? Is the layout as simple as possible? Are "skip" instructions clear and kept to a minimum? Is the questionnaire of reasonable length?

3. THE RESPONDENT
Will all the concepts implied by the questions be clearly understood by the respondent? Are questions written in the language normally used by the respondent? Are there any embarrassing or threatening questions?

4. THE CODER
Can all conceivable responses be unambiguously coded? Can the data be computerised by direct reference to the response sheet without the need for a special coding operation?

5. THE ANALYST
Can all important variables be unambiguously identified and studied? Can all important relationships be analysed?

GENERAL POINTS ON QUESTION DESIGN
The interviewer-administered questionnaire has to be like a conversation between the interviewer (I) and the respondent (R). If the questions or answers make the conversation too "unnatural", e.g. embarrassing, boring or threatening, the results will not be of good quality. Writing good questions is an art: there are a few basic principles, but the art has to be learned by experience, and it rests on broad knowledge of the respondent population and of how they and the interviewers will react. One individual (the surveyor) must take final responsibility for the questionnaire. While brainstorming by a team may be useful for sorting out broad topics, good questions are almost impossible to write in committee.

After the logic of the survey has been sorted out, the CLIENT must agree that the questionnaire seems to be suitable. He must be dissuaded from "tinkering", which will "spoil the conversation" or make the planned analyses less effective. The surveyor should make a first assessment of the impact of the questionnaire by playing the role of interviewer (preferably with some awkward specimen respondents) and the role of respondent (preferably with a specimen interviewer he does not know well). The complete document should be tested in a PILOT SURVEY, which should include interviews and coding of completed questionnaires, as well as monitoring procedures to check the effectiveness of questionnaire administration and the quality of the data captured.

ASSESSING EACH QUESTION
For each question, ask:
OBJECTIVES (why?): What will I discover from the answer? Will the answer be meaningful and relevant? Could I leave it out?
CONTENT (who said?): Is including this question a direct consequence of the statement of survey targets?
WORDING (how?): Is the question ambiguous? Are the words simple? (Specifically, will they be simple for the least educated respondent?)
FORM (what type?): Open form, with prompts? Pre-coded, scaled, semantic anchors?
ORDER (where?): What is the logical position for this question in the questionnaire?
RELATIONSHIPS (other questions?): What does this question add to the others? Is it here in its own right?
Or is it here to clarify the answer to another question?

ASSESSING QUESTION CONTENT
1. Ask "why include it?" An unnecessary question is costly: it uses time, which is always at a premium; it has an opportunity cost (you could perhaps have asked something more relevant); and it will reduce efficiency, which can have a "carry-over" effect on more important questions. Respondents tire with each extra question, and interviewers sense irrelevant questions and start to ask them carelessly.
2. Ask "how will its results be used in the analysis?" To which conclusions will the answers contribute? If none, omit! A question may be of primary importance to the survey targets. Alternatively, it may serve to clarify: e.g. age group may serve to subdivide responses, and this may show up a trend in an attitude with respondent age.
3. Then ask "is it possible?" Will the respondent understand (language, education)? Will the respondent know the answer (e.g. memory fails him, or he never had access to the information)? Is the respondent motivated to give truthful answers (e.g. public interest, legal requirement)? Will the respondent reveal the correct answer in the interview circumstances (fear, prestige)? Have we a way of checking the answer given (e.g. rephrase the answer and say it back to the respondent, ask the same or a related question elsewhere in the questionnaire, or use external checks)?

QUESTION WORDING
Is the wording sufficiently specific? For example, try to imagine as many meanings as possible for the question, "How many cousins do you have?" There are special pitfalls for questions about income/occupation and for attitude questions.
Is the language sufficiently simple/natural? Questions should use the simplest possible words which will convey the exact meaning. Phrasing should be as simple and informal as possible. A series of simple questions is better than one complex question. Slightly longer conversational questions may be better than very short (brusque) questions.
Avoid ambiguity. Ambiguities arise in two different ways: (i) an important word may have more than one possible meaning, or (ii) the question may be ambiguously phrased or be "double-barrelled".
Avoid vagueness, e.g. "Often...", "Generally...", "Many...", "What kind of...?", "Why...?", and hypothetical questions like "What would you do if...?" or "Would you like...?"
Do not presume anything about the respondent. The common error here is to imply that the respondent "ought" to have some knowledge about a topic or that he "should" have engaged in some activity.
Do not lead the respondent. Avoid leading words which imply approval or disapproval. Take care with partial lists used to jog memory (especially if included in the first question about the topic). Watch the timing of the survey and its relationship to important political/social events.
Decide whether to personalise the question: "Do you think people ought to behave thus?" vs. "Do you behave thus?"

OPINION/ATTITUDE QUESTIONS
There is no "correct" answer to a question about opinion, and it is often difficult to elicit opinions in a similar form from different members of a sample. Getting people to rate a statement, e.g. "Any fool can design a survey", on a verbally anchored scale (strongly disagree, disagree, indifferent/no opinion, agree, strongly agree)
forces everyone to address the issue as posed in the question, but it means that the results are restricted to the "profile" of opinions for which ratings are solicited.

QUESTION TYPE – OPEN/CLOSED
Closed: pre-coded, limited choice; the respondent codes his or her own answer, or the interviewer codes it.
Open: free choice, detailed answers, perhaps less interviewer effect, but it may be difficult to define codes which cover all the responses.
With open questions, the detail in the answer depends on how talkative the respondent is and on how much is recorded by the enumerator. With pre-coded questions, the interviewer does the coding and has the opportunity to know the complete answer of the respondent. However, there is no chance to discover errors at a later stage, since the respondent's actual answer is never recorded; exhaustive and mutually exclusive lists of options must be prepared; it is difficult to accommodate qualified answers (where the respondent does not really like any of the choices offered); and if the respondent fails to understand, he can still give an apparently sensible reply.

RELIABILITY AND VALIDITY OF INSTRUMENTS

UNDERSTANDING RELIABILITY VS VALIDITY
Reliability and validity are closely related, but they mean different things. A measurement can be reliable without being valid. However, if a measurement is valid, it is usually also reliable.

WHAT IS RELIABILITY?
Reliability refers to how consistently a method measures something. If the same result can be consistently achieved by using the same methods under the same circumstances, the measurement is considered reliable.
Example: You measure the temperature of a liquid sample several times under identical conditions. The thermometer displays the same temperature every time, so the results are reliable.
Example: A doctor uses a symptom questionnaire to diagnose a patient with a long-term medical condition. Several different doctors use the same questionnaire with the same patient but give different diagnoses. This indicates that the questionnaire has low reliability as a measure of the condition.

WHAT IS VALIDITY?
Validity refers to how accurately a method measures what it is intended to measure. If research has high validity, it produces results that correspond to real properties, characteristics, and variations in the physical or social world. High reliability is one indicator that a measurement is valid; if a method is not reliable, it probably isn't valid.
Example: If the thermometer shows different temperatures each time, even though you have carefully controlled conditions to ensure the sample's temperature stays the same, the thermometer is probably malfunctioning, and therefore its measurements are not valid.

VALIDITY
However, reliability on its own is not enough to ensure validity. Even if a test is reliable, it may not accurately reflect the real situation. Validity is harder to assess than reliability, but it is even more important. To obtain useful results, the methods you use to collect data must be valid: the research must be measuring what it claims to measure. This ensures that your discussion of the data and the conclusions you draw are also valid.

HOW ARE RELIABILITY AND VALIDITY ASSESSED?
Reliability can be estimated by comparing different versions of the same measurement. Validity is harder to assess, but it can be estimated by comparing the results to other relevant data or theory. Methods of estimating reliability and validity are usually split up into different types.
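As a rough illustration of the ideas above, and not part of the original notes, the sketch below quantifies consistency in two common ways: the spread of repeated measurements taken under identical conditions, echoing the thermometer example, and the correlation between two administrations of the same questionnaire to the same respondents (test-retest). All numbers are made up for illustration.

import statistics

# Repeated temperature readings of the same liquid under identical conditions
# (the thermometer example): a small spread suggests a reliable instrument.
readings = [21.4, 21.5, 21.4, 21.6, 21.5]
print("mean:", statistics.mean(readings))
print("spread (stdev):", statistics.stdev(readings))

# Test-retest: the same questionnaire given twice to the same five respondents;
# a high correlation between the two sets of scores suggests high reliability.
first_administration = [12, 18, 25, 9, 30]
second_administration = [13, 17, 26, 10, 29]
print("test-retest correlation:",
      statistics.correlation(first_administration, second_administration))  # Python 3.10+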
METHODS OF ESTIMATING RELIABILITY AND VALIDITY
Read more on the following website: https://www.scribbr.com/methodology/reliability-vs-validity/

DATA ANALYSIS AND INTERPRETATION OF RESULTS

DATA ANALYSIS AND INTERPRETATION
Data analysis is the most crucial part of any research. Data analysis summarizes the collected data. It involves the interpretation of data gathered through the use of analytical and logical reasoning to determine patterns, relationships or trends.

DATA ANALYSIS
Data analysis procedures in the quantitative research approach are different from those in the qualitative research approach.
Assignment: Find out how data analysis in quantitative research differs from data analysis in qualitative research.

DATA ANALYSIS CHECKLIST
1. Cleaning data
2. Analyzing data
3. Reporting the results
(A short code sketch at the end of these notes illustrates these steps.)

1. CLEANING DATA
Here are some questions to guide your data cleaning exercise:
1. Did you capture and code your data in the right manner?
2. Do you have all the data, or is some missing?
3. Do you have enough observations?
4. Do you have any outliers? If yes, what is the remedy for the outliers?
5. Does your data have the potential to answer your questions?

2. ANALYZING DATA
Here are some items that can guide your analysis:
1. Visualize your data, e.g. with charts, tables and graphs, to mention a few.
2. Identify patterns, correlations and trends.
3. Test your hypotheses.
4. Let your data tell a story.

3. REPORTING THE RESULTS
Communicate and interpret the results. Conclude and recommend. Your targeted audience must understand your results.

USEFUL TIPS IN DATA ANALYSIS
Use more datasets and samples.
Use an accessible and understandable data analysis tool.
Do not delegate your data analysis.
Clean the data to confirm that they are complete and free from errors.
Analyze the cleaned data.
Understand your results.
Keep in mind who will be reading your results and present them in a way that they will understand.
Share the results with your supervisor often.

DATA INTERPRETATION
The usual step following data analysis is interpretation. Interpretation involves attaching meaning and significance to the analysis, explaining descriptive patterns, and looking for relationships and linkages among descriptive dimensions. Once these processes have been completed, the researcher must report his or her interpretations and conclusions. In your case, you write a dissertation which accounts for all your findings.
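Referring back to the data analysis checklist above, here is a rough pandas sketch of how the cleaning, analyzing and reporting steps might look in practice. The file name survey_responses.csv and the columns premium and year_of_study are hypothetical, chosen only for illustration.

import pandas as pd

df = pd.read_csv("survey_responses.csv")          # hypothetical data file

# 1. Cleaning data: missing values, enough observations, outliers.
print(df.isna().sum())                            # missing values per column
print(len(df))                                    # number of observations
print(df.describe())                              # summary statistics, to spot outliers
low, high = df["premium"].quantile([0.01, 0.99])
df = df[df["premium"].between(low, high)]         # one possible remedy for extreme outliers

# 2. Analyzing data: visualise, look for patterns and correlations, test hypotheses.
df["premium"].hist()                              # quick look at the distribution (needs matplotlib)
print(df.select_dtypes(include="number").corr())  # pairwise correlations between numeric columns

# 3. Reporting the results: summarise for the intended audience.
summary = df.groupby("year_of_study")["premium"].agg(["count", "mean", "std"])
print(summary)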