Research Prelims PDF
Summary
The document presents an overview of the research process, focusing on data analysis and statistical methods. It covers the scales of measurement, qualitative versus quantitative analysis, criteria for assessing the quality of measurement, and the statistical tools used for data treatment, and it closes with guidance on interpreting and presenting research findings.
Full Transcript
RESEARCH PRELIMS (2ND SEM)

PHASES OF THE RESEARCH PROCESS

EMPIRICAL PHASE
- This involves the collection of data and preparation for analysis.
- It is the most time-consuming phase: gaining results, sorting them, and evaluating them.
- The result of this phase may take:
  - Qualitative form
  - Quantitative form
    1. Analogue form
    2. Digital form

CRITERIA FOR ASSESSING QUALITY OF MEASUREMENT
1. Reliability
2. Validity
3. Sensitivity
4. Objectivity

RELIABILITY
Definition – the ability of an instrument to create reproducible results.
- Each time it is used, similar scores should be obtained.
- A questionnaire is reliable if we get the same or similar answers repeatedly.

VALIDITY
Definition – the instrument measures what it is supposed to measure.
- It answers the question, "Is the questionnaire providing answers to the research questions for which it was undertaken?" If so, is it using the appropriate tool?

SENSITIVITY
Definition – the probability of correctly identifying some condition or disease state.
- Sensitivity is one of four related statistics used to describe the accuracy of an instrument for making a dichotomous classification (i.e., a positive or negative test outcome).
- Sensitivity is calculated from the relationship between two types of dichotomous outcomes: (1) the outcome of the test, instrument, or battery of procedures, and (2) the true state of affairs.

OBJECTIVITY
- Everyone follows the same rules and does not interpret the instrument subjectively.

ASSESSMENT OF QUALITATIVE DATA
1. Trustworthiness
2. Credibility
3. Dependability
4. Confirmability
5. Transferability

TRUSTWORTHINESS
- Qualitative research is subjective, so the most important thing is to respond to the concerns of outsiders.
- To what extent can we place confidence in the outcomes of the study? Do the readers believe what we have reported?

CREDIBILITY
- Credibility refers to confidence in the truth of the research findings.
- Credibility will be assessed by using purposive sampling, so that the participants have the same knowledge of and experience with the phenomenon under study.

DEPENDABILITY
- Dependability shows that the findings are consistent and could be repeated.
- To establish the dependability of the findings, constant revisions by the researcher, with the assistance of the adviser, critic, and participants, will be done.

CONFIRMABILITY
- Confirmability is the degree of neutrality: the extent to which the findings of a study are shaped by the respondents and not by the researchers' bias, motivation, or interest.

TRANSFERABILITY
- This refers to the probability that the study findings have meaning to others in similar situations.
- It is supported by member checks: confirming that the experiences of one participant carry the same meanings for other participants.

STATISTICAL MEASUREMENTS IN NURSING RESEARCH

STATISTICS
- Statistics is a branch of mathematics used to summarize, organize, present, analyze, and interpret numerical data, such as the numerical characteristics of sample parameters and the numerical characteristics of a population.
- Statistics improves the quality of data through the design of experiments and survey sampling, and provides tools for prediction and forecasting.
- Statistics is applicable to a wide variety of academic disciplines, including the natural and social sciences, government, business, and nursing.

KINDS OF STATISTICS

1. Descriptive Statistics – statistical methods that can be used to summarize or describe a collection of data. These are statistics intended to organize and summarize numerical data from the population and sample.
   Uses:
   1. Measures and condenses data in:
      a. Frequency distribution – scores are listed from highest to lowest or from lowest to highest.
      b. Graphic presentation – data are presented in graphic form to make frequency distribution data readily apparent.
   2. Measures of central tendency – used to describe the mean, median, and mode.
   Descriptive statistics is the term given to the analysis of data that helps describe, show, or summarize data in a meaningful way such that, for example, patterns might emerge from the data. They are simply a way to describe data. Descriptive statistics therefore enable us to present the data in a more meaningful way, which allows simpler interpretation of the data.

2. Inferential Statistics – these are concerned with the population and the use of sample data to predict future occurrences.
   Uses:
   1. To estimate population parameters:
      a. Sampling error – the difference between data obtained from a randomly sampled population and the data that would be obtained if the entire population were measured. Sampling error also occurs when the sample does not accurately reflect the population.
      b. Sampling distribution – a theoretical frequency distribution based on an infinite number of samples. The researcher never actually draws an infinite number of samples from a population.
      c. Sampling bias – occurs when samples are not carefully selected, as in non-probability sampling.
   2. Testing the null hypothesis.
   Inferential statistics are techniques that allow us to use samples to make generalizations about the populations from which the samples were drawn. They arise out of the fact that sampling naturally incurs sampling error, and thus a sample is not expected to perfectly represent the population.

DECISION THEORY
- This theory is based on the assumptions associated with the theoretical normal curve and is used in testing for differences between groups, with the expectation that all groups are members of the same population. This is expressed as a null hypothesis, and the level of significance (alpha) is set at 0.05 before data collection.
- According to this theory, two types of error can occur when the researcher is deciding what the result of a statistical test means (Burns & Grove, 2007):
  A. Type I error – occurs when the null hypothesis is rejected when in reality it is true. This is more likely at a 0.05 level of significance than at a 0.01 level; the risk of a Type I error decreases as the level of significance becomes more extreme.
  B. Type II error – occurs when the null hypothesis is regarded as true but is in fact false. A statistical analysis may indicate no significant differences between groups when, in reality, the groups are different. There is a greater risk of a Type II error when the level of significance is 0.01 than when it is 0.05.

POWER ANALYSIS
- Power analysis is a way to control Type II error. It determines the probability that the statistical test will detect a significant difference that exists. The researcher determines the sample size, the level of significance, and the effect size on the outcome variable (Cohen, 1988).

DEGREES OF FREEDOM
- The interpretation of a statistical test, in most cases, depends on the number of values that can vary. Although df indicates the number of values that can vary, attention is actually focused on the values that are not free to vary. This is generally expressed by the df symbol and a number, reported together with the significance level (e.g., 0.01 or 0.05).

FREQUENCY DISTRIBUTION
- This is a method of organizing the research data.
- Types:
  1. Grouped frequency distribution – used with nominal data or when continuous variables are being examined, such as age, weight, blood pressure, etc.
  2. Ungrouped frequency distribution – used when data are categorized and presented in tabular form to display all numerical values obtained for a particular variable.
  3. Percentage distribution – shows the percentage of subjects in a sample whose scores fall into a specific group, and the number of scores in that group. This is useful in comparing the present data with findings from other studies that have different sample sizes.

STATISTICAL TOOLS FOR TREATMENT OF DATA
1. Percentage – computed to determine the proportion of a part to a whole, such as a given number of respondents in relation to the entire population.
2. Ranking – used to determine the order of decreasing or increasing magnitude of variables. The largest frequency is ranked 1, the second largest 2, and so on.
3. Weighted Mean – refers to the overall average of the responses/perceptions of the study respondents. It is the sum of the products of the scores and the frequency of responses on a Likert 5-point scale.
4. T-test – compares the responses of two respondent groups in the study on the phenomenon under investigation. It is used to test for significant differences between samples.
5. ANOVA – tests the differences between two or more means and can be used to examine data from two or more groups.
6. Factor analysis – examines relationships among a large number of variables and isolates those relationships to identify clusters of variables that are most closely linked. This is necessary in developing instruments for measurement.
7. Regression analysis – used to predict the value of one variable when the values of one or more other variables are known. The variable to be predicted in regression analysis is referred to as the dependent variable.
8. Multiple regression analysis – used to correlate more than two variables.
9. Complete randomized block design – the same as the ANOVA except that complete blocks are used instead of items. For instance, the use of different antibiotics per patient per room is tested; the heterogeneity of respondents will give different results.

THE FINAL RESEARCH OUTPUT

WRITING THE FINAL OUTPUT
- The researcher should know not only the parts of the research process but also the forms and style of writing the research proposal and the research paper.

LC – FORMAT OF WRITING THE STUDY

PRELIMINARY PAGES
1. Title Page / Title of the Study – a phrase that describes the research study. It should be neither too long nor too short, and neither too vague nor too general.
2. Endorsement Page – states that the study has been examined and recommended for oral examination.
3. Approval Sheet – presents that the study has been approved by the Committee on Oral Examination.
4. Acknowledgement Page – the researcher expresses deep gratitude to the persons who assisted and helped make the study a successful one.
5. Dedication – allows the researcher to personally dedicate the study to family members, spouses, friends, or community groups.
6. Table of Contents – contains all the parts of the research paper, with the page number for each section placed at the right-hand margin. The page number of the table of contents itself is usually written in Roman numerals and indicated at the bottom of the paper.
7. List of Tables – follows the table of contents and indicates the titles of the tables in the research paper. Each caption should appear exactly as it does in the text. In numbering the tables, use Arabic numerals.
8. List of Figures – composed of paradigms, diagrams, graphs, charts, or flowcharts.
9. Abstract – a short summary of the completed research. Abstracts should be self-contained and concise, explaining the research study as briefly and clearly as possible.
10. The Abstract of the thesis ought to be included in the copy to be evaluated and defended. It consists of concise statements (more or less 150 words) of:
    a. what the study is all about,
    b. the methodology,
    c. the most important findings.

MAIN BODY

Chapter I – Introduction
- This section refers to "what this study is all about" or "what makes the researcher interested in doing the study."
- Purpose: to introduce the reader to the subject matter.
- The introduction serves as a springboard for the statement of the problem, as stated by Dr. Barrientos-Tan.

Chapter II – Methodology

Chapter III – Results and Discussions

Chapter IV – Conclusions and Recommendations
- Conclusions:
  - Answer the sub-problems.
  - Summarize the study, with concluding remarks that highlight its key thoughts.
- Recommendations:
  - Revision of the plan (if necessary), or improvement.
  - Satisfy the following questions:
    a. Did the intervention work?
    b. What should be changed?
    c. What should be the next step?

SUPPLEMENTARY PAGES
1. Bibliography – books and online references (articles, books)
2. Appendices
   a. Communication letters
   b. Questionnaire
   c. Validity result of the questionnaire
   d. Grammarly check result
   e. Informed Consent Form
   f. Approval letter from LC REC
   g. Interview guide
3. Curriculum Vitae

DATA ANALYSIS, INTERPRETATION AND PRESENTATION

OVERVIEW
- Qualitative and quantitative analysis
- Simple quantitative analysis
- Simple qualitative analysis
- Tools to support data analysis
- Theoretical frameworks: grounded theory, distributed cognition, activity theory
- Presenting the findings: rigorous notations, stories, summaries

WHY DO WE ANALYZE DATA
- The purpose of analysing data is to obtain usable and useful information.
- The analysis, irrespective of whether the data is qualitative or quantitative, may:
  - describe and summarise the data
  - identify relationships between variables
  - compare variables
  - identify differences between variables
  - forecast outcomes

SCALES OF MEASUREMENT
Many people are confused about what type of analysis to use on a set of data and about the relevant forms of pictorial presentation or data display. The decision is based on the scale of measurement of the data. These scales are nominal, ordinal, and numerical.

Nominal scale
- A nominal scale is one where the data can be classified into non-numerical or named categories, and the order in which these categories can be written or asked is arbitrary.

Ordinal scale
- An ordinal scale is one where the data can be classified into non-numerical or named categories, and an inherent order exists among the response categories.
- Ordinal scales are seen in questions that call for ratings of quality (for example: very good, good, fair, poor, very poor) and agreement (for example: strongly agree, agree, disagree, strongly disagree).

Numerical scale
- A numerical scale is one where numbers represent the possible response categories; there is a natural ranking of the categories; zero on the scale has meaning; and there is a quantifiable difference within categories and between consecutive categories.

QUANTITATIVE VERSUS QUALITATIVE METHODOLOGY
- When using a quantitative methodology, you are normally testing theory through the testing of a hypothesis.
- In qualitative research, you are either exploring the application of a theory or model in a different context, or hoping for a theory or model to emerge from the data. In other words, although you may have some ideas about your topic, you are also looking for ideas, concepts, and attitudes, often from experts or practitioners in the field.

COMMON MYTHS
- "Complex analysis and big words impress people." In fact, most people appreciate practical and understandable analyses.
- "Analysis comes at the end, after all the data are collected." We think about analysis upfront so that we HAVE the data we WANT to analyze.
- "Quantitative analysis is the most accurate type of data analysis." Some think numbers are more accurate than words, but it is the quality of the analysis process that matters.
- "Data have their own meaning." Data must be interpreted; numbers do not speak for themselves.
- "Stating limitations to the analysis weakens the evaluation." All analyses have weaknesses; it is more honest and responsible to acknowledge them.
- "Computer analysis is always easier and better." It depends upon the size of the data set and personal competencies. For small sets of information, hand tabulation may be more efficient.

1. Organizing the data
   - Organize all forms/questionnaires in one place.
   - Check for completeness and accuracy.
   - Remove those that are incomplete or do not make sense; keep a record of your decisions.
   - Assign a unique identifier to each form/questionnaire.

2. Enter your data
   A. By hand
   B. By computer
      - Excel (spreadsheet)
      - Microsoft Access (database management)
      - Quantitative analysis: SPSS (statistical software)
        - Count (frequencies)
        - Percentage
        - Mean
        - Mode
        - Median
        - Range
        - Standard deviation
        - Variance
        - Ranking
        - Cross tabulation
      - A. Categorization and theme-based analysis, e.g. N6 (e.g.,

3. Interpreting the information
   - Numbers do not speak for themselves. For example, what does it mean that 55 youth reported a change in behavior? Or that 25% of participants rated the program a 5 and 75% rated it a 4? What do these numbers mean?
   - Interpretation is the process of attaching meaning to the data.
   - Interpretation demands fair and careful judgments. Often the same data can be interpreted in different ways, so it is helpful to involve others or to take time to hear how different people interpret the same information. Think of ways you might do this: for example, hold a meeting with key stakeholders to discuss the data, or ask individual participants what they think.
   - Part of interpreting information is identifying the lessons learned. What did you learn about the program, about the participants, about the evaluation?
     - Are there any 'ah-has'? What is new? What was expected?
     - Were there findings that surprised you?
     - Are there things you don't understand very well, where further study is needed?
   - We often include recommendations or an action plan. This helps ensure that the results are used.

4. Discuss limitations
   A. Written reports: be explicit about your limitations.
   B. Oral reports: be prepared to discuss limitations.
   - Be honest about limitations.
   - Know the claims you cannot make.
   - Do not claim causation without a true experimental design.
   - Do not generalize to the population without a random sample and quality administration.
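The SENSITIVITY section above defines sensitivity as the probability of correctly identifying a condition, computed from the relationship between the test outcome and the true state of affairs. A minimal sketch in Python, using invented screening counts purely for illustration:

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Probability of correctly identifying the condition when it is
    present: TP / (TP + FN)."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    """Companion statistic: probability of correctly identifying the
    absence of the condition: TN / (TN + FP)."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical screening results: of 100 patients who truly have the
# condition, 90 test positive and 10 test negative (missed cases).
print(sensitivity(90, 10))   # 0.9
print(specificity(80, 20))   # 0.8
```

The two functions are two of the four related accuracy statistics the notes mention; the counts are assumptions, not data from the source.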
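The descriptive-statistics notes above list frequency distributions and the measures of central tendency (mean, median, mode). A small standard-library sketch with made-up exam scores:

```python
from collections import Counter
import statistics

# Hypothetical exam scores for ten respondents.
scores = [85, 90, 78, 90, 85, 92, 85, 78, 88, 85]

# Frequency distribution: each score with its count, listed from
# highest score to lowest.
freq = Counter(scores)
for score, count in sorted(freq.items(), reverse=True):
    print(score, count)

# Measures of central tendency.
print("mean:", statistics.mean(scores))      # 85.6
print("median:", statistics.median(scores))  # 85.0
print("mode:", statistics.mode(scores))      # 85
```

Everything here is in the Python standard library; the scores themselves are invented for the example.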
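Tool 3 above (Weighted Mean) sums the products of each Likert score and its response frequency, then divides by the number of respondents. A sketch with hypothetical frequencies for one 5-point item:

```python
# Hypothetical responses to one Likert item:
# 5 = strongly agree ... 1 = strongly disagree.
frequencies = {5: 12, 4: 20, 3: 8, 2: 6, 1: 4}

total_respondents = sum(frequencies.values())                 # 50
weighted_sum = sum(score * n for score, n in frequencies.items())
weighted_mean = weighted_sum / total_respondents

print(weighted_mean)  # 3.6
```

A weighted mean of 3.6 on this hypothetical item would fall between "agree" and "neutral" on the usual interpretation scale.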
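Tools 1 and 2 above (Percentage and Ranking) can be sketched together: a percentage expresses each part as a proportion of the whole, and ranking orders categories by frequency with the largest as rank 1. The respondent categories and counts here are invented for illustration:

```python
# Hypothetical respondent counts per category.
counts = {"Nurses": 45, "Midwives": 30, "Aides": 25}
total = sum(counts.values())  # 100

# Percentage: proportion of a part to the whole.
percentages = {k: 100 * v / total for k, v in counts.items()}

# Ranking: the largest frequency gets rank 1, the next rank 2, and so on.
ordered = sorted(counts, key=counts.get, reverse=True)
ranks = {category: i + 1 for i, category in enumerate(ordered)}

print(percentages)  # {'Nurses': 45.0, 'Midwives': 30.0, 'Aides': 25.0}
print(ranks)        # {'Nurses': 1, 'Midwives': 2, 'Aides': 3}
```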
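The inferential-statistics notes define sampling error as the difference between a statistic computed from a random sample and the value that would be obtained by measuring the entire population. A small simulation, with a synthetic population (the distribution and sizes are assumptions for the demo):

```python
import random

random.seed(42)  # fixed seed so the demo is reproducible

# Hypothetical population of 1,000 systolic blood pressure readings.
population = [random.gauss(120, 15) for _ in range(1000)]
population_mean = sum(population) / len(population)

# A random sample of 50; its mean will not exactly equal the
# population mean.
sample = random.sample(population, 50)
sample_mean = sum(sample) / len(sample)

# Sampling error: the gap between sample and population means.
sampling_error = sample_mean - population_mean
print(round(sampling_error, 2))
```

Re-running with different seeds gives different (usually small) errors, which is exactly the point: a sample is not expected to perfectly represent the population.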