Research Methods in Psychology

Questions and Answers

Briefly explain the difference between describing behavior and understanding behavior as goals of psychological research.

Describing behavior involves naming and classifying observable actions, while understanding behavior delves into identifying the causes of those actions.

How does a theory differ from a hypothesis in psychological research?

A theory is a set of principles built on verifiable facts explaining a phenomenon, while a hypothesis is a testable prediction based on that theory.

What does it mean for psychological research to be 'scientific,' and what key requirements must it meet?

For psychological research to be considered scientific, it must meet specific requirements, including objectivity, reliability, and validity, and it must be based on empirical evidence gathered using the scientific method.

Define reflexivity in the context of psychological research and why it is important.

Reflexivity is a researcher's awareness of how and why they conduct research, recognising how their beliefs and opinions may influence data collection/analysis. It's important for objectivity.

Explain the difference between reliability and validity in data collection measures.

Reliability refers to the consistency and accuracy of the measures, whereas validity refers to whether the measures accurately assess what they are intended to measure.

In the context of research, what are demand characteristics and how does the Hawthorne effect exemplify them?

Demand characteristics are cues that lead participants to guess the research's purpose and change their behavior accordingly. The Hawthorne effect, where productivity increased regardless of changes, exemplifies this response to being observed.

Why is sampling important in research, and what does 'sampling bias' refer to?

Sampling saves time and resources. Sampling bias refers to a tendency for a sample to over or under-represent certain categories within a population, threatening the validity of the results.

What is the difference between opportunity sampling and stratified sampling? Give an example of each.

Opportunity sampling uses readily available participants (e.g., asking passersby), while stratified sampling divides the population into categories and samples proportionally from each (e.g., age groups).

Explain the fundamental differences between qualitative and quantitative research methodologies in psychology.

Quantitative research involves systematic empirical investigation using numerical data and statistical techniques, aiming to generalize findings, while qualitative research aims to deeply understand human behavior.

How should a researcher decide between using a quantitative or qualitative methodology?

A researcher should decide based on the research question, aims, and objectives of the study. Quantitative methods are suited for examining relationships, while qualitative methods are better for in-depth understanding.

Provide two methods used for collecting data within both quantitative and qualitative research.

Quantitative: Experiments and surveys. Qualitative: In-depth interviews and focus groups.

Briefly describe the core principle of experimental research in psychology.

The core principle of experimental research involves manipulating one condition (independent variable) and measuring its effect on another (dependent variable) to establish cause and effect.

What is the difference between an experimental hypothesis and a null hypothesis?

An experimental hypothesis predicts a relationship between the independent and dependent variables, while a null hypothesis predicts no such relationship.

Differentiate between a directional and non-directional hypothesis, providing an example of each.

A directional hypothesis predicts the direction of the effect (e.g., "Students who study in silence will perform better"), while a non-directional hypothesis predicts an effect but not its direction (e.g., "There is a difference in work produced in a noisy and silent environment").

Define the independent and dependent variables in experimental research, and provide an example.

The independent variable (IV) is the variable manipulated by the researcher, and the dependent variable (DV) is the variable measured. For example, if testing sugar and ADHD, sugar is the IV, and ADHD symptoms are the DV.

What are confounding variables, and why are they a concern in experimental research?

Confounding variables are other variables that might affect the dependent variable unintentionally, thereby influencing the results of the experiment. They are a concern because they obscure the true relationship.

Explain the difference between participant and situational extraneous variables in experiments.

Participant variables are individual characteristics that may impact how a person responds, while situational variables are environmental factors that may affect a participant's responses.

In experimental design, what is the role of a control group, and why is it important?

A control group does not receive the experimental treatment, serving as a baseline for comparison to determine whether the independent variable has an effect. It is essential for establishing that changes in the DV are related to the IV.

What is the difference between a laboratory experiment and a field experiment, and what are the trade-offs of each?

Laboratory experiments are conducted in controlled settings offering high control but potentially low ecological validity, while field experiments are conducted in natural settings offering high ecological validity but less control.

When would a researcher opt for a natural or quasi-experiment instead of a true experiment?

A researcher would opt for a natural/quasi-experiment when the independent variable cannot be directly manipulated for practical or ethical reasons.

Define observations as a research method and explain why researchers cannot infer causation based on the results.

Observations are used to study naturally occurring behavior without researcher manipulation. However, researchers cannot infer causation in correlational or observational studies as they do not involve manipulation of independent variables.

Briefly explain what operationalizing a target behavior means in observational studies.

Operationalizing a target behavior means defining it in specific, measurable terms, outlining exactly what will be recorded and how.

Describe the difference between covert and overt observations.

Covert observations occur when participants are unaware they are being observed, while overt observations occur when participants are aware.

What are the key differences between participant and non-participant observations?

In participant observation, the researcher becomes involved in the environment they are studying, whereas in non-participant observation, the researcher remains separate and observes from a distance.

Differentiate between controlled and naturalistic observations.

Controlled observations occur in artificial settings constructed by the researcher, while naturalistic observations occur in the environment where the behavior naturally transpires.

What are the potential drawbacks of using observations as a research method, and how does reflexivity help?

Potential drawbacks include demand characteristics and researcher bias. Reflexivity helps researchers recognise and account for these shortcomings.

Explain the concepts of time sampling and event sampling in observational research.

Time sampling involves recording behaviors within specific time frames, while event sampling involves counting instances of a specific behaviour.

Define interviews and explain why they're considered a 'self-report method'.

Interviews involve gathering information via direct questioning. They are considered a self-report method because participants provide information about themselves.

List three general categories in which interviews may vary.

Interviews vary in communication channel/medium, structure, and number of participants.

Differentiate between a one-to-one interview and a focus group.

In a one-to-one interview, the participant answers the interviewer directly. In a focus group, the participant answers as part of a group discussion.

As a researcher, give three different ways to facilitate your focus group.

Facilitate the group conversation, introduce ideas to guide the discussion, and encourage quieter members to contribute.

What are some strengths and weaknesses of a focus group?

Strengths: more time-efficient, participants can gain from each other's experiences and perspectives, and questions can be resolved quickly between members. Weaknesses: discomfort sharing sensitive topics, fear of being judged, and dominant members can take over the conversation.

Differentiate between a structured, semi-structured, and unstructured interview.

A structured interview follows a pre-set list of questions. A semi-structured interview follows a flexible framework. An unstructured interview allows the researcher freedom concerning the topic.

Identify ethical concerns when interviewing small samples.

Avoid asking questions that participants cannot reasonably answer, and openly discuss the study's limitations.

Describe the difference between types of questions one can ask in an interview.

Open questions allow the participant to answer in full, though this can be time-consuming. Closed questions are more specific, producing shorter responses that are easier to analyse.

When using technological devices to record an interview, what key points should the interviewer be aware of?

Participants should always be informed that they are being recorded; recording may make them feel awkward, so the interviewer should work to maintain rapport.

Describe what 'debriefing' includes after an interview.

Inform the participants about what will happen with the recording/notes, who will have access to them, and when they will receive feedback.

Define 'Thematic Data Analysis'.

It is the attempt to find patterns of meaning, or themes, in the data and to interpret what the participant is saying.

Describe what an interviewer should be prepared for regarding ethical issues.

Ensuring the ethical requirements set out in a Code of Ethics are met. Research must not cause harm, directly or indirectly, and should prevent harm.

List the ethical principles one must follow when conducting research.

Gain informed consent, avoid deception, ensure no harm is done, maintain confidentiality, and respect the right to privacy and to withdraw.

Flashcards

Describing Behaviors

Naming and classifying observable, measurable behaviors.

Understanding Behavior

Discovering the causes of behavior(s).

Predicting Behavior

Forecasting behavior accurately.

Controlling Behavior

Modifying conditions that influence behaviors, can be positive or negative.

Theory

A set of principles built on observations and verifiable facts that explains a phenomenon and predicts its future behavior.

Hypothesis

A testable prediction consistent with our theory.

Scientific Rigor

Ensuring research findings are objective, reliable, and valid.

Reflexivity

Awareness of how researcher's beliefs influence data collection/analysis.

Reliability

Whether the measures used are consistent and accurate.

Inter-rater Reliability

Agreement between people gathering data.

Test-retest Reliability

Consistency of results if the same measurements are used again.

Validity

Whether the measures used are measuring what they are supposed to be measuring.

Ecological Validity

How close the research is to a real-life situation.

Hawthorne Effect

Participants alter behavior to meet expectations of the researcher.

Demand Characteristics

Effects on participants' behaviors when guessing the research aim.

Sample

A smaller, representative collection of units from a population.

Target Population

The specific population we are interested in.

Random Sampling

Equal chance for every individual to be in your sample.

Sampling Bias

Tendency to over- or under-represent categories in a population.

Opportunity Sampling

Using readily available participants.

Stratified Sampling

Sampling based on strata characterizing the target population.

Cluster Sampling

Randomly selecting sections (clusters) of the target population.

Purposive Sampling

Basing participant selection on who offers relevant information, used in qualitative research.

Snowball Sampling

Selecting key participants who provide further contacts for the study.

Qualitative

In-depth description and understanding of human behavior.

Quantitative

Systematic empirical investigation via statistical techniques.

Experiment

Altering one condition and recording the observed change.

Hypothesis

Statement regarding the expected outcome of the study.

Experimental Hypothesis

Predicts a relationship between independent and dependent variable.

Null Hypothesis

Predicts there is no relationship between variables.

Directional Hypothesis

Predicts the direction of the results.

Non-directional Hypothesis

Does not predict the direction of the results.

Independent Variable (IV)

Variable researchers manipulate in an experiment.

Dependent Variable (DV)

Variable expected to change depending on the manipulation.

Confounding Variables

Variables that unintentionally influence the results.

Participant Variables

Individual characteristics impacting how one responds.

Situational Variables

Factors in the study environment that may impact results.

Treatment Group

Group receiving the manipulated independent variable.

Control Group

Group not subject to the manipulated variable; benchmark group.

Placebo Group

Group receiving an ineffective level of IV, to check for placebo effects.

Study Notes

  • The date of the presentation is 05/01/2025.
  • The topic of the presentation is Research Methods in Psychology.
  • The presenting psychologists are: Stephanie Borg Bugeja, Olivia Galea Seychell, Mireille Vila, Miriam Geraldi Gauci, Chiara Borg

Psychology Basics

  • Psychology is the study of the mind and behavior.
  • Psychologists study learning, personality, development, and sensation and perception.

Goals of Research

  • Describe Behaviors: Define and categorize observable, measurable behaviors.
  • Understand: Determine the causes of behaviors.
  • Predict: Accurately forecast behavior.
  • Control: Influence behaviors by altering conditions. Positive uses involve controlling unwanted behaviours such as smoking; negative uses involve controlling people's behaviour without their knowledge.

Theory vs. Hypothesis

  • A theory in science is a set of principles, based on verifiable facts and observations, explaining phenomena and predicting future behavior.
  • An example of such a "theory": all ADHD symptoms are a reaction to eating sugar.
  • A hypothesis is a testable prediction consistent with a theory.
  • "Testable" implies the hypothesis is stated so it can be studied to determine its truth.
  • An example hypothesis: "If a kid gets sugar, the kid will act more distracted, impulsive, and hyper."
  • The claim "All ADHD is about sugar" could be tested by checking whether ADHD symptoms persist even after sugar is removed from the diet.

Scientific Psychological Research

  • Psychological research is considered scientific if researchers meet specific requirements, including:
  • Objectivity
  • Operational definitions
  • Reliability
  • Empirical evidence
  • Scientific Method
  • Validity

Enhancing Objectivity: Reflexivity

  • Reflexivity involves a researcher's awareness of how and why they are conducting research.
  • The researchers recognize how their beliefs and opinions have likely influenced data collection or analysis.
  • Reflexivity is essential in all types of research.
  • It is both an internal and external process, meaning it should be openly shared in a research study
  • Sometimes involves interviewing a colleague to answer outlined questions.

Principles of Reliability and Validity

  • Data collection measures are used to study aspects of psychology, such as experiments and questionnaires.
  • Reliability refers to the consistency and accuracy of measures.
  • In considering reliability, evaluate agreement between those gathering the data (inter-rater reliability) and consistency of results when measurements are repeated (test-retest reliability and replicability); see the sketch after this list.
  • Validity reflects whether measures accurately assess the intended targets.
  • Assess whether the chosen tool is appropriate, whether extraneous factors impacted the data, whether objectivity was maintained, whether findings can be generalized, and how closely the research resembles real-life situations (ecological validity).
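
As a rough, hypothetical illustration (added here, not part of the original notes), the sketch below shows how these two kinds of reliability might be quantified in Python: inter-rater reliability as simple percentage agreement between two observers, and test-retest reliability as the correlation between two administrations of the same measure. All ratings and scores are invented.

```python
import numpy as np

# Hypothetical ratings: two observers coding the same 10 behaviour samples
# (1 = target behaviour present, 0 = absent).
rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

# Inter-rater reliability as simple percentage agreement.
agreement = np.mean(rater_a == rater_b)
print(f"Inter-rater agreement: {agreement:.0%}")

# Hypothetical questionnaire scores from the same participants on two occasions.
time_1 = np.array([12, 18, 9, 22, 15, 11, 19, 14])
time_2 = np.array([13, 17, 10, 21, 16, 12, 18, 15])

# Test-retest reliability as the correlation between the two administrations.
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest correlation: r = {r:.2f}")
```

In practice, chance-corrected statistics such as Cohen's kappa are often preferred over raw percentage agreement for inter-rater reliability.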

Validity: Demand Characteristics and the Hawthorne Effect

  • In the 1920s, the Hawthorne electrical plant aimed to identify ways to boost productivity in the USA.
  • Five female workers were assessed over two years under changing conditions, such as levels of illumination and timing of breaks.
  • Productivity increased regardless of the change, with workers responding to higher attention from management and researchers.
  • The Hawthorne Effect occurs when participants try to perform in a way they think meets the researcher's expectations.
  • Demand characteristics are the effects on participants' behaviors when they try to guess what the research is studying.
    • Examples include wanting to please the experimenter or changing behavior to align with social norms.

Sampling

  • From the general population, a target population is identified and sampled.
  • A sample is "[a] smaller (but hopefully representative) collection of units from a population used to determine truths about that population" (Field, 2005).
  • Sampling saves time and resources.
  • Random sampling is a technique for making sure that every individual in a population has an equal chance of being in your sample.
  • "Sampling bias" refers to a tendency to over- or under-represent one or more categories in a population.

Sampling Techniques

  • Opportunity / Convenience Sampling involves using immediately available participants in research.
  • Stratified / Quota Sampling takes a sample based on strata (categories) characterizing the target population. Quotas are set for each category, and participants are chosen randomly according to which quota still needs filling (see the sketch after this list).
  • Cluster Sampling takes a sample based on a random selection of one or more sections (clusters) of the target population. Participants are then chosen at random from the cluster.
  • Purposive Sampling bases participant selection on who can offer the most relevant information, and is used in qualitative research.
  • Snowball Sampling involves selecting key people as participants and asking them to provide more contacts for the study.
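
To make the contrast between random and stratified sampling concrete, here is a small Python sketch (an illustration added here, not from the original notes) using a made-up population of 100 people labelled by age group; the population, strata, and sample size are all hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical target population: 100 people, each tagged with an age-group stratum.
population = [
    {"id": i, "age_group": random.choice(["18-25", "26-40", "41-65"])}
    for i in range(100)
]

sample_size = 20

# Random sampling: every individual has an equal chance of being selected.
random_sample = random.sample(population, sample_size)

# Stratified sampling: group by stratum, then sample proportionally from each.
by_stratum = defaultdict(list)
for person in population:
    by_stratum[person["age_group"]].append(person)

stratified_sample = []
for stratum, members in by_stratum.items():
    quota = round(len(members) / len(population) * sample_size)  # proportional quota
    stratified_sample.extend(random.sample(members, min(quota, len(members))))

print("Random sample size:", len(random_sample))
print("Stratified sample size:", len(stratified_sample))
```

The proportional quotas are what keep the stratified sample representative of each age group's share of the population; because of rounding, the final stratified sample may differ from the target size by one or two.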

Qualitative vs. Quantitative Methodology

  • Quantitative research is the systematic empirical investigation of social phenomena using statistical, mathematical, or computational techniques. Assumes research can be objective.
  • Attempts to generalize findings, so representativeness of the sample is important.
  • Focuses on testing hypotheses from theories.
  • Qualitative research aims to describe and deeply understand human behavior.
  • Focuses on subjective meaning
  • Sample choice depends on participants' experiences (representativeness is not important)
  • Assists in constructing theory rather than testing it.

Methodology Choice

  • Selection between quantitative and qualitative methodologies depends on the research question and the aims/objectives of the study.

Research Question Examples

  • Qualitative research questions are centered around experiences, and quantitative are centered around links between multiple factors.
  • Qualitative Ex: "What is the experience of living with diabetes?"
  • Quantitative Ex: "What is the link between playing violent computer games and violent behaviour in children?"

Methods: Quantitative vs. Qualitative

  • The methods employed depend on the type of research.
  • Quantitative: Experiments, Correlational studies, Surveys, Structured observations.
  • Qualitative: one-to-one interviews, Focus groups, Case Studies, Unstructured Observations.

Experiments

  • Experiments are a way of conducting research in which one condition is changed and the resulting change is recorded. Researchers document any changes observed in participants/subjects after the condition is altered.
  • The experiment is an effective way of gathering evidence to support theories. It eliminates a lot of alternative explanations of cause and effect.

Hypothesis in Experiments

  • A hypothesis is the researcher's statement regarding the expected outcome of the study.
  • Two types of hypotheses exist:
  • The Experimental/Research Hypothesis predicts a relationship between independent and dependent variables, with the former causing an effect on the latter.
  • The Null Hypothesis predicts that the independent variable has no effect, meaning there is no relationship between the independent and dependent variables.

Hypotheses in Experiments (Example)

  • Experimental example is seeing the impact of noise on productivity.
  • Experimental Hypothesis (H1): Production (dependent variable) will be lower in a noisy environment (independent variable).
  • Null Hypothesis (H₀): There is no difference between work produced in noisy and silent environments.

Directionality in Hypotheses

  • There are two types of experimental (H₁) hypotheses: directional and non-directional.
  • A directional hypothesis predicts the direction of the results; it is also called a one-tailed hypothesis.
  • Example: Students who do homework with messaging apps off do better work than those with messenger on.
  • The non-directional hypothesis does not predict the direction of the results and is also called a two-tailed hypothesis (see the sketch after this list).
  • Example: There is a difference in the level of work produced by students who keep messaging on and those who do not.
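
To illustrate the one-tailed/two-tailed distinction (assuming a reasonably recent SciPy is available; the scores below are invented), this sketch runs the messaging-app example as both a non-directional and a directional independent-samples t-test.

```python
import numpy as np
from scipy import stats

# Hypothetical work-quality scores: messaging apps off vs. on.
apps_off = np.array([78, 85, 80, 90, 76, 88, 82, 79])
apps_on = np.array([72, 80, 75, 83, 70, 78, 77, 74])

# Non-directional (two-tailed) test: is there *any* difference between the groups?
t_two, p_two = stats.ttest_ind(apps_off, apps_on)

# Directional (one-tailed) test: do students with apps off score *higher*?
t_one, p_one = stats.ttest_ind(apps_off, apps_on, alternative="greater")

print(f"Two-tailed: t = {t_two:.2f}, p = {p_two:.3f}")
print(f"One-tailed: t = {t_one:.2f}, p = {p_one:.3f}")
```

When the observed difference lies in the predicted direction, the one-tailed p-value is half the two-tailed value, which is why a directional hypothesis must be justified in advance rather than chosen after seeing the data.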

Variables in Experiments

  • Experiments aim to define a cause and effect relationship between variables.
  • The manipulated variable in an experiment is the independent variable (IV). There is only one IV in an experiment.
  • A variable is a measurable or countable characteristic.
  • The variable we expect to change based on our manipulation is the dependent variable (DV).
  • Other variables that affect the dependent variable are confounding variables, which influence the results unintentionally.

Extraneous Variables

  • Extraneous variables can be divided into participant variables and situational variables.
  • Participant variables relate to individual characteristics of participants, such as background differences, mood, anxiety, and intelligence.
  • Situational variables relate to things in the environment that may impact participant responses.

Designing Experiments

  • Experiments are designed to ensure that any observed effect on the dependent variable is due to the independent variable.

Study Groups

  • The treatment group (or experimental group) receives the manipulated independent variable (random allocation to groups is sketched after this list).
  • The control group is not manipulated.
  • The placebo group receives an ineffective level of the independent variable.
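
A minimal sketch (using hypothetical participant IDs, added for illustration) of randomly allocating participants to the three groups described above:

```python
import random

# Hypothetical pool of 30 participants.
participants = [f"P{i:02d}" for i in range(1, 31)]
random.shuffle(participants)  # randomise the order before allocation

# Split the shuffled list evenly into treatment, control, and placebo groups.
groups = {
    "treatment": participants[0:10],
    "control": participants[10:20],
    "placebo": participants[20:30],
}

for name, members in groups.items():
    print(f"{name}: {members}")
```

Random allocation like this helps spread participant variables (mood, intelligence, background, and so on) evenly across the groups.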

Types of Experiments

  • Lab Experiment
  • Field Experiment
  • Natural/Quasi Experiment

Lab Experiments

  • Laboratory experiments take place in a controlled environment designed to maximize control over extraneous variables to help ensure the validity of the study.
  • Participants are aware of taking part in an experiment but may not know the aims.

Strengths and Weaknesses of Lab Experiments

  • Strengths: higher levels of control over extraneous variables gives confidence that any change in the DV was caused by the IV; the measures used are accurate; and the situation is easier to replicate.
  • Weaknesses: lower ecological validity because the situation is artificial, and participants know they are taking part, which affects their reactions (demand characteristics).

Field Experiments

  • Field experiments are conducted in a more natural environment.
  • Similar to a lab experiment in that the researcher manipulates the independent variable.
  • It is less likely, however, that participants know that an experiment is taking place.

Strengths and Weaknesses of Field Experiments

  • Strengths: the situation feels more natural to participants; higher ecological validity; participants are often unaware, reducing the impact of demand characteristics.
  • Weaknesses: higher likelihood of confounding variables; less confidence in a cause-and-effect relationship; ethical issues because participants are unaware their behaviour is being recorded; harder to replicate.

Natural/Quasi Experiments

  • Natural or quasi-experiments are used when the independent variable cannot be manipulated.
  • Participants are not randomly allocated to groups, so more confounding variables exist.

Strengths and Weaknesses of Natural/Quasi Experiments

  • Strengths: useful when the independent variable cannot be manipulated for ethical or practical reasons; studying 'real' problems is beneficial; higher ecological validity.
  • Weaknesses: higher likelihood of confounding variables; less confidence in a cause-and-effect relationship; participants may be aware they are being studied.

Observations

  • Observations are used to study naturally occurring behaviour, with researchers not manipulating an independent variable.
  • Researchers cannot make assumptions about 'causation'.

Observation Guidelines

  • When using observations, define what will be observed (the target behaviour) and plan for it in advance.
  • The target behaviour needs to be operationalized (defined in specific, measurable terms).

Different Types of Observations

  • Researchers use different observation styles, which vary in whether participants are aware of being observed, how involved the researcher is, and which behaviour is to be observed.

Types of Observations: Covert vs. Overt

  • The difference between the two relates to whether participants know that they are being observed.
  • Covert Observation is where participants are unaware.
  • Researchers use hidden cameras, audio equipment etc.
  • Overt Observation is where participants know and researchers are open about it.

Strengths and Weaknesses of Overt/Covert

  • Overt observation is more ethical because participants know they are being studied, though this awareness may alter their behaviour; covert observation captures more natural behaviour but is less ethical because participants are unaware.

Types of Observations: Participant vs. Non-Participant

  • The difference is whether the researcher becomes involved in, or part of, the environment being observed.
  • Participant Observation sees the researcher involved.
  • Non-participant Observation has the researcher remote from the subjects.

Strengths and Weaknesses of Participant Observations

  • Participant observation brings greater insight, and therefore greater validity, while risking lower objectivity.
  • Non-participant observation gives a more objective view but lacks depth.

Types of Observations: Controlled vs. Naturalistic

  • Controlled vs. naturalistic: the difference is whether the environment is controlled by the researcher or not.
  • Naturalistic observation occurs in the environment where the behaviour naturally takes place.
  • Controlled observation occurs in artificial settings constructed by the researcher.

Strengths and Weaknesses of Naturalistic/Controlled Observations

  • Naturalistic observations are more believable and findings can be generalized, but there is a risk of many distractions during observation.
  • Controlled observations offer an environment that is easier to manage, but risk behaviour feeling fake and unnatural.

Importance of Reflexivity

  • Reflexivity must be applied when dealing with limitations or design flaws in observational research.

General Weaknesses

  • Data may be affected when participants know they are being observed, which can cause unnatural reactions.
  • There is a risk of researcher bias influencing the study (leading the researcher to interpret observations so they fulfil the hypothesis).
  • How much of the research is shared with participants, and any emotional discomfort, can also affect the study.

Recording Data

  • Observation can be qualitative or quantitative in nature: qualitative observation produces unstructured data, while quantitative observation produces structured data that can be analysed statistically.
  • Unstructured data come from observations where the target behaviours were not defined in advance.
  • Structured data record predefined variables, for example on a frequency grid.

Recording Solutions

  • Time sampling involves recording behaviour within set time frames (e.g., every x seconds).
  • Event sampling involves counting each time a specific behaviour occurs (see the sketch after this list).
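
A small illustrative sketch (the behaviour stream and interval length are invented) contrasting the two recording approaches:

```python
# Hypothetical stream of observed behaviours, one entry per 10-second interval.
observed = ["talking", "writing", "talking", "off-task", "writing",
            "talking", "off-task", "talking", "writing", "talking"]

# Time sampling: record what is happening only at fixed checkpoints (every 3rd interval here).
time_sampled = observed[::3]

# Event sampling: count every occurrence of one target behaviour across the whole session.
event_count = sum(1 for behaviour in observed if behaviour == "talking")

print("Time-sampled records:", time_sampled)
print("'talking' events:", event_count)
```

Time sampling trades completeness for manageability, while event sampling captures every instance of the target behaviour but only for behaviours defined in advance.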

Problems & Solutions for Studies

  • Researchers can be trained to analyse data consistently, which increases inter-rater reliability.
  • Participants need to be informed about the research; in some cases they are informed afterwards (debriefing).

Interview Basics

  • Interviews gather information through direct questioning; they are a self-report method.
  • Interviews vary in communication channel, number of participants, and structure.

Mediums Chosen

  • Face-to-face
  • Internet
  • Phone
  • Email
  • Texting

Number of Participants

  • Depending on the sensitivity and topic of the study, the interviewer has to decide whether to interview one participant or more, i.e., whether to use a one-to-one interview or a focus group.

Focus Groups

  • In a focus group, participants talk together about specific topics.
  • The facilitator's role is to guide the group conversation, introduce ideas to lead the discussion, and encourage quieter members to speak up.

Strength & Weakness of Focus Groups

  • Strengths: focus groups save time, participants can build on each other's contributions, and the discussion can inspire richer responses. Weaknesses: participants may feel uncomfortable sharing sensitive topics, and more dominant members tend to take over the conversation, so more passive members may not contribute.

In a One-to-One Setting

  • A structured interview uses a pre-set list of questions.
  • A semi-structured interview allows questions to be changed or reordered as the interview unfolds, to gain more detail.
  • An unstructured interview lets the participant speak freely around the topic.

Conclusion

  • Use snowball sampling when the research requires participants with specialised knowledge or contacts.
  • Interviews suit small samples and yield rich, detailed information.
  • Ethics are a factor: make sure participants are comfortable and that the study's limitations are mentioned and understood.

Strengths & Weaknesses of Interviews

  • Participants can speak about sensitive information and provide detail for the researcher.
  • Compared with a focus group, participants may feel less uncomfortable because they do not risk being intimidated by other participants.

Question Types

  • Open questions allow the person to respond flexibly (there are no pre-chosen answers), though they can be time-consuming.
  • Closed questions require only short responses, such as yes or no, and are quicker to analyse.

Question Types (continued)

  • Descriptive questions are aimed at exploring the respondent's state.
  • A structured question would be a simple one, e.g., "Do you like races?"
  • Well-chosen questions improve data collection.

Listening and Data Collection

  • Make sure you have permission to record; it is important to establish a good fit and some rapport with the participant.
  • Active listening is needed, along with attention to body language and checking understanding.

Recording

  • Recordings need to be transcribed, and non-verbal behaviour also needs to be recorded.

Debriefing

  • Debriefing covers what will happen with the data and when participants will receive feedback from the researcher.
  • Both careful data collection and debriefing are vital.

Analysis

  • Thematic analysis is used to find patterns of meaning (themes) in the data.
  • The approach to analysis depends on the data collected.

Conclusion

  • Interviews sit alongside other tools such as surveys and case studies.

Final

  • It is vital to maintain a code of ethics when running an experiment.
  • Harm can be physical, social, or psychological.

Rules

  • All ethical principles regarding participation and participants' needs are important; harm may occur through how data are collected and used.

What to do

  • Maintain confidentiality and respect the right to withdraw to ensure the research is conducted properly.
