Introduction To Neuroimaging: Eye-Tracking PDF

Summary

This presentation by Esperanza Badaya at Ghent University (2023) covers the basics of eye-tracking, its applications, and the paradigms used in psychology. It describes the main eye-tracking measures, how they are operationalised for different research questions, how eye-trackers work, and the cognitive processes they are used to study.

Full Transcript


DEPARTMENT OF EXPERIMENTAL PSYCHOLOGY
INTRODUCTION TO NEUROIMAGING: EYE-TRACKING
Esperanza Badaya, 13th Oct 2023

OUTLINE + HOUSE-TIDYING
Part 1: Eye-tracking basics
● What is eye-tracking
● Why do we do it
  ○ Human visual system
  ○ Visual attention
  ○ Why do we care in psychology
● Measures in eye-tracking
● How do we eye-track
  ○ The past
  ○ The present
  ○ How do eye-trackers work
● Pros and cons
Part 2: Eye-tracking in psychology
● Paradigms
  ○ Anti-saccade
  ○ Gaze-contingent
  ○ Change detection
  ○ Visual World Paradigm
  ○ Scene perception
  ○ Natural reading
  ○ Natural environments
Wrap up & questions

EYE-TRACKING: WHAT AND WHY
● Eye-tracking: a non-invasive technique to explore cognitive processes as they unfold (i.e., online processing).
● Main idea: record individuals' eye(s) as they are presented with stimuli (visual, or visual + auditory).

THE HUMAN VISUAL SYSTEM
● Light enters the eye via the pupil.
● The lens focuses the light onto the retina.
● The retina is the photosensitive layer; it contains two types of photoreceptor with different properties:
  ○ Cones: color vision; well-illuminated conditions; highest density in the fovea.
  ○ Rods: black-and-white vision; low-light conditions; higher density in the periphery, outside the fovea.
● Fovea: where our vision is sharpest, yet only a small section of our visual field.

EYE MOVEMENTS
Eye movements are an outcome of the eyes' anatomy: we perceive objects at their sharpest in the fovea.
● NB: this does not mean we cannot perceive what falls outside the fovea; parafoveal processing is a thing.

Why do we care?
● Linking hypothesis: eye movements index attention.
● Bottom-up and top-down processes:
  ○ Details of the stimulus that attract individuals' attention.
  ○ Individuals' strategies → the active viewer.
● Yarbus (1967): the same picture is scanned very differently depending on the task, e.g., free viewing, estimating the individuals' ages, or guessing what the people were doing before the visitor arrived.

EYE-TRACKING: MEASURES

FIXATIONS
When our eye 'stops'.
● Automatic, physiological response (i.e., not under our control).
● The eye is relatively stable.
  ○ Assumption: processing of whatever is being looked at.
● Average duration: 200-300 ms.
● Drift & microsaccades: a slow movement away from the center of a fixation, followed by small movements back to the original center.

SACCADES
"Jerky" movements: when our eyes 'move' from one fixation to another. During this time we are effectively blind (saccadic suppression).
● Duration: 30-80 ms.
● It takes around 100-200 ms to launch a saccade (Matin et al., 1993).
● The eyes can move forwards and backwards; backward movements are called regressions.
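The fixation and saccade figures above already suggest how raw gaze samples can be parsed into events, a step the tracker software performs and which comes up again later in the talk. Below is a minimal, illustrative Python sketch of a velocity-threshold classifier; the 300 Hz sample rate, the 30 deg/s threshold, and the helper names are assumptions for illustration, not values taken from the slides.

```python
# Toy velocity-threshold event parser (I-VT-style sketch).
# Assumptions (not from the slides): gaze samples in degrees of visual angle,
# a fixed sample rate, and a 30 deg/s velocity threshold.
import math

SAMPLE_RATE_HZ = 300          # e.g., one sample every ~3.3 ms
VELOCITY_THRESHOLD = 30.0     # deg/s; faster-moving samples count as saccadic

def classify_samples(xs, ys):
    """Label each sample 'fix' or 'sac' from sample-to-sample velocity."""
    dt = 1.0 / SAMPLE_RATE_HZ
    labels = ["fix"]  # first sample has no velocity estimate; assume fixation
    for i in range(1, len(xs)):
        dist = math.hypot(xs[i] - xs[i - 1], ys[i] - ys[i - 1])
        labels.append("sac" if dist / dt > VELOCITY_THRESHOLD else "fix")
    return labels

def fixation_durations(labels):
    """Collapse consecutive 'fix' samples into fixation durations (ms)."""
    durations, run = [], 0
    for label in labels + ["sac"]:          # sentinel flushes the last run
        if label == "fix":
            run += 1
        elif run:
            durations.append(run * 1000.0 / SAMPLE_RATE_HZ)
            run = 0
    return durations

# Example: a steady fixation, a fast jump, then another fixation.
xs = [0.0] * 60 + [0.5, 2.0, 4.0] + [5.0] * 90
ys = [0.0] * len(xs)
labels = classify_samples(xs, ys)
print(fixation_durations(labels))  # two fixations of roughly 200 ms and 300 ms
```

Real systems use more robust event-detection algorithms (and handle blinks, noise and smooth pursuit), but the principle is the same: slow-moving samples are grouped into fixations, fast-moving ones into saccades.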
BLINKS
People blink.
● Blinks are surrounded by saccades.
● The pupil changes as the eyelids open and close.

SMOOTH PURSUIT
Moving a fixation, as when following a moving target.
● Slower than a saccade, but bounded by the velocity of the target being followed.
● Asymmetrical: better at horizontal than at vertical pursuit.

PUPIL SIZE
The pupil dilates for reasons other than light, e.g., cognitive effort.
● Linking hypothesis: pupil size reflects the effort exerted.
● Other factors affect pupil size and are worth controlling for.
● Not yet very common in language research, but cf. Mathôt et al. (2017); Porretta & Tucker (2019).

OPERATIONALISATIONS
These eye events can later be operationalised as a function of the research question, in terms of space, time, and their interaction.

Qualitative measurements:
● Scan path
● Bee swarm
● Heatmap

Quantitative measurements:
● Reading:
  ○ Timed measures: first fixation duration, total reading time.
  ○ Probabilistic measures: skipping, regression.
● Other measures: saccadic amplitude, number of regressions/fixations, pupil size, proportion of fixations, ...
● Examples: Badaya et al. (in prep); Clifton et al. (2007).
(A toy sketch of computing some of these reading measures from a fixation sequence appears further below, just after the paradigms overview.)

EYE-TRACKING: HOW

EYE-TRACKING IN THE PAST
Louis Émile Javal (1879)
● 'Naked eye' observations.
● Pattern in reading: a stop-start pattern.
Edmund Huey (1908)
● Primitive 'eye-tracking' device.
● The sound of a tube placed over the eyes as a measurement of eye movements.
Alfred Yarbus
● Suction cups.
● Scan paths.

EYE-TRACKING TODAY
Different methods exist.
● Most popular: video-based combined pupil and corneal reflection.

HOW DO EYE-TRACKERS WORK?
Procedure:
● Record the eye.
● The software outputs the x and y coordinates of the eye on the screen.
● Match these with the experimental stimuli to map gaze onto areas of interest.
● Parse eye positions into events (i.e., measurements).
  ○ e.g., many gaze points on an area, close together in time → a fixation.

Record the eye → sample rate.
● The number of times the tracker measures the eye position per second.
  ○ 300 Hz = one data point roughly every 3 ms.
  ○ Interplay with the measure of interest: if a fixation lasts 300 ms and a saccade 20 ms:
    ■ 30 Hz: no samples fall within the saccade, around 9 within the fixation.
    ■ 300 Hz: roughly 6-10 samples fall within the saccade, around 90-100 within the fixation.

TYPES OF EYE-TRACKERS

COMBINING EYE-TRACKING

PROS & CONS

PROS OF EYE-TRACKING
● Versatile.
● Easy.
● "Similar" to daily-life scenarios.
● Accessible for many populations.

CONS OF EYE-TRACKING
● Expensive (both in terms of money and time).
  ○ But still less expensive than other techniques.
● Lab versus natural testing.
● Participants' exclusion criteria (e.g., glasses).

EYE-TRACKING IN PSYCHOLOGY

PARADIGMS
NB: You can also eye-track while participants perform other tasks (e.g., visual search).
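Before turning to the individual paradigms, here is the sketch promised in the operationalisation section above: a toy derivation of first fixation duration, total reading time, skipping and regressions from fixations that have already been mapped onto word-level areas of interest. The input format and variable names are assumptions for illustration; in practice these measures come from the tracker's analysis software or dedicated packages.

```python
# Toy derivation of word-level reading measures from fixations already
# mapped onto areas of interest (AOIs). The input format is an assumption:
# a chronological list of (aoi_index, duration_ms) pairs, one per fixation.

def reading_measures(fixations, n_words):
    """Return per-word first fixation duration, total reading time,
    skipping, and whether a regression was launched into the word."""
    first_fix = [None] * n_words   # ms, first time the word is fixated
    total_time = [0] * n_words     # ms, summed over all fixations
    regressed_into = [False] * n_words
    max_word_reached = -1          # rightmost word fixated so far

    for aoi, duration in fixations:
        if first_fix[aoi] is None:
            first_fix[aoi] = duration
        total_time[aoi] += duration
        if aoi < max_word_reached:           # moving back in the text
            regressed_into[aoi] = True
        max_word_reached = max(max_word_reached, aoi)

    skipped = [first_fix[w] is None for w in range(n_words)]
    return first_fix, total_time, skipped, regressed_into

# Example: 5-word sentence; word 2 is skipped, word 1 receives a regression.
fixations = [(0, 210), (1, 250), (3, 230), (1, 180), (4, 260)]
ffd, trt, skip, regr = reading_measures(fixations, n_words=5)
print(ffd)   # [210, 250, None, 230, 260]
print(trt)   # [210, 430, 0, 230, 260]
print(skip)  # [False, False, True, False, False]
print(regr)  # [False, True, False, False, False]
```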
ANTI-SACCADE PARADIGM
Task: Participants are instructed to look in the opposite direction of a target, to investigate voluntary and flexible control of eye movements.
Interest: Saccades, operationalised as pro-saccades (looks towards the target) versus anti-saccades (looks away from the target), measured via:
● Latency
● Proportion of errors
Relevant in:
● Neurological and psychiatric disorders
● Development

GAZE-CONTINGENT PARADIGMS
Task: The presentation of the stimuli is contingent on where participants fixate, to investigate foveal and parafoveal processing.
● Perceptual span → the region from which observers obtain useful information during an eye fixation.
● An overt response from participants is not necessary.
In fact, this is a set of gaze-contingent paradigms:
● Moving window
● Moving mask
● Boundary paradigm
● Flash-preview moving window

Moving window paradigm (McConkie & Rayner, 1975); see the toy sketch at the end of this section.
● Manipulations:
  ○ Size of the visible window.
  ○ Language properties (e.g., frequency).
● Asymmetric perceptual span: skewed to the right for Western, left-to-right languages.
Moving mask paradigm (Rayner & Bertera, 1979).
Boundary paradigm (Rayner, 1975).

Interests: Fixations, saccades; operationalised as saccadic amplitude, fixation durations, number of fixations, etc.
● In reading: how do patterns change depending on how much information could be processed?
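To make the moving-window idea concrete, here is a minimal, display-free Python sketch: given the character the reader is currently fixating, everything outside an asymmetric window is masked, in the spirit of McConkie & Rayner (1975). The window sizes, the masking character and the example sentence are illustrative assumptions; a real experiment would update an actual display contingent on live gaze samples.

```python
# Toy moving-window masking (McConkie & Rayner, 1975, in spirit).
# Assumption: gaze position is given as a character index into the text;
# the window extents (4 left, 14 right) and the 'x' mask are illustrative.

def moving_window(text, fixated_char, chars_left=4, chars_right=14):
    """Return the text with everything outside the window masked."""
    start = max(0, fixated_char - chars_left)
    end = min(len(text), fixated_char + chars_right + 1)
    masked = [c if c == " " else "x" for c in text]   # keep spaces for layout
    masked[start:end] = text[start:end]               # reveal the window
    return "".join(masked)

sentence = "Graduate students love reading about eye movements"
print(moving_window(sentence, fixated_char=9))
# -> 'xxxxxate students love rxxxxxx xxxxx xxx xxxxxxxxx'
```

Shrinking the window until reading slows down is how the perceptual span is estimated; varying how far the window extends to the left versus the right probes its asymmetry.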
CHANGE DETECTION
Task: Look at a scene and detect changes (e.g., McConkie & Currie, 1996; Rensink, 2002).
Interest: Time needed to detect a change.
● Viewers can inspect an image for a minute without noticing the change.
● Newer variant: the change is introduced during a flash (exploiting saccadic suppression).
● We do not perceive everything in our direct environment.

VISUAL WORLD PARADIGM
Task: Auditory and visual stimuli are presented together, with the goal of understanding how the spoken input guides attention around the visual scene.
● An overt response from participants is not necessary.
● Variations are possible (e.g., written words instead of images).
● Affected by linguistic and non-linguistic factors.

Cooper (1974), later popularised by Tanenhaus et al. (1995):
'While on a safari in Africa [...] I noticed a hungry lion slowly moving through the tall grass toward a herd of grazing zebra.'

Auditory stimuli (i.e., the time window of analysis):
● Eye movements in the VWP are time-locked to the speech signal.
  ○ How auditory information guides attention throughout a visual scene.

Visual stimuli (areas of interest):
● At least two: the referent (or target) and a distractor; typically between 2 and 5 objects.
● Commonly, one of the objects on the screen will be named.
● You can have more than one distractor, and the distractors can relate to the target in different ways (e.g., Ito et al., 2018).
● The objects can also be line drawings.
  ○ Less effect of word knowledge; activation of conceptual and lexical knowledge of words.
● Or even words!
  ○ Sensitive to phonological information and orthographic processing, with less effect of conceptual knowledge.
  ○ Interesting when you are looking at the processing of abstract words.

Example: do people anticipate upcoming words based on verb semantics? (Altmann & Kamide, 1999)
● Display: a referent (the cake) and three distractors.
● DV: Fixations to the objects on screen.
● IV: Verb semantics (constraining vs. neutral): "The boy will eat the cake" vs. "The boy will move the cake".
● Finding: Fixations land on the cake before it is said, when the verb is constraining.

Interest: Fixations (usually over time), pupil size (relevant for auditory stimuli), saccade latency. (A toy sketch of this kind of time-course analysis appears at the end of this part, just before the wrap-up.)
● Looking at something = an output of comprehending speech.
  ○ The difference in time between hearing 'cake' and fixating on the cake reflects lexical access.
● Fixation duration = considering whether the object is the referent of the sound (or the intended meaning).
  ○ The amount of time spent on the 'cake' as opposed to the 'ball' reflects commitment to a representation.

Relevant in many areas of psycholinguistics (see Huettig et al., 2011), e.g.:
● Phonological level (e.g., beetle vs. beaker; Allopenna et al., 1998).
● Pragmatic level (e.g., adjective informativeness depends on the speaker's reliability; Grodner et al., 2010).
● Paralinguistic cues (e.g., new versus given information following a disfluency; Arnold et al., 2004).

SCENE PERCEPTION
Task: Participants are shown a complex image to explore the allocation of attention and visual processing.
● An overt response from participants (or even a task) is not necessary.
● Early studies: fixations fall on 'interesting and informative' regions (Henderson, 2003).
  ○ Bottom-up, stimulus-based versus top-down, memory-based versus task-related control.
Interests: Fixations (position, duration) (Castelhano & Henderson, 2009).

NATURAL READING
Task: Read (silently) a piece of text, with the aim of exploring written language comprehension.
● An overt response from participants is not necessary.
● Potential manipulations: language, word length, word frequency, etc.
● Corpora: huge datasets of eye-movement data (e.g., GECO, Cop et al., 2017; MECO, Siegelman et al., 2022).

Interest: Fixations, saccades; operationalised as number of fixations, fixation duration, regressions, skipping, ... (early vs. intermediate vs. late measures).
● Early measures: automatic recognition and lexical processes.
  ○ First pass: the first reading.
● Late measures: more conscious, strategic processes.
  ○ Second and third passes.
● But cf. Pickering et al. (2004) on the nature of this linking hypothesis.
The more difficult the text:
● The longer the fixation durations.
● The smaller the saccade sizes.
● The more frequent the regressions.
● The less frequent the skipping of words.

NATURAL ENVIRONMENT
Task: Have participants perform actions as they occur in daily life.
● Applicable across a wide range of contexts.
  ○ e.g., eye-tracking while driving (Balk et al., 2006).
● Oftentimes with head-mounted eye-trackers.
Interests: Fixations (duration, place).

"NATURAL" ENVIRONMENT
Virtual reality allows for rich, visual and immersive scenes.
● Eichert et al. (2018): prediction in more naturalistic settings (an emulation of Altmann & Kamide, 1999).
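As flagged in the visual world paradigm section above, the typical dependent measure is the proportion of looks to each object in small time bins, time-locked to a point in the speech (e.g., verb onset in Altmann & Kamide, 1999). The sketch below computes such a time course from already-coded fixations; the data format, bin width and object labels are illustrative assumptions, not part of the original slides.

```python
# Toy VWP time-course: proportion of trials fixating each object per time bin,
# time-locked to a reference point in the audio (e.g., verb onset = 0 ms).
# Assumption: each trial's fixations are pre-coded as (object, start_ms, end_ms).
from collections import defaultdict

OBJECTS = ["target", "distractor"]   # illustrative AOI labels
BIN_MS = 50                          # bin width

def fixated_object(trial_fixations, t):
    """Which object (if any) is fixated at time t in this trial?"""
    for obj, start, end in trial_fixations:
        if start <= t < end:
            return obj
    return None

def time_course(data, t_min=0, t_max=600):
    """data: {trial_id: [(object, start_ms, end_ms), ...]} -> bin -> proportions."""
    course = {}
    for b in range(t_min, t_max, BIN_MS):
        counts = defaultdict(int)
        for trial_fixations in data.values():
            obj = fixated_object(trial_fixations, b)
            if obj is not None:
                counts[obj] += 1
        course[b] = {obj: counts[obj] / len(data) for obj in OBJECTS}
    return course

# Two illustrative trials: with a constraining verb ("eat"), looks move to the
# target (the cake) before the noun is heard (~500 ms here).
data = {
    1: [("distractor", 0, 200), ("target", 260, 600)],
    2: [("distractor", 0, 350), ("target", 410, 600)],
}
for t, props in time_course(data).items():
    print(t, props)
```

Anticipation then shows up as the target proportion rising above the distractor proportion before the noun itself is heard.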
CONCLUSION / WRAP-UP
● Eye-tracking is a non-invasive technique to study the unfolding of cognitive processes.
● Linking hypothesis: eye movements are a window into the mind via attention.
● Basic eye events: fixations, saccades, blinks, smooth pursuit and pupil size.
● These measures are then re-operationalised depending on the research question.
● Advantages: versatile, accessible. Disadvantages: expensive, ecological validity.
● Applicable in many areas inside and outside of psychology.

THANK YOU FOR YOUR ATTENTION! QUESTIONS?

REFERENCES
Allopenna, P. D., Magnuson, J. S., & Tanenhaus, M. K. (1998). Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. Journal of Memory and Language, 38(4), 419-439.
Altmann, G. T., & Kamide, Y. (1999). Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3), 247-264.
Arnold, J. E., Tanenhaus, M. K., Altmann, R. J., & Fagnano, M. (2004). The old and thee, uh, new: Disfluency and reference resolution. Psychological Science, 15(9), 578-582.
Balk, S. A., Moore, K. S., Steele, J. E., Spearman, W., & Duchowski, A. T. (2006). Mobile phone use in a driving simulation task: Differences in eye movements. Journal of Vision, 6(6), 872.
Castelhano, M. S., Mack, M. L., & Henderson, J. M. (2009). Viewing task influences eye movement control during active scene perception. Journal of Vision, 9(3), 6. doi:10.1167/9.3.6
Clifton, C., Staub, A., & Rayner, K. (2007). Eye movements in reading words and sentences. In R. van Gompel, M. H. Fischer, W. S. Murray, & R. L. Hill (Eds.), Eye movements: A window on mind and brain (pp. 341-372). Oxford, UK: Elsevier.
Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology.
Cop, U., Dirix, N., Drieghe, D., & Duyck, W. (2017). Presenting GECO: An eyetracking corpus of monolingual and bilingual sentence reading. Behavior Research Methods, 49(2), 602-615. https://doi.org/10.3758/s13428-016-0734-0
Eichert, N., Peeters, D., & Hagoort, P. (2018). Language-driven anticipatory eye movements in virtual reality. Behavior Research Methods, 50, 1102-1115.
Grodner, D. J., Klein, N. M., Carbary, K. M., & Tanenhaus, M. K. (2010). "Some," and possibly all, scalar inferences are not delayed: Evidence for immediate pragmatic enrichment. Cognition, 116(1), 42-55.
Henderson, J. M. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7(11), 498-504.
Huettig, F., Rommers, J., & Meyer, A. S. (2011). Using the visual world paradigm to study language processing: A review and critical evaluation. Acta Psychologica, 137(2), 151-171.
Ito, A., Pickering, M. J., & Corley, M. (2018). Investigating the time-course of phonological prediction in native and non-native speakers of English: A visual world eye-tracking study. Journal of Memory and Language, 98, 1-11.
Javal, E. (1879). Essai sur la physiologie de la lecture. Annales d'oculistique, 82, 242-25.
Matin, E., Shao, K. C., & Boff, K. R. (1993). Saccadic overhead: Information-processing time with and without saccades. Perception & Psychophysics, 53, 372-380.
McConkie, G. W., & Currie, C. B. (1996). Visual stability across saccades while viewing complex pictures. Journal of Experimental Psychology: Human Perception & Performance, 22(3), 563-581.
McConkie, G. W., & Rayner, K. (1975). The span of the effective stimulus during a fixation in reading. Perception & Psychophysics, 17, 578-586.
Mathôt, S., Grainger, J., & Strijkers, K. (2017). Pupillary responses to words that convey a sense of brightness or darkness. Psychological Science, 28(8), 1116-1124.
Papoutsaki, A., Laskey, J., & Huang, J. (2017, March). SearchGazer: Webcam eye tracking for remote studies of web search. In Proceedings of the 2017 Conference on Human Information Interaction and Retrieval (pp. 17-26).
Pickering, M. J., Frisson, S., McElree, B., & Traxler, M. J. (2004). Eye movements and semantic composition. In M. Carreiras & C. Clifton Jr. (Eds.), The On-line Study of Sentence Comprehension. Psychology Press.
Porretta, V., & Tucker, B. V. (2019). Eyes wide open: Pupillary response to a foreign accent varying in intelligibility. Frontiers in Communication, 4, 8.
Rayner, K. (1975). The perceptual span and peripheral cues in reading. Cognitive Psychology, 7(1), 65-81.
Rayner, K., & Bertera, J. H. (1979). Reading without a fovea. Science, 206(4417), 468-469.
Rensink, R. A. (2002). Change detection. Annual Review of Psychology, 53(1), 245-277. doi:10.1146/annurev.psych.53.100901.135125
Siegelman, N., Schroeder, S., Acartürk, C., Ahn, H. D., Alexeeva, S., Amenta, S., Bertram, R., Bonandrini, R., Brysbaert, M., Chernova, D., Da Fonseca, S. M., Dirix, N., Duyck, W., Fella, A., Frost, R., Gattei, C. A., Kalaitzi, A., Kwon, N., Lõo, K., Marelli, M., … Kuperman, V. (2022). Expanding horizons of cross-linguistic research on reading: The Multilingual Eye-movement Corpus (MECO). Behavior Research Methods, 1-21. Advance online publication. https://doi.org/10.3758/s13428-021-01772-6
Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632-1634.
Trueswell, J. C. (2008). Using eye movements as a developmental measure within psycholinguistics. Language Acquisition and Language Disorders, 44, 73.
Yarbus, A. L. (1967). Eye movements and vision. New York: Plenum.
