Cognition and Neuroscience Module 2
35 Questions

Questions and Answers

What are the two main tasks for vision?

  • Tasting food
  • Detecting smells
  • Object recognition (correct)
  • Guiding movement (correct)

    What is the technique used to record the firing rate of neurons?

    Single-cell recording

    Which pathway is responsible for object recognition?

  • Extrastriate visual areas
  • Ventral pathway (correct)
  • Retino-geniculo-striate pathway
  • Dorsal pathway

    Dopamine is fully model-free according to the Reward Prediction Error theory.

    False

    Which type of neural network model predicts activity in area V4 well?

    HMO model

    Neurons that respond to a narrow range of orientations and spatial frequencies are called ________ cells.

    simple

    DCNNs show an internal feature representation similar to the representation of the ______ pathway.

    ventral

    Which type of images did primates outperform DCNNs on in object recognition?

    Challenge images

    Recurrent computation is not relevant for core object recognition.

    False

    Match the following neural network model components with their roles:

    Deep convolutional neural networks = Internal feature representation similar to the primate ventral pathway
    CORnet = Good predictor of early and late phases of IT
    RNNh = Higher performance in pattern completion

    What are monkeys required to do in the memory-guided saccade task?

    Remember the location of a briefly flashed target and, after a delay, make a saccade to that remembered location.

    What is the relationship between the reward probability and the post-reward trial number (PNR) in the experiments?

    The reward probability increases with PNR

    Dopamine neurons are less active if the reward is delivered later.

    True

    The animal's reward prediction is expected to increase after each __________ trial.

    non-rewarded

    Match the following terms with their descriptions:

    Sensory prediction error (SPE) = Generalized prediction error over sensory features
    Successor representation (SR) = Expected occupancy of a state when starting from another state

    What are some reasons that make decisions inherently non-deterministic according to the content? (Select all that apply)

    Agents make choices unaware of the full consequences

    Define perceptual decision-making.

    Perceptual decision-making is the process by which an agent selects between actions based on weak or noisy external sensory signals.

    Decision-making involves the following processes: Representation, Valuation, Choice, ________, and Learning.

    Outcome evaluation

    Dopamine response can vary depending on reward probability. Is this statement true or false?

    True

    Match the dopamine pathway with its associated function:

    Nigrostriatal system = Mostly associated with motor functions
    Meso-cortico-limbic system = Mostly associated with motivation

    What are the stages of computation in simple cells?

    Linear filtering (a weighted sum of image intensities given by the receptive field, i.e. a convolution), followed by rectification to determine whether the neuron fires.

    What are the characteristics of complex cells?

    Respond to linear stimuli with specific orientation and movement direction.

    Complex cells exhibit position invariance.

    True

    End-stopped cells respond to short segments, long curved lines, or ___.

    angles

    Match the following areas with their functions:

    Area V4 = Visual object recognition and visual attention
    Inferior temporal cortex (IT) = Object perception and recognition

    What is a training environment that exposes the learning system to interrelated tasks?

    An environment that exposes the learning system to a series of interrelated tasks, driving it to use the short-term system as a basis for fast learning (meta-learning).

    What is the primary usage of Python?

    General-purpose programming

    The prefrontal cortex is involved in meta-learning.

    True

    What is the result of the case study on RNN meta-learning?

    The agent learns to balance exploration and exploitation and, after some training, can handle new bandit problems.

    Dopamine neurons report an error in the ______________ of reward during learning.

    temporal prediction

    In reinforcement learning, what is a key difference from supervised learning?

    The agent has to learn the best policy in the distribution of policies

    What does the Bias-variance tradeoff in neural networks refer to?

    Neural networks have a weak inductive bias. A learning procedure with a weak inductive bias (and large variance) is able to learn a wide range of patterns but is generally less sample-efficient.

    Deep reinforcement learning uses a neural network to learn the representation of the environment and the __________ for solving an RL problem.

    policy

    Episodic RL is a non-parametric RL approach that learns from future experiences.

    False

    What is the main function of the Hippocampus in the Complementary learning systems (CLS) theory?

    Rapidly learn spatial and non-spatial features of a particular experience

    Study Notes

    Object Recognition

    • Definition: Vision process that produces a description of the world without irrelevant information, including what is in the world and where it is.
    • Importance: Vision is the most important sense in primates, involved in memory and thinking.
    • Two main tasks:
      • Object recognition
      • Guiding movement
    • Bayesian modeling: Ideal observer uses prior knowledge and sensory data to infer the most probable interpretation of a stimulus.
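
As a concrete illustration of the ideal-observer idea, the sketch below combines a prior with a likelihood via Bayes' rule to pick the most probable interpretation of an ambiguous stimulus. The interpretations and all probabilities are made-up numbers, not values from the lesson.

```python
import numpy as np

# Minimal ideal-observer sketch (hypothetical numbers): two candidate
# interpretations of an ambiguous stimulus, combined via Bayes' rule.
interpretations = ["convex shape", "concave shape"]
prior = np.array([0.7, 0.3])          # prior knowledge (e.g., "light comes from above")
likelihood = np.array([0.4, 0.6])     # probability of the sensory data under each interpretation

posterior = prior * likelihood
posterior /= posterior.sum()          # normalize so the posterior sums to 1

best = interpretations[np.argmax(posterior)]
print(dict(zip(interpretations, posterior.round(3))), "->", best)
```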

    Vision Levels

    • Level 1: Low-level processes (local contrast, orientation, color, depth, and motion)
    • Level 2: Intermediate-level processes (integrating local features into global image, identifying boundaries and surfaces)
    • Level 3: High-level processes (object recognition, associating objects with memories and meaning)

    Pathways

    • Retino-geniculo-striate pathway: responsible for visual processing (retina, LGN, V1, extrastriate areas)
    • Ventral pathway: object recognition (extends from V1 to temporal lobe, feed-forward processing)
    • Dorsal pathway: movement guiding (connects V1 with parietal lobe and frontal lobe)

    Neuron Receptive Field

    • Receptive field: region of the visual scene at which a neuron will respond if a stimulus falls within it
    • Retinotopy: mapping of visual inputs from the retina to neurons in the visual areas
    • Eccentricity: the diameter of a receptive field grows in proportion to its eccentricity, i.e. its angular distance from the center of the visual field
    • Cortical magnification: more cortical space is dedicated to the central part of the visual field

    Retina Cells

    • Photoreceptor: specialized neurons that are hyperpolarized in bright regions and depolarized in dark regions
    • Retinal ganglion cell (RGC): neurons with circular receptive fields, categorized into ON-center and OFF-center cells
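
A center-surround receptive field of the kind described for retinal ganglion cells is commonly approximated as a difference of Gaussians. The sketch below builds an ON-center kernel with illustrative sizes (the sigma values are assumptions, not lesson parameters) and shows that it responds to a small bright spot but barely to uniform illumination.

```python
import numpy as np

# ON-center receptive field modeled as a difference of Gaussians (a standard
# textbook approximation): a narrow excitatory center minus a broad inhibitory surround.
size, sigma_c, sigma_s = 21, 1.5, 4.0
y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
center = np.exp(-(x**2 + y**2) / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
surround = np.exp(-(x**2 + y**2) / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
rf = center - surround                      # ON-center kernel; negate it for an OFF-center cell

bright_spot = np.zeros((size, size)); bright_spot[size // 2, size // 2] = 1.0
uniform = np.ones((size, size))
print("response to small bright spot:", float((rf * bright_spot).sum()))  # clearly positive
print("response to uniform light:   ", float((rf * uniform).sum()))       # near zero
```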

    Area V1 Cells

    • Simple cells: respond to a narrow range of orientations and spatial frequencies
    • Complex cells: respond to linear stimuli with a specific orientation and movement direction
    • End-stopped (hypercomplex) cells: respond to short segments, long curved lines, or angles
    • Ice cube model: each 1 mm of the visual cortex can be modeled as an ice cube module containing all the neurons needed to decode information at a specific location
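
A minimal sketch of the two-stage simple-cell computation listed above (linear filtering with an oriented receptive field, then rectification) and of complex-cell position invariance obtained by pooling simple-cell responses over nearby positions. The Gabor filter, shift range, and stimulus are illustrative assumptions, not parameters from the lesson.

```python
import numpy as np

def gabor(size=15, theta=0.0, freq=0.2, sigma=3.0):
    """Oriented Gabor kernel: a standard model of a simple-cell receptive field."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def simple_cell(image, kernel):
    """Stage 1: linear filtering (weighted sum over the receptive field).
    Stage 2: half-wave rectification deciding whether the cell fires."""
    drive = float((image * kernel).sum())
    return max(drive, 0.0)

def complex_cell(image, theta):
    """Position invariance: pool (max) simple-cell responses over shifted copies
    of the same oriented filter."""
    responses = []
    for shift in range(-3, 4):
        responses.append(simple_cell(np.roll(image, shift, axis=1), gabor(theta=theta)))
    return max(responses)

# Illustrative input: a vertical bar (hypothetical stimulus).
img = np.zeros((15, 15)); img[:, 7] = 1.0
print("simple cell, preferred orientation:", simple_cell(img, gabor(theta=0.0)))
print("complex cell (shifted bar):        ", complex_cell(np.roll(img, 2, axis=1), theta=0.0))
```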

    Extrastriate Visual Areas

    • Areas outside the primary visual cortex (V1), responsible for object recognition
    • Area V4: intermediate cortical area for visual object recognition and attention
    • Inferior temporal cortex (IT): responsible for object perception and recognition

    Object Recognition

    • Core object recognition: ability to rapidly discriminate a given visual object from all other possible objects
    • Selectivity: different responses to distinct specific objects
    • Consistency: similar responses to transformations of the same object
    • View-dependent unit: responds only to objects at specific points of view
    • View-invariant unit: responds regardless of the position of the observer

    Local vs Distributed Coding

    • Local coding hypothesis: IT neurons are gnostic units that are activated only when a particular object is recognized
    • Distributed coding hypothesis: recognition is due to the activation of multiple IT neurons

    IT Neurons and Object Recognition

    • The best offset from the stimulus onset to measure IT neurons is 125 ms.
    • The visual ventral pathway, responsible for object recognition, also encodes information on object size.
    • A machine learning algorithm can extract this information from neural readings, hinting at the ventral pathway's contribution to identifying object location and size.
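
The claim that a machine learning algorithm can read object information out of neural recordings can be illustrated with a linear decoder on simulated IT-like population responses. Everything here (population size, noise level, use of LogisticRegression) is a hypothetical stand-in for the actual recordings and algorithms used in the studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated (hypothetical) IT population: 100 neurons, 300 presentations of 3 objects.
# Each object evokes a characteristic mean response pattern plus trial-to-trial noise.
n_neurons, n_trials, n_objects = 100, 300, 3
patterns = rng.normal(size=(n_objects, n_neurons))
labels = rng.integers(0, n_objects, size=n_trials)
responses = patterns[labels] + rng.normal(scale=1.5, size=(n_trials, n_neurons))

# A simple linear decoder stands in for the "machine learning algorithm" reading
# object identity out of the population.
X_tr, X_te, y_tr, y_te = train_test_split(responses, labels, test_size=0.3, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("decoding accuracy:", decoder.score(X_te, y_te))
```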

    Artificial Neural Networks to Predict Neuronal Activity

    • Different neural networks are trained on image recognition tasks and compared to neuronal activity in the brain.
    • The networks should have the following properties:
      • Provide information useful for behavioral tasks (like IT neurons).
      • Have layers corresponding to areas on the ventral pathway.
      • Be able to predict the activation of single and groups of biological neurons.
    • A dataset of images is divided into training and test sets, with varying levels of difficulty and random backgrounds.

    Neural Network Training and Evaluation

    • Hierarchical convolutional neural networks (HCNNs) are used for the experiments.
    • HCNNs are composed of linear-nonlinear layers, including filtering, activation, pooling, and normalization.
    • Models are divided into groups based on random sampling, high-variation image performance, and IT neural predictivity.
    • Evaluation is based on object recognition performance and on partial least squares regression, which measures a network's ability to predict neuronal activity.
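
A sketch of the neural-predictivity measurement described above: partial least squares regression maps a model layer's image features onto neuronal responses, and predictivity is scored on held-out images. The data are simulated, and the number of components, noise level, and train/test split are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical data: model features for 500 images (one layer of an HCNN) and
# responses of 50 neurons to the same images. Here the "neural" data is simulated
# as a noisy linear mixture of the features.
features = rng.normal(size=(500, 200))
mixing = rng.normal(size=(200, 50))
neural = features @ mixing + rng.normal(scale=5.0, size=(500, 50))

X_tr, X_te, y_tr, y_te = train_test_split(features, neural, test_size=0.25, random_state=1)
pls = PLSRegression(n_components=25).fit(X_tr, y_tr)
pred = pls.predict(X_te)

# Neural predictivity: correlation between predicted and held-out responses, per neuron.
r = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(y_te.shape[1])]
print("median predictivity (r):", float(np.median(r)))
```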

    Results

    • The hierarchical modular optimization (HMO) model reaches human-like performance.
    • Higher categorization accuracy is associated with better explanation of IT neural activity.
    • No individual neural network parameter predicts IT activity better than task performance does.
    • Higher levels of the HMO model yield good prediction capabilities of IT and V4 neurons.

    Object Recognition Emulation through Neural Networks

    • Deep convolutional neural networks (DCNNs) show internal feature representations similar to the ventral pathway.
    • Object confusion in DCNNs is similar to behavioral patterns in primates.
    • However, DCNNs diverge from human behavior when the comparison is made at higher resolution (finer-grained behavioral measures).
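
One common way to quantify "similar internal representations" is representational similarity analysis: build a dissimilarity matrix over images from the model features and another from the neural data, then correlate them. The sketch below uses simulated responses; it illustrates the comparison logic, not the exact analysis of the studies summarized here.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between the
    response patterns evoked by each pair of images."""
    return 1.0 - np.corrcoef(responses)

# Hypothetical responses of a DCNN layer and an IT population to the same 20 images,
# generated from a shared latent structure so that the two RDMs should agree.
latent = rng.normal(size=(20, 10))
dcnn = latent @ rng.normal(size=(10, 300)) + 0.5 * rng.normal(size=(20, 300))
it = latent @ rng.normal(size=(10, 80)) + 0.5 * rng.normal(size=(20, 80))

iu = np.triu_indices(20, k=1)                       # use each image pair once
rho, _ = spearmanr(rdm(dcnn)[iu], rdm(it)[iu])
print("model-brain RDM similarity (Spearman rho):", round(float(rho), 3))
```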

    Recurrent Neural Networks

    • Recurrent neural networks (RNNs) may be involved in object recognition, especially in cases where feed-forward networks fail.
    • RNNs are able to solve some challenge images; those that remain unsolved are the ones with the longest object solution times (OSTs).
    • Recurrence can be seen as additional non-linear transformations in addition to those of the feed-forward phase.
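
The idea that recurrence adds further nonlinear transformations on top of the feed-forward sweep can be shown with a toy update loop. The weights and sizes below are random placeholders, not CORnet or any published architecture.

```python
import numpy as np

rng = np.random.default_rng(3)
relu = lambda x: np.maximum(x, 0.0)

# Illustrative weights: a feed-forward stage followed by a recurrent stage.
W_ff = rng.normal(scale=0.1, size=(64, 128))    # feed-forward projection
W_rec = rng.normal(scale=0.1, size=(64, 64))    # recurrent (lateral/top-down) weights

x = rng.normal(size=128)                        # input features for one image
h = relu(W_ff @ x)                              # feed-forward sweep (t = 0)
for t in range(4):                              # each recurrent step applies another
    h = relu(W_ff @ x + W_rec @ h)              # nonlinear transformation of the evidence
print("representation norm after recurrence:", float(np.linalg.norm(h)))
```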

    Pattern Completion

    • Pattern completion is the ability to recognize poorly visible or occluded objects.
    • Recurrent computation is hypothesized to be involved in pattern completion.
    • Human and RNN results show that subjects can robustly recognize whole and partial objects, but performance declines for partial objects in the masked condition.

    Unsupervised Neural Networks

    • Most models simulating the visual cortex are trained on supervised datasets, but this cannot explain how primates learn to recognize objects.
    • Unsupervised learning might explain what happens in between the representations at low-level visual areas and the representations learned at higher levels.
    • Contrastive embedding, an unsupervised method, achieves the best performance on object recognition tasks.
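
Contrastive embedding methods learn, without labels, to place two augmented views of the same image close together and views of different images far apart. Below is a minimal InfoNCE-style loss computation on random embeddings; the temperature and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Hypothetical embeddings of two augmented views of the same 8 images.
z1 = normalize(rng.normal(size=(8, 32)))
z2 = normalize(rng.normal(size=(8, 32)))

tau = 0.1                                    # temperature (illustrative value)
logits = z1 @ z2.T / tau                     # similarity of every view-1 to every view-2
# InfoNCE-style loss: each view-1 embedding should be most similar to its own view-2.
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
print("contrastive (InfoNCE) loss:", round(float(loss), 3))
```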

    Dopamine in Reinforcement Learning

    • Decision-making is a voluntary process that leads to the selection of an action based on sensory information.
    • Decisions are inherently non-deterministic due to inconsistent choices, uncertainty, and noisy internal and external signals.
    • Decision-making involves representation, valuation, choice, outcome evaluation, and learning.
    • Valuation circuitry involves neurons sensitive to reward value, spread throughout the brain.
    • Decision-making theories include economic learning, which involves the selection of an action with the maximum utility.

    Reinforcement Learning

    • Reinforcement learning involves learning a mapping between states and actions to maximize the expected cumulative future reward.
    • The Bellman equation is a fundamental concept in reinforcement learning, which describes the expected future reward given an action and state.
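
The Bellman equation referred to above can be written out as follows. The notation is the standard RL one (P is the transition distribution, R the reward, γ the discount factor, Q the action-value function), not notation taken from the lesson itself.

```latex
% Bellman optimality equation for the action-value function Q.
% Model-based RL learns the right-hand side (it needs the transition
% distribution P); model-free RL estimates Q on the left directly from experience.
Q^{*}(s,a) \;=\; \sum_{s'} P(s' \mid s,a)\,\Bigl[\, R(s,a,s') + \gamma \max_{a'} Q^{*}(s',a') \Bigr]
```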

    Model-Based and Model-Free Reinforcement Learning

    • Model-based reinforcement learning aims to learn the right-hand side of the Bellman equation, which requires knowing the state transition distribution.
    • Model-free reinforcement learning aims to directly learn the left-hand side of the Bellman equation by estimating the Q-function from experience.
    • Temporal difference learning is a type of model-free reinforcement learning that updates the Q-function based on the reward prediction error.
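
A minimal tabular Q-learning sketch of model-free temporal-difference learning: the reward prediction error `delta` drives the update of the Q-function, with no model of the transition distribution. The chain environment, learning rate, and discount factor are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Tiny made-up chain environment: states 0..3, actions 0 (left) / 1 (right),
# reward 1 only when reaching state 3. Purely illustrative.
n_states, n_actions, gamma, alpha, epsilon = 4, 2, 0.9, 0.1, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

for episode in range(500):
    s = 0
    while s != n_states - 1:
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        # Temporal-difference (reward prediction) error:
        delta = r + gamma * np.max(Q[s_next]) - Q[s, a]
        Q[s, a] += alpha * delta          # model-free update of the Q-function
        s = s_next

print(np.round(Q, 2))   # "right" actions end up with the higher values
```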

    Dopaminergic System

    • The dopaminergic system is involved in reinforcement learning, particularly in predicting natural rewards and addictive drugs.
    • Dopamine pathways include the nigrostriatal system, associated with motor functions, and the meso-cortico-limbic system, associated with motivation.
    • The actor/critic architecture is a model of reinforcement learning that consists of two components: the critic, which learns state values, and the actor, which maps states to actions.
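
A toy actor/critic sketch on a two-armed bandit: the critic tracks a state value, and the resulting prediction error (the dopamine-like teaching signal in this account) updates both the critic and the actor's action preferences. The reward probabilities, learning rates, and the policy-gradient-style preference update are illustrative choices, not the lesson's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(6)

# Minimal actor/critic on a two-armed bandit (illustrative reward probabilities).
p_reward = np.array([0.8, 0.2])     # hypothetical reward probability per action
V = 0.0                             # critic: value of the (single) state
prefs = np.zeros(2)                 # actor: action preferences
alpha_v, alpha_p = 0.1, 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for trial in range(2000):
    probs = softmax(prefs)
    a = rng.choice(2, p=probs)
    r = float(rng.random() < p_reward[a])
    delta = r - V                   # prediction error (no next state in a bandit)
    V += alpha_v * delta            # critic update
    prefs[a] += alpha_p * delta * (1 - probs[a])   # actor update (policy-gradient style)

print("learned value V:", round(V, 2), "| action probabilities:", np.round(softmax(prefs), 2))
```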

    Dopamine Properties

    • Phasic response: dopamine neurons show excitatory or inhibitory responses to stimuli, which can be interpreted as a reward prediction error.
    • Bidirectional prediction: dopamine captures both improvements (positive prediction error) and worsenings (negative prediction error) of the reward.
    • Transfer: dopaminergic activity shifts from responding to the reward to responding to the conditioned stimulus that predicts it.
    • Probability encoding: the dopaminergic response varies with the reward probability.
    • Temporal prediction: dopamine also accounts for the time the reward is expected to be delivered.
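
The transfer and temporal-prediction properties fall naturally out of temporal-difference learning. The simulation below (with made-up timings and learning rate) trains a value weight for each time step after CS onset; early in training the prediction error peaks at the reward, and after training it peaks at the conditioned stimulus.

```python
import numpy as np

# Minimal TD-learning simulation of the "transfer" property (illustrative timings):
# a conditioned stimulus (CS) appears at t = 5 and a reward arrives at t = 10.
T, cs_t, reward_t = 15, 5, 10
gamma, alpha, n_trials = 1.0, 0.2, 200

w = np.zeros(T)                      # one value weight per time step after CS onset

def run_trial(w):
    deltas = np.zeros(T)
    for t in range(T - 1):
        r = 1.0 if t + 1 == reward_t else 0.0          # reward delivered on entering t+1
        v_t = w[t] if t >= cs_t else 0.0               # value is only predicted after CS onset
        v_next = w[t + 1] if t + 1 >= cs_t else 0.0
        delta = r + gamma * v_next - v_t               # TD (reward prediction) error
        deltas[t + 1] = delta                          # registered at the time of the new input
        if t >= cs_t:
            w[t] += alpha * delta
    return deltas

early = run_trial(w.copy())          # untrained system
for _ in range(n_trials):
    deltas = run_trial(w)
late = deltas

print("early training: peak error at t =", int(np.argmax(early)))   # at the reward
print("late training:  peak error at t =", int(np.argmax(late)))    # shifted to the CS
```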

    Reward Prediction Error (RPE) Theory of Dopamine

    • Dopamine reflects the value of the observable state, which is a quantitative summary of future reward.
    • State values are directly learned through experience.
    • Dopamine only signals surprising events that bring a reward.
    • Dopamine does not make inferences on the model of the environment.

    Case Studies

    • Monkey saccade: dopamine neurons are less active if the reward is delivered later and more depressed if the reward is omitted.
    • Dopamine indirect learning: dopamine might also reflect values learned indirectly.
    • Dopamine RPE reflects inference over hidden states: dopamine responds to the change in the identity of the reward, even if the value remained the same.

    Generalized Prediction Error

    • Dopamine might be involved in a more general state prediction error.
    • Dopamine state change prediction: rats learn to associate new stimuli with rewards, and dopamine responds to the change in the identity of the reward.

    Successor Representation

    • Successor representation (SR) is a mapping of a state to the expected occupancy of future states.
    • Sensory prediction error (SPE) is a generalized prediction error over sensory features that estimates the successor representation.
    • SR learning predicts the value of a state by combining the efficiency of model-free approaches and some flexibility from model-based RL.
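
A minimal successor-representation sketch: the SR matrix is learned with a TD-style update driven by a state (sensory) prediction error, and state values are then obtained by combining it with separately learned rewards. The ring environment, random-walk policy, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# SR sketch on a made-up 4-state ring with a random-walk policy.
# M[s, s'] estimates the expected (discounted) future occupancy of s' starting from s.
n_states, gamma, alpha = 4, 0.9, 0.1
M = np.zeros((n_states, n_states))
w = np.zeros(n_states)                       # learned reward per state
reward = np.array([0.0, 0.0, 0.0, 1.0])      # hypothetical reward only in state 3

s = 0
for step in range(20000):
    s_next = (s + rng.choice([-1, 1])) % n_states      # random-walk policy
    onehot = np.eye(n_states)[s]
    # SR update driven by a state (sensory) prediction error rather than a reward error:
    spe = onehot + gamma * M[s_next] - M[s]
    M[s] += alpha * spe
    w[s_next] += alpha * (reward[s_next] - w[s_next])  # learn rewards per state
    s = s_next

V = M @ w                                    # value = expected occupancy times reward
print("state values:", np.round(V, 2))       # highest at and near the rewarded state
```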

    Distributional Reinforcement Learning

    • Distributional reinforcement learning aims to learn the full distribution of the expected reward instead of the mean expected reward.
    • In traditional temporal-difference learning, all predictors converge to similar values; in distributional temporal-difference learning, optimistic and pessimistic predictors emerge because positive and negative prediction errors are scaled differently.
    • The reversal point of a dopaminergic neuron is the reward magnitude at which its response switches from signaling a negative prediction error to signaling a positive one.
    • Case study: measured neural data show that dopamine neurons respond differently to rewards of different magnitudes and probabilities, similar to simulated distributional RL data.
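
A sketch of the distributional idea via asymmetrically scaled prediction errors: each predictor weights positive and negative errors differently, so the population spreads out over the reward distribution and each predictor acquires its own reversal point. The learning rates and the bimodal reward distribution are made-up illustrations, not the recorded data or the exact published algorithm.

```python
import numpy as np

rng = np.random.default_rng(8)

# Distributional TD sketch: several value predictors update with asymmetric
# learning rates for positive vs. negative prediction errors, so pessimistic
# predictors settle low and optimistic ones settle high in the reward distribution.
alphas_pos = np.array([0.02, 0.05, 0.08, 0.11, 0.14])   # illustrative scalings
alphas_neg = alphas_pos[::-1]
V = np.zeros(5)

def sample_reward():
    # Hypothetical bimodal reward distribution for a single cue.
    return 0.2 if rng.random() < 0.5 else 1.0

for trial in range(50000):
    r = sample_reward()
    delta = r - V                                        # prediction error per predictor
    scale = np.where(delta > 0, alphas_pos, alphas_neg)  # asymmetric scaling
    V += scale * delta

# Each predictor's "reversal point" (the value where its errors flip sign) lands at a
# different point of the reward distribution, from pessimistic to optimistic.
print("learned reversal points:", np.round(np.sort(V), 2))
```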

    Description

    This quiz covers key concepts in cognition and neuroscience, including object recognition, neural pathways, and retina cells. It's designed for students of the University of Bologna's Academic Year 2023-2024 program.
