


Summary

These notes explain cognitive science concepts, including the cognitive approach, levels of explanation, and computational models. They discuss topics like algorithms, perception, memory, and computational theory of mind. The notes also touch on neural network computation and classical computation.

Full Transcript


What is Cognitive Science?

● Cognitive science is not phenomenology, psychoanalysis, behaviorism, or biology

The Cognitive Approach

● The mind is like a computer in that it receives, stores, retrieves, transforms, and transmits information
● An algorithm is a step-by-step process for generating outputs from inputs
  ○ Requires goals and constraints (multiplication example; see the sketch after this section)
● One view of perception: what we perceive is the result of interpretations that are shaped by previous experiences
● Another view of perception: what we perceive is the result of assumptions about the environment that are sometimes learned, sometimes innate
● Constructive view: a likeness of an object comes into your mind
  ○ How children think
● Mirroring view: taking bits of data and stringing them together to make assumptions
● Global workspace theory of consciousness suggests that perceptual information is selected for "broadcast" to other systems in the brain (e.g., language, memory, decision making), and that this information enters consciousness
● Memory illustrates the availability heuristic
  ○ Availability heuristic: a mental shortcut that involves estimating the probability or risk of something based on how easily examples come to mind
● A mouse study showed that we have an internal map
  ○ Showed larger comprehension abilities

Levels of Explanation

● To ask different kinds of questions about a computational system, we need to use different levels of abstraction
  ○ What are the goals of the system?
  ○ What are its inputs and outputs?
  ○ What representation does the system use?
  ○ How are these representations transformed?
  ○ How are these operations physically implemented?
● Marr's Three Levels
  ○ Computational theory: what are the goals and properties of this system?
  ○ Representation and algorithm: what information is manipulated, in what format, and using what rules?
  ○ Hardware implementation: what physical objects in the world carry out this process?
  ○ Look over the Donkey Kong example questions
  ○ Ultimatum game example
    ■ Represent choices as continuous-valued intensities
    ■ Algorithm: have the intensities compete with each other
    ■ Implemented through neurons
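
To make the "algorithm" idea and Marr's levels concrete, here is a small illustrative Python sketch (my own, not from the course): pencil-and-paper long multiplication written as a step-by-step procedure, with comments mapping the pieces onto Marr's three levels.

```python
# Illustrative sketch (not from the notes): long multiplication as an algorithm,
# annotated with Marr's three levels.

def long_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers the way we do on paper.

    Computational theory: the goal is to compute the product a * b.
    Representation and algorithm: b is represented as a list of decimal digits;
        the rule is to multiply a by one digit at a time, shift by place value,
        and add up the partial products.
    Hardware implementation: here it runs on silicon, but the same algorithm
        could be carried out by a person with pencil and paper.
    """
    digits_of_b = [int(d) for d in str(b)][::-1]    # least-significant digit first
    total = 0
    for place, digit in enumerate(digits_of_b):
        partial = a * digit                         # one row of the paper method
        total += partial * (10 ** place)            # shift by place value, then add
    return total

assert long_multiply(123, 45) == 123 * 45           # 5535
```
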
Classical Computation (category of representation)

● Charles Babbage created one of the first "computers" in the early 1800s; it was mostly an adding machine
● Ada Lovelace wrote the first "computer program": an algorithm for using the Analytical Engine to compute numbers
● Turing machine: a theoretical system with the following parts (a minimal simulator sketch follows this section)
  ○ An indefinitely long "tape" of memory where symbols can be written and read
  ○ A current position on the tape and a current state
  ○ A lookup table for what to do next, based on the machine's state and the symbol at the current position
● Church-Turing Thesis: any computation that a human could carry out with pencil and paper can also be done by a Turing machine
● How a Turing machine is implemented is at the algorithmic/representational level of Marr's three levels
● Creating a Turing machine to carry out a computation requires us to specify a representation as well as an algorithm
● Turing proved that it is possible to construct a universal Turing machine
  ○ With the right inputs, it can carry out any possible computation
● The "NAND" operation is universal, meaning we can build any logical operation out of it (see the NAND sketch below)
● Classical computational theory of mind: the mind is a computational system, with core mental processes that use algorithms and representations similar to a Turing machine
  ○ Memory representations are discrete symbols
  ○ Algorithms consist of a sequence of steps
  ○ Allows for non-traditional elements:
    ■ Memory access beyond left/right movements
    ■ Parallel computation
    ■ Probabilistic transitions
  ○ Thinking can arise from information processing, which can be implemented in physical objects, such as neurons
● René Descartes' dualism: matter and mind are two distinct kinds of substances
● Is the mind a general-purpose programmable computer?
  ○ Yes, in some sense: basically by definition, humans can be given instructions to compute anything computable
  ○ But many specific mental processes are likely not programmable, and instead implement fixed learning algorithms that can be used to acquire new skills and information
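
The tape / state / lookup-table description above can be turned into running code. Below is a minimal, illustrative Turing machine simulator in Python (my own sketch, not course material); the example machine scans right and flips every bit until it reaches a blank cell.

```python
# Illustrative sketch: a minimal Turing machine with a tape, a head position,
# a current state, and a lookup table of transitions.

def run_turing_machine(table, tape, state, blank="_", max_steps=1000):
    """table maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right). Returns the final tape contents."""
    tape = dict(enumerate(tape))            # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in table:    # no rule for this situation: halt
            break
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Example machine: scan right, flipping 0s to 1s and 1s to 0s, halt at the first blank.
flip_bits = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
}

print(run_turing_machine(flip_bits, "10110", state="scan"))  # -> 01001
```
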
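
The claim that NAND is universal can also be checked directly. A rough sketch, again just an illustration: NOT, AND, and OR built entirely out of a NAND function, verified against Python's built-in boolean operators.

```python
# Illustrative sketch: NAND is universal, so other logical operations can be built from it.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)                    # NAND of a signal with itself is NOT

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))              # AND is NAND followed by NOT

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))        # OR via De Morgan: NOT(NOT a AND NOT b)

# Check all input combinations against Python's built-in operators.
for a in (False, True):
    assert not_(a) == (not a)
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
```
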
Neural Network Computation

● Classical computers are notoriously bad at many tasks:
  ○ Categorization, generation, extrapolation, and problem solving
● Artificial neural networks were created as a model of human thought
● They became more sophisticated in the 1980s, when banks wanted systems that could recognize handwritten numbers
● In 2015, programs became more accurate than humans at recognizing which category a picture of an animal belonged to
  ○ This change came because of faster computers, larger networks, and more data
● Basic elements of neural network computation
  ○ Nodes (aka neurons)
    ■ Very abstract similarity to the brain
    ■ Having a lot of nodes makes it easier to learn lots of information
  ○ Weights
    ■ What you change to make the network better
  ○ Activation functions
    ■ Non-linear, e.g., ReLU
  ○ Objective function
    ■ Determines how right or wrong the output was
  ○ Learning algorithm
    ■ Example: backpropagation, which computes in which direction to shift the weights so that the output would be closer to the right answer
● Neural network process, for example: take an input, send it through the network, get an output, measure how correct it is, then use backpropagation to go back into the network and improve it by adjusting the weights (a toy training-loop sketch follows this section)
● The more times you cycle through the training data, the better
● Is the human mind like an artificial neural network?
  ○ Yes: similar-ish internal activity
  ○ No: we require less training and different kinds of training
    ■ But it could be evolution that trained us over time
  ○ No: artificial neural networks struggle with tasks that are easy for us, such as intuitive physics
● The Carl Cox study on monkey brains led people to think it was helpful to look at the brain as a neural network
● Know expert prediction
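
To tie the elements above together (nodes, weights, a ReLU activation, an objective function, and backpropagation), here is a toy training loop in Python using NumPy. It is an illustrative sketch rather than anything from the lecture: the XOR task, the network size, and the learning rate are arbitrary choices of mine.

```python
# Illustrative sketch (not course code): a tiny neural network trained with
# backpropagation on the XOR problem.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR inputs and target outputs.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

# Weights: one hidden layer of 16 ReLU nodes and one linear output node.
W1, b1 = rng.normal(0.0, 1.0, (2, 16)), np.zeros(16)
W2, b2 = rng.normal(0.0, 1.0, (16, 1)), np.zeros(1)
lr = 0.05                                   # learning rate

for step in range(5000):
    # Forward pass: take the input and send it through the network.
    H_pre = X @ W1 + b1
    H = np.maximum(H_pre, 0.0)              # ReLU activation function
    Y = H @ W2 + b2

    # Objective function: how right or wrong was the output?
    loss = 0.5 * np.mean((Y - T) ** 2)

    # Backpropagation: which direction to shift each weight to reduce the loss.
    dY = (Y - T) / len(X)
    dW2, db2 = H.T @ dY, dY.sum(axis=0)
    dH_pre = (dY @ W2.T) * (H_pre > 0)      # gradient flows only through active ReLUs
    dW1, db1 = X.T @ dH_pre, dH_pre.sum(axis=0)

    # Adjust the weights (gradient descent).
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if step % 1000 == 0:
        print(f"step {step}: loss {loss:.4f}")

preds = np.maximum(X @ W1 + b1, 0.0) @ W2 + b2
print("predictions:", preds.round(2).ravel())   # should approach 0, 1, 1, 0
```

Each pass of the loop is one trip through the training data, which is what the notes mean by cycling through the data repeatedly.
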
Representation

● Synesthesia
  ○ Sound-color
  ○ Color-sound
  ○ Auditory-tactile
  ○ Day-color
  ○ Grapheme-color
  ○ One explanation could be the proximity in the brain of the regions relevant to these perceptions (example: word and taste regions)
● Format: the family of features of the representation
  ○ Access: who understands the representation
  ○ Within the same overall system, different regions (and even different layers) might use different formats
  ○ Basic representations: colored regions in space, any objects, properties, relations
  ○ Complex representations: represent particular, concrete objects and general, abstract truths
  ○ Population codes (a value is represented by the pattern of activity across many units rather than by a single unit)
    ■ Allow for more possible representations
    ■ Allow for more similarities between representations
    ■ Can be more robust to errors
● Content
  ○ How can we identify the contents of representations?
    ■ What tasks the subject can complete
    ■ How long it takes the subject to complete those tasks
    ■ What mistakes subjects make on those tasks
    ■ Where the representations are located
  ○ The imagery debate is about the format of internal representations
    ■ Pylyshyn: internal representations used on these tasks are sentence-like
    ■ Kosslyn: internal representations used on these tasks are picture-like
  ○ Aphantasia: inability to visualize
  ○ Hyperphantasia: imagery as vivid as really seeing

Modularity

● Defining features of a module include:
  ○ Encapsulation: information outside the module is not available inside the module
  ○ Specialization: restricted to a specific domain and applies a specific algorithm
  ○ Localization: implemented in a circumscribed region that is dedicated to that function
● Ames rooms provide evidence for modularity
● The classroom experiment provides evidence against it

Development

● Nativism: key elements of cognition are innate
  ○ We start with a basic outline of which representations and algorithms to use, which is refined through learning
● Empiricism: minds are simply general-purpose pattern detectors
  ○ All knowledge of the world comes from our life experiences
● Core cognition hypothesis: humans are born with multiple specialized cognitive systems
  ○ Numbers, objects, agents
● Object permanence: understanding that objects still exist even when they cannot be seen, heard, or touched
● Centration: children reason based on only a single salient aspect of a situation
● Theory of mind: despite some early expertise in understanding agents, infants do not understand that different people have separate minds
● Overregularization example: children use "ate" correctly and then incorrectly (e.g., switching to "eated" once they learn the -ed rule)

Nature v Nurture

● Evolution has shaped genes for brain development, which determine how cognitive processes work
  ○ Shared genes
● Monkey study: monkeys looked at hands more than faces when raised without seeing faces
  ○ Strong support for the nurture argument
● Heritability index: h^2 = (variability in a trait predicted by genes) / (total variability in the trait); e.g., if genes account for 0.3 of a total variance of 0.5, h^2 = 0.6
● Twin studies: if heritability is high, there should be more trait similarity for identical (monozygotic) twins than for dizygotic twins
● Genome-wide complex trait analysis: tries to build a model that goes from genes to traits
● A twin study showed that the family you're raised in didn't seem to matter as much; other life experiences are more important
● Genes express themselves through biological changes in the brain and by selecting environments
● Genes can change the environment

Evolutionary Psychology

● The brain is about 2% of the body's size but uses about 20% of its caloric intake
● The Galápagos finches example shows that changes in the environment lead to changes in anatomy
● Phenotypic plasticity: a gene only expresses itself under certain conditions
● Pleiotropy: the same gene simultaneously contributes to multiple phenotypes
● Genotypes and phenotypes: evolution is about genotypes

Brain

● Dendrites: receive information from other neurons
● Synapse: the junction through which a neuron sends messages to other neurons
● Chemical signaling changes the voltage of the neuron

Brain Scanning

● Ways of measuring the brain
● What can we measure with neuroimaging?
  ○ Regions involved in a cognitive process
  ○ Timing of different operations during cognition
  ○ Testing whether a cognitive process is occurring
  ○ Measuring individual differences in cognition
● EEG: non-invasive, cheap, and portable
● MEG: magnetic signals pass through skin and skull better than electrical signals
● fMRI: access to deep brain regions
