3. Prototypes, Exemplars & Category Learning

Full Transcript

CRITICAL READING: CORNELL NOTES
Prototypes, Exemplars & Category Learning
Name: Date: 21 August 2023 Section: Lecture 3 Period:
Questions/Main Ideas/Vocabulary | Notes/Answers/Definitions/Examples/Sentences

Abstraction

Empirical evidence suggests that we store information about real-world categories in the form of conceptual representations. What sort of information are we actually storing? Conceptual representations are potentially useful because they provide a means of reducing the amount of data that needs to be stored in memory. Rather than remembering every dog we see, we have a conceptual representation that stores information about dogs: some form of 'dog' abstraction.

The Prototype View

The idea that our conceptual knowledge is stored in the form of an abstraction is the major assumption of the prototype view. Under this view, on the basis of experience with category examples, people abstract out the central tendency of a category. In other words, a category representation consists of a summary of all the examples of the category, called the prototype.

The Exemplar View

The exemplar view suggests that experience with a category doesn't lead to the formation of an abstracted prototype. Rather, it suggests that we simply store in memory every example of a given category that we encounter. In other words, a conceptual representation consists of all the individual members of a category, known as exemplars.

The Prototype & Exemplar Views Are Two Ends of a Continuum

At one end we have total abstraction (prototype), and at the other end we have zero abstraction (exemplar). The prototype view is useful because it reduces memory load, but this reduction comes at the cost of specific information. The exemplar view is useful because it retains specific information, but it comes at the cost of memory load. There are some plausibility issues here.
Does it seem likely that everything I know about dogs or cats or vehicles or emotions or chairs can be stored in the form of an abstracted 'prototype'? And what would this look like? Does it look like anything? Or is it merely a set of features? Conversely, does it seem likely that I store in my memory every example of dogs or cats or vehicles or emotions or chairs that I come across? This seems computationally intractable, and also redundant. Why would I want to store everything I encounter? Surely I don't need to remember every dog I've seen in order to understand the concept dog?

Implications for Computational Models Dealing with Concepts & Categories

Essentially, in order to implement a computational model, you need to make assumptions about the way the data are represented. Generally, this means choosing between a prototype and an exemplar representation. Examples:

The family resemblance model predicts typicality as a function of featural overlap between category members (exemplar).
The polymorphous concept model predicts typicality as a function of featural overlap between category members and an abstracted feature list representing the category name (prototype).
Typicality can be predicted as a function of the distance between each category member in a multidimensional space and the central tendency of that category (prototype).
Typicality can also be predicted as a function of the mean distance between each category member in a multidimensional space and every other category member in that space (exemplar).

Category Learning

In both of the previous examples, the assumed level of abstraction actually makes little difference to the quality of the models' predictions. In some cases the prototype models do better, and in some cases the exemplar models do better. This isn't always the case, however. One research area in which level of abstraction has played an important role is category learning.
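The last two distance-based predictions (distance to the central tendency vs. mean distance to the other members) can be sketched in a few lines of Python. The toy 2-D feature vectors and the function names are hypothetical, chosen purely for illustration:

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def prototype_typicality(member, category):
    """Prototype account: typicality rises as the distance to the
    category's central tendency (the mean of all members) falls."""
    centroid = [sum(dim) / len(category) for dim in zip(*category)]
    return -dist(member, centroid)

def exemplar_typicality(member, category):
    """Exemplar account: typicality rises as the mean distance to
    every other category member falls."""
    others = [m for m in category if m is not member]
    return -sum(dist(member, o) for o in others) / len(others)

# Toy 2-D category: three tightly clustered members and one outlier.
category = [[1.0, 1.0], [1.2, 0.9], [0.9, 1.1], [3.0, 3.0]]
central, outlier = category[0], category[3]
```

On this toy structure both accounts rank the clustered member as more typical than the outlier; the interest lies in category structures where the two rankings come apart.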
Category Learning Experiments

These tend to employ stimuli that bear little resemblance to 'real-world' categories but can be easily manipulated: hue, brightness, saturation, size, shape, etc. They tend to have two parts: a learning phase and a transfer phase.

Learning Phase

In the learning phase, participants are shown stimuli drawn from two categories. They are asked whether each stimulus is from category A or category B, and they are given feedback. They learn the categories over time.

Transfer Phase

Many categorisation experiments involve a second phase (the transfer phase). Once the category structure is learnt, participants are shown a mixture of old and new stimuli and asked to categorise them. This gives us insight into our ability to generalise from a stored category representation to novel stimuli. Manipulating the category structure allows us to test different theories about the processes underlying categorisation and generalisation.

Third Stage

Many experiments involve a third stage in which the empirical categorisation decisions and/or response latencies are computationally modelled. Sometimes this involves a similarity-rating stage in order to generate a multidimensional scaling (MDS) representation of the stimulus space. Perhaps the most well-known and commonly implemented model is the generalised context model, or GCM.

Generalised Context Model (GCM)

The probability of a stimulus being categorised as a member of a given category is a weighted function of the distances between the target stimulus and the members of the two categories in the space (exemplar).

The MDS-Based Prototype Model (MPM)

The probability of a stimulus being categorised as a member of a given category is a weighted function of the distances between the target stimulus and the prototypes (central tendencies) of the two categories in the space (prototype).

GCM & MPM

Both the GCM (exemplar model) and the MPM (prototype model) do a good job of simulating human performance on category learning tasks.
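A minimal sketch of the two choice rules, assuming a simple exponential similarity-distance function and no response bias; the sensitivity parameter `c`, the toy stimuli, and the function names are illustrative assumptions, not the fitted models discussed in the lecture:

```python
import math

def dist(a, b):
    """Euclidean distance in the (MDS-derived) stimulus space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sim(a, b, c=1.0):
    """Similarity decays exponentially with distance; c is sensitivity."""
    return math.exp(-c * dist(a, b))

def gcm_prob_a(stim, cat_a, cat_b, c=1.0):
    """GCM (exemplar): summed similarity to every stored exemplar."""
    s_a = sum(sim(stim, e, c) for e in cat_a)
    s_b = sum(sim(stim, e, c) for e in cat_b)
    return s_a / (s_a + s_b)

def centroid(cat):
    """Central tendency: the dimension-wise mean of the members."""
    return [sum(dim) / len(cat) for dim in zip(*cat)]

def mpm_prob_a(stim, cat_a, cat_b, c=1.0):
    """MPM (prototype): similarity to each category's central tendency."""
    s_a = sim(stim, centroid(cat_a), c)
    s_b = sim(stim, centroid(cat_b), c)
    return s_a / (s_a + s_b)

# Two toy categories in a 2-D space and a probe near category A.
cat_a = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
cat_b = [[4.0, 4.0], [5.0, 4.0], [4.0, 5.0]]
probe = [0.5, 0.5]
```

With well-separated toy categories like these, both rules assign the probe to category A with high probability; the models only diverge noticeably when category structures are less neat.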
Overall, however, the GCM does better than the MPM. Does this mean that we store concepts using an exemplar representation and not a prototype representation?

Prototype or Exemplar?

In the context of these sorts of experiments, perhaps an exemplar representation has greater utility. But we might question the degree to which the category structures in these experiments are representative of real-world category structures. We might also ask whether it is sensible to assume that we are using either an exemplar or a prototype representation. It might make more sense to employ a mixed model: the true representations may lie somewhere between no abstraction (exemplar) and full abstraction (prototype).

Vanpaemel & Storms (2008): Varying Abstraction Model (VAM)

They applied the model to four previously published category-learning data sets that had been used to argue that no abstraction was involved in category decisions. The results suggested that some form of partial abstraction could be used to describe the empirical categorisation decisions.

How Well Do These Category Learning Models Translate to Categorisation Decisions Using Real-World Stimuli?

These models were designed to replicate empirical data from laboratory studies based on very small data sets (4–8 members per category), using fairly abstract stimuli (colour, shape, size, etc.).

Real-World Category Learning

Data set: pictures of 79 well-known fruits and vegetables and 30 novel stimuli (mainly tropical fruits and vegetables). One group of participants made pairwise similarity ratings. Another group was asked to categorise all 109 stimuli as either fruits or vegetables. MDS was used to generate a three-dimensional representation of the stimuli. Based on this representation, the GCM made categorisation predictions. The GCM gave a good account of the fruit and vegetable categorisation data. A subsequent paper showed that the MPM does about equally well.
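The VAM's core idea, that a category representation can sit anywhere between storing every exemplar and collapsing them all into one prototype, can be sketched by averaging chosen groups of exemplars. The partition format and the function name below are illustrative assumptions, not Vanpaemel & Storms' actual implementation:

```python
def merge_exemplars(category, partition):
    """Sketch of the VAM idea: represent a category by the mean of each
    group in a partition of its exemplars. All-singleton groups give the
    exemplar end of the continuum; one all-inclusive group gives the
    prototype end; anything in between is partial abstraction."""
    reps = []
    for group in partition:
        members = [category[i] for i in group]
        reps.append([sum(dim) / len(members) for dim in zip(*members)])
    return reps

category = [[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]]

exemplar_end = merge_exemplars(category, [[0], [1], [2], [3]])  # 4 reps
prototype_end = merge_exemplars(category, [[0, 1, 2, 3]])       # 1 rep
partial = merge_exemplars(category, [[0, 1], [2, 3]])           # 2 reps
```

The resulting representatives would then feed into a GCM-style choice rule, so the degree of abstraction becomes something the data can decide rather than a fixed assumption.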
