
HCNN_2024-41-83.pdf


Transcript


The results of the studies on single neurons of the temporal lobe are in agreement with theories of a distributed code for object recognition. Although it is surprising that some cells are selective for complex objects, the selectivity is almost always relative, not absolute.

IT neuron selectivity often appears somewhat arbitrary. A single IT neuron could, for example, respond vigorously to a crescent of a particular color and texture. Cells with such selectivity likely provide inputs to higher-order neurons that respond to specific objects.

For neurons with small receptive fields that are activated by simple light patterns, such as retinal ganglion cells and V1 neurons, each object manifold will be highly curved. Moreover, the manifolds corresponding to different objects will be "tangled" together.

Objects could be reliably categorized and identified (with less than a 10% reduction in performance) even when transformed (spatially shifted or scaled), although the classifier saw each object at only one particular scale and position during training.

Although the overall natural statistics of the screening images were roughly similar to those of the testing set, the specific content (semantic category) was quite different. Moreover, different camera, lighting, and noise conditions, and a different rendering software package, were used.

Performance was significantly correlated with neural predictivity in all cases. Models that performed better on the categorization task were also more likely to produce outputs more closely aligned to IT neural responses.
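The performance-predictivity relationship described above can be illustrated with a toy calculation. The per-model scores below are made-up placeholders, not values from the study; the point is only how such a correlation would be computed:

```python
import numpy as np

# Hypothetical per-model scores (illustrative placeholders, not data
# from the study): categorization accuracy and IT neural predictivity.
performance  = np.array([0.42, 0.55, 0.61, 0.70, 0.78])
predictivity = np.array([0.18, 0.25, 0.31, 0.34, 0.43])

# Pearson correlation between the two score vectors across models;
# a value near 1 would indicate that better-performing models also
# predict IT responses better.
r = np.corrcoef(performance, predictivity)[0, 1]
```

With genuinely correlated scores like these, `r` comes out strongly positive; in the study the same statistic would be computed over the actual model population.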
Thus, although the Hierarchical Linear-Nonlinear (HLN) hypothesis (i.e., that higher-level neurons (e.g., in IT) output a linear weighting of inputs from intermediate-level (e.g., V4) neurons, followed by simple additional nonlinearities) is consistent with a broad spectrum of particular neural network architectures, specific parameter choices have a large effect on a given model's recognition performance and neural predictivity.

Figure caption: the x axis in each plot shows 1,600 test images, sorted first by category identity (8 stimulus categories) and then by variation amount, with more drastic image transformations toward the right within each category block. The y axis represents the prediction/response magnitude of the neural site for each test image (those not used to train the model). In B, distributions of model explained-variance percentage (r²) over the population of all measured IT sites (n = 168). In C, comparison of IT neural explained-variance percentage for various models; bar height shows the median explained variance, taken over all predicted IT units.
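A minimal sketch of the two ideas above: an HLN stage (linear weighting followed by a simple nonlinearity) and the explained-variance percentage (r²) used to score neural predictivity. All weights and sizes here are random placeholders, and the ReLU nonlinearity is an assumption; the hypothesis itself does not fix these choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def hln_stage(x, W, b):
    """One linear-nonlinear stage: a linear weighting of the inputs,
    then a simple nonlinearity (ReLU chosen here for illustration)."""
    return np.maximum(W @ x + b, 0.0)

def explained_variance_pct(pred, actual):
    """Percentage of variance in a measured response explained by a
    model prediction: squared Pearson correlation, times 100."""
    r = np.corrcoef(pred, actual)[0, 1]
    return 100.0 * r ** 2

# Hypothetical two-stage hierarchy: intermediate ("V4-like") features
# feed an output ("IT-like") stage. Sizes are arbitrary.
x = rng.normal(size=128)                       # intermediate-level input
W1, b1 = rng.normal(size=(64, 128)), np.zeros(64)
W2, b2 = rng.normal(size=(32, 64)), np.zeros(32)
it_model = hln_stage(hln_stage(x, W1, b1), W2, b2)
```

In the study's setting, `explained_variance_pct` would be evaluated between a model unit's predicted responses and a measured IT site's actual responses over held-out test images.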
