The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. It has been shown in previous studies that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1) is also involved in the representation of the illusory motion path. We investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion using functional magnetic resonance imaging. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants had to perform an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback mechanisms from higher visual areas, especially the motion-sensitive human MT/V5 complex.
We present a biologically inspired system for real-time, feed-forward object recognition in cluttered scenes. Our system utilizes a vocabulary of very sparse features that are shared between and within different object models. To detect objects in a novel scene, these features are located in the image, and each detected feature votes for all objects that are consistent with its presence. Due to the sharing of features between object models, our approach is more scalable to large object databases than traditional methods. To demonstrate the utility of this approach, we train our system to recognize any of 50 objects in everyday cluttered scenes with substantial occlusion. Without further optimization we also demonstrate near-perfect recognition on a standard 3-D recognition problem. Our system has an interpretation as a sparsely connected feed-forward neural network, making it a viable model for fast, feed-forward object recognition in the primate visual system.
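The shared-feature voting step described above can be sketched as follows. This is a minimal illustration only: the feature names, the object vocabulary, and the vote threshold are invented for the example and are not the paper's actual feature vocabulary, object database, or parameters.

```python
from collections import Counter

# Hypothetical vocabulary: each sparse feature is shared by
# several object models (illustrative entries, not from the paper).
feature_to_objects = {
    "corner_A": {"mug", "stapler"},
    "edge_B": {"mug", "phone"},
    "blob_C": {"stapler"},
    "curve_D": {"mug"},
}

def vote_for_objects(detected_features, vote_threshold=2):
    """Each feature detected in the image votes for every object
    model consistent with its presence; objects whose vote count
    reaches the threshold are reported as recognized."""
    votes = Counter()
    for feature in detected_features:
        for obj in feature_to_objects.get(feature, ()):
            votes[obj] += 1
    return {obj for obj, n in votes.items() if n >= vote_threshold}

# "mug" collects three votes, "stapler" and "phone" one each,
# so only "mug" clears the threshold of 2.
print(vote_for_objects(["corner_A", "edge_B", "curve_D"]))
```

Because features are shared across models, adding a new object mostly reuses existing vocabulary entries rather than adding new ones, which is what makes the voting step scale to large object databases.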