English Abstract
Objects in our visual world usually have several feature dimensions simultaneously (such as color, direction, and orientation). Previous studies claim that our brain integrates these features, causing us to perceive each object as a whole, even though the different features are encoded in separate areas and visual pathways. How these separately encoded features are bound together accurately into object percepts is a fundamental challenge for researchers in the field of vision, referred to as the binding problem. Much research has examined how features observed in the environment are bound during perception, concluding that feature integration is present in many perceptual processes.
We designed a behavioral task to investigate this by comparing the timing of feature perception in a single-feature experiment and an integration experiment. In the first experiment, subjects were asked to attend to only one feature dimension (color or orientation) and report it. In the second experiment, subjects were required to attend to both features simultaneously and report them as a single object comprising one color and one orientation. We also varied the target's exposure time to determine the minimum time required to perceive each feature.
Our results demonstrate that subjects attend and respond to objects' features sequentially. Furthermore, perception time differs from feature processing time: the color feature was perceived earlier than orientation, yet the reaction time for color was longer than for orientation. This sequence was observed in both perception and reaction time in the integration experiment, just as in the single-feature experiment. Given this sequential feature perception and response, we argue that the integration of visual features presented simultaneously is a post-perceptual process.
We also examined color and direction encoding in neuronal data from the lateral intraparietal (LIP) area. Numerous studies have addressed the role of LIP in spatial visual functions (e.g., spatial attention, saccadic eye movements), and it has been shown that feature-based attention allows LIP neurons to flexibly integrate representations of multiple visual features from upstream areas. LIP is interconnected with both dorsal and ventral stream visual areas and can encode visual features such as color, direction, and form, particularly when stimuli are task-relevant.
Initially, receiver operating characteristic (ROC) analysis showed that direction was encoded significantly earlier and more strongly than color. We also examined the coding of these features using each neuron's preferred and anti-preferred direction and color; the results showed a sequence similar to the case in which the most discriminable features were selected. We validated these observations with a population decoding method in which we trained support vector machine (SVM) classifiers to decode color and direction. Finally, we examined match encoding between two identical features and between two different features, and showed that the decoding of each feature occurred significantly earlier than the detection of a match for the same feature.
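The population decoding approach mentioned above can be illustrated with a minimal sketch. Synthetic data stand in here for recorded LIP activity; the number of trials and neurons, the tuning model, and the use of scikit-learn's linear SVM with cross-validation are illustrative assumptions, not the thesis's actual pipeline.

```python
# Minimal sketch of SVM population decoding, assuming synthetic data
# in place of real LIP recordings: trials x neurons firing rates,
# with each trial labeled by stimulus color (0 or 1).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50

# Hypothetical tuning: each neuron carries a small additive color signal.
labels = rng.integers(0, 2, n_trials)            # 0 = color A, 1 = color B
tuning = rng.normal(0.0, 1.0, n_neurons)         # per-neuron color preference
rates = rng.normal(0.0, 1.0, (n_trials, n_neurons)) + np.outer(labels, tuning)

# Linear SVM with 5-fold cross-validation; accuracy above chance (0.5)
# indicates the feature is readable from the population response.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, rates, labels, cv=5)
print(round(scores.mean(), 2))
```

In a time-resolved version of this analysis, a separate classifier is trained on each time bin after stimulus onset; the earliest bin at which accuracy exceeds chance gives the decoding latency used to compare color and direction.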