Publications

  • Seijdel, N., Marshall, T. R., & Drijvers, L. (2023). Rapid invisible frequency tagging (RIFT): A promising technique to study neural and cognitive processing using naturalistic paradigms. Cerebral Cortex, 33(5), 1626-1629. doi:10.1093/cercor/bhac160.

    Abstract

    Frequency tagging has been successfully used to investigate selective stimulus processing in electroencephalography (EEG) or magnetoencephalography (MEG) studies. Recently, new projectors have been developed that allow for frequency tagging at higher frequencies (>60 Hz). This technique, rapid invisible frequency tagging (RIFT), provides two crucial advantages over low-frequency tagging: (i) it leaves low-frequency oscillations unperturbed, and thus open for investigation, and (ii) it can render the tagging invisible, resulting in more naturalistic paradigms and a lack of participant awareness. The development of this technique has far-reaching implications, as oscillations involved in cognitive processes can be investigated, and potentially manipulated, in a more naturalistic manner.
  • De Haan, E. H. F., Seijdel, N., Kentridge, R. W., & Heywood, C. A. (2020). Plasticity versus chronicity: Stable performance on category fluency 40 years post‐onset. Journal of Neuropsychology, 14(1), 20-27. doi:10.1111/jnp.12180.

    Abstract

    What is the long‐term trajectory of semantic memory deficits in patients who have suffered structural brain damage? Memory is, by definition, a changing faculty. The traditional view is that after an initial recovery period, the mature human brain has little capacity to repair or reorganize. More recently, it has been suggested that the central nervous system may be more plastic, with the ability to change in neural structure, connectivity, and function. The latter observations are, however, largely based on normal learning in healthy subjects. Here, we report a patient who suffered bilateral ventro‐medial damage after presumed herpes encephalitis in 1971. He was seen regularly in the eighties, and we recently had the opportunity to re‐assess his semantic memory deficits. On semantic category fluency, he showed a very clear category‐specific deficit, performing better than controls on non‐living categories and significantly worse on living items. Recent testing showed that his impairments have remained unchanged for more than 40 years. We suggest caution when extrapolating the concept of brain plasticity, as observed during normal learning, to plasticity in the context of structural brain damage.
  • Seijdel, N., Tsakmakidis, N., De Haan, E. H. F., Bohte, S. M., & Scholte, H. S. (2020). Depth in convolutional neural networks solves scene segmentation. PLOS Computational Biology, 16: e1008022. doi:10.1371/journal.pcbi.1008022.

    Abstract

    Feed-forward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans, however, suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to the performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds it appears on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicate that with an increase in network depth, there is an increase in the distinction between object and background information. For shallower networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that, de facto, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or “binding” features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network.
  • Seijdel, N., Jahfari, S., Groen, I. I. A., & Scholte, H. S. (2020). Low-level image statistics in natural scenes influence perceptual decision-making. Scientific Reports, 10: 10573. doi:10.1038/s41598-020-67661-8.

    Abstract

    A fundamental component of interacting with our environment is the gathering and interpretation of sensory information. When investigating how perceptual information influences decision-making, most researchers have relied on manipulated or unnatural information as perceptual input, resulting in findings that may not generalize to real-world scenes. Unlike simplified, artificial stimuli, real-world scenes contain low-level regularities that are informative about their structural complexity, which the brain could exploit. In this study, participants performed an animal detection task on low, medium or high complexity scenes as determined by two biologically plausible natural scene statistics: contrast energy (CE) and spatial coherence (SC). In experiment 1, stimuli were sampled such that CE and SC both influenced scene complexity. Diffusion modelling showed that the speed of information processing was affected by low-level scene complexity. Experiments 2a/b refined these observations by showing how isolated manipulation of SC resulted in weaker but comparable effects, with an additional change in response boundary, whereas manipulation of only CE had no effect. Overall, performance was best for scenes with intermediate complexity. Our systematic definition quantifies how natural scene complexity interacts with decision-making. We speculate that CE and SC serve as an indication to adjust perceptual decision-making based on the complexity of the input.

    Additional information

    Supplementary materials, code and data
  • Seijdel, N., Sakmakidis, N., De Haan, E. H. F., Bohte, S. M., & Scholte, H. S. (2019). Implicit scene segmentation in deeper convolutional neural networks. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience (pp. 1059-1062). doi:10.32470/CCN.2019.1149-0.

    Abstract

    Feedforward deep convolutional neural networks (DCNNs) are matching and even surpassing human performance on object recognition. This performance suggests that activation of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Recent findings in humans, however, suggest that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to the performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds it appears on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicated less distinction between object and background features for shallower networks. For those networks, we observed a benefit of training on segmented objects (as compared to unsegmented objects). Overall, deeper networks trained on natural (unsegmented) scenes seem to perform implicit 'segmentation' of the objects from their background, possibly by improved selection of relevant features.
  • Smits, A., Seijdel, N., Scholte, H., Heywood, C., Kentridge, R., & de Haan, E. (2019). Action blindsight and antipointing in a hemianopic patient. Neuropsychologia, 128, 270-275. doi:10.1016/j.neuropsychologia.2018.03.029.

    Abstract

    Blindsight refers to the observation of residual visual abilities in the hemianopic field of patients without a functional V1. Given the within- and between-subject variability in the preserved abilities and the phenomenal experience of blindsight patients, the fine-grained description of the phenomenon is still debated. Here we tested a patient with established “perceptual” and “attentional” blindsight (cf. Danckert and Rossetti, 2005). Using a pointing paradigm, patient MS, who suffers from a complete left homonymous hemianopia, showed clear above-chance manual localisation of ‘unseen’ targets. In addition, target presentations in his blind field led MS, on occasion, to spontaneous responses towards his sighted field. Structural and functional magnetic resonance imaging was conducted to evaluate the magnitude of V1 damage. Results revealed the presence of a calcarine sulcus in both hemispheres, yet his right V1 is reduced, structurally disconnected and shows no fMRI response to visual stimuli. Thus, visual stimulation of his blind field can lead to “action blindsight” and spontaneous antipointing, in the absence of a functional right V1. With respect to the antipointing, we suggest that MS may have registered the stimulation and subsequently presumed it must have been in his intact half field.

    Additional information

    Video
