Publications

Displaying 401 - 500 of 622
  • Peeters, D., Krahmer, E., & Maes, A. (2021). A conceptual framework for the study of demonstrative reference. Psychonomic Bulletin & Review, 28, 409-433. doi:10.3758/s13423-020-01822-8.

    Abstract

    Language allows us to efficiently communicate about the things in the world around us. Seemingly simple words like this and that are a cornerstone of our capability to refer, as they contribute to guiding the attention of our addressee to the specific entity we are talking about. Such demonstratives are acquired early in life, ubiquitous in everyday talk, often closely tied to our gestural communicative abilities, and present in all spoken languages of the world. Based on a review of recent experimental work, we here introduce a new conceptual framework of demonstrative reference. In the context of this framework, we argue that several physical, psychological, and referent-intrinsic factors dynamically interact to influence whether a speaker will use one demonstrative form (e.g., this) or another (e.g., that) in a given setting. However, the relative influence of these factors themselves is argued to be a function of the cultural language setting at hand, the theory-of-mind capacities of the speaker, and the affordances of the specific context in which the speech event takes place. It is demonstrated that the framework has the potential to reconcile findings in the literature that previously seemed irreconcilable. We show that the framework may to a large extent generalize to instances of endophoric reference (e.g., anaphora) and speculate that it may also describe the specific form and kinematics a speaker’s pointing gesture takes. Testable predictions and novel research questions derived from the framework are presented and discussed.
  • Pereira Soares, S. M., Kubota, M., Rossi, E., & Rothman, J. (2021). Determinants of bilingualism predict dynamic changes in resting state EEG oscillations. Brain and Language, 223: 105030. doi:10.1016/j.bandl.2021.105030.

    Abstract

    This study uses resting state EEG data from 103 bilinguals to understand how determinants of bilingualism may reshape the mind/brain. Participants completed the LSBQ, which quantifies language use and crucially the division of labor of dual-language use in diverse activities and settings over the lifespan. We hypothesized correlations between the degree of active bilingualism and the power of neural oscillations in specific frequency bands. Moreover, we anticipated levels of mean coherence (connectivity between brain regions) to vary by degree of bilingual language experience. Results demonstrated effects of Age of L2/2L1 onset on high beta and gamma powers. Higher usage of the non-societal language at home and in society modulated indices of functional connectivity in theta, alpha and gamma frequencies. Results add to the emerging literature on the neuromodulatory effects of bilingualism for rs-EEG, and are in line with claims that bilingualism effects are modulated by degree of engagement with dual-language experiential factors.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference, IV, 937-940. doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
  • Perniss, P. M. (2007). Space and iconicity in German sign language (DGS). PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.57482.

    Abstract

    This dissertation investigates the expression of spatial relationships in German Sign Language (Deutsche Gebärdensprache, DGS). The analysis focuses on linguistic expression in the spatial domain in two types of discourse: static scene description (location) and event narratives (location and motion). Its primary theoretical objectives are to characterize the structure of locative descriptions in DGS; to explain the use of frames of reference and perspective in the expression of location and motion; to clarify the interrelationship between the systems of frames of reference, signing perspective, and classifier predicates; and to characterize the interplay between iconicity principles, on the one hand, and grammatical and discourse constraints, on the other hand, in the use of these spatial devices. In more general terms, the dissertation provides a usage-based account of iconic mapping in the visual-spatial modality. The use of space in sign language expression is widely assumed to be guided by iconic principles, which are furthermore assumed to hold in the same way across sign languages. Thus, there has been little expectation of variation between sign languages in the spatial domain in the use of spatial devices. Consequently, perhaps, there has been little systematic investigation of linguistic expression in the spatial domain in individual sign languages, and less investigation of spatial language in extended signed discourse. This dissertation provides an investigation of spatial expressions in DGS by investigating the impact of different constraints on iconicity in sign language structure. The analyses have important implications for our understanding of the role of iconicity in the visual-spatial modality, the possible language-specific variation within the spatial domain in the visual-spatial modality, the structure of spatial language in both natural language modalities, and the relationship between spatial language and cognition

    Additional information

    full text via Radboud Repository
  • Perniss, P. M., Pfau, R., & Steinbach, M. (Eds.). (2007). Visible variation: Cross-linguistic studies in sign language structure. Berlin: Mouton de Gruyter.

    Abstract

    It has been argued that properties of the visual-gestural modality impose a homogenizing effect on sign languages, leading to less structural variation in sign language structure as compared to spoken language structure. However, until recently, research on sign languages was limited to a number of (Western) sign languages. Before we can truly answer the question of whether modality effects do indeed cause less structural variation, it is necessary to investigate the similarities and differences that exist between sign languages in more detail and, especially, to include in this investigation less studied sign languages. The current research climate is testimony to a surge of interest in the study of a geographically more diverse range of sign languages. The volume reflects that climate and brings together work by scholars engaging in comparative sign linguistics research. The 11 articles discuss data from many different signed and spoken languages and cover a wide range of topics from different areas of grammar including phonology (word pictures), morphology (pronouns, negation, and auxiliaries), syntax (word order, interrogative clauses, auxiliaries, negation, and referential shift) and pragmatics (modal meaning and referential shift). In addition to this, the contributions address psycholinguistic issues, aspects of language change, and issues concerning data collection in sign languages, thereby providing methodological guidelines for further research. Although some papers use a specific theoretical framework for analyzing the data, the volume clearly focuses on empirical and descriptive aspects of sign language variation.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Perniss, P. M., Pfau, R., & Steinbach, M. (2007). Can't you see the difference? Sources of variation in sign language structure. In P. M. Perniss, R. Pfau, & M. Steinbach (Eds.), Visible variation: Cross-linguistic studies in sign language structure (pp. 1-34). Berlin: Mouton de Gruyter.
  • Perniss, P. M. (2007). Locative functions of simultaneous perspective constructions in German sign language narrative. In M. Vermeerbergen, L. Leeson, & O. Crasborn (Eds.), Simultaneity in signed language: Form and function (pp. 27-54). Amsterdam: Benjamins.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group are consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory is affecting large-scale brain connectivity more than grey matter per se.
  • Petras, K., Ten Oever, S., Dalal, S. S., & Goffaux, V. (2021). Information redundancy across spatial scales modulates early visual cortical processing. NeuroImage, 244: 118613. doi:10.1016/j.neuroimage.2021.118613.

    Abstract

    Visual images contain redundant information across spatial scales where low spatial frequency contrast is informative towards the location and likely content of high spatial frequency detail. Previous research suggests that the visual system makes use of those redundancies to facilitate efficient processing. In this framework, a fast, initial analysis of low-spatial frequency (LSF) information guides the slower and later processing of high spatial frequency (HSF) detail. Here, we used multivariate classification as well as time-frequency analysis of MEG responses to the viewing of intact and phase scrambled images of human faces to demonstrate that the availability of redundant LSF information, as found in broadband intact images, correlates with a reduction in HSF representational dominance in both early and higher-level visual areas as well as a reduction of gamma-band power in early visual cortex. Our results indicate that the cross spatial frequency information redundancy that can be found in all natural images might be a driving factor in the efficient integration of fine image details.

    Additional information

    supplementary materials
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Di Pisa, G., Pereira Soares, S. M., & Rothman, J. (2021). Brain, mind and linguistic processing insights into the dynamic nature of bilingualism and its outcome effects. Journal of Neurolinguistics, 58: 100965. doi:10.1016/j.jneuroling.2020.100965.
  • Pliatsikas, C., Pereira Soares, S. M., Voits, T., Deluca, V., & Rothman, J. (2021). Bilingualism is a long-term cognitively challenging experience that modulates metabolite concentrations in the healthy brain. Scientific Reports, 11: 7090. doi:10.1038/s41598-021-86443-4.

    Abstract

    Cognitively demanding experiences, including complex skill acquisition and processing, have been shown to induce brain adaptations, at least at the macroscopic level, e.g. on brain volume and/or functional connectivity. However, the neurobiological bases of these adaptations, including at the cellular level, are unclear and understudied. Here we use bilingualism as a case study to investigate the metabolic correlates of experience-based brain adaptations. We employ Magnetic Resonance Spectroscopy to measure metabolite concentrations in the basal ganglia, a region critical to language control which is reshaped by bilingualism. Our results show increased myo-Inositol and decreased N-acetyl aspartate concentrations in bilinguals compared to monolinguals. Both metabolites are linked to synaptic pruning, a process underlying experience-based brain restructuring. Interestingly, both concentrations correlate with relative amount of bilingual engagement. This suggests that degree of long-term cognitive experiences matters at the level of metabolic concentrations, which might accompany, if not drive, macroscopic brain adaptations.

    Additional information

    41598_2021_86443_MOESM1_ESM.pdf
  • Pluymaekers, M. (2007). Affix reduction in spoken Dutch: Probabilistic effects in production and perception. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.58146.

    Abstract

    This dissertation investigates the roles of several probabilistic variables in the production and comprehension of reduced Dutch affixes. The central hypothesis is that linguistic units with a high probability of occurrence are more likely to be reduced (Jurafsky et al., 2001; Aylett & Turk, 2004). This hypothesis is tested by analyzing the acoustic realizations of affixes, which are meaning-carrying elements embedded in larger lexical units. Most of the results prove to be compatible with the main hypothesis, but some appear to run counter to its predictions. The final chapter of the thesis discusses the implications of these findings for models of speech production, models of speech perception, and probability-based accounts of reduction.

    Additional information

    full text via Radboud Repository
  • Poletiek, F. H., Monaghan, P., van de Velde, M., & Bocanegra, B. R. (2021). The semantics-syntax interface: Learning grammatical categories and hierarchical syntactic structure through semantics. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(7), 1141-1155. doi:10.1037/xlm0001044.

    Abstract

    Language is infinitely productive because syntax defines dependencies between grammatical categories of words and constituents, so there is interchangeability of these words and constituents within syntactic structures. Previous laboratory-based studies of language learning have shown that complex language structures like hierarchical center embeddings (HCE) are very hard to learn, but these studies tend to simplify the language learning task, omitting semantics and focusing either on learning dependencies between individual words or on acquiring the category membership of those words. We tested whether categories of words and dependencies between these categories and between constituents could be learned simultaneously in an artificial language with HCEs, when accompanied by scenes illustrating the sentence’s intended meaning. Across four experiments, we showed that participants were able to learn the HCE language varying words across categories and category-dependencies, and constituents across constituents-dependencies. They also were able to generalize the learned structure to novel sentences and novel scenes that they had not previously experienced. This simultaneous learning, resulting in a productive complex language system, may be a consequence of grounding complex syntax acquisition in semantics.
  • Postema, M. (2021). Left-right asymmetry of the human brain: Associations with neurodevelopmental disorders and genetic factors. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Postema, M., Hoogman, M., Ambrosino, S., Asherson, P., Banaschewski, T., Bandeira, C. E., Baranov, A., Bau, C. H. D., Baumeister, S., Baur-Streubel, R., Bellgrove, M. A., Biederman, J., Bralten, J., Brandeis, D., Brem, S., Buitelaar, J. K., Busatto, G. F., Castellanos, F. X., Cercignani, M., Chaim-Avancini, T. M., Chantiluke, K. C., Christakou, A., Coghill, D., Conzelmann, A., Cubillo, A. I., Cupertino, R. B., De Zeeuw, P., Doyle, A. E., Durston, S., Earl, E. A., Epstein, J. N., Ethofer, T., Fair, D. A., Fallgatter, A. J., Faraone, S. V., Frodl, T., Gabel, M. C., Gogberashvili, T., Grevet, E. H., Haavik, J., Harrison, N. A., Hartman, C. A., Heslenfeld, D. J., Hoekstra, P. J., Hohmann, S., Høvik, M. F., Jernigan, T. L., Kardatzki, B., Karkashadze, G., Kelly, C., Kohls, G., Konrad, K., Kuntsi, J., Lazaro, L., Lera-Miguel, S., Lesch, K.-P., Louza, M. R., Lundervold, A. J., Malpas, C. B., Mattos, P., McCarthy, H., Namazova-Baranova, L., Nicolau, R., Nigg, J. T., Novotny, S. E., Oberwelland Weiss, E., O'Gorman Tuura, R. L., Oosterlaan, J., Oranje, B., Paloyelis, Y., Pauli, P., Picon, F. A., Plessen, K. J., Ramos-Quiroga, J. A., Reif, A., Reneman, L., Rosa, P. G. P., Rubia, K., Schrantee, A., Schweren, L. J. S., Seitz, J., Shaw, P., Silk, T. J., Skokauskas, N., Soliva Vila, J. C., Stevens, M. C., Sudre, G., Tamm, L., Tovar-Moll, F., Van Erp, T. G. M., Vance, A., Vilarroya, O., Vives-Gilabert, Y., Von Polier, G. G., Walitza, S., Yoncheva, Y. N., Zanetti, M. V., Ziegler, G. C., Glahn, D. C., Jahanshad, N., Medland, S. E., ENIGMA ADHD Working Group, Thompson, P. M., Fisher, S. E., Franke, B., & Francks, C. (2021). Analysis of structural brain asymmetries in Attention-Deficit/Hyperactivity Disorder in 39 datasets. Journal of Child Psychology and Psychiatry, 62(10), 1202-1219. doi:10.1111/jcpp.13396.

    Abstract

    Objective: Some studies have suggested alterations of structural brain asymmetry in attention-deficit/hyperactivity disorder (ADHD), but findings have been contradictory and based on small samples. Here we performed the largest-ever analysis of brain left-right asymmetry in ADHD, using 39 datasets of the ENIGMA consortium.
    Methods: We analyzed asymmetry of subcortical and cerebral cortical structures in up to 1,933 people with ADHD and 1,829 unaffected controls. Asymmetry Indexes (AIs) were calculated per participant for each bilaterally paired measure, and linear mixed effects modelling was applied separately in children, adolescents, adults, and the total sample, to test exhaustively for potential associations of ADHD with structural brain asymmetries.
    Results: There was no evidence for altered caudate nucleus asymmetry in ADHD, in contrast to prior literature. In children, there was less rightward asymmetry of the total hemispheric surface area compared to controls (t=2.1, P=0.04). Lower rightward asymmetry of medial orbitofrontal cortex surface area in ADHD (t=2.7, P=0.01) was similar to a recent finding for autism spectrum disorder. There were also some differences in cortical thickness asymmetry across age groups. In adults with ADHD, globus pallidus asymmetry was altered compared to those without ADHD. However, all effects were small (Cohen’s d from -0.18 to 0.18) and would not survive study-wide correction for multiple testing.
    Conclusion: Prior studies of altered structural brain asymmetry in ADHD were likely under-powered to detect the small effects reported here. Altered structural asymmetry is unlikely to provide a useful biomarker for ADHD, but may provide neurobiological insights into the trait.

    Additional information

    jcpp13396-sup-0001-supinfo.pdf
  • Pouw, W., Dingemanse, M., Motamedi, Y., & Ozyurek, A. (2021). A systematic investigation of gesture kinematics in evolving manual languages in the lab. Cognitive Science, 45(7): e13014. doi:10.1111/cogs.13014.

    Abstract

    Silent gestures consist of complex multi-articulatory movements but are now primarily studied through categorical coding of the referential gesture content. The relation of categorical linguistic content with continuous kinematics is therefore poorly understood. Here, we reanalyzed the video data from a gestural evolution experiment (Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019), which showed increases in the systematicity of gesture content over time. We applied computer vision techniques to quantify the kinematics of the original data. Our kinematic analyses demonstrated that gestures become more efficient and less complex in their kinematics over generations of learners. We further detect the systematicity of gesture form on the level of the gesture kinematic interrelations, which directly scales with the systematicity obtained on semantic coding of the gestures. Thus, from continuous kinematics alone, we can tap into linguistic aspects that were previously only approachable through categorical coding of meaning. Finally, going beyond issues of systematicity, we show how unique gesture kinematic dialects emerged over generations as isolated chains of participants gradually diverged over iterations from other chains. We thereby conclude that gestures can come to embody the linguistic system at the level of interrelationships between communicative tokens, which should calibrate our theories about form and linguistic content.
  • Pouw, W., Wit, J., Bögels, S., Rasenberg, M., Milivojevic, B., & Ozyurek, A. (2021). Semantically related gestures move alike: Towards a distributional semantics of gesture kinematics. In V. G. Duffy (Ed.), Digital human modeling and applications in health, safety, ergonomics and risk management. Human body, motion and behavior: 12th International Conference, DHM 2021, Held as Part of the 23rd HCI International Conference, HCII 2021 (pp. 269-287). Berlin: Springer. doi:10.1007/978-3-030-77817-0_20.
  • Pouw, W., Proksch, S., Drijvers, L., Gamba, M., Holler, J., Kello, C., Schaefer, R. S., & Wiggins, G. A. (2021). Multilevel rhythms in multimodal communication. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200334. doi:10.1098/rstb.2020.0334.

    Abstract

    It is now widely accepted that the brunt of animal communication is conducted via several modalities, e.g. acoustic and visual, either simultaneously or sequentially. This is a laudable multimodal turn relative to traditional accounts of temporal aspects of animal communication which have focused on a single modality at a time. However, the fields that are currently contributing to the study of multimodal communication are highly varied, and still largely disconnected given their sole focus on a particular level of description or their particular concern with human or non-human animals. Here, we provide an integrative overview of converging findings that show how multimodal processes occurring at neural, bodily, as well as social interactional levels each contribute uniquely to the complex rhythms that characterize communication in human and non-human animals. Though we address findings for each of these levels independently, we conclude that the most important challenge in this field is to identify how processes at these different levels connect.
  • Pouw, W., De Jonge-Hoekstra, L., Harrison, S. J., Paxton, A., & Dixon, J. A. (2021). Gesture-speech physics in fluent speech and rhythmic upper limb movements. Annals of the New York Academy of Sciences, 1491(1), 89-105. doi:10.1111/nyas.14532.

    Abstract

    Communicative hand gestures are often coordinated with prosodic aspects of speech, and salient moments of gestural movement (e.g., quick changes in speed) often co-occur with salient moments in speech (e.g., near peaks in fundamental frequency and intensity). A common understanding is that such gesture and speech coordination is culturally and cognitively acquired, rather than having a biological basis. Recently, however, the biomechanical physical coupling of arm movements to speech movements has been identified as a potentially important factor in understanding the emergence of gesture-speech coordination. Specifically, in the case of steady-state vocalization and mono-syllable utterances, forces produced during gesturing are transferred onto the tensioned body, leading to changes in respiratory-related activity and thereby affecting vocalization F0 and intensity. In the current experiment (N = 37), we extend this previous line of work to show that gesture-speech physics impacts fluent speech, too. Compared with non-movement, participants who are producing fluent self-formulated speech, while rhythmically moving their limbs, demonstrate heightened F0 and amplitude envelope, and such effects are more pronounced for higher-impulse arm versus lower-impulse wrist movement. We replicate that acoustic peaks arise especially during moments of peak-impulse (i.e., the beat) of the movement, namely around deceleration phases of the movement. Finally, higher deceleration rates of higher-mass arm movements were related to higher peaks in acoustics. These results confirm a role for physical-impulses of gesture affecting the speech system. We discuss the implications of gesture-speech physics for understanding of the emergence of communicative gesture, both ontogenetically and phylogenetically.

    Additional information

    data and analyses
  • Preisig, B., Riecke, L., Sjerps, M. J., Kösem, A., Kop, B. R., Bramson, B., Hagoort, P., & Hervais-Adelman, A. (2021). Selective modulation of interhemispheric connectivity by transcranial alternating current stimulation influences binaural integration. Proceedings of the National Academy of Sciences of the United States of America, 118(7): e2015488118. doi:10.1073/pnas.2015488118.

    Abstract

    Brain connectivity plays a major role in the encoding, transfer, and integration of sensory information. Interregional synchronization of neural oscillations in the γ-frequency band has been suggested as a key mechanism underlying perceptual integration. In a recent study, we found evidence for this hypothesis showing that the modulation of interhemispheric oscillatory synchrony by means of bihemispheric high-density transcranial alternating current stimulation (HD-tACS) affects binaural integration of dichotic acoustic features. Here, we aimed to establish a direct link between oscillatory synchrony, effective brain connectivity, and binaural integration. We experimentally manipulated oscillatory synchrony (using bihemispheric γ-tACS with different interhemispheric phase lags) and assessed the effect on effective brain connectivity and binaural integration (as measured with functional MRI and a dichotic listening task, respectively). We found that tACS reduced intrahemispheric connectivity within the auditory cortices and antiphase (interhemispheric phase lag 180°) tACS modulated connectivity between the two auditory cortices. Importantly, the changes in intra- and interhemispheric connectivity induced by tACS were correlated with changes in perceptual integration. Our results indicate that γ-band synchronization between the two auditory cortices plays a functional role in binaural integration, supporting the proposed role of interregional oscillatory synchrony in perceptual integration.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L.M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Pronina, M., Hübscher, I., Holler, J., & Prieto, P. (2021). Interactional training interventions boost children’s expressive pragmatic abilities: Evidence from a novel multidimensional testing approach. Cognitive Development, 57: 101003. doi:10.1016/j.cogdev.2020.101003.

    Abstract

    This study investigates the effectiveness of training preschoolers in order to enhance their social cognition and pragmatic skills. Eighty-three 3–4-year-olds were divided into three groups and listened to stories enriched with mental state terms. Then, whereas the control group engaged in non-reflective activities, the two experimental groups were guided by a trainer to reflect on mental states depicted in the stories. In one of these groups, the children were prompted to not only talk about these states but also “embody” them through prosodic and gestural cues. Results showed that while there were no significant effects on Theory of Mind, emotion understanding, and mental state verb comprehension, the experimental groups significantly improved their pragmatic skill scores pretest-to-posttest. These results suggest that interactional interventions can contribute to preschoolers’ pragmatic development, demonstrate the value of the new embodied training, and highlight the importance of multidimensional testing for the evaluation of intervention effects.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Puebla, G., Martin, A. E., & Doumas, L. A. A. (2021). The relational processing limits of classic and contemporary neural network models of language processing. Language, Cognition and Neuroscience, 36(2), 240-254. doi:10.1080/23273798.2020.1821906.

    Abstract

    Whether neural networks can capture relational knowledge is a matter of long-standing controversy. Recently, some researchers have argued that (1) classic connectionist models can handle relational structure and (2) the success of deep learning approaches to natural language processing suggests that structured representations are unnecessary to model human language. We tested the Story Gestalt model, a classic connectionist model of text comprehension, and a Sequence-to-Sequence with Attention model, a modern deep learning architecture for natural language processing. Both models were trained to answer questions about stories based on abstract thematic roles. Two simulations varied the statistical structure of new stories while keeping their relational structure intact. The performance of each model fell below chance under at least one manipulation. We argue that both models fail our tests because they can't perform dynamic binding. These results cast doubt on the suitability of traditional neural networks for explaining relational reasoning and language processing phenomena.

    Additional information

    supplementary material
  • Pye, C., Pfeiler, B., De León, L., Brown, P., & Mateo, P. (2007). Roots or edges? Explaining variation in children's early verb forms across five Mayan languages. In B. Pfeiler (Ed.), Learning indigenous languages: Child language acquisition in Mesoamerica (pp. 15-46). Berlin: Mouton de Gruyter.

    Abstract

    This paper compares the acquisition of verb morphology in five Mayan languages, using a comparative method based on historical linguistics to establish precise equivalences between linguistic categories in the five languages. Earlier work on the acquisition of these languages, based on examination of longitudinal samples of naturally-occurring child language, established that in some of the languages (Tzeltal, Tzotzil) bare roots were the predominant forms for children’s early verbs, but in three other languages (Yukatek, K’iche’, Q’anjobal) unanalyzed portions of the final part of the verb were more likely. That is, children acquiring different Mayan languages initially produce different parts of the adult verb forms. In this paper we analyse the structures of verbs in caregiver speech to these same children, using samples from two-year-old children and their caregivers, and assess the degree to which features of the input might account for the children’s early verb forms in these five Mayan languages. We found that the frequency with which adults produce verbal roots at the extreme right of words and sentences influences the frequency with which children produce bare verb roots in their early verb expressions, while production of verb roots at the extreme left does not, suggesting that the children ignore the extreme left of verbs and sentences when extracting verb roots.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operation(s) that supports the binding and storing of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of internal representation in posterior midline structures of the first event, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Rapold, C. J. (2007). From demonstratives to verb agreement in Benchnon: A diachronic perspective. In A. Amha, M. Mous, & G. Savà (Eds.), Omotic and Cushitic studies: Papers from the Fourth Cushitic Omotic Conference, Leiden, 10-12 April 2003 (pp. 69-88). Cologne: Rüdiger Köppe.
  • Räsänen, O., Seshadri, S., Lavechin, M., Cristia, A., & Casillas, M. (2021). ALICE: An open-source tool for automatic measurement of phoneme, syllable, and word counts from child-centered daylong recordings. Behavior Research Methods, 53, 818-835. doi:10.3758/s13428-020-01460-x.

    Abstract

    Recordings captured by wearable microphones are a standard method for investigating young children’s language environments. A key measure to quantify from such data is the amount of speech present in children’s home environments. To this end, the LENA recorder and software—a popular system for measuring linguistic input—estimates the number of adult words that children may hear over the course of a recording. However, word count estimation is challenging to do in a language-independent manner; the relationship between observable acoustic patterns and language-specific lexical entities is far from uniform across human languages. In this paper, we ask whether some alternative linguistic units, namely phone(me)s or syllables, could be measured instead of, or in parallel with, words in order to achieve improved cross-linguistic applicability and comparability of an automated system for measuring child language input. We discuss the advantages and disadvantages of measuring different units from theoretical and technical points of view. We also investigate the practical applicability of measuring such units using a novel system called Automatic LInguistic unit Count Estimator (ALICE) together with audio from seven child-centered daylong audio corpora from diverse cultural and linguistic environments. We show that language-independent measurement of phoneme counts is somewhat more accurate than syllables or words, but all three are highly correlated with human annotations on the same data. We share an open-source implementation of ALICE for use by the language research community, allowing automatic phoneme, syllable, and word count estimation from child-centered audio recordings.
  • Ravignani, A. (2021). Isochrony, vocal learning and the acquisition of rhythm and melody. Behavioral and Brain Sciences, 44: e88. doi:10.1017/S0140525X20001478.

    Abstract

    A cross-species perspective can extend and provide testable predictions for Savage et al.’s framework. Rhythm and melody, I argue, could bootstrap each other in the evolution of musicality. Isochrony may function as a temporal grid to support rehearsing and learning modulated, pitched vocalizations. Once this melodic plasticity is acquired, focus can shift back to refining rhythm processing and beat induction.
  • Ravignani, A., & De Boer, B. (2021). Joint origins of speech and music: Testing evolutionary hypotheses on modern humans. Semiotica, 239, 169-176. doi:10.1515/sem-2019-0048.

    Abstract

    How music and speech evolved is a mystery. Several hypotheses on their origins, including one on their joint origins, have been put forward but rarely tested. Here we report and comment on the first experiment testing the hypothesis that speech and music bifurcated from a common system. We highlight strengths of the reported experiment, point out its relatedness to animal work, and suggest three alternative interpretations of its results. We conclude by sketching a future empirical programme extending this work.
  • Raviv, L., De Heer Kloots, M., & Meyer, A. S. (2021). What makes a language easy to learn? A preregistered study on how systematic structure and community size affect language learnability. Cognition, 210: 104620. doi:10.1016/j.cognition.2021.104620.

    Abstract

    Cross-linguistic differences in morphological complexity could have important consequences for language learning. Specifically, it is often assumed that languages with more regular, compositional, and transparent grammars are easier to learn by both children and adults. Moreover, it has been shown that such grammars are more likely to evolve in bigger communities. Together, this suggests that some languages are acquired faster than others, and that this advantage can be traced back to community size and to the degree of systematicity in the language. However, the causal relationship between systematic linguistic structure and language learnability has not been formally tested, despite its potential importance for theories on language evolution, second language learning, and the origin of linguistic diversity. In this pre-registered study, we experimentally tested the effects of community size and systematic structure on adult language learning. We compared the acquisition of different yet comparable artificial languages that were created by big or small groups in a previous communication experiment, which varied in their degree of systematic linguistic structure. We asked (a) whether more structured languages were easier to learn; and (b) whether languages created by the bigger groups were easier to learn. We found that highly systematic languages were learned faster and more accurately by adults, but that the relationship between language learnability and linguistic structure was typically non-linear: high systematicity was advantageous for learning, but learners did not benefit from partly or semi-structured languages. Community size did not affect learnability: languages that evolved in big and small groups were equally learnable, and there was no additional advantage for languages created by bigger groups beyond their degree of systematic structure. Furthermore, our results suggested that predictability is an important advantage of systematic structure: participants who learned more structured languages were better at generalizing these languages to new, unfamiliar meanings, and different participants who learned the same more structured languages were more likely to produce similar labels. That is, systematic structure may allow speakers to converge effortlessly, such that strangers can immediately understand each other.
  • Rebuschat, P., Monaghan, P., & Schoetensack, C. (2021). Learning vocabulary and grammar from cross-situational statistics. Cognition, 206: 104475. doi:10.1016/j.cognition.2020.104475.

    Abstract

    Across multiple situations, child and adult learners are sensitive to co-occurrences between individual words and their referents in the environment, which provide a means by which the ambiguity of word-world mappings may be resolved (Monaghan & Mattock, 2012; Scott & Fisher, 2012; Smith & Yu, 2008; Yu & Smith, 2007). In three studies, we tested whether cross-situational learning is sufficiently powerful to support simultaneous learning of the referents for words from multiple grammatical categories, a more realistic reflection of more complex natural language learning situations. In Experiment 1, adult learners heard sentences comprising nouns, verbs, adjectives, and grammatical markers indicating subject and object roles, and viewed a dynamic scene to which the sentence referred. In Experiments 2 and 3, we further increased the uncertainty of the referents by presenting two scenes alongside each sentence. In all studies, we found that cross-situational statistical learning was sufficiently powerful to facilitate acquisition of both vocabulary and grammar from complex sentence-to-scene correspondences, simulating the situations that more closely resemble the challenge facing the language learner.

    Additional information

    supplementary material
  • Redl, T. (2021). Masculine generic pronouns: Investigating the processing of an unintended gender cue. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Redl, T., Frank, S. L., De Swart, P., & De Hoop, H. (2021). The male bias of a generically-intended masculine pronoun: Evidence from eye-tracking and sentence evaluation. PLoS One, 16(4): e0249309. doi:10.1371/journal.pone.0249309.

    Abstract

    Two experiments tested whether the Dutch possessive pronoun zijn ‘his’ gives rise to a gender inference and thus causes a male bias when used generically in sentences such as Everyone was putting on his shoes. Experiment 1 (N = 120, 48 male) was a conceptual replication of a previous eye-tracking study that had not found evidence of a male bias. The results of the current eye-tracking experiment showed the generically-intended masculine pronoun to trigger a gender inference and cause a male bias, but for male participants and in stereotypically neutral stereotype contexts only. No evidence for a male bias was thus found in stereotypically female and male contexts nor for female participants altogether. Experiment 2 (N = 80, 40 male) used the same stimuli as Experiment 1, but employed the sentence evaluation paradigm. No evidence of a male bias was found in Experiment 2. Taken together, the results suggest that the generically-intended masculine pronoun zijn ‘his’ can cause a male bias for male participants even when the referents are previously introduced by inclusive and grammatically gender-unmarked iedereen ‘everyone’. This male bias surfaces with eye-tracking, which taps directly into early language processing, but not in offline sentence evaluations. Furthermore, the results suggest that the intended generic reading of the masculine possessive pronoun zijn ‘his’ is more readily available for women than for men.

    Additional information

    data
  • Redolfi, M., Soares, S. M. P., Czypionka, A., & Kupisch, T. (2021). Experimental evidence for the interpretation of definite plural articles as markers of genericity – How Italian can help. Glossa: a journal of general linguistics, 6(1): 16. doi:10.5334/gjgl.1165.

    Abstract

    In the Romance languages, definite plural articles (e.g., le rane ‘the frogs’) are generally ambiguous between a generic and a specific interpretation, and speakers must reconstruct the intended interpretation through the linguistic or extra-linguistic context. Following the “polar bear” paradigm implemented in Czypionka & Kupisch (2019)’s investigation on German, the goal of the present study is to check the suitability of their test on article semantics, by establishing to what extent native speakers of Italian interpret ambiguous definite plural DPs as generic or specific in the presence of a nonlinguistic picture context. We present judgment and reaction time data monitoring the preferred reading of sentences introduced by different kinds of noun phrases (e.g., Le rane/Queste rane/Le rane di solito sono verdi/gialle ‘The/These/Usually frogs are green/yellow’), while looking at pictures showing prototypical or non-prototypical properties (e.g., green vs. yellow frogs). Our results show that both possible interpretations of definite plural articles are routinely considered in Italian, despite the presence of a picture with specific referents, validating the “polar bear” paradigm as a suitable test of article semantics.
  • Reifegerste, J., Meyer, A. S., Zwitserlood, P., & Ullman, M. T. (2021). Aging affects steaks more than knives: Evidence that the processing of words related to motor skills is relatively spared in aging. Brain and Language, 218: 104941. doi:10.1016/j.bandl.2021.104941.

    Abstract

    Lexical-processing declines are a hallmark of aging. However, the extent of these declines may vary as a function of different factors. Motivated by findings from neurodegenerative diseases and healthy aging, we tested whether ‘motor-relatedness’ (the degree to which words are associated with particular human body movements) might moderate such declines. We investigated this question by examining data from three experiments. The experiments were carried out in different languages (Dutch, German, English) using different tasks (lexical decision, picture naming), and probed verbs and nouns, in all cases controlling for potentially confounding variables (e.g., frequency, age-of-acquisition, imageability). Whereas ‘non-motor words’ (e.g., steak) showed age-related performance decreases in all three experiments, ‘motor words’ (e.g., knife) yielded either smaller decreases (in one experiment) or no decreases (in two experiments). The findings suggest that motor-relatedness can attenuate or even prevent age-related lexical declines, perhaps due to the relative sparing of neural circuitry underlying such words.

    Additional information

    supplementary material
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word- or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent) we could examine to what extent subjects focused on form rather than meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition compared to the neutral and incongruent word conditions. These results suggest that performance on phonological word length comparisons is dependent on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • de Reus, K., Soma, M., Anichini, M., Gamba, M., de Heer Kloots, M., Lense, M., Bruno, J. H., Trainor, L., & Ravignani, A. (2021). Rhythm in dyadic interactions. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200337. doi:10.1098/rstb.2020.0337.

    Abstract

    This review paper discusses rhythmic dyadic interactions in social and sexual contexts. We report rhythmic interactions during communication within dyads, as found in humans, non-human primates, non-primate mammals, birds, anurans and insects. Based on the patterns observed, we infer adaptive explanations for the observed rhythm interactions and identify knowledge gaps. Across species, the social environment during ontogeny is a key factor in shaping adult signal repertoires and timing mechanisms used to regulate interactions. The degree of temporal coordination is influenced by the dynamic and strength of the dyadic interaction. Most studies of temporal structure in interactive signals mainly focus on one modality (acoustic and visual); we suggest more work should be performed on multimodal signals. Multidisciplinary approaches combining cognitive science, ethology and ecology should shed more light on the exact timing mechanisms involved. Taken together, rhythmic signalling behaviours are widespread and critical in regulating social interactions across taxa.
  • Rhie, A., McCarthy, S. A., Fedrigo, O., Damas, J., Formenti, G., Koren, S., Uliano-Silva, M., Chow, W., Fungtammasan, A., Kim, J., Lee, C., Ko, B. J., Chaisson, M., Gedman, G. L., Cantin, L. J., Thibaud-Nissen, F., Haggerty, L., Bista, I., Smith, M., Haase, B., Mountcastle, J., Winkler, S., Paez, S., Howard, J., Vernes, S. C., Lama, T. M., Grutzner, F., Warren, W. C., Balakrishnan, C. N., Burt, D., George, J. M., Biegler, M. T., Iorns, D., Digby, A., Eason, D., Robertson, B., Edwards, T., Wilkinson, M., Turner, G., Meyer, A., Kautt, A. F., Franchini, P., Detrich, H. W., Svardal, H., Wagner, M., Naylor, G. J. P., Pippel, M., Malinsky, M., Mooney, M., Simbirsky, M., Hannigan, B. T., Pesout, T., Houck, M., Misuraca, A., Kingan, S. B., Hall, R., Kronenberg, Z., Sović, I., Dunn, C., Ning, Z., Hastie, A., Lee, J., Selvaraj, S., Green, R. E., Putnam, N. H., Gut, I., Ghurye, J., Garrison, E., Sims, Y., Collins, J., Pelan, S., Torrance, J., Tracey, A., Wood, J., Dagnew, R. E., Guan, D., London, S. E., Clayton, D. F., Mello, C. V., Friedrich, S. R., Lovell, P. V., Osipova, E., Al-Ajli, F. O., Secomandi, S., Kim, H., Theofanopoulou, C., Hiller, M., Zhou, Y., Harris, R. S., Makova, K. D., Medvedev, P., Hoffman, J., Masterson, P., Clark, K., Martin, F., Howe, K., Flicek, P., Walenz, B. P., Kwak, W., Clawson, H., Diekhans, M., Nassar, L., Paten, B., Kraus, R. H. S., Crawford, A. J., Gilbert, M. T. P., Zhang, G., Venkatesh, B., Murphy, R. W., Koepfli, K.-P., Shapiro, B., Johnson, W. E., Di Palma, F., Marques-Bonet, T., Teeling, E. C., Warnow, T., Graves, J. M., Ryder, O. A., Haussler, D., O’Brien, S. J., Korlach, J., Lewin, H. A., Howe, K., Myers, E. W., Durbin, R., Phillippy, A. M., & Jarvis, E. D. (2021). Towards complete and error-free genome assemblies of all vertebrate species. Nature, 592, 737-746. doi:10.1038/s41586-021-03451-0.

    Abstract

    High-quality and complete reference genome assemblies are fundamental for the application of genomics to biology, disease, and biodiversity conservation. However, such assemblies are available for only a few non-microbial species. To address this issue, the international Genome 10K (G10K) consortium has worked over a five-year period to evaluate and develop cost-effective methods for assembling highly accurate and nearly complete reference genomes. Here we present lessons learned from generating assemblies for 16 species that represent six major vertebrate lineages. We confirm that long-read sequencing technologies are essential for maximizing genome quality, and that unresolved complex repeats and haplotype heterozygosity are major sources of assembly error when not handled correctly. Our assemblies correct substantial errors, add missing sequence in some of the best historical reference genomes, and reveal biological discoveries. These include the identification of many false gene duplications, increases in gene sizes, chromosome rearrangements that are specific to lineages, a repeated independent chromosome breakpoint in bat genomes, and a canonical GC-rich pattern in protein-coding genes and their regulatory regions. Adopting these lessons, we have embarked on the Vertebrate Genomes Project (VGP), an international effort to generate high-quality, complete reference genomes for all of the roughly 70,000 extant vertebrate species and to help to enable a new era of discovery across the life sciences.
  • Ringersma, J., & Kemps-Snijders, M. (2007). Creating multimedia dictionaries of endangered languages using LEXUS. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 65-68). Baixas, France: ISCA-Int.Speech Communication Assoc.

    Abstract

    This paper reports on the development of a flexible web based lexicon tool, LEXUS. LEXUS is targeted at linguists involved in language documentation (of endangered languages). It allows the creation of lexica within the structure of the proposed ISO LMF standard and uses the proposed concept naming conventions from the ISO data categories, thus enabling interoperability, search and merging. LEXUS also offers the possibility to visualize language, since it provides functionalities to include audio, video and still images to the lexicon. With LEXUS it is possible to create semantic network knowledge bases, using typed relations. The LEXUS tool is free for use. Index Terms: lexicon, web based application, endangered languages, language documentation.
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi: 10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. 46 5–7 year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L., Gürel, A., Tatar, S., & Marti, L. (Eds.). (2007). EUROSLA Yearbook 7. Amsterdam: Benjamins.

    Abstract

    The annual conference of the European Second Language Association provides an opportunity for the presentation of second language research with a genuinely European flavour. The theoretical perspectives adopted are wide-ranging and may fall within traditions overlooked elsewhere. Moreover, the studies presented are largely multi-lingual and cross-cultural, as befits the make-up of modern-day Europe. At the same time, the work demonstrates sophisticated awareness of scholarly insights from around the world. The EUROSLA yearbook presents a selection each year of the very best research from the annual conference. Submissions are reviewed and professionally edited, and only those of the highest quality are selected. Contributions are in English.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language, but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide, for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Rodd, J., Decuyper, C., Bosker, H. R., & Ten Bosch, L. (2021). A tool for efficient and accurate segmentation of speech data: Announcing POnSS. Behavior Research Methods, 53, 744-756. doi:10.3758/s13428-020-01449-6.

    Abstract

    Despite advances in automatic speech recognition (ASR), human input is still essential to produce research-grade segmentations of speech data. Conventional approaches to manual segmentation are very labour-intensive. We introduce POnSS, a browser-based system that is specialized for the task of segmenting the onsets and offsets of words, that combines aspects of ASR with limited human input. In developing POnSS, we identified several subtasks of segmentation, and implemented each of these as separate interfaces for the annotators to interact with, to streamline their task as much as possible. We evaluated segmentations made with POnSS against a baseline of segmentations of the same data made conventionally in Praat. We observed that POnSS achieved comparable reliability to segmentation using Praat, but required 23% less annotator time investment. Because of its greater efficiency without sacrificing reliability, POnSS represents a distinct methodological advance for the segmentation of speech data.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Roelofs, A., & Lamers, M. (2007). Modelling the control of visual attention in Stroop-like tasks. In A. S. Meyer, L. R. Wheeldon, & A. Krott (Eds.), Automaticity and control in language processing (pp. 123-142). Hove: Psychology Press.

    Abstract

    The authors discuss the issue of how visual orienting, selective stimulus processing, and vocal response planning are related in Stroop-like tasks. The evidence suggests that visual orienting is dependent on both visual processing and verbal response planning. They also discuss the issue of selective perceptual processing in Stroop-like tasks. The evidence suggests that space-based and object-based attention lead to a Trojan horse effect in the classic Stroop task, which can be moderated by increasing the spatial distance between colour and word and by making colour and word part of different objects. Reducing the presentation duration of the colour-word stimulus or the duration of either the colour or word dimension reduces Stroop interference. This paradoxical finding was correctly simulated by the WEAVER++ model. Finally, the authors discuss evidence on the neural correlates of executive attention, in particular, the ACC. The evidence suggests that the ACC plays a role in regulation itself rather than only signalling the need for regulation.
  • Rossi, G. (2021). Conversation analysis (CA). In J. Stanlaw (Ed.), The International Encyclopedia of Linguistic Anthropology. Wiley-Blackwell. doi:10.1002/9781118786093.iela0080.

    Abstract

    Conversation analysis (CA) is an approach to the study of language and social interaction that puts at center stage its sequential development. The chain of initiating and responding actions that characterizes any interaction is a source of internal evidence for the meaning of social behavior as it exposes the understandings that participants themselves give of what one another is doing. Such an analysis requires the close and repeated inspection of audio and video recordings of naturally occurring interaction, supported by transcripts and other forms of annotation. Distributional regularities are complemented by a demonstration of participants' orientation to deviant behavior. CA has long maintained a constructive dialogue and reciprocal influence with linguistic anthropology. This includes a recent convergence on the cross-linguistic and cross-cultural study of social interaction.
  • Rossi, G., & Stivers, T. (2021). Category-sensitive actions in interaction. Social Psychology Quarterly, 84(1), 49-74. doi:10.1177/0190272520944595.

    Abstract

    This article is concerned with how social categories (e.g., wife, mother, sister, tenant, guest) become visible through the actions that individuals perform in social interaction. Using audio and video recordings of social interaction as data and conversation analysis as a method, we examine how individuals display their rights or constraints to perform certain actions by virtue of occupying a certain social category. We refer to actions whose performance is sensitive to membership in a certain social category as category-sensitive actions. Most of the time, the social boundaries surrounding these actions remain invisible because participants in interaction typically act in ways that are consistent with their social status and roles. In this study, however, we specifically examine instances where category boundaries become visible as participants approach, expose, or transgress them. Our focus is on actions with relatively stringent category sensitivity such as requests, offers, invitations, or handling one’s possessions. Ultimately, we believe these are the tip of an iceberg that potentially includes most, if not all, actions.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Royo, J., Forkel, S. J., Pouget, P., & Thiebaut de Schotten, M. (2021). The squirrel monkey model in clinical neuroscience. Neuroscience and Biobehavioral Reviews, 128, 152-164. doi:10.1016/j.neubiorev.2021.06.006.

    Abstract

    Clinical neuroscience research relying on animal models brought valuable translational insights into the function and pathologies of the human brain. The anatomical, physiological, and behavioural similarities between humans and mammals have prompted researchers to study cerebral mechanisms at different levels to develop and test new treatments. The vast majority of biomedical research uses rodent models, which are easily manipulable and have a broadly resembling organisation to the human nervous system but cannot satisfactorily mimic some disorders. For these disorders, macaque monkeys have been used as they have a more comparable central nervous system. Still, this research has been hampered by limitations, including high costs and reduced samples. This review argues that a squirrel monkey model might bridge the gap by complementing translational research from rodents, macaque, and humans. With the advent of promising new methods such as ultrasound imaging, tool miniaturisation, and a shift towards open science, the squirrel monkey model represents a window of opportunity that will potentially fuel new translational discoveries in the diagnosis and treatment of brain pathologies.
  • Rubio-Fernández, P. (2021). Color discriminability makes over-specification efficient: Theoretical analysis and empirical evidence. Humanities and Social Sciences Communications, 8: 147. doi:10.1057/s41599-021-00818-6.

    Abstract

    A psychophysical analysis of referential communication establishes a causal link between a visual stimulus and a speaker’s perception of this stimulus, and between the speaker’s internal representation and their reference production. Here, I argue that, in addition to visual perception and language, social cognition plays an integral part in this complex process, as it enables successful speaker-listener coordination. This pragmatic analysis of referential communication tries to explain the redundant use of color adjectives. It is well documented that people use color words when it is not necessary to identify the referent; for instance, they may refer to “the blue star” in a display of shapes with a single star. This type of redundancy challenges influential work from cognitive science and philosophy of language, suggesting that human communication is fundamentally efficient. Here, I explain these seemingly contradictory findings by confirming the visual efficiency hypothesis: redundant color words can facilitate the listener’s visual search for a referent, despite making the description unnecessarily long. Participants’ eye movements revealed that they were faster to find “the blue star” than “the star” in a display of shapes with only one star. A language production experiment further revealed that speakers are highly sensitive to a target’s discriminability, systematically reducing their use of redundant color adjectives as the color of the target became more pervasive in a display. It is concluded that a referential expression’s efficiency should be based not only on its informational value, but also on its discriminatory value, which means that redundant color words can be more efficient than shorter descriptions.
  • Rubio-Fernández, P., Mollica, F., & Jara-Ettinger, J. (2021). Speakers and listeners exploit word order for communicative efficiency: A cross-linguistic investigation. Journal of Experimental Psychology: General, 150, 583-594. doi:10.1037/xge0000963.

    Abstract

    Pragmatic theories and computational models of reference must account for people’s frequent use of redundant color adjectives (e.g., referring to a single triangle as “the blue triangle”). The standard pragmatic view holds that the informativity of a referential expression depends on pragmatic contrast: Color adjectives should be used to contrast competitors of the same kind to preempt an ambiguity (e.g., between several triangles of different colors), otherwise they are redundant. Here we propose an alternative to the standard view, the incremental efficiency hypothesis, according to which the efficiency of a referential expression must be calculated incrementally over the entire visual context. This is the first theoretical account of referential efficiency that is sensitive to the incrementality of language processing, making different cross-linguistic predictions depending on word order. Experiment 1 confirmed that English speakers produced more redundant color adjectives (e.g., “the blue triangle”) than Spanish speakers (e.g., “el triángulo azul”), but both language groups used more redundant color adjectives in denser displays where it would be more efficient. In Experiments 2A and 2B, we used eye tracking to show that pragmatic contrast is not a processing constraint. Instead, incrementality and efficiency determine that English listeners establish color contrast across categories (BLUE SHAPES > TRIANGULAR ONE), whereas Spanish listeners establish color contrast within a category (TRIANGLES > BLUE ONE). Spanish listeners, however, reversed their visual search strategy when tested in English immediately after. Our results show that speakers and listeners of different languages exploit word order to increase communicative efficiency.
  • Rubio-Fernández, P. (2021). Pragmatic markers: the missing link between language and Theory of Mind. Synthese, 199, 1125-1158. doi:10.1007/s11229-020-02768-z.

    Abstract

    Language and Theory of Mind come together in communication, but their relationship has been intensely contested. I hypothesize that pragmatic markers connect language and Theory of Mind and enable their co-development and co-evolution through a positive feedback loop, whereby the development of one skill boosts the development of the other. I propose to test this hypothesis by investigating two types of pragmatic markers: demonstratives (e.g., ‘this’ vs. ‘that’ in English) and articles (e.g., ‘a’ vs. ‘the’). Pragmatic markers are closed-class words that encode non-representational information that is unavailable to consciousness, but accessed automatically in processing. These markers have been associated with implicit Theory of Mind because they are used to establish joint attention (e.g., ‘I prefer that one’) and mark shared knowledge (e.g., ‘We bought the house’ vs. ‘We bought a house’). Here I develop a theoretical account of how joint attention (as driven by the use of demonstratives) is the basis for children’s later tracking of common ground (as marked by definite articles). The developmental path from joint attention to common ground parallels language change, with demonstrative forms giving rise to definite articles. This parallel opens the possibility of modelling the emergence of Theory of Mind in human development in tandem with its routinization across language communities and generations of speakers. I therefore propose that, in order to understand the relationship between language and Theory of Mind, we should study pragmatics at three parallel timescales: during language acquisition, language use, and language change.
  • Rubio-Fernández, P., Southgate, V., & Király, I. (2021). Pragmatics for infants: commentary on Wenzel et al. (2020). Royal Society Open Science, 8: 210247. doi:10.1098/rsos.210247.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • De Rue, N., Snijders, T. M., & Fikkert, P. (2021). Contrast and conflict in Dutch vowels. Frontiers in Human Neuroscience, 15: 629648. doi:10.3389/fnhum.2021.629648.

    Abstract

    The nature of phonological representations has been extensively studied in phonology and psycholinguistics. While full specification is still the norm in psycholinguistic research, underspecified representations may better account for perceptual asymmetries. In this paper, we report on a mismatch negativity (MMN) study with Dutch listeners who took part in a passive oddball paradigm to investigate when the brain notices the difference between expected and observed vowels. In particular, we tested neural discrimination (indicating perceptual discrimination) of the tense mid vowel pairs /o/-/ø/ (place contrast), /e/-/ø/ (labiality or rounding contrast), and /e/-/o/ (place and labiality contrast). Our results show (a) a perceptual asymmetry for place in the /o/-/ø/ contrast, supporting underspecification of [CORONAL] and replicating earlier results for German, and (b) a perceptual asymmetry for labiality for the /e/-/ø/ contrast, which was not reported in the German study. A labial deviant [ø] (standard /e/) yielded a larger MMN than a deviant [e] (standard /ø/). No asymmetry was found for the two-feature contrast. This study partly replicates a similar MMN study on German vowels, and partly presents new findings indicating cross-linguistic differences. Although the vowel inventory of Dutch and German is to a large extent comparable, their (morpho)phonological systems are different, which is reflected in processing.

    Additional information

    supplementary material
  • Ruggeri, K., Većkalov, B., Bojanić, L., Andersen, T. L., Ashcroft-Jones, S., Ayacaxli, N., Barea-Arroyo, P., Berge, M. L., Bjørndal, L. D., Bursalıoğlu, A., Bühler, V., Čadek, M., Çetinçelik, M., Clay, G., Cortijos-Bernabeu, A., Damnjanović, K., Dugue, T. M., Esberg, M., Esteban-Serna, C., Felder, E. N., Friedemann, M., Frontera-Villanueva, D. I., Gale, P., Garcia-Garzon, E., Geiger, S. J., George, L., Girardello, A., Gracheva, A., Gracheva, A., Guillory, M., Hecht, M., Herte, K., Hubená, B., Ingalls, W., Jakob, L., Janssens, M., Jarke, H., Kácha, O., Kalinova, K. N., Karakasheva, R., Khorrami, P. R., Lep, Ž., Lins, S., Lofthus, I. S., Mamede, S., Mareva, S., Mascarenhas, M. F., McGill, L., Morales-Izquierdo, S., Moltrecht, B., Mueller, T. S., Musetti, M., Nelsson, J., Otto, T., Paul, A. F., Pavlović, I., Petrović, M. B., Popović, D., Prinz, G. M., Razum, J., Sakelariev, I., Samuels, V., Sanguino, I., Say, N., Schuck, J., Soysal, I., Todsen, A. L., Tünte, M. R., Vdovic, M., Vintr, J., Vovko, M., Vranka, M. A., Wagner, L., Wilkins, L., Willems, M., Wisdom, E., Yosifova, A., Zeng, S., Ahmed, M. A., Dwarkanath, T., Cikara, M., Lees, J., & Folke, T. (2021). The general fault in our fault lines. Nature Human Behaviour, 5, 1369-1380. doi:10.1038/s41562-021-01092-x.

    Abstract

    Pervading global narratives suggest that political polarization is increasing, yet the accuracy of such group meta-perceptions has been drawn into question. A recent US study suggests that these beliefs are inaccurate and drive polarized beliefs about out-groups. However, it also found that informing people of inaccuracies reduces those negative beliefs. In this work, we explore whether these results generalize to other countries. To achieve this, we replicate two of the original experiments with 10,207 participants across 26 countries. We focus on local group divisions, which we refer to as fault lines. We find broad generalizability for both inaccurate meta-perceptions and reduced negative motive attribution through a simple disclosure intervention. We conclude that inaccurate and negative group meta-perceptions are exhibited in myriad contexts and that informing individuals of their misperceptions can yield positive benefits for intergroup relations. Such generalizability highlights a robust phenomenon with implications for political discourse worldwide.

    Additional information

    supplementary information data via OSF
  • De Ruiter, J. P. (2007). Some multimodal signals in humans. In I. Van de Sluis, M. Theune, E. Reiter, & E. Krahmer (Eds.), Proceedings of the Workshop on Multimodal Output Generation (MOG 2007) (pp. 141-148).

    Abstract

    In this paper, I will give an overview of some well-studied multimodal signals that humans produce while they communicate with other humans, and discuss the implications of those studies for HCI. I will first discuss a conceptual framework that allows us to distinguish between functional and sensory modalities. This distinction is important, as there are multiple functional modalities using the same sensory modality (e.g., facial expression and eye-gaze in the visual modality). A second theoretically important issue is redundancy. Some signals appear to be redundant with a signal in another modality, whereas others give new information or even appear to give conflicting information (see e.g., the work of Susan Goldin-Meadow on speech-accompanying gestures). I will argue that multimodal signals are never truly redundant. First, many gestures that appear at first sight to express the same meaning as the accompanying speech generally provide extra (analog) information about manner, path, etc. Second, the simple fact that the same information is expressed in more than one modality is itself a communicative signal. Armed with this conceptual background, I will then proceed to give an overview of some multimodal signals that have been investigated in human-human research, and the level of understanding we have of the meaning of those signals. The latter issue is especially important for potential implementations of these signals in artificial agents. First, I will discuss pointing gestures. I will address the issue of the timing of pointing gestures relative to the speech they are supposed to support, the mutual dependency between pointing gestures and speech, and discuss the existence of alternative ways of pointing from other cultures. The most frequent form of pointing that does not involve the index finger is a cultural practice called lip-pointing, which employs two visual functional modalities, mouth-shape and eye-gaze, simultaneously for pointing. Next, I will address the issue of eye-gaze. A classical study by Kendon (1967) claims that there is a systematic relationship between eye-gaze (at the interlocutor) and turn-taking states. Research at our institute has shown that this relationship is weaker than has often been assumed. If the dialogue setting contains a visible object that is relevant to the dialogue (e.g., a map), the rate of eye-gaze-at-other drops dramatically and its relationship to turn taking disappears completely. The implications for machine-generated eye-gaze are discussed. Finally, I will explore a theoretical debate regarding spontaneous gestures. It has often been claimed that the class of gestures that is called iconic by McNeill (1992) are a “window into the mind”. That is, they are claimed to give the researcher (or even the interlocutor) a direct view into the speaker’s thought, without being obscured by the complex transformations that take place when transforming a thought into a verbal utterance. I will argue that this is an illusion. Gestures can be shown to be specifically designed such that the listener can be expected to interpret them. Although the transformations carried out to express a thought in gesture are indeed (partly) different from the corresponding transformations for speech, they are a) complex, and b) severely understudied. This obviously has consequences both for the gesture research agenda, and for the generation of iconic gestures by machines.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together, however they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • De Ruiter, L. E., Lemen, H., Lieven, E. V. M., Brandt, S., & Theakston, A. L. (2021). Structural and interactional aspects of adverbial sentences in English mother-child interactions: an analysis of two dense corpora. Journal of Child Language, 48(6), 1150-1184. doi:10.1017/S0305000920000641.

    Abstract

    We analysed both structural and functional aspects of sentences containing the four adverbials “after”, “before”, “because”, and “if” in two dense corpora of parent-child interactions from two British English-acquiring children (2;00–4;07). In comparing mothers’ and children’s usage we separate out the effects of frequency, cognitive complexity and pragmatics in explaining the course of acquisition of adverbial sentences. We also compare these usage patterns to stimuli used in a range of experimental studies and show how differences may account for some of the difficulties that children have shown in experiments. In addition, we report descriptive data on various aspects of adverbial sentences that have not yet been studied as a resource for future investigations.

    Additional information

    S0305000920000641sup001.docx
  • De Ruiter, J. P., Noordzij, M. L., Newman-Norlund, S., Hagoort, P., & Toni, I. (2007). On the origins of intentions. In P. Haggard, Y. Rossetti, & M. Kawato (Eds.), Sensorimotor foundations of higher cognition (pp. 593-610). Oxford: Oxford University Press.
  • De Ruiter, J. P., & Enfield, N. J. (2007). The BIC model: A blueprint for the communicator. In C. Stephanidis (Ed.), Universal access in Human-Computer Interaction: Applications and services (pp. 251-258). Berlin: Springer.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • San Jose, A., Roelofs, A., & Meyer, A. S. (2021). Modeling the distributional dynamics of attention and semantic interference in word production. Cognition, 211: 104636. doi:10.1016/j.cognition.2021.104636.

    Abstract

    In recent years, it has become clear that attention plays an important role in spoken word production. Some of this evidence comes from distributional analyses of reaction time (RT) in regular picture naming and picture-word interference. Yet we lack a mechanistic account of how the properties of RT distributions come to reflect attentional processes and how these processes may in turn modulate the amount of conflict between lexical representations. Here, we present a computational account according to which attentional lapses allow for existing conflict to build up unsupervised on a subset of trials, thus modulating the shape of the resulting RT distribution. Our process model resolves discrepancies between outcomes of previous studies on semantic interference. Moreover, the model's predictions were confirmed in a new experiment where participants' motivation to remain attentive determined the size and distributional locus of semantic interference in picture naming. We conclude that process modeling of RT distributions importantly improves our understanding of the interplay between attention and conflict in word production. Our model thus provides a framework for interpreting distributional analyses of RT data in picture naming tasks.
  • Santin, M., Van Hout, A., & Flecken, M. (2021). Event endings in memory and language. Language, Cognition and Neuroscience, 36(5), 625-648. doi:10.1080/23273798.2020.1868542.

    Abstract

    Memory is fundamental for comprehending and segmenting the flow of activity around us into units called “events”. Here, we investigate the effect of the movement dynamics of actions (ceased, ongoing) and the inner structure of events (with or without object-state change) on people's event memory. Furthermore, we investigate how describing events, and the meaning and form of verb predicates used (denoting a culmination moment, or not, in single verbs or verb-satellite constructions), affects event memory. Before taking a surprise recognition task, Spanish and Mandarin speakers (who lexicalise culmination in different verb predicate forms) watched short videos of events, either in a non-verbal (probe-recognition) or a verbal experiment (event description). Results show that culminated events (i.e. ceased change-of-state events) were remembered best across experiments. Language use was shown to enhance memory overall. Further, the form of the verb predicates used for denoting culmination had a moderate effect on memory.
  • Sauppe, S., & Flecken, M. (2021). Speaking for seeing: Sentence structure guides visual event apprehension. Cognition, 206: 104516. doi:10.1016/j.cognition.2020.104516.

    Abstract

    Human experience and communication are centred on events, and event apprehension is a rapid process that draws on the visual perception and immediate categorization of event roles (“who does what to whom”). We demonstrate a role for syntactic structure in visual information uptake for event apprehension. An event structure foregrounding either the agent or patient was activated during speaking, transiently modulating the apprehension of subsequently viewed unrelated events. Speakers of Dutch described pictures with actives and passives (agent and patient foregrounding, respectively). First fixations on pictures of unrelated events that were briefly presented (for 300 ms) next were influenced by the active or passive structure of the previously produced sentence. Going beyond the study of how single words cue object perception, we show that sentence structure guides the viewpoint taken during rapid event apprehension.

    Additional information

    supplementary material
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Scala, M., Anijs, M., Battini, R., Madia, F., Capra, V., Scudieri, P., Verrotti, A., Zara, F., Minetti, C., Vernes, S. C., & Striano, P. (2021). Hyperkinetic stereotyped movements in a boy with biallelic CNTNAP2 variants. Italian Journal of Pediatrics, 47: 208. doi:10.1186/s13052-021-01162-w.

    Abstract

    Background

    Heterozygous variants in CNTNAP2 have been implicated in a wide range of neurological phenotypes, including intellectual disability (ID), epilepsy, autistic spectrum disorder (ASD), and impaired language. However, heterozygous variants can also be found in unaffected individuals. Biallelic CNTNAP2 variants are rarer and cause a well-defined genetic syndrome known as CASPR2 deficiency disorder, a condition characterised by ID, early-onset refractory epilepsy, language impairment, and autistic features.
    Case-report

    A 7-year-old boy presented with hyperkinetic stereotyped movements that started during early infancy and persisted over childhood. Abnormal movements consisted of rhythmic and repetitive shaking of the four limbs, with evident stereotypic features. Additional clinical features included ID, attention deficit-hyperactivity disorder (ADHD), ASD, and speech impairment, consistent with CASPR2 deficiency disorder. Whole-genome array comparative genomic hybridization detected a maternally inherited 0.402 Mb duplication, which involved intron 1, exon 2, and intron 2 of CNTNAP2 (c.97+?_209-?dup). The affected region in intron 1 contains a binding site for the transcription factor FOXP2, potentially leading to abnormal CNTNAP2 expression regulation. Sanger sequencing of the coding region of CNTNAP2 also identified a paternally-inherited missense variant c.2752C>T, p.(Leu918Phe).
    Conclusion

    This case expands the molecular and phenotypic spectrum of CASPR2 deficiency disorder, suggesting that hyperkinetic stereotyped movements may be a rare, yet significant, clinical feature of this complex neurological disorder. Furthermore, the identification of an in-frame, largely non-coding duplication in CNTNAP2 points to a sophisticated underlying molecular mechanism, likely involving impaired FOXP2 binding.

    Additional information

    additional files
  • Scharenborg, O., Ernestus, M., & Wan, V. (2007). Segmentation of speech: Child's play? In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1953-1956). Adelaide: Causal Productions.

    Abstract

    The difficulty of the task of segmenting a speech signal into its words is immediately clear when listening to a foreign language; it is much harder to segment the signal into its words, since the words of the language are unknown. Infants are faced with the same task when learning their first language. This study provides a better understanding of the task that infants face while learning their native language. We employed an automatic algorithm on the task of speech segmentation without prior knowledge of the labels of the phonemes. An analysis of the boundaries erroneously placed inside a phoneme showed that the algorithm consistently placed additional boundaries in phonemes in which acoustic changes occur. These acoustic changes may be as great as the transition from the closure to the burst of a plosive or as subtle as the formant transitions in low or back vowels. Moreover, we found that glottal vibration may attenuate the relevance of acoustic changes within obstruents. An interesting question for further research is how infants learn to overcome the natural tendency to segment these ‘dynamic’ phonemes.
  • Scharenborg, O., & Wan, V. (2007). Can unquantised articulatory feature continuums be modelled? In INTERSPEECH 2007 - 8th Annual Conference of the International Speech Communication Association (pp. 2473-2476). ISCA Archive.

    Abstract

    Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Although termed ‘articulatory’, previous definitions make certain assumptions that are invalid, for instance, that articulators ‘hop’ from one fixed position to the next. In this paper, we studied two methods, based on support vector classification (SVC) and regression (SVR), in which the articulation continuum is modelled without being restricted to using discrete AF value classes. A comparison with a baseline system trained on quantised values of the articulation continuum showed that both SVC and SVR outperform the baseline for two of the three investigated AFs, with improvements up to 5.6% absolute.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer, Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances, each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
  • Scharenborg, O., ten Bosch, L., & Boves, L. (2007). Early decision making in continuous speech. In M. Grimm, & K. Kroschel (Eds.), Robust speech recognition and understanding (pp. 333-350). I-Tech Education and Publishing.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer, Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences there is, however, lately a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
  • Scheu, O., & Zinn, C. (2007). How did the e-learning session go? The student inspector. In Proceedings of the 13th International Conference on Artificial Intelligence and Education (AIED 2007). Amsterdam: IOS Press.

    Abstract

    Good teachers know their students, and exploit this knowledge to adapt or optimise their instruction. Traditional teachers know their students because they interact with them face-to-face in classroom or one-to-one tutoring sessions. In these settings, they can build student models, i.e., by exploiting the multi-faceted nature of human-human communication. In distance-learning contexts, teacher and student have to cope with the lack of such direct interaction, and this must have detrimental effects for both teacher and student. In a past study we have analysed teacher requirements for tracking student actions in computer-mediated settings. Given the results of this study, we have devised and implemented a tool that allows teachers to keep track of their learners' interaction in e-learning systems. We present the tool's functionality and user interfaces, and an evaluation of its usability.
  • Schilberg, L., Ten Oever, S., Schuhmann, T., & Sack, A. T. (2021). Phase and power modulations on the amplitude of TMS-induced motor evoked potentials. PLoS One, 16(9): e0255815. doi:10.1371/journal.pone.0255815.

    Abstract

    The evaluation of transcranial magnetic stimulation (TMS)-induced motor evoked potentials (MEPs) promises valuable information about fundamental brain related mechanisms and may serve as a diagnostic tool for clinical monitoring of therapeutic progress or surgery procedures. However, reports about spontaneous fluctuations of MEP amplitudes causing high intra-individual variability have led to increased concerns about the reliability of this measure. One possible cause for high variability of MEPs could be neuronal oscillatory activity, which reflects fluctuations of membrane potentials that systematically increase and decrease the excitability of neuronal networks. Here, we investigate the dependence of MEP amplitude on oscillation power and phase by combining the application of single pulse TMS over the primary motor cortex with concurrent recordings of electromyography and electroencephalography. Our results show that MEP amplitude is correlated to alpha phase, alpha power as well as beta phase. These findings may help explain corticospinal excitability fluctuations by highlighting the modulatory effect of alpha and beta phase on MEPs. In the future, controlling for such a causal relationship may allow for the development of new protocols, improve this method as a (diagnostic) tool and increase the specificity and efficacy of general TMS applications.

    Additional information

    data and supporting information
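
    The analysis sketched below illustrates, in broad strokes, how pre-stimulus oscillatory phase and power can be related to single-trial MEP amplitudes as described in the abstract above; the sampling rate, filter settings, time point, and phase statistic are assumptions, not the authors' pipeline.

    ```python
    # Assumed analysis sketch: pre-stimulus alpha phase/power via Hilbert transform,
    # related to single-trial MEP amplitudes (all inputs are toy data).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000                                 # sampling rate in Hz (assumed)
    rng = np.random.default_rng(3)
    eeg = rng.normal(size=(200, fs))          # 200 trials x 1 s of pre-stimulus EEG (toy data)
    mep = rng.random(200)                     # single-trial MEP amplitudes (toy data)

    b, a = butter(4, [8, 13], btype="band", fs=fs)          # alpha band
    analytic = hilbert(filtfilt(b, a, eeg, axis=1), axis=1)
    phase = np.angle(analytic[:, -100])       # phase ~100 ms before the (hypothetical) pulse
    power = np.abs(analytic[:, -100]) ** 2

    r_power = np.corrcoef(power, mep)[0, 1]                 # linear correlation for power
    r_phase = np.hypot(np.corrcoef(np.sin(phase), mep)[0, 1],   # circular-linear association
                       np.corrcoef(np.cos(phase), mep)[0, 1])
    print(round(r_power, 3), round(r_phase, 3))
    ```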
  • Schoenmakers, G.-J., & Storment, J. D. (2021). Going city: Directional predicates and preposition incorporation in youth vernaculars of Dutch. Linguistics in the Netherlands, 38(1), 65-80. doi:10.1075/avt.00050.sch.

    Abstract

    In certain varieties of Dutch spoken among young people, the preposition and determiner in locative and directional PPs can sometimes be omitted. We argue on the basis of language data taken from Twitter and intuitions of young speakers of Dutch that nominal arguments in these constructions do not have a DP layer, the absence of which leads to a special interpretation. The option to omit the preposition is related to the structural and semantic complexity of the verb. The bare construction is possible only with simple verbs, and not with manner-of-motion verbs. We present an analysis that accounts for the non-pronunciation of prepositions in directional predicates by claiming that they can be licensed through incorporation into the verb. This type of incorporation is blocked if the verb is structurally complex.
  • Schubotz, L., Holler, J., Drijvers, L., & Ozyurek, A. (2021). Aging and working memory modulate the ability to benefit from visible speech and iconic gestures during speech-in-noise comprehension. Psychological Research, 85, 1997-2011. doi:10.1007/s00426-020-01363-8.

    Abstract

    When comprehending speech-in-noise (SiN), younger and older adults benefit from seeing the speaker’s mouth, i.e. visible speech. Younger adults additionally benefit from manual iconic co-speech gestures. Here, we investigate to what extent younger and older adults benefit from perceiving both visual articulators while comprehending SiN, and whether this is modulated by working memory and inhibitory control. Twenty-eight younger and 28 older adults performed a word recognition task in three visual contexts: mouth blurred (speech-only), visible speech, or visible speech + iconic gesture. The speech signal was either clear or embedded in multitalker babble. Additionally, there were two visual-only conditions (visible speech, visible speech + gesture). Accuracy levels for both age groups were higher when both visual articulators were present compared to either one or none. However, older adults received a significantly smaller benefit than younger adults, although they performed equally well in speech-only and visual-only word recognition. Individual differences in verbal working memory and inhibitory control partly accounted for age-related performance differences. To conclude, perceiving iconic gestures in addition to visible speech improves younger and older adults’ comprehension of SiN. Yet, the ability to benefit from this additional visual information is modulated by age and verbal working memory. Future research will have to show whether these findings extend beyond the single word level.

    Additional information

    supplementary material
  • Schubotz, L. (2021). Effects of aging and cognitive abilities on multimodal language production and comprehension in context. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Schulte im Walde, S., Melinger, A., Roth, M., & Weber, A. (2007). An empirical characterization of response types in German association norms. In Proceedings of the GLDV workshop on lexical-semantic and ontological resources.
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
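
    The toy sketch below illustrates the rank-based logic of the genome scan meta-analysis approach mentioned in the abstract: the genome is divided into bins, bins are ranked within each study by their strongest linkage statistic, and ranks are summed across studies; the bin width, permutation scheme, and inputs are assumptions for illustration only, not the workshop pipeline.

    ```python
    # Toy rank-sum genome scan meta-analysis sketch (assumed setup, toy data).
    import numpy as np

    rng = np.random.default_rng(1)
    n_studies, n_bins = 4, 120                        # e.g. ~30 cM bins across the autosomes
    lod = rng.random((n_studies, n_bins))             # max linkage statistic per bin (toy data)

    ranks = lod.argsort(axis=1).argsort(axis=1) + 1   # within-study rank (1 = weakest bin)
    summed = ranks.sum(axis=0)                        # summed rank per bin across studies

    # Permutation null: shuffle bin labels within each study, record the maximum summed rank
    null_max = np.empty(1000)
    for i in range(1000):
        perm = np.vstack([rng.permutation(r) for r in ranks])
        null_max[i] = perm.sum(axis=0).max()

    best_bin = summed.argmax()
    p_genomewide = (null_max >= summed[best_bin]).mean()
    print(best_bin, summed[best_bin], p_genomewide)
    ```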
  • Seijdel, N., Loke, J., Van de Klundert, R., Van der Meer, M., Quispel, E., Van Gaal, S., De Haan, E. H., & Scholte, H. S. (2021). On the necessity of recurrent processing during object recognition: It depends on the need for scene segmentation. Journal of Neuroscience, 41(29), 6281-6289. doi:10.1523/JNEUROSCI.2851-20.2021.

    Abstract

    Although feedforward activity may suffice for recognizing objects in isolation, additional visual operations that aid object recognition might be needed for real-world scenes. One such additional operation is figure-ground segmentation, extracting the relevant features and locations of the target object while ignoring irrelevant features. In this study of 60 human participants (female and male), we show objects on backgrounds of increasing complexity to investigate whether recurrent computations are increasingly important for segmenting objects from more complex backgrounds. Three lines of evidence show that recurrent processing is critical for recognition of objects embedded in complex scenes. First, behavioral results indicated a greater reduction in performance after masking objects presented on more complex backgrounds, with the degree of impairment increasing with background complexity. Second, electroencephalography (EEG) measurements showed clear differences in the evoked response potentials between conditions at time points beyond feedforward activity, and exploratory object decoding analyses based on the EEG signal indicated later decoding onsets for objects embedded in more complex backgrounds. Third, deep convolutional neural network performance confirmed this interpretation: feedforward and less deep networks showed a higher degree of impairment in recognition for objects in complex backgrounds compared with recurrent and deeper networks. Together, these results support the notion that recurrent computations drive figure-ground segmentation of objects in complex scenes. SIGNIFICANCE STATEMENT: The incredible speed of object recognition suggests that it relies purely on a fast feedforward buildup of perceptual activity. However, this view is contradicted by studies showing that disruption of recurrent processing leads to decreased object recognition performance. Here, we resolve this issue by showing that how object recognition is resolved and whether recurrent processing is crucial depend on the context in which an object is presented. For objects presented in isolation or in simple environments, feedforward activity could be sufficient for successful object recognition. However, when the environment is more complex, additional processing seems necessary to select the elements that belong to the object and thereby segregate them from the background.
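
    The sketch below gives a rough, assumed version of the time-resolved EEG decoding analysis described in the abstract, in which a later above-chance decoding onset in one condition than in another would point to additional (recurrent) processing; the array shapes, classifier, and onset criterion are placeholders.

    ```python
    # Rough sketch, not the authors' pipeline: time-resolved object-category decoding from EEG.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    epochs = rng.normal(size=(120, 64, 150))   # trials x channels x time points (toy data)
    labels = rng.integers(0, 2, size=120)      # object category per trial (toy labels)

    acc = np.array([
        cross_val_score(LogisticRegression(max_iter=1000),
                        epochs[:, :, t], labels, cv=5).mean()
        for t in range(epochs.shape[2])
    ])
    above = np.flatnonzero(acc > 0.55)         # arbitrary above-chance criterion
    print("decoding onset (sample index):", above[0] if above.size else None)
    ```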
  • Seijdel, N., Scholte, H. S., & de Haan, E. H. (2021). Visual features drive the category-specific impairments on categorization tasks in a patient with object agnosia. Neuropsychologia, 161: 108017. doi:10.1016/j.neuropsychologia.2021.108017.

    Abstract

    Object and scene recognition both require mapping of incoming sensory information to existing conceptual knowledge about the world. A notable finding in brain-damaged patients is that they may show differentially impaired performance for specific categories, such as for “living exemplars”. While numerous patients with category-specific impairments have been reported, the explanations for these deficits remain controversial. In the current study, we investigate the ability of a brain-injured patient with a well-established category-specific impairment of semantic memory to perform two categorization experiments: ‘natural’ vs. ‘manmade’ scenes (experiment 1) and objects (experiment 2). Our findings show that the pattern of categorical impairment does not respect the natural versus manmade distinction. This suggests that the impairments may be better explained by differences in visual features, rather than by category membership. Using Deep Convolutional Neural Networks (DCNNs) as ‘artificial animal models’, we further explored this idea. Results indicated that DCNNs with ‘lesions’ in higher order layers showed similar response patterns, with decreased relative performance for manmade scenes (experiment 1) and natural objects (experiment 2), even though they have no semantic category knowledge, apart from a mapping between pictures and labels. Collectively, these results suggest that the direction of category effects to a large extent depends, at least in MS's case, on the degree of perceptual differentiation called for, and not on semantic knowledge.

    Additional information

    data and code
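
    As a rough illustration of the 'lesioned DCNN' approach described in the abstract above, the sketch below zeroes a fraction of feature maps in a higher-order layer of a pretrained torchvision ResNet via a forward hook; the lesion fraction, layer choice, and toy input are assumptions, not the study's setup.

    ```python
    # Illustrative sketch only: 'lesion' a higher-order layer of a pretrained ResNet
    # by zeroing a random subset of its feature maps with a forward hook.
    import torch
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1").eval()
    lesion_frac = 0.3                                  # assumed lesion fraction

    def lesion_hook(module, inputs, output):
        keep = (torch.rand(output.shape[1]) > lesion_frac).float()
        return output * keep.view(1, -1, 1, 1)         # zero a random subset of feature maps

    handle = model.layer4.register_forward_hook(lesion_hook)   # a high-level block of the network

    with torch.no_grad():
        batch = torch.randn(8, 3, 224, 224)            # stand-in for preprocessed images
        predictions = model(batch).argmax(dim=1)
    print(predictions)
    handle.remove()
    ```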
  • Senft, G. (2007). Reference and 'référence dangereuse' to persons in Kilivila: An overview and a case study. In N. Enfield, & T. Stivers (Eds.), Person reference in interaction: Linguistic, cultural, and social perspectives (pp. 309-337). Cambridge: Cambridge University Press.

    Abstract

    Based on the conversation analysts’ insights into the various forms of third person reference in English, this paper first presents the inventory of forms that Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea, offers its speakers for making such references. To illustrate such references to third persons in talk-in-interaction in Kilivila, a case study on gossiping is presented in the second part of the paper. This case study shows not only that ambiguous anaphoric references to two first-mentioned third persons exceed and even violate the frame of a clearly defined situational-intentional variety of Kilivila that is constituted by the genre “gossip”, but also that these references are extremely dangerous for speakers in the Trobriand Islanders’ society. I illustrate how this culturally dangerous situation escalates and how other participants in the group of gossiping men try to “repair” this violation of the frame of a culturally defined and metalinguistically labelled “way of speaking”. The paper ends with some general remarks on how the understanding of forms of person reference in a language depends on the culture-specific context in which they are produced.
  • Senft, G. (2007). The Nijmegen space games: Studying the interrelationship between language, culture and cognition. In J. Wassmann, & K. Stockhaus (Eds.), Person, space and memory in the contemporary Pacific: Experiencing new worlds (pp. 224-244). New York: Berghahn Books.

    Abstract

    One of the central aims of the "Cognitive Anthropology Research Group" (since 1998 the "Department of Language and Cognition of the MPI for Psycholinguistics") is to research the relationship between language, culture and cognition and the conceptualization of space in various languages and cultures. Ever since its foundation in 1991, the group has been developing methods to elicit cross-culturally and cross-linguistically comparable data for this research project. After a brief summary of the central considerations that served as guidelines for the development of these elicitation devices, this paper first presents a broad selection of the "space games" developed and used for data elicitation in the group's various field sites so far. The paper then discusses the advantages and shortcomings of these data elicitation devices. Finally, it is argued that methodologists developing such devices find themselves in a position somewhere between Scylla and Charybdis, at least if they take seriously the requirement that the elicited data should be comparable not only cross-culturally but also cross-linguistically.
  • Senft, G. (2021). A very special letter. In T. Szczerbowski (Ed.), Language "as round as an orange"... In memory of Professor Krystyna Pisarkowa on the 90th anniversary of her birth (pp. 367). Krakow: Uniwersytetu Pedagogicznego.
  • Senft, G. (2007). "Ich weiß nicht, was soll es bedeuten.." - Ethnolinguistische Winke zur Rolle von umfassenden Metadaten bei der (und für die) Arbeit mit Corpora. In W. Kallmeyer, & G. Zifonun (Eds.), Sprachkorpora - Datenmengen und Erkenntnisfortschritt (pp. 152-168). Berlin: Walter de Gruyter.

    Abstract

    When native speakers of German work with corpora of spoken or written German, they rarely reflect on the wealth of culture-specific information that is codified in such texts, especially when the data are contemporary texts. In most cases, the background knowledge that the data presuppose and treat as common knowledge poses no problems at all. If, on the other hand, one looks at corpus data documenting other, above all non-Indo-European, languages, one quickly becomes aware of how much culture-specific knowledge is needed to understand these data adequately. In my talk, I illustrate this observation with an example from my corpus of Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea. Using a short excerpt from a roughly 26-minute recording of what six Trobriand Islanders gossip about with one another and how they do so, I show what a listener or reader of such a short stretch of data needs to know not only to be able to follow the conversation at all, but also to understand what is going on and why a conversation that at first glance seems completely everyday suddenly becomes enormously charged and significant for a Trobriander. Against the background of this example, I conclude by pointing out how absolutely necessary it is, in all corpora, to make such culture-specific information explicit when data materials are made accessible and annotated by means of so-called metadata.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
  • Senft, G. (2007). Language, culture and cognition: Frames of spatial reference and why we need ontologies of space [Abstract]. In A. G. Cohn, C. Freksa, & B. Nebel (Eds.), Spatial cognition: Specialization and integration (pp. 12).

    Abstract

    One of the many results of the "Space" research project conducted at the MPI for Psycholinguistics is that there are three "Frames of spatial Reference" (FoRs), the relative, the intrinsic and the absolute FoR. Cross-linguistic research showed that speakers who prefer one FoR in verbal spatial references rely on a comparable coding system for memorizing spatial configurations and for making inferences with respect to these spatial configurations in non-verbal problem solving. Moreover, research results also revealed that in some languages these verbal FoRs also influence gestural behavior. These results document the close interrelationship between language, culture and cognition in the domain "Space". The proper description of these interrelationships in the spatial domain requires language and culture specific ontologies.
  • Senft, G. (2007). Nominal classification. In D. Geeraerts, & H. Cuyckens (Eds.), The Oxford handbook of cognitive linguistics (pp. 676-696). Oxford: Oxford University Press.

    Abstract

    This handbook chapter summarizes some of the problems of nominal classification in language, presents and illustrates the various systems or techniques of nominal classification, and points out why nominal classification is one of the most interesting topics in Cognitive Linguistics.
  • Senft, G., Majid, A., & Levinson, S. C. (2007). The language of taste. In A. Majid (Ed.), Field Manual Volume 10 (pp. 42-45). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492913.
  • Senft, G. (2021). [Review of the book Approaches to Language and Culture ed. by Svenja Völkel and Nico Nassenstein]. Anthropological Linguistics, 63(3), 318-321.
  • Seuren, P. A. M. (2007). The theory that dare not speak its name: A rejoinder to Mufwene and Francis. Language Sciences, 29(4), 571-573. doi:10.1016/j.langsci.2007.02.001.
