Publications

Displaying 301 - 400 of 479
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Recently, several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs), we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun.

    Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse.

    Conclusions: When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Owoyele, B., Trujillo, J. P., De Melo, G., & Pouw, W. (2022). Masked-Piper: Masking personal identities in visual recordings while preserving multimodal information. SoftwareX, 20: 101236. doi:10.1016/j.softx.2022.101236.

    Abstract

    In this increasingly data-rich world, visual recordings of human behavior often cannot be shared due to concerns about privacy. Consequently, data sharing in fields such as behavioral science, multimodal communication, and human movement research is often limited. In addition, in legal and other non-scientific contexts, privacy-related concerns may preclude the sharing of video recordings and thus remove the rich multimodal context that humans recruit to communicate. Minimizing the risk of identity exposure while preserving critical behavioral information would maximize the utility of public resources (e.g., research grants) and time invested in audio–visual research. Here we present an open-source computer vision tool that masks the identities of humans while maintaining rich information about communicative body movements. Furthermore, this masking tool can be easily applied to many videos, leveraging computational tools to augment the reproducibility and accessibility of behavioral research. The tool is designed for researchers and practitioners engaged in kinematic and affective research. Application areas include teaching/education, communication and human movement research, CCTV, and legal contexts.

    Additional information

    setup and usage
  • Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105(2), 457-465. doi:10.1016/j.cognition.2006.10.006.

    Abstract

    Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
  • Ozker, M., Doyle, W., Devinsky, O., & Flinker, A. (2022). A cortical network processes auditory error signals during human speech production to maintain fluency. PLoS Biology, 20: e3001493. doi:10.1371/journal.pbio.3001493.

    Abstract

    Hearing one’s own voice is critical for fluent speech production as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update necessary motor commands to produce intended speech. We were able to localize the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their voice with a time delay as they produced words and sentences (similar to an echo on a conference call), a manipulation well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region that has not been implicated in auditory feedback processing before, exhibited a markedly similar response enhancement, suggesting a tight coupling between the 2 regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.

    Additional information

    data and code
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated with previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches were found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Park, B.-y., Larivière, S., Rodríguez-Cruces, R., Royer, J., Tavakol, S., Wang, Y., Caciagli, L., Caligiuri, M. E., Gambardella, A., Concha, L., Keller, S. S., Cendes, F., Alvim, M. K. M., Yasuda, C., Bonilha, L., Gleichgerrcht, E., Focke, N. K., Kreilkamp, B. A. K., Domin, M., Von Podewils, F., Langner, S., Rummel, C., Rebsamen, M., Wiest, R., Martin, P., Kotikalapudi, R., Bender, B., O’Brien, T. J., Law, M., Sinclair, B., Vivash, L., Desmond, P. M., Malpas, C. B., Lui, E., Alhusaini, S., Doherty, C. P., Cavalleri, G. L., Delanty, N., Kälviäinen, R., Jackson, G. D., Kowalczyk, M., Mascalchi, M., Semmelroch, M., Thomas, R. H., Soltanian-Zadeh, H., Davoodi-Bojd, E., Zhang, J., Lenge, M., Guerrini, R., Bartolini, E., Hamandi, K., Foley, S., Weber, B., Depondt, C., Absil, J., Carr, S. J. A., Abela, E., Richardson, M. P., Devinsky, O., Severino, M., Striano, P., Parodi, C., Tortora, D., Hatton, S. N., Vos, S. B., Duncan, J. S., Galovic, M., Whelan, C. D., Bargalló, N., Pariente, J., Conde, E., Vaudano, A. E., Tondelli, M., Meletti, S., Kong, X., Francks, C., Fisher, S. E., Caldairou, B., Ryten, M., Labate, A., Sisodiya, S. M., Thompson, P. M., McDonald, C. R., Bernasconi, A., Bernasconi, N., & Bernhardt, B. C. (2022). Topographic divergence of atypical cortical asymmetry and atrophy patterns in temporal lobe epilepsy. Brain, 145(4), 1285-1298. doi:10.1093/brain/awab417.

    Abstract

    Temporal lobe epilepsy (TLE), a common drug-resistant epilepsy in adults, is primarily a limbic network disorder associated with predominant unilateral hippocampal pathology. Structural MRI has provided an in vivo window into whole-brain grey matter structural alterations in TLE relative to controls, by either mapping (i) atypical inter-hemispheric asymmetry or (ii) regional atrophy. However, the similarities and differences between these two types of measures have not been systematically investigated.

    Here, we addressed this gap using the multi-site ENIGMA-Epilepsy dataset comprising MRI brain morphological measures in 732 TLE patients and 1,418 healthy controls. We compared spatial distributions of grey matter asymmetry and atrophy in TLE, contextualized their topographies relative to spatial gradients in cortical microstructure and functional connectivity calculated using 207 healthy controls from the Human Connectome Project and an independent dataset containing 23 TLE patients and 53 healthy controls, and examined clinical associations using machine learning.

    We identified a marked divergence in the spatial distribution of atypical inter-hemispheric asymmetry and regional atrophy mapping. The former revealed a temporo-limbic disease signature while the latter showed diffuse and bilateral patterns. Our findings were robust across individual sites and patients. Cortical atrophy was significantly correlated with disease duration and age at seizure onset, while degrees of asymmetry did not show a significant relationship to these clinical variables.

    Our findings highlight that the mappings of atypical inter-hemispheric asymmetry and of regional atrophy tap into two complementary aspects of TLE-related pathology, with the former revealing primary substrates in ipsilateral limbic circuits and the latter capturing bilateral disease effects. These findings refine our notion of the neuropathology of TLE and may inform future discovery and validation of complementary MRI biomarkers in TLE.

    Additional information

    awab417_supplementary_data.pdf
  • Pearson, L., & Pouw, W. (2022). Gesture–vocal coupling in Karnatak music performance: A neuro–bodily distributed aesthetic entanglement. Annals of the New York Academy of Sciences, 1515(1), 219-236. doi:10.1111/nyas.14806.

    Abstract

    In many musical styles, vocalists manually gesture while they sing. Coupling between gesture kinematics and vocalization has been examined in speech contexts, but it is an open question how these couple in music making. We examine this in a corpus of South Indian, Karnatak vocal music that includes motion-capture data. Through peak magnitude analysis (linear mixed regression) and continuous time-series analyses (generalized additive modeling), we assessed whether vocal trajectories around peaks in vertical velocity, speed, or acceleration coupled with changes in vocal acoustics (namely, F0 and amplitude). Kinematic coupling was stronger for F0 change than for amplitude, pointing to F0's musical significance. Acceleration was the most predictive for F0 change and had the most reliable magnitude coupling, showing a one-third power relation. That acceleration, rather than other kinematics, is maximally predictive for vocalization is interesting because acceleration entails force transfers onto the body. As a theoretical contribution, we argue that gesturing in musical contexts should be understood in relation to the physical connections between gesturing and vocal production that are brought into harmony with the vocalists’ (enculturated) performance goals. Gesture–vocal coupling should, therefore, be viewed as a neuro–bodily distributed aesthetic entanglement.

    Additional information

    tables
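The one-third power relation reported in the Pearson & Pouw abstract can be illustrated with a short sketch: a power law y = c·x^k is a straight line in log-log space, so the exponent k can be estimated as the slope of an ordinary least-squares fit on log-transformed data. The sketch below uses synthetic data generated to follow an exact one-third power law; the variable names and values are illustrative assumptions, not the paper's data or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical peak accelerations (arbitrary units) and F0 changes,
# generated to follow an exact one-third power law for illustration.
accel = rng.uniform(1.0, 100.0, size=200)
f0_change = accel ** (1.0 / 3.0)

# A power law y = c * x^k is linear in log-log space:
# log y = log c + k * log x, so the OLS slope estimates the exponent k.
slope, intercept = np.polyfit(np.log(accel), np.log(f0_change), 1)
print(round(slope, 3))  # slope ≈ 0.333: the exponent is recovered
```

On real, noisy kinematic and acoustic measurements the fitted slope would scatter around the true exponent rather than match it exactly.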
  • Pereira Soares, S. M., Kupisch, T., & Rothman, J. (2022). Testing potential transfer effects in heritage and adult L2 bilinguals acquiring a mini grammar as an additional language: An ERP approach. Brain Sciences, 12: 669. doi:10.3390/brainsci12050669.

    Abstract

    Models on L3/Ln acquisition differ with respect to how they envisage degree (holistic vs. selective transfer of the L1, L2 or both) and/or timing (initial stages vs. development) of how the influence of source languages unfolds. This study uses EEG/ERPs to examine these models, bringing together two types of bilinguals: heritage speakers (HSs) (Italian-German, n = 15) compared to adult L2 learners (L1 German, L2 English, n = 28) learning L3/Ln Latin. Participants were trained on a selected Latin lexicon over two sessions and, afterward, on two grammatical properties: case (similar between German and Latin) and adjective–noun order (similar between Italian and Latin). Neurophysiological findings show an N200/N400 deflection for the HSs in case morphology and a P600 effect for the German L2 group in adjectival position. None of the current L3/Ln models predict the observed results, which questions the appropriateness of this methodology. Nevertheless, the results are illustrative of differences in how HSs and L2 learners approach the very initial stages of additional language learning, the implications of which are discussed.
  • Pereira Soares, S. M., Prystauka, Y., DeLuca, V., & Rothman, J. (2022). Type of bilingualism conditions individual differences in the oscillatory dynamics of inhibitory control. Frontiers in Human Neuroscience, 16: 910910. doi:10.3389/fnhum.2022.910910.

    Abstract

    The present study uses EEG time-frequency representations (TFRs) with a Flanker task to investigate if and how individual differences in bilingual language experience modulate neurocognitive outcomes (oscillatory dynamics) in two bilingual group types: late bilinguals (L2 learners) and early bilinguals (heritage speakers—HSs). TFRs were computed for both incongruent and congruent trials. The difference between the two (Flanker effect vis-à-vis cognitive interference) was then (1) compared between the HSs and the L2 learners, (2) modeled as a function of individual differences with bilingual experience within each group separately and (3) probed for its potential (a)symmetry between brain and behavioral data. We found no differences at the behavioral and neural levels for the between-groups comparisons. However, oscillatory dynamics (mainly theta increase and alpha suppression) of inhibition and cognitive control were found to be modulated by individual differences in bilingual language experience, albeit distinctly within each bilingual group. While the results indicate adaptations toward differential brain recruitment in line with bilingual language experience variation overall, this does not manifest uniformly. Rather, earlier versus later onset to bilingualism—the bilingual type—seems to constitute an independent qualifier to how individual differences play out.

    Additional information

    supplementary material
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), IV, 937-940. doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
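Evaluation "on the level of phoneme boundary detection", as in the abstract above, is conventionally scored by counting a detected boundary as a hit when it falls within a small tolerance of an unmatched reference boundary. The sketch below is a minimal illustration of that scoring scheme; the 20 ms tolerance and the example boundary times are illustrative assumptions, not values taken from the paper.

```python
def boundary_scores(reference, detected, tolerance=0.02):
    """Precision/recall/F1 for boundary detection: a detected boundary
    counts as a hit if it lies within `tolerance` seconds of a
    still-unmatched reference boundary (greedy one-to-one matching)."""
    unmatched = sorted(reference)
    hits = 0
    for b in sorted(detected):
        match = next((r for r in unmatched if abs(r - b) <= tolerance), None)
        if match is not None:
            unmatched.remove(match)  # each reference boundary matches once
            hits += 1
    precision = hits / len(detected) if detected else 0.0
    recall = hits / len(reference) if reference else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical boundary times in seconds:
ref = [0.10, 0.25, 0.40, 0.61]
det = [0.11, 0.26, 0.55, 0.60]
print(boundary_scores(ref, det))  # 3 of 4 match within 20 ms: P = R = F1 = 0.75
```

Published segmentation work varies the tolerance (often 10-20 ms), so reported scores are only comparable at a matched tolerance.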
  • Perfors, A., & Kidd, E. (2022). The role of stimulus‐specific perceptual fluency in statistical learning. Cognitive Science, 46(2): e13100. doi:10.1111/cogs.13100.

    Abstract

    Humans have the ability to learn surprisingly complicated statistical information in a variety of modalities and situations, often based on relatively little input. These statistical learning (SL) skills appear to underlie many kinds of learning, but despite their ubiquity, we still do not fully understand precisely what SL is and what individual differences on SL tasks reflect. Here, we present experimental work suggesting that at least some individual differences arise from stimulus-specific variation in perceptual fluency: the ability to rapidly or efficiently code and remember the stimuli that SL occurs over. Experiment 1 demonstrates that participants show improved SL when the stimuli are simple and familiar; Experiment 2 shows that this improvement is not evident for simple but unfamiliar stimuli; and Experiment 3 shows that for the same stimuli (Chinese characters), SL is higher for people who are familiar with them (Chinese speakers) than those who are not (English speakers matched on age and education level). Overall, our findings indicate that performance on a standard SL task varies substantially within the same (visual) modality as a function of whether the stimuli involved are familiar or not, independent of stimulus complexity. Moreover, test–retest correlations of performance in an SL task using stimuli of the same level of familiarity (but distinct items) are stronger than correlations across the same task with stimuli of different levels of familiarity. Finally, we demonstrate that SL performance is predicted by an independent measure of stimulus-specific perceptual fluency that contains no SL component at all. Our results suggest that a key component of SL performance may be related to stimulus-specific processing and familiarity.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group is consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory affects large-scale brain connectivity more than grey matter per se.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Pijls, F., & Kempen, G. (1986). Een psycholinguïstisch model voor grammatische samentrekking [A psycholinguistic model of grammatical ellipsis]. De Nieuwe Taalgids, 79, 217-234.
  • Pijls, F., Daelemans, W., & Kempen, G. (1987). Artificial intelligence tools for grammar and spelling instruction. Instructional Science, 16(4), 319-336. doi:10.1007/BF00117750.

    Abstract

    In The Netherlands, grammar teaching is an especially important subject in the curriculum of children aged 10-15 for several reasons. However, in spite of all attention and time invested, the results are poor. This article describes the problems and our attempt to overcome them by developing an intelligent computational instructional environment consisting of: a linguistic expert system, containing a module representing grammar and spelling rules and a number of modules to manipulate these rules; a didactic module; and a student interface with special facilities for grammar and spelling. Three prototypes of the functionality are discussed: BOUWSTEEN and COGO, which are programs for constructing and analyzing Dutch sentences; and TDTDT, a program for the conjugation of Dutch verbs.
  • Pijls, F., & Kempen, G. (1987). Kennistechnologische leermiddelen in het grammatica- en spellingonderwijs [Knowledge-based teaching tools in grammar and spelling instruction]. Nederlands Tijdschrift voor de Psychologie, 42, 354-363.
  • Poort, E. D., & Rodd, J. M. (2022). Cross-lingual priming of cognates and interlingual homographs from L2 to L1. Glossa Psycholinguistics, 1(1): 11. doi:10.5070/G601147.

    Abstract

    Many word forms exist in multiple languages and can have either the same meaning (cognates) or a different meaning (interlingual homographs). Previous experiments have shown that processing of interlingual homographs in a bilingual’s second language is slowed down by recent experience with these words in the bilingual’s native language, while processing of cognates can be speeded up (Poort et al., 2016; Poort & Rodd, 2019a). The current experiment replicated Poort and Rodd’s (2019a) Experiment 2 but switched the direction of priming: Dutch–English bilinguals (n = 106) made Dutch semantic relatedness judgements to probes related to cognates (n = 50), interlingual homographs (n = 50) and translation equivalents (n = 50) they had seen 15 minutes previously embedded in English sentences. The current experiment is the first to show that a single encounter with an interlingual homograph in one’s second language can also affect subsequent processing in one’s native language. Cross-lingual priming did not affect the cognates. The experiment also extended Poort and Rodd’s (2019a) finding of a large interlingual homograph inhibition effect in a semantic relatedness task from the participants’ L2 to their L1, but again found no evidence for a cognate facilitation effect in a semantic relatedness task. These findings extend the growing literature that emphasises the high level of interaction in a bilingual’s mental lexicon, by demonstrating the influence of L2 experience on the processing of L1 words. Data, scripts, materials and pre-registration available via https://osf.io/2swyg/?view_only=b2ba2e627f6f4eaeac87edab2b59b236.
  • Postema, A., Van Mierlo, H., Bakker, A. B., & Barendse, M. T. (2022). Study-to-sports spillover among competitive athletes: A field study. International Journal of Sport and Exercise Psychology. Advance online publication. doi:10.1080/1612197X.2022.2058054.

    Abstract

    Combining academics and athletics is challenging but important for the psychological and psychosocial development of those involved. However, little is known about how experiences in academics spill over and relate to athletics. Drawing on the enrichment mechanisms proposed by the Work-Home Resources model, we posit that study crafting behaviours are positively related to volatile personal resources, which, in turn, are related to higher athletic achievement. Via structural equation modelling, we examine a path model among 243 student-athletes, incorporating study crafting behaviours and personal resources (i.e., positive affect and study engagement), and self- and coach-rated athletic achievement measured two weeks later. Results show that optimising the academic environment by crafting challenging study demands relates positively to positive affect and study engagement. In turn, positive affect related positively to self-rated athletic achievement, whereas – unexpectedly – study engagement related negatively to coach-rated athletic achievement. Optimising the academic environment through cognitive crafting and crafting social study resources did not relate to athletic outcomes. We discuss how these findings offer new insights into the interplay between academics and athletics.
  • Poulton, V. R., & Nieuwland, M. S. (2022). Can you hear what’s coming? Failure to replicate ERP evidence for phonological prediction. Neurobiology of Language, 3(4), 556-574. doi:10.1162/nol_a_00078.

    Abstract

    Prediction-based theories of language comprehension assume that listeners predict both the meaning and phonological form of likely upcoming words. In alleged event-related potential (ERP) demonstrations of phonological prediction, prediction-mismatching words elicit a phonological mismatch negativity (PMN), a frontocentral negativity that precedes the centroparietal N400 component. However, classification and replicability of the PMN has proven controversial, with ongoing debate on whether the PMN is a distinct component or merely an early part of the N400. In this electroencephalography (EEG) study, we therefore attempted to replicate the PMN effect and its separability from the N400, using a participant sample size (N = 48) that was more than double that of previous studies. Participants listened to sentences containing either a predictable word or an unpredictable word with/without phonological overlap with the predictable word. Preregistered analyses revealed a widely distributed negative-going ERP in response to unpredictable words in both the early (150–250 ms) and the N400 (300–500 ms) time windows. Bayes factor analysis yielded moderate evidence against a different scalp distribution of the effects in the two time windows. Although our findings do not speak against phonological prediction during sentence comprehension, they do speak against the PMN effect specifically as a marker of phonological prediction mismatch. Instead of a PMN effect, our results demonstrate the early onset of the auditory N400 effect associated with unpredictable words. Our failure to replicate further highlights the risk associated with commonly employed data-contingent analyses (e.g., analyses involving time windows or electrodes that were selected based on visual inspection) and small sample sizes in the cognitive neuroscience of language.
  • Pouw, W., & Holler, J. (2022). Timing in conversation is dynamically adjusted turn by turn in dyadic telephone conversations. Cognition, 222: 105015. doi:10.1016/j.cognition.2022.105015.

    Abstract

    Conversational turn taking in humans involves incredibly rapid responding. The timing mechanisms underpinning such responses have been heavily debated, including questions such as who is doing the timing. Similar to findings on rhythmic tapping to a metronome, we show that floor transfer offsets (FTOs) in telephone conversations are serially dependent, such that FTOs are lag-1 negatively autocorrelated. Finding this serial dependence on a turn-by-turn basis (lag-1), rather than on the basis of two or more turns, suggests a counter-adjustment mechanism operating at the level of the dyad in FTOs during telephone conversations, rather than a more individualistic self-adjustment within speakers. This finding, if replicated, has major implications for models describing turn taking, and confirms the joint, dyadic nature of human conversational dynamics. Future research is needed to see how pervasive serial dependencies in FTOs are, for example in richer communicative face-to-face contexts where visual signals affect conversational timing.
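The lag-1 negative autocorrelation of floor transfer offsets described in the abstract above is simply the correlation between each FTO and the next one in the series. A minimal sketch, with an invented FTO series in which a long gap tends to be followed by a shorter one (as under turn-by-turn counter-adjustment):

```python
import numpy as np

def lag1_autocorrelation(x):
    """Lag-1 autocorrelation: Pearson correlation between the series
    and a copy of itself shifted by one observation."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# Hypothetical floor transfer offsets in seconds (illustrative values,
# not data from the paper): long and short gaps alternate.
ftos = [0.45, 0.05, 0.60, 0.10, 0.50, 0.00, 0.55, 0.15]
print(lag1_autocorrelation(ftos) < 0)  # True: alternation gives a negative lag-1
```

A positive lag-1 value would instead indicate drift, where long gaps cluster together, rather than the counter-adjustment pattern the paper reports.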
  • Pouw, W., & Dixon, J. A. (2022). What you hear and see specifies the perception of a limb-respiratory-vocal act. Proceedings of the Royal Society B: Biological Sciences, 289(1979): 20221026. doi:10.1098/rspb.2022.1026.
  • Pouw, W., Harrison, S. J., & Dixon, J. A. (2022). The importance of visual control and biomechanics in the regulation of gesture-speech synchrony for an individual deprived of proprioceptive feedback of body position. Scientific Reports, 12: 14775. doi:10.1038/s41598-022-18300-x.

    Abstract

    Do communicative actions such as gestures fundamentally differ in their control mechanisms from other actions? Evidence for such fundamental differences comes from a classic gesture-speech coordination experiment performed with a person (IW) with deafferentation (McNeill, 2005). Although IW has lost both his primary source of information about body position (i.e., proprioception) and discriminative touch from the neck down, his gesture-speech coordination has been reported to be largely unaffected, even if his vision is blocked. This is surprising because, without vision, his object-directed actions almost completely break down. We examine the hypothesis that IW’s gesture-speech coordination is supported by the biomechanical effects of gesturing on head posture and speech. We find that when vision is blocked, there are micro-scale increases in gesture-speech timing variability, consistent with IW’s reported experience that gesturing is difficult without vision. Supporting the hypothesis that IW exploits biomechanical consequences of the act of gesturing, we find that: (1) gestures with larger physical impulses co-occur with greater head movement, (2) gesture-speech synchrony relates to larger gesture-concurrent head movements (i.e. for bimanual gestures), (3) when vision is blocked, gestures generate more physical impulse, and (4) moments of acoustic prominence couple more with peaks of physical impulse when vision is blocked. It can be concluded that IW’s gesturing ability is not based on a specialized language-based feedforward control as originally concluded from previous research, but is still dependent on a varied means of recurrent feedback from the body.

    Additional information

    supplementary tables
  • Pouw, W., & Fuchs, S. (2022). Origins of vocal-entangled gesture. Neuroscience and Biobehavioral Reviews, 141: 104836. doi:10.1016/j.neubiorev.2022.104836.

    Abstract

    Gestures during speaking are typically understood in a representational framework: they represent absent or distal states of affairs by means of pointing, resemblance, or symbolic replacement. However, humans also gesture along with the rhythm of speaking, which is amenable to a non-representational perspective. Such a perspective centers on the phenomenon of vocal-entangled gestures and builds on evidence showing that when an upper limb with a certain mass decelerates/accelerates sufficiently, it yields impulses on the body that cascade in various ways into the respiratory–vocal system. It entails a physical entanglement between body motions, respiration, and vocal activities. It is shown that vocal-entangled gestures are realized in infant vocal–motor babbling before any representational use of gesture develops. Similarly, an overview is given of vocal-entangled processes in non-human animals. They can frequently be found in rats, bats, birds, and a range of other species that developed even earlier in the phylogenetic tree. Thus, the origins of human gesture lie in biomechanics, emerging early in ontogeny and running deep in phylogeny.
  • Preisig, B., & Hervais-Adelman, A. (2022). The predictive value of individual electric field modeling for transcranial alternating current stimulation induced brain modulation. Frontiers in Cellular Neuroscience, 16: 818703. doi:10.3389/fncel.2022.818703.

    Abstract

    There is considerable individual variability in the reported effectiveness of non-invasive brain stimulation. This variability has often been ascribed to differences in the neuroanatomy and resulting differences in the induced electric field inside the brain. In this study, we addressed the question whether individual differences in the induced electric field can predict the neurophysiological and behavioral consequences of gamma band tACS. In a within-subject experiment, bi-hemispheric gamma band tACS and sham stimulation was applied in alternating blocks to the participants’ superior temporal lobe, while task-evoked auditory brain activity was measured with concurrent functional magnetic resonance imaging (fMRI) and a dichotic listening task. Gamma tACS was applied with different interhemispheric phase lags. In a recent study, we could show that anti-phase tACS (180° interhemispheric phase lag), but not in-phase tACS (0° interhemispheric phase lag), selectively modulates interhemispheric brain connectivity. Using a T1 structural image of each participant’s brain, an individual simulation of the induced electric field was computed. From these simulations, we derived two predictor variables: maximal strength (average of the 10,000 voxels with largest electric field values) and precision of the electric field (spatial correlation between the electric field and the task evoked brain activity during sham stimulation). We found considerable variability in the individual strength and precision of the electric fields. Importantly, the strength of the electric field over the right hemisphere predicted individual differences of tACS induced brain connectivity changes. Moreover, we found in both hemispheres a statistical trend for the effect of electric field strength on tACS induced BOLD signal changes. In contrast, the precision of the electric field did not predict any neurophysiological measure. Further, neither strength, nor precision predicted interhemispheric integration. In conclusion, we found evidence for the dose-response relationship between individual differences in electric fields and tACS induced activity and connectivity changes in concurrent fMRI. However, the fact that this relationship was stronger in the right hemisphere suggests that the relationship between the electric field parameters, neurophysiology, and behavior may be more complex for bi-hemispheric tACS.
  • Preisig, B., Riecke, L., & Hervais-Adelman, A. (2022). Speech sound categorization: The contribution of non-auditory and auditory cortical regions. NeuroImage, 258: 119375. doi:10.1016/j.neuroimage.2022.119375.

    Abstract

    Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a disambiguating acoustic feature (third formant, F3) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in left perisylvian regions (STG, SMG), left inferior frontal regions (vMC, IFG, AI), left supplementary motor cortex (SMA/pre-SMA), and right motor and somatosensory regions (M1/S1) represent listeners’ syllable report irrespective of stimulus acoustics. Most of these regions are outside of what is traditionally regarded as auditory or phonological processing areas. Our results indicate that the process of speech sound categorization implicates decision-making mechanisms and auditory-motor transformations.

    Additional information

    figures and table
  • Price, K. M., Wigg, K. G., Eising, E., Feng, Y., Blokland, K., Wilkinson, M., Kerr, E. N., Guger, S. L., Quantitative Trait Working Group of the GenLang Consortium, Fisher, S. E., Lovett, M. W., Strug, L. J., & Barr, C. L. (2022). Hypothesis-driven genome-wide association studies provide novel insights into genetics of reading disabilities. Translational Psychiatry, 12: 495. doi:10.1038/s41398-022-02250-z.

    Abstract

    Reading Disability (RD) is often characterized by difficulties in the phonology of the language. While the molecular mechanisms underlying it are largely undetermined, loci are being revealed by genome-wide association studies (GWAS). In a previous GWAS for word reading (Price, 2020), we observed that top single-nucleotide polymorphisms (SNPs) were located near to or in genes involved in neuronal migration/axon guidance (NM/AG) or loci implicated in autism spectrum disorder (ASD). A prominent theory of RD etiology posits that it involves disturbed neuronal migration, while potential links between RD-ASD have not been extensively investigated. To improve power to identify associated loci, we up-weighted variants involved in NM/AG or ASD, separately, and performed a new Hypothesis-Driven (HD)–GWAS. The approach was applied to a Toronto RD sample and a meta-analysis of the GenLang Consortium. For the Toronto sample (n = 624), no SNPs reached significance; however, by gene-set analysis, the joint contribution of ASD-related genes passed the threshold (p ~ 1.45 × 10⁻², threshold = 2.5 × 10⁻²). For the GenLang Cohort (n = 26,558), SNPs in DOCK7 and CDH4 showed significant association for the NM/AG hypothesis (sFDR q = 1.02 × 10⁻²). To make the GenLang dataset more similar to Toronto, we repeated the analysis restricting to samples selected for reading/language deficits (n = 4152). In this GenLang selected subset, we found significant association for a locus intergenic between BTG3-C21orf91 for both hypotheses (sFDR q < 9.00 × 10⁻⁴). This study contributes candidate loci to the genetics of word reading. Data also suggest that, although different variants may be involved, alleles implicated in ASD risk may be found in the same genes as those implicated in word reading. This finding is limited to the Toronto sample suggesting that ascertainment influences genetic associations.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L. M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operation(s) that support the binding and storing of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of the internal representation of the first event in posterior midline structures, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with the basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Rasenberg, M., Pouw, W., Özyürek, A., & Dingemanse, M. (2022). The multimodal nature of communicative efficiency in social interaction. Scientific Reports, 12: 19111. doi:10.1038/s41598-022-22883-w.

    Abstract

    How does communicative efficiency shape language use? We approach this question by studying it at the level of the dyad, and in terms of multimodal utterances. We investigate whether and how people minimize their joint speech and gesture efforts in face-to-face interactions, using linguistic and kinematic analyses. We zoom in on other-initiated repair—a conversational microcosm where people coordinate their utterances to solve problems with perceiving or understanding. We find that efforts in the spoken and gestural modalities are wielded in parallel across repair turns of different types, and that people repair conversational problems in the most cost-efficient way possible, minimizing the joint multimodal effort for the dyad as a whole. These results are in line with the principle of least collaborative effort in speech and with the reduction of joint costs in non-linguistic joint actions. The results extend our understanding of those coefficiency principles by revealing that they pertain to multimodal utterance design.

    Additional information

    Data and analysis scripts
  • Rasenberg, M., Özyürek, A., Bögels, S., & Dingemanse, M. (2022). The primacy of multimodal alignment in converging on shared symbols for novel referents. Discourse Processes, 59(3), 209-236. doi:10.1080/0163853X.2021.1992235.

    Abstract

    When people establish shared symbols for novel objects or concepts, they have been shown to rely on the use of multiple communicative modalities as well as on alignment (i.e., cross-participant repetition of communicative behavior). Yet these interactional resources have rarely been studied together, so little is known about if and how people combine multiple modalities in alignment to achieve joint reference. To investigate this, we systematically track the emergence of lexical and gestural alignment in a referential communication task with novel objects. Quantitative analyses reveal that people frequently use a combination of lexical and gestural alignment, and that such multimodal alignment tends to emerge earlier compared to unimodal alignment. Qualitative analyses of the interactional contexts in which alignment emerges reveal how people flexibly deploy lexical and gestural alignment (independently, simultaneously or successively) to adjust to communicative pressures.
  • Ravignani, A., & Garcia, M. (2022). A cross-species framework to identify vocal learning abilities in mammals. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 377: 20200394. doi:10.1098/rstb.2020.0394.

    Abstract

    Vocal production learning (VPL) is the experience-driven ability to produce novel vocal signals through imitation or modification of existing vocalizations. A parallel strand of research investigates acoustic allometry, namely how information about body size is conveyed by acoustic signals. Recently, we proposed that deviation from acoustic allometry principles as a result of sexual selection may have been an intermediate step towards the evolution of vocal learning abilities in mammals. Adopting a more hypothesis-neutral stance, here we perform phylogenetic regressions and other analyses further testing a potential link between VPL and being an allometric outlier. We find that multiple species belonging to VPL clades deviate from allometric scaling but in the opposite direction to that expected from size exaggeration mechanisms. In other words, our correlational approach finds an association between VPL and being an allometric outlier. However, the direction of this association, contra our original hypothesis, may indicate that VPL did not necessarily emerge via sexual selection for size exaggeration: VPL clades show higher vocalization frequencies than expected. In addition, our approach allows us to identify species with potential for VPL abilities: we hypothesize that those outliers from acoustic allometry lying above the regression line may be VPL species. Our results may help better understand the cross-species diversity, variability and aetiology of VPL, which among other things is a key underpinning of speech in our species.

    This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.

    Additional information

    Raw data Supplementary material
  • Ravignani, A. (2022). Language evolution: Sound meets gesture? [Review of the book From signal to symbol: The evolution of language by R. Planer and K. Sterelny]. Evolutionary Anthropology, 31, 317-318. doi:10.1002/evan.21961.
  • Raviv, L., Lupyan, G., & Green, S. C. (2022). How variability shapes learning and generalization. Trends in Cognitive Sciences, 26(6), 462-483. doi:10.1016/j.tics.2022.03.007.

    Abstract

    Learning is using past experiences to inform new behaviors and actions. Because all experiences are unique, learning always requires some generalization. An effective way of improving generalization is to expose learners to more variable (and thus often more representative) input. More variability tends to make initial learning more challenging, but eventually leads to more general and robust performance. This core principle has been repeatedly rediscovered and renamed in different domains (e.g., contextual diversity, desirable difficulties, variability of practice). Reviewing this basic result as it has been formulated in different domains allows us to identify key patterns, distinguish between different kinds of variability, discuss the roles of varying task-relevant versus irrelevant dimensions, and examine the effects of introducing variability at different points in training.
  • Raviv, L., Peckre, L. R., & Boeckx, C. (2022). What is simple is actually quite complex: A critical note on terminology in the domain of language and communication. Journal of Comparative Psychology, 136(4), 215-220. doi:10.1037/com0000328.

    Abstract

    On the surface, the fields of animal communication and human linguistics have arrived at conflicting theories and conclusions with respect to the effect of social complexity on communicative complexity. For example, an increase in group size is argued to have opposite consequences on human versus animal communication systems: although an increase in human community size leads to some types of language simplification, an increase in animal group size leads to an increase in signal complexity. But do human and animal communication systems really show such a fundamental discrepancy? Our key message is that the tension between these two adjacent fields is the result of (a) a focus on different levels of analysis (namely, signal variation or grammar-like rules) and (b) an inconsistent use of terminology (namely, the terms “simple” and “complex”). By disentangling and clarifying these terms with respect to different measures of communicative complexity, we show that although animal and human communication systems indeed show some contradictory effects with respect to signal variability, they actually display essentially the same patterns with respect to grammar-like structure. This is despite the fact that the definitions of complexity and simplicity are actually aligned for signal variability, but diverge for grammatical structure. We conclude by advocating for the use of more objective and descriptive terms instead of terms such as “complexity,” which can be applied uniformly for human and animal communication systems—leading to comparable descriptions of findings across species and promoting a more productive dialogue between fields.
  • Redl, T., Szuba, A., de Swart, P., Frank, S. L., & de Hoop, H. (2022). Masculine generic pronouns as a gender cue in generic statements. Discourse Processes, 59, 828-845. doi:10.1080/0163853X.2022.2148071.

    Abstract

    An eye-tracking experiment was conducted with speakers of Dutch (N = 84, 36 male), a language that falls between grammatical and natural-gender languages. We tested whether a masculine generic pronoun causes a male bias when used in generic statements—that is, in the absence of a specific referent. We tested two types of generic statements by varying conceptual number, hypothesizing that the pronoun zijn “his” was more likely to cause a male bias with a conceptually singular than a conceptually plural antecedent (e.g., Someone (conceptually singular)/Everyone (conceptually plural) with perfect pitch can tune his instrument quickly). We found male participants to exhibit a male bias but with the conceptually singular antecedent only. Female participants showed no signs of a male bias. The results show that the generically intended masculine pronoun zijn “his” leads to a male bias in conceptually singular generic contexts but that this further depends on participant gender.

    Additional information

    Data availability
  • Reinisch, E., & Bosker, H. R. (2022). Encoding speech rate in challenging listening conditions: White noise and reverberation. Attention, Perception & Psychophysics, 84, 2303-2318. doi:10.3758/s13414-022-02554-8.

    Abstract

    Temporal contrasts in speech are perceived relative to the speech rate of the surrounding context. That is, following a fast context sentence, listeners interpret a given target sound as longer than following a slow context, and vice versa. This rate effect, often referred to as “rate-dependent speech perception,” has been suggested to be the result of a robust, low-level perceptual process, typically examined in quiet laboratory settings. However, speech perception often occurs in more challenging listening conditions. Therefore, we asked whether rate-dependent perception would be (partially) compromised by signal degradation relative to a clear listening condition. Specifically, we tested effects of white noise and reverberation, with the latter specifically distorting temporal information. We hypothesized that signal degradation would reduce the precision of encoding the speech rate in the context and thereby reduce the rate effect relative to a clear context. This prediction was borne out for both types of degradation in Experiment 1, where the context sentences but not the subsequent target words were degraded. However, in Experiment 2, which compared rate effects when contexts and targets were coherent in terms of signal quality, no reduction of the rate effect was found. This suggests that, when confronted with coherently degraded signals, listeners adapt to challenging listening situations, eliminating the difference between rate-dependent perception in clear and degraded conditions. Overall, the present study contributes towards understanding the consequences of different types of listening environments on the functioning of low-level perceptual processes that listeners use during speech perception.

    Additional information

    Data availability
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word- or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent) we could examine to what extent subjects focused on form rather than meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition compared to the neutral and incongruent word conditions. These results suggest that performance on phonological word length comparisons is dependent on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • de Reus, K., Carlson, D., Lowry, A., Gross, S., Garcia, M., Rubio-Garcia, A., Salazar-Casals, A., & Ravignani, A. (2022). Vocal tract allometry in a mammalian vocal learner. Journal of Experimental Biology, 225(8): jeb243766. doi:10.1242/jeb.243766.

    Abstract

    Acoustic allometry occurs when features of animal vocalisations can be predicted from body size measurements. Despite this being considered the norm, allometry sometimes breaks, resulting in species sounding smaller or larger than expected. A recent hypothesis suggests that allometry-breaking animals cluster into two groups: those with anatomical adaptations to their vocal tracts and those capable of learning new sounds (vocal learners). Here we test this hypothesis by probing vocal tract allometry in a proven mammalian vocal learner, the harbour seal (Phoca vitulina). We test whether vocal tract structures and body size scale allometrically in 68 individuals. We find that both body length and body weight accurately predict vocal tract length and one tracheal dimension. Independently, body length predicts vocal fold length while body weight predicts a second tracheal dimension. All vocal tract measures are larger in weaners than in pups and some structures are sexually dimorphic within age classes. We conclude that harbour seals do comply with allometric constraints, lending support to our hypothesis. However, allometry between body size and vocal fold length seems to emerge after puppyhood, suggesting that ontogeny may modulate the anatomy-learning distinction previously hypothesised as clear-cut. Species capable of producing non-allometric signals while their vocal tract scales allometrically, like seals, may then use non-morphological allometry-breaking mechanisms. We suggest that seals, and potentially other vocal learning mammals, may achieve allometry-breaking through developed neural control over their vocal organs.
  • Rinker, T., Papadopoulou, D., Ávila-Varela, D., Bosch, J., Castro, S., Olioumtsevits, K., Pereira Soares, S. M., Wodniecka, Z., & Marinis, T. (2022). Does multilingualism bring benefits?: What do teachers think about multilingualism? The Multilingual Mind: Policy Reports 2022, 3. doi:10.48787/kops/352-2-1m7py02eqd0b56.
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi:10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. 46 5–7 year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language, but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide, for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Rohde, H., & Rubio-Fernández, P. (2022). Color interpretation is guided by informativity expectations, not by world knowledge about colors. Journal of Memory and Language, 127: 104371. doi:10.1016/j.jml.2022.104371.

    Abstract

    When people hear words for objects with prototypical colors (e.g., ‘banana’), they look at objects of the same color (e.g., lemon), suggesting a link in comprehension between objects and their prototypical colors. However, that link does not carry over to production: The experimental record also shows that when people speak, they tend to omit prototypical colors, using color adjectives when it is informative (e.g., when referring to clothes, which have no prototypical color). These findings yield an interesting prediction, which we tested here: while prior work shows that people look at yellow objects when hearing ‘banana’, they should look away from bananas when hearing ‘yellow’. The results of an offline sentence-completion task (N = 100) and an online eye-tracking task (N = 41) confirmed that when presented with truncated color descriptions (e.g., ‘Click on the yellow…’), people anticipate clothing items rather than stereotypical fruits. A corpus analysis ruled out the possibility that this association between color and clothing arises from simple context-free co-occurrence statistics. We conclude that comprehenders make linguistic predictions based not only on what they know about the world (e.g., which objects are yellow) but also on what speakers tend to say about the world (i.e., what content would be informative).

  • Rojas-Berscia, L. M., Lehecka, T., Claassen, S. A., Peute, A. A. K., Escobedo, M. P., Escobedo, S. P., Tangoa, A. H., & Pizango, E. Y. (2022). Embedding in Shawi narrations: A quantitative analysis of embedding in a post-colonial Amazonian indigenous society. Language in Society, 51(3), 427-451. doi:10.1017/S0047404521000634.

    Abstract

    In this article, we provide the first quantitative account of the frequent use of embedding in Shawi, a Kawapanan language spoken in Peruvian Northwestern Amazonia. We collected a corpus of ninety-two Frog Stories (Mayer 1969) from three different field sites in 2015 and 2016. Using the glossed corpus as our data, we conducted a generalised mixed model analysis, where we predicted the use of embedding with several macrosocial variables, such as gender, age, and education level. We show that bilingualism (Amazonian Spanish-Shawi) and education, mostly restricted by complex gender differences in Shawi communities, play a significant role in the establishment of linguistic preferences in narration. Moreover, we argue that the use of embedding reflects the impact of the mestizo society from the nineteenth century until today in Santa Maria de Cahuapanas, reshaping not only Shawi demographics but also linguistic practices.
  • Rösler, D., & Skiba, R. (1986). Ein vernetzter Lehrmaterial-Steinbruch für Deutsch als Zweitsprache (Projekt EKMAUS, FU Berlin). Deutsch Lernen: Zeitschrift für den Sprachunterricht mit ausländischen Arbeitnehmern, 2, 68-71. Retrieved from http://www.daz-didaktik.de/html/1986.html.
  • Rothman, J., Bayram, F., DeLuca, V., Di Pisa, G., Duñabeitia, J. A., Gharibi, K., Hao, J., Kolb, N., Kubota, M., Kupisch, T., Laméris, T., Luque, A., Van Osch, B., Pereira Soares, S. M., Prystauka, Y., Tat, D., Tomić, A., Voits, T., & Wulff, S. (2022). Monolingual comparative normativity in bilingualism research is out of “control”: Arguments and alternatives. Applied Psycholinguistics, 44(3), 316-329. doi:10.1017/S0142716422000315.

    Abstract

    Herein, we contextualize, problematize, and offer some insights for moving beyond the problem of monolingual comparative normativity in (psycho) linguistic research on bilingualism. We argue that, in the vast majority of cases, juxtaposing (functional) monolinguals to bilinguals fails to offer what the comparison is supposedly intended to do: meet the standards of empirical control in line with the scientific method. Instead, the default nature of monolingual comparative normativity has historically contributed to inequalities in many facets of bilingualism research and continues to impede progress on multiple levels. Beyond framing our views on the matter, we offer some epistemological considerations and methodological alternatives to this standard practice that improve empirical rigor while fostering increased diversity, inclusivity, and equity in our field.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rubio-Fernandez, P., Long, M., Shukla, V., Bhatia, V., & Sinha, P. (2022). Visual perspective taking is not automatic in a simplified Dot task: Evidence from newly sighted children, primary school children and adults. Neuropsychologia, 172: 108256. doi:10.1016/j.neuropsychologia.2022.108256.

    Abstract

    In the Dot task, children and adults involuntarily compute an avatar’s visual perspective, which has been interpreted by some as automatic Theory of Mind. This interpretation has been challenged by other researchers arguing that the task reveals automatic attentional orienting. Here we tested a new interpretation of previous findings: the seemingly automatic processes revealed by the Dot task result from the high Executive Control demands of this verification paradigm, which taxes short-term memory and imposes perspective-switching costs. We tested this hypothesis in three experiments conducted in India with newly sighted children (Experiment 1; N = 5; all girls), neurotypical children (Experiment 2; ages 5–10; N = 90; 38 girls) and adults (Experiment 3; N = 30; 18 women) in a highly simplified version of the Dot task. No evidence of automatic perspective-taking was observed, although all groups revealed perspective-taking costs. A newly sighted child and the youngest children in our sample also showed an egocentric bias, which disappeared by age 10, confirming that visual perspective taking develops during the school years. We conclude that the standard Dot task imposes such methodological demands on both children and adults that the alleged evidence of automatic processes (either mindreading or domain general) may simply reveal limitations in Executive Control.

  • Rubio-Fernández, P., Shukla, V., Bhatia, V., Ben-Ami, S., & Sinha, P. (2022). Head turning is an effective cue for gaze following: Evidence from newly sighted individuals, school children and adults. Neuropsychologia, 174: 108330. doi:10.1016/j.neuropsychologia.2022.108330.

    Abstract

    In referential communication, gaze is often interpreted as a social cue that facilitates comprehension and enables word learning. Here we investigated the degree to which head turning facilitates gaze following. We presented participants with static pictures of a man looking at a target object in a first and third block of trials (pre- and post-intervention), while they saw short videos of the same man turning towards the target in the second block of trials (intervention). In Experiment 1, newly sighted individuals (treated for congenital cataracts; N = 8) benefited from the motion cues, both when comparing their initial performance with static gaze cues to their performance with dynamic head turning, and their performance with static cues before and after the videos. In Experiment 2, neurotypical school children (ages 5–10 years; N = 90) and adults (N = 30) also revealed improved performance with motion cues, although most participants had started to follow the static gaze cues before they saw the videos. Our results confirm that head turning is an effective social cue when interpreting new words, offering new insights for a pathways approach to development.
  • Rubio-Fernández, P., Wienholz, A., Ballard, C. M., Kirby, S., & Lieberman, A. M. (2022). Adjective position and referential efficiency in American Sign Language: Effects of adjective semantics, sign type and age of sign exposure. Journal of Memory and Language, 126: 104348. doi:10.1016/j.jml.2022.104348.

    Abstract

    Previous research has pointed at communicative efficiency as a possible constraint on language structure. Here we investigated adjective position in American Sign Language (ASL), a language with relatively flexible word order, to test the incremental efficiency hypothesis, according to which both speakers and signers try to produce efficient referential expressions that are sensitive to the word order of their languages. The results of three experiments using a standard referential communication task confirmed that deaf ASL signers tend to produce absolute adjectives, such as color or material, in prenominal position, while scalar adjectives tend to be produced in prenominal position when expressed as lexical signs, but in postnominal position when expressed as classifiers. Age of ASL exposure also had an effect on referential choice, with early-exposed signers producing more classifiers than late-exposed signers, in some cases. Overall, our results suggest that linguistic, pragmatic and developmental factors affect referential choice in ASL, supporting the hypothesis that communicative efficiency is an important factor in shaping language structure and use.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • Rubio-Fernandez, P. (2022). Demonstrative systems: From linguistic typology to social cognition. Cognitive Psychology, 139: 101519. doi:10.1016/j.cogpsych.2022.101519.

    Abstract

    This study explores the connection between language and social cognition by empirically testing different typological analyses of various demonstrative systems. Linguistic typology classifies demonstrative systems as distance-oriented or person-oriented, depending on whether they indicate the location of a referent relative only to the speaker, or to both the speaker and the listener. From the perspective of social cognition, speakers of languages with person-oriented systems must monitor their listener’s spatial location in order to accurately use their demonstratives, while speakers of languages with distance-oriented systems can use demonstratives from their own, egocentric perspective. Resolving an ongoing controversy around the nature of the Spanish demonstrative system, the results of Experiment 1 confirmed that this demonstrative system is person oriented, while the English system is distance oriented. Experiment 2 revealed that not all three-way demonstrative systems are person oriented, with Japanese speakers showing sensitivity to the listener’s spatial location, while Turkish speakers did not show such an effect in their demonstrative choice. In Experiment 3, Catalan-Spanish bilinguals showed sensitivity to listener position in their choice of the Spanish distal form, but not in their choice of the medial form. These results were interpreted as a transfer effect from Catalan, which revealed analogous results to English. Experiment 4 investigated the use of demonstratives to redirect a listener’s attention to the intended referent, which is a universal function of demonstratives that also hinges on social cognition. Japanese and Spanish speakers chose between their proximal and distal demonstratives flexibly, depending on whether the listener was looking closer or further from the referent, whereas Turkish speakers chose their medial form for attention correction. In conclusion, the results of this study support the view that investigating how speakers of different languages jointly use language and social cognition in communication has the potential to unravel the deep connection between these two fundamentally human capacities.
  • Ruggeri, K., Panin, A., Vdovic, M., Većkalov, B., Abdul-Salaam, N., Achterberg, J., Akil, C., Amatya, J., Amatya, K., Andersen, T. L., Aquino, S. D., Arunasalam, A., Ashcroft-Jones, S., Askelund, A. D., Ayacaxli, N., Bagheri Sheshdeh, A., Bailey, A., Barea Arroyo, P., Basulto Mejía, G., Benvenuti, M., Berge, M. L., Bermaganbet, A., Bibilouri, K., Bjørndal, L. D., Black, S., Blomster Lyshol, J. K., Brik, T., Buabang, E. K., Burghart, M., Bursalıoğlu, A., Buzayu, N. M., Čadek, M., De Carvalho, N. M., Cazan, A.-M., Çetinçelik, M., Chai, V. E., Chen, P., Chen, S., Clay, G., D’Ambrogio, S., Damnjanović, K., Duffy, G., Dugue, T., Dwarkanath, T., Envuladu, E. A., Erceg, N., Esteban-Serna, C., Farahat, E., Farrokhnia, R. A., Fawad, M., Fedryansyah, M., Feng, D., Filippi, S., Fonollá, M. A., Freichel, R., Freira, L., Friedemann, M., Gao, Z., Ge, S., Geiger, S. J., George, L., Grabovski, I., Gracheva, A., Gracheva, A., Hajian, A., Hasan, N., Hecht, M., Hong, X., Hubená, B., Ikonomeas, A. G. F., Ilić, S., Izydorczyk, D., Jakob, L., Janssens, M., Jarke, H., Kácha, O., Kalinova, K. N., Kapingura, F. M., Karakasheva, R., Kasdan, D. O., Kemel, E., Khorrami, P., Krawiec, J. M., Lagidze, N., Lazarević, A., Lazić, A., Lee, H. S., Lep, Ž., Lins, S., Lofthus, I. S., Macchia, L., Mamede, S., Mamo, M. A., Maratkyzy, L., Mareva, S., Marwaha, S., McGill, L., McParland, S., Melnic, A., Meyer, S. A., Mizak, S., Mohammed, A., Mukhyshbayeva, A., Navajas, J., Neshevska, D., Niazi, S. J., Nieves, A. E. N., Nippold, F., Oberschulte, J., Otto, T., Pae, R., Panchelieva, T., Park, S. Y., Pascu, D. S., Pavlović, I., Petrović, M. B., Popović, D., Prinz, G. M., Rachev, N. R., Ranc, P., Razum, J., Rho, C. E., Riitsalu, L., Rocca, F., Rosenbaum, R. S., Rujimora, J., Rusyidi, B., Rutherford, C., Said, R., Sanguino, I., Sarikaya, A. K., Say, N., Schuck, J., Shiels, M., Shir, Y., Sievert, E. D. C., Soboleva, I., Solomonia, T., Soni, S., Soysal, I., Stablum, F., Sundström, F. T. A., Tang, X., Tavera, F., Taylor, J., Tebbe, A.-L., Thommesen, K. K., Tobias-Webb, J., Todsen, A. L., Toscano, F., Tran, T., Trinh, J., Turati, A., Ueda, K., Vacondio, M., Vakhitov, V., Valencia, A. J., Van Reyn, C., Venema, T. A. G., Verra, S. E., Vintr, J., Vranka, M. A., Wagner, L., Wu, X., Xing, K. Y., Xu, K., Xu, S., Yamada, Y., Yosifova, A., Zupan, Z., & García-Garzon, E. (2022). The globalizability of temporal discounting. Nature Human Behaviour, 6, 1386-1397. doi:10.1038/s41562-022-01392-w.

    Abstract

    Economic inequality is associated with preferences for smaller, immediate gains over larger, delayed ones. Such temporal discounting may feed into rising global inequality, yet it is unclear whether it is a function of choice preferences or norms, or rather the absence of sufficient resources for immediate needs. It is also not clear whether these reflect true differences in choice patterns between income groups. We tested temporal discounting and five intertemporal choice anomalies using local currencies and value standards in 61 countries (N = 13,629). Across a diverse sample, we found consistent, robust rates of choice anomalies. Lower-income groups were not significantly different, but economic inequality and broader financial circumstances were clearly correlated with population choice patterns.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together; however, they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • Sainburg, T., Mai, A., & Gentner, T. Q. (2022). Long-range sequential dependencies precede complex syntactic production in language acquisition. Proceedings of the Royal Society B: Biological Sciences, 289: 20212657. doi:10.1098/rspb.2021.2657.

    Abstract

    To convey meaning, human language relies on hierarchically organized, long-range relationships spanning words, phrases, sentences and discourse. As the distances between elements (e.g. phonemes, characters, words) in human language sequences increase, the strength of the long-range relationships between those elements decays following a power law. This power-law relationship has been attributed variously to long-range sequential organization present in human language syntax, semantics and discourse structure. However, non-linguistic behaviours in numerous phylogenetically distant species, ranging from humpback whale song to fruit fly motility, also demonstrate similar long-range statistical dependencies. Therefore, we hypothesized that long-range statistical dependencies in human speech may occur independently of linguistic structure. To test this hypothesis, we measured long-range dependencies in several speech corpora from children (aged 6 months–12 years). We find that adult-like power-law statistical dependencies are present in human vocalizations at the earliest detectable ages, prior to the production of complex linguistic structure. These linguistic structures cannot, therefore, be the sole cause of long-range statistical dependencies in language.
  • Salazar-Casals, A., de Reus, K., Greskewitz, N., Havermans, J., Geut, M., Villanueva, S., & Rubio-Garcia, A. (2022). Increased incidence of entanglements and ingested marine debris in Dutch seals from 2010 to 2020. Oceans, 3(3), 389-400. doi:10.3390/oceans3030026.

    Abstract

    In recent decades, the amount of marine debris has increased in our oceans. As wildlife interactions with debris increase, so does the number of entangled animals, impairing normal behavior and potentially affecting the survival of these individuals. The current study summarizes data on two phocid species, harbor (Phoca vitulina) and gray seals (Halichoerus grypus), affected by marine debris in Dutch waters from 2010 to 2020. The findings indicate that the annual entanglement rate (13.2 entanglements/year) has quadrupled compared with previous studies. Young seals, particularly gray seals, are the most affected individuals, with most animals found or sighted with fishing nets wrapped around their necks. Interestingly, harbor seals showed a higher incidence of ingested debris. Species differences with regard to behavior, foraging strategies, and habitat preferences may explain these findings. The lack of consistency across reports suggests that it is important to standardize data collection from now on. Despite increased public awareness about the adverse environmental effects of marine debris, more initiatives and policies are needed to ensure the protection of the marine environment in the Netherlands.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer, Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances; each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer, Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition, that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences there is, however, lately a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated, while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
  • Schlag, F., Allegrini, A. G., Buitelaar, J., Verhoef, E., Van Donkelaar, M. M. J., Plomin, R., Rimfeld, K., Fisher, S. E., & St Pourcain, B. (2022). Polygenic risk for mental disorder reveals distinct association profiles across social behaviour in the general population. Molecular Psychiatry, 27, 1588-1598. doi:10.1038/s41380-021-01419-0.

    Abstract

    Many mental health conditions present a spectrum of social difficulties that overlaps with social behaviour in the general population, including shared but little-characterised genetic links. Here, we systematically investigate heterogeneity in shared genetic liabilities with attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorders (ASD), bipolar disorder (BP), major depression (MD) and schizophrenia across a spectrum of different social symptoms. Longitudinally assessed low-prosociality and peer-problem scores in two UK population-based cohorts (4–17 years; parent- and teacher-reports; Avon Longitudinal Study of Parents and Children (ALSPAC): N ≤ 6,174; Twins Early Development Study (TEDS): N ≤ 7,112) were regressed on polygenic risk scores for disorder, as informed by genome-wide summary statistics from large consortia, using negative binomial regression models. Across ALSPAC and TEDS, we replicated univariate polygenic associations between social behaviour and risk for ADHD, MD and schizophrenia. Modelling variation in univariate genetic effects jointly using random-effect meta-regression revealed evidence for polygenic links between social behaviour and ADHD, ASD, MD, and schizophrenia risk, but not BP. Differences in age, reporter and social trait captured 45–88% of the variation in univariate effects. Cross-disorder adjusted analyses demonstrated that age-related heterogeneity in univariate effects is shared across mental health conditions, while reporter- and social trait-specific heterogeneity captures disorder-specific profiles. In particular, ADHD, MD, and ASD polygenic risk were more strongly linked to peer problems than low prosociality, while schizophrenia was associated with low prosociality only. The identified association profiles suggest differences in the social genetic architecture across mental disorders when investigating polygenic overlap with population-based social symptoms spanning 13 years of child and adolescent development.
  • Schoenmakers, G.-J., Poortvliet, M., & Schaeffer, J. (2022). Topicality and anaphoricity in Dutch scrambling. Natural Language & Linguistic Theory, 40, 541-571. doi:10.1007/s11049-021-09516-z.

    Abstract

    Direct objects in Dutch can precede or follow adverbs, a phenomenon commonly referred to as scrambling. The linguistic literature agrees in its assumption that scrambling is regulated by the topicality and anaphoricity status of definite objects, but theories vary as to what kinds of objects exactly are predicted to scramble. This study reports experimental data from a sentence completion experiment with adult native speakers of Dutch, showing that topics are scrambled more often than foci, and that anaphoric objects are scrambled more often than non-anaphoric objects. However, while the data provide support for the assumption that topicality and anaphoricity play an important role in scrambling, they also indicate that the discourse status of the object in and of itself cannot explain the full scrambling variation.
  • Schubotz, L., Özyürek, A., & Holler, J. (2022). Individual differences in working memory and semantic fluency predict younger and older adults' multimodal recipient design in an interactive spatial task. Acta Psychologica, 229: 103690. doi:10.1016/j.actpsy.2022.103690.

    Abstract

    Aging appears to impair the ability to adapt speech and gestures based on knowledge shared with an addressee (common ground-based recipient design) in narrative settings. Here, we test whether this extends to spatial settings and is modulated by cognitive abilities. Younger and older adults gave instructions on how to assemble 3D-models from building blocks on six consecutive trials. We induced mutually shared knowledge by either showing speaker and addressee the model beforehand, or not. Additionally, shared knowledge accumulated across the trials. Younger and crucially also older adults provided recipient-designed utterances, indicated by a significant reduction in the number of words and of gestures when common ground was present. Additionally, we observed a reduction in semantic content and a shift in cross-modal distribution of information across trials. Rather than age, individual differences in verbal and visual working memory and semantic fluency predicted the extent of addressee-based adaptations. Thus, in spatial tasks, individual cognitive abilities modulate the interactive language use of both younger and older adults.

    Additional information

    1-s2.0-S0001691822002050-mmc1.docx
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
  • Senft, G. (1986). [Review of the book Under the Tumtum tree: From nonsense to sense in nonautomatic comprehension by Marlene Dolitsky]. Journal of Pragmatics, 10, 273-278. doi:10.1016/0378-2166(86)90094-9.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
  • Senft, G. (1987). Kilivila color terms. Studies in Language, 11, 313-346.
  • Senft, G. (1987). Nanam'sa Bwena - Gutes Denken: Eine ethnolinguistische Fallstudie über eine Dorfversammlung auf den Trobriand Inseln. Zeitschrift für Ethnologie, 112, 181-222.
  • Senft, G., & Senft, B. (1986). Ninikula Fadenspiele auf den Trobriand-Inseln, Papua-Neuguinea: Untersuchungen zum Spiele-Repertoire unter besonderer Berücksichtigung der Spiel-begleitenden Texte. Baessler-Archiv: Beiträge zur Völkerkunde, 34(1), 93-235.
  • Senft, G. (1987). Rituelle Kommunikation auf den Trobriand Inseln. Zeitschrift für Literaturwissenschaft und Linguistik, 65, 105-130.
  • Senft, G. (1987). The system of classificatory particles in Kilivila reconsidered: First results on its inventory, its acquisition, and its usage. Language and Linguistics in Melanesia, 16, 100-125.
  • Seuren, P. A. M. (2007). The theory that dare not speak its name: A rejoinder to Mufwene and Francis. Language Sciences, 29(4), 571-573. doi:10.1016/j.langsci.2007.02.001.
  • Seuren, P. A. M. (1987). A note on siki. Journal of Pidgin and Creole Languages, 2(1), 57-62. doi:10.1075/jpcl.2.1.07pie.
  • Seuren, P. A. M. (1986). Adjectives as adjectives in Sranan. Journal of Pidgin and Creole Languages, 1(1), 123-134.
  • Seuren, P. A. M. (1973). [Review of the book A comprehensive etymological dictionary of the English language by Ernst Klein]. Neophilologus, 57(4), 423-426. doi:10.1007/BF01515518.
  • Seuren, P. A. M. (1979). [Review of the book Approaches to natural language ed. by K. Hintikka, J. Moravcsik and P. Suppes]. Leuvense Bijdragen, 68, 163-168.
  • Seuren, P. A. M. (1973). [Review of the book Philosophy of language by Robert J. Clack and Bertrand Russell]. Foundations of Language, 9(3), 440-441.
  • Seuren, P. A. M. (1973). [Review of the book Semantics. An interdisciplinary reader in philosophy, linguistics and psychology ed. by Danny D. Steinberg and Leon A. Jakobovits]. Neophilologus, 57(2), 198-213. doi:10.1007/BF01514332.
  • Seuren, P. A. M. (1986). Formal theory and the ecology of language. Theoretical Linguistics, 13(1), 1-18. doi:10.1515/thli.1986.13.1-2.1.
  • Seuren, P. A. M. (1987). How relevant?: A commentary on Sperber and Wilson "Précis of relevance: Communication and cognition'. Behavioral and Brain Sciences, 10, 731-733. doi:10.1017/S0140525X00055564.
  • Seuren, P. A. M. (1979). Meer over minder dan hoeft. De Nieuwe Taalgids, 72(3), 236-239.
  • Seuren, P. A. M. (1963). Naar aanleiding van Dr. F. Balk-Smit Duyzentkunst "De Grammatische Functie". Levende Talen, 219, 179-186.
  • Seuren, P. A. M. (1986). La transparence sémantique et la genèse des langues créoles: Le cas du Créole mauricien. Études Créoles, 9, 169-183.
  • Seuren, P. A. M. (1987). Les paradoxes et le langage. Logique et Analyse, 30(120), 365-383.
  • Seuren, P. A. M. (1986). Helpen en helpen is twee. Glot, 9(1/2), 110-117.
  • Seuren, P. A. M. (1986). The self-styling of relevance theory [Review of the book Relevance, Communication and Cognition by Dan Sperber and Deirdre Wilson]. Journal of Semantics, 5(2), 123-143. doi:10.1093/jos/5.2.123.
  • Seuren, P. A. M. (1973). Zero-output rules. Foundations of Language, 10(2), 317-328.
  • Sha, Z., Van Rooij, D., Anagnostou, E., Arango, C., Auzias, G., Behrmann, M., Bernhardt, B., Bolte, S., Busatto, G. F., Calderoni, S., Calvo, R., Daly, E., Deruelle, C., Duan, M., Duran, F. L. S., Durston, S., Ecker, C., Ehrlich, S., Fair, D., Fedor, J., Fitzgerald, J., Floris, D. L., Franke, B., Freitag, C. M., Gallagher, L., Glahn, D. C., Haar, S., Hoekstra, L., Jahanshad, N., Jalbrzikowski, M., Janssen, J., King, J. A., Lazaro, L., Luna, B., McGrath, J., Medland, S. E., Muratori, F., Murphy, D. G., Neufeld, J., O’Hearn, K., Oranje, B., Parellada, M., Pariente, J. C., Postema, M., Remnelius, K. L., Retico, A., Rosa, P. G. P., Rubia, K., Shook, D., Tammimies, K., Taylor, M. J., Tosetti, M., Wallace, G. L., Zhou, F., Thompson, P. M., Fisher, S. E., Buitelaar, J. K., & Francks, C. (2022). Subtly altered topological asymmetry of brain structural covariance networks in autism spectrum disorder across 43 datasets from the ENIGMA consortium. Molecular Psychiatry, 27, 2114-2125. doi:10.1038/s41380-022-01452-7.

    Abstract

    Small average differences in the left-right asymmetry of cerebral cortical thickness have been reported in individuals with autism spectrum disorder (ASD) compared to typically developing controls, affecting widespread cortical regions. The possible impacts of these regional alterations in terms of structural network effects have not previously been characterized. Inter-regional morphological covariance analysis can capture network connectivity between different cortical areas at the macroscale level. Here, we used cortical thickness data from 1455 individuals with ASD and 1560 controls, across 43 independent datasets of the ENIGMA consortium’s ASD Working Group, to assess hemispheric asymmetries of intra-individual structural covariance networks, using graph theory-based topological metrics. Compared with typical features of small-world architecture in controls, the ASD sample showed significantly altered average asymmetry of networks involving the fusiform, rostral middle frontal, and medial orbitofrontal cortex, involving higher randomization of the corresponding right-hemispheric networks in ASD. A network involving the superior frontal cortex showed decreased right-hemisphere randomization. Based on comparisons with meta-analyzed functional neuroimaging data, the altered connectivity asymmetry particularly affected networks that subserve executive functions, language-related and sensorimotor processes. These findings provide a network-level characterization of altered left-right brain asymmetry in ASD, based on a large combined sample. Altered asymmetrical brain development in ASD may be partly propagated among spatially distant regions through structural connectivity.
  • Shebani, Z., Carota, F., Hauk, O., Rowe, J. B., Barsalou, L. W., Tomasello, R., & Pulvermüller, F. (2022). Brain correlates of action word memory revealed by fMRI. Scientific Reports, 12: 16053. doi:10.1038/s41598-022-19416-w.

    Abstract

    Understanding language semantically related to actions activates the motor cortex. This activation is sensitive to semantic information such as the body part used to perform the action (e.g. arm-/leg-related action words). Additionally, motor movements of the hands/feet can have a causal effect on memory maintenance of action words, suggesting that the involvement of motor systems extends to working memory. This study examined brain correlates of verbal memory load for action-related words using event-related fMRI. Seventeen participants saw either four identical or four different words from the same category (arm-/leg-related action words) then performed a nonmatching-to-sample task. Results show that verbal memory maintenance in the high-load condition produced greater activation in left premotor and supplementary motor cortex, along with posterior-parietal areas, indicating that verbal memory circuits for action-related words include the cortical action system. Somatotopic memory load effects of arm- and leg-related words were observed, but only at more anterior cortical regions than was found in earlier studies employing passive reading tasks. These findings support a neurocomputational model of distributed action-perception circuits (APCs), according to which language understanding is manifest as full ignition of APCs, whereas working memory is realized as reverberant activity receding to multimodal prefrontal and lateral temporal areas.

    Additional information

    supplementary figure S1 caption
  • Shukla, V., Long, M., & Rubio-Fernandez, P. (2022). Children’s acquisition of new/given markers in English, Hindi, Mandinka and Spanish: Exploring the effect of optionality during grammaticalization. Glossa Psycholinguistics, 1(1): 13. doi:10.5070/G6011120.

    Abstract

    We investigated the effect of optionality on the acquisition of new/given markers, with a special focus on grammaticalization as a stage of optional use of the emerging form. To this end, we conducted a narrative-elicitation task with 5-year-old children and adults across four typologically-distinct languages with different new/given markers: English, Hindi, Mandinka and Spanish. Our starting assumption was that the Hindi numeral ‘ek’ (one) is developing into an indefinite article, which should delay children’s acquisition because of its optional use to introduce discourse referents. Supporting the Optionality Hypothesis, Experiment 1 revealed that obligatory markers are acquired earlier than optional markers. Experiment 2 focused on Hindi and showed that 10-year-old children’s use of ‘ek’ to introduce discourse characters was higher than 5-year-olds’ and comparable to adults’, replicating this pattern of results in two different cities in Northern India. Lastly, a follow-up study showed that Mandinka-speaking children and adults made use of all available discourse markers when tested on a familiar story, rather than with pictorial prompts, highlighting the importance of using culturally-appropriate methods of narrative elicitation in cross-linguistic research. We conclude by discussing the implications of article grammaticalization for common ground management in a speech community.
