Publications

  • Hubers, F., Redl, T., De Vos, H., Reinarz, L., & De Hoop, H. (2020). Processing prescriptively incorrect comparative particles: Evidence from sentence-matching and eye-tracking. Frontiers in Psychology, 11: 186. doi:10.3389/fpsyg.2020.00186.

    Abstract

    Speakers of a language sometimes use particular constructions which violate prescriptive grammar rules. Despite their prescriptive ungrammaticality, they can occur rather frequently. One such example is the comparative construction in Dutch and similarly in German, where the equative particle is used in comparative constructions instead of the prescriptively correct comparative particle (Dutch beter als Jan and German besser wie Jan ‘lit. better as John’). From a theoretical linguist’s point of view, these so-called grammatical norm violations are perfectly grammatical, even though they are not part of the language’s prescriptive grammar. In a series of three experiments using sentence-matching and eye-tracking methodology, we investigated whether grammatical norm violations are processed as truly grammatical, as truly ungrammatical, or whether they fall in between these two. We hypothesized that the latter would be the case. We analyzed our data using linear mixed effects models in order to capture possible individual differences. The results of the sentence-matching experiments, which were conducted in both Dutch and German, showed that the grammatical norm violation patterns with ungrammatical sentences in both languages. Our hypothesis was therefore not borne out. However, using the more sensitive eye-tracking method on Dutch speakers only, we found that the ungrammatical alternative leads to higher reading times than the grammatical norm violation. We also found significant individual variation regarding this very effect. We furthermore replicated the processing difference between the grammatical norm violation and the prescriptively correct variant. In summary, we conclude that while the results of the more sensitive eye-tracking experiment suggest that grammatical norm violations are not processed on a par with ungrammatical sentences, the results of all three experiments clearly show that grammatical norm violations cannot be considered grammatical, either.

    Additional information

    Supplementary Material
  • Hubers, F., Trompenaars, T., Collin, S., De Schepper, K., & De Hoop, H. (2020). Hypercorrection as a by-product of education. Applied Linguistics, 41(4), 552-574. doi:10.1093/applin/amz001.

    Abstract

    Prescriptive grammar rules are taught in education, generally to ban the use of certain frequently encountered constructions in everyday language. This may lead to hypercorrection, meaning that the prescribed form in one construction is extended to another one in which it is in fact prohibited by prescriptive grammar. We discuss two such cases in Dutch: the hypercorrect use of the comparative particle dan ‘than’ in equative constructions, and the hypercorrect use of the accusative pronoun hen ‘them’ for a dative object. In two experiments, high school students of three educational levels were tested on their use of these hypercorrect forms (Experiment 1: n = 162; Experiment 2: n = 159). Our results indicate an overall large amount of hypercorrection across all levels of education, including pre-university level students who otherwise perform better in constructions targeted by prescriptive grammar rules. We conclude that while teaching prescriptive grammar rules to high school students seems to increase their use of correct forms in certain constructions, this comes at a cost of hypercorrection in others.
  • Huettig, F., & McQueen, J. M. (2011). The nature of the visual environment induces implicit biases during language-mediated visual search. Memory & Cognition, 39, 1068-1084. doi:10.3758/s13421-011-0086-z.

    Abstract

    Four eye-tracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed-word displays and used during language-mediated visual search. Participants listened to sentences containing target words which were similar semantically or in shape to concepts invoked by concurrently-displayed printed words. In Experiment 1 the displays contained semantic and shape competitors of the targets, and two unrelated words. There were significant shifts in eye gaze as targets were heard towards semantic but not shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eye-tracking task. In all cases there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed-word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases towards particular modes of processing during language-mediated visual search.
  • Huettig, F., Rommers, J., & Meyer, A. S. (2011). Using the visual world paradigm to study language processing: A review and critical evaluation. Acta Psychologica, 137, 151-171. doi:10.1016/j.actpsy.2010.11.003.

    Abstract

    We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.
  • Huettig, F., Guerra, E., & Helo, A. (2020). Towards understanding the task dependency of embodied language processing: The influence of colour during language-vision interactions. Journal of Cognition, 3(1): 41. doi:10.5334/joc.135.

    Abstract

    A main challenge for theories of embodied cognition is to understand the task dependency of embodied language processing. One possibility is that perceptual representations (e.g., typical colour of objects mentioned in spoken sentences) are not activated routinely but the influence of perceptual representation emerges only when context strongly supports their involvement in language. To explore this question, we tested the effects of colour representations during language processing in three visual-world eye-tracking experiments. On critical trials, participants listened to sentence-embedded words associated with a prototypical colour (e.g., ‘...spinach...’) while they inspected a visual display with four printed words (Experiment 1), coloured or greyscale line drawings (Experiment 2) and a ‘blank screen’ after a preview of coloured or greyscale line drawings (Experiment 3). Visual context always presented a word/object (e.g., frog) associated with the same prototypical colour (e.g. green) as the spoken target word and three distractors. When hearing spinach participants did not prefer the written word frog compared to other distractor words (Experiment 1). In Experiment 2, colour competitors attracted more overt attention compared to average distractors, but only for the coloured condition and not for greyscale trials. Finally, when the display was removed at the onset of the sentence, and in contrast to the previous blank-screen experiments with semantic competitors, there was no evidence of colour competition in the eye-tracking record (Experiment 3). These results fit best with the notion that the main role of perceptual representations in language processing is to contextualize language in the immediate environment.

    Additional information

    Data files and script
  • Huettig, F., & Altmann, G. (2011). Looking at anything that is green when hearing ‘frog’: How object surface colour and stored object colour knowledge influence language-mediated overt attention. Quarterly Journal of Experimental Psychology, 64(1), 122-145. doi:10.1080/17470218.2010.481474.

    Abstract

    Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.
  • Huettig, F., Olivers, C. N. L., & Hartsuiker, R. J. (2011). Looking, language, and memory: Bridging research from the visual world and visual search paradigms. Acta Psychologica, 137, 138-150. doi:10.1016/j.actpsy.2010.07.013.

    Abstract

    In the visual world paradigm as used in psycholinguistics, eye gaze (i.e. visual orienting) is measured in order to draw conclusions about linguistic processing. However, current theories are underspecified with respect to how visual attention is guided on the basis of linguistic representations. In the visual search paradigm as used within the area of visual attention research, investigators have become more and more interested in how visual orienting is affected by higher order representations, such as those involved in memory and language. Within this area more specific models of orienting on the basis of visual information exist, but they need to be extended with mechanisms that allow for language-mediated orienting. In the present paper we review the evidence from these two different – but highly related – research areas. We arrive at a model in which working memory serves as the nexus in which long-term visual as well as linguistic representations (i.e. types) are bound to specific locations (i.e. tokens or indices). The model predicts that the interaction between language and visual attention is subject to a number of conditions, such as the presence of the guiding representation in working memory, capacity limitations, and cognitive control mechanisms.
  • Huettig, F., Singh, N., & Mishra, R. K. (2011). Language-mediated visual orienting behavior in low and high literates. Frontiers in Psychology, 2: e285. doi:10.3389/fpsyg.2011.00285.

    Abstract

    The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task (cf. Huettig & Altmann, 2005) which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., 'magar', crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., 'matar', peas; a semantic competitor, e.g., 'kachuwa', turtle, and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze towards phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were impossible (Experiment 2) but in contrast to high literates these phonologically-mediated shifts in eye gaze were not closely time-locked to the speech input. We conclude that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts.
  • Huizeling, E., Wang, H., Holland, C., & Kessler, K. (2020). Age-related changes in attentional refocusing during simulated driving. Brain Sciences, 10(8): 530. doi:10.3390/brainsci10080530.

    Abstract

    We recently reported that refocusing attention between temporal and spatial tasks becomes more difficult with increasing age, which could impair daily activities such as driving (Callaghan et al., 2017). Here, we investigated the extent to which difficulties in refocusing attention extend to naturalistic settings such as simulated driving. A total of 118 participants in five age groups (18–30; 40–49; 50–59; 60–69; 70–91 years) were compared during continuous simulated driving, where they repeatedly switched from braking due to traffic ahead (a spatially focal yet temporally complex task) to reading a motorway road sign (a spatially more distributed task). Sequential-Task (switching) performance was compared to Single-Task performance (road sign only) to calculate age-related switch-costs. Electroencephalography was recorded in 34 participants (17 in the 18–30 and 17 in the 60+ years groups) to explore age-related changes in the neural oscillatory signatures of refocusing attention while driving. We indeed observed age-related impairments in attentional refocusing, evidenced by increased switch-costs in response times and by deficient modulation of theta and alpha frequencies. Our findings highlight virtual reality (VR) and Neuro-VR as important methodologies for future psychological and gerontological research.

    Additional information

    supplementary file
  • Iacozza, S., Meyer, A. S., & Lev-Ari, S. (2020). How in-group bias influences the level of detail of speaker-specific information encoded in novel lexical representations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(5), 894-906. doi:10.1037/xlm0000765.

    Abstract

    An important issue in theories of word learning is how abstract or context-specific representations of novel words are. One aspect of this broad issue is how well learners maintain information about the source of novel words. We investigated whether listeners’ source memory was better for words learned from members of their in-group (students of their own university) than it is for words learned from members of an out-group (students from another institution). In the first session, participants saw 6 faces and learned which of the depicted students attended either their own or a different university. In the second session, they learned competing labels (e.g., citrus-peller and citrus-schiller; in English, lemon peeler and lemon stripper) for novel gadgets, produced by the in-group and out-group speakers. Participants were then tested for source memory of these labels and for the strength of their in-group bias, that is, for how much they preferentially process in-group over out-group information. Analyses of source memory accuracy demonstrated an interaction between speaker group membership status and participants’ in-group bias: Stronger in-group bias was associated with less accurate source memory for out-group labels than in-group labels. These results add to the growing body of evidence on the importance of social variables for adult word learning.
  • Indefrey, P. (2011). The spatial and temporal signatures of word production components: a critical update. Frontiers in Psychology, 2: 255. doi:10.3389/fpsyg.2011.00255.

    Abstract

    In the first decade of neurocognitive word production research the predominant approach was brain mapping, i.e., investigating the regional cerebral brain activation patterns correlated with word production tasks, such as picture naming and word generation. Indefrey and Levelt (2004) conducted a comprehensive meta-analysis of word production studies that used this approach and combined the resulting spatial information on neural correlates of component processes of word production with information on the time course of word production provided by behavioral and electromagnetic studies. In recent years, neurocognitive word production research has seen a major change toward a hypothesis-testing approach. This approach is characterized by the design of experimental variables modulating single component processes of word production and testing for predicted effects on spatial or temporal neurocognitive signatures of these components. This change was accompanied by the development of a broader spectrum of measurement and analysis techniques. The article reviews the findings of recent studies using the new approach. The time course assumptions of Indefrey and Levelt (2004) have largely been confirmed, requiring only minor adaptations. Adaptations of the brain structure/function relationships proposed by Indefrey and Levelt (2004) include the precise role of subregions of the left inferior frontal gyrus as well as a probable, yet to date unclear role of the inferior parietal cortex in word production.
  • Ingason, A., Rujescu, D., Cichon, S., Sigurdsson, E., Sigmundsson, T., Pietilainen, O. P. H., Buizer-Voskamp, J. E., Strengman, E., Francks, C., Muglia, P., Gylfason, A., Gustafsson, O., Olason, P. I., Steinberg, S., Hansen, T., Jakobsen, K. D., Rasmussen, H. B., Giegling, I., Möller, H.-J., Hartmann, A., Crombie, C., Fraser, G., Walker, N., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Bramon, E., Kiemeney, L. A., Franke, B., Murray, R., Vassos, E., Toulopoulou, T., Mühleisen, T. W., Tosato, S., Ruggeri, M., Djurovic, S., Andreassen, O. A., Zhang, Z., Werge, T., Ophoff, R. A., Rietschel, M., Nöthen, M. M., Petursson, H., Stefansson, H., Peltonen, L., Collier, D., Stefansson, K., & St Clair, D. M. (2011). Copy number variations of chromosome 16p13.1 region associated with schizophrenia. Molecular Psychiatry, 16, 17-25. doi:10.1038/mp.2009.101.

    Abstract

    Deletions and reciprocal duplications of the chromosome 16p13.1 region have recently been reported in several cases of autism and mental retardation (MR). As genomic copy number variants found in these two disorders may also associate with schizophrenia, we examined 4345 schizophrenia patients and 35 079 controls from 8 European populations for duplications and deletions at the 16p13.1 locus, using microarray data. We found a threefold excess of duplications and deletions in schizophrenia cases compared with controls, with duplications present in 0.30% of cases versus 0.09% of controls (P=0.007) and deletions in 0.12% of cases and 0.04% of controls (P>0.05). The region can be divided into three intervals defined by flanking low copy repeats. Duplications spanning intervals I and II showed the most significant (P=0.00010) association with schizophrenia. The age of onset in duplication and deletion carriers among cases ranged from 12 to 35 years, and the majority were males with a family history of psychiatric disorders. In a single Icelandic family, a duplication spanning intervals I and II was present in two cases of schizophrenia, and individual cases of alcoholism, attention deficit hyperactivity disorder and dyslexia. Candidate genes in the region include NTAN1 and NDE1. We conclude that duplications and perhaps also deletions of chromosome 16p13.1, previously reported to be associated with autism and MR, also confer risk of schizophrenia.
  • Isbilen, E. S., McCauley, S. M., Kidd, E., & Christiansen, M. H. (2020). Statistically induced chunking recall: A memory‐based approach to statistical learning. Cognitive Science, 44(7): e12848. doi:10.1111/cogs.12848.

    Abstract

    The computations involved in statistical learning have long been debated. Here, we build on work suggesting that a basic memory process, chunking, may account for the processing of statistical regularities into larger units. Drawing on methods from the memory literature, we developed a novel paradigm to test statistical learning by leveraging a robust phenomenon observed in serial recall tasks: that short‐term memory is fundamentally shaped by long‐term distributional learning. In the statistically induced chunking recall (SICR) task, participants are exposed to an artificial language, using a standard statistical learning exposure phase. Afterward, they recall strings of syllables that either follow the statistics of the artificial language or comprise the same syllables presented in a random order. We hypothesized that if individuals had chunked the artificial language into word‐like units, then the statistically structured items would be more accurately recalled relative to the random controls. Our results demonstrate that SICR effectively captures learning in both the auditory and visual modalities, with participants displaying significantly improved recall of the statistically structured items, and even recalling specific trigram chunks from the input. SICR also exhibits greater test–retest reliability in the auditory modality and sensitivity to individual differences in both modalities than the standard two‐alternative forced‐choice task. These results thereby provide key empirical support to the chunking account of statistical learning and contribute a valuable new tool to the literature.
  • Jacoby, N., Margulis, E. H., Clayton, M., Hannon, E., Honing, H., Iversen, J., Klein, T. R., Mehr, S. A., Pearson, L., Peretz, I., Perlman, M., Polak, R., Ravignani, A., Savage, P. E., Steingo, G., Stevens, C. J., Trainor, L., Trehub, S., Veal, M., & Wald-Fuhrmann, M. (2020). Cross-cultural work in music cognition: Challenges, insights, and recommendations. Music Perception, 37(3), 185-195. doi:10.1525/mp.2020.37.3.185.

    Abstract

    Many foundational questions in the psychology of music require cross-cultural approaches, yet the vast majority of work in the field to date has been conducted with Western participants and Western music. For cross-cultural research to thrive, it will require collaboration between people from different disciplinary backgrounds, as well as strategies for overcoming differences in assumptions, methods, and terminology. This position paper surveys the current state of the field and offers a number of concrete recommendations focused on issues involving ethics, empirical methods, and definitions of “music” and “culture.”
  • Janse, E., & Ernestus, M. (2011). The roles of bottom-up and top-down information in the recognition of reduced speech: Evidence from listeners with normal and impaired hearing. Journal of Phonetics, 39(3), 330-343. doi:10.1016/j.wocn.2011.03.005.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Jebb, D., Huang, Z., Pippel, M., Hughes, G. M., Lavrichenko, K., Devanna, P., Winkler, S., Jermiin, L. S., Skirmuntt, E. C., Katzourakis, A., Burkitt-Gray, L., Ray, D. A., Sullivan, K. A. M., Roscito, J. G., Kirilenko, B. M., Dávalos, L. M., Corthals, A. P., Power, M. L., Jones, G., Ransome, R. D., Dechmann, D., Locatelli, A. G., Puechmaille, S. J., Fedrigo, O., Jarvis, E. D., Hiller, M., Vernes, S. C., Myers, E. W., & Teeling, E. C. (2020). Six reference-quality genomes reveal evolution of bat adaptations. Nature, 583, 578-584. doi:10.1038/s41586-020-2486-3.

    Abstract

    Bats possess extraordinary adaptations, including flight, echolocation, extreme longevity and unique immunity. High-quality genomes are crucial for understanding the molecular basis and evolution of these traits. Here we incorporated long-read sequencing and state-of-the-art scaffolding protocols to generate, to our knowledge, the first reference-quality genomes of six bat species (Rhinolophus ferrumequinum, Rousettus aegyptiacus, Phyllostomus discolor, Myotis myotis, Pipistrellus kuhlii and Molossus molossus). We integrated gene projections from our ‘Tool to infer Orthologs from Genome Alignments’ (TOGA) software with de novo and homology gene predictions as well as short- and long-read transcriptomics to generate highly complete gene annotations. To resolve the phylogenetic position of bats within Laurasiatheria, we applied several phylogenetic methods to comprehensive sets of orthologous protein-coding and noncoding regions of the genome, and identified a basal origin for bats within Scrotifera. Our genome-wide screens revealed positive selection on hearing-related genes in the ancestral branch of bats, which is indicative of laryngeal echolocation being an ancestral trait in this clade. We found selection and loss of immunity-related genes (including pro-inflammatory NF-κB regulators) and expansions of anti-viral APOBEC3 genes, which highlights molecular mechanisms that may contribute to the exceptional immunity of bats. Genomic integrations of diverse viruses provide a genomic record of historical tolerance to viral infection in bats. Finally, we found and experimentally validated bat-specific variation in microRNAs, which may regulate bat-specific gene-expression programs. Our reference-quality bat genomes provide the resources required to uncover and validate the genomic basis of adaptations of bats, and stimulate new avenues of research that are directly relevant to human health and disease.

    Additional information

    41586_2020_2486_MOESM1_ESM.pdf
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jesse, A., & McQueen, J. M. (2011). Positional effects in the lexical retuning of speech perception. Psychonomic Bulletin & Review, 18, 943-950. doi:10.3758/s13423-011-0129-2.

    Abstract

    Listeners use lexical knowledge to adjust to speakers’ idiosyncratic pronunciations. Dutch listeners learn to interpret an ambiguous sound between /s/ and /f/ as /f/ if they hear it word-finally in Dutch words normally ending in /f/, but as /s/ if they hear it in normally /s/-final words. Here, we examined two positional effects in lexically guided retuning. In Experiment 1, ambiguous sounds during exposure always appeared in word-initial position (replacing the first sounds of /f/- or /s/-initial words). No retuning was found. In Experiment 2, the same ambiguous sounds always appeared word-finally during exposure. Here, retuning was found. Lexically guided perceptual learning thus appears to emerge reliably only when lexical knowledge is available as the to-be-tuned segment is initially being processed. Under these conditions, however, lexically guided retuning was position independent: It generalized across syllabic positions. Lexical retuning can thus benefit future recognition of particular sounds wherever they appear in words.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether the performance in simultaneous interpreting would be improved when providing two sources of information, the auditory speech as well as corresponding lip-movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated if an effect of visible speech can be found for other contexts, when visual information could provide cues for emotions, prosody, or syntax.
  • Jessop, A., & Chang, F. (2020). Thematic role information is maintained in the visual object-tracking system. Quarterly Journal of Experimental Psychology, 73(1), 146-163. doi:10.1177/1747021819882842.

    Abstract

    Thematic roles characterise the functions of participants in events, but there is no agreement on how these roles are identified in the real world. In three experiments, we examined how role identification in push events is supported by the visual object-tracking system. Participants saw one to three push events in visual scenes with nine identical randomly moving circles. After a period of random movement, two circles from one of the push events and a foil object were given different colours and the participants had to identify their roles in the push with an active sentence, such as red pushed blue. It was found that the participants could track the agent and patient targets and generate descriptions that identified their roles at above chance levels, even under difficult conditions, such as when tracking multiple push events (Experiments 1–3), fixating their gaze (Experiment 1), performing a concurrent speeded-response task (Experiment 2), and when tracking objects that were temporarily invisible (Experiment 3). The results were consistent with previous findings of an average tracking capacity limit of four objects, individual differences in this capacity, and the use of attentional strategies. The studies demonstrated that thematic role information can be maintained when tracking the identity of visually identical objects, then used to map role fillers (e.g., the agent of a push event) into their appropriate sentence positions. This suggests that thematic role features are stored temporarily in the visual object-tracking system.
  • Johnson, E., McQueen, J. M., & Huettig, F. (2011). Toddlers’ language-mediated visual search: They need not have the words for it. The Quarterly Journal of Experimental Psychology, 64, 1672-1682. doi:10.1080/17470218.2011.594165.

    Abstract

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual–conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
  • Johnson, E. K., & Huettig, F. (2011). Eye movements during language-mediated visual search reveal a strong link between overt visual attention and lexical processing in 36-month-olds. Psychological Research, 75, 35-42. doi:10.1007/s00426-010-0285-4.

    Abstract

    The nature of children’s early lexical processing was investigated by asking what information 36-month-olds access and use when instructed to find a known but absent referent. Children readily retrieved stored knowledge about characteristic color, i.e. when asked to find an object with a typical color (e.g. strawberry), children tended to fixate more upon an object that had the same (e.g. red plane) as opposed to a different (e.g. yellow plane) color. They did so regardless of the fact that they have had plenty of time to recognize the pictures for what they are, i.e. planes not strawberries. These data represent the first demonstration that language-mediated shifts of overt attention in young children can be driven by individual stored visual attributes of known words that mismatch on most other dimensions. The finding suggests that lexical processing and overt attention are strongly linked from an early age.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Johnson, J. S., Sutterer, D. W., Acheson, D. J., Lewis-Peacock, J. A., & Postle, B. R. (2011). Increased alpha-band power during the retention of shapes and shape-location associations in visual short-term memory. Frontiers in Psychology, 2(128), 1-9. doi:10.3389/fpsyg.2011.00128.

    Abstract

    Studies exploring the role of neural oscillations in cognition have revealed sustained increases in alpha-band (∼8–14 Hz) power during the delay period of delayed-recognition short-term memory tasks. These increases have been proposed to reflect the inhibition, for example, of cortical areas representing task-irrelevant information, or of potentially interfering representations from previous trials. Another possibility, however, is that elevated delay-period alpha-band power (DPABP) reflects the selection and maintenance of information, rather than, or in addition to, the inhibition of task-irrelevant information. In the present study, we explored these possibilities using a delayed-recognition paradigm in which the presence and task relevance of shape information was systematically manipulated across trial blocks and electroencephalography was used to measure alpha-band power. In the first trial block, participants remembered locations marked by identical black circles. The second block featured the same instructions, but locations were marked by unique shapes. The third block featured the same stimulus presentation as the second, but with pretrial instructions indicating, on a trial-by-trial basis, whether memory for shape or location was required, the other dimension being irrelevant. In the final block, participants remembered the unique pairing of shape and location for each stimulus. Results revealed minimal DPABP in each of the location-memory conditions, whether locations were marked with identical circles or with unique task-irrelevant shapes. In contrast, alpha-band power increases were observed in both the shape-memory condition, in which location was task irrelevant, and in the critical final condition, in which both shape and location were task relevant. These results provide support for the proposal that alpha-band oscillations reflect the retention of shape information and/or shape–location associations in short-term memory.
  • Johnson, E. K., Westrek, E., Nazzi, T., & Cutler, A. (2011). Infant ability to tell voices apart rests on language experience. Developmental Science, 14(5), 1002-1011. doi:10.1111/j.1467-7687.2011.01052.x.

    Abstract

    A visual fixation study tested whether seven-month-olds can discriminate between different talkers. The infants were first habituated to talkers producing sentences in either a familiar or unfamiliar language, then heard test sentences from previously unheard speakers, either in the language used for habituation, or in another language. When the language at test mismatched that in habituation, infants always noticed the change. When language remained constant and only talker altered, however, infants detected the change only if the language was the native tongue. Adult listeners with a different native tongue than the infants did not reproduce the discriminability patterns shown by the infants, and infants detected neither voice nor language changes in reversed speech; both these results argue against explanation of the native-language voice discrimination in terms of acoustic properties of the stimuli. The ability to identify talkers is, like many other perceptual abilities, strongly influenced by early life experience.
  • Johnson, E., & Matsuo, A. (2003). Max-Planck-Institute for Psycholinguistics: Annual Report 2003. Nijmegen: MPI for Psycholinguistics.
  • Jones, C. R., Pickles, A., Falcaro, M., Marsden, A. J., Happé, F., Scott, S. K., Sauter, D., Tregay, J., Phillips, R. J., Baird, G., Simonoff, E., & Charman, T. (2011). A multimodal approach to emotion recognition ability in autism spectrum disorders. Journal of Child Psychology and Psychiatry, 52(3), 275-285. doi:10.1111/j.1469-7610.2010.02328.x.

    Abstract

    Background: Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal; hampered by small sample sizes, narrow IQ range and over-focus on the visual modality. Methods: We tested 99 adolescents (mean age 15;6 years, mean IQ 85) with an ASD and 57 adolescents without an ASD (mean age 15;6 years, mean IQ 88) on a facial emotion recognition task and two vocal emotion recognition tasks (one verbal; one non-verbal). Recognition of happiness, sadness, fear, anger, surprise and disgust were tested. Using structural equation modelling, we conceptualised emotion recognition ability as a multimodal construct, measured by the three tasks. We examined how the mean levels of recognition of the six emotions differed by group (ASD vs. non-ASD) and IQ (>= 80 vs. < 80). Results: There was no significant difference between groups for the majority of emotions and analysis of error patterns suggested that the ASD group were vulnerable to the same pattern of confusions between emotions as the non-ASD group. However, recognition ability was significantly impaired in the ASD group for surprise. IQ had a strong and significant effect on performance for the recognition of all six emotions, with higher IQ adolescents outperforming lower IQ adolescents. Conclusions: The findings do not suggest a fundamental difficulty with the recognition of basic emotions in adolescents with ASD.
  • Jongman, S. R., Roelofs, A., & Lewis, A. G. (2020). Attention for speaking: Prestimulus motor-cortical alpha power predicts picture naming latencies. Journal of Cognitive Neuroscience, 32(5), 747-761. doi:10.1162/jocn_a_01513.

    Abstract

    There is a range of variability in the speed with which a single speaker will produce the same word from one instance to another. Individual differences studies have shown that the speed of production and the ability to maintain attention are related. This study investigated whether fluctuations in production latencies can be explained by spontaneous fluctuations in speakers' attention just prior to initiating speech planning. A relationship between individuals' incidental attentional state and response performance is well attested in visual perception, with lower prestimulus alpha power associated with faster manual responses. Alpha is thought to have an inhibitory function: Low alpha power suggests less inhibition of a specific brain region, whereas high alpha power suggests more inhibition. Does the same relationship hold for cognitively demanding tasks such as word production? In this study, participants named pictures while EEG was recorded, with alpha power taken to index an individual's momentary attentional state. Participants' level of alpha power just prior to picture presentation and just prior to speech onset predicted subsequent naming latencies. Specifically, higher alpha power in the motor system resulted in faster speech initiation. Our results suggest that one index of a lapse of attention during speaking is reduced inhibition of motor-cortical regions: Decreased motor-cortical alpha power indicates reduced inhibition of this area while early stages of production planning unfold, which leads to increased interference from motor-cortical signals and longer naming latencies. This study shows that the language production system is not impermeable to the influence of attention.
  • Jongman, S. R., Piai, V., & Meyer, A. S. (2020). Planning for language production: The electrophysiological signature of attention to the cue to speak. Language, Cognition and Neuroscience, 35(7), 915-932. doi:10.1080/23273798.2019.1690153.

    Abstract

    In conversation, speech planning can overlap with listening to the interlocutor. It has been postulated that once there is enough information to formulate a response, planning is initiated and the response is maintained in working memory. Concurrently, the auditory input is monitored for the turn end such that responses can be launched promptly. In three EEG experiments, we aimed to identify the neural signature of phonological planning and monitoring by comparing delayed responding to not responding (reading aloud, repetition and lexical decision). These comparisons consistently resulted in a sustained positivity and beta power reduction over posterior regions. We argue that these effects reflect attention to the sequence end. Phonological planning and maintenance were not detected in the neural signature even though it is highly likely these were taking place. This suggests that EEG must be used cautiously to identify response planning when the neural signal is overridden by attention effects.
  • Jordan, F. (2011). A phylogenetic analysis of the evolution of Austronesian sibling terminologies. Human Biology, 83, 297-321. doi:10.3378/027.083.0209.

    Abstract

    Social structure in human societies is underpinned by the variable expression of ideas about relatedness between different types of kin. We express these ideas through language in our kin terminology: to delineate who is kin and who is not, and to attach meanings to the types of kin labels associated with different individuals. Cross-culturally, there is a regular and restricted range of patterned variation in kin terminologies, and to date, our understanding of this diversity has been hampered by inadequate techniques for dealing with the hierarchical relatedness of languages (Galton’s Problem). Here I use maximum-likelihood and Bayesian phylogenetic comparative methods to begin to tease apart the processes underlying the evolution of kin terminologies in the Austronesian language family, focusing on terms for siblings. I infer (1) the probable ancestral states and (2) evolutionary models of change for the semantic distinctions of relative age (older/younger sibling) and relative sex (same sex/opposite-sex). Analyses show that early Austronesian languages contained the relative-age, but not the relative-sex distinction; the latter was reconstructed firmly only for the ancestor of Eastern Malayo-Polynesian languages. Both distinctions were best characterized by evolutionary models where the gains and losses of the semantic distinctions were equally likely. A multi-state model of change examined how the relative-sex distinction could be elaborated and found that some transitions in kin terms were not possible: jumps from absence to heavily elaborated were very unlikely, as was piece-wise dismantling of elaborate distinctions. Cultural ideas about what types of kin distinctions are important can be embedded in the semantics of language; using a phylogenetic evolutionary framework we can understand how those distinctions in meaning change through time.
  • Kaufeld, G., Naumann, W., Meyer, A. S., Bosker, H. R., & Martin, A. E. (2020). Contextual speech rate influences morphosyntactic prediction and integration. Language, Cognition and Neuroscience, 35(7), 933-948. doi:10.1080/23273798.2019.1701691.

    Abstract

    Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implications of these demonstrations for theories of language processing.
  • Kaufeld, G., Ravenschlag, A., Meyer, A. S., Martin, A. E., & Bosker, H. R. (2020). Knowledge-based and signal-based cues are weighted flexibly during spoken language comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 549-562. doi:10.1037/xlm0000744.

    Abstract

    During spoken language comprehension, listeners make use of both knowledge-based and signal-based sources of information, but little is known about how cues from these distinct levels of representational hierarchy are weighted and integrated online. In an eye-tracking experiment using the visual world paradigm, we investigated the flexible weighting and integration of morphosyntactic gender marking (a knowledge-based cue) and contextual speech rate (a signal-based cue). We observed that participants used the morphosyntactic cue immediately to make predictions about upcoming referents, even in the presence of uncertainty about the cue’s reliability. Moreover, we found speech rate normalization effects in participants’ gaze patterns even in the presence of preceding morphosyntactic information. These results demonstrate that cues are weighted and integrated flexibly online, rather than adhering to a strict hierarchy. We further found rate normalization effects in the looking behavior of participants who showed a strong behavioral preference for the morphosyntactic gender cue. This indicates that rate normalization effects are robust and potentially automatic. We discuss these results in light of theories of cue integration and the two-stage model of acoustic context effects.
  • Kaufeld, G., Bosker, H. R., Ten Oever, S., Alday, P. M., Meyer, A. S., & Martin, A. E. (2020). Linguistic structure and meaning organize neural oscillations into a content-specific hierarchy. The Journal of Neuroscience, 40(49), 9467-9475. doi:10.1523/JNEUROSCI.0302-20.2020.

    Abstract

    Neural oscillations track linguistic information during speech comprehension (e.g., Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (e.g., Doelling et al., 2014; Zoefel & VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally-spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backwards acoustically-matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical timescale (1.9-2.8 Hz), suggesting that the delta-band is modulated by lexically-driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin & Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information.
  • Kelly, S., Byrne, K., & Holler, J. (2011). Raising the stakes of communication: Evidence for increased gesture production as predicted by the GSA framework. Information, 2(4), 579-593. doi:10.3390/info2040579.

    Abstract

    Theorists of language have argued that co-speech hand gestures are an intentional part of social communication. The present study provides evidence for these claims by showing that speakers adjust their gesture use according to their perceived relevance to the audience. Participants were asked to read about items that were and were not useful in a wilderness survival scenario, under the pretense that they would then explain (on camera) what they learned to one of two different audiences. For one audience (a group of college students in a dormitory orientation activity), the stakes of successful communication were low; for the other audience (a group of students preparing for a rugged camping trip in the mountains), the stakes were high. In their explanations to the camera, participants in the high stakes condition produced three times as many representational gestures, and spent three times as much time gesturing, as participants in the low stakes condition. This study extends previous research by showing that the anticipated consequences of one’s communication, namely the degree to which information may be useful to an intended recipient, influence speakers’ use of gesture.
  • Kempen, G. (2000). Could grammatical encoding and grammatical decoding be subserved by the same processing module? Behavioral and Brain Sciences, 23, 38-39.
  • Kempen, G., & Harbusch, K. (2003). An artificial opposition between grammaticality and frequency: Comment on Bornkessel, Schlesewsky & Friederici (2002). Cognition, 90(2), 205-210 [Rectification on p. 215]. doi:10.1016/S0010-0277(03)00145-8.

    Abstract

    In a recent Cognition paper (Cognition 85 (2002) B21), Bornkessel, Schlesewsky, and Friederici report ERP data that they claim “show that online processing difficulties induced by word order variations in German cannot be attributed to the relative infrequency of the constructions in question, but rather appear to reflect the application of grammatical principles during parsing” (p. B21). In this commentary we demonstrate that the posited contrast between grammatical principles and construction (in)frequency as sources of parsing problems is artificial because it is based on factually incorrect assumptions about the grammar of German and on inaccurate corpus frequency data concerning the German constructions involved.
  • Kendrick, K. H., Brown, P., Dingemanse, M., Floyd, S., Gipper, S., Hayano, K., Hoey, E., Hoymann, G., Manrique, E., Rossi, G., & Levinson, S. C. (2020). Sequence organization: A universal infrastructure for social action. Journal of Pragmatics, 168, 119-138. doi:10.1016/j.pragma.2020.06.009.

    Abstract

    This article makes the case for the universality of the sequence organization observable in informal human conversational interaction. Using the descriptive schema developed by Schegloff (2007), we examine the major patterns of action-sequencing in a dozen languages, nearly all of them unrelated. What we find is that these patterns are for the most part instantiated in very similar ways, right down to the types of different action sequences. There are also some notably different cultural exploitations of the patterns, but the patterns themselves look strongly universal. Recent work on gestural communication in the great apes suggests that sequence organization may have been a crucial route into the development of language. Taken together with the fundamental role of this organization in language acquisition, sequential behavior of this kind seems to have both phylogenetic and ontogenetic priority, which probably puts substantial functional pressure on language form.

    Additional information

    Supplementary data
  • Kendrick, K. H., & Majid, A. (Eds.). (2011). Field manual volume 14. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Khan, M., & Cremers, F. P. (2020). ABCA4-Associated Stargardt Disease. Klinische Monatsblätter für Augenheilkunde, 237, 267-274. doi:10.1055/a-1057-9939.

    Abstract

    Autosomal recessive Stargardt disease (STGD1) is associated with variants in the ABCA4 gene. The phenotypes range from early-onset STGD1, that clinically resembles severe cone-rod dystrophy, to intermediate STGD1 and late-onset STGD1. These different phenotypes can be correlated with different combinations of ABCA4 variants which can be classified according to their degree of severity. A significant fraction of STGD1 cases, particularly late-onset STGD1 cases, were shown to carry only a single ABCA4 variant. A frequent coding variant (p.Asn1868Ile) was recently identified which – in combination with a severe ABCA4 variant – is generally associated with late-onset STGD1. In addition, an increasing number of rare deep-intronic variants have been found and some of these are also associated with late-onset STGD1. The effect of these and other variants on ABCA4 RNA was tested using in vitro assays in human kidney cells using specially designed midigenes. With stem cells and photoreceptor progenitor cells derived from patient skin or blood cells, retina-specific splice defects can be assessed. With expert clinical examination to distinguish STGD1 cases from other maculopathies, as well as in-depth genomics and transcriptomics data, it is now possible to identify both mutant ABCA4 alleles in > 95% of cases.
  • Khan, M., Cornelis, S. S., Sangermano, R., Post, I. J. M., Janssen Groesbeek, A., Amsu, J., Gilissen, C., Garanto, A., Collin, R. W. J., & Cremers, F. P. M. (2020). In or out? New insights on exon recognition through splice-site interdependency. International Journal of Molecular Sciences, 21: 2300. doi:10.3390/ijms21072300.

    Abstract

    Noncanonical splice-site mutations are an important cause of inherited diseases. Based on in vitro and stem-cell-based studies, some splice-site variants show a stronger splice defect than expected based on their predicted effects, suggesting that other sequence motifs influence the outcome. We investigated whether splice defects due to human-inherited-disease-associated variants in noncanonical splice-site sequences in ABCA4, DMD, and TMC1 could be rescued by strengthening the splice site on the other side of the exon. Noncanonical 5′- and 3′-splice-site variants were selected. Rescue variants were introduced based on an increase in predicted splice-site strength, and the effects of these variants were analyzed using in vitro splice assays in HEK293T cells. Exon skipping due to five variants in noncanonical splice sites of exons in ABCA4, DMD, and TMC1 could be partially or completely rescued by increasing the predicted strengths of the other splice site of the same exon. We named this mechanism “splicing interdependency”, and it is likely based on exon recognition by the splicing machinery. Awareness of this interdependency is of importance in the classification of noncanonical splice-site variants associated with disease and may open new opportunities for treatments.
  • Kidd, E., & Donnelly, S. (2020). Individual differences in first language acquisition. Annual Review of Linguistics, 6, 319-340. doi:10.1146/annurev-linguistics-011619-030326.

    Abstract

    Humans vary in almost every dimension imaginable, and language is no exception. In this article, we review past research that has focused on individual differences (IDs) in first language acquisition. We first consider how different theoretical traditions in language acquisition treat IDs, and we argue that a focus on IDs is important given its potential to reveal the developmental dynamics and architectural constraints of the linguistic system. We then review IDs research that has examined variation in children’s linguistic input, early speech perception, and vocabulary and grammatical development. In each case, we observe systematic and meaningful variation, such that variation in one domain (e.g., early auditory and speech processing) has meaningful consequences for development in higher-order domains (e.g., vocabulary). The research suggests a high degree of integration across the linguistic system, in which development across multiple linguistic domains is tightly coupled.
  • Kidd, E., Stewart, A. J., & Serratrice, L. (2011). Children do not overcome lexical biases where adults do: The role of the referential scene in garden-path recovery. Journal of Child Language, 38(1), 222-234. doi:10.1017/s0305000909990316.

    Abstract

    In this paper we report on a visual world eye-tracking experiment that investigated the differing abilities of adults and children to use referential scene information during reanalysis to overcome lexical biases during sentence processing. The results showed that adults incorporated aspects of the referential scene into their parse as soon as it became apparent that a test sentence was syntactically ambiguous, suggesting they considered the two alternative analyses in parallel. In contrast, the children appeared not to reanalyze their initial analysis, even over shorter distances than have been investigated in prior research. We argue that this reflects the children's over-reliance on bottom-up, lexical cues to interpretation. The implications for the development of parsing routines are discussed.
  • Kidd, E., Kemp, N., & Quinn, S. (2011). Did you have a choccie bickie this arvo? A quantitative look at Australian hypocoristics. Language Sciences, 33(3), 359-368. doi:10.1016/j.langsci.2010.11.006.

    Abstract

    This paper considers the use and representation of Australian hypocoristics (e.g., choccie → chocolate, arvo → afternoon). One-hundred-and-fifteen adult speakers of Australian English aged 17–84 years generated as many tokens of hypocoristics as they could in 10 min. The resulting corpus was analysed along a number of dimensions in an attempt to identify (i) general age- and gender-related trends in hypocoristic knowledge and use, and (ii) linguistic properties of each hypocoristic class. Following Bybee’s (1985, 1995) lexical network approach, we conclude that Australian hypocoristics are the product of the same linguistic processes that capture other inflectional morphological processes.
  • Kidd, E., Arciuli, J., Christiansen, M. H., Isbilen, E. S., Revius, K., & Smithson, M. (2020). Measuring children’s auditory statistical learning via serial recall. Journal of Experimental Child Psychology, 200: 104964. doi:10.1016/j.jecp.2020.104964.

    Abstract

    Statistical learning (SL) has been a prominent focus of research in developmental and adult populations, guided by the assumption that it is a fundamental component of learning underlying higher-order cognition. In developmental populations, however, there have been recent concerns regarding the degree to which many current tasks reliably measure SL, particularly in younger children. In the current article, we present the results of two studies that measured auditory statistical learning (ASL) of linguistic stimuli in children aged 5–8 years. Children listened to 6 min of continuous syllables comprising four trisyllabic pseudowords. Following the familiarization phase, children completed (a) a two-alternative forced-choice task and (b) a serial recall task in which they repeated either target sequences embedded during familiarization or foils, manipulated for sequence length. Results showed that, although both measures consistently revealed learning at the group level, the recall task better captured learning across the full range of abilities and was more reliable at the individual level. We conclude that, as has also been demonstrated in adults, the method holds promise for future studies of individual differences in ASL of linguistic stimuli.
  • Kidd, E., & Kirjavainen, M. (2011). Investigating the contribution of procedural and declarative memory to the acquisition of past tense morphology: Evidence from Finnish. Language and Cognitive Processes, 26(4-6), 794-829. doi:10.1080/01690965.2010.493735.

    Abstract

    The present paper reports on a study that investigated the role of procedural and declarative memory in the acquisition of Finnish past tense morphology. Two competing models were tested. Ullman's (2004) declarative/procedural model predicts that procedural memory supports the acquisition of regular morphology, whereas declarative memory supports the acquisition of irregular morphology. In contrast, single-route approaches predict that declarative memory should support lexical learning, which in turn should predict morphological acquisition. One-hundred and twenty-four (N=124) monolingual Finnish-speaking children aged 4;0–6;7 completed tests of procedural and declarative memory, tests of vocabulary knowledge and nonverbal ability, and a test of past tense knowledge. The results best supported the single-route approach, suggesting that this account best extends to languages that possess greater morphological complexity than English.
  • Kidd, E. (2003). Relative clause comprehension revisited: Commentary on Eisenberg (2002). Journal of Child Language, 30(3), 671-679. doi:10.1017/S0305000903005683.

    Abstract

    Eisenberg (2002) presents data from an experiment investigating three- and four-year-old children's comprehension of restrictive relative clauses (RC). From the results she argues, contrary to Hamburger & Crain (1982), that children do not have discourse knowledge of the felicity conditions of RCs before acquiring the syntax of relativization. This note evaluates this conclusion on the basis of the methodology used, and proposes that an account of syntactic development needs to be sensitive to the real-time processing requirements acquisition places on the learner.
  • Kidd, E. (Ed.). (2011). The acquisition of relative clauses: Processing, typology and function. Amsterdam: Benjamins.
  • Kim, N., Brehm, L., Sturt, P., & Yoshida, M. (2020). How long can you hold the filler: Maintenance and retrieval. Language, Cognition and Neuroscience, 35(1), 17-42. doi:10.1080/23273798.2019.1626456.

    Abstract

    This study attempts to reveal the mechanisms behind the online formation of Wh-Filler-Gap Dependencies (WhFGD). Specifically, we aim to uncover the way in which maintenance and retrieval work in WhFGD processing, by paying special attention to the information that is retrieved when the gap is recognized. We use the agreement attraction phenomenon (Wagers, M. W., Lau, E. F., & Phillips, C. (2009). Agreement attraction in comprehension: Representations and processes. Journal of Memory and Language, 61(2), 206-237) as a probe. The first and second experiments examined the type of information that is maintained and how maintenance is motivated, investigating the retrieved information at the gap for reactivated fillers and definite NPs. The third experiment examined the role of the retrieval, comparing reactivated and active fillers. We contend that the information being accessed reflects the extent to which the filler is maintained, where the reader is able to access fine-grained information including category information as well as a representation of both the head and the modifier at the verb.

    Additional information

    Supplemental material
  • Kita, S. (Ed.). (2003). Pointing: Where language, culture, and cognition meet. Mahwah, NJ: Erlbaum.
  • Kita, S., & Ozyurek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48(1), 16-32. doi:10.1016/S0749-596X(02)00505-3.

    Abstract

    Gestures that spontaneously accompany speech convey information coordinated with the concurrent speech. There has been considerable theoretical disagreement about the process by which this informational coordination is achieved. Some theories predict that the information encoded in gesture is not influenced by how information is verbally expressed. However, others predict that gestures encode only what is encoded in speech. This paper investigates this issue by comparing informational coordination between speech and gesture across different languages. Narratives in Turkish, Japanese, and English were elicited using an animated cartoon as the stimulus. It was found that gestures used to express the same motion events were influenced simultaneously by (1) how features of motion events were expressed in each language, and (2) spatial information in the stimulus that was never verbalized. From this, it is concluded that gestures are generated from spatio-motoric processes that interact on-line with the speech production process. Through the interaction, spatio-motoric information to be expressed is packaged into chunks that are verbalizable within a processing unit for speech formulation. In addition, we propose a model of speech and gesture production as one of a class of frameworks that are compatible with the data.
  • Klein, W. (2003). Wozu braucht man eigentlich Flexionsmorphologie? Zeitschrift für Literaturwissenschaft und Linguistik, 131, 23-54.
  • Klein, W., & Meibauer, J. (2011). Einleitung. LiLi, Zeitschrift für Literaturwissenschaft und Linguistik, 41(162), 5-8.

    Abstract

    When the adults named some object and at the same time turned toward it, I perceived this, and I grasped that the object was designated by the sounds they uttered, since they meant to point it out. This, however, I gathered from their gestures, the natural language of all peoples, the language that, through the play of the face and the eyes, through the movements of the limbs and the tone of the voice, indicates the feelings of the soul when it desires, or holds on to, or rejects, or flees from something. Thus I gradually learned to understand which things were designated by the words that I heard uttered again and again, in their proper places in various sentences. And when my mouth had grown accustomed to these signs, I used them to express my wishes. (Augustinus, Confessiones I, 8)

    This is the quotation of a quotation: at the beginning of the Philosophical Investigations, Ludwig Wittgenstein cites this passage from Augustine's Confessions, in which Augustine describes how, as he remembers it, he learned his mother tongue (Wittgenstein gives the Latin text and then his own translation; only the latter is quoted here). The passage forms the starting point for Wittgenstein's famous reflections on how human language works and for his idea of the language game. Now, we do not know how accurately Augustine really remembers, or whether he merely constructed all of this, like so much that has since been said and written about language acquisition, in the belief that this is how it must have been. But unlike so much that has since been said and written about language acquisition, it is wonderfully formulated, and it contains two points which scientific research to this day has, if not disputed, then often failed to see, and, where it has seen them, not really taken seriously: A. We learn language in everyday communication with our social environment. B. To learn a language, it is not enough to hear that language; rather, we need a wealth of accompanying information, such as, in this case, the adults' gestures and facial expressions. One would like to take both points for granted. Herodotus tells the famous story of the pharaoh Psammetichus, who wanted to know what the first and true language of humankind was and ordered two newborns to be raised without anyone speaking to them; the first word they uttered sounded, so Herodotus tells us, like the Phrygian word for bread, and so it was assumed that the original language of humankind was Phrygian. In this conception of language acquisition, input from the social environment plays a role only insofar as the true language, present from birth, can be displaced by another: children who grow up in an English-speaking environment do not speak the original language. This theory is considered obsolete today. In its assessment of the relative weight of the linguistic knowledge that is present from the start and of what must be drawn from the social environment, however, it is not so far removed from some more recent theories of language acquisition: in Chomsky's idea of universal grammar, the theoretical foundation of a substantial part of modern language acquisition research, "language" is mainly something innate, to that extent the same for all human beings and independent of the particular input.
    What the child, or, in second language acquisition, the adult learner encounters of language in its environment is not used to derive particular regularities and to appropriate them; the input functions rather as a kind of external trigger for knowledge that is already latently present. This certainly does not hold for the learning of vocabulary: it cannot be innate that the moon is called luna. For other areas of language, however, the extent of what is innate is very much disputed. On this way of thinking, point A above does not hold. Most modern researchers of language acquisition assign considerably greater weight to the input: we copy the characteristic properties of a particular linguistic system by analyzing the input in order to derive the regularities underlying it. The input comes to us in the form of sequences of sound (or gestures, and later written signs) that are used for communicative purposes by others who have mastered the system. Learners must break these sound sequences down into smaller units, assign meanings to them, and probe them for the regularities according to which they can be combined into more complex expressions. This, and much else, is what the language faculty innate to humans accomplishes; no other species can do it (you can play as much Chinese to a horse as you like, it will not learn it). But neither could we, if all we had were the sound. If, in a variation of Psammetichus' experiment, one were to lock someone in a room, play Chinese at them day in and day out, and otherwise look after them well, they would not learn it, whether as a child or as an adult. Perhaps they would discover some structural properties of the stream of sound; but even after years they would not know Chinese. One needs the stream of sound as the perceptible expression of the underlying language, and one needs all the information that can be drawn from the particular speech situation or from one's other, already existing knowledge. Augustine radically simplified both; but in principle he is right, and one should therefore expect language acquisition research to take this into account. It rarely does. Insofar as it steps out of the shell of theory at all and looks at the actual course of language acquisition, it concentrates largely on what children themselves say (extensive corpora serve this purpose), or it investigates in experimental settings how children understand, or fail to understand, particular words or structures. When done well, this too is highly informative. But the actual processing of the input in its double sense, sound waves and parallel information, is rarely placed at the center of interest. This leads to peculiar distortions. Language acquisition research, for example, mainly considers declarative main clauses. A not inconsiderable part of what children hear, however, consists of imperatives (Tu das! 'Do that!', Tu das nicht! 'Don't do that!'). Such imperatives normally have no subject. An intelligent child must therefore conclude that German, in a not inconsiderable part of its grammatical structures, is a "pro-drop language", that is, a language in which the subject can be omitted.
    No linguist would arrive at this idea; yet it corresponds to the actual facts, and this is reflected in the input that the child has to process. This issue is devoted to a language acquisition situation in which, unlike, say, a conversation at the breakfast table, the input in its double form can be readily surveyed, without the situation being unnatural or remote from the normal learning environment, as a controlled experiment would be: looking at children's books, having them read aloud, and reading them oneself. Such a situation can be thought of as a natural extension of what Augustine describes: the children hear what the adults say, and their attention is directed to particular things while they listen and look; only here it is not a matter of single words but of complex expressions and of complex, yet still surveyable, accompanying information. Now, children's books have certainly played a role in language acquisition research. There, however, whether as a pure sequence of pictures, with text, or even as text alone, they mostly serve only as a kind of template for the children's own language production: the children are to derive a story from the template and tell it in their own words. The best-known, but by no means the only, example are the "frog stories" initiated by Michael Bamberg, Ruth Berman and Dan Slobin in the 1980s: retellings of a simple picture story that are now available in numerous languages and have yielded many insights into the most varied aspects of developing language mastery, from inflectional morphology to text structure. That is good and sensible; but one really ought to go a step further, namely to look, as if through a microscope, at how children derive their regularities from the interaction. This would substantially enrich our ideas about the course of language acquisition and the laws by which it proceeds, and perhaps place them on an entirely new footing. The contributions to this issue provide a number of examples; only one small but particularly striking one shall be mentioned here. There are numerous analyses based on picture stories that investigate how children refer to a particular person or thing in ongoing discourse: whether, for instance, they can correctly use definite and indefinite nominal expressions (ein Junge – der Junge 'a boy – the boy'), lexical or pronominal noun phrases (der Junge – er 'the boy – he'), or even empty elements (der Junge wacht auf und 0 schaut nach seinem Hund 'the boy wakes up and 0 looks for his dog'). The picture that research offers today in this essential part of language mastery is anything but uniform. Views on when the definite-indefinite distinction is mastered, for example, span most of childhood, depending on which studies one consults. The paper by Katrin Dammann-Thedens makes clear that at a certain age children are often not at all aware that a particular person or thing in successive pictures is the same one, even if it looks similar, and on closer inspection this is by no means a trivial question. These observations cast an entirely new light on the idea of referential continuity in discourse and on its expression through nominal expressions such as those just mentioned.
    Perhaps we have quite mistaken ideas about how children understand the accompanying information, here supplied by the pictures of a story, and thus process it for language acquisition. Observations of this kind are, to begin with, isolated points: not answers, but pointers to things that must be taken into account. But their analysis, and more generally a closer look at what actually goes on when children look at children's books, may lead us to a considerably deeper understanding of what really happens in the acquisition of a language.
  • Klein, W. (2000). An analysis of the German perfekt. Language, 76, 358-382.

    Abstract

    The German Perfekt has two quite different temporal readings, as illustrated by the two possible continuations of the sentence Peter hat gearbeitet in (i) and (ii), respectively: (i) Peter hat gearbeitet und ist müde 'Peter has worked and is tired'; (ii) Peter hat gearbeitet und wollte nicht gestört werden 'Peter has worked and wanted not to be disturbed'. The first reading essentially corresponds to the English present perfect; the second can take a temporal adverbial with past time reference ('yesterday at five', 'when the phone rang', and so on), and an English translation would require a past tense ('Peter worked/was working'). This article shows that the Perfekt has a uniform temporal meaning that results systematically from the interaction of its three components (finiteness marking, auxiliary and past participle), and that the two readings are the consequence of a structural ambiguity. This analysis also predicts the properties of other participle constructions, in particular the passive in German.
  • Klein, W., Li, P., & Hendriks, H. (2000). Aspect and assertion in Mandarin Chinese. Natural Language & Linguistic Theory, 18, 723-770. doi:10.1023/A:1006411825993.

    Abstract

    Chinese has a number of particles such as le, guo, zai and zhe that add a particular aspectual value to the verb to which they are attached. There have been many characterisations of this value in the literature. In this paper, we review several existing influential accounts of these particles, including those in Li and Thompson (1981), Smith (1991), and Mangione and Li (1993). We argue that all these characterisations are intuitively plausible, but none of them is precise. We propose that these particles serve to mark which part of the sentence's descriptive content is asserted, and that their aspectual value is a consequence of this function. We provide a simple and precise definition of the meanings of le, guo, zai and zhe in terms of the relationship between topic time and time of situation, and show the consequences of their interaction with different verb expressions within this new framework of interpretation.
  • Klein, W. (2000). Fatale Traditionen. Zeitschrift für Literaturwissenschaft und Linguistik; Metzler, Stuttgart, (120), 11-40.
  • Klein, W., & Berliner Arbeitsgruppe (2000). Sprache des Rechts: Vermitteln, Verstehen, Verwechseln. Zeitschrift für Literaturwissenschaft und Linguistik; Metzler, Stuttgart, (118), 7-33.
  • Klein, W. (2000). Was uns die Sprache des Rechts über die Sprache sagt. Zeitschrift für Literaturwissenschaft und Linguistik; Metzler, Stuttgart, (118), 115-149.
  • Klenova, A. V., Goncharova, M. V., Kashentseva, T. A., & Naidenko, S. V. (2020). Voice breaking and its relation to body mass and testosterone level in the Siberian Crane (Leucogeranus leucogeranus). Journal of Ornithology, 161, 859-871. doi:10.1007/s10336-020-01773-w.

    Abstract

    Vocal development of cranes (Gruidae) has attracted scientific interest due to its special stage, voice breaking. During voice breaking, chicks of different crane species produce calls with two fundamental frequencies that correspond to those in adult low-frequency and juvenile high-frequency vocalizations. However, triggers that affect voice breaking in cranes are mainly unknown. Here we studied voice breaking in the Siberian Crane (Leucogeranus leucogeranus) and tested its relation to body mass and testosterone level. We analyzed 5846 calls, 39 body mass measurements and 60 blood samples from 11 Siberian Crane chicks at 8 ages from 2.5 to 18 months of life, together with 90 body mass measurements and 61 blood samples from 24 Siberian Crane adults. The individual duration of voice breaking and the dates of its onset, culmination and completion depended neither on body mass nor on testosterone level at various ages. However, we found a correlation between testosterone level and the mean deltas of the percentages of high- and low-frequency components in Siberian Crane calls between the closest recording sessions. We also observed some coincidence in time between the mean dates of voice breaking onset and the termination of body mass gain (at 7.5 months of age), and between the mean dates of voice breaking completion and the start of a new breeding season. Similar relations have been shown previously for some other crane species. We also showed for the first time that the mean dates of voice breaking culmination correlated with a significant increase of testosterone level (at 10.5 months of age). We therefore suggest that voice breaking in cranes may be triggered by the end of chicks' body growth, is stimulated by the increase of testosterone level and ends soon after adult cranes stop taking care of their chicks.
  • Klenova, A. V., Goncharova, M. V., & Kashentseva, T. A. (2020). Long-term stability in the vocal duets of the endangered Siberian Crane Leucogeranus leucogeranus. Polar Biology, 43, 813-823. doi:10.1007/s00300-020-02689-0.

    Abstract

    Vocal-based monitoring is increasingly being used as a non-invasive method for identifying individuals within avian populations and is promising for the Siberian Crane, Leucogeranus leucogeranus. This is a poorly studied, long-lived, secretive and critically endangered bird species that breeds in the Arctic tundra of western and eastern regions of Siberia. We assessed between- and within-year stability of individual-specific vocal features in duets of Siberian Crane and tested the effect of pair-mate change on their stability. Previous findings showed that duets are specific to different pairs of birds; however, it is still unknown how long pair-specific traits of duets remain and if they change in the course of a year or when birds re-mate. We recorded duets of 15 reproductively active pairs in the Oka Crane Breeding Centre in 2003–2006 and 2013–2017. We found that pair-specific vocal signatures remained stable both within the year and across ~10 years. After a change of mate, most of the variables we measured in the call did not change in any of the birds. Our data suggest that the stability of the individually specific vocal features may enable Siberian Cranes to be reliably identified by their duets over the birds’ lifetime. We believe that our work can increase confidence in the use of acoustic recognition techniques for endangered crane monitoring programs. Our results also suggest that Siberian Cranes may use their duets to form long-term social bonds between neighbours.
  • Knudsen, B., Creemers, A., & Meyer, A. S. (2020). Forgotten little words: How backchannels and particles may facilitate speech planning in conversation? Frontiers in Psychology, 11: 593671. doi:10.3389/fpsyg.2020.593671.

    Abstract

    In everyday conversation, turns often follow each other immediately or overlap in time. It has been proposed that speakers achieve this tight temporal coordination between their turns by engaging in linguistic dual-tasking, i.e., by beginning to plan their utterance during the preceding turn. This raises the question of how speakers manage to coordinate speech planning and listening with each other. Experimental work addressing this issue has mostly concerned the capacity demands and interference arising when speakers retrieve some content words while listening to others. However, many contributions to conversations are not content words, but backchannels, such as “hm”. Backchannels do not provide much conceptual content and are therefore easy to plan and respond to. To estimate how much they might facilitate speech planning in conversation, we determined their frequency in a Dutch and a German corpus of conversational speech. We found that 19% of the contributions in the Dutch corpus, and 16% of the contributions in the German corpus, were backchannels. In addition, many turns began with fillers or particles, most often translation equivalents of “yes” or “no,” which are likewise easy to plan. We propose that, to generate comprehensive models of using language in conversation, psycholinguists should study not only the generation and processing of content words, as is commonly done, but also consider backchannels, fillers, and particles.
  • Koenigs, M., Acheson, D. J., Barbey, A. K., Soloman, J., Postle, B. R., & Grafman, J. (2011). Areas of left perisylvian cortex mediate auditory-verbal short-term memory. Neuropsychologia, 49(13), 3612-3619. doi:10.1016/j.neuropsychologia.2011.09.013.

    Abstract

    A contentious issue in memory research is whether verbal short-term memory (STM) depends on a neural system specifically dedicated to the temporary maintenance of information, or instead relies on the same brain areas subserving the comprehension and production of language. In this study, we examined a large sample of adults with acquired brain lesions to identify the critical neural substrates underlying verbal STM and the relationship between verbal STM and language processing abilities. We found that patients with damage to selective regions of left perisylvian cortex – specifically the inferior frontal and posterior temporal sectors – were impaired on auditory–verbal STM performance (digit span), as well as on tests requiring the production and/or comprehension of language. These results support the conclusion that verbal STM and language processing are mediated by the same areas of left perisylvian cortex.

  • Kokal, I., Engel, A., Kirschner, S., & Keysers, C. (2011). Synchronized drumming enhances activity in the caudate and facilitates prosocial commitment - If the rhythm comes easily. PLoS One, 6(11), e27272. doi:10.1371/journal.pone.0027272.

    Abstract

    Why does chanting, drumming or dancing together make people feel united? Here we investigate the neural mechanisms underlying interpersonal synchrony and its subsequent effects on prosocial behavior among synchronized individuals. We hypothesized that areas of the brain associated with the processing of reward would be active when individuals experience synchrony during drumming, and that these reward signals would increase prosocial behavior toward this synchronous drum partner. 18 female non-musicians were scanned with functional magnetic resonance imaging while they drummed a rhythm, in alternating blocks, with two different experimenters: one drumming in-synchrony and the other out-of-synchrony relative to the participant. In the last scanning part, which served as the experimental manipulation for the following prosocial behavioral test, one of the experimenters drummed with one half of the participants in-synchrony and with the other out-of-synchrony. After scanning, this experimenter "accidentally" dropped eight pencils, and the number of pencils collected by the participants was used as a measure of prosocial commitment. Results revealed that participants who mastered the novel rhythm easily before scanning showed increased activity in the caudate during synchronous drumming. The same area also responded to monetary reward in a localizer task with the same participants. The activity in the caudate during experiencing synchronous drumming also predicted the number of pencils the participants later collected to help the synchronous experimenter of the manipulation run. In addition, participants collected more pencils to help the experimenter when she had drummed in-synchrony than out-of-synchrony during the manipulation run. By showing an overlap in activated areas during synchronized drumming and monetary reward, our findings suggest that interpersonal synchrony is related to the brain's reward system.
  • Kolinsky, R., Gabriel, R., Demoulin, C., Gregory, M. M., Saraiva de Carvalho, K., & Morais, J. (2020). The influence of age, schooling, literacy, and socioeconomic status on serial-order memory. Journal of Cultural Cognitive Science, 4, 343-365. doi:10.1007/s41809-020-00056-3.

    Abstract

    We aimed at investigating whether formal schooling and literacy favor progress in verbal serial-order short-term memory (STM). In Experiment 1, we presented children varying on age, school level, and socioeconomic background with a serial-order reconstruction task and observed differences related to both age and school level. Two subsequent experiments aimed at separating schooling- and literacy-related effects from age-related ones. In Experiment 2 we compared, on the one hand, performance of children of similar age but different school levels and, on the other hand, performance of children of same school level but different ages. We observed a schooling effect but no age effect on serial-order reconstruction: the youngest first-graders outperformed age-matched kindergartners but performed similarly as older first-graders of the same literacy level. Furthermore, children’s literacy abilities were strongly correlated to their serial-order reconstruction performance, even after controlling for the effects of non-verbal reasoning and vocabulary. In Experiment 3 we examined low socio-economic background adults presenting varying (correlated) levels of schooling and literacy: some had attended school in childhood for only a few years and were either illiterate or very poor readers, whereas others had attended school for at least 12 years. In addition to serial-order memory, item STM was assessed by a delayed single nonword repetition task. The groups differed on both STM tasks, which both correlated with their literacy abilities. Thus, literacy and schooling do not impact only order memory, but more generally verbal STM. Moreover, comparison between the adults’ and children’s data strongly suggests that it is schooling and/or literacy rather than age per se that matters for serial-order memory.
  • Kolipakam, V., & Shanker, K. (2011). Comparing human-wildlife conflict across different landscapes: A framework for examining social, political and economic issues and a preliminary comparison between sites. Trondheim/Bangalore: Norwegian Institute of Nature Research (NINA) & Centre for Ecological Sciences (CES), Indian Institute of Science.
  • Kong, X., Tzourio-Mazoyer, N., Joliot, M., Fedorenko, E., Liu, J., Fisher, S. E., & Francks, C. (2020). Gene expression correlates of the cortical network underlying sentence processing. Neurobiology of Language, 1(1), 77-103. doi:10.1162/nol_a_00004.

    Abstract

    A pivotal question in modern neuroscience is which genes regulate brain circuits that underlie cognitive functions. However, the field is still in its infancy. Here we report an integrated investigation of the high-level language network (i.e., sentence processing network) in the human cerebral cortex, combining regional gene expression profiles, task fMRI, large-scale neuroimaging meta-analysis, and resting-state functional network approaches. We revealed reliable gene expression-functional network correlations using three different network definition strategies, and identified a consensus set of genes related to connectivity within the sentence-processing network. The genes involved showed enrichment for neural development and actin-related functions, as well as association signals with autism, which can involve disrupted language functioning. Our findings help elucidate the molecular basis of the brain’s infrastructure for language. The integrative approach described here will be useful to study other complex cognitive traits.
  • Kong, X., Boedhoe, P. S. W., Abe, Y., Alonso, P., Ameis, S. H., Arnold, P. D., Assogna, F., Baker, J. T., Batistuzzo, M. C., Benedetti, F., Beucke, J. C., Bollettini, I., Bose, A., Brem, S., Brennan, B. P., Buitelaar, J., Calvo, R., Cheng, Y., Cho, K. I. K., Dallaspezia, S., Denys, D., Ely, B. A., Feusner, J., Fitzgerald, K. D., Fouche, J.-P., Fridgeirsson, E. A., Glahn, D. C., Gruner, P., Gürsel, D. A., Hauser, T. U., Hirano, Y., Hoexter, M. Q., Hu, H., Huyser, C., James, A., Jaspers-Fayer, F., Kathmann, N., Kaufmann, C., Koch, K., Kuno, M., Kvale, G., Kwon, J. S., Lazaro, L., Liu, Y., Lochner, C., Marques, P., Marsh, R., Martínez-Zalacaín, I., Mataix-Cols, D., Medland, S. E., Menchón, J. M., Minuzzi, L., Moreira, P. S., Morer, A., Morgado, P., Nakagawa, A., Nakamae, T., Nakao, T., Narayanaswamy, J. C., Nurmi, E. L., O'Neill, J., Pariente, J. C., Perriello, C., Piacentini, J., Piras, F., Piras, F., Pittenger, C., Reddy, Y. J., Rus-Oswald, O. G., Sakai, Y., Sato, J. R., Schmaal, L., Simpson, H. B., Soreni, N., Soriano-Mas, C., Spalletta, G., Stern, E. R., Stevens, M. C., Stewart, S. E., Szeszko, P. R., Tolin, D. F., Tsuchiyagaito, A., Van Rooij, D., Van Wingen, G. A., Venkatasubramanian, G., Wang, Z., Yun, J.-Y., ENIGMA-OCD Working Group, Thompson, P. M., Stein, D. J., Van den Heuvel, O. A., & Francks, C. (2020). Mapping cortical and subcortical asymmetry in obsessive-compulsive disorder: Findings from the ENIGMA Consortium. Biological Psychiatry, 87(12), 1022-1034. doi:10.1016/j.biopsych.2019.04.022.

    Abstract

    Objective

    Lateralized dysfunction has been suggested in Obsessive-Compulsive Disorder (OCD). However, it is currently unclear whether OCD is characterized by abnormal patterns of structural brain asymmetry. Here we carried out by far the largest study of brain structural asymmetry in OCD.
    Method

    We studied a collection of 16 pediatric datasets (501 OCD patients and 439 healthy controls), as well as 30 adult datasets (1777 patients and 1654 controls) from the OCD Working Group within the ENIGMA (Enhancing Neuro-Imaging Genetics through Meta-Analysis) consortium. Asymmetries of the volumes of subcortical structures, and of regional cortical thickness and surface area measures, were assessed based on T1-weighted MRI scans, using harmonized image analysis and quality control protocols. We investigated possible alterations of brain asymmetry in OCD patients. We also explored potential associations of asymmetry with specific aspects of the disorder and medication status.
    Results

    In the pediatric datasets, the largest case-control differences were observed for volume asymmetry of the thalamus (more leftward; Cohen’s d = 0.19) and the pallidum (less leftward; d = -0.21). Additional analyses suggested putative links between these asymmetry patterns and medication status, OCD severity, and/or anxiety and depression comorbidities. No significant case-control differences were found in the adult datasets.
    Conclusions

    The results suggest subtle changes of the average asymmetry of subcortical structures in pediatric OCD, which are not detectable in adults with the disorder. These findings may reflect altered neurodevelopmental processes in OCD.
  • König, C. J., Langer, M., Fell, C. B., Pathak, R. D., Bajwa, N. u. H., Derous, E., Geißler, S. M., Hirose, S., Hülsheger, U., Javakhishvili, N., Junges, N., Knudsen, B., Lee, M. S. W., Mariani, M. G., Nag, G. C., Petrescu, C., Robie, C., Rohorua, H., Sammel, L. D., Schichtel, D., Titov, S., Todadze, K., von Lautz, A. H., & Ziem, M. (2020). Economic predictors of differences in interview faking between countries: Economic inequality matters, not the state of economy. Applied Psychology. doi:10.1111/apps.12278.

    Abstract

    Many companies recruit employees from different parts of the globe, and faking behavior by potential employees is a ubiquitous phenomenon. It seems that applicants from some countries are more prone to faking than others, but the reasons for these differences are largely unexplored. This study relates country-level economic variables to faking behavior in hiring processes. In a cross-national study across 20 countries, participants (N = 3839) reported their faking behavior in their last job interview. The study used the random response technique (RRT) to ensure participants' anonymity and to foster honest answers regarding faking behavior. Results indicate that general economic indicators (gross domestic product per capita [GDP] and unemployment rate) show negligible correlations with faking across the countries, whereas economic inequality is substantially and positively related to the extent of applicant faking. These findings imply that people are sensitive to inequality within countries and that inequality relates to faking, because inequality might actuate other psychological processes (e.g., envy) which in turn increase the probability of unethical behavior in many forms.
  • Kösem, A., Bosker, H. R., Jensen, O., Hagoort, P., & Riecke, L. (2020). Biasing the perception of spoken words with transcranial alternating current stimulation. Journal of Cognitive Neuroscience, 32(8), 1428-1437. doi:10.1162/jocn_a_01579.

    Abstract

    Recent neuroimaging evidence suggests that the frequency of entrained oscillations in auditory cortices influences the perceived duration of speech segments, impacting word perception (Kösem et al. 2018). We further tested the causal influence of neural entrainment frequency during speech processing by manipulating entrainment with continuous transcranial alternating current stimulation (tACS) at distinct oscillatory frequencies (3 Hz and 5.5 Hz) above the auditory cortices. Dutch participants listened to speech and were asked to report their percept of a target Dutch word, which contained a vowel with an ambiguous duration. Target words were presented either in isolation (first experiment) or at the end of spoken sentences (second experiment). We predicted that the tACS frequency would influence neural entrainment and thereby how speech is perceptually sampled, leading to a perceptual over- or underestimation of the vowel's duration. Whereas results from Experiment 1 did not confirm this prediction, results from Experiment 2 suggested a small effect of tACS frequency on target word perception: faster tACS led to more long-vowel word percepts, in line with the previous neuroimaging findings. Importantly, the difference in word perception induced by the different tACS frequencies was significantly larger in Experiment 2 than in Experiment 1, suggesting that the impact of tACS is dependent on the sensory context. tACS may have a stronger effect on spoken word perception when the words are presented in continuous speech as compared to when they are isolated, potentially because prior (stimulus-induced) entrainment of brain oscillations might be a prerequisite for tACS to be effective.

    Additional information

    Data availability
  • Kucera, K. S., Reddy, T. E., Pauli, F., Gertz, J., Logan, J. E., Myers, R. M., & Willard, H. F. (2011). Allele-specific distribution of RNA polymerase II on female X chromosomes. Human Molecular Genetics, 20, 3964-3973. doi:10.1093/hmg/ddr315.

    Abstract

    While the distribution of RNA polymerase II (PolII) in a variety of complex genomes is correlated with gene expression, the presence of PolII at a gene does not necessarily indicate active expression. Various patterns of PolII binding have been described genome wide; however, whether or not PolII binds at transcriptionally inactive sites remains uncertain. The two X chromosomes in female cells in mammals present an opportunity to examine each of the two alleles of a given locus in both active and inactive states, depending on which X chromosome is silenced by X chromosome inactivation. Here, we investigated PolII occupancy and expression of the associated genes across the active (Xa) and inactive (Xi) X chromosomes in human female cells to elucidate the relationship of gene expression and PolII binding. We find that, while PolII in the pseudoautosomal region occupies both chromosomes at similar levels, it is significantly biased toward the Xa throughout the rest of the chromosome. The general paucity of PolII on the Xi notwithstanding, detectable (albeit significantly reduced) binding can be observed, especially on the evolutionarily younger short arm of the X. PolII levels at genes that escape inactivation correlate with the levels of their expression; however, additional PolII sites can be found at apparently silenced regions, suggesting the possibility of a subset of genes on the Xi that are poised for expression. Consistent with this hypothesis, we show that a high proportion of genes associated with PolII-accessible sites, while silenced in GM12878, are expressed in other female cell lines.
  • Kuiper, K., McCann, H., Quinn, H., Aitchison, T., & Van der Veer, K. (2003). A syntactically annotated idiom dataset (SAID). Philadelphia: Linguistic Data Consortium, University of Pennsylvania.
  • Kuzla, C., & Ernestus, M. (2011). Prosodic conditioning of phonetic detail in German plosives. Journal of Phonetics, 39, 143-155. doi:10.1016/j.wocn.2011.01.001.

    Abstract

    This study investigates the prosodic conditioning of phonetic details which are candidate cues to phonological contrasts. German /b, d, g, p, t, k/ were examined in three prosodic positions. Lenis plosives /b, d, g/ were produced with less glottal vibration at larger prosodic boundaries, whereas their VOT showed no effect of prosody. VOT of fortis plosives /p, t, k/ decreased at larger boundaries, as did their burst intensity maximum. Vowels (when measured from consonantal release) following fortis plosives and lenis velars were shorter after larger boundaries. Closure duration, which did not contribute to the fortis/lenis contrast, was heavily affected by prosody. These results support neither of the hitherto proposed accounts of prosodic strengthening (Uniform Strengthening and Feature Enhancement). We propose a different account, stating that the phonological identity of speech sounds remains stable not only within, but also across prosodic positions (contrast-over-prosody hypothesis). Domain-initial strengthening hardly diminishes the contrast between prosodically weak fortis and strong lenis plosives.
  • Kyriacou, M., Conklin, K., & Thompson, D. (2020). Passivizability of idioms: Has the wrong tree been barked up? Language and Speech, 63(2), 404-435. doi:10.1177/0023830919847691.

    Abstract

    A growing number of studies support the partial compositionality of idiomatic phrases, while idioms are thought to vary in their syntactic flexibility. Some idioms, like kick the bucket, have been classified as inflexible and incapable of being passivized without losing their figurative interpretation (i.e., the bucket was kicked ≠ died). Crucially, this has never been substantiated by empirical findings. In the current study, we used eye-tracking to examine whether the passive forms of (flexible and inflexible) idioms retain or lose their figurative meaning. Active and passivized idioms (he kicked the bucket/the bucket was kicked) and incongruous active and passive control phrases (he kicked the apple/the apple was kicked) were inserted in sentences biasing the figurative meaning of the respective idiom (die). Active idioms served as a baseline. We hypothesized that if passivized idioms retain their figurative meaning (the bucket was kicked = died), they should be processed more efficiently than the control phrases, since their figurative meaning would be congruous in the context. If, on the other hand, passivized idioms lose their figurative interpretation (the bucket was kicked = the pail was kicked), then their meaning should be just as incongruous as that of both control phrases, in which case we would expect no difference in their processing. Eye movement patterns demonstrated a processing advantage for passivized idioms (flexible and inflexible) over control phrases, thus indicating that their figurative meaning was not compromised. These findings challenge classifications of idiom flexibility and highlight the creative nature of language.
  • Laaksonen, H., Kujala, J., Hultén, A., Liljeström, M., & Salmelin, R. (2011). MEG evoked responses and rhythmic activity provide spatiotemporally complementary measures of neural activity in language production. NeuroImage, 60, 29-36.

    Abstract

    Phase-locked evoked responses and event-related modulations of spontaneous rhythmic activity are the two main approaches used to quantify stimulus- or task-related changes in electrophysiological measures. The relationship between the two has been widely theorized upon, but empirical research has been limited to the primary visual and sensorimotor cortex. However, both evoked responses and rhythms have been used as markers of neural activity in paradigms ranging from simple sensory to complex cognitive tasks. While some spatial agreement between the two phenomena has been observed, typically only one of the measures has been used in any given study, thus disallowing a direct evaluation of their exact spatiotemporal relationship. In this study, we sought to systematically clarify the connection between evoked responses and rhythmic activity. Using both measures, we identified the spatiotemporal patterns of task effects in three magnetoencephalography (MEG) data sets, all variants of a picture naming task. Evoked responses and rhythmic modulation yielded largely separate networks, with spatial overlap mainly in the sensorimotor and primary visual areas. Moreover, in the cortical regions that were identified with both measures, the experimental effects they conveyed differed in terms of timing and function. Our results suggest that the two phenomena are largely detached and that both measures are needed for an accurate portrayal of brain activity.
  • Lacan, M., Keyser, C., Ricaut, F.-X., Brucato, N., Duranthon, F., Guilaine, J., Crubézy, E., & Ludes, B. (2011). Ancient DNA reveals male diffusion through the Neolithic Mediterranean route. Proceedings of the National Academy of Sciences of the United States of America, 108, 9788-9791. doi:10.1073/pnas.1100723108.

    Abstract

    The Neolithic is a key period in the history of the European settlement. Although archaeological and present-day genetic data suggest several hypotheses regarding the human migration patterns at this period, validation of these hypotheses with the use of ancient genetic data has been limited. In this context, we studied DNA extracted from 53 individuals buried in a necropolis used by a French local community 5,000 y ago. The relatively good DNA preservation of the samples allowed us to obtain autosomal, Y-chromosomal, and/or mtDNA data for 29 of the 53 samples studied. From these datasets, we established close parental relationships within the necropolis and determined maternal and paternal lineages as well as the absence of an allele associated with lactase persistence, probably carried by Neolithic cultures of central Europe. Our study provides an integrative view of the genetic past in southern France at the end of the Neolithic period. Furthermore, the Y-haplotype lineages characterized and the study of their current repartition in European populations confirm a greater influence of the Mediterranean than the Central European route in the peopling of southern Europe during the Neolithic transition.
  • Lacan, M., Keyser, C., Ricaut, F.-X., Brucato, N., Tarrús, J., Bosch, A., Guilaine, J., Crubézy, E., & Ludes, B. (2011). Ancient DNA suggests the leading role played by men in the Neolithic dissemination. Proceedings of the National Academy of Sciences of the United States of America, 108, 18255-18259. doi:10.1073/pnas.1113061108.

    Abstract

    The impact of the Neolithic dispersal on the western European populations is subject to continuing debate. To trace and date genetic lineages potentially brought during this transition and so understand the origin of the gene pool of current populations, we studied DNA extracted from human remains excavated in a Spanish funeral cave dating from the beginning of the fifth millennium B.C. Thanks to a “multimarkers” approach based on the analysis of mitochondrial and nuclear DNA (autosomes and Y-chromosome), we obtained information on the early Neolithic funeral practices and on the biogeographical origin of the inhumed individuals. No close kinship was detected. Maternal haplogroups found are consistent with pre-Neolithic settlement, whereas the Y-chromosomal analyses permitted confirmation of the existence in Spain approximately 7,000 y ago of two haplogroups previously associated with the Neolithic transition: G2a and E1b1b1a1b. These results are highly consistent with those previously found in French Late Neolithic individuals, indicating a surprising temporal genetic homogeneity in these groups. The high frequency of G2a in Neolithic samples in western Europe could suggest, furthermore, that the role of men during the Neolithic dispersal was greater than currently estimated.

    Additional information

    Supporting_Information_Lacan.pdf
  • Lai, C. S. L., Gerrelli, D., Monaco, A. P., Fisher, S. E., & Copp, A. J. (2003). FOXP2 expression during brain development coincides with adult sites of pathology in a severe speech and language disorder. Brain, 126(11), 2455-2462. doi:10.1093/brain/awg247.

    Abstract

    Disruption of FOXP2, a gene encoding a forkhead-domain transcription factor, causes a severe developmental disorder of verbal communication, involving profound articulation deficits, accompanied by linguistic and grammatical impairments. Investigation of the neural basis of this disorder has been limited previously to neuroimaging of affected children and adults. The discovery of the gene responsible, FOXP2, offers a unique opportunity to explore the relevant neural mechanisms from a molecular perspective. In the present study, we have determined the detailed spatial and temporal expression pattern of FOXP2 mRNA in the developing brain of mouse and human. We find expression in several structures including the cortical plate, basal ganglia, thalamus, inferior olives and cerebellum. These data support a role for FOXP2 in the development of corticostriatal and olivocerebellar circuits involved in motor control. We find intriguing concordance between regions of early expression and later sites of pathology suggested by neuroimaging. Moreover, the homologous pattern of FOXP2/Foxp2 expression in human and mouse argues for a role for this gene in development of motor-related circuits throughout mammalian species. Overall, this study provides support for the hypothesis that impairments in sequencing of movement and procedural learning might be central to the FOXP2-related speech and language disorder.
  • Lai, C. S. L., Fisher, S. E., Hurst, J. A., Levy, E. R., Hodgson, S., Fox, M., Jeremiah, S., Povey, S., Jamison, D. C., Green, E. D., Vargha-Khadem, F., & Monaco, A. P. (2000). The SPCH1 region on human 7q31: Genomic characterization of the critical interval and localization of translocations associated with speech and language disorder. American Journal of Human Genetics, 67(2), 357-368. doi:10.1086/303011.

    Abstract

    The KE family is a large three-generation pedigree in which half the members are affected with a severe speech and language disorder that is transmitted as an autosomal dominant monogenic trait. In previously published work, we localized the gene responsible (SPCH1) to a 5.6-cM region of 7q31 between D7S2459 and D7S643. In the present study, we have employed bioinformatic analyses to assemble a detailed BAC-/PAC-based sequence map of this interval, containing 152 sequence tagged sites (STSs), 20 known genes, and >7.75 Mb of completed genomic sequence. We screened the affected chromosome 7 from the KE family with 120 of these STSs (average spacing <100 kb), but we did not detect any evidence of a microdeletion. Novel polymorphic markers were generated from the sequence and were used to further localize critical recombination breakpoints in the KE family. This allowed refinement of the SPCH1 interval to a region between new markers 013A and 330B, containing ∼6.1 Mb of completed sequence. In addition, we have studied two unrelated patients with a similar speech and language disorder, who have de novo translocations involving 7q31. Fluorescence in situ hybridization analyses with BACs/PACs from the sequence map localized the t(5;7)(q22;q31.2) breakpoint in the first patient (CS) to a single clone within the newly refined SPCH1 interval. This clone contains the CAGH44 gene, which encodes a brain-expressed protein containing a large polyglutamine stretch. However, we found that the t(2;7)(p23;q31.3) breakpoint in the second patient (BRD) resides within a BAC clone mapping >3.7 Mb distal to this, outside the current SPCH1 critical interval. Finally, we investigated the CAGH44 gene in affected individuals of the KE family, but we found no mutations in the currently known coding sequence. These studies represent further steps toward the isolation of the first gene to be implicated in the development of speech and language.
  • Lai, J., & Poletiek, F. H. (2011). The impact of adjacent-dependencies and staged-input on the learnability of center-embedded hierarchical structures. Cognition, 118(2), 265-273. doi:10.1016/j.cognition.2010.11.011.

    Abstract

    A theoretical debate in artificial grammar learning (AGL) regards the learnability of hierarchical structures. Recent studies using an AnBn grammar draw conflicting conclusions (Bahlmann & Friederici, 2006; De Vries et al., 2008). We argue that 2 conditions crucially affect learning AnBn structures: sufficient exposure to zero-level-of-embedding (0-LoE) exemplars and a staged input. In 2 AGL experiments, learning was observed only when the training set was staged and contained 0-LoE exemplars. Our results might help in understanding how natural complex structures are learned from exemplars.
  • Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2020). Vocal production learning in the pale spear-nosed bat, Phyllostomus discolor. Biology Letters, 16: 20190928. doi:10.1098/rsbl.2019.0928.

    Abstract

    Vocal production learning (VPL), or the ability to modify vocalizations through the imitation of sounds, is a rare trait in the animal kingdom. While humans are exceptional vocal learners, few other mammalian species share this trait. Owing to their singular ecology and lifestyle, bats are highly specialized for the precise emission and reception of acoustic signals. This specialization makes them ideal candidates for the study of vocal learning, and several bat species have previously shown evidence supportive of vocal learning. Here we use a sophisticated automated set-up and a contingency training paradigm to explore the vocal learning capacity of pale spear-nosed bats. We show that these bats are capable of directional change of the fundamental frequency of their calls according to an auditory target. With this study, we further highlight the importance of bats for the study of vocal learning and provide evidence for the VPL capacity of the pale spear-nosed bat.

    Additional information

    Supplemental material dataset
  • Lausberg, H., Cruz, R. F., Kita, S., Zaidel, E., & Ptito, A. (2003). Pantomime to visual presentation of objects: Left hand dyspraxia in patients with complete callosotomy. Brain, 126(2), 343-360. doi:10.1093/brain/awg042.

    Abstract

    Investigations of left hand praxis in imitation and object use in patients with callosal disconnection have yielded divergent results, inducing a debate between two theoretical positions. Whereas Liepmann suggested that the left hemisphere is motor dominant, others maintain that both hemispheres have equal motor competences and propose that left hand apraxia in patients with callosal disconnection is secondary to left hemispheric specialization for language or other task modalities. The present study aims to gain further insight into the motor competence of the right hemisphere by investigating pantomime of object use in split-brain patients. Three patients with complete callosotomy and, as control groups, five patients with partial callosotomy and nine healthy subjects were examined for their ability to pantomime object use to visual object presentation and demonstrate object manipulation. In each condition, 11 objects were presented to the subjects who pantomimed or demonstrated the object use with either hand. In addition, six object pairs were presented to test bimanual coordination. Two independent raters evaluated the videotaped movement demonstrations. While object use demonstrations were perfect in all three groups, the split-brain patients displayed apraxic errors only with their left hands in the pantomime condition. The movement analysis of concept and execution errors included the examination of ipsilateral versus contralateral motor control. As the right hand/left hemisphere performances demonstrated retrieval of the correct movement concepts, concept errors by the left hand were taken as evidence for right hemisphere control. Several types of execution errors reflected a lack of distal motor control indicating the use of ipsilateral pathways. While one split-brain patient controlled his left hand predominantly by ipsilateral pathways in the pantomime condition, the error profile in the other two split-brain patients suggested that the right hemisphere controlled their left hands. In the object use condition, in all three split-brain patients fine-graded distal movements in the left hand indicated right hemispheric control. Our data show left hand apraxia in split-brain patients is not limited to verbal commands, but also occurs in pantomime to visual presentation of objects. As the demonstration with object in hand was unimpaired in either hand, both hemispheres must contain movement concepts for object use. However, the disconnected right hemisphere is impaired in retrieving the movement concept in response to visual object presentation, presumably because of a deficit in associating perceptual object representation with the movement concepts.
  • Lausberg, H., Kita, S., Zaidel, E., & Ptito, A. (2003). Split-brain patients neglect left personal space during right-handed gestures. Neuropsychologia, 41(10), 1317-1329. doi:10.1016/S0028-3932(03)00047-2.

    Abstract

    Since some patients with right hemisphere damage or with spontaneous callosal disconnection neglect the left half of space, it has been suggested that the left cerebral hemisphere predominantly attends to the right half of space. However, clinical investigations of patients having undergone surgical callosal section have not shown neglect when the hemispheres are tested separately. These observations question the validity of theoretical models that propose a left hemispheric specialisation for attending to the right half of space. The present study aims to investigate neglect and the use of space by either hand in gestural demonstrations in three split-brain patients as compared to five patients with partial callosotomy and 11 healthy subjects. Subjects were asked to demonstrate with precise gestures and without speaking the content of animated scenes with two moving objects. The results show that in the absence of primary perceptual or representational neglect, split-brain patients neglect left personal space in right-handed gestural demonstrations. Since this neglect of left personal space cannot be explained by directional or spatial akinesia, it is suggested that it originates at the conceptual level, where the spatial coordinates for right-hand gestures are planned. The present findings are at odds with the position that the separate left hemisphere possesses adequate mechanisms for acting in both halves of space and neglect results from right hemisphere suppression of this potential. Rather, the results provide support for theoretical models that consider the left hemisphere as specialised for processing the right half of space during the execution of descriptive gestures.
  • Lausberg, H., & Kita, S. (2003). The content of the message influences the hand choice in co-speech gestures and in gesturing without speaking. Brain and Language, 86(1), 57-69. doi:10.1016/S0093-934X(02)00534-5.

    Abstract

    The present study investigates the hand choice in iconic gestures that accompany speech. In 10 right-handed subjects, gestures were elicited by verbal narration and by silent gestural demonstrations of animations with two moving objects. In both conditions, the left hand was used as often as the right hand to display iconic gestures. The choice of the right or left hand was determined by semantic aspects of the message. The influence of hemispheric language lateralization on the hand choice in co-speech gestures appeared to be minor. Instead, speaking seemed to induce a sequential organization of the iconic gestures.
  • Leckband, D. E., Menon, S., Rosenberg, K., Graham, S. A., Taylor, M. E., & Drickamer, K. (2011). Geometry and adhesion of extracellular domains of DC-SIGNR neck length variants analyzed by force-distance measurements. Biochemistry, 50, 6125-6132. doi:10.1021/bi2003444.

    Abstract

    Force-distance measurements have been used to examine differences in the interaction of the dendritic cell glycan-binding receptor DC-SIGN and the closely related endothelial cell receptor DC-SIGNR (L-SIGN) with membranes bearing glycan ligands. The results demonstrate that upon binding to membrane-anchored ligand, DC-SIGNR undergoes a conformational change similar to that previously observed for DC-SIGN. The results also validate a model for the extracellular domain of DC-SIGNR derived from crystallographic studies. Force measurements were performed with DC-SIGNR variants that differ in the length of the neck that result from genetic polymorphisms, which encode different numbers of the 23-amino acid repeat sequences that constitute the neck. The findings are consistent with an elongated, relatively rigid structure of the neck repeat observed in crystals. In addition, differences in the lengths of DC-SIGN and DC-SIGNR extracellular domains with equivalent numbers of neck repeats support a model in which the different dispositions of the carbohydrate-recognition domains in DC-SIGN and DC-SIGNR result from variations in the sequences of the necks.
  • Lemhöfer, K., Schriefers, H., & Indefrey, P. (2020). Syntactic processing in L2 depends on perceived reliability of the input: Evidence from P600 responses to correct input. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(10), 1948-1965. doi:10.1037/xlm0000895.

    Abstract

    In 3 ERP experiments, we investigated how experienced L2 speakers process natural and correct syntactic input that deviates from their own, sometimes incorrect, syntactic representations. Our previous study (Lemhöfer, Schriefers, & Indefrey, 2014) had shown that L2 speakers do engage in native-like syntactic processing of gender agreement but base this processing on their own idiosyncratic (and sometimes incorrect) grammars. However, as in other standard ERP studies, but different from realistic L2 input, the materials in that study contained a large proportion of incorrect sentences. In the present study, German speakers of Dutch read exclusively objectively correct Dutch sentences that did or did not contain subjective determiner “errors” (e.g., de boot “the boat,” which conflicts with the intuition of many German speakers that the correct phrase should be het boot). During reading for comprehension (Experiment 1), no syntax-related ERP responses for subjectively incorrect compared to correct phrases were observed. The same was true even when participants explicitly attended to and learned from the determiners in the sentences (Experiment 2). Only when participants judged the correctness of determiners in each sentence (Experiment 3) did a clear P600 appear. These results suggest that the full and native-like use of subjective grammars, as reflected in the P600 to subjective violations, occurs only when speakers have reason to mistrust the grammaticality of the input, either because of the nature of the task (grammaticality judgments) or because of the salient presence of incorrect sentences.
  • Lev-Ari, S., & Sebanz, N. (2020). Interacting with multiple partners improves communication skills. Cognitive Science, 44(4): e12836. doi:10.1111/cogs.12836.

    Abstract

    Successful communication is important for both society and people’s personal life. Here we show that people can improve their communication skills by interacting with multiple others, and that this improvement seems to come about by a greater tendency to take the addressee’s perspective when there are multiple partners. In Experiment 1, during a training phase, participants described figures to a new partner in each round or to the same partner in all rounds. Then all participants interacted with a new partner and their recordings from that round were presented to naïve listeners. Participants who had interacted with multiple partners during training were better understood. This occurred despite the fact that the partners had not provided the participants with any input other than feedback on comprehension during the interaction. In Experiment 2, participants were asked to provide descriptions to a different future participant in each round or to the same future participant in all rounds. Next they performed a surprise memory test designed to tap memory for global details, in line with the addressee’s perspective. Those who had provided descriptions for multiple future participants performed better. These results indicate that people can improve their communication skills by interacting with multiple people, and that this advantage might be due to a greater tendency to take the addressee’s perspective in such cases. Our findings thus show how the social environment can influence our communication skills by shaping our own behavior during interaction in a manner that promotes the development of our communication skills.
  • Levelt, W. J. M. (2000). Uit talloos veel miljoenen. Natuur & Techniek, 68(11), 90.
  • Levelt, W. J. M. (2000). Dyslexie. Natuur & Techniek, 68(4), 64.
  • Levelt, W. J. M. (2000). Met twee woorden spreken [Simon Dik Lezing 2000]. Amsterdam: Vossiuspers AUP.
  • Levelt, W. J. M. (2000). Links en rechts: Waarom hebben we zo vaak problemen met die woorden? Natuur & Techniek, 68(7/8), 90.
  • Levelt, W. J. M. (2020). On becoming a physicist of mind. Annual Review of Linguistics, 6(1), 1-23. doi:10.1146/annurev-linguistics-011619-030256.

    Abstract

    In 1976, the German Max Planck Society established a new research enterprise in psycholinguistics, which became the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands. I was fortunate enough to be invited to direct this institute. It enabled me, with my background in visual and auditory psychophysics and the theory of formal grammars and automata, to develop a long-term chronometric endeavor to dissect the process of speaking. It led, among other work, to my book Speaking (1989) and to my research team's article in Brain and Behavioral Sciences “A Theory of Lexical Access in Speech Production” (1999). When I later became president of the Royal Netherlands Academy of Arts and Sciences, I helped initiate the Women for Science research project of the Inter Academy Council, a project chaired by my physicist sister at the National Institute of Standards and Technology. As an emeritus I published a comprehensive History of Psycholinguistics (2013). As will become clear, many people inspired and joined me in these undertakings.
  • Levelt, C. C., Schiller, N. O., & Levelt, W. J. M. (2000). The acquisition of syllable types. Language Acquisition, 8(3), 237-263. doi:10.1207/S15327817LA0803_2.

    Abstract

    In this article, we present an account of developmental data regarding the acquisition of syllable types. The data come from a longitudinal corpus of phonetically transcribed speech of 12 children acquiring Dutch as their first language. A developmental order of acquisition of syllable types was deduced by aligning the syllabified data on a Guttman scale. This order could be analyzed as following from an initial ranking and subsequent rerankings in the grammar of the structural constraints ONSET, NO-CODA, *COMPLEX-O, and *COMPLEX-C; some local conjunctions of these constraints; and a faithfulness constraint FAITH. The syllable type frequencies in the speech surrounding the language learner are also considered. An interesting correlation is found between the frequencies and the order of development of the different syllable types.
  • Levelt, W. J. M. (2000). The brain does not serve linguistic theory so easily [Commentary on target article by Grodzinsky]. Behavioral and Brain Sciences, 23(1), 40-41.
  • Levelt, W. J. M., & Meyer, A. S. (2000). Word for word: Multiple lexical access in speech production. European Journal of Cognitive Psychology, 12(4), 433-452. doi:10.1080/095414400750050178.

    Abstract

    It is quite normal for us to produce one or two million word tokens every year. Speaking is a dear occupation and producing words is at the core of it. Still, producing even a single word is a highly complex affair. Recently, Levelt, Roelofs, and Meyer (1999) reviewed their theory of lexical access in speech production, which dissects the word-producing mechanism as a staged application of various dedicated operations. The present paper begins by presenting a bird's-eye view of this mechanism. We then square the complexity by asking how speakers control multiple access in generating simple utterances such as a table and a chair. In particular, we address two issues. The first one concerns dependency: Do temporally contiguous access procedures interact in any way, or do they run in modular fashion? The second issue concerns temporal alignment: How much temporal overlap of processing does the system tolerate in accessing multiple content words, such as table and chair? Results from picture-word interference and eye tracking experiments provide evidence for restricted cases of dependency as well as for constraints on the temporal alignment of access procedures.
  • Levinson, S. C. (2003). Space in language and cognition: Explorations in cognitive diversity. Cambridge: Cambridge University Press.
  • Levinson, S. C. (2020). On technologies of the intellect: Goody Lecture 2020. Halle: Max Planck Institute for Social Anthropology.
  • Levinson, S. C., & Brown, P. (2003). Emmanuel Kant chez les Tenejapans: L'Anthropologie comme philosophie empirique [Translated by Claude Vandeloise for 'Langues et Cognition']. Langues et Cognition, 239-278.

    Abstract

    This is a translation of Levinson and Brown (1994).
  • Levinson, S. C., & Meira, S. (2003). 'Natural concepts' in the spatial topological domain - adpositional meanings in crosslinguistic perspective: An exercise in semantic typology. Language, 79(3), 485-516.

    Abstract

    Most approaches to spatial language have assumed that the simplest spatial notions are (after Piaget) topological and universal (containment, contiguity, proximity, support, represented as semantic primitives such as IN, ON, UNDER, etc.). These concepts would be coded directly in language, above all in small closed classes such as adpositions—thus providing a striking example of semantic categories as language-specific projections of universal conceptual notions. This idea, if correct, should have as a consequence that the semantic categories instantiated in spatial adpositions should be essentially uniform crosslinguistically. This article attempts to verify this possibility by comparing the semantics of spatial adpositions in nine unrelated languages, with the help of a standard elicitation procedure, thus producing a preliminary semantic typology of spatial adpositional systems. The differences between the languages turn out to be so significant as to be incompatible with stronger versions of the UNIVERSAL CONCEPTUAL CATEGORIES hypothesis. Rather, the language-specific spatial adposition meanings seem to emerge as compact subsets of an underlying semantic space, with certain areas being statistical ATTRACTORS or FOCI. Moreover, a comparison of systems with different degrees of complexity suggests the possibility of positing implicational hierarchies for spatial adpositions. But such hierarchies need to be treated as successive divisions of semantic space, as in recent treatments of basic color terms. This type of analysis appears to be a promising approach for future work in semantic typology.
  • Levinson, S. C. (2000). Presumptive meanings: The theory of generalized conversational implicature. Cambridge: MIT press.
  • Levinson, S. C. (2011). Pojmowanie przestrzeni w różnych kulturach [Polish translation of Levinson, S. C. 1998. Studying spatial conceptualization across cultures]. Autoportret, 33, 16-23.

    Abstract

    Polish translation of Levinson, S. C. (1998). Studying spatial conceptualization across cultures: Anthropology and cognitive science. Ethos, 26(1), 7-24. doi:10.1525/eth.1998.26.1.7
