Publications

  • Abbondanza, F., Dale, P. S., Wang, C. A., Hayiou‐Thomas, M. E., Toseeb, U., Koomar, T. S., Wigg, K. G., Feng, Y., Price, K. M., Kerr, E. N., Guger, S. L., Lovett, M. W., Strug, L. J., Van Bergen, E., Dolan, C. V., Tomblin, J. B., Moll, K., Schulte‐Körne, G., Neuhoff, N., Warnke, A., Fisher, S. E., Barr, C. L., Michaelson, J. J., Boomsma, D. I., Snowling, M. J., Hulme, C., Whitehouse, A. J. O., Pennell, C. E., Newbury, D. F., Stein, J., Talcott, J. B., Bishop, D. V. M., & Paracchini, S. (2023). Language and reading impairments are associated with increased prevalence of non‐right‐handedness. Child Development, 94(4), 970-984. doi:10.1111/cdev.13914.

    Abstract

    Handedness has been studied for association with language-related disorders because of its link with language hemispheric dominance. No clear pattern has emerged, possibly because of small samples, publication bias, and heterogeneous criteria across studies. Non-right-handedness (NRH) frequency was assessed in N = 2503 cases with reading and/or language impairment and N = 4316 sex-matched controls identified from 10 distinct cohorts (age range 6–19 years old; European ethnicity) using a priori set criteria. A meta-analysis (Ncases = 1994) showed elevated NRH % in individuals with language/reading impairment compared with controls (OR = 1.21, CI = 1.06–1.39, p = .01). The association between reading/language impairments and NRH could result from shared pathways underlying brain lateralization, handedness, and cognitive functions.

    Additional information

    supplementary information
  • Acheson, D. J., Hamidi, M., Binder, J. R., & Postle, B. R. (2011). A common neural substrate for language production and verbal working memory. Journal of Cognitive Neuroscience, 23(6), 1358-1367. doi:10.1162/jocn.2010.21519.

    Abstract

    Verbal working memory (VWM), the ability to maintain and manipulate representations of speech sounds over short periods, is held by some influential models to be independent from the systems responsible for language production and comprehension [e.g., Baddeley, A. D. Working memory, thought, and action. New York, NY: Oxford University Press, 2007]. We explore the alternative hypothesis that maintenance in VWM is subserved by temporary activation of the language production system [Acheson, D. J., & MacDonald, M. C. Verbal working memory and language production: Common approaches to the serial ordering of verbal information. Psychological Bulletin, 135, 50–68, 2009b]. Specifically, we hypothesized that for stimuli lacking a semantic representation (e.g., nonwords such as mun), maintenance in VWM can be achieved by cycling information back and forth between the stages of phonological encoding and articulatory planning. First, fMRI was used to identify regions associated with two different stages of language production planning: the posterior superior temporal gyrus (pSTG) for phonological encoding (critical for VWM of nonwords) and the middle temporal gyrus (MTG) for lexical–semantic retrieval (not critical for VWM of nonwords). Next, in the same subjects, these regions were targeted with repetitive transcranial magnetic stimulation (rTMS) during language production and VWM task performance. Results showed that rTMS to the pSTG, but not the MTG, increased error rates on paced reading (a language production task) and on delayed serial recall of nonwords (a test of VWM). Performance on a lexical–semantic retrieval task (picture naming), in contrast, was significantly sensitive to rTMS of the MTG. Because rTMS was guided by language production-related activity, these results provide the first causal evidence that maintenance in VWM directly depends on the long-term representations and processes used in speech production.
  • Acheson, D. J., Postle, B. R., & MacDonald, M. C. (2011). The effect of concurrent semantic categorization on delayed serial recall. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 44-59. doi:10.1037/a0021205.

    Abstract

    The influence of semantic processing on the serial ordering of items in short-term memory was explored using a novel dual-task paradigm. Participants engaged in 2 picture-judgment tasks while simultaneously performing delayed serial recall. List material varied in the presence of phonological overlap (Experiments 1 and 2) and in semantic content (concrete words in Experiments 1 and 3; nonwords in Experiments 2 and 3). Picture judgments varied in the extent to which they required accessing visual semantic information (i.e., semantic categorization and line orientation judgments). Results showed that, relative to line-orientation judgments, engaging in semantic categorization judgments increased the proportion of item-ordering errors for concrete lists but did not affect error proportions for nonword lists. Furthermore, although more ordering errors were observed for phonologically similar relative to dissimilar lists, no interactions were observed between the phonological overlap and picture-judgment task manipulations. These results demonstrate that lexical-semantic representations can affect the serial ordering of items in short-term memory. Furthermore, the dual-task paradigm provides a new method for examining when and how semantic representations affect memory performance.
  • Acheson, D. J., & MacDonald, M. C. (2011). The rhymes that the reader perused confused the meaning: Phonological effects during on-line sentence comprehension. Journal of Memory and Language, 65, 193-207. doi:10.1016/j.jml.2011.04.006.

    Abstract

    Research on written language comprehension has generally assumed that the phonological properties of a word have little effect on sentence comprehension beyond the processes of word recognition. Two experiments investigated this assumption. Participants silently read relative clauses in which two pairs of words either did or did not have a high degree of phonological overlap. Participants were slower reading and less accurate comprehending the overlap sentences compared to the non-overlapping controls, even though sentences were matched for plausibility and differed by only two words across overlap conditions. A comparison across experiments showed that the overlap effects were larger in the more difficult object relative than in subject relative sentences. The reading patterns showed that phonological representations affect not only memory for recently encountered sentences but also the developing sentence interpretation during on-line processing. Implications for theories of sentence processing and memory are discussed.

    Highlights

    • The work investigates the role of phonological information in sentence comprehension, which is poorly understood.
    • Subjects read object and subject relative clauses +/- phonological overlap in two pairs of words.
    • Unique features of the study were online reading measures and pinpointed overlap locations.
    • Phonological overlap slowed reading speed and impaired sentence comprehension, especially for object relatives.
    • The results show a key role for phonological information during online comprehension, not just later sentence memory.
  • Alhama, R. G., Rowland, C. F., & Kidd, E. (2023). How does linguistic context influence word learning? Journal of Child Language, 50(6), 1374-1393. doi:10.1017/S0305000923000302.

    Abstract

    While there are well-known demonstrations that children can use distributional information to acquire multiple components of language, the underpinnings of these achievements are unclear. In the current paper, we investigate the potential pre-requisites for a distributional learning model that can explain how children learn their first words. We review existing literature and then present the results of a series of computational simulations with Vector Space Models, a type of distributional semantic model used in Computational Linguistics, which we evaluate against vocabulary acquisition data from children. We focus on nouns and verbs, and we find that: (i) a model with flexibility to adjust for the frequency of events provides a better fit to the human data, (ii) the influence of context words is very local, especially for nouns, and (iii) words that share more contexts with other words are harder to learn.
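
    The abstract above refers to Vector Space Models, distributional models that represent a word by the contexts it occurs in. As a rough, generic sketch of that idea (a toy count-based model in Python; the corpus, window size, and function names below are invented for illustration and are not the authors' implementation), word vectors can be built from co-occurrence counts and compared with cosine similarity:

    # Toy count-based Vector Space Model (illustrative sketch only).
    from collections import Counter, defaultdict
    import math

    def cooccurrence_vectors(sentences, window=2):
        """Count, for each word, how often other words appear within +/- `window` positions."""
        vectors = defaultdict(Counter)
        for sentence in sentences:
            for i, word in enumerate(sentence):
                lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        vectors[word][sentence[j]] += 1
        return vectors

    def cosine(v1, v2):
        """Cosine similarity between two sparse count vectors."""
        dot = sum(v1[w] * v2[w] for w in set(v1) & set(v2))
        norm1 = math.sqrt(sum(c * c for c in v1.values()))
        norm2 = math.sqrt(sum(c * c for c in v2.values()))
        return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

    toy_corpus = [
        ["the", "dog", "chases", "the", "ball"],
        ["the", "cat", "chases", "the", "ball"],
        ["the", "dog", "eats", "the", "food"],
    ]
    vectors = cooccurrence_vectors(toy_corpus)
    print(cosine(vectors["dog"], vectors["cat"]))  # words sharing local contexts score higher

    In models of this family, words that occur in similar local contexts end up with similar vectors, which is the property the reviewed simulations relate to how easily children learn nouns and verbs.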
  • Ambridge, B., Pine, J. M., & Rowland, C. F. (2011). Children use verb semantics to retreat from overgeneralization errors: A novel verb grammaticality judgment study. Cognitive Linguistics, 22(2), 303-323. doi:10.1515/cogl.2011.012.

    Abstract

    Whilst certain verbs may appear in both the intransitive inchoative and the transitive causative constructions (The ball rolled/The man rolled the ball), others may appear in only the former (The man laughed/*The joke laughed the man). Some accounts argue that children acquire these restrictions using only (or mainly) statistical learning mechanisms such as entrenchment and pre-emption. Others have argued that verb semantics are also important. To test these competing accounts, adults (Experiment 1) and children aged 5–6 and 9–10 (Experiment 2) were taught novel verbs designed to be construed — on the basis of their semantics — as either intransitive-only or alternating. In support of the latter claim, participants' grammaticality judgments revealed that even the youngest group respected these semantic constraints. Frequency (entrenchment) effects were observed for familiar, but not novel, verbs (Experiment 1). We interpret these findings in the light of a new theoretical account designed to yield effects of both verb semantics and entrenchment/pre-emption.
  • Ameka, F. K. (1989). [Review of The case for lexicase: An outline of lexicase grammatical theory by Stanley Starosta]. Studies in Language, 13(2), 506-518.
  • Anichini, M., de Reus, K., Hersh, T. A., Valente, D., Salazar-Casals, A., Berry, C., Keller, P. E., & Ravignani, A. (2023). Measuring rhythms of vocal interactions: A proof of principle in harbour seal pups. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 378(1875): 20210477. doi:10.1098/rstb.2021.0477.

    Abstract

    Rhythmic patterns in interactive contexts characterize human behaviours such as conversational turn-taking. These timed patterns are also present in other animals, and often described as rhythm. Understanding fine-grained temporal adjustments in interaction requires complementary quantitative methodologies. Here, we showcase how vocal interactive rhythmicity in a non-human animal can be quantified using a multi-method approach. We record vocal interactions in harbour seal pups (Phoca vitulina) under controlled conditions. We analyse these data by combining analytical approaches, namely categorical rhythm analysis, circular statistics and time series analyses. We test whether pups' vocal rhythmicity varies across behavioural contexts depending on the absence or presence of a calling partner. Four research questions illustrate which analytical approaches are complementary versus orthogonal. For our data, circular statistics and categorical rhythms suggest that a calling partner affects a pup's call timing. Granger causality suggests that pups predictively adjust their call timing when interacting with a real partner. Lastly, the ADaptation and Anticipation Model estimates statistical parameters for a potential mechanism of temporal adaptation and anticipation. Our analytical complementary approach constitutes a proof of concept; it shows feasibility in applying typically unrelated techniques to seals to quantify vocal rhythmic interactivity across behavioural contexts.

    Additional information

    supplemental information
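
    Among the time-series analyses named in the abstract above is Granger causality. As a generic, hedged sketch of how such a test is commonly run (here with statsmodels on synthetic data; the series, lag order, and variable names are placeholders rather than the authors' pipeline):

    # Illustrative Granger-causality test on two synthetic call-timing series.
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    partner = rng.normal(size=200)  # placeholder series for the calling partner
    # Placeholder pup series that partly follows the partner series with a one-step lag.
    pup = 0.6 * np.roll(partner, 1) + rng.normal(scale=0.5, size=200)

    # Tests whether the series in the second column Granger-causes the series in the first column.
    data = np.column_stack([pup, partner])
    results = grangercausalitytests(data, maxlag=3)
    # results[k][0] holds, per lag k, F- and chi-square tests of the null of no Granger causality.

    A significant result at some lag means the partner series improves prediction of the pup series beyond the pup's own history, which is the sense in which the abstract speaks of pups predictively adjusting their call timing.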
  • Arana, S., Pesnot Lerousseau, J., & Hagoort, P. (2023). Deep learning models to study sentence comprehension in the human brain. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2023.2198245.

    Abstract

    Recent artificial neural networks that process natural language achieve unprecedented performance in tasks requiring sentence-level understanding. As such, they could be interesting models of the integration of linguistic information in the human brain. We review works that compare these artificial language models with human brain activity and we assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension. Two main results emerge. First, the neural representation of word meaning aligns with the context-dependent, dense word vectors used by the artificial neural networks. Second, the processing hierarchy that emerges within artificial neural networks broadly matches the brain, but is surprisingly inconsistent across studies. We discuss current challenges in establishing artificial neural networks as process models of natural language comprehension. We suggest exploiting the highly structured representational geometry of artificial neural networks when mapping representations to brain data.

    Additional information

    link to preprint
  • Araújo, S., Inácio, F., Francisco, A., Faísca, L., Petersson, K. M., & Reis, A. (2011). Component processes subserving rapid automatized naming in dyslexic and non-dyslexic readers. Dyslexia, 17, 242-255. doi:10.1002/dys.433.

    Abstract

    The current study investigated which time components of rapid automatized naming (RAN) predict group differences between dyslexic and non-dyslexic readers (matched for age and reading level), and how these components relate to different reading measures. Subjects performed two RAN tasks (letters and objects), and data were analyzed through a response time analysis. Our results demonstrated that impaired RAN performance in dyslexic readers mainly stems from enhanced inter-item pause times and not from difficulties at the level of post-access motor production (expressed as articulation rates). Moreover, inter-item pause times account for a significant proportion of variance in reading ability in addition to the effect of phonological awareness in the dyslexic group. This suggests that non-phonological factors may lie at the root of the association between RAN inter-item pauses and reading ability. In normal readers, RAN performance was associated with reading ability only at early ages (i.e. in the reading-matched controls), and again it was the RAN inter-item pause times that explained the association.
  • Araújo, S., Faísca, L., Bramão, I., Inácio, F., Petersson, K. M., & Reis, A. (2011). Object naming in dyslexic children: More than a phonological deficit. The Journal of General Psychology, 138, 215-228. doi:10.1080/00221309.2011.582525.

    Abstract

    In the present study, the authors investigate how some visual factors related to early stages of visual-object naming modulate naming performance in dyslexia. The performance of dyslexic children was compared with 2 control groups—normal readers matched for age and normal readers matched for reading level—while performing a discrete naming task in which color and dimensionality of the visually presented objects were manipulated. The results showed that 2-dimensional naming performance improved for color representations in control readers but not in dyslexics. In contrast to control readers, dyslexics were also insensitive to the stimulus's dimensionality. These findings are unlikely to be explained by a phonological processing problem related to phonological access or retrieval but suggest that dyslexics have a lower capacity for coding and decoding visual surface features of 2-dimensional representations or problems with the integration of visual information stored in long-term memory.
  • Araújo, S., Faísca, L., Petersson, K. M., & Reis, A. (2011). What does rapid naming tell us about dyslexia? Avances en Psicología Latinoamericana, 29, 199-213.

    Abstract

    This article summarizes some of the important findings from research evaluating the relationship between poor rapid naming and impaired reading performance. Substantial evidence shows that dyslexic readers have problems with rapid naming of visual items. Early research assumed that this was a consequence of phonological processing deficits, but recent findings suggest that non-phonological processes may lie at the root of the association between slow naming speed and poor reading. The hypothesis that rapid naming reflects an independent core deficit in dyslexia is supported by the main findings: (1) some dyslexics are characterized by rapid naming difficulties but intact phonological skills; (2) evidence for an independent association between rapid naming and reading competence in the dyslexic readers, when the effect of phonological skills was controlled; (3) rapid naming and phonological processing measures are not reliably correlated. Recent research also reveals greater predictive power of rapid naming, in particular the inter-item pause time, for high-frequency word reading compared to pseudoword reading in developmental dyslexia. Altogether, the results are more consistent with the view that a phonological component alone cannot account for the rapid naming performance in dyslexia. Rather, rapid naming problems may emerge from the inefficiencies in visual-orthographic processing as well as in phonological processing.
  • Araujo, S., Narang, V., Misra, D., Lohagun, N., Khan, O., Singh, A., Mishra, R. K., Hervais-Adelman, A., & Huettig, F. (2023). A literacy-related color-specific deficit in rapid automatized naming: Evidence from neurotypical completely illiterate and literate adults. Journal of Experimental Psychology: General, 152(8), 2403-2409. doi:10.1037/xge0001376.

    Abstract

    There is a robust positive relationship between reading skills and the time to name aloud an array of letters, digits, objects, or colors as quickly as possible. A convincing and complete explanation for the direction and locus of this association remains, however, elusive. In this study we investigated rapid automatized naming (RAN) of everyday objects and basic color patches in neurotypical illiterate and literate adults. Literacy acquisition and education enhanced RAN performance for both conceptual categories, but this advantage was much larger for (abstract) colors than everyday objects. This result suggests that (i) literacy/education may be causal for serial rapid naming ability of non-alphanumeric items, and (ii) differences in the lexical quality of conceptual representations can underlie the reading-related differential RAN performance.

    Additional information

    supplementary text
  • Aravena-Bravo, P., Cristia, A., Garcia, R., Kotera, H., Nicolas, R. K., Laranjo, R., Arokoyo, B. E., Benavides-Varela, S., Benders, T., Boll-Avetisyan, N., Cychosz, M., Ben, R. D., Diop, Y., Durán-Urzúa, C., Havron, N., Manalili, M., Narasimhan, B., Omane, P. O., Rowland, C. F., Kolberg, L. S., Ssemata, A. S., Styles, S. J., Troncoso-Acosta, B., & Woon, F. T. (2023). Towards diversifying early language development research: The first truly global international summer/winter school on language acquisition (/L+/) 2021. Journal of Cognition and Development. Advance online publication. doi:10.1080/15248372.2023.2231083.

    Abstract

    With a long-term aim of empowering researchers everywhere to contribute to work on language development, we organized the First Truly Global /L+/ International Summer/Winter School on Language Acquisition, a free 5-day virtual school for early career researchers. In this paper, we describe the school, our experience organizing it, and lessons learned. The school had a diverse organizer team, composed of 26 researchers (17 from underrepresented areas: Subsaharan Africa, South and Southeast Asia, and Central and South America); and a diverse volunteer team, with a total of 95 volunteers from 35 different countries, nearly half from underrepresented areas. This helped worldwide promotion of the school, leading to 958 registrations from 88 different countries, with 300 registrants (based in 63 countries, 80% from underrepresented areas) selected to participate in the synchronous aspects of the event. The school employed asynchronous elements (pre-recorded lectures, which were closed-captioned) and synchronous elements (e.g., discussions to place the recorded lectures into participants' context; networking events) across three time zones. A post-school questionnaire revealed that 99% of participants enjoyed taking part in the school. Notwithstanding these positive quantitative outcomes, qualitative comments suggested we fell short in several areas, including the geographic diversity among lecturers and greater customization of contents to the participants' contexts. Although much remains to be done to promote inclusivity in linguistic research, we hope our school will contribute to empowering researchers to investigate and publish on language acquisition in their home languages, to eventually result in more representative theories and empirical generalizations.

    Additional information

    https://osf.io/fbnda
  • Artigas, M. S., Loth, D. W., Wain, L. V., Gharib, S. A., Obeidat, M., Tang, W., Zhai, G., Zhao, J. H., Smith, A. V., Huffman, J. E., Albrecht, E., Jackson, C. M., Evans, D. M., Cadby, G., Fornage, M., Manichaikul, A., Lopez, L. M., Johnson, T., Aldrich, M. C., Aspelund, T., Barroso, I., Campbell, H., Cassano, P. A., Couper, D. J., Eiriksdottir, G., Franceschini, N., Garcia, M., Gieger, C., Gislason, G. K., Grkovic, I., Hammond, C. J., Hancock, D. B., Harris, T. B., Ramasamy, A., Heckbert, S. R., Heliövaara, M., Homuth, G., Hysi, P. G., James, A. L., Jankovic, S., Joubert, B. R., Karrasch, S., Klopp, N., Koch, B., Kritchevsky, S. B., Launer, L. J., Liu, Y., Loehr, L. R., Lohman, K., Loos, R. J., Lumley, T., Al Balushi, K. A., Ang, W. Q., Barr, R. G., Beilby, J., Blakey, J. D., Boban, M., Boraska, V., Brisman, J., Britton, J. R., Brusselle, G., Cooper, C., Curjuric, I., Dahgam, S., Deary, I. J., Ebrahim, S., Eijgelsheim, M., Francks, C., Gaysina, D., Granell, R., Gu, X., Hankinson, J. L., Hardy, R., Harris, S. E., Henderson, J., Henry, A., Hingorani, A. D., Hofman, A., Holt, P. G., Hui, J., Hunter, M. L., Imboden, M., Jameson, K. A., Kerr, S. M., Kolcic, I., Kronenberg, F., Liu, J. Z., Marchini, J., McKeever, T., Morris, A. D., Olin, A. C., Porteous, D. J., Postma, D. S., Rich, S. S., Ring, S. M., Rivadeneira, F., Rochat, T., Sayer, A. A., Sayers, I., Sly, P. D., Smith, G. D., Sood, A., Starr, J. M., Uitterlinden, A. G., Vonk, J. M., Wannamethee, S. G., Whincup, P. H., Wijmenga, C., Williams, O. D., Wong, A., Mangino, M., Marciante, K. D., McArdle, W. L., Meibohm, B., Morrison, A. C., North, K. E., Omenaas, E., Palmer, L. J., Pietiläinen, K. H., Pin, I., Polašek, O., Pouta, A., Psaty, B. M., Hartikainen, A. L., Rantanen, T., Ripatti, S., Rotter, J. I., Rudan, I., Rudnicka, A. R., Schulz, H., Shin, S. Y., Spector, T. D., Surakka, I., Vitart, V., Völzke, H., Wareham, N. J., Warrington, N. M., Wichmann, H. E., Wild, S. H., Wilk, J. B., Wjst, M., Wright, A. F., Zgaga, L., Zemunik, T., Pennell, C. E., Nyberg, F., Kuh, D., Holloway, J. W., Boezen, H. M., Lawlor, D. A., Morris, R. W., Probst-Hensch, N., The International Lung Cancer Consortium, Giant consortium, Kaprio, J., Wilson, J. F., Hayward, C., Kähönen, M., Heinrich, J., Musk, A. W., Jarvis, D. L., Gläser, S., Järvelin, M. R., Ch Stricker, B. H., Elliott, P., O'Connor, G. T., Strachan, D. P., London, S. J., Hall, I. P., Gudnason, V., & Tobin, M. D. (2011). Genome-wide association and large-scale follow up identifies 16 new loci influencing lung function. Nature Genetics, 43, 1082-1090. doi:10.1038/ng.941.

    Abstract

    Pulmonary function measures reflect respiratory health and are used in the diagnosis of chronic obstructive pulmonary disease. We tested genome-wide association with forced expiratory volume in 1 second and the ratio of forced expiratory volume in 1 second to forced vital capacity in 48,201 individuals of European ancestry with follow up of the top associations in up to an additional 46,411 individuals. We identified new regions showing association (combined P < 5 × 10⁻⁸) with pulmonary function in or near MFAP2, TGFB2, HDAC4, RARB, MECOM (also known as EVI1), SPATA9, ARMC2, NCR3, ZKSCAN3, CDC123, C10orf11, LRP1, CCDC38, MMP15, CFDP1 and KCNE2. Identification of these 16 new loci may provide insight into the molecular mechanisms regulating pulmonary function and into molecular targets for future therapy to alleviate reduced lung function.
  • Assmann, M., Büring, D., Jordanoska, I., & Prüller, M. (2023). Towards a theory of morphosyntactic focus marking. Natural Language & Linguistic Theory. doi:10.1007/s11049-023-09567-4.

    Abstract

    Based on six detailed case studies of languages in which focus is marked morphosyntactically, we propose a novel formal theory of focus marking, which can capture these as well as the familiar English-type prosodic focus marking. Special attention is paid to the patterns of focus syncretism, that is, when different size and/or location of focus are indistinguishably realized by the same form.

    The key ingredients to our approach are that complex constituents (not just words) may be directly focally marked, and that the choice of focal marking is governed by blocking.
  • Avitabile, D., Crespi, A., Brioschi, C., Parente, V., Toietta, G., Devanna, P., Baruscotti, M., Truffa, S., Scavone, A., Rusconi, F., Biondi, A., D'Alessandra, Y., Vigna, E., DiFrancesco, D., Pesce, M., Capogrossi, M. C., & Barbuti, A. (2011). Human cord blood CD34+ progenitor cells acquire functional cardiac properties through a cell fusion process. American Journal of Physiology-Heart and Circulatory Physiology, 300(5), H1875-H1884. doi:10.1161/ATVBAHA.111.226969.

    Abstract

    The efficacy of cardiac repair by stem cell administration relies on a successful functional integration of injected cells into the host myocardium. Safety concerns have been raised about the possibility that stem cells may induce foci of arrhythmia in the ischemic myocardium. In a previous work (36), we showed that human cord blood CD34+ cells, when cocultured on neonatal mouse cardiomyocytes, exhibit excitation-contraction coupling features similar to those of cardiomyocytes, even though no human genes were upregulated. The aims of the present work are to investigate whether human CD34+ cells, isolated after 1 wk of coculture with neonatal ventricular myocytes, possess molecular and functional properties of cardiomyocytes and to discriminate, using a reporter gene system, whether cardiac differentiation derives from a (trans)differentiation or a cell fusion process. Umbilical cord blood CD34+ cells were isolated by a magnetic cell sorting method, transduced with a lentiviral vector carrying the enhanced green fluorescent protein (EGFP) gene, and seeded onto primary cultures of spontaneously beating rat neonatal cardiomyocytes. Cocultured EGFP+/CD34+-derived cells were analyzed for their electrophysiological features at different time points. After 1 wk in coculture, EGFP+ cells, in contact with cardiomyocytes, were spontaneously contracting and had a maximum diastolic potential (MDP) of −53.1 mV, while those that remained isolated from the surrounding myocytes did not contract and had a depolarized resting potential of −11.4 mV. Cells were then resuspended and cultured at low density to identify EGFP+ progenitor cell derivatives. Under these conditions, we observed single EGFP+ beating cells that had acquired an hyperpolarization-activated current typical of neonatal cardiomyocytes (EGFP+ cells, −2.24 ± 0.89 pA/pF; myocytes, −1.99 ± 0.63 pA/pF, at −125 mV). To discriminate between cell autonomous differentiation and fusion, EGFP+/CD34+ cells were cocultured with cardiac myocytes infected with a red fluorescence protein-lentiviral vector; under these conditions we found that 100% of EGFP+ cells were also red fluorescent protein positive, suggesting cell fusion as the mechanism by which cardiac functional features are acquired.
  • Baggio, G., & Hagoort, P. (2011). The balance between memory and unification in semantics: A dynamic account of the N400. Language and Cognitive Processes, 26, 1338-1367. doi:10.1080/01690965.2010.542671.

    Abstract

    At least three cognitive brain components are necessary in order for us to be able to produce and comprehend language: a Memory repository for the lexicon, a Unification buffer where lexical information is combined into novel structures, and a Control apparatus presiding over executive function in language. Here we describe the brain networks that support Memory and Unification in semantics. A dynamic account of their interactions is presented, in which a balance between the two components is sought at each word-processing step. We use the theory to provide an explanation of the N400 effect.
  • Bank, R., Crasborn, O., & Van Hout, R. (2011). Variation in mouth actions with manual signs in Sign Language of the Netherlands (NGT). Sign Language & Linguistics, 14(2), 248-270. doi:10.1075/sll.14.2.02ban.

    Abstract

    Mouthings and mouth gestures are omnipresent in Sign Language of the Netherlands (NGT). Mouthings in NGT commonly have their origin in spoken Dutch. We conducted a corpus study to explore how frequent mouthings in fact are in NGT, whether there is variation within and between signs in mouthings, and how frequent temporal reduction occurs in mouthings. Answers to these questions can help us classify mouthings as being specified in the sign lexicon or as being instances of code-blending. We investigated a sample of 20 frequently occurring signs. We found that each sign in the sample co-occurs frequently with a mouthing, usually that of a specific Dutch lexical item. On the other hand, signs show variation in the way they co-occur with mouthings and mouth gestures. By using a relatively large amount of natural data, we succeeded in gaining more insight into the way mouth actions are utilized in sign languages.

  • Barak, L., Harmon, Z., Feldman, N. H., Edwards, J., & Shafto, P. (2023). When children's production deviates from observed input: Modeling the variable production of the English past tense. Cognitive Science, 47(8): e13328. doi:10.1111/cogs.13328.

    Abstract

    As children gradually master grammatical rules, they often go through a period of producing form-meaning associations that were not observed in the input. For example, 2- to 3-year-old English-learning children use the bare form of verbs in settings that require obligatory past tense meaning while already starting to produce the grammatical –ed inflection. While many studies have focused on overgeneralization errors, fewer studies have attempted to explain the root of this earlier stage of rule acquisition. In this work, we use computational modeling to replicate children's production behavior prior to the generalization of past tense production in English. We illustrate how seemingly erroneous productions emerge in a model, without being licensed in the grammar and despite the model aiming at conforming to grammatical forms. Our results show that bare form productions stem from a tension between two factors: (1) trying to produce a less frequent meaning (the past tense) and (2) being unable to restrict the production of frequent forms (the bare form) as learning progresses. Like children, our model goes through a stage of bare form production and then converges on adult-like production of the regular past tense, showing that these different stages can be accounted for through a single learning mechanism.
  • Barendse, M. T., & Rosseel, Y. (2023). Multilevel SEM with random slopes in discrete data using the pairwise maximum likelihood. British Journal of Mathematical and Statistical Psychology, 76(2), 327-352. doi:10.1111/bmsp.12294.

    Abstract

    Pairwise maximum likelihood (PML) estimation is a promising method for multilevel models with discrete responses. Multilevel models take into account that units within a cluster tend to be more alike than units from different clusters. The pairwise likelihood is then obtained as the product of bivariate likelihoods for all within-cluster pairs of units and items. In this study, we investigate the PML estimation method with computationally intensive multilevel random intercept and random slope structural equation models (SEM) in discrete data. In pursuing this, we first reconsider the general ‘wide format’ (WF) approach for SEM models and then extend the WF approach with random slopes. In a small simulation study we determine the accuracy and efficiency of the PML estimation method by varying the sample size (250, 500, 1000, 2000), response scales (two-point, four-point), and data-generating model (mediation model with three random slopes, factor model with one and two random slopes). Overall, results show that the PML estimation method is capable of estimating computationally intensive random intercept and random slopes multilevel models in the SEM framework with discrete data and many (six or more) latent variables with satisfactory accuracy and efficiency. However, the condition with 250 clusters combined with a two-point response scale shows more bias.

    Additional information

    figures
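
    The abstract above defines the pairwise likelihood as the product of bivariate likelihoods over all within-cluster pairs of units and items. As a generic sketch of that construction (the notation here is illustrative and not taken from the paper), the log pairwise likelihood for a parameter vector $\theta$ can be written as

    \[
    \ell_{\mathrm{PL}}(\theta) = \sum_{c=1}^{C} \sum_{i<j} \log L\big(y_{ci}, y_{cj}; \theta\big),
    \]

    where $C$ is the number of clusters and the inner sum runs over all within-cluster pairs of discrete responses $(y_{ci}, y_{cj})$. Each bivariate term typically requires only a low-dimensional (e.g., bivariate normal) integral, so maximizing $\ell_{\mathrm{PL}}(\theta)$ sidesteps the high-dimensional integration that the full likelihood of a multilevel SEM with many latent variables would require.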
  • Barrios, A., & Garcia, R. (2023). Filipino children’s acquisition of nominal and verbal markers in L1 and L2 Tagalog. Languages, 8(3): 188. doi:10.3390/languages8030188.

    Abstract

    Western Austronesian languages, like Tagalog, have unique, complex voice systems that require the correct combinations of verbal and nominal markers, raising many questions about their learnability. In this article, we review the experimental and observational studies on both the L1 and L2 acquisition of Tagalog. The reviewed studies reveal error patterns that reflect the complex nature of the Tagalog voice system. The main goal of the article is to present a full picture of commission errors in young Filipino children’s expression of causation and agency in Tagalog by describing patterns of nominal marking and voice marking in L1 Tagalog and L2 Tagalog. It also aims to provide an overview of existing research, as well as characterize research on nominal and verbal acquisition, specifically in terms of research problems, data sources, and methodology. Additionally, we discuss the research gaps in at least fifty years’ worth of studies in the area from the 1960’s to the present, as well as ideas for future research to advance the state of the art.
  • Bastiaanse, R., & Ohlerth, A.-K. (2023). Presurgical language mapping: What are we testing? Journal of Personalized Medicine, 13: 376. doi:10.3390/jpm13030376.

    Abstract

    Gliomas are brain tumors infiltrating healthy cortical and subcortical areas that may host cognitive functions, such as language. If these areas are damaged during surgery, the patient might develop word retrieval or articulation problems. For this reason, many glioma patients are operated on awake, while their language functions are tested. For this practice, quite simple tests are used, for example, picture naming. This paper describes the process and timeline of picture naming (noun retrieval) and shows the timeline and localization of the distinguished stages. This is relevant information for presurgical language testing with navigated Transcranial Magnetic Stimulation (nTMS). This novel technique allows us to identify cortical areas involved in the language production process and, thus, guides the neurosurgeon in how to approach and remove the tumor. We argue that not only nouns, but also verbs should be tested, since sentences are built around verbs, and sentences are what we use in daily life. This approach’s relevance is illustrated by two case studies of glioma patients.
  • Bauer, B. L. M. (2011). [Review of the book Het einde van de standaardtaal. Een wisseling van Europese cultuur. The end of standard language. A change in European language culture by Joop van der Horst]. Folia Linguistica Historica, 32(1), 253-260. doi:10.1515/flih.2011.009.
  • Bauer, B. L. M. (2023). Multiplication, addition, and subtraction in numerals: Formal variation in Latin’s decads+ from an Indo-European perspective. Journal of Latin Linguistics, 22(1), 1-56. doi:10.1515/joll-2023-2001.

    Abstract

    While formal variation in Latin’s numerals is generally acknowledged, little is known about (relative) incidence, distribution, context, or linguistic productivity. Addressing this lacuna, this article examines “decads+” in Latin, which convey the numbers between the full decads: the teens (‘eleven’ through ‘nineteen’) as well as the numerals between the higher decads starting at ‘twenty-one’ through ‘ninety-nine’. Latin’s decads+ are compounds and prone to variation. The data, which are drawn from a variety of sources, reveal (a) substantial formal variation in Latin, both internally and typologically; (b) co-existence of several types of formation; (c) productivity of potential borrowings; (d) resilience of early formations; (e) patterns in structure and incidence that anticipate the Romance numerals; and (f) historical trends. From a typological and general linguistic perspective as well, Latin’s decads+ are most relevant because their formal variation involves sequence, connector, and arithmetical operations and because their historical depth shows a gradual shift away from widespread formal variation, eventually resulting in the relatively rigid system found in Romance. Moreover, the combined system attested in decads+ in Latin – based on a combination of inherited, innovative and borrowed patterns and reflecting different stages of development – presents a number of typological inconsistencies that require further assessment.

  • Bayram, A., Bayraktaroglu, Z., Karahan, E., Erdogan, B., Bilgic, B., Ozker, M., Kasikci, I., Duru, A., Ademoglu, A., Öztürk, C., Arikan, K., Tarhan, N., & Demiralp, T. (2011). Simultaneous EEG/fMRI analysis of the resonance phenomena in steady-state visual evoked responses. Clinical EEG and Neuroscience, 42(2), 98-106. doi:10.1177/155005941104200210.
  • Benetti, S., Ferrari, A., & Pavani, F. (2023). Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Frontiers in Human Neuroscience, 17: 1108354. doi:10.3389/fnhum.2023.1108354.

    Abstract

    In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective (“lateral processing pathway”). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
  • Bergelson, E., Soderstrom, M., Schwarz, I.-C., Rowland, C. F., Ramírez-Esparza, N., Rague Hamrick, L., Marklund, E., Kalashnikova, M., Guez, A., Casillas, M., Benetti, L., Van Alphen, P. M., & Cristia, A. (2023). Everyday language input and production in 1,001 children from six continents. Proceedings of the National Academy of Sciences of the United States of America, 120(52): 2300671120. doi:10.1073/pnas.2300671120.

    Abstract

    Language is a universal human ability, acquired readily by young children, who otherwise struggle with many basics of survival. And yet, language ability is variable across individuals. Naturalistic and experimental observations suggest that children’s linguistic skills vary with factors like socioeconomic status and children’s gender. But which factors really influence children’s day-to-day language use? Here, we leverage speech technology in a big-data approach to report on a unique cross-cultural and diverse data set: >2,500 d-long, child-centered audio-recordings of 1,001 2- to 48-mo-olds from 12 countries spanning six continents across urban, farmer-forager, and subsistence-farming contexts. As expected, age and language-relevant clinical risks and diagnoses predicted how much speech (and speech-like vocalization) children produced. Critically, so too did adult talk in children’s environments: Children who heard more talk from adults produced more speech. In contrast to previous conclusions based on more limited sampling methods and a different set of language proxies, socioeconomic status (operationalized as maternal education) was not significantly associated with children’s productions over the first 4 y of life, and neither were gender or multilingualism. These findings from large-scale naturalistic data advance our understanding of which factors are robust predictors of variability in the speech behaviors of young learners in a wide range of everyday contexts.
  • Bien, H., Baayen, H. R., & Levelt, W. J. M. (2011). Frequency effects in the production of Dutch deverbal adjectives and inflected verbs. Language and Cognitive Processes, 26, 683-715. doi:10.1080/01690965.2010.511475.

    Abstract

    In two experiments, we studied the role of frequency information in the production of deverbal adjectives and inflected verbs in Dutch. Naming latencies were triggered in a position-response association task and analysed using stepwise mixed-effects modelling, with subject and word as crossed random effects. The production latency of deverbal adjectives was affected by the cumulative frequencies of their verbal stems, arguing for decomposition and against full listing. However, for the inflected verbs, there was an inhibitory effect of Inflectional Entropy, and a nonlinear effect of Lemma Frequency. Additional effects of Position-specific Neighbourhood Density and Cohort Entropy in both types of words underline the importance of paradigmatic relations in the mental lexicon. Taken together, the data suggest that the word-form level contains neither full forms nor strictly separated morphemes, but rather morphemes with links to phonologically and, in the case of inflected verbs, morphologically related word forms.
  • Blasi, A., Mercure, E., Lloyd-Fox, S., Thomson, A., Brammer, M., Sauter, D., Deeley, Q., Barker, G. J., Renvall, V., Deoni, S., Gasston, D., Williams, S. C., Johnson, M. H., Simmons, A., & Murphy, D. G. (2011). Early specialization for voice and emotion processing in the infant brain. Current Biology, 21, 1220-1224. doi:10.1016/j.cub.2011.06.009.

    Abstract

    Human voices play a fundamental role in social communication, and areas of the adult ‘social brain’ show specialization for processing voices and their emotional content (superior temporal sulcus - STS, inferior prefrontal cortex, premotor cortical regions, amygdala and insula [1-8]). However, it is unclear when this specialization develops. Functional magnetic resonance imaging (fMRI) studies suggest the infant temporal cortex does not differentiate speech from music or backward speech [10, 11], but a prior study with functional near infrared spectroscopy revealed preferential activation for human voices in 7-month-olds, in a more posterior location of the temporal cortex than in adults [12]. Yet, the brain networks involved in processing non-speech human vocalizations in early development are still unknown. For this purpose, in the present fMRI study, 3- to 7-month-olds were presented with adult non-speech vocalizations (emotionally neutral, emotionally positive and emotionally negative), and non-vocal environmental sounds. Infants displayed significant activation in the anterior portion of the temporal cortex, similarly to adults [1]. Moreover, sad vocalizations modulated the activity of brain regions known to be involved in processing affective stimuli such as the orbitofrontal cortex [13] and insula [7, 8]. These results suggest remarkably early functional specialization for processing human voice and negative emotions.
  • Bögels, S., Schriefers, H. J., Vonk, W., & Chwilla, D. (2011). Prosodic breaks in sentence processing investigated by event-related potentials. Language and Linguistics Compass, 5, 424-440. doi:10.1111/j.1749-818X.2011.00291.x.

    Abstract

    Prosodic breaks (PBs) can indicate a sentence’s syntactic structure. Event-related brain potentials (ERPs) are an excellent way to study auditory sentence processing, since they provide an on-line measure across a complete sentence, in contrast to other on- and off-line methods. ERPs for the first time allowed investigating the processing of a PB itself. PBs reliably elicit a closure positive shift (CPS). We first review several studies on the CPS, leading to the conclusion that it is elicited by abstract structuring or phrasing of the input. Then we review ERP findings concerning the role of PBs in sentence processing as indicated by ERP components like the N400, P600 and LAN. We focus on whether and how PBs can (help to) disambiguate locally ambiguous sentences. Differences in results between different studies can be related to differences in items, initial parsing preferences and tasks. Finally, directions for future research are discussed.
  • Bögels, S., Schriefers, H., Vonk, W., & Chwilla, D. (2011). Pitch accents in context: How listeners process accentuation in referential communication. Neuropsychologia, 49, 2022-2036. doi:10.1016/j.neuropsychologia.2011.03.032.

    Abstract

    We investigated whether listeners are sensitive to (mis)matching accentuation patterns with respect to contrasts in the linguistic and visual context, using Event-Related Potentials. We presented participants with displays of two pictures followed by a spoken reference to one of these pictures (e.g., “the red ball”). The referent was contrastive with respect to the linguistic context (utterance in the previous trial: e.g., “the blue ball”) or with respect to the visual context (other picture in the display; e.g., a display with a red ball and a blue ball). The spoken reference carried a pitch accent on the noun (“the red BALL”) or on the adjective (“the RED ball”), or an intermediate (‘neutral’) accentuation. For the linguistic context, we found evidence for the Missing Accent Hypothesis: Listeners showed processing difficulties, in the form of increased negativities in the ERPs, for missing accents, but not for superfluous accents. ‘Neutral’ or intermediate accents were interpreted as ‘missing’ accents when they occurred late in the referential utterance, but not when they occurred early. For the visual context, we found evidence for the Missing Accent Hypothesis for a missing accent on the adjective (an increase in negativity in the ERPs) and a superfluous accent on the noun (no effect). However, a redundant color adjective (e.g., in the case of a display with a red ball and a red hat) led to less processing problems when the adjective carried a pitch accent.

  • Bögels, S., Schriefers, H., Vonk, W., & Chwilla, D. J. (2011). The role of prosodic breaks and pitch accents in grouping words during on-line sentence processing. Journal of Cognitive Neuroscience, 23, 2447-2467. doi:10.1162/jocn.2010.21587.

    Abstract

    The present study addresses the question whether accentuation and prosodic phrasing can have a similar function, namely, to group words in a sentence together. Participants listened to locally ambiguous sentences containing object- and subject-control verbs while ERPs were measured. In Experiment 1, these sentences contained a prosodic break, which can create a certain syntactic grouping of words, or no prosodic break. At the disambiguation, an N400 effect occurred when the disambiguation was in conflict with the syntactic grouping created by the break. We found a similar N400 effect without the break, indicating that the break did not strengthen an already existing preference. This pattern held for both object- and subject-control items. In Experiment 2, the same sentences contained a break and a pitch accent on the noun following the break. We argue that the pitch accent indicates a broad focus covering two words [see Gussenhoven, C. On the limits of focus projection in English. In P. Bosch & R. van der Sandt (Eds.), Focus: Linguistic, cognitive, and computational perspectives. Cambridge: University Press, 1999], thus grouping these words together. For object-control items, this was semantically possible, which led to a “good-enough” interpretation of the sentence. Therefore, both sentences were interpreted equally well and the N400 effect found in Experiment 1 was absent. In contrast, for subject-control items, a corresponding grouping of the words was impossible, both semantically and syntactically, leading to processing difficulty in the form of an N400 effect and a late positivity. In conclusion, accentuation can group words together on the level of information structure, leading to either a semantically “good-enough” interpretation or a processing problem when such a semantic interpretation is not possible.
  • Bögels, S., & Levinson, S. C. (2023). Ultrasound measurements of interactive turn-taking in question-answer sequences: Articulatory preparation is delayed but not tied to the response. PLoS One, 18: e0276470. doi:10.1371/journal.pone.0276470.

    Abstract

    We know that speech planning in conversational turn-taking can happen in overlap with the previous turn and research suggests that it starts as early as possible, that is, as soon as the gist of the previous turn becomes clear. The present study aimed to investigate whether planning proceeds all the way up to the last stage of articulatory preparation (i.e., putting the articulators in place for the first phoneme of the response) and what the timing of this process is. Participants answered pre-recorded quiz questions (being under the illusion that they were asked live), while their tongue movements were measured using ultrasound. Planning could start early for some quiz questions (i.e., midway during the question), but late for others (i.e., only at the end of the question). The results showed no evidence for a difference between tongue movements in these two types of questions for at least two seconds after planning could start in early-planning questions, suggesting that speech planning in overlap with the current turn proceeds more slowly than in the clear. On the other hand, when time-locking to speech onset, tongue movements differed between the two conditions from up to two seconds before this point. This suggests that articulatory preparation can occur in advance and is not fully tied to the overt response itself.

    Additional information

    supporting information
  • Wu, M., Bosker, H. R., & Riecke, L. (2023). Sentential contextual facilitation of auditory word processing builds up during sentence tracking. Journal of Cognitive Neuroscience, 35(8), 1262-1278. doi:10.1162/jocn_a_02007.

    Abstract

    While listening to meaningful speech, auditory input is processed more rapidly near the end (vs. beginning) of sentences. Although several studies have shown such word-to-word changes in auditory input processing, it is still unclear from which processing level these word-to-word dynamics originate. We investigated whether predictions derived from sentential context can result in auditory word-processing dynamics during sentence tracking. We presented healthy human participants with auditory stimuli consisting of word sequences, arranged into either predictable (coherent sentences) or less predictable (unstructured, random word sequences) 42-Hz amplitude-modulated speech, and a continuous 25-Hz amplitude-modulated distractor tone. We recorded RTs and frequency-tagged neuroelectric responses (auditory steady-state responses) to individual words at multiple temporal positions within the sentences, and quantified sentential context effects at each position while controlling for individual word characteristics (i.e., phonetics, frequency, and familiarity). We found that sentential context increasingly facilitates auditory word processing as evidenced by accelerated RTs and increased auditory steady-state responses to later-occurring words within sentences. These purely top–down contextually driven auditory word-processing dynamics occurred only when listeners focused their attention on the speech and did not transfer to the auditory processing of the concurrent distractor tone. These findings indicate that auditory word-processing dynamics during sentence tracking can originate from sentential predictions. The predictions depend on the listeners' attention to the speech, and affect only the processing of the parsed speech, not that of concurrently presented auditory streams.
  • Bramão, I., Reis, A., Petersson, K. M., & Faísca, L. (2011). The role of color in object recognition: A review and meta-analysis. Acta Psychologica, 138, 244-253. doi:10.1016/j.actpsy.2011.06.010.

    Abstract

    In this study, we systematically review the scientific literature on the effect of color on object recognition. Thirty-five independent experiments, comprising 1535 participants, were included in a meta-analysis. We found a moderate effect of color on object recognition (d = 0.28). Specific effects of moderator variables were analyzed and we found that color diagnosticity is the factor with the greatest moderator effect on the influence of color in object recognition; studies using color diagnostic objects showed a significant color effect (d = 0.43), whereas a marginal color effect was found in studies that used non-color diagnostic objects (d = 0.18). The present study did not permit the drawing of specific conclusions about the moderator effect of the object recognition task; while the meta-analytic review showed that color information improves object recognition mainly in studies using naming tasks (d = 0.36), the literature review revealed a large body of evidence showing positive effects of color information on object recognition in studies using a large variety of visual recognition tasks. We also found that color is important for the ability to recognize artifacts and natural objects, to recognize objects presented as types (line-drawings) or as tokens (photographs), and to recognize objects that are presented without surface details, such as texture or shadow. Taken together, the results of the meta-analysis strongly support the contention that color plays a role in object recognition. This suggests that the role of color should be taken into account in models of visual object recognition.

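    As a companion to the meta-analysis above, the following is a minimal fixed-effect pooling sketch using inverse-variance weights; the effect sizes and variances are invented, and the paper's actual (moderator-aware) analysis is not reproduced here.

```python
# Minimal fixed-effect meta-analysis sketch: pool standardized mean
# differences (Cohen's d) by inverse-variance weighting. The d values and
# variances below are made up for illustration; they are not the data
# behind the reported d = 0.28.
import numpy as np

d = np.array([0.45, 0.10, 0.30, 0.25])     # per-study effect sizes (hypothetical)
var = np.array([0.02, 0.05, 0.03, 0.04])   # per-study sampling variances (hypothetical)

w = 1.0 / var                              # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)       # pooled effect size
se_pooled = np.sqrt(1.0 / np.sum(w))       # standard error of the pooled estimate
ci = (d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled)
print(round(d_pooled, 3), ci)
```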
  • Bramão, I., Inácio, F., Faísca, L., Reis, A., & Petersson, K. M. (2011). The influence of color information on the recognition of color diagnostic and noncolor diagnostic objects. The Journal of General Psychology, 138(1), 49-65. doi:10.1080/00221309.2010.533718.

    Abstract

    In the present study, the authors explore in detail the level of visual object recognition at which perceptual color information improves the recognition of color diagnostic and noncolor diagnostic objects. To address this issue, 3 object recognition tasks, with different cognitive demands, were designed: (a) an object verification task; (b) a category verification task; and (c) a name verification task. They found that perceptual color information improved color diagnostic object recognition mainly in tasks for which access to the semantic knowledge about the object was necessary to perform the task; that is, in category and name verification. In contrast, the authors found that perceptual color information facilitates noncolor diagnostic object recognition when access to the object’s structural description from long-term memory was necessary—that is, object verification. In summary, the present study shows that the role of perceptual color information in object recognition is dependent on color diagnosticity.
  • Brandmeyer, A., Sadakata, M., Timmers, R., & Desain, P. (2011). Learning expressive percussion performance under different visual feedback conditions. Psychological Research, 75, 107-121. doi:10.1007/s00426-010-0291-6.

    Abstract

    A study was conducted to test the effect of two different forms of real-time visual feedback on expressive percussion performance. Conservatory percussion students performed imitations of recorded teacher performances while receiving either high-level feedback on the expressive style of their performances, low-level feedback on the timing and dynamics of the performed notes, or no feedback. The high-level feedback was based on a Bayesian analysis of the performances, while the low-level feedback was based on the raw participant timing and dynamics data. Results indicated that neither form of feedback led to significantly smaller timing and dynamics errors. However, high-level feedback did lead to a higher proficiency in imitating the expressive style of the target performances, as indicated by a probabilistic measure of expressive style. We conclude that, while potentially disruptive to timing processes involved in music performance due to extraneous cognitive load, high-level visual feedback can improve participant imitations of expressive performance features.
  • Braun, B., Dainora, A., & Ernestus, M. (2011). An unfamiliar intonation contour slows down online speech comprehension. Language and Cognitive Processes, 26(3), 350-375. doi:10.1080/01690965.2010.492641.

    Abstract

    This study investigates whether listeners' familiarity with an intonation contour affects speech processing. In three experiments, Dutch participants heard Dutch sentences with normal intonation contours and with unfamiliar ones and performed word-monitoring, lexical decision, or semantic categorisation tasks (the latter two with cross-modal identity priming). The unfamiliar intonation contour slowed down participants on all tasks, which demonstrates that an unfamiliar intonation contour has a robust detrimental effect on speech processing. Since cross-modal identity priming with a lexical decision task taps into lexical access, this effect obtained in this task suggests that an unfamiliar intonation contour hinders lexical access. Furthermore, results from the semantic categorisation task show that the effect of an uncommon intonation contour is long-lasting and hinders subsequent processing. Hence, intonation not only contributes to utterance meaning (emotion, sentence type, and focus), but also affects crucial aspects of the speech comprehension process and is more important than previously thought.
  • Braun, B., & Tagliapietra, L. (2011). On-line interpretation of intonational meaning in L2. Language and Cognitive Processes, 26(2), 224-235. doi:10.1080/01690965.2010.486209.

    Abstract

    Despite their relatedness, Dutch and German differ in the interpretation of a particular intonation contour, the hat pattern. In the literature, this contour has been described as neutral for Dutch, and as contrastive for German. A recent study supports the idea that Dutch listeners interpret this contour neutrally, compared to the contrastive interpretation of a lexically identical utterance realised with a double peak pattern. In particular, this study showed shorter lexical decision latencies to visual targets (e.g., PELIKAAN, “pelican”) following a contrastively related prime (e.g., flamingo, “flamingo”) only when the primes were embedded in sentences with a contrastive double peak contour, not in sentences with a neutral hat pattern. The present study replicates Experiment 1a of Braun and Tagliapietra (2009) with German learners of Dutch. Highly proficient learners of Dutch differed from Dutch natives in that they showed reliable priming effects for both intonation contours. Thus, the interpretation of intonational meaning in L2 appears to be fast, automatic, and driven by the associations learned in the native language.
  • Braun, B., Lemhöfer, K., & Mani, N. (2011). Perceiving unstressed vowels in foreign-accented English. Journal of the Acoustical Society of America, 129, 376-387. doi:10.1121/1.3500688.

    Abstract

    This paper investigated how foreign-accented stress cues affect on-line speech comprehension in British speakers of English. While unstressed English vowels are usually reduced to /ə/, Dutch speakers of English only slightly centralize them. Speakers of both languages differentiate stress by suprasegmentals (duration and intensity). In a cross-modal priming experiment, English listeners heard sentences ending in monosyllabic prime fragments—produced by either an English or a Dutch speaker of English—and performed lexical decisions on visual targets. Primes were either stress-matching (“ab” excised from absurd), stress-mismatching (“ab” from absence), or unrelated (“pro” from profound) with respect to the target (e.g., ABSURD). Results showed a priming effect for stress-matching primes only when produced by the English speaker, suggesting that vowel quality is a more important cue to word stress than suprasegmental information. Furthermore, for visual targets with word-initial secondary stress that do not require vowel reduction (e.g., CAMPAIGN), resembling the Dutch way of realizing stress, there was a priming effect for both speakers. Hence, our data suggest that Dutch-accented English is not harder to understand in general, but it is in instances where the language-specific implementation of lexical stress differs across languages.
  • Broeder, D., Schonefeld, O., Trippel, T., Van Uytvanck, D., & Witt, A. (2011). A pragmatic approach to XML interoperability — the Component Metadata Infrastructure (CMDI). Proceedings of Balisage: The Markup Conference 2011. Balisage Series on Markup Technologies, 7. doi:10.4242/BalisageVol7.Broeder01.
  • Broersma, M., & Cutler, A. (2011). Competition dynamics of second-language listening. Quarterly Journal of Experimental Psychology, 64, 74-95. doi:10.1080/17470218.2010.499174.

    Abstract

    Spoken-word recognition in a nonnative language is particularly difficult where it depends on discrimination between confusable phonemes. Four experiments here examine whether this difficulty is in part due to phantom competition from “near-words” in speech. Dutch listeners confuse English /æ/ and /ɛ/, which could lead to the sequence daf being interpreted as deaf, or lemp being interpreted as lamp. In auditory lexical decision, Dutch listeners indeed accepted such near-words as real English words more often than English listeners did. In cross-modal priming, near-words extracted from word or phrase contexts (daf from DAFfodil, lemp from eviL EMPire) induced activation of corresponding real words (deaf; lamp) for Dutch, but again not for English, listeners. Finally, by the end of untruncated carrier words containing embedded words or near-words (definite; daffodil) no activation of the real embedded forms (deaf in definite) remained for English or Dutch listeners, but activation of embedded near-words (deaf in daffodil) did still remain, for Dutch listeners only. Misinterpretation of the initial vowel here favoured the phantom competitor and disfavoured the carrier (lexically represented as containing a different vowel). Thus, near-words compete for recognition and continue competing for longer than actually embedded words; nonnative listening indeed involves phantom competition.
  • Brown, A., & Gullberg, M. (2011). Bidirectional cross-linguistic influence in event conceptualization? Expressions of Path among Japanese learners of English. Bilingualism: Language and Cognition, 14, 79-94. doi:10.1017/S1366728910000064.

    Abstract

    Typological differences in expressions of motion are argued to have consequences for event conceptualization. In SLA, studies generally find transfer of L1 expressions and accompanying event construals, suggesting resistance to the restructuring of event conceptualization. The current study tackles such restructuring in SLA within the context of bidirectional cross-linguistic influence, focusing on expressions of Path in English and Japanese. We probe the effects of lexicalization patterns on event construal by focusing on different Path components: Source, Via and Goal. Crucially, we compare the same speakers performing both in the L1 and L2 to ascertain whether the languages influence each other. We argue for the potential for restructuring, even at modest levels of L2 proficiency, by showing that not only do L1 patterns shape construal in the L2, but that L2 patterns may subtly and simultaneously broaden construal in the L1 within an individual learner.
  • Brown, P. (2011). Color me bitter: Crossmodal compounding in Tzeltal perception words. The Senses & Society, 6(1), 106-116. doi:10.2752/174589311X12893982233957.

    Abstract

    Within a given language and culture, distinct sensory modalities are often given differential linguistic treatment in ways reflecting cultural ideas about, and uses for, the senses. This article reports on sensory expressions in the Mayan language Tzeltal, spoken in southeastern Mexico. Drawing both on data derived from Tzeltal consultants’ responses to standardized sensory elicitation stimuli and on sensory descriptions produced in more natural contexts, I examine words characterizing sensations in the domains of color and taste. In just these two domains, a limited set of basic terms along with productive word-formation processes of compounding and reduplication are used in analogous ways to produce words that distinguish particular complex sensations or gestalts: e.g. in the color domain, yax-boj-boj (yax ‘grue’ + boj ‘cut’), of mouth stained green from eating green vegetables, or, in the taste domain, chi’-pik-pik (chi’ ‘sweet/salty’ + pik ‘touch’) of a slightly prickly salty taste. I relate the semantics of crossmodal compounds to material technologies involving color and taste (weaving, food production), and to ideas about “hot”/“cold” categories, which provide a cultural rationale for eating practices and medical interventions. I argue that language plays a role in promoting crossmodal associations, resulting in a (partially) culture-specific construction of sensory experience.
  • Brown, P. (1989). [Review of the book Language, gender, and sex in comparative perspective ed. by Susan U. Philips, Susan Steele, and Christine Tanz]. Man, 24(1), 192.
  • Brown-Schmidt, S., & Konopka, A. E. (2011). Experimental approaches to referential domains and the on-line processing of referring expressions in unscripted conversation. Information, 2, 302-326. doi:10.3390/info2020302.

    Abstract

    This article describes research investigating the on-line processing of language in unscripted conversational settings. In particular, we focus on the process of formulating and interpreting definite referring expressions. Within this domain we present results of two eye-tracking experiments addressing the problem of how speakers interrogate the referential domain in preparation to speak, how they select an appropriate expression for a given referent, and how addressees interpret these expressions. We aim to demonstrate that it is possible, and indeed fruitful, to examine unscripted, conversational language using modified experimental designs and standard hypothesis testing procedures.
  • Bruggeman, L., & Cutler, A. (2023). Listening like a native: Unprofitable procedures need to be discarded. Bilingualism: Language and Cognition, 26(5), 1093-1102. doi:10.1017/S1366728923000305.

    Abstract

    Two languages, historically related, both have lexical stress, with word stress distinctions signalled in each by the same suprasegmental cues. In each language, words can overlap segmentally but differ in placement of primary versus secondary stress (OCtopus, ocTOber). However, secondary stress occurs more often in the words of one language, Dutch, than in the other, English, and largely because of this, Dutch listeners find it helpful to use suprasegmental stress cues when recognising spoken words. English listeners, in contrast, do not; indeed, Dutch listeners can outdo English listeners in correctly identifying the source words of English word fragments (oc-). Here we show that Dutch-native listeners who reside in an English-speaking environment and have become dominant in English, though still maintaining their use of these stress cues in their L1, ignore the same cues in their L2 English, performing as poorly in the fragment identification task as L1 English listeners do.
  • De Bruin, A., De Groot, A., De Heer, L., Bok, J., Wielinga, P., Hamans, M., van Rotterdam, B., & Janse, I. (2011). Detection of Coxiella burnetii in complex matrices by using multiplex quantitative PCR during a major Q fever outbreak in the Netherlands. Applied and Environmental Microbiology, 77, 6516-6523. doi:10.1128/AEM.05097-11.

    Abstract

    Q fever, caused by Coxiella burnetii, is a zoonosis with a worldwide distribution. A large rural area in the southeast of the Netherlands was heavily affected by Q fever between 2007 and 2009. This initiated the development of a robust and internally controlled multiplex quantitative PCR (qPCR) assay for the detection of C. burnetii DNA in veterinary and environmental matrices on suspected Q fever-affected farms. The qPCR detects three C. burnetii targets (icd, com1, and IS1111) and one Bacillus thuringiensis internal control target (cry1b). Bacillus thuringiensis spores were added to samples to control both DNA extraction and PCR amplification. The performance of the qPCR assay was investigated and showed a high efficiency; a limit of detection of 13.0, 10.6, and 10.4 copies per reaction for the targets icd, com1, and IS1111, respectively; and no cross-reactivity with the nontarget organisms tested. Screening for C. burnetii DNA on 29 suspected Q fever-affected farms during the Q fever epidemic in 2008 showed that swabs from dust-accumulating surfaces contained higher levels of C. burnetii DNA than vaginal swabs from goats or sheep. PCR inhibition by coextracted substances was observed in some environmental samples, and 10- or 100-fold dilutions of samples were sufficient to obtain interpretable signals for both the C. burnetii targets and the internal control. The inclusion of an internal control target and three C. burnetii targets in one multiplex qPCR assay showed that complex veterinary and environmental matrices can be screened reliably for the presence of C. burnetii DNA during an outbreak.
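    To illustrate the quantitative side of such qPCR assays, the sketch below fits a standard curve (Cq against log10 copy number) and inverts it for unknown samples; all numbers are invented for illustration and are not the assay parameters or calibration reported in the paper.

```python
# Minimal sketch of absolute qPCR quantification from a standard curve:
# fit Cq against log10(copy number) for a dilution series, then invert the
# fit for unknown samples. All values are invented placeholders.
import numpy as np

std_copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5])    # standards (copies/reaction)
std_cq = np.array([33.1, 29.6, 26.2, 22.8, 19.4])    # measured Cq values (hypothetical)

slope, intercept = np.polyfit(np.log10(std_copies), std_cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1                # ~1.0 means 100% amplification efficiency

def copies_from_cq(cq):
    """Invert the standard curve: Cq -> estimated copies per reaction."""
    return 10 ** ((cq - intercept) / slope)

print(round(efficiency, 2), round(copies_from_cq(27.5), 1))
```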
  • Bulut, T. (2023). Domain‐general and domain‐specific functional networks of Broca's area underlying language processing. Brain and Behavior, 13(7): e3046. doi:10.1002/brb3.3046.

    Abstract

    Introduction
    Despite abundant research on the role of Broca's area in language processing, there is still no consensus on language specificity of this region and its connectivity network.

    Methods
    The present study employed the meta-analytic connectivity modeling procedure to identify and compare domain-specific (language-specific) and domain-general (shared between language and other domains) functional connectivity patterns of three subdivisions within the broadly defined Broca's area: pars opercularis (IFGop), pars triangularis (IFGtri), and pars orbitalis (IFGorb) of the left inferior frontal gyrus.

    Results
    The findings revealed a left-lateralized frontotemporal network for all regions of interest underlying domain-specific linguistic functions. The domain-general network, however, spanned frontoparietal regions that overlap with the multiple-demand network, as well as subcortical regions including the thalamus and the basal ganglia.

    Conclusions
    The findings suggest that language specificity of Broca's area emerges within a left-lateralized frontotemporal network, and that domain-general resources are garnered from frontoparietal and subcortical networks when required by task demands.

    Additional information

    Supporting Information
    Data availability
  • Burba, I., Colombo, G. I., Staszewsky, L. I., De Simone, M., Devanna, P., Nanni, S., Avitabile, D., Molla, F., Cosentino, S., Russo, I., De Angelis, N., Soldo, A., Biondi, A., Gambini, E., Gaetano, C., Farsetti, A., Pompilio, G., Latini, R., Capogrossi, M. C., & Pesce, M. (2011). Histone Deacetylase Inhibition Enhances Self Renewal and Cardioprotection by Human Cord Blood-Derived CD34+ Cells. PLoS One, 6(7): e22158. doi:10.1371/journal.pone.0022158.

    Abstract

    Use of peripheral blood- or bone marrow-derived progenitors for ischemic heart repair is a feasible option to induce neo-vascularization in ischemic tissues. These cells, named Endothelial Progenitor Cells (EPCs), have been extensively characterized phenotypically and functionally. The clinical efficacy of cardiac repair by EPCs remains, however, limited, due to cell autonomous defects as a consequence of risk factors. The devising of “enhancement” strategies has therefore been sought to improve the repair ability of these cells and increase the clinical benefit.
  • Burenhult, N. (2011). [Review of the book New approaches to Slavic verbs of motion ed. by Victoria Hasko and Renee Perelmutter]. Linguistics, 49, 645-648.
  • Burenhult, N., & Majid, A. (2011). Olfaction in Aslian ideology and language. The Senses & Society, 6(1), 19-29. doi:10.2752/174589311X12893982233597.

    Abstract

    The cognitive sciences and neurosciences have supposed that the perceptual world of the individual is dominated by vision, followed closely by audition, but that olfaction is merely vestigial. Aslian-speaking communities (Austroasiatic, Malay Peninsula) challenge this view. For the Jahai - a small group of rainforest foragers - odor plays a central role in both culture and language. Jahai ideology revolves around a complex set of beliefs that structures the human relationship with the supernatural. Central to this relationship are hearing, vision, and olfaction. In Jahai language, olfaction also receives special attention. There are at least a dozen or so abstract descriptive odor categories that are basic, everyday terms. This lexical elaboration of odor is not unique to the Jahai but can be seen across many contemporary Austroasiatic languages and transcends major cultural and environmental boundaries. These terms appear to be inherited from ancestral language states, suggesting a longstanding preoccupation with odor in this part of the world. Contrary to the prevailing assumption in the cognitive sciences, these languages and cultures demonstrate that odor is far from vestigial in humans.
  • Bürki, A., Ernestus, M., Gendrot, C., Fougeron, C., & Frauenfelder, U. H. (2011). What affects the presence versus absence of schwa and its duration: A corpus analysis of French connected speech. Journal of the Acoustical Society of America, 130, 3980-3991. doi:10.1121/1.3658386.

    Abstract

    This study presents an analysis of over 4000 tokens of words produced as variants with and without schwa in a French corpus of radio-broadcasted speech. In order to determine which of the many variables mentioned in the literature influence variant choice, 17 predictors were tested in the same analysis. Only five of these variables appeared to condition variant choice. The question of the processing stage, or locus, of this alternation process is also addressed in a comparison of the variables that predict variant choice with the variables that predict the acoustic duration of schwa in variants with schwa. Only two variables predicting variant choice also predict schwa duration. The limited overlap between the predictors for variant choice and for schwa duration, combined with the nature of these variables, suggest that the variants without schwa do not result from a phonetic process of reduction; that is, they are not the endpoint of gradient schwa shortening. Rather, these variants are generated early in the production process, either during phonological encoding or word-form retrieval. These results, based on naturally produced speech, provide a useful complement to on-line production experiments using artificial speech tasks.
  • Carota, F., Nili, H., Kriegeskorte, N., & Pulvermüller, F. (2023). Experientially-grounded and distributional semantic vectors uncover dissociable representations of semantic categories. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2023.2232481.

    Abstract

    Neuronal populations code similar concepts by similar activity patterns across the human brain's semantic networks. However, it is unclear to what extent such meaning-to-symbol mapping reflects distributional statistics, or experiential information grounded in sensorimotor and emotional knowledge. We asked whether integrating distributional and experiential data better distinguished conceptual categories than each method taken separately. We examined the similarity structure of fMRI patterns elicited by visually presented action- and object-related words using representational similarity analysis (RSA). We found that the distributional and experiential/integrative models respectively mapped the high-dimensional semantic space in left inferior frontal, anterior temporal, and in left precentral, posterior inferior/middle temporal cortex. Furthermore, results from model comparisons uncovered category-specific similarity patterns, as both distributional and experiential models matched the similarity patterns for action concepts in left fronto-temporal cortex, whilst the experiential/integrative (but not distributional) models matched the similarity patterns for object concepts in left fusiform and angular gyrus.
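    The core RSA computation referred to above can be sketched compactly: build a representational dissimilarity matrix (RDM) from activity patterns and from model vectors, then correlate the two. The data shapes and values below are placeholders, not the study's fMRI data or semantic models.

```python
# Minimal representational similarity analysis (RSA) sketch: build
# representational dissimilarity matrices (RDMs) from activity patterns and
# from a model feature space, then correlate their upper triangles.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words, n_voxels, n_features = 20, 100, 50
brain_patterns = rng.standard_normal((n_words, n_voxels))    # words x voxels (placeholder)
model_vectors = rng.standard_normal((n_words, n_features))   # words x semantic features (placeholder)

# pdist with the 'correlation' metric gives 1 - Pearson r for each word pair,
# i.e. the condensed (upper-triangle) form of an RDM.
brain_rdm = pdist(brain_patterns, metric='correlation')
model_rdm = pdist(model_vectors, metric='correlation')

rho, p = spearmanr(brain_rdm, model_rdm)   # model-to-brain RDM similarity
print(rho, p)
```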
  • Carota, F., Schoffelen, J.-M., Oostenveld, R., & Indefrey, P. (2023). Parallel or sequential? Decoding conceptual and phonological/phonetic information from MEG signals during language production. Cognitive Neuropsychology, 40(5-6), 298-317. doi:10.1080/02643294.2023.2283239.

    Abstract

    Speaking requires the temporally coordinated planning of core linguistic information, from conceptual meaning to articulation. Recent neurophysiological results suggested that these operations involve a cascade of neural events with subsequent onset times, whilst competing evidence suggests early parallel neural activation. To test these hypotheses, we examined the sources of neuromagnetic activity recorded from 34 participants overtly naming 134 images from 4 object categories (animals, tools, foods and clothes). Within each category, word length and phonological neighbourhood density were co-varied to target phonological/phonetic processes. Multivariate pattern analyses (MVPA) searchlights in source space decoded object categories in occipitotemporal and middle temporal cortex, and phonological/phonetic variables in left inferior frontal (BA 44) and motor cortex early on. The findings suggest early activation of multiple variables due to intercorrelated properties and interactivity of processing, thus raising important questions about the representational properties of target words during the preparatory time enabling overt speaking.
  • Casasanto, D. (2011). Different bodies, different minds: The body-specificity of language and thought. Current Directions in Psychological Science, 20, 378-383. doi:10.1177/0963721411422058.

    Abstract

    Do people with different kinds of bodies think differently? According to the body-specificity hypothesis (Casasanto, 2009), they should. In this article, I review evidence that right- and left-handers, who perform actions in systematically different ways, use correspondingly different areas of the brain for imagining actions and representing the meanings of action verbs. Beyond concrete actions, the way people use their hands also influences the way they represent abstract ideas with positive and negative emotional valence like “goodness,” “honesty,” and “intelligence,” and how they communicate about them in spontaneous speech and gesture. Changing how people use their right and left hands can cause them to think differently, suggesting that motoric differences between right- and left-handers are not merely correlated with cognitive differences. Body-specific patterns of motor experience shape the way we think, communicate, and make decisions.
  • Casasanto, D., & Chrysikou, E. G. (2011). When left is "Right": Motor fluency shapes abstract concepts. Psychological Science, 22, 419-422. doi:10.1177/0956797611401755.

    Abstract

    Right- and left-handers implicitly associate positive ideas like "goodness" and "honesty" more strongly with their dominant side of space, the side on which they can act more fluently, and negative ideas more strongly with their nondominant side. Here we show that right-handers’ tendency to associate "good" with "right" and "bad" with "left" can be reversed as a result of both long- and short-term changes in motor fluency. Among patients who were right-handed prior to unilateral stroke, those with disabled left hands associated "good" with "right," but those with disabled right hands associated "good" with "left," as natural left-handers do. A similar pattern was found in healthy right-handers whose right or left hand was temporarily handicapped in the laboratory. Even a few minutes of acting more fluently with the left hand can change right-handers’ implicit associations between space and emotional valence, causing a reversal of their usual judgments. Motor experience plays a causal role in shaping abstract thought.
  • Catani, M., Craig, M. C., Forkel, S. J., Kanaan, R., Picchioni, M., Toulopoulou, T., Shergill, S., Williams, S., Murphy, D. G., & McGuire, P. (2011). Altered integrity of perisylvian language pathways in schizophrenia: Relationship to auditory hallucinations. Biological Psychiatry, 70(12), 1143-1150. doi:10.1016/j.biopsych.2011.06.013.

    Abstract

    Background: Functional neuroimaging supports the hypothesis that auditory verbal hallucinations (AVH) in schizophrenia result from altered functional connectivity between perisylvian language regions, although the extent to which AVH are also associated with an altered tract anatomy is less clear.

    Methods: Twenty-eight patients with schizophrenia (17 with a history of AVH and 11 without a history of hallucinations) and 59 age- and IQ-matched healthy controls were recruited. The number of streamlines, fractional anisotropy (FA), and mean diffusivity were measured along the length of the arcuate fasciculus and its medial and lateral components.

    Results: Patients with schizophrenia had bilateral reduction of FA relative to controls in the arcuate fasciculi (p < .001). Virtual dissection of the subcomponents of the arcuate fasciculi revealed that these reductions were specific to connections between posterior temporal and anterior regions in the inferior frontal and parietal lobe. Also, compared with controls, the reduction in FA of these tracts was highest, and bilateral, in patients with AVH, but in patients without AVH, this reduction was reported only on the left.

    Conclusions: These findings point toward a supraregional network model of AVH in schizophrenia. They support the hypothesis that there may be selective vulnerability of specific anatomical connections to posterior temporal regions in schizophrenia and that extensive bilateral damage is associated with a greater vulnerability to AVH. If confirmed by further studies, these findings may advance our understanding of the anatomical factors that are protective against AVH and predictive of a treatment response.
  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2023). Ten-month-old infants’ neural tracking of naturalistic speech is not facilitated by the speaker’s eye gaze. Developmental Cognitive Neuroscience, 64: 101297. doi:10.1016/j.dcn.2023.101297.

    Abstract

    Eye gaze is a powerful ostensive cue in infant-caregiver interactions, with demonstrable effects on language acquisition. While the link between gaze following and later vocabulary is well-established, the effects of eye gaze on other aspects of language, such as speech processing, are less clear. In this EEG study, we examined the effects of the speaker’s eye gaze on ten-month-old infants’ neural tracking of naturalistic audiovisual speech, a marker for successful speech processing. Infants watched videos of a speaker telling stories, addressing the infant with direct or averted eye gaze. We assessed infants’ speech-brain coherence at stress (1–1.75 Hz) and syllable (2.5–3.5 Hz) rates, tested for differences in attention by comparing looking times and EEG theta power in the two conditions, and investigated whether neural tracking predicts later vocabulary. Our results showed that infants’ brains tracked the speech rhythm both at the stress and syllable rates, and that infants’ neural tracking at the syllable rate predicted later vocabulary. However, speech-brain coherence did not significantly differ between direct and averted gaze conditions and infants did not show greater attention to direct gaze. Overall, our results suggest significant neural tracking at ten months, related to vocabulary development, but not modulated by speaker’s gaze.

    Additional information

    supplementary material
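    The speech-brain coherence measure used in the study above can be illustrated with a short sketch: estimate coherence between a speech amplitude envelope and an EEG channel, then average within the stress-rate (1–1.75 Hz) and syllable-rate (2.5–3.5 Hz) bands named in the abstract. The signals, sampling rate, and window length are assumptions, not the authors' pipeline.

```python
# Minimal speech-brain coherence sketch: magnitude-squared coherence between
# a speech envelope and an EEG channel, averaged within two frequency bands.
import numpy as np
from scipy.signal import coherence

fs = 250.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                 # 60 s of data
speech_env = np.random.randn(t.size)         # placeholder speech envelope
eeg = np.random.randn(t.size)                # placeholder EEG channel

f, cxy = coherence(speech_env, eeg, fs=fs, nperseg=int(8 * fs))

def band_mean(f, cxy, lo, hi):
    """Average coherence over the frequency band [lo, hi] Hz."""
    mask = (f >= lo) & (f <= hi)
    return cxy[mask].mean()

print(band_mean(f, cxy, 1.0, 1.75), band_mean(f, cxy, 2.5, 3.5))
```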
  • Chang, F., Tatsumi, T., Hiranuma, Y., & Bannard, C. (2023). Visual heuristics for verb production: Testing a deep‐learning model with experiments in Japanese. Cognitive Science, 47(8): e13324. doi:10.1111/cogs.13324.

    Abstract

    Tense/aspect morphology on verbs is often thought to depend on event features like telicity, but it is not known how speakers identify these features in visual scenes. To examine this question, we asked Japanese speakers to describe computer-generated animations of simple actions with variation in visual features related to telicity. Experiments with adults and children found that they could use goal information in the animations to select appropriate past and progressive verb forms. They also produced a large number of different verb forms. To explain these findings, a deep-learning model of verb production from visual input was created that could produce a human-like distribution of verb forms. It was able to use visual cues to select appropriate tense/aspect morphology. The model predicted that video duration would be related to verb complexity, and past tense production would increase when it received the endpoint as input. These predictions were confirmed in a third study with Japanese adults. This work suggests that verb production could be tightly linked to visual heuristics that support the understanding of events.
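    As a toy illustration of the general idea (not the authors' deep-learning model), the sketch below fits a linear classifier predicting past versus progressive forms from two simulated visual cues, endpoint visibility and clip duration; all data and the labelling rule are invented.

```python
# Toy illustration (not the authors' model): a linear classifier predicting
# past vs. progressive verb form from two simulated visual cues.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
endpoint_shown = rng.integers(0, 2, n)            # 1 if the goal/endpoint is reached on screen
duration = rng.uniform(1.0, 5.0, n)               # clip duration in seconds
X = np.column_stack([endpoint_shown, duration])

# Simulated labelling rule: visible endpoints push towards past tense (label 1).
past_tense = (endpoint_shown + rng.normal(0, 0.5, n) > 0.5).astype(int)

clf = LogisticRegression().fit(X, past_tense)
print(clf.coef_, clf.score(X, past_tense))
```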
  • Chen, X. S., Penny, D., & Collins, L. J. (2011). Characterization of RNase MRP RNA and novel snoRNAs from Giardia intestinalis and Trichomonas vaginalis. BMC Genomics, 12, 550. doi:10.1186/1471-2164-12-550.

    Abstract

    Background: Eukaryotic cells possess a complex network of RNA machineries which function in RNA-processing and cellular regulation which includes transcription, translation, silencing, editing and epigenetic control. Studies of model organisms have shown that many ncRNAs of the RNA-infrastructure are highly conserved, but little is known from non-model protists. In this study we have conducted a genome-scale survey of medium-length ncRNAs from the protozoan parasites Giardia intestinalis and Trichomonas vaginalis.

    Results: We have identified the previously ‘missing’ Giardia RNase MRP RNA, which is a key ribozyme involved in pre-rRNA processing. We have also uncovered 18 new H/ACA box snoRNAs, expanding our knowledge of the H/ACA family of snoRNAs.

    Conclusions: Results indicate that Giardia intestinalis and Trichomonas vaginalis, like their distant multicellular relatives, contain a rich infrastructure of RNA-based processing. From here we can investigate the evolution of RNA processing networks in eukaryotes.
  • Chen, A. (2011). Tuning information packaging: Intonational realization of topic and focus in child Dutch. Journal of Child Language, 38, 1055-1083. doi:10.1017/S0305000910000541.

    Abstract

    This study examined how four- to five-year-olds and seven- to eight-year-olds used intonation (accent placement and accent type) to encode topic and focus in Dutch. Naturally spoken declarative sentences with either sentence-initial topic and sentence-final focus or sentence-initial focus and sentence-final topic were elicited via a picture-matching game. Results showed that the four- to five-year-olds were adult-like in topic-marking, but were not yet fully adult-like in focus-marking, in particular, in the use of accent type in sentence-final focus (i.e. showing no preference for H*L). Between age five and seven, the use of accent type was further developed. In contrast to the four- to five-year-olds, the seven- to eight-year-olds showed a preference for H*L in sentence-final focus. Furthermore, they used accent type to distinguish sentence-initial focus from sentence-initial topic in addition to phonetic cues.
  • Chen, A., Çetinçelik, M., Roncaglia-Denissen, M. P., & Sadakata, M. (2023). Native language, L2 experience, and pitch processing in music. Linguistic Approaches to Bilingualism, 13(2), 218-237. doi:10.1075/lab.20030.che.

    Abstract

    The current study investigated how the role of pitch in one’s native language and L2 experience influenced musical melodic processing by testing Turkish and Mandarin Chinese advanced and beginning learners of English as an L2. Pitch has a lower functional load and shows a simpler pattern in Turkish than in Chinese, as the former contrasts only between the presence and absence of pitch elevation, while the latter makes use of four different pitch contours lexically. Using the Musical Ear Test as the tool, we found that the Chinese listeners outperformed the Turkish listeners, and the advanced L2 learners outperformed the beginning learners. The Turkish listeners were further tested on their discrimination of bisyllabic Chinese lexical tones, and again an L2 advantage was observed. No significant difference was found for working memory between the beginning and advanced L2 learners. These results suggest that richness of the tonal inventory of the native language is essential for triggering a music processing advantage, and on top of the tone language advantage, the L2 experience yields a further enhancement. Yet, unlike the tone language advantage that seems to relate to pitch expertise, learning an L2 seems to improve sound discrimination in general, and such improvement is evident in non-native lexical tone discrimination.
  • Cho, T., & McQueen, J. M. (2011). Perceptual recovery from consonant-cluster simplification using language-specific phonological knowledge. Journal of Psycholinguistic Research, 40, 253-274. doi:10.1007/s10936-011-9168-0.

    Abstract

    Two experiments examined whether perceptual recovery from Korean consonant-cluster simplification is based on language-specific phonological knowledge. In tri-consonantal C1C2C3 sequences such as /lkt/ and /lpt/ in Seoul Korean, either C1 or C2 can be completely deleted. Seoul Koreans monitored for C2 targets (/p/ or /k/, deleted or preserved) in the second word of a two-word phrase with an underlying /l/-C2-/t/ sequence. In Experiment 1 the target-bearing words had contextual lexical-semantic support. Listeners recovered deleted targets as fast and as accurately as preserved targets with both Word and Intonational Phrase (IP) boundaries between the two words. In Experiment 2, contexts were low-pass filtered. Listeners were still able to recover deleted targets as well as preserved targets in IP-boundary contexts, but better with physically-present targets than with deleted targets in Word-boundary contexts. This suggests that the benefit of having target acoustic-phonetic information emerges only when higher-order (contextual and phrase-boundary) information is not available. The strikingly efficient recovery of deleted phonemes with neither acoustic-phonetic cues nor contextual support demonstrates that language-specific phonological knowledge, rather than language-universal perceptual processes which rely on fine-grained phonetic details, is employed when the listener perceives the results of a continuous-speech process in which reduction is phonetically complete.
  • Cholin, J., Dell, G. S., & Levelt, W. J. M. (2011). Planning and articulation in incremental word production: Syllable-frequency effects in English. Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 109-122. doi:10.1037/a0021322.

    Abstract

    We investigated the role of syllables during speech planning in English by measuring syllable-frequency effects. So far, syllable-frequency effects in English have not been reported. English has poorly defined syllable boundaries, and thus the syllable might not function as a prominent unit in English speech production. Speakers produced either monosyllabic (Experiment 1) or disyllabic (Experiments 2–4) pseudowords as quickly as possible in response to symbolic cues. Monosyllabic targets consisted of either high- or low-frequency syllables, whereas disyllabic items contained either a 1st or 2nd syllable that was frequency-manipulated. Significant syllable-frequency effects were found in all experiments. Whereas previous findings for disyllables in Dutch and Spanish—languages with relatively clear syllable boundaries—showed effects of a frequency manipulation on 1st but not 2nd syllables, in our study English speakers were sensitive to the frequency of both syllables. We interpret this sensitivity as an indication that the production of English has more extensive planning scopes at the interface of phonetic encoding and articulation.
  • Chu, M., & Kita, S. (2011). The nature of gestures’ beneficial role in spatial problem solving. Journal of Experimental Psychology: General, 140, 102-116. doi:10.1037/a0021790.

    Abstract

    Co-thought gestures are hand movements produced in silent, noncommunicative, problem-solving situations. In the study, we investigated whether and how such gestures enhance performance in spatial visualization tasks such as a mental rotation task and a paper folding task. We found that participants gestured more often when they had difficulties solving mental rotation problems (Experiment 1). The gesture-encouraged group solved more mental rotation problems correctly than did the gesture-allowed and gesture-prohibited groups (Experiment 2). Gestures produced by the gesture-encouraged group enhanced performance in the very trials in which they were produced (Experiments 2 & 3). Furthermore, gesture frequency decreased as the participants in the gesture-encouraged group solved more problems (Experiments 2 & 3). In addition, the advantage of the gesture-encouraged group persisted into subsequent spatial visualization problems in which gesturing was prohibited: another mental rotation block (Experiment 2) and a newly introduced paper folding task (Experiment 3). The results indicate that when people have difficulty in solving spatial visualization problems, they spontaneously produce gestures to help them, and gestures can indeed improve performance. As they solve more problems, the spatial computation supported by gestures becomes internalized, and the gesture frequency decreases. The benefit of gestures persists even in subsequent spatial visualization problems in which gesture is prohibited. Moreover, the beneficial effect of gesturing can be generalized to a different spatial visualization task when two tasks require similar spatial transformation processes. We conclude that gestures enhance performance on spatial visualization tasks by improving the internal computation of spatial transformations.
  • Cleary, R. A., Poliakoff, E., Galpin, A., Dick, J. P., & Holler, J. (2011). An investigation of co-speech gesture production during action description in Parkinson’s disease. Parkinsonism & Related Disorders, 17, 753-756. doi:10.1016/j.parkreldis.2011.08.001.

    Abstract

    Methods
    The present study provides a systematic analysis of co-speech gestures which spontaneously accompany the description of actions in a group of PD patients (N = 23, Hoehn and Yahr Stage III or less) and age-matched healthy controls (N = 22). The analysis considers different co-speech gesture types, using established classification schemes from the field of gesture research. The analysis focuses on the rate of these gestures as well as on their qualitative nature. In doing so, the analysis attempts to overcome several methodological shortcomings of research in this area.
    Results
    Contrary to expectation, gesture rate was not significantly affected in our patient group, with relatively mild PD. This indicates that co-speech gestures could compensate for speech problems. However, while gesture rate seems unaffected, the qualitative precision of gestures representing actions was significantly reduced.
    Conclusions
    This study demonstrates the feasibility of carrying out fine-grained, detailed analyses of gestures in PD and offers insights into an as yet neglected facet of communication in patients with PD. Based on the present findings, an important next step is the closer investigation of the qualitative changes in gesture (including different communicative situations) and an analysis of the heterogeneity in co-speech gesture production in PD.
  • Clough, S., Morrow, E., Mutlu, B., Turkstra, L., & Duff, M. C. C. (2023). Emotion recognition of faces and emoji in individuals with moderate-severe traumatic brain injury. Brain Injury, 37(7), 596-610. doi:10.1080/02699052.2023.2181401.

    Abstract

    Background. Facial emotion recognition deficits are common after moderate-severe traumatic brain injury (TBI) and linked to poor social outcomes. We examine whether emotion recognition deficits extend to facial expressions depicted by emoji.
    Methods. Fifty-one individuals with moderate-severe TBI (25 female) and fifty-one neurotypical peers (26 female) viewed photos of human faces and emoji. Participants selected the best-fitting label from a set of basic emotions (anger, disgust, fear, sadness, neutral, surprise, happy) or social emotions (embarrassed, remorseful, anxious, neutral, flirting, confident, proud).
    Results. We analyzed the likelihood of correctly labeling an emotion by group (neurotypical, TBI), stimulus condition (basic faces, basic emoji, social emoji), sex (female, male), and their interactions. Participants with TBI did not significantly differ from neurotypical peers in overall emotion labeling accuracy. Both groups had poorer labeling accuracy for emoji compared to faces. Participants with TBI (but not neurotypical peers) had poorer accuracy for labeling social emotions depicted by emoji compared to basic emotions depicted by emoji. There were no effects of participant sex.
    Discussion. Because emotion representation is more ambiguous in emoji than human faces, studying emoji use and perception in TBI is an important consideration for understanding functional communication and social participation after brain injury.
  • Clough, S., Padilla, V.-G., Brown-Schmidt, S., & Duff, M. C. (2023). Intact speech-gesture integration in narrative recall by adults with moderate-severe traumatic brain injury. Neuropsychologia, 189: 108665. doi:10.1016/j.neuropsychologia.2023.108665.

    Abstract

    Purpose

    Real-world communication is situated in rich multimodal contexts, containing speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying “He searched for a new recipe” while making a typing gesture). We examine the narrative retellings of participants with and without moderate-severe traumatic brain injury across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture and if information from gesture persists across delays.

    Methods

    60 participants with TBI and 60 non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20-min later, and one-week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., “He searched for a new recipe”), a Gesture Match (e.g., “He searched for a new recipe online”), or Other (“He looked for a new recipe”). We also examined whether participants produced representative gestures themselves when retelling these details.

    Results

    Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture and produce representative gestures themselves one-week later compared to immediately after hearing the story.

    Conclusion

    We demonstrated that speech-gesture integration is intact after TBI in narrative retellings. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population.
  • Clough, S., Tanguay, A. F. N., Mutlu, B., Turkstra, L., & Duff, M. C. (2023). How do individuals with and without traumatic brain injury interpret emoji? Similarities and differences in perceived valence, arousal, and emotion representation. Journal of Nonverbal Behavior, 47, 489-511. doi:10.1007/s10919-023-00433-w.

    Abstract

    Impaired facial affect recognition is common after traumatic brain injury (TBI) and linked to poor social outcomes. We explored whether perception of emotions depicted by emoji is also impaired after TBI. Fifty participants with TBI and 50 non-injured peers generated free-text labels to describe emotions depicted by emoji and rated their levels of valence and arousal on nine-point rating scales. We compared how the two groups’ valence and arousal ratings were clustered and examined agreement in the words participants used to describe emoji. Hierarchical clustering of affect ratings produced four emoji clusters in the non-injured group and three emoji clusters in the TBI group. Whereas the non-injured group had a strongly positive and a moderately positive cluster, the TBI group had a single positive valence cluster, undifferentiated by arousal. Despite differences in cluster numbers, hierarchical structures of the two groups’ emoji ratings were significantly correlated. Most emoji had high agreement in the words participants with and without TBI used to describe them. Participants with TBI perceived emoji similarly to non-injured peers, used similar words to describe emoji, and rated emoji similarly on the valence dimension. Individuals with TBI showed small differences in perceived arousal for a minority of emoji. Overall, results suggest that basic recognition processes do not explain challenges in computer-mediated communication reported by adults with TBI. Examining perception of emoji in context by people with TBI is an essential next step for advancing our understanding of functional communication in computer-mediated contexts after brain injury.

    Additional information

    supplementary information
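    The hierarchical-clustering step described in the abstract above can be sketched as follows; the ratings are random placeholders, and the linkage method and cluster count are assumptions, not necessarily those used in the paper.

```python
# Minimal sketch of hierarchical clustering of emoji affect ratings:
# cluster emoji by their mean valence and arousal ratings, then cut the
# dendrogram at a fixed number of clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
n_emoji = 24
ratings = np.column_stack([
    rng.uniform(1, 9, n_emoji),   # mean valence per emoji (1-9 scale, placeholder)
    rng.uniform(1, 9, n_emoji),   # mean arousal per emoji (1-9 scale, placeholder)
])

Z = linkage(ratings, method='ward')              # agglomerative clustering
labels = fcluster(Z, t=4, criterion='maxclust')  # cut the tree into four clusters
print(labels)
```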
  • Cohen, E. (2011). Broadening the critical perspective on supernatural punishment theories. Religion, Brain & Behavior, 1(1), 70-72. doi:10.1080/2153599X.2011.558709.
  • Cohen, E., Burdett, E., Knight, N., & Barrett, J. (2011). Cross-cultural similarities and differences in person-body reasoning: Experimental evidence from the United Kingdom and Brazilian Amazon. Cognitive Science, 35, 1282-1304. doi:10.1111/j.1551-6709.2011.01172.x.

    Abstract

    We report the results of a cross-cultural investigation of person-body reasoning in the United Kingdom and northern Brazilian Amazon (Marajó Island). The study provides evidence that directly bears upon divergent theoretical claims in cognitive psychology and anthropology, respectively, on the cognitive origins and cross-cultural incidence of mind-body dualism. In a novel reasoning task, we found that participants across the two sample populations parsed a wide range of capacities similarly in terms of the capacities’ perceived anchoring to bodily function. Patterns of reasoning concerning the respective roles of physical and biological properties in sustaining various capacities did vary between sample populations, however. Further, the data challenge prior ad-hoc categorizations in the empirical literature on the developmental origins of and cognitive constraints on psycho-physical reasoning (e.g., in afterlife concepts). We suggest cross-culturally validated categories of “Body Dependent” and “Body Independent” items for future developmental and cross-cultural research in this emerging area.
  • Coopmans, C. W., Struiksma, M. E., Coopmans, P. H. A., & Chen, A. (2023). Processing of grammatical agreement in the face of variation in lexical stress: A mismatch negativity study. Language and Speech, 66(1), 202-213. doi:10.1177/00238309221098116.

    Abstract

    Previous electroencephalography studies have yielded evidence for automatic processing of syntax and lexical stress. However, these studies looked at both effects in isolation, limiting their generalizability to everyday language comprehension. In the current study, we investigated automatic processing of grammatical agreement in the face of variation in lexical stress. Using an oddball paradigm, we measured the Mismatch Negativity (MMN) in Dutch-speaking participants while they listened to Dutch subject–verb sequences (linguistic context) or acoustically similar sequences in which the subject was replaced by filtered noise (nonlinguistic context). The verb forms differed in the inflectional suffix, rendering the subject–verb sequences grammatically correct or incorrect, and leading to a difference in the stress pattern of the verb forms. We found that the MMNs were modulated in both the linguistic and nonlinguistic condition, suggesting that the processing load induced by variation in lexical stress can hinder early automatic processing of grammatical agreement. However, as the morphological differences between the verb forms correlated with differences in number of syllables, an interpretation in terms of the prosodic structure of the sequences cannot be ruled out. Future research is needed to determine which of these factors (i.e., lexical stress, syllabic structure) most strongly modulate early syntactic processing.

    Additional information

    supplementary material
  • Coopmans, C. W., Mai, A., Slaats, S., Weissbart, H., & Martin, A. E. (2023). What oscillations can do for syntax depends on your theory of structure building. Nature Reviews Neuroscience, 24, 723. doi:10.1038/s41583-023-00734-5.
  • Coopmans, C. W., Kaushik, K., & Martin, A. E. (2023). Hierarchical structure in language and action: A formal comparison. Psychological Review, 130(4), 935-952. doi:10.1037/rev0000429.

    Abstract

    Since the cognitive revolution, language and action have been compared as cognitive systems, with cross-domain convergent views recently gaining renewed interest in biology, neuroscience, and cognitive science. Language and action are both combinatorial systems whose mode of combination has been argued to be hierarchical, combining elements into constituents of increasingly larger size. This structural similarity has led to the suggestion that they rely on shared cognitive and neural resources. In this article, we compare the conceptual and formal properties of hierarchy in language and action using set theory. We show that the strong compositionality of language requires a particular formalism, a magma, to describe the algebraic structure corresponding to the set of hierarchical structures underlying sentences. When this formalism is applied to actions, it appears to be both too strong and too weak. To overcome these limitations, which are related to the weak compositionality and sequential nature of action structures, we formalize the algebraic structure corresponding to the set of actions as a trace monoid. We aim to capture the different system properties of language and action in terms of the distinction between hierarchical sets and hierarchical sequences and discuss the implications for the way both systems could be represented in the brain.
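    For reference, the textbook definitions behind this comparison can be stated compactly (these are the general algebraic definitions, not the authors' specific construction):

```latex
% Standard definitions underlying the magma vs. trace monoid comparison
% (textbook versions, not the authors' specific construction).
\begin{itemize}
  \item A \emph{magma} is a set $M$ with a binary operation
        $\cdot : M \times M \to M$ and no further axioms; the free magma
        over a set $X$ is the set of finite binary trees with leaves in $X$,
        which is why it captures fully hierarchical combination.
  \item A \emph{trace monoid} over $(X, I)$, where $I \subseteq X \times X$
        is a symmetric, irreflexive independence relation, is the quotient
        $X^{*}/\!\sim$ of the free monoid $X^{*}$ by the congruence generated
        by $ab \sim ba$ for all $(a,b) \in I$; it is associative and lets
        independent actions commute, modelling weakly compositional,
        sequence-like structure.
\end{itemize}
```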
  • Corps, R. E., Liao, M., & Pickering, M. J. (2023). Evidence for two stages of prediction in non-native speakers: A visual-world eye-tracking study. Bilingualism: Language and Cognition, 26(1), 231-243. doi:10.1017/S1366728922000499.

    Abstract

    Comprehenders predict what a speaker is likely to say when listening to non-native (L2) and native (L1) utterances. But what are the characteristics of L2 prediction, and how does it relate to L1 prediction? We addressed this question in a visual-world eye-tracking experiment, which tested when L2 English comprehenders integrated perspective into their predictions. Male and female participants listened to male and female speakers producing sentences (e.g., I would like to wear the nice…) about stereotypically masculine (target: tie; distractor: drill) and feminine (target: dress; distractor: hairdryer) objects. Participants predicted associatively, fixating objects semantically associated with critical verbs (here, the tie and the dress). They also predicted stereotypically consistent objects (e.g., the tie rather than the dress, given the male speaker). Consistent predictions were made later than associative predictions, and were delayed for L2 speakers relative to L1 speakers. These findings suggest prediction involves both automatic and non-automatic stages.
  • Corps, R. E., & Pickering, M. (2023). Response planning during question-answering: Does deciding what to say involve deciding how to say it? Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-023-02382-3.

    Abstract

    To answer a question, speakers must determine their response and formulate it in words. But do they decide on a response before formulation, or do they formulate different potential answers before selecting one? We addressed this issue in a verbal question-answering experiment. Participants answered questions more quickly when they had one potential answer (e.g., Which tourist attraction in Paris is very tall?) than when they had multiple potential answers (e.g., What is the name of a Shakespeare play?). Participants also answered more quickly when the set of potential answers were on average short rather than long, regardless of whether there was only one or multiple potential answers. Thus, participants were not affected by the linguistic complexity of unselected but plausible answers. These findings suggest that participants select a single answer before formulation.
  • Corps, R. E., & Meyer, A. S. (2023). Word frequency has similar effects in picture naming and gender decision: A failure to replicate Jescheniak and Levelt (1994). Acta Psychologica, 241: 104073. doi:10.1016/j.actpsy.2023.104073.

    Abstract

    Word frequency plays a key role in theories of lexical access, which assume that the word frequency effect (WFE, faster access to high-frequency than low-frequency words) occurs as a result of differences in the representation and processing of the words. In a seminal paper, Jescheniak and Levelt (1994) proposed that the WFE arises during the retrieval of word forms, rather than the retrieval of their syntactic representations (their lemmas) or articulatory commands. An important part of Jescheniak and Levelt's argument was that they found a stable WFE in a picture naming task, which requires complete lexical access, but not in a gender decision task, which only requires access to the words' lemmas and not their word forms. We report two attempts to replicate this pattern, one with new materials, and one with Jescheniak and Levelt's original pictures. In both studies we found a strong WFE when the pictures were shown for the first time, but much weaker effects on their second and third presentation. Importantly, these patterns were seen in both the picture naming and the gender decision tasks, suggesting that either word frequency does not exclusively affect word form retrieval, or that the gender decision task does not exclusively tap lemma access.

    Additional information

    raw data and analysis scripts
  • Corps, R. E., Yang, F., & Pickering, M. (2023). Evidence against egocentric prediction during language comprehension. Royal Society Open Science, 10(12): 231252. doi:10.1098/rsos.231252.

    Abstract

    Although previous research has demonstrated that language comprehension can be egocentric, there is little evidence for egocentricity during prediction. In particular, comprehenders do not appear to predict egocentrically when the context makes it clear what the speaker is likely to refer to. But do comprehenders predict egocentrically when the context does not make it clear? We tested this hypothesis using a visual-world eye-tracking paradigm, in which participants heard sentences containing the gender-neutral pronoun They (e.g. They would like to wear…) while viewing four objects (e.g. tie, dress, drill, hairdryer). Two of these objects were plausible targets of the verb (tie and dress), and one was stereotypically compatible with the participant's gender (tie if the participant was male; dress if the participant was female). Participants rapidly fixated targets more than distractors, but there was no evidence that participants ever predicted egocentrically, fixating objects stereotypically compatible with their own gender. These findings suggest that participants do not fall back on their own egocentric perspective when predicting, even when they know that context does not make it clear what the speaker is likely to refer to.
  • Corradi, Z., Khan, M., Hitti-Malin, R., Mishra, K., Whelan, L., Cornelis, S. S., ABCA4-Study Group, Hoyng, C. B., Kämpjärvi, K., Klaver, C. C. W., Liskova, P., Stohr, H., Weber, B. H. F., Banfi, S., Farrar, G. J., Sharon, D., Zernant, J., Allikmets, R., Dhaenens, C.-M., & Cremers, F. P. M. (2023). Targeted sequencing and in vitro splice assays shed light on ABCA4-associated retinopathies missing heritability. Human Genetics and Genomics Advances, 4(4): 100237. doi:10.1016/j.xhgg.2023.100237.

    Abstract

    The ABCA4 gene is the most frequently mutated Mendelian retinopathy-associated gene. Biallelic variants lead to a variety of phenotypes, however, for thousands of cases the underlying variants remain unknown. Here, we aim to shed further light on the missing heritability of ABCA4-associated retinopathy by analyzing a large cohort of macular dystrophy probands. A total of 858 probands were collected from 26 centers, of whom 722 carried no or one pathogenic ABCA4 variant while 136 cases carried two ABCA4 alleles, one of which was a frequent mild variant, suggesting that deep-intronic variants (DIVs) or other cis-modifiers might have been missed. After single molecule molecular inversion probes (smMIPs)-based sequencing of the complete 128-kb ABCA4 locus, the effect of putative splice variants was assessed in vitro by midigene splice assays in HEK293T cells. The breakpoints of copy number variants (CNVs) were determined by junction PCR and Sanger sequencing. ABCA4 sequence analysis solved 207/520 (39.8%) naïve or unsolved cases and 70/202 (34.7%) monoallelic cases, while additional causal variants were identified in 54/136 (39.7%) of probands carrying two variants. Seven novel DIVs and six novel non-canonical splice site variants were detected in a total of 35 alleles and characterized, including the c.6283-321C>G variant leading to a complex splicing defect. Additionally, four novel CNVs were identified and characterized in five alleles. These results confirm that smMIPs-based sequencing of the complete ABCA4 gene provides a cost-effective method to genetically solve retinopathy cases and that several rare structural and splice altering defects remain undiscovered in STGD1 cases.
  • Coventry, K. R., Gudde, H. B., Diessel, H., Collier, J., Guijarro-Fuentes, P., Vulchanova, M., Vulchanov, V., Todisco, E., Reile, M., Breunesse, M., Plado, H., Bohnemeyer, J., Bsili, R., Caldano, M., Dekova, R., Donelson, K., Forker, D., Park, Y., Pathak, L. S., Peeters, D., Pizzuto, G., Serhan, B., Apse, L., Hesse, F., Hoang, L., Hoang, P., Igari, Y., Kapiley, K., Haupt-Khutsishvili, T., Kolding, S., Priiki, K., Mačiukaitytė, I., Mohite, V., Nahkola, T., Tsoi, S. Y., Williams, S., Yasuda, S., Cangelosi, A., Duñabeitia, J. A., Mishra, R. K., Rocca, R., Šķilters, J., Wallentin, M., Žilinskaitė-Šinkūnienė, E., & Incel, O. D. (2023). Spatial communication systems across languages reflect universal action constraints. Nature Human Behaviour, 7, 2099-2110. doi:10.1038/s41562-023-01697-4.

    Abstract

    The extent to which languages share properties reflecting the non-linguistic constraints of the speakers who speak them is key to the debate regarding the relationship between language and cognition. A critical case is spatial communication, where it has been argued that semantic universals should exist, if anywhere. Here, using an experimental paradigm able to separate variation within a language from variation between languages, we tested the use of spatial demonstratives—the most fundamental and frequent spatial terms across languages. In n = 874 speakers across 29 languages, we show that speakers of all tested languages use spatial demonstratives as a function of being able to reach or act on an object being referred to. In some languages, the position of the addressee is also relevant in selecting between demonstrative forms. Commonalities and differences across languages in spatial communication can be understood in terms of universal constraints on action shaping spatial language and cognition.
  • Cox, C., Bergmann, C., Fowler, E., Keren-Portnoy, T., Roepstorff, A., Bryant, G., & Fusaroli, R. (2023). A systematic review and Bayesian meta-analysis of the acoustic features of infant-directed speech. Nature Human Behaviour, 7, 114-133. doi:10.1038/s41562-022-01452-1.

    Abstract

    When speaking to infants, adults often produce speech that differs systematically from that directed to other adults. In order to quantify the acoustic properties of this speech style across a wide variety of languages and cultures, we extracted results from empirical studies on the acoustic features of infant-directed speech (IDS). We analyzed data from 88 unique studies (734 effect sizes) on the following five acoustic parameters that have been systematically examined in the literature: i) fundamental frequency (fo), ii) fo variability, iii) vowel space area, iv) articulation rate, and v) vowel duration. Moderator analyses were conducted in hierarchical Bayesian robust regression models in order to examine how these features change with infant age and differ across languages, experimental tasks and recording environments. The moderator analyses indicated that fo, articulation rate, and vowel duration became more similar to adult-directed speech (ADS) over time, whereas fo variability and vowel space area exhibited stability throughout development. These results point the way for future research to disentangle different accounts of the functions and learnability of IDS by conducting theory-driven comparisons among different languages and using computational models to formulate testable predictions.

    Additional information

    supplementary information
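    As a rough illustration of the hierarchical Bayesian robust regression approach described in the Cox et al. abstract above, the sketch below fits a random-effects meta-regression with a single moderator (infant age), assuming PyMC is available. The effect sizes, standard errors, moderator values, priors, and variable names are all invented for illustration; this is a sketch of the general technique, not the authors' model or data.

      # Minimal sketch of a robust random-effects meta-regression, assuming PyMC.
      # Each hypothetical study i contributes an effect size y[i] with standard error se[i].
      import numpy as np
      import pymc as pm

      y = np.array([0.9, 0.6, 1.1, 0.4, 0.8])        # hypothetical study effect sizes
      se = np.array([0.20, 0.30, 0.25, 0.15, 0.30])  # hypothetical standard errors
      age = np.array([3.0, 6.0, 9.0, 12.0, 18.0])    # hypothetical moderator (months)

      with pm.Model():
          alpha = pm.Normal("alpha", 0.0, 1.0)       # pooled effect at age 0
          beta = pm.Normal("beta", 0.0, 0.5)         # change in effect per month of age
          tau = pm.HalfNormal("tau", 0.5)            # between-study heterogeneity
          theta = pm.Normal("theta", alpha + beta * age, tau, shape=len(y))
          nu = pm.Gamma("nu", 2.0, 0.1)              # degrees of freedom: heavy tails
          pm.StudentT("obs", nu=nu, mu=theta, sigma=se, observed=y)  # robust likelihood
          idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

      print(idata.posterior["beta"].mean().item())   # does the feature change with age?

    The Student-t likelihood is what makes such a regression robust: studies with unusually extreme effect sizes are down-weighted relative to what a normal likelihood would do.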
  • Cox, S., Rösler, D., & Skiba, R. (1989). A tailor-made database for language teaching material. Literary & Linguistic Computing, 4(4), 260-264.
  • Cozijn, R., Noordman, L. G., & Vonk, W. (2011). Propositional integration and world-knowledge inference: Processes in understanding because sentences. Discourse Processes, 48, 475-500. doi:10.1080/0163853X.2011.594421.

    Abstract

    The issue addressed in this study is whether propositional integration and world-knowledge inference can be distinguished as separate processes during the comprehension of Dutch omdat (because) sentences. “Propositional integration” refers to the process by which the reader establishes the type of relation between two clauses or sentences. “World-knowledge inference” refers to the process of deriving the general causal relation and checking it against the reader's world knowledge. An eye-tracking experiment showed that the presence of the conjunction speeds up the processing of the words immediately following the conjunction, and slows down the processing of the sentence final words in comparison to the absence of the conjunction. A second, subject-paced reading experiment replicated the reading time findings, and the results of a verification task confirmed that the effect at the end of the sentence was due to inferential processing. The findings evidence integrative processing and inferential processing, respectively.
  • Cozijn, R., Commandeur, E., Vonk, W., & Noordman, L. G. (2011). The time course of the use of implicit causality information in the processing of pronouns: A visual world paradigm study. Journal of Memory and Language, 64, 381-403. doi:10.1016/j.jml.2011.01.001.

    Abstract

    Several theoretical accounts have been proposed with respect to the issue how quickly the implicit causality verb bias affects the understanding of sentences such as “John beat Pete at the tennis match, because he had played very well”. They can be considered as instances of two viewpoints: the focusing and the integration account. The focusing account claims that the bias should be manifest soon after the verb has been processed, whereas the integration account claims that the interpretation is deferred until disambiguating information is encountered. Up to now, this issue has remained unresolved because materials or methods have failed to address it conclusively. We conducted two experiments that exploited the visual world paradigm and ambiguous pronouns in subordinate because clauses. The first experiment presented implicit causality sentences with the task to resolve the ambiguous pronoun. To exclude strategic processing, in the second experiment, the task was to answer simple comprehension questions and only a minority of the sentences contained implicit causality verbs. In both experiments, the implicit causality of the verb had an effect before the disambiguating information was available. This result supported the focusing account.
  • Cristia, A., McGuire, G. L., Seidl, A., & Francis, A. L. (2011). Effects of the distribution of acoustic cues on infants' perception of sibilants. Journal of Phonetics, 39, 388-402. doi:10.1016/j.wocn.2011.02.004.

    Abstract

    A current theoretical view proposes that infants converge on the speech categories of their native language by attending to frequency distributions that occur in the acoustic input. To date, the only empirical support for this statistical learning hypothesis comes from studies where a single, salient dimension was manipulated. Additional evidence is sought here, by introducing a less salient pair of categories supported by multiple cues. We exposed English-learning infants to a multi-cue bidimensional grid ranging between retroflex and alveolopalatal sibilants in prevocalic position. This contrast is substantially more difficult according to previous cross-linguistic and perceptual research, and its perception is driven by cues in both the consonantal and the following vowel portions. Infants heard one of two distributions (flat, or with two peaks), and were tested with sounds varying along only one dimension. Infants' responses differed depending on the familiarization distribution, and their performance was equally good for the vocalic and the frication dimension, lending some support to the statistical hypothesis even in this harder learning situation. However, learning was restricted to the retroflex category, and a control experiment showed that lack of learning for the alveolopalatal category was not due to the presence of a competing category. Thus, these results contribute fundamental evidence on the extent and limitations of the statistical hypothesis as an explanation for infants' perceptual tuning.
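    The distributional-learning logic tested above can be illustrated with an ideal-observer sketch: a learner that compares one- versus two-category Gaussian models of a single acoustic cue should find much stronger evidence for two categories after bimodal ("two-peak") exposure than after flat exposure. The cue values, sample sizes, and one-dimensional simplification below are assumptions chosen for illustration, not the study's bidimensional stimuli.

      # Ideal-observer sketch of distributional learning, assuming numpy and scikit-learn.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      bimodal = np.concatenate([rng.normal(-1.0, 0.3, 200), rng.normal(1.0, 0.3, 200)])
      flat = rng.uniform(-1.5, 1.5, 400)

      for name, cue in [("two-peak", bimodal), ("flat", flat)]:
          X = cue.reshape(-1, 1)
          bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
                 for k in (1, 2)}
          # Larger positive values = stronger evidence for a two-category description.
          print(f"{name}: BIC(1 category) - BIC(2 categories) = {bic[1] - bic[2]:.1f}")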
  • Cristia, A. (2011). Fine-grained variation in caregivers' speech predicts their infants' discrimination. Journal of the Acoustical Society of America, 129, 3271-3280. doi:10.1121/1.3562562.

    Abstract

    Within the debate on the mechanisms underlying infants’ perceptual acquisition, one hypothesis proposes that infants’ perception is directly affected by the acoustic implementation of sound categories in the speech they hear. In consonance with this view, the present study shows that individual variation in fine-grained, subphonemic aspects of the acoustic realization of /s/ in caregivers’ speech predicts infants’ discrimination of this sound from the highly similar /∫/, suggesting that learning based on acoustic cue distributions may indeed drive natural phonological acquisition.
  • Cristia, A., Seidl, A., & Gerken, L. (2011). Learning classes of sounds in infancy. University of Pennsylvania Working Papers in Linguistics, 17, 9.

    Abstract

    Adults' phonotactic learning is affected by perceptual biases. One such bias concerns learning of constraints affecting groups of sounds: all else being equal, learning constraints affecting a natural class (a set of sounds sharing some phonetic characteristic) is easier than learning a constraint affecting an arbitrary set of sounds. This perceptual bias could be a given, for example, the result of innately guided learning; alternatively, it could be due to human learners’ experience with sounds. Using artificial grammars, we investigated whether such a bias arises in development, or whether it is present as soon as infants can learn phonotactics. Seven-month-old English-learning infants fail to generalize a phonotactic pattern involving fricatives and nasals, which does not form a coherent phonetic group, but succeed with the natural class of oral and nasal stops. In this paper, we report an experiment that explored whether those results also follow in a cohort of 4-month-olds. Unlike the older infants, 4-month-olds were able to generalize both groups, suggesting that the perceptual bias that makes phonotactic constraints on natural classes easier to learn is likely the effect of experience.
  • Cronin, K. A., Van Leeuwen, E. J. C., Mulenga, I. C., & Bodamer, M. D. (2011). Behavioral response of a chimpanzee mother toward her dead infant. American Journal of Primatology, 73(5), 415-421. doi:10.1002/ajp.20927.

    Abstract

    The mother-offspring bond is one of the strongest and most essential social bonds. Following is a detailed behavioral report of a female chimpanzee two days after her 16-month-old infant died, on the first day that the mother is observed to create distance between her and the corpse. A series of repeated approaches and retreats to and from the body are documented, along with detailed accounts of behaviors directed toward the dead infant by the mother and other group members. The behavior of the mother toward her dead infant not only highlights the maternal contribution to the mother-infant relationship but also elucidates the opportunities chimpanzees have to learn about the sensory cues associated with death, and the implications of death for the social environment.
  • Cutler, A., Howard, D., & Patterson, K. E. (1989). Misplaced stress on prosody: A reply to Black and Byng. Cognitive Neuropsychology, 6, 67-83.

    Abstract

    The recent claim by Black and Byng (1986) that lexical access in reading is subject to prosodic constraints is examined and found to be unsupported. The evidence from impaired reading which Black and Byng report is based on poorly controlled stimulus materials and is inadequately analysed and reported. An alternative explanation of their findings is proposed, and new data are reported for which this alternative explanation can account but their model cannot. Finally, their proposal is shown to be theoretically unmotivated and in conflict with evidence from normal reading.
  • Cutler, A. (1980). La leçon des lapsus [The lesson of slips of the tongue]. La Recherche, 11(112), 686-692.
  • Cutler, A. (2011). Listening to REAL second language. AATSEEL Newsletter, 54(3), 14.
  • Cutler, A. (1989). Straw modules [Commentary/Massaro: Speech perception]. Behavioral and Brain Sciences, 12, 760-762.
  • Cutler, A. (1989). The new Victorians. New Scientist, (1663), 66.
  • Daller, M. H., Treffers-Daller, J., & Furman, R. (2011). Transfer of conceptualization patterns in bilinguals: The construal of motion events in Turkish and German. Bilingualism: Language and Cognition, 14(1), 95-119. doi:10.1017/S1366728910000106.

    Abstract

    In the present article we provide evidence for the occurrence of transfer of conceptualization patterns in narratives of two German-Turkish bilingual groups. All bilingual participants grew up in Germany, but only one group is still resident in Germany (n = 49). The other, the returnees, moved back to Turkey after having lived in Germany for thirteen years (n = 35). The study is based on the theoretical framework for conceptual transfer outlined in Jarvis and Pavlenko (2008) and on the typology of satellite-framed and verb-framed languages developed by Talmy (1985, 1991, 2000a, b) and Slobin (1987, 1996, 2003, 2004, 2005, 2006). In the present study we provide evidence for the hypothesis that language structure affects the organization of information structure at the level of the Conceptualizer, and show that bilingual speakers’ conceptualization of motion events is influenced by the dominant linguistic environment in both languages (German for the group in Germany and Turkish for the returnees). The returnees follow the Turkish blueprints for the conceptualization of motion, in both Turkish and German event construals, whereas the German-resident bilinguals follow the German blueprints, when speaking German as well as Turkish. We argue that most of the patterns found are the result of transfer of conceptualization patterns from the dominant language of the environment.
  • Davids, N., Segers, E., Van den Brink, D., Mitterer, H., van Balkom, H., Hagoort, P., & Verhoeven, L. (2011). The nature of auditory discrimination problems in children with specific language impairment: An MMN study. Neuropsychologia, 49, 19-28. doi:10.1016/j.neuropsychologia.2010.11.001.

    Abstract

    Many children with Specific Language Impairment (SLI) show impairments in discriminating auditorily presented stimuli. The present study investigates whether these discrimination problems are speech specific or of a general auditory nature. This was studied by using a linguistic and nonlinguistic contrast that were matched for acoustic complexity in an active behavioral task and a passive ERP paradigm, known to elicit the mismatch negativity (MMN). In addition, attention skills and a variety of language skills were measured. Participants were 25 five-year-old Dutch children with SLI having receptive as well as productive language problems and 25 control children with typical speech and language development. At the behavioral level, the SLI group was impaired in discriminating the linguistic contrast as compared to the control group, while both groups were unable to distinguish the non-linguistic contrast. Moreover, the SLI group tended to have impaired attention skills which correlated with performance on most of the language tests. At the neural level, the SLI group, in contrast to the control group, did not show an MMN in response to either the linguistic or nonlinguistic contrast. The MMN data are consistent with an account that relates the symptoms in children with SLI to non-speech processing difficulties.
  • Davidson, D., & Indefrey, P. (2011). Error-related activity and correlates of grammatical plasticity. Frontiers in Psychology, 2: 219. doi:10.3389/fpsyg.2011.00219.

    Abstract

    Cognitive control involves not only the ability to manage competing task demands, but also the ability to adapt task performance during learning. This study investigated how violation-, response-, and feedback-related electrophysiological (EEG) activity changes over time during language learning. Twenty-two Dutch learners of German classified short prepositional phrases presented serially as text. The phrases were initially presented without feedback during a pre-test phase, and then with feedback in a training phase on two separate days spaced 1 week apart. The stimuli included grammatically correct phrases, as well as grammatical violations of gender and declension. Without feedback, participants' classification was near chance and did not improve over trials. During training with feedback, behavioral classification improved and violation responses appeared to both types of violation in the form of a P600. Feedback-related negative and positive components were also present from the first day of training. The results show changes in the electrophysiological responses in concert with improving behavioral discrimination, suggesting that the activity is related to grammar learning.
  • Dediu, D. (2011). Are languages really independent from genes? If not, what would a genetic bias affecting language diversity look like? Human Biology, 83, 279-296. doi:10.3378/027.083.0208.

    Abstract

    It is generally accepted that the relationship between human genes and language is very complex and multifaceted. This has its roots in the “regular” complexity governing the interplay among genes and between genes and environment for most phenotypes, but with the added layer of supra-ontogenetic and supra-individual processes defining culture. At the coarsest level, focusing on the species, it is clear that human-specific—but not necessarily faculty-specific—genetic factors subtend our capacity for language and a currently very productive research program is aiming at uncovering them. At the other end of the spectrum, it is uncontroversial that individual-level variations in different aspects related to speech and language have an important genetic component and their discovery and detailed characterization have already started to revolutionize the way we think about human nature. However, at the intermediate, glossogenetic/population level, the relationship becomes controversial, partly due to deeply ingrained beliefs about language acquisition and universality and partly because of confusions with a different type of gene-languages correlation due to shared history. Nevertheless, conceptual, mathematical and computational models—and, recently, experimental evidence from artificial languages and songbirds—have repeatedly shown that genetic biases affecting the acquisition or processing of aspects of language and speech can be amplified by population-level intergenerational cultural processes and made manifest either as fixed “universal” properties of language or as structured linguistic diversity. Here, I review several such models as well as the recently proposed case of a causal relationship between the distribution of tone languages and two genes related to brain growth and development, ASPM and Microcephalin, and I discuss the relevance of such genetic biasing for language evolution, change, and diversity.
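    The amplification argument reviewed in the abstract can be illustrated with a toy iterated-learning simulation (not one of the models reviewed in the paper): each generation re-estimates the frequency of a binary linguistic variant from a finite sample of the previous generation's output, with a weak prior nudge standing in for a genetic bias. All parameter values below are arbitrary illustrations.

      # Toy iterated-learning sketch of how a weak individual-level bias can be amplified
      # by intergenerational cultural transmission. Not one of the reviewed models.
      import numpy as np

      rng = np.random.default_rng(42)

      def transmit(bias, p0=0.5, n=20, generations=50):
          """Return the variant's frequency after iterated cultural transmission."""
          p = p0
          for _ in range(generations):
              heard = rng.binomial(n, p)            # finite sample of the parental output
              p = (heard + 1.0 + bias) / (n + 2.0)  # biased (Beta-prior-like) re-estimation
          return p

      for bias in (0.0, 0.5, -0.5):                 # no bias vs weak bias for/against
          runs = [transmit(bias) for _ in range(200)]
          print(f"bias={bias:+.1f}: mean frequency after 50 generations = {np.mean(runs):.2f}")

    Even though the nudge amounts to a fraction of a single observation per generation, the long-run population frequency drifts well away from 0.5 in the direction of the bias.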
  • Dediu, D. (2011). A Bayesian phylogenetic approach to estimating the stability of linguistic features and the genetic biasing of tone. Proceedings of the Royal Society of London/B, 278(1704), 474-479. doi:10.1098/rspb.2010.1595.

    Abstract

    Language is a hallmark of our species and understanding linguistic diversity is an area of major interest. Genetic factors influencing the cultural transmission of language provide a powerful and elegant explanation for aspects of the present day linguistic diversity and a window into the emergence and evolution of language. In particular, it has recently been proposed that linguistic tone—the usage of voice pitch to convey lexical and grammatical meaning—is biased by two genes involved in brain growth and development, ASPM and Microcephalin. This hypothesis predicts that tone is a stable characteristic of language because of its ‘genetic anchoring’. The present paper tests this prediction using a Bayesian phylogenetic framework applied to a large set of linguistic features and language families, using multiple software implementations, data codings, stability estimations, linguistic classifications and outgroup choices. The results of these different methods and datasets show a large agreement, suggesting that this approach produces reliable estimates of the stability of linguistic data. Moreover, linguistic tone is found to be stable across methods and datasets, providing suggestive support for the hypothesis of genetic influences on its distribution.
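    The notion of stability estimated in the paper can be illustrated in a much simplified form as the transition rate of a binary feature evolving on a fixed tree under a symmetric two-state Markov model: the lower the best-fitting rate, the more stable the feature. The tree shape, branch lengths, and trait values below are invented, and this maximum-likelihood grid search stands in only loosely for the paper's Bayesian phylogenetic analysis over many features, families, and methods.

      # Simplified stand-in for a phylogenetic stability estimate, assuming only numpy:
      # the maximum-likelihood transition rate of a binary feature (1 = tonal) on a
      # fixed four-language tree under a symmetric two-state Markov model.
      import numpy as np

      def p_matrix(rate, t):
          """Transition probabilities for a symmetric two-state CTMC over time t."""
          stay = 0.5 + 0.5 * np.exp(-2.0 * rate * t)
          return np.array([[stay, 1.0 - stay], [1.0 - stay, stay]])

      def partials(node, traits, rate):
          """Felsenstein pruning: P(observed tips below node | node's state)."""
          if isinstance(node, str):                 # a tip: one-hot vector for its state
              vec = np.zeros(2)
              vec[traits[node]] = 1.0
              return vec
          left, t_left, right, t_right = node       # an internal node
          return (p_matrix(rate, t_left) @ partials(left, traits, rate)) * \
                 (p_matrix(rate, t_right) @ partials(right, traits, rate))

      tree = (("A", 0.2, "B", 0.2), 0.3, ("C", 0.4, "D", 0.1), 0.1)   # ((A,B),(C,D))
      traits = {"A": 1, "B": 1, "C": 0, "D": 0}     # invented trait values

      rates = np.linspace(0.05, 5.0, 200)
      loglik = [np.log(0.5 * partials(tree, traits, r).sum()) for r in rates]
      print("best-fitting transition rate:", round(float(rates[int(np.argmax(loglik))]), 2))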
