Publications

  • Seuren, P. A. M. (1980). Wat is taal? Cahiers Bio-Wetenschappen en Maatschappij, 6(4), 23-29.
  • Seuren, P. A. M. (1998). Western linguistics: An historical introduction. Oxford: Blackwell.
  • Seuren, P. A. M. (1991). What makes a text untranslatable? In H. M. N. Noor Ein, & H. S. Atiah (Eds.), Pragmatik Penterjemahan: Prinsip, Amalan dan Penilaian Menuju ke Abad 21 ("The Pragmatics of Translation: Principles, Practice and Evaluation Moving towards the 21st Century") (pp. 19-27). Kuala Lumpur: Dewan Bahasa dan Pustaka.
  • Seuren, P. A. M. (1998). Towards a discourse-semantic account of donkey anaphora. In S. Botley, & T. McEnery (Eds.), New Approaches to Discourse Anaphora: Proceedings of the Second Colloquium on Discourse Anaphora and Anaphor Resolution (DAARC2) (pp. 212-220). Lancaster: University Centre for Computer Corpus Research on Language, Lancaster University.
  • Seuren, P. A. M. (1980). Variabele competentie: Linguïstiek en sociolinguïstiek anno 1980. In Handelingen van het 36e Nederlands Filologencongres: Gehouden te Groningen op woensdag 9, donderdag 10 en vrijdag 11 April 1980 (pp. 41-56). Amsterdam: Holland University Press.
  • Severijnen, G. G., Bosker, H. R., & McQueen, J. M. (2022). Acoustic correlates of Dutch lexical stress re-examined: Spectral tilt is not always more reliable than intensity. In S. Frota, M. Cruz, & M. Vigário (Eds.), Proceedings of Speech Prosody 2022 (pp. 278-282). doi:10.21437/SpeechProsody.2022-57.

    Abstract

    The present study examined two acoustic cues in the production of lexical stress in Dutch: spectral tilt and overall intensity. Sluijter and Van Heuven (1996) reported that spectral tilt is a more reliable cue to stress than intensity. However, that study included only a small number of talkers (10) and only syllables with the vowels /aː/ and /ɔ/. The present study re-examined this issue in a larger and more variable dataset. We recorded 38 native speakers of Dutch (20 females) producing 744 tokens of Dutch segmentally overlapping words (e.g., VOORnaam vs. voorNAAM, “first name” vs. “respectable”), targeting 10 different vowels, in variable sentence contexts. For each syllable, we measured overall intensity and spectral tilt following Sluijter and Van Heuven (1996). Results from Linear Discriminant Analyses showed that, for the vowel /aː/ alone, spectral tilt showed an advantage over intensity, as evidenced by higher stressed/unstressed syllable classification accuracy scores for spectral tilt. However, when all vowels were included in the analysis, the advantage disappeared. These findings confirm that spectral tilt plays a larger role in signaling stress in Dutch /aː/ but show that, for a larger sample of Dutch vowels, overall intensity and spectral tilt are equally important.
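
    As a rough illustration of the kind of analysis described in the abstract above (not the authors' code or data), the following Python sketch compares how well two acoustic cues separate stressed from unstressed syllables with a Linear Discriminant Analysis; the feature values and labels are random placeholders.

```python
# Hypothetical sketch: compare stressed/unstressed classification accuracy
# for two acoustic cues (spectral tilt vs. overall intensity) with an LDA,
# loosely following the analysis described in the abstract above.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: one row per syllable, label 1 = stressed, 0 = unstressed.
n = 744
labels = rng.integers(0, 2, size=n)
spectral_tilt = rng.normal(loc=labels * 1.0, scale=1.0)  # placeholder cue values
intensity = rng.normal(loc=labels * 0.8, scale=1.0)      # placeholder cue values

def cue_accuracy(feature, labels):
    """Cross-validated classification accuracy for a single acoustic cue."""
    lda = LinearDiscriminantAnalysis()
    scores = cross_val_score(lda, feature.reshape(-1, 1), labels, cv=5)
    return scores.mean()

print("spectral tilt:", cue_accuracy(spectral_tilt, labels))
print("intensity:   ", cue_accuracy(intensity, labels))
```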
  • Seyfeddinipur, M. (2006). Disfluency: Interrupting speech and gesture. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.59337.
  • Sha, Z., Van Rooij, D., Anagnostou, E., Arango, C., Auzias, G., Behrmann, M., Bernhardt, B., Bolte, S., Busatto, G. F., Calderoni, S., Calvo, R., Daly, E., Deruelle, C., Duan, M., Duran, F. L. S., Durston, S., Ecker, C., Ehrlich, S., Fair, D., Fedor, J., Fitzgerald, J., Floris, D. L., Franke, B., Freitag, C. M., Gallagher, L., Glahn, D. C., Haar, S., Hoekstra, L., Jahanshad, N., Jalbrzikowski, M., Janssen, J., King, J. A., Lazaro, L., Luna, B., McGrath, J., Medland, S. E., Muratori, F., Murphy, D. G., Neufeld, J., O’Hearn, K., Oranje, B., Parellada, M., Pariente, J. C., Postema, M., Remnelius, K. L., Retico, A., Rosa, P. G. P., Rubia, K., Shook, D., Tammimies, K., Taylor, M. J., Tosetti, M., Wallace, G. L., Zhou, F., Thompson, P. M., Fisher, S. E., Buitelaar, J. K., & Francks, C. (2022). Subtly altered topological asymmetry of brain structural covariance networks in autism spectrum disorder across 43 datasets from the ENIGMA consortium. Molecular Psychiatry, 27, 2114-2125. doi:10.1038/s41380-022-01452-7.

    Abstract

    Small average differences in the left-right asymmetry of cerebral cortical thickness have been reported in individuals with autism spectrum disorder (ASD) compared to typically developing controls, affecting widespread cortical regions. The possible impacts of these regional alterations in terms of structural network effects have not previously been characterized. Inter-regional morphological covariance analysis can capture network connectivity between different cortical areas at the macroscale level. Here, we used cortical thickness data from 1455 individuals with ASD and 1560 controls, across 43 independent datasets of the ENIGMA consortium’s ASD Working Group, to assess hemispheric asymmetries of intra-individual structural covariance networks, using graph theory-based topological metrics. Compared with typical features of small-world architecture in controls, the ASD sample showed significantly altered average asymmetry of networks involving the fusiform, rostral middle frontal, and medial orbitofrontal cortex, involving higher randomization of the corresponding right-hemispheric networks in ASD. A network involving the superior frontal cortex showed decreased right-hemisphere randomization. Based on comparisons with meta-analyzed functional neuroimaging data, the altered connectivity asymmetry particularly affected networks that subserve executive functions, language-related and sensorimotor processes. These findings provide a network-level characterization of altered left-right brain asymmetry in ASD, based on a large combined sample. Altered asymmetrical brain development in ASD may be partly propagated among spatially distant regions through structural connectivity.
  • Shatzman, K. B. (2006). Sensitivity to detailed acoustic information in word recognition. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.59331.
  • Shatzman, K. B., & McQueen, J. M. (2006). Segment duration as a cue to word boundaries in spoken-word recognition. Perception & Psychophysics, 68(1), 1-16.

    Abstract

    In two eye-tracking experiments, we examined the degree to which listeners use acoustic cues to word boundaries. Dutch participants listened to ambiguous sentences in which stop-initial words (e.g., pot, jar) were preceded by eens (once); the sentences could thus also refer to cluster-initial words (e.g., een spot, a spotlight). The participants made fewer fixations to target pictures (e.g., a jar) when the target and the preceding [s] were replaced by a recording of the cluster-initial word than when they were spliced from another token of the target-bearing sentence (Experiment 1). Although acoustic analyses revealed several differences between the two recordings, only [s] duration correlated with the participants’ fixations (more target fixations for shorter [s]s). Thus, we found that listeners apparently do not use all available acoustic differences equally. In Experiment 2, the participants made more fixations to target pictures when the [s] was shortened than when it was lengthened. Utterance interpretation can therefore be influenced by individual segment duration alone.
  • Shatzman, K. B., & McQueen, J. M. (2006). Prosodic knowledge affects the recognition of newly acquired words. Psychological Science, 17(5), 372-377. doi:10.1111/j.1467-9280.2006.01714.x.

    Abstract

    An eye-tracking study examined the involvement of prosodic knowledge—specifically, the knowledge that monosyllabic words tend to have longer durations than the first syllables of polysyllabic words—in the recognition of newly learned words. Participants learned new spoken words (by associating them to novel shapes): bisyllables and onset-embedded monosyllabic competitors (e.g., baptoe and bap). In the learning phase, the duration of the ambiguous sequence (e.g., bap) was held constant. In the test phase, its duration was longer than, shorter than, or equal to its learning-phase duration. Listeners’ fixations indicated that short syllables tended to be interpreted as the first syllables of the bisyllables, whereas long syllables generated more monosyllabic-word interpretations. Recognition of newly acquired words is influenced by prior prosodic knowledge and is therefore not determined solely on the basis of stored episodes of those words.
  • Shatzman, K. B., & McQueen, J. M. (2006). The modulation of lexical competition by segment duration. Psychonomic Bulletin & Review, 13(6), 966-971.

    Abstract

    In an eye-tracking study, we examined how fine-grained phonetic detail, such as segment duration, influences the lexical competition process during spoken word recognition. Dutch listeners’ eye movements to pictures of four objects were monitored as they heard sentences in which a stop-initial target word (e.g., pijp “pipe”) was preceded by an [s]. The participants made more fixations to pictures of cluster-initial words (e.g., spijker “nail”) when they heard a long [s] (mean duration, 103 msec) than when they heard a short [s] (mean duration, 73 msec). Conversely, the participants made more fixations to pictures of the stop-initial words when they heard a short [s] than when they heard a long [s]. Lexical competition between stop- and cluster-initial words, therefore, is modulated by segment duration differences of only 30 msec.
  • Shebani, Z., Carota, F., Hauk, O., Rowe, J. B., Barsalou, L. W., Tomasello, R., & Pulvermüller, F. (2022). Brain correlates of action word memory revealed by fMRI. Scientific Reports, 12: 16053. doi:10.1038/s41598-022-19416-w.

    Abstract

    Understanding language semantically related to actions activates the motor cortex. This activation is sensitive to semantic information such as the body part used to perform the action (e.g. arm-/leg-related action words). Additionally, motor movements of the hands/feet can have a causal effect on memory maintenance of action words, suggesting that the involvement of motor systems extends to working memory. This study examined brain correlates of verbal memory load for action-related words using event-related fMRI. Seventeen participants saw either four identical or four different words from the same category (arm-/leg-related action words) then performed a nonmatching-to-sample task. Results show that verbal memory maintenance in the high-load condition produced greater activation in left premotor and supplementary motor cortex, along with posterior-parietal areas, indicating that verbal memory circuits for action-related words include the cortical action system. Somatotopic memory load effects of arm- and leg-related words were observed, but only at more anterior cortical regions than was found in earlier studies employing passive reading tasks. These findings support a neurocomputational model of distributed action-perception circuits (APCs), according to which language understanding is manifest as full ignition of APCs, whereas working memory is realized as reverberant activity receding to multimodal prefrontal and lateral temporal areas.

    Additional information

    supplementary figure S1 caption
  • Shen, C. (2022). Individual differences in speech production and maximum speech performance. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Shi, R., Werker, J. F., & Cutler, A. (2006). Recognition and representation of function words in English-learning infants. Infancy, 10(2), 187-198. doi:10.1207/s15327078in1002_5.

    Abstract

    We examined infants' recognition of functors and the accuracy of the representations that infants construct of the perceived word forms. Auditory stimuli were “Functor + Content Word” versus “Nonsense Functor + Content Word” sequences. Eight-, 11-, and 13-month-old infants heard both real functors and matched nonsense functors (prosodically analogous to their real counterparts but containing a segmental change). Results reveal that 13-month-olds recognized functors with attention to segmental detail. Eight-month-olds did not distinguish real versus nonsense functors. The performance of 11-month-olds fell in between that of the older and younger groups, consistent with an emerging recognition of real functors. The three age groups exhibited a clear developmental trend. We propose that in the earliest stages of vocabulary acquisition, function elements receive no segmentally detailed representations, but such representations are gradually constructed so that once vocabulary growth starts in earnest, fully specified functor representations are in place to support it.
  • Shi, R., Cutler, A., Werker, J., & Cruickshank, M. (2006). Frequency and form as determinants of functor sensitivity in English-acquiring infants. Journal of the Acoustical Society of America, 119(6), EL61-EL67. doi:10.1121/1.2198947.

    Abstract

    High-frequency functors are arguably among the earliest perceived word forms and may assist extraction of initial vocabulary items. Canadian 11- and 8-month-olds were familiarized to pseudo-nouns following either a high-frequency functor the or a low-frequency functor her versus phonetically similar mispronunciations of each, kuh and ler, and then tested for recognition of the pseudo-nouns. A preceding the (but not kuh, her, ler) facilitated extraction of the pseudo-nouns for 11-month-olds; the is thus well-specified in form for these infants. However, both the and kuh (but not her, ler) facilitated segmentation for 8-month-olds, suggesting an initial underspecified representation of high-frequency functors.
  • Shopen, T., Reid, N., Shopen, G., & Wilkins, D. G. (1997). Ensuring the survival of Aboriginal and Torres Strait Islander languages into the 21st century. Australian Review of Applied Linguistics, 10(1), 143-157.

    Abstract

    Aboriginal languages are threatened by speakers' poor economic and social conditions; some may survive through support for community development, language maintenance, bilingual education and training of Aboriginal teachers and linguists, and non-Aboriginal teachers of Aboriginal and Islander students.
  • Shukla, V., Long, M., & Rubio-Fernandez, P. (2022). Children’s acquisition of new/given markers in English, Hindi, Mandinka and Spanish: Exploring the effect of optionality during grammaticalization. Glossa Psycholinguistics, 1(1): 13. doi:10.5070/G6011120.

    Abstract

    We investigated the effect of optionality on the acquisition of new/given markers, with a special focus on grammaticalization as a stage of optional use of the emerging form. To this end, we conducted a narrative-elicitation task with 5-year-old children and adults across four typologically-distinct languages with different new/given markers: English, Hindi, Mandinka and Spanish. Our starting assumption was that the Hindi numeral ‘ek’ (one) is developing into an indefinite article, which should delay children’s acquisition because of its optional use to introduce discourse referents. Supporting the Optionality Hypothesis, Experiment 1 revealed that obligatory markers are acquired earlier than optional markers. Experiment 2 focused on Hindi and showed that 10-year-old children’s use of ‘ek’ to introduce discourse characters was higher than 5-year-olds’ and comparable to adults’, replicating this pattern of results in two different cities in Northern India. Lastly, a follow-up study showed that Mandinka-speaking children and adults made use of all available discourse markers when tested on a familiar story, rather than with pictorial prompts, highlighting the importance of using culturally-appropriate methods of narrative elicitation in cross-linguistic research. We conclude by discussing the implications of article grammaticalization for common ground management in a speech community.
  • Shukla, V., Long, M., Bhatia, V., & Rubio-Fernandez, P. (2022). Some sentences prime pragmatic reasoning in the verification and evaluation of comparisons. Journal of Experimental Psychology: Learning, Memory, and Cognition, 48(4), 569-582. doi:10.1037/xlm0001082.

    Abstract

    While most research on scalar implicature has focused on the lexical scale “some” vs “all,” here we investigated an understudied scale formed by two syntactic constructions: categorizations (e.g., “Wilma is a nurse”) and comparisons (“Wilma is like a nurse”). An experimental study by Rubio-Fernandez et al. (2017) showed high rates of logical responses to superordinate comparisons, even though they are underinformative when interpreted pragmatically (e.g., “A robin is like a bird” implies that a robin is not a bird). Based on recent studies on enrichment priming, we predicted that including “some” and “all” statements (which typically elicit high rates of pragmatic responses) in sentence verification and sentence evaluation tasks would introduce an informativity bias, increasing pragmatic responses to superordinate comparisons. The results of three Web-based experiments supported our predictions, showing that different scalar expressions not only give rise to different rates of scalar implicatures, but can also affect the degree to which an experimental task elicits pragmatic reasoning.
  • Sicoli, M. A., Majid, A., & Levinson, S. C. (2009). The language of sound: II. In A. Majid (Ed.), Field manual volume 12 (pp. 14-19). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446294.

    Abstract

    The task is designed to elicit vocabulary for simple sounds. The primary goal is to establish how people describe sound and what resources the language provides generally for encoding this domain. More specifically: (1) whether there is dedicated vocabulary for encoding simple sound contrasts and (2) how much consistency there is within a community in descriptions. This task builds on materials used in The language of sound.
  • Simon-Thomas, E. R., Keltner, D. J., Sauter, D., Sinicropi-Yao, L., & Abramson, A. (2009). The voice conveys specific emotions: Evidence from vocal burst displays. Emotion, 9, 838-846. doi:10.1037/a0017810.

    Abstract

    Studies of emotion signaling inform claims about the taxonomic structure, evolutionary origins, and physiological correlates of emotions. Emotion vocalization research has tended to focus on a limited set of emotions: anger, disgust, fear, sadness, surprise, happiness, and for the voice, also tenderness. Here, we examine how well brief vocal bursts can communicate 22 different emotions: 9 negative (Study 1) and 13 positive (Study 2), and whether prototypical vocal bursts convey emotions more reliably than heterogeneous vocal bursts (Study 3). Results show that vocal bursts communicate emotions like anger, fear, and sadness, as well as seldom-studied states like awe, compassion, interest, and embarrassment. Ancillary analyses reveal family-wise patterns of vocal burst expression. Errors in classification were more common within emotion families (e.g., ‘self-conscious,’ ‘pro-social’) than between emotion families. The three studies reported highlight the voice as a rich modality for emotion display that can inform fundamental constructs about emotion.
  • Skiba, R. (2006). Computeranalyse/Computer Analysis. In U. Ammon, N. Dittmar, K. Mattheier, & P. Trudgill (Eds.), Sociolinguistics: An international handbook of the science of language and society [2nd completely revised and extended edition] (pp. 1187-1197). Berlin, New York: de Gruyter.
  • Skiba, R. (1991). Eine Datenbank für Deutsch als Zweitsprache Materialien: Zum Einsatz von PC-Software bei Planung von Zweitsprachenunterricht. In H. Barkowski, & G. Hoff (Eds.), Berlin interkulturell: Ergebnisse einer Berliner Konferenz zu Migration und Pädagogik. (pp. 131-140). Berlin: Colloquium.
  • Skiba, R. (1998). Fachsprachenforschung in wissenschaftstheoretischer Perspektive. Tübingen: Gunter Narr.
  • Slivac, K. (2022). The enlanguaged brain: Cognitive and neural mechanisms of linguistic influence on perception. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Slobin, D. I. (2002). Cognitive and communicative consequences of linguistic diversity. In S. Strömqvist (Ed.), The diversity of languages and language learning (pp. 7-23). Lund, Sweden: Lund University, Centre for Languages and Literature.
  • Slonimska, A. (2022). The role of iconicity and simultaneity in efficient communication in the visual modality: Evidence from LIS (Italian Sign Language). PhD Thesis, Radboud University, Nijmegen.
  • Slonimska, A., Özyürek, A., & Capirci, O. (2022). Simultaneity as an emergent property of efficient communication in language: A comparison of silent gesture and sign language. Cognitive Science, 46(5): e13133. doi:10.1111/cogs.13133.

    Abstract

    Sign languages use multiple articulators and iconicity in the visual modality which allow linguistic units to be organized not only linearly but also simultaneously. Recent research has shown that users of an established sign language such as LIS (Italian Sign Language) use simultaneous and iconic constructions as a modality-specific resource to achieve communicative efficiency when they are required to encode informationally rich events. However, it remains to be explored whether the use of such simultaneous and iconic constructions recruited for communicative efficiency can be employed even without a linguistic system (i.e., in silent gesture) or whether they are specific to linguistic patterning (i.e., in LIS). In the present study, we conducted the same experiment as in Slonimska et al. with 23 Italian speakers using silent gesture and compared the results of the two studies. The findings showed that while simultaneity was afforded by the visual modality to some extent, its use in silent gesture was nevertheless less frequent and qualitatively different than when used within a linguistic system. Thus, the use of simultaneous and iconic constructions for communicative efficiency constitutes an emergent property of sign languages. The present study highlights the importance of studying modality-specific resources and their use for linguistic expression in order to promote a more thorough understanding of the language faculty and its modality-specific adaptive capabilities.
  • Slonimska, A., Özyürek, A., & Capirci, O. (2022). Simultaneity as an emergent property of sign languages. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 678-680). Nijmegen: Joint Conference on Language Evolution (JCoLE).
  • Smalley, S. L., Kustanovich, V., Minassian, S. L., Stone, J. L., Ogdie, M. N., McGough, J. J., McCracken, J. T., MacPhie, I. L., Francks, C., Fisher, S. E., Cantor, R. M., Monaco, A. P., & Nelson, S. F. (2002). Genetic linkage of Attention-Deficit/Hyperactivity Disorder on chromosome 16p13, in a region implicated in autism. American Journal of Human Genetics, 71(4), 959-963. doi:10.1086/342732.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD) is the most commonly diagnosed behavioral disorder in childhood and likely represents an extreme of normal behavior. ADHD significantly impacts learning in school-age children and leads to impaired functioning throughout the life span. There is strong evidence for a genetic etiology of the disorder, although putative alleles, principally in dopamine-related pathways suggested by candidate-gene studies, have very small effect sizes. We use affected-sib-pair analysis in 203 families to localize the first major susceptibility locus for ADHD to a 12-cM region on chromosome 16p13 (maximum LOD score 4.2; P=.000005), building upon an earlier genomewide scan of this disorder. The region overlaps that highlighted in three genome scans for autism, a disorder in which inattention and hyperactivity are common, and physically maps to a 7-Mb region on 16p13. These findings suggest that variations in a gene on 16p13 may contribute to common deficits found in both ADHD and autism.
  • De Smedt, K., & Kempen, G. (1991). Segment Grammar: A formalism for incremental sentence generation. In C. Paris, W. Swartout, & W. Mann (Eds.), Natural language generation and computational linguistics (pp. 329-349). Dordrecht: Kluwer Academic Publishers.

    Abstract

    Incremental sentence generation imposes special constraints on the representation of the grammar and the design of the formulator (the module which is responsible for constructing the syntactic and morphological structure). In the model of natural speech production presented here, a formalism called Segment Grammar is used for the representation of linguistic knowledge. We give a definition of this formalism and present a formulator design which relies on it. Next, we present an object-oriented implementation of Segment Grammar. Finally, we compare Segment Grammar with other formalisms.
  • Smits, R. (1998). A model for dependencies in phonetic categorization. Proceedings of the 16th International Congress on Acoustics and the 135th Meeting of the Acoustical Society of America, 2005-2006.

    Abstract

    A quantitative model of human categorization behavior is proposed, which can be applied to 4-alternative forced-choice categorization data involving two binary classifications. A number of processing dependencies between the two classifications are explicitly formulated, such as the dependence of the location, orientation, and steepness of the class boundary for one classification on the outcome of the other classification. The significance of various types of dependencies can be tested statistically. Analyses of a data set from the literature show that interesting dependencies in human speech recognition can be uncovered using the model.
  • Smits, R., Sereno, J., & Jongman, A. (2006). Categorization of sounds. Journal of Experimental Psychology: Human Perception and Performance, 32(3), 733-754. doi:10.1037/0096-1523.32.3.733.

    Abstract

    The authors conducted 4 experiments to test the decision-bound, prototype, and distribution theories for the categorization of sounds. They used as stimuli sounds varying in either resonance frequency or duration. They created different experimental conditions by varying the variance and overlap of 2 stimulus distributions used in a training phase and varying the size of the stimulus continuum used in the subsequent test phase. When resonance frequency was the stimulus dimension, the pattern of categorization-function slopes was in accordance with the decision-bound theory. When duration was the stimulus dimension, however, the slope pattern gave partial support for the decision-bound and distribution theories. The authors introduce a new categorization model combining aspects of decision-bound and distribution theories that gives a superior account of the slope patterns across the 2 stimulus dimensions.
  • Snijders, T. M., Vosse, T., Kempen, G., Van Berkum, J. J. A., Petersson, K. M., & Hagoort, P. (2009). Retrieval and unification of syntactic structure in sentence comprehension: An fMRI study using word-category ambiguity. Cerebral Cortex, 19, 1493-1503. doi:10.1093/cercor/bhn187.

    Abstract

    Sentence comprehension requires the retrieval of single word information from long-term memory, and the integration of this information into multiword representations. The current functional magnetic resonance imaging study explored the hypothesis that the left posterior temporal gyrus supports the retrieval of lexical-syntactic information, whereas left inferior frontal gyrus (LIFG) contributes to syntactic unification. Twenty-eight subjects read sentences and word sequences containing word-category (noun–verb) ambiguous words at critical positions. Regions contributing to the syntactic unification process should show enhanced activation for sentences compared to words, and only within sentences display a larger signal for ambiguous than unambiguous conditions. The posterior LIFG showed exactly this predicted pattern, confirming our hypothesis that LIFG contributes to syntactic unification. The left posterior middle temporal gyrus was activated more for ambiguous than unambiguous conditions (main effect over both sentences and word sequences), as predicted for regions subserving the retrieval of lexical-syntactic information from memory. We conclude that understanding language involves the dynamic interplay between left inferior frontal and left posterior temporal regions.

    Additional information

    suppl1.pdf suppl2_dutch_stimulus.pdf
  • Snowdon, C. T., & Cronin, K. A. (2009). Comparative cognition and neuroscience. In G. Berntson, & J. Cacioppo (Eds.), Handbook of neuroscience for the behavioral sciences (pp. 32-55). Hoboken, NJ: Wiley.
  • Sønderby, I. E., Ching, C. R. K., Thomopoulos, S. I., Van der Meer, D., Sun, D., Villalon‐Reina, J. E., Agartz, I., Amunts, K., Arango, C., Armstrong, N. J., Ayesa‐Arriola, R., Bakker, G., Bassett, A. S., Boomsma, D. I., Bülow, R., Butcher, N. J., Calhoun, V. D., Caspers, S., Chow, E. W. C., Cichon, S., Ciufolini, S., Craig, M. C., Crespo‐Facorro, B., Cunningham, A. C., Dale, A. M., Dazzan, P., De Zubicaray, G. I., Djurovic, S., Doherty, J. L., Donohoe, G., Draganski, B., Durdle, C. A., Ehrlich, S., Emanuel, B. S., Espeseth, T., Fisher, S. E., Ge, T., Glahn, D. C., Grabe, H. J., Gur, R. E., Gutman, B. A., Haavik, J., Håberg, A. K., Hansen, L. A., Hashimoto, R., Hibar, D. P., Holmes, A. J., Hottenga, J., Hulshoff Pol, H. E., Jalbrzikowski, M., Knowles, E. E. M., Kushan, L., Linden, D. E. J., Liu, J., Lundervold, A. J., Martin‐Brevet, S., Martínez, K., Mather, K. A., Mathias, S. R., McDonald‐McGinn, D. M., McRae, A. F., Medland, S. E., Moberget, T., Modenato, C., Monereo Sánchez, J., Moreau, C. A., Mühleisen, T. W., Paus, T., Pausova, Z., Prieto, C., Ragothaman, A., Reinbold, C. S., Reis Marques, T., Repetto, G. M., Reymond, A., Roalf, D. R., Rodriguez‐Herreros, B., Rucker, J. J., Sachdev, P. S., Schmitt, J. E., Schofield, P. R., Silva, A. I., Stefansson, H., Stein, D. J., Tamnes, C. K., Tordesillas‐Gutiérrez, D., Ulfarsson, M. O., Vajdi, A., Van 't Ent, D., Van den Bree, M. B. M., Vassos, E., Vázquez‐Bourgon, J., Vila‐Rodriguez, F., Walters, G. B., Wen, W., Westlye, L. T., Wittfeld, K., Zackai, E. H., Stefánsson, K., Jacquemont, S., Thompson, P. M., Bearden, C. E., Andreassen, O. A., the ENIGMA-CNV Working Group, & the ENIGMA 22q11.2 Deletion Syndrome Working Group (2022). Effects of copy number variations on brain structure and risk for psychiatric illness: Large‐scale studies from the ENIGMA working groups on CNVs. Human Brain Mapping, 43(1), 300-328. doi:10.1002/hbm.25354.

    Abstract

    The Enhancing NeuroImaging Genetics through Meta‐Analysis copy number variant (ENIGMA‐CNV) and 22q11.2 Deletion Syndrome Working Groups (22q‐ENIGMA WGs) were created to gain insight into the involvement of genetic factors in human brain development and related cognitive, psychiatric and behavioral manifestations. To that end, the ENIGMA‐CNV WG has collated CNV and magnetic resonance imaging (MRI) data from ~49,000 individuals across 38 global research sites, yielding one of the largest studies to date on the effects of CNVs on brain structures in the general population. The 22q‐ENIGMA WG includes 12 international research centers that assessed over 533 individuals with a confirmed 22q11.2 deletion syndrome, 40 with 22q11.2 duplications, and 333 typically developing controls, creating the largest‐ever 22q11.2 CNV neuroimaging data set. In this review, we outline the ENIGMA infrastructure and procedures for multi‐site analysis of CNVs and MRI data. So far, ENIGMA has identified effects of the 22q11.2, 16p11.2 distal, 15q11.2, and 1q21.1 distal CNVs on subcortical and cortical brain structures. Each CNV is associated with differences in cognitive, neurodevelopmental and neuropsychiatric traits, with characteristic patterns of brain structural abnormalities. Evidence of gene‐dosage effects on distinct brain regions also emerged, providing further insight into genotype–phenotype relationships. Taken together, these results offer a more comprehensive picture of molecular mechanisms involved in typical and atypical brain development. This “genotype‐first” approach also contributes to our understanding of the etiopathogenesis of brain disorders. Finally, we outline future directions to better understand effects of CNVs on brain structure and behavior.
  • Sotaro, K., & Dickey, L. W. (Eds.). (1998). Max Planck Institute for Psycholinguistics: Annual report 1998. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Spinelli, E., Cutler, A., & McQueen, J. M. (2002). Resolution of liaison for lexical access in French. Revue Française de Linguistique Appliquée, 7, 83-96.

    Abstract

    Spoken word recognition involves automatic activation of lexical candidates compatible with the perceived input. In running speech, words abut one another without intervening gaps, and syllable boundaries can mismatch with word boundaries. For instance, liaison in ’petit agneau’ creates a syllable beginning with a consonant although ’agneau’ begins with a vowel. In two cross-modal priming experiments we investigate how French listeners recognise words in liaison environments. These results suggest that the resolution of liaison in part depends on acoustic cues which distinguish liaison from non-liaison consonants, and in part on the availability of lexical support for a liaison interpretation.
  • Sprenger, S. A., Levelt, W. J. M., & Kempen, G. (2006). Lexical access during the production of idiomatic phrases. Journal of Memory and Language, 54(2), 161-184. doi:10.1016/j.jml.2005.11.001.

    Abstract

    In three experiments we test the assumption that idioms have their own lexical entry, which is linked to its constituent lemmas (Cutting & Bock, 1997). Speakers produced idioms or literal phrases (Experiment 1), completed idioms (Experiment 2), or switched between idiom completion and naming (Experiment 3). The results of Experiment 1 show that identity priming speeds up idiom production more effectively than literal phrase production, indicating a hybrid representation of idioms. In Experiment 2, we find effects of both phonological and semantic priming. Thus, elements of an idiom can not only be primed via their wordform, but also via the conceptual level. The results of Experiment 3 show that preparing the last word of an idiom primes naming of both phonologically and semantically related targets, indicating that literal word meanings become active during idiom production. The results are discussed within the framework of the hybrid model of idiom representation.
  • Stärk, K., Kidd, E., & Frost, R. L. A. (2022). Word segmentation cues in German child-directed speech: A corpus analysis. Language and Speech, 65(1), 3-27. doi:10.1177/0023830920979016.

    Abstract

    To acquire language, infants must learn to segment words from running speech. A significant body of experimental research shows that infants use multiple cues to do so; however, little research has comprehensively examined the distribution of such cues in naturalistic speech. We conducted a comprehensive corpus analysis of German child-directed speech (CDS) using data from the Child Language Data Exchange System (CHILDES) database, investigating the availability of word stress, transitional probabilities (TPs), and lexical and sublexical frequencies as potential cues for word segmentation. Seven hours of data (~15,000 words) were coded, representing around an average day of speech to infants. The analysis revealed that for 97% of words, primary stress was carried by the initial syllable, implicating stress as a reliable cue to word onset in German CDS. Word identity was also marked by TPs between syllables, which were higher within than between words, and higher for backwards than forwards transitions. Words followed a Zipfian-like frequency distribution, and over two-thirds of words (78%) were monosyllabic. Of the 50 most frequent words, 82% were function words, which accounted for 47% of word tokens in the entire corpus. Finally, 15% of all utterances comprised single words. These results give rich novel insights into the availability of segmentation cues in German CDS, and support the possibility that infants draw on multiple converging cues to segment their input. The data, which we make openly available to the research community, will help guide future experimental investigations on this topic.

    Additional information

    Supplemental material via OSF
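
    To make the transitional-probability measure in the abstract above concrete, here is a minimal Python sketch of forward and backward TPs between adjacent syllables; the toy utterances stand in for the CHILDES German child-directed speech analysed in the paper.

```python
# Minimal sketch of forward and backward transitional probabilities (TPs)
# between adjacent syllables, using toy syllabified utterances in place of
# the CHILDES German child-directed speech analysed in the paper.
from collections import Counter

# Each utterance is a list of syllables (made-up, roughly German-like).
utterances = [
    ["da", "ist", "der", "ha", "se"],
    ["der", "ha", "se", "schläft"],
    ["wo", "ist", "der", "ha", "se"],
]

pair_counts = Counter()
syll_counts = Counter()
for utt in utterances:
    syll_counts.update(utt)
    pair_counts.update(zip(utt, utt[1:]))

def forward_tp(a, b):
    """P(b | a): how predictable syllable b is given the preceding syllable a."""
    return pair_counts[(a, b)] / syll_counts[a] if syll_counts[a] else 0.0

def backward_tp(a, b):
    """P(a | b): how predictable the preceding syllable a is given b."""
    return pair_counts[(a, b)] / syll_counts[b] if syll_counts[b] else 0.0

# Within-word transition (high TP) vs. between-word transition (lower TP).
print(forward_tp("ha", "se"))        # 1.0
print(forward_tp("se", "schläft"))   # ~0.33
```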
  • Stärk, K., Kidd, E., & Frost, R. L. A. (2022). The effect of children's prior knowledge and language abilities on their statistical learning. Applied Psycholinguistics, 43(5), 1045-1071. doi:10.1017/S0142716422000273.

    Abstract

    Statistical learning (SL) is assumed to lead to long-term memory representations. However, the way that those representations influence future learning remains largely unknown. We studied how children’s existing distributional linguistic knowledge influences their subsequent SL on a serial recall task, in which 49 German-speaking seven- to nine-year-old children repeated a series of six-syllable sequences. These contained either (i) bisyllabic words based on frequently occurring German syllable transitions (naturalistic sequences), (ii) bisyllabic words created from unattested syllable transitions (non-naturalistic sequences), or (iii) random syllable combinations (unstructured foils). Children demonstrated learning from naturalistic sequences from the beginning of the experiment, indicating that their implicit memory traces derived from their input language informed learning from the very early stages onward. Exploratory analyses indicated that children with a higher language proficiency were more accurate in repeating the sequences and improved most throughout the study compared to children with lower proficiency.
  • Stehouwer, H. (2006). Cue phrase selection methods for textual classification problems. Master Thesis, Twente University, Enschede.

    Abstract

    The classification of texts and pieces of texts uses the occurrence of, combinations of, words as an important indicator. Not every word or each combination of words gives a clear indication of the classification of a piece of text. Research has been done on methods that select some words or combinations of words that are more indicative of the type of a piece of text. These words or combinations of words are selected from the words and word-groups as they occur in the texts. These more indicative words or combinations of words we call ¿cue-phrases¿. The goal of these methods is to select the most indicative cue-phrases first. The collection of selected words and/or combinations thereof can then be used for training the classification system. To test these selection methods, a number of experiments has been done on a corpus containing cookbook recipes and on a corpus of four-participant meetings. To perform these experiments, a computer program was written. On the recipe corpus we looked at classifying the sentences into different types. Some examples of these types include ¿requirement¿ and ¿instruction¿. On the four-person meeting corpus we tried to learn, using only lexical features, whether a sentence is addressed to an individual or a group. The experiments on the recipe corpus produced good results that showed that, a number of, the used cue-phrase selection methods are suitable for feature selection. The experiments on the four-person meeting corpus where less successful in terms of performance off the classification task. We did see comparable patterns in selection methods, and considering the results of Jovanovic we can conclude that different features are needed for this particular classification task. One of the original goals was to look at ¿addressee¿ in discussions. Are sentences more often addressed to individuals inside discussions compared to outside discussions? However, in order to be able to accomplish this, we must first identify the segments of the text that are discussions. It proved hard to come to a reliable specification of discussions, and our initial definition wasn¿t sufficient.
  • Stehouwer, H., & Van Zaanen, M. (2009). Language models for contextual error detection and correction. In Proceedings of the EACL 2009 Workshop on Computational Linguistic Aspects of Grammatical Inference (pp. 41-48). Association for Computational Linguistics.

    Abstract

    The problem of identifying and correcting confusibles, i.e. context-sensitive spelling errors, in text is typically tackled using specifically trained machine learning classifiers. For each different set of confusibles, a specific classifier is trained and tuned. In this research, we investigate a more generic approach to context-sensitive confusible correction. Instead of using specific classifiers, we use one generic classifier based on a language model. This measures the likelihood of sentences with different possible solutions of a confusible in place. The advantage of this approach is that all confusible sets are handled by a single model. Preliminary results show that the performance of the generic classifier approach is only slightly worse than that of the specific classifier approach.
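
    A schematic version of the generic approach described above, scoring each confusible alternative in context with a single language model and keeping the most likely sentence, might look like the following sketch; the toy bigram model and word lists are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' system): resolve a confusible by
# inserting each alternative into the sentence and letting one generic
# language model score the resulting candidates.
import math
from collections import Counter

# Toy bigram counts standing in for a properly trained language model.
training = "we like this better than ever and then we went home".split()
bigrams = Counter(zip(training, training[1:]))
unigrams = Counter(training)
V = len(unigrams)

def sentence_logprob(words):
    """Add-one-smoothed bigram log-probability of a word sequence."""
    logp = 0.0
    for a, b in zip(words, words[1:]):
        logp += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
    return logp

def correct_confusible(sentence, position, alternatives):
    """Return the alternative that yields the most probable sentence."""
    best = None
    for alt in alternatives:
        candidate = sentence[:position] + [alt] + sentence[position + 1:]
        score = sentence_logprob(candidate)
        if best is None or score > best[1]:
            best = (alt, score)
    return best[0]

sentence = "it was better then ever".split()
print(correct_confusible(sentence, 3, ["then", "than"]))  # -> "than"
```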
  • Stehouwer, H., & Van Zaanen, M. (2009). Token merging in language model-based confusible disambiguation. In T. Calders, K. Tuyls, & M. Pechenizkiy (Eds.), Proceedings of the 21st Benelux Conference on Artificial Intelligence (pp. 241-248).

    Abstract

    In the context of confusible disambiguation (spelling correction that requires context), the synchronous back-off strategy combined with traditional n-gram language models performs well. However, when alternatives consist of a different number of tokens, this classification technique cannot be applied directly, because the computation of the probabilities is skewed. Previous work already showed that probabilities based on different order n-grams should not be compared directly. In this article, we propose new probability metrics in which the size of the n is varied according to the number of tokens of the confusible alternative. This requires access to n-grams of variable length. Results show that the synchronous back-off method is extremely robust. We discuss the use of suffix trees as a technique to store variable length n-gram information efficiently.
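
    The approach above relies on efficient access to n-grams of variable length (the paper discusses suffix trees for this purpose). As a simplified stand-in, the sketch below stores n-gram counts in a plain prefix trie so that counts for any n can be looked up.

```python
# Simplified stand-in for the variable-length n-gram storage discussed above:
# a prefix trie of token counts supports counting and lookup of n-grams of
# any length n (the paper itself discusses suffix trees for this purpose).
class NgramTrie:
    def __init__(self):
        self.count = 0
        self.children = {}

    def add(self, tokens):
        """Insert one n-gram and, implicitly, all of its prefixes."""
        node = self
        for tok in tokens:
            node = node.children.setdefault(tok, NgramTrie())
            node.count += 1

    def lookup(self, tokens):
        """Return the stored count of an n-gram of arbitrary length."""
        node = self
        for tok in tokens:
            node = node.children.get(tok)
            if node is None:
                return 0
        return node.count

trie = NgramTrie()
corpus = "the cat sat on the mat".split()
max_n = 3
for i in range(len(corpus)):
    trie.add(corpus[i:i + max_n])  # stores the 1-, 2- and 3-grams starting at i

print(trie.lookup(["the"]), trie.lookup(["the", "cat"]), trie.lookup(["the", "mat"]))
```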
  • Stewart, A. J., Kidd, E., & Haigh, M. (2009). Early sensitivity to discourse-level anomalies: Evidence from self-paced reading. Discourse Processes, 46(1), 46-69. doi:10.1080/01638530802629091.

    Abstract

    Two word-by-word, self-paced reading experiments investigated the speed with which readers were sensitive to discourse-level anomalies. An account arguing for delayed sensitivity (Guzman & Klin, 2000) was contrasted with one allowing for rapid sensitivity (Myers & O'Brien, 1998). Anomalies related to spatial information (Experiment 1) and character-attribute information (Experiment 2) were examined. Both experiments found that readers displayed rapid sensitivity to the anomalous information. A reading time penalty was observed for the region of text containing the anomalous information. This finding is most compatible with an account of text processing whereby incoming words are rapidly evaluated with respect to prior context. They are not consistent with an account that argues for delayed integration. Results are discussed in light of their implications for competing models of text processing.
  • Stewart, A. J., Haigh, M., & Kidd, E. (2009). An investigation into the online processing of counterfactual and indicative conditionals. Quarterly Journal of Experimental Psychology, 62(11), 2113-2125. doi:10.1080/17470210902973106.

    Abstract

    The ability to represent conditional information is central to human cognition. In two self-paced reading experiments we investigated how readers process counterfactual conditionals (e.g., If Darren had been athletic, he could probably have played on the rugby team) and indicative conditionals (e.g., If Darren is athletic, he probably plays on the rugby team). In Experiment 1 we focused on how readers process counterfactual conditional sentences. We found that processing of the antecedent of counterfactual conditionals was rapidly constrained by prior context (i.e., knowing whether Darren was or was not athletic). A reading-time penalty was observed for the critical region of text comprising the last word of the antecedent and the first word of the consequent when the information in the antecedent did not fit with prior context. In Experiment 2 we contrasted counterfactual conditionals with indicative conditionals. For counterfactual conditionals we found the same effect on the critical region as we found in Experiment 1. In contrast, however, we found no evidence that processing of the antecedent of indicative conditionals was constrained by prior context. For indicative conditionals (but not for counterfactual conditionals), the results we report are consistent with the suppositional account of conditionals. We propose that current theories of conditionals need to be able to account for online processing differences between indicative and counterfactual conditionals.
  • Stivers, T. (2006). Treatment decisions: negotiations between doctors and parents in acute care encounters. In J. Heritage, & D. W. Maynard (Eds.), Communication in medical care: Interaction between primary care physicians and patients (pp. 279-312). Cambridge: Cambridge University Press.
  • Stivers, T. (2002). 'Symptoms only' and 'Candidate diagnoses': Presenting the problem in pediatric encounters. Health Communication, 14(3), 299-338.
  • Stivers, T., & Robinson, J. D. (2006). A preference for progressivity in interaction. Language in Society, 35(3), 367-392. doi:10.1017/S0047404506060179.

    Abstract

    This article investigates two types of preference organization in interaction: in response to a question that selects a next speaker in multi-party interaction, the preference for answers over non-answer responses as a category of a response; and the preference for selected next speakers to respond. It is asserted that the turn allocation rule specified by Sacks, Schegloff & Jefferson (1974) which states that a response is relevant by the selected next speaker at the transition relevance place is affected by these two preferences once beyond a normal transition space. It is argued that a “second-order” organization is present such that interactants prioritize a preference for answers over a preference for a response by the selected next speaker. This analysis reveals an observable preference for progressivity in interaction.
  • Stivers, T. (2002). Overt parent pressure for antibiotic medication in pediatric encounters. Social Science and Medicine, 54(7), 1111-1130.
  • Stivers, T. (1998). Prediagnostic commentary in veterinarian-client interaction. Research on Language and Social Interaction, 31(2), 241-277. doi:10.1207/s15327973rlsi3102_4.
  • Stivers, T., Enfield, N. J., Brown, P., Englert, C., Hayashi, M., Heinemann, T., Hoymann, G., Rossano, F., De Ruiter, J. P., Yoon, K.-E., & Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences of the United States of America, 106(26), 10587-10592. doi:10.1073/pnas.0903616106.

    Abstract

    Informal verbal interaction is the core matrix for human social life. A mechanism for coordinating this basic mode of interaction is a system of turn-taking that regulates who is to speak and when. Yet relatively little is known about how this system varies across cultures. The anthropological literature reports significant cultural differences in the timing of turn-taking in ordinary conversation. We test these claims and show that in fact there are striking universals in the underlying pattern of response latency in conversation. Using a worldwide sample of 10 languages drawn from traditional indigenous communities to major world languages, we show that all of the languages tested provide clear evidence for a general avoidance of overlapping talk and a minimization of silence between conversational turns. In addition, all of the languages show the same factors explaining within-language variation in speed of response. We do, however, find differences across the languages in the average gap between turns, within a range of 250 ms from the cross-language mean. We believe that a natural sensitivity to these tempo differences leads to a subjective perception of dramatic or even fundamental differences as offered in ethnographic reports of conversational style. Our empirical evidence suggests robust human universals in this domain, where local variations are quantitative only, pointing to a single shared infrastructure for language use with likely ethological foundations.

    Additional information

    Stivers_2009_universals_suppl.pdf
  • Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2022). Feature generalization in Dutch–German bilingual and monolingual children’s speech production. First Language, 42(1), 101-123. doi:10.1177/01427237211058937.

    Abstract

    Dutch and German employ voicing contrasts, but Dutch lacks the ‘voiced’ dorsal plosive /ɡ/. We exploited this accidental phonological gap, measuring the presence of prevoicing and voice onset time durations during speech production to determine (1) whether preliterate bilingual Dutch–German and monolingual Dutch-speaking children aged 3;6–6;0 years generalized voicing to /ɡ/ in Dutch; and (2) whether there was evidence for featural cross-linguistic influence from Dutch to German in bilingual children, testing monolingual German-speaking children as controls. Bilingual and monolingual children’s production of /ɡ/ provided partial evidence for feature generalization: in Dutch, both bilingual and monolingual children either recombined Dutch voicing and place features to produce /ɡ/, suggesting feature generalization, or resorted to producing familiar /k/, suggesting segment-level adaptation within their Dutch phonological system. In German, bilingual children’s production of /ɡ/ was influenced by Dutch although the Dutch phoneme inventory lacks /ɡ/. This suggests that not only segments but also voicing features can exert cross-linguistic influence. Taken together, phonological features appear to play a crucial role in aspects of bilingual and monolingual children’s speech production.

    Additional information

    supplemental material
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Strauß, A., Wu, T., McQueen, J. M., Scharenborg, O., & Hintz, F. (2022). The differential roles of lexical and sublexical processing during spoken-word recognition in clear and in noise. Cortex, 151, 70-88. doi:10.1016/j.cortex.2022.02.011.

    Abstract

    Successful spoken-word recognition relies on an interplay between lexical and sublexical processing. Previous research demonstrated that listeners readily shift between more lexically-biased and more sublexically-biased modes of processing in response to the situational context in which language comprehension takes place. Recognizing words in the presence of background noise reduces the perceptual evidence for the speech signal and – compared to the clear – results in greater uncertainty. It has been proposed that, when dealing with greater uncertainty, listeners rely more strongly on sublexical processing. The present study tested this proposal using behavioral and electroencephalography (EEG) measures. We reasoned that such an adjustment would be reflected in changes in the effects of variables predicting recognition performance with loci at lexical and sublexical levels, respectively. We presented native speakers of Dutch with words featuring substantial variability in (1) word frequency (locus at lexical level), (2) phonological neighborhood density (loci at lexical and sublexical levels) and (3) phonotactic probability (locus at sublexical level). Each participant heard each word in noise (presented at one of three signal-to-noise ratios) and in the clear and performed a two-stage lexical decision and transcription task while EEG was recorded. Using linear mixed-effects analyses, we observed behavioral evidence that listeners relied more strongly on sublexical processing when speech quality decreased. Mixed-effects modelling of the EEG signal in the clear condition showed that sublexical effects were reflected in early modulations of ERP components (e.g., within the first 300 ms post word onset). In noise, EEG effects occurred later and involved multiple regions activated in parallel. Taken together, we found evidence – especially in the behavioral data – supporting previous accounts that the presence of background noise induces a stronger reliance on sublexical processing.
  • Sumer, B., & Özyürek, A. (2022). Cross-modal investigation of event component omissions in language development: A comparison of signing and speaking children. Language, Cognition and Neuroscience, 37(8), 1023-1039. doi:10.1080/23273798.2022.2042336.

    Abstract

    Language development research suggests a universal tendency for children to be under-informative in narrating motion events by omitting components such as Path, Manner or Ground. However, this assumption has not been tested for children acquiring sign language. Due to the affordances of the visual-spatial modality of sign languages for iconic expression, signing children might omit event components less frequently than speaking children. Here we analysed motion event descriptions elicited from deaf children (4–10 years) acquiring Turkish Sign Language (TİD) and their Turkish-speaking peers. While children omitted all types of event components more often than adults, signing children and adults encoded more Path and Manner in TİD than their peers in Turkish. These results provide more evidence for a general universal tendency for children to omit event components as well as a modality bias for sign languages to encode both Manner and Path more frequently than spoken languages.
  • Sumer, B., & Özyürek, A. (2022). Language use in deaf children with early-signing versus late-signing deaf parents. Frontiers in Communication, 6: 804900. doi:10.3389/fcomm.2021.804900.

    Abstract

    Previous research has shown that spatial language is sensitive to the effects of delayed language exposure. Locative encodings of late-signing deaf adults varied from those of early-signing deaf adults in the preferred types of linguistic forms. In the current study, we investigated whether such differences would be found in spatial language use of deaf children with deaf parents who are either early or late signers of Turkish Sign Language (TİD). We analyzed locative encodings elicited from these two groups of deaf children for the use of different linguistic forms and the types of classifier handshapes. Our findings revealed differences between these two groups of deaf children in their preferred types of linguistic forms, which showed parallels to differences between late versus early deaf adult signers as reported by earlier studies. Deaf children in the current study, however, were similar to each other in the type of classifier handshapes that they used in their classifier constructions. Our findings have implications for expanding current knowledge on to what extent variation in language input (i.e., from early vs. late deaf signers) is reflected in children’s productions as well as the role of linguistic input on language development in general.
  • Suomi, K., McQueen, J. M., & Cutler, A. (1997). Vowel harmony and speech segmentation in Finnish. Journal of Memory and Language, 36, 422-444. doi:10.1006/jmla.1996.2495.

    Abstract

    Finnish vowel harmony rules require that if the vowel in the first syllable of a word belongs to one of two vowel sets, then all subsequent vowels in that word must belong either to the same set or to a neutral set. A harmony mismatch between two syllables containing vowels from the opposing sets thus signals a likely word boundary. We report five experiments showing that Finnish listeners can exploit this information in an on-line speech segmentation task. Listeners found it easier to detect words like hymy at the end of the nonsense string puhymy (where there is a harmony mismatch between the first two syllables) than in the string pyhymy (where there is no mismatch). There was no such effect, however, when the target words appeared at the beginning of the nonsense string (e.g., hymypu vs. hymypy). Stronger harmony effects were found for targets containing front harmony vowels (e.g., hymy) than for targets containing back harmony vowels (e.g., palo in kypalo and kupalo). The same pattern of results appeared whether target position within the string was predictable or unpredictable. Harmony mismatch thus appears to provide a useful segmentation cue for the detection of word onsets in Finnish speech.
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1997). Spoken sentence comprehension in aphasia: Event-related potential evidence for a lexical integration deficit. Journal of Cognitive Neuroscience, 9(1), 39-66.

    Abstract

    In this study the N400 component of the event-related potential was used to investigate spoken sentence understanding in Broca's and Wernicke's aphasics. The aim of the study was to determine whether spoken sentence comprehension problems in these patients might result from a deficit in the on-line integration of lexical information. Subjects listened to sentences spoken at a normal rate. In half of these sentences, the meaning of the final word of the sentence matched the semantic specifications of the preceding sentence context. In the other half of the sentences, the sentence-final word was anomalous with respect to the preceding sentence context. The N400 was measured to the sentence-final words in both conditions. The results for the aphasic patients (n = 14) were analyzed according to the severity of their comprehension deficit and compared to a group of 12 neurologically unimpaired age-matched controls, as well as a group of 6 nonaphasic patients with a lesion in the right hemisphere. The nonaphasic brain damaged patients and the aphasic patients with a light comprehension deficit (high comprehenders, n = 7) showed an N400 effect that was comparable to that of the neurologically unimpaired subjects. In the aphasic patients with a moderate to severe comprehension deficit (low comprehenders, n = 7), a reduction and delay of the N400 effect was obtained. In addition, the P300 component was measured in a classical oddball paradigm, in which subjects were asked to count infrequent low tones in a random series of high and low tones. No correlation was found between the occurrence of N400 and P300 effects, indicating that changes in the N400 results were related to the patients' language deficit. Overall, the pattern of results was compatible with the idea that aphasic patients with moderate to severe comprehension problems are impaired in the integration of lexical information into a higher order representation of the preceding sentence context.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1998). Understanding ambiguous words in sentence contexts: Electrophysiological evidence for delayed contextual selection in Broca's aphasia. Neuropsychologia, 36(8), 737-761. doi:10.1016/S0028-3932(97)00174-7.

    Abstract

    This study investigates whether spoken sentence comprehension deficits in Broca's aphasics result from their inability to access the subordinate meaning of ambiguous words (e.g. bank), or alternatively, from a delay in their selection of the contextually appropriate meaning. Twelve Broca's aphasics and twelve elderly controls were presented with lexical ambiguities in three context conditions, each followed by the same target words. In the concordant condition, the sentence context biased the meaning of the sentence final ambiguous word that was related to the target. In the discordant condition, the sentence context biased the meaning of the sentence final ambiguous word that was incompatible with the target. In the unrelated condition, the sentence-final word was unambiguous and unrelated to the target. The task of the subjects was to listen attentively to the stimuli. The activational status of the ambiguous sentence-final words was inferred from the amplitude of the N400 to the targets at two inter-stimulus intervals (ISIs) (100 ms and 1250 ms). At the short ISI, the Broca's aphasics showed clear evidence of activation of the subordinate meaning. In contrast to elderly controls, however, the Broca's aphasics were not successful at selecting the appropriate meaning of the ambiguity in the short ISI version of the experiment. But at the long ISI, in accordance with the performance of the elderly controls, the patients were able to successfully complete the contextual selection process. These results indicate that Broca's aphasics are delayed in the process of contextual selection. It is argued that this finding of delayed selection is compatible with the idea that comprehension deficits in Broca's aphasia result from a delay in the process of integrating lexical information.
  • Swift, M. (1998). [Book review of LOUIS-JACQUES DORAIS, La parole inuit: Langue, culture et société dans l'Arctique nord-américain]. Language in Society, 27, 273-276. doi:10.1017/S0047404598282042.

    Abstract

    This volume on Inuit speech follows the evolution of a native language of the North American Arctic, from its historical roots to its present-day linguistic structure and patterns of use from Alaska to Greenland. Drawing on a wide range of research from the fields of linguistics, anthropology, and sociology, Dorais integrates these diverse perspectives in a comprehensive view of native language development, maintenance, and use under conditions of marginalization due to social transition.
  • Swingley, D., & Fernald, A. (2002). Recognition of words referring to present and absent objects by 24-month-olds. Journal of Memory and Language, 46(1), 39-56. doi:10.1006/jmla.2001.2799.

    Abstract

    Three experiments tested young children's efficiency in recognizing words in speech referring to absent objects. Seventy-two 24-month-olds heard sentences containing target words denoting objects that were or were not present in a visual display. Children's eye movements were monitored as they heard the sentences. Three distinct patterns of response were shown. Children hearing a familiar word that was an appropriate label for the currently fixated picture maintained their gaze. Children hearing a familiar word that could not apply to the currently fixated picture rapidly shifted their gaze to the alternative picture, whether that alternative was the named target or not, and then continued to search for an appropriate referent. Finally, children hearing an unfamiliar word shifted their gaze slowly and irregularly. This set of outcomes is interpreted as evidence that by 24 months, rapid activation in word recognition does not depend on the presence of the words' referents. Rather, very young children are capable of quickly and efficiently interpreting words in the absence of visual supporting context.
  • Swingley, D., & Aslin, R. N. (2002). Lexical neighborhoods and the word-form representations of 14-month-olds. Psychological Science, 13(5), 480-484. doi:10.1111/1467-9280.00485.

    Abstract

    The degree to which infants represent phonetic detail in words has been a source of controversy in phonology and developmental psychology. One prominent hypothesis holds that infants store words in a vague or inaccurate form until the learning of similar-sounding neighbors forces attention to subtle phonetic distinctions. In the experiment reported here, we used a visual fixation task to assess word recognition. We present the first evidence indicating that, in fact, the lexical representations of 14- and 15-month-olds are encoded in fine detail, even when this detail is not functionally necessary for distinguishing similar words in the infant’s vocabulary. Exposure to words is sufficient for well-specified lexical representations, even well before the vocabulary spurt. These results suggest developmental continuity in infants’ representations of speech: As infants begin to build a vocabulary and learn word meanings, they use the perceptual abilities previously demonstrated in tasks testing the discrimination and categorization of meaningless syllables.
  • Swinney, D. A., Zurif, E. B., & Cutler, A. (1980). Effects of sentential stress and word class upon comprehension in Broca’s aphasics. Brain and Language, 10, 132-144. doi:10.1016/0093-934X(80)90044-9.

    Abstract

    The roles which word class (open/closed) and sentential stress play in the sentence comprehension processes of both agrammatic (Broca's) aphasics and normal listeners were examined with a word monitoring task. Overall, normal listeners responded more quickly to stressed than to unstressed items, but showed no effect of word class. Aphasics also responded more quickly to stressed than to unstressed materials, but, unlike the normals, responded faster to open than to closed class words regardless of their stress. The results are interpreted as support for the theory that Broca's aphasics lack the functional underlying open/closed class word distinction used in word recognition by normal listeners.
  • Szilagyi, I. A., Waarsing, J. H., Schiphof, D., Van Meurs, J. B. J., & Bierma-Zeinstra, S. M. A. (2022). Towards sex-specific osteoarthritis risk models: evaluation of risk factors for knee osteoarthritis in males and females. Rheumatology, 61(2), 648-657. doi:10.1093/rheumatology/keab378.

    Abstract

    Objectives

    The aim of this study was to identify sex-specific prevalence and strength of risk factors for the incidence of radiographic knee OA (incRKOA).
    Methods

    Our study population consisted of 10 958 Rotterdam Study participants free of knee OA in one or both knees at baseline. One thousand and sixty-four participants developed RKOA after a median follow-up time of 9.6 years. We estimated the association between each available risk factor and incRKOA using sex-stratified multivariate regression models with generalized estimating equations. Subsequently, we statistically tested sex differences between risk estimates and calculated the population attributable fractions (PAFs) for modifiable risk factors.
    Results

    The prevalence of the investigated risk factors was, in general, higher in women than in men, except that alcohol intake and smoking were more prevalent in men and high BMI was equally prevalent in both sexes. Risk estimates differed significantly between men and women: the risk associated with a high level of physical activity [relative risk (RR) 1.76 (95% CI: 1.29–2.40)] or with a Kellgren and Lawrence score of 1 at baseline [RR 5.48 (95% CI: 4.51–6.65)] was higher in men. Among the borderline significantly different risk estimates was BMI ≥27, which was associated with a higher risk for incRKOA in women [RR 2.00 (95% CI: 1.74–2.31)]. The PAF for higher BMI was 25.6% in women and 19.3% in men.
    Conclusion

    We found sex-specific differences in both presence and relative risk of several risk factors for incRKOA. Especially BMI, a modifiable risk factor, impacts women more strongly than men. These risk factors can be used in the development of personalized prevention strategies and in building sex-specific prediction tools to identify high risk profile patients.

  • Szuba, A., Redl, T., & De Hoop, H. (2022). Are second person masculine generics easier to process for men than for women? Evidence from Polish. Journal of Psycholinguistic Research, 51(4), 819-845. doi:10.1007/s10936-022-09859-7.

    Abstract

    In Polish, it is obligatory to mark feminine or masculine grammatical gender on second-person singular past tense verbs (e.g., Dostałaś list ‘You received-F a letter’). When the addressee’s gender is unknown or unspecified, masculine but never feminine gender marking may be used. The present self-paced reading experiment aims to determine whether this practice creates a processing disadvantage for female addressees in such contexts. We further investigated how men process being addressed with feminine-marked verbs, which constitutes a pragmatic violation. To this end, we presented Polish native speakers with short narratives. Each narrative contained either a second-person singular past tense verb with masculine or feminine gender marking, or a gerund verb with no gender marking as a baseline. We hypothesised that both men and women would read the verbs with gender marking mismatching their own gender more slowly than the gender-unmarked gerund verbs. The results revealed that the gender-mismatching verbs were read just as fast as the gerund verbs, and that the verbs with gender marking matching participant gender were read faster. While the relatively long reading times for the gender-unmarked baseline were unexpected, the pattern of results nevertheless shows that verbs with masculine marking were more difficult to process for women compared to men, and vice versa. In conclusion, even though masculine gender marking in the second person is commonly used with a gender-unspecific intention, it created similar processing difficulties for women as the ones that men experienced when addressed through feminine gender marking. This study is, as far as we are aware, the first to provide evidence for the male bias of second-person masculine generics during language processing.
  • Tagliapietra, L., Fanari, R., De Candia, C., & Tabossi, P. (2009). Phonotactic regularities in the segmentation of spoken Italian. Quarterly Journal of Experimental Psychology, 62(2), 392-415. doi:10.1080/17470210801907379.

    Abstract

    Five word-spotting experiments explored the role of consonantal and vocalic phonotactic cues in the segmentation of spoken Italian. The first set of experiments tested listeners' sensitivity to phonotactic constraints cueing syllable boundaries. Participants were slower in spotting words in nonsense strings when target onsets were misaligned (e.g., lago in ri.blago) than when they were aligned (e.g., lago in rin.lago) with phonotactically determined syllabic boundaries. This effect held also for sequences that occur only word-medially (e.g., /tl/ in ri.tlago), and competition effects could not account for the disadvantage in the misaligned condition. Similarly, target detections were slower when their offsets were misaligned (e.g., città in cittàu.ba) than when they were aligned (e.g., città in città.oba) with a phonotactic syllabic boundary. The second set of experiments tested listeners' sensitivity to phonotactic cues, which specifically signal lexical (and not just syllable) boundaries. Results corroborate the role of syllabic information in speech segmentation and suggest that Italian listeners make little use of additional phonotactic information that specifically cues word boundaries.

  • Tagliapietra, L., Fanari, R., Collina, S., & Tabossi, P. (2009). Syllabic effects in Italian lexical access. Journal of Psycholinguistic Research, 38(6), 511-526. doi:10.1007/s10936-009-9116-4.

    Abstract

    Two cross-modal priming experiments tested whether lexical access is constrained by syllabic structure in Italian. Results extend the available Italian data on the processing of stressed syllables, showing that syllabic information restricts the set of candidates to those structurally consistent with the intended word (Experiment 1). Lexical access, however, takes place as soon as possible and is not delayed until the incoming input corresponds to the first syllable of the word; moreover, the initially activated set includes candidates whose syllabic structure does not match the intended word (Experiment 2). The present data challenge the early hypothesis that in Romance languages syllables are the units for lexical access during spoken word recognition. The implications of the results for our understanding of the role of syllabic information in language processing are discussed.
  • Takashima, A., Petersson, K. M., Rutters, F., Tendolkar, I., Jensen, O., Zwarts, M. J., McNaughton, B. L., & Fernández, G. (2006). Declarative memory consolidation in humans: A prospective functional magnetic resonance imaging study. Proceedings of the National Academy of Sciences of the United States of America [PNAS], 103(3), 756-761.

    Abstract

    Retrieval of recently acquired declarative memories depends on the hippocampus, but with time, retrieval is increasingly sustainable by neocortical representations alone. This process has been conceptualized as system-level consolidation. Using functional magnetic resonance imaging, we assessed over the course of three months how consolidation affects the neural correlates of memory retrieval. The duration of slow-wave sleep during a nap/rest period after the initial study session and before the first scan session on day 1 correlated positively with recognition memory performance for items studied before the nap and negatively with hippocampal activity associated with correct confident recognition. Over the course of the entire study, hippocampal activity for correct confident recognition continued to decrease, whereas activity in a ventral medial prefrontal region increased. These findings, together with data obtained in rodents, may prompt a revision of classical consolidation theory, incorporating a transfer of putative linking nodes from hippocampal to prelimbic prefrontal areas.
  • Ten Bosch, L., Baayen, R. H., & Ernestus, M. (2006). On speech variation and word type differentiation by articulatory feature representations. In Proceedings of Interspeech 2006 (pp. 2230-2233).

    Abstract

    This paper describes ongoing research aiming at the description of variation in speech as represented by asynchronous articulatory features. We will first illustrate how distances in the articulatory feature space can be used for event detection along speech trajectories in this space. The temporal structure imposed by the cosine distance in articulatory feature space coincides to a large extent with the manual segmentation at the phone level. The analysis also indicates that the articulatory feature representation yields better alignments than the MFCC representation does. Secondly, we will present first results indicating that articulatory features can be used to probe for acoustic differences in the onsets of Dutch singulars and plurals.
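
    As a toy illustration of the distance-based event detection described above, the following sketch marks candidate segment boundaries at local peaks of the frame-to-frame cosine distance in an articulatory-feature trajectory. It is not the authors' implementation; the random feature matrix, the feature dimensionality, and the peak threshold are assumptions.

```python
# Minimal sketch: candidate boundaries as peaks in frame-to-frame cosine distance
# over an articulatory-feature trajectory. Data and threshold are hypothetical.
import numpy as np

def cosine_distance(a, b, eps=1e-12):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

def candidate_boundaries(features, threshold=0.2):
    """features: (n_frames, n_features) array; returns indices of candidate events."""
    d = np.array([cosine_distance(features[i], features[i + 1])
                  for i in range(len(features) - 1)])
    return [i + 1 for i in range(1, len(d) - 1)
            if d[i] > threshold and d[i] > d[i - 1] and d[i] >= d[i + 1]]

rng = np.random.default_rng(1)
feats = rng.random((200, 12))            # e.g., 200 frames x 12 articulatory features
print(candidate_boundaries(feats)[:10])  # frame indices of candidate boundaries
```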
  • Ten Oever, S., Carta, S., Kaufeld, G., & Martin, A. E. (2022). Neural tracking of phrases in spoken language comprehension is automatic and task-dependent. eLife, 11: e77468. doi:10.7554/eLife.77468.

    Abstract

    Linguistic phrases are tracked in sentences even though there is no one-to-one acoustic phrase marker in the physical signal. This phenomenon suggests an automatic tracking of abstract linguistic structure that is endogenously generated by the brain. However, all studies investigating linguistic tracking compare conditions where either relevant information at linguistic timescales is available, or where this information is absent altogether (e.g., sentences versus word lists during passive listening). It is therefore unclear whether tracking at phrasal timescales is related to the content of language, or rather, results as a consequence of attending to the timescales that happen to match behaviourally relevant information. To investigate this question, we presented participants with sentences and word lists while recording their brain activity with magnetoencephalography (MEG). Participants performed passive, syllable, word, and word-combination tasks corresponding to attending to four different rates: one they would naturally attend to, syllable-rates, word-rates, and phrasal-rates, respectively. We replicated overall findings of stronger phrasal-rate tracking measured with mutual information for sentences compared to word lists across the classical language network. However, in the inferior frontal gyrus (IFG) we found a task effect suggesting stronger phrasal-rate tracking during the word-combination task independent of the presence of linguistic structure, as well as stronger delta-band connectivity during this task. These results suggest that extracting linguistic information at phrasal rates occurs automatically with or without the presence of an additional task, but also that IFG might be important for temporal integration across various perceptual domains.
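
    For readers unfamiliar with the dependence measure mentioned above, the sketch below computes a simple histogram-based mutual information estimate between two signals, e.g., a simulated neural response and a phrase-rate stimulus rhythm. This is only a schematic stand-in under assumed parameters (bin count, simulated signals); the study's actual MEG pipeline (filtering, phase extraction, statistics across conditions) is considerably more involved.

```python
# Minimal sketch: histogram-based mutual information between two signals.
# Simulated data; not the pipeline used in the study.
import numpy as np

def mutual_information(x, y, bins=8):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
t = np.arange(0, 10, 0.01)
stimulus = np.sin(2 * np.pi * 1.0 * t)                   # 1 Hz "phrasal" rhythm
brain = 0.6 * stimulus + 0.8 * rng.normal(size=t.size)   # noisy tracking response
print(f"MI ~ {mutual_information(brain, stimulus):.3f} bits")
```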
  • Ten Oever, S., Kaushik, K., & Martin, A. E. (2022). Inferring the nature of linguistic computations in the brain. PLoS Computational Biology, 18(7): e1010269. doi:10.1371/journal.pcbi.1010269.

    Abstract

    Sentences contain structure that determines their meaning beyond that of individual words. An influential study by Ding and colleagues (2016) used frequency tagging of phrases and sentences to show that the human brain is sensitive to structure by finding peaks of neural power at the rate at which structures were presented. Since then, there has been a rich debate on how to best explain this pattern of results with profound impact on the language sciences. Models that use hierarchical structure building, as well as models based on associative sequence processing, can predict the neural response, creating an inferential impasse as to which class of models explains the nature of the linguistic computations reflected in the neural readout. In the current manuscript, we discuss pitfalls and common fallacies seen in the conclusions drawn in the literature illustrated by various simulations. We conclude that inferring the neural operations of sentence processing based on these neural data, and any like it, alone, is insufficient. We discuss how to best evaluate models and how to approach the modeling of neural readouts to sentence processing in a manner that remains faithful to cognitive, neural, and linguistic principles.
  • Ten Bosch, L., Hämäläinen, A., Scharenborg, O., & Boves, L. (2006). Acoustic scores and symbolic mismatch penalties in phone lattices. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing [ICASSP 2006]. IEEE.

    Abstract

    This paper builds on previous work that aims at unraveling the structure of the speech signal by means of using probabilistic representations. The context of this work is a multi-pass speech recognition system in which a phone lattice is created and used as a basis for a lexical search in which symbolic mismatches are allowed at certain costs. The focus is on the optimization of the costs of phone insertions, deletions and substitutions that are used in the lexical decoding pass. Two optimization approaches are presented, one related to a multi-pass computational model for human speech recognition, the other based on a decoding in which Bayes’ risks are minimized. In the final section, the advantages of these optimization methods are discussed and compared.
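
    The lexical search described above allows symbolic mismatches at certain costs; the following sketch shows a generic weighted edit distance of the kind such a decoding pass builds on. The phone strings and the cost values are hypothetical placeholders, not the costs optimized in the paper.

```python
# Minimal sketch: weighted edit distance between a recognized phone string and a
# lexical pronunciation, with separate insertion/deletion/substitution costs.
def weighted_edit_distance(recognized, lexical, ins=1.0, dele=1.0, sub=1.5):
    n, m = len(recognized), len(lexical)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * dele
    for j in range(1, m + 1):
        d[0][j] = j * ins
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 0.0 if recognized[i - 1] == lexical[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + dele,       # recognized phone has no lexical counterpart
                          d[i][j - 1] + ins,        # lexical phone missing from the recognized string
                          d[i - 1][j - 1] + match)  # substitution or exact match
    return d[n][m]

# Hypothetical example: one substitution at cost 1.5.
print(weighted_edit_distance(list("kanp"), list("kamp")))
```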
  • Ter Keurs, M., Brown, C. M., & Hagoort, P. (2002). Lexical processing of vocabulary class in patients with Broca's aphasia: An event-related brain potential study on agrammatic comprehension. Neuropsychologia, 40(9), 1547-1561. doi:10.1016/S0028-3932(02)00025-8.

    Abstract

    This paper presents electrophysiological evidence of an impairment in the on-line processing of word class information in patients with Broca’s aphasia with agrammatic comprehension. Event-related brain potentials (ERPs) were recorded from the scalp while Broca patients and non-aphasic control subjects read open- and closed-class words that appeared one at a time on a PC screen. Separate waveforms were computed for open- and closed-class words. The non-aphasic control subjects showed a modulation of an early left anterior negativity in the 210–325 ms epoch as a function of vocabulary class (VC), and a late left anterior negative shift to closed-class words in the 400–700 ms epoch. An N400 effect was present in both control subjects and Broca patients. We have taken the early electrophysiological differences to reflect the first availability of word-category information from the mental lexicon. The late differences can be related to post-lexical processing. In contrast to the control subjects, the Broca patients showed no early VC effect and no late anterior shift to closed-class words. The results support the view that an incomplete and/or delayed availability of word-class information might be an important factor in Broca’s agrammatic comprehension.
  • Ter Avest, I. J., & Mulder, K. (2009). The Acquisition of Gender Agreement in the Determiner Phrase by Bilingual Children. Toegepaste Taalwetenschap in Artikelen, 81(1), 133-142.
  • Ter Bekke, M., Özyürek, A., & Ünal, E. (2022). Speaking but not gesturing predicts event memory: A cross-linguistic comparison. Language and Cognition, 14(3), 362-384. doi:10.1017/langcog.2022.3.

    Abstract

    Every day people see, describe, and remember motion events. However, the relation between multimodal encoding of motion events in speech and gesture, and memory is not yet fully understood. Moreover, whether language typology modulates this relation remains to be tested. This study investigates whether the type of motion event information (path or manner) mentioned in speech and gesture predicts which information is remembered and whether this varies across speakers of typologically different languages. Dutch- and Turkish-speakers watched and described motion events and completed a surprise recognition memory task. For both Dutch- and Turkish-speakers, manner memory was at chance level. Participants who mentioned path in speech during encoding were more accurate at detecting changes to the path in the memory task. The relation between mentioning path in speech and path memory did not vary cross-linguistically. Finally, the co-speech gesture did not predict memory above mentioning path in speech. These findings suggest that how speakers describe a motion event in speech is more important than the typology of the speakers’ native language in predicting motion event memory. The motion event videos are available for download for future research at https://osf.io/p8cas/.

  • Terrill, A. (2002). Systems of nominal classification in East Papuan languages. Oceanic Linguistics, 41(1), 63-88.

    Abstract

    The existence of nominal classification systems has long been thought of as one of the defining features of the Papuan languages of island New Guinea. However, while almost all of these languages do have nominal classification systems, they are, in fact, extremely divergent from each other. This paper examines these systems in the East Papuan languages in order to address the question of the relationship between these Papuan outliers. Nominal classification systems are often archaic, preserving older features lost elsewhere in a language. Also, evidence shows that they are not easily borrowed into languages (although they can be). For these reasons, it is useful to consider nominal classification systems as a tool for exploring ancient historical relationships between languages. This paper finds little evidence of relationship between the nominal classification systems of the East Papuan languages as a whole. It argues that the mere existence of nominal classification systems cannot be used as evidence that the East Papuan languages form a genetic family. The simplest hypothesis is that either the systems were inherited so long ago as to obscure the genetic evidence, or else the appearance of nominal classification systems in these languages arose through borrowing of grammatical systems rather than of morphological forms.
  • Terrill, A., & Dunn, M. (2006). Semantic transference: Two preliminary case studies from the Solomon Islands. In C. Lefebvre, L. White, & C. Jourdan (Eds.), L2 acquisition and Creole genesis: Dialogues (pp. 67-85). Amsterdam: Benjamins.
  • Terrill, A. (2002). Why make books for people who can't read? A perspective on documentation of an endangered language from Solomon Islands. International Journal of the Sociology of Language, 155/156(1), 205-219. doi:10.1515/ijsl.2002.029.

    Abstract

    This paper explores the issue of documenting an endangered language from the perspective of a community with low levels of literacy. I first discuss the background of the language community with whom I work, the Lavukal people of Solomon Islands, and discuss whether, and to what extent, Lavukaleve is an endangered language. I then go on to discuss the documentation project. My main point is that while low literacy levels and a nonreading culture would seem to make documentation a strange choice as a tool for language maintenance, in fact both serve as powerful cultural symbols of the importance and prestige of Lavukaleve. It is well known that a common reason for language death is that speakers choose not to transmit their language to the next generation (e.g. Winter 1993). Lavukaleve is particularly vulnerable in this respect. By utilizing cultural symbols of status and prestige, the standing of Lavukaleve can be enhanced, thus helping to ensure the transmission of Lavukaleve to future generations.
  • Terrill, A. (1998). Biri. München: Lincom Europa.

    Abstract

    This work presents a salvage grammar of the Biri language of Eastern Central Queensland, a Pama-Nyungan language belonging to the large Maric subgroup. As the language is no longer used, the grammatical description is based on old written sources and on recordings made by linguists in the 1960s and 1970s. Biri is in many ways typical of the Pama-Nyungan languages of Southern Queensland. It has split case marking systems, marking nouns according to an ergative/absolutive system and pronouns according to a nominative/accusative system. Unusually for its area, Biri also has bound pronouns on its verb, cross-referencing the person, number and case of core participants. As far as it is possible, the grammatical discussion is ‘theory neutral’. The first four chapters deal with the phonology, morphology, and syntax of the language. The last two chapters contain a substantial discussion of Biri’s place in the Pama-Nyungan family. In chapter 6 the numerous dialects of the Biri language are discussed. In chapter 7 the close linguistic relationship between Biri and the surrounding languages is examined.
  • Terrill, A. (2009). [Review of Felix K. Ameka, Alan Dench, and Nicholas Evans (eds). 2006. Catching language: The standing challenge of grammar writing]. Language Documentation & Conservation, 3(1), 132-137. Retrieved from http://hdl.handle.net/10125/4432.
  • Terrill, A. (2002). [Review of the book The Interface between syntax and discourse in Korafe, a Papuan language of Papua New Guinea by Cynthia J. M. Farr]. Linguistic Typology, 6(1), 110-116. doi:10.1515/lity.2002.004.
  • Terrill, A. (2002). Dharumbal: The language of Rockhampton, Australia. Canberra: Pacific Linguistics.
  • Terrill, A. (2006). Central Solomon languages. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 2) (pp. 279-280). Amsterdam: Elsevier.

    Abstract

    The Papuan languages of the central Solomon Islands are a negatively defined areal grouping: They are those four or possibly five languages in the central Solomon Islands that do not belong to the Austronesian family. Bilua (Vella Lavella), Touo (Rendova), Lavukaleve (Russell Islands), Savosavo (Savo Island) and possibly Kazukuru (New Georgia) have been identified as non-Austronesian since the early 20th century. However, their affiliations both to each other and to other languages still remain a mystery. Heterogeneous and until recently largely undescribed, they present an interesting departure from what is known both of Austronesian languages in the region and of the Papuan languages of the mainland of New Guinea.
  • Terrill, A. (2006). Body part terms in Lavukaleve, a Papuan language of the Solomon Islands. Language Sciences, 28(2-3), 304-322. doi:10.1016/j.langsci.2005.11.008.

    Abstract

    This paper explores body part terms in Lavukaleve, a Papuan isolate spoken in the Solomon Islands. The full set of body part terms collected so far is presented, and their grammatical properties are explained. It is argued that Lavukaleve body part terms do not enter into partonomic relations with each other, and that a hierarchical structure of body part terms does not apply for Lavukaleve. It is shown too that some universal claims which have been made about the expression of terms relating to limbs are contradicted in Lavukaleve, which has only one general term covering arm, hand, leg and (for some people) foot.
  • Tesink, C. M. J. Y., Buitelaar, J. K., Petersson, K. M., Van der Gaag, R. J., Kan, C. C., Tendolkar, I., & Hagoort, P. (2009). Neural correlates of pragmatic language comprehension in autism disorders. Brain, 132, 1941-1952. doi:10.1093/brain/awp103.

    Abstract

    Difficulties with pragmatic aspects of communication are universal across individuals with autism spectrum disorders (ASDs). Here we focused on an aspect of pragmatic language comprehension that is relevant to social interaction in daily life: the integration of speaker characteristics inferred from the voice with the content of a message. Using functional magnetic resonance imaging (fMRI), we examined the neural correlates of the integration of voice-based inferences about the speaker’s age, gender or social background, and sentence content in adults with ASD and matched control participants. Relative to the control group, the ASD group showed increased activation in right inferior frontal gyrus (RIFG; Brodmann area 47) for speaker-incongruent sentences compared to speaker-congruent sentences. Given that both groups performed behaviourally at a similar level on a debriefing interview outside the scanner, the increased activation in RIFG for the ASD group was interpreted as being compensatory in nature. It presumably reflects spill-over processing from the language dominant left hemisphere due to higher task demands faced by the participants with ASD when integrating speaker characteristics and the content of a spoken sentence. Furthermore, only the control group showed decreased activation for speaker-incongruent relative to speaker-congruent sentences in right ventral medial prefrontal cortex (vMPFC; Brodmann area 10), including right anterior cingulate cortex (ACC; Brodmann area 24/32). Since vMPFC is involved in self-referential processing related to judgments and inferences about self and others, the absence of such a modulation in vMPFC activation in the ASD group possibly points to atypical default self-referential mental activity in ASD. Our results show that in ASD compensatory mechanisms are necessary in implicit, low-level inferential processes in spoken language understanding. This indicates that pragmatic language problems in ASD are not restricted to high-level inferential processes, but encompass the most basic aspects of pragmatic language processing.
  • Tesink, C. M. J. Y., Petersson, K. M., Van Berkum, J. J. A., Van den Brink, D., Buitelaar, J. K., & Hagoort, P. (2009). Unification of speaker and meaning in language comprehension: An fMRI study. Journal of Cognitive Neuroscience, 21, 2085-2099. doi:10.1162/jocn.2008.21161.

    Abstract

    When interpreting a message, a listener takes into account several sources of linguistic and extralinguistic information. Here we focused on one particular form of extralinguistic information, certain speaker characteristics as conveyed by the voice. Using functional magnetic resonance imaging, we examined the neural structures involved in the unification of sentence meaning and voice-based inferences about the speaker's age, sex, or social background. We found enhanced activation in the inferior frontal gyrus bilaterally (BA 45/47) during listening to sentences whose meaning was incongruent with inferred speaker characteristics. Furthermore, our results showed an overlap in brain regions involved in unification of speaker-related information and those used for the unification of semantic and world knowledge information [inferior frontal gyrus bilaterally (BA 45/47) and left middle temporal gyrus (BA 21)]. These findings provide evidence for a shared neural unification system for linguistic and extralinguistic sources of information and extend the existing knowledge about the role of inferior frontal cortex as a crucial component for unification during language comprehension.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2002). Going, going, gone: The acquisition of the verb ‘go’. Journal of Child Language, 29(4), 783-811. doi:10.1017/S030500090200538X.

    Abstract

    This study investigated different accounts of early argument structure acquisition and verb paradigm building through the detailed examination of the acquisition of the verb Go. Data from 11 children followed longitudinally between the ages of 2;0 and 3;0 were examined. Children's uses of the different forms of Go were compared with respect to syntactic structure and the semantics encoded. The data are compatible with the suggestion that the children were not operating with a single verb representation that differentiated between different forms of Go but rather that their knowledge of the relationship between the different forms of Go varied depending on the structure produced and the meaning encoded. However, a good predictor of the children's use of different forms of Go in particular structures and to express particular meanings was the frequency of use of those structures and meanings with particular forms of Go in the input. The implications of these findings for theories of syntactic category formation and abstract rule-based descriptions of grammar are discussed.
  • Theakston, A., & Rowland, C. F. (2009). Introduction to Special Issue: Cognitive approaches to language acquisition. Cognitive Linguistics, 20(3), 477-480. doi:10.1515/COGL.2009.021.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2006). Note of clarification on the coding of light verbs in ‘Semantic generality, input frequency and the acquisition of syntax’ (Journal of Child Language 31, 61–99). Journal of Child Language, 33(1), 191-197. doi:10.1017/S0305000905007178.

    Abstract

    In our recent paper, ‘Semantic generality, input frequency and the acquisition of syntax’ (Journal of Child Language 31, 61–99), we presented data from two-year-old children to examine the question of whether the semantic generality of verbs contributed to their ease and stage of acquisition over and above the effects of their typically high frequency in the language to which children are exposed. We adopted two different categorization schemes to determine whether individual verbs should be considered to be semantically general, or ‘light’, or whether they encoded more specific semantics. These categorization schemes were based on previous work in the literature on the role of semantically general verbs in early verb acquisition, and were designed, in the first case, to be a conservative estimate of semantic generality, including only verbs designated as semantically general by a number of other researchers (e.g. Clark, 1978; Pinker, 1989; Goldberg, 1998), and, in the second case, to be a more inclusive estimate of semantic generality based on Ninio's (1999a,b) suggestion that grammaticalizing verbs encode the semantics associated with semantically general verbs. Under this categorization scheme, a much larger number of verbs were included as semantically general verbs.
  • Theakston, A. L., & Rowland, C. F. (2009). The acquisition of auxiliary syntax: A longitudinal elicitation study. Part 1: Auxiliary BE. Journal of Speech, Language, and Hearing Research, 52, 1449-1470. doi:10.1044/1092-4388(2009/08-0037).

    Abstract

    Purpose: The question of how and when English-speaking children acquire auxiliaries is the subject of extensive debate. Some researchers posit the existence of innately given Universal Grammar principles to guide acquisition, although some aspects of the auxiliary system must be learned from the input. Others suggest that auxiliaries can be learned without Universal Grammar, citing evidence of piecemeal learning in their support. This study represents a unique attempt to trace the development of auxiliary syntax by using a longitudinal elicitation methodology. Method: Twelve English-speaking children participated in 3 tasks designed to elicit auxiliary BE in declaratives and yes/no and wh-questions. They completed each task 6 times in total between the ages of 2;10 (years;months) and 3;6. Results: The children’s levels of correct use of 2 forms of BE (is, are) differed according to auxiliary form and sentence structure, and these relations changed over development. An analysis of the children’s errors also revealed complex interactions between these factors. Conclusion: These data are problematic for existing accounts of auxiliary acquisition and highlight the need for researchers working within both generativist and constructivist frameworks to develop more detailed theories of acquisition that directly predict the pattern of acquisition observed.
  • Thiebaut de Schotten, M., & Forkel, S. J. (2022). The emergent properties of the connected brain. Science, 378(6619), 505-510. doi:10.1126/science.abq2591.

    Abstract

    There is more to brain connections than the mere transfer of signals between brain regions. Behavior and cognition emerge through cortical area interaction. This requires integration between local and distant areas orchestrated by densely connected networks. Brain connections determine the brain’s functional organization. The imaging of connections in the living brain has provided an opportunity to identify the driving factors behind the neurobiology of cognition. Connectivity differences between species and among humans have furthered the understanding of brain evolution and of diverging cognitive profiles. Brain pathologies amplify this variability through disconnections and, consequently, the disintegration of cognitive functions. The prediction of long-term symptoms is now preferentially based on brain disconnections. This paradigm shift will reshape our brain maps and challenge current brain models.
  • Tielbeek, J. J., Uffelmann, E., Williams, B. S., Colodro-Conde, L., Gagnon, É., Mallard, T. T., Levitt, B., Jansen, P. R., Johansson, A., Sallis, H. M., Pistis, G., Saunders, G. R. B., Allegrini, A. G., Rimfeld, K., Konte, B., Klein, M., Hartmann, A. M., Salvatore, J. E., Nolte, I. M., Demontis, D., Malmberg, A., Burt, S. A., Savage, J., Sugden, K., Poulton, R., Harris, K. M., Vrieze, S., McGue, M., Iacono, W. G., Mota, N. R., Mill, J., Viana, J. F., Mitchell, B. L., Morosoli, J. J., Andlauer, T., Ouellet-Morin, I., Tremblay, R. E., Côté, S., Gouin, J.-P., Brendgen, M., Dionne, G., Vitaro, F., Lupton, M. K., Martin, N. G., COGA Consortium, Spit for Science Working Group, Castelao, E., Räikkönen, K., Eriksson, J., Lahti, J., Hartman, C. A., Oldehinkel, A. J., Snieder, H., Liu, H., Preisig, M., Whipp, A., Vuoksimaa, E., Lu, Y., Jern, P., Rujescu, D., Giegling, I., Palviainen, T., Kaprio, J., Harden, K. P., Munafò, M. R., Morneau-Vaillancourt, G., Plomin, R., Viding, E., Boutwell, B. B., Aliev, F., Dick, D., Popma, A., Faraone, S. V., Børglum, A. D., Medland, S. E., Franke, B., Boivin, M., Pingault, J.-B., Glennon, J. C., Barnes, J. C., Fisher, S. E., Moffitt, T. E., Caspi, A., Polderman, T. J., & Posthuma, D. (2022). Uncovering the genetic architecture of broad antisocial behavior through a genome-wide association study meta-analysis. Molecular Psychiatry, 27(11), 4453-4463. doi:10.1038/s41380-022-01793-3.

    Abstract

    Despite the substantial heritability of antisocial behavior (ASB), specific genetic variants robustly associated with the trait have not been identified. The present study by the Broad Antisocial Behavior Consortium (BroadABC) meta-analyzed data from 28 discovery samples (N = 85,359) and five independent replication samples (N = 8058) with genotypic data and broad measures of ASB. We identified the first significant genetic associations with broad ASB, involving common intronic variants in the forkhead box protein P2 (FOXP2) gene (lead SNP rs12536335, p = 6.32 × 10⁻¹⁰). Furthermore, we observed intronic variation in Foxp2 and one of its targets (Cntnap2) distinguishing a mouse model of pathological aggression (BALB/cJ strain) from controls (BALB/cByJ strain). Polygenic risk score (PRS) analyses in independent samples revealed that the genetic risk for ASB was associated with several antisocial outcomes across the lifespan, including diagnosis of conduct disorder, official criminal convictions, and trajectories of antisocial development. We found substantial genetic correlations of ASB with mental health (depression rg = 0.63, insomnia rg = 0.47), physical health (overweight rg = 0.19, waist-to-hip ratio rg = 0.32), smoking (rg = 0.54), cognitive ability (intelligence rg = −0.40), educational attainment (years of schooling rg = −0.46) and reproductive traits (age at first birth rg = −0.58, father’s age at death rg = −0.54). Our findings provide a starting point toward identifying critical biosocial risk mechanisms for the development of ASB.
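
    As a schematic illustration of the polygenic risk score analyses mentioned above, the sketch below computes a basic PRS as the sum of risk-allele dosages weighted by per-variant effect sizes. The effect sizes and genotypes are simulated; the actual analyses involve quality control, clumping/thresholding of GWAS summary statistics, and regression against the antisocial outcomes.

```python
# Minimal sketch: a polygenic risk score as a weighted sum of allele dosages.
# All values are simulated for illustration.
import numpy as np

rng = np.random.default_rng(3)
n_individuals, n_snps = 1000, 5000
effect_sizes = rng.normal(scale=0.01, size=n_snps)            # per-SNP effect estimates (betas)
dosages = rng.binomial(2, 0.3, size=(n_individuals, n_snps))  # 0/1/2 copies of the effect allele

prs = dosages @ effect_sizes             # one score per individual
prs_z = (prs - prs.mean()) / prs.std()   # standardized for downstream association tests
print(prs_z[:5])
```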
  • Timpson, N. J., Tobias, J. H., Richards, J. B., Soranzo, N., Duncan, E. L., Sims, A.-M., Whittaker, P., Kumanduri, V., Zhai, G., Glaser, B., Eisman, J., Jones, G., Nicholson, G., Prince, R., Seeman, E., Spector, T. D., Brown, M. A., Peltonen, L., Smith, G. D., Deloukas, P., & Evans, D. M. (2009). Common variants in the region around Osterix are associated with bone mineral density and growth in childhood. Human Molecular Genetics, 18(8), 1510-1517. doi:10.1093/hmg/ddp052.

    Abstract

    Peak bone mass achieved in adolescence is a determinant of bone mass in later life. In order to identify genetic variants affecting bone mineral density (BMD), we performed a genome-wide association study of BMD and related traits in 1518 children from the Avon Longitudinal Study of Parents and Children (ALSPAC). We compared results with a scan of 134 adults with high or low hip BMD. We identified associations with BMD in an area of chromosome 12 containing the Osterix (SP7) locus, a transcription factor responsible for regulating osteoblast differentiation (ALSPAC: P = 5.8 × 10⁻⁴; Australia: P = 3.7 × 10⁻⁴). This region has previously shown evidence of association with adult hip and lumbar spine BMD in an Icelandic population, as well as nominal association in a UK population. A meta-analysis of these existing studies revealed strong association between SNPs in the Osterix region and adult lumbar spine BMD (P = 9.9 × 10⁻¹¹). In light of these findings, we genotyped a further 3692 individuals from ALSPAC who had whole body BMD and confirmed the association in children as well (P = 5.4 × 10⁻⁵). Moreover, all SNPs were related to height in ALSPAC children, but not weight or body mass index, and when height was included as a covariate in the regression equation, the association with total body BMD was attenuated. We conclude that genetic variants in the region of Osterix are associated with BMD in children and adults probably through primary effects on growth.
  • Torreira, F., & Ernestus, M. (2009). Probabilistic effects on French [t] duration. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 448-451). Causal Productions Pty Ltd.

    Abstract

    The present study shows that [t] consonants are affected by probabilistic factors in a syllable-timed language such as French, and in spontaneous as well as in journalistic speech. Study 1 showed a word bigram frequency effect in spontaneous French, but its exact nature depended on the corpus on which the probabilistic measures were based. Study 2 investigated journalistic speech and showed an effect of the joint frequency of the test word and its following word. We discuss the possibility that these probabilistic effects are due to the speaker’s planning of upcoming words, and to the speaker’s adaptation to the listener’s needs.
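
    The word bigram frequency measures referred to above are simple corpus counts; a toy sketch of how such a predictor can be derived is given below. The miniature corpus is invented; the study drew its measures from large French speech corpora.

```python
# Minimal sketch: relative frequency of word bigrams in a (toy) corpus.
from collections import Counter

corpus = "tout à fait d'accord tout à fait possible c'est tout à fait ça".split()
bigram_counts = Counter(zip(corpus, corpus[1:]))
total = sum(bigram_counts.values())

print(bigram_counts[("tout", "à")] / total)   # relative frequency of "tout à"
```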
  • Trabasso, T., & Ozyurek, A. (1997). Communicating evaluation in narrative understanding. In T. Givon (Ed.), Conversation: Cognitive, communicative and social perspectives (pp. 268-302). Philadelphia, PA: Benjamins.
  • Trilsbeek, P., & Van Uytvanck, D. (2009). Regional archives and community portals. IASA Journal, 32, 69-73.
  • Troncarelli, M. C., & Drude, S. (2002). Awytyza Ti'ingku. Livro para alfabetização na língua aweti: Awytyza Ti’ingku. Alphabetisierungs‐Fibel der Awetí‐Sprache ("Awytyza Ti'ingku: Literacy primer for the Aweti language"). São Paulo: Instituto Sócio-Ambiental.
