Publications

  • Seuren, P. A. M. (1984). The bioprogram hypothesis: Facts and fancy. A commentary on Bickerton "The language bioprogram hypothesis". Behavioral and Brain Sciences, 7(2), 208-209. doi:10.1017/S0140525X00044356.
  • Seuren, P. A. M. (1984). The comparative revisited. Journal of Semantics, 3(1), 109-141. doi:10.1093/jos/3.1-2.109.
  • Seuren, P. A. M. (1993). The question of predicate clefting in the Indian Ocean Creoles. In F. Byrne, & D. Winford (Eds.), Focus and grammatical relations in Creole languages (pp. 53-64). Amsterdam: Benjamins.
  • Seuren, P. A. M. (1993). Why does 2 mean "2"? Grist to the anti-Grice mill. In E. Hajičová (Ed.), Proceedings of the Conference on Functional Description of Language (pp. 225-235). Prague: Faculty of Mathematics and Physics, Charles University.
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.

    Abstract

    Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers.
  • Seyfeddinipur, M. (2006). Disfluency: Interrupting speech and gesture. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.59337.
  • Shan, W., Zhang, Y., Zhao, J., Wu, S., Zhao, L., Ip, P., Tucker, J. D., & Jiang, F. (2024). Positive parent–child interactions moderate certain maltreatment effects on psychosocial well-being in 6-year-old children. Pediatric Research, 95, 802-808. doi:10.1038/s41390-023-02842-5.

    Abstract

    Background: Positive parental interactions may buffer maltreated children from poor psychosocial outcomes. The study aims to evaluate the associations between various types of maltreatment and psychosocial outcomes in early childhood, and examine the moderating effect of positive parent-child interactions on them.

    Methods: Data were from a representative sample of Chinese 6-year-old children (n = 17,088). Caregivers reported the history of child maltreatment perpetrated by any individuals, completed the Strengths and Difficulties Questionnaire as a proxy for psychosocial well-being, and reported the frequency of their interactions with their children using the Chinese Parent-Child Interaction Scale.

    Results: Physical abuse, emotional abuse, neglect, and sexual abuse were all associated with higher odds of psychosocial problems (aOR = 1.90 [95% CI: 1.57-2.29], aOR = 1.92 [95% CI: 1.75-2.10], aOR = 1.64 [95% CI: 1.17-2.30], aOR = 2.03 [95% CI: 1.30-3.17]). Positive parent-child interactions were associated with lower odds of psychosocial problems after accounting for different types of maltreatment. The moderating effect of frequent parent-child interactions was found only in the association between occasional only physical abuse and psychosocial outcomes (interaction term: aOR = 0.34, 95% CI: 0.15-0.77).

    Conclusions: Maltreatment and positive parent-child interactions have impacts on psychosocial well-being in early childhood. Positive parent-child interactions could only buffer the adverse effect of occasional physical abuse on psychosocial outcomes. More frequent parent-child interactions may be an important intervention opportunity among some children.

    Impact: It provides the first data on the prevalence of different single types and combinations of maltreatment in early childhood in Shanghai, China by drawing on a city-level population-representative sample. It adds to evidence that different forms and degrees of maltreatment were all associated with a higher risk of psychosocial problems in early childhood. Among them, sexual abuse posed the highest risk, followed by emotional abuse. It innovatively found that higher frequencies of parent-child interactions may provide buffering effects only to children who are exposed to occasional physical abuse. It provides a potential intervention opportunity, especially for physically abused children.
  • Shatzman, K. B. (2006). Sensitivity to detailed acoustic information in word recognition. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.59331.
  • Shatzman, K. B., & McQueen, J. M. (2006). Segment duration as a cue to word boundaries in spoken-word recognition. Perception & Psychophysics, 68(1), 1-16.

    Abstract

    In two eye-tracking experiments, we examined the degree to which listeners use acoustic cues to word boundaries. Dutch participants listened to ambiguous sentences in which stop-initial words (e.g., pot, jar) were preceded by eens (once); the sentences could thus also refer to cluster-initial words (e.g., een spot, a spotlight). The participants made fewer fixations to target pictures (e.g., a jar) when the target and the preceding [s] were replaced by a recording of the cluster-initial word than when they were spliced from another token of the target-bearing sentence (Experiment 1). Although acoustic analyses revealed several differences between the two recordings, only [s] duration correlated with the participants’ fixations (more target fixations for shorter [s]s). Thus, we found that listeners apparently do not use all available acoustic differences equally. In Experiment 2, the participants made more fixations to target pictures when the [s] was shortened than when it was lengthened. Utterance interpretation can therefore be influenced by individual segment duration alone.
  • Shatzman, K. B., & McQueen, J. M. (2006). Prosodic knowledge affects the recognition of newly acquired words. Psychological Science, 17(5), 372-377. doi:10.1111/j.1467-9280.2006.01714.x.

    Abstract

    An eye-tracking study examined the involvement of prosodic knowledge—specifically, the knowledge that monosyllabic words tend to have longer durations than the first syllables of polysyllabic words—in the recognition of newly learned words. Participants learned new spoken words (by associating them to novel shapes): bisyllables and onset-embedded monosyllabic competitors (e.g., baptoe and bap). In the learning phase, the duration of the ambiguous sequence (e.g., bap) was held constant. In the test phase, its duration was longer than, shorter than, or equal to its learning-phase duration. Listeners’ fixations indicated that short syllables tended to be interpreted as the first syllables of the bisyllables, whereas long syllables generated more monosyllabic-word interpretations. Recognition of newly acquired words is influenced by prior prosodic knowledge and is therefore not determined solely on the basis of stored episodes of those words.
  • Shatzman, K. B., & McQueen, J. M. (2006). The modulation of lexical competition by segment duration. Psychonomic Bulletin & Review, 13(6), 966-971.

    Abstract

    In an eye-tracking study, we examined how fine-grained phonetic detail, such as segment duration, influences the lexical competition process during spoken word recognition. Dutch listeners’ eye movements to pictures of four objects were monitored as they heard sentences in which a stop-initial target word (e.g., pijp “pipe”) was preceded by an [s]. The participants made more fixations to pictures of cluster-initial words (e.g., spijker “nail”) when they heard a long [s] (mean duration, 103 msec) than when they heard a short [s] (mean duration, 73 msec). Conversely, the participants made more fixations to pictures of the stop-initial words when they heard a short [s] than when they heard a long [s]. Lexical competition between stop- and cluster-initial words, therefore, is modulated by segment duration differences of only 30 msec.
  • Shi, R., Werker, J. F., & Cutler, A. (2006). Recognition and representation of function words in English-learning infants. Infancy, 10(2), 187-198. doi:10.1207/s15327078in1002_5.

    Abstract

    We examined infants' recognition of functors and the accuracy of the representations that infants construct of the perceived word forms. Auditory stimuli were “Functor + Content Word” versus “Nonsense Functor + Content Word” sequences. Eight-, 11-, and 13-month-old infants heard both real functors and matched nonsense functors (prosodically analogous to their real counterparts but containing a segmental change). Results reveal that 13-month-olds recognized functors with attention to segmental detail. Eight-month-olds did not distinguish real versus nonsense functors. The performance of 11-month-olds fell in between that of the older and younger groups, consistent with an emerging recognition of real functors. The three age groups exhibited a clear developmental trend. We propose that in the earliest stages of vocabulary acquisition, function elements receive no segmentally detailed representations, but such representations are gradually constructed so that once vocabulary growth starts in earnest, fully specified functor representations are in place to support it.
  • Shi, R., Werker, J., & Cutler, A. (2003). Function words in early speech perception. In Proceedings of the 15th International Congress of Phonetic Sciences (pp. 3009-3012).

    Abstract

    Three experiments examined whether infants recognise functors in phrases, and whether their representations of functors are phonetically well specified. Eight- and 13-month-old English infants heard monosyllabic lexical words preceded by real functors (e.g., the, his) versus nonsense functors (e.g., kuh); the latter were minimally modified segmentally (but not prosodically) from real functors. Lexical words were constant across conditions; thus recognition of functors would appear as longer listening time to sequences with real functors. Eight-month-olds' listening times to sequences with real versus nonsense functors did not significantly differ, suggesting that they did not recognise real functors, or that their functor representations lacked phonetic specification. However, 13-month-olds listened significantly longer to sequences with real functors. Thus, somewhere between 8 and 13 months of age infants learn familiar functors and represent them with segmental detail. We propose that the accumulated frequency of functors in the input passes a critical threshold during this time.
  • Shi, R., Cutler, A., Werker, J., & Cruickshank, M. (2006). Frequency and form as determinants of functor sensitivity in English-acquiring infants. Journal of the Acoustical Society of America, 119(6), EL61-EL67. doi:10.1121/1.2198947.

    Abstract

    High-frequency functors are arguably among the earliest perceived word forms and may assist extraction of initial vocabulary items. Canadian 11- and 8-month-olds were familiarized to pseudo-nouns following either a high-frequency functor (the) or a low-frequency functor (her), versus phonetically similar mispronunciations of each (kuh and ler), and were then tested for recognition of the pseudo-nouns. A preceding the (but not kuh, her, or ler) facilitated extraction of the pseudo-nouns for 11-month-olds; the is thus well-specified in form for these infants. However, both the and kuh (but not her or ler) facilitated segmentation for 8-month-olds, suggesting an initial underspecified representation of high-frequency functors.
  • Silverstein, P., Bergmann, C., & Syed, M. (Eds.). (2024). Open science and metascience in developmental psychology [Special Issue]. Infant and Child Development, 33(1).
  • Silverstein, P., Bergmann, C., & Syed, M. (2024). Open science and metascience in developmental psychology: Introduction to the special issue. Infant and Child Development, 33(1): e2495. doi:10.1002/icd.2495.
  • Skiba, R. (2006). Computeranalyse/Computer Analysis. In U. Ammon, N. Dittmar, K. Mattheier, & P. Trudgill (Eds.), Sociolinguistics: An international handbook of the science of language and society [2nd completely revised and extended edition] (pp. 1187-1197). Berlin, New York: de Gruyter.
  • Skiba, R. (2003). Computer Analysis: Corpus based language research. In U. Ammon, N. Dittmar, K. Mattheier, & P. Trudgill (Eds.), Handbook "Sociolinguistics" (2nd ed., pp. 1250-1260). Berlin: de Gruyter.
  • Skiba, R. (1993). Funktionale Analyse des Spracherwerbs einer polnischen Deutschlernerin. In A. Katny (Ed.), Beiträge zur Sprachwissenschaft, Psycho- und Soziolinguistik: Probleme des Deutschen als Mutter-, Fremd- und Zweitsprache (pp. 201-225). Rzeszów: WSP.
  • Skiba, R. (1993). Modal verbs and their syntactical characteristics in elementary learner varieties. In N. Dittmar, & A. Reich (Eds.), Modality in language acquisition (pp. 247-260). Berlin: Walter de Gruyter.
  • Slaats, S. (2024). On the interplay between lexical probability and syntactic structure in language comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Slobin, D. I. (2002). Cognitive and communicative consequences of linguistic diversity. In S. Strömqvist (Ed.), The diversity of languages and language learning (pp. 7-23). Lund, Sweden: Lund University, Centre for Languages and Literature.
  • Slonimska, A. (2024). The role of iconicity and simultaneity in efficient communication in the visual modality: Evidence from LIS (Italian Sign Language) [Dissertation Abstract]. Sign Language & Linguistics, 27(1), 116-124. doi:10.1075/sll.00084.slo.
  • Smalley, S. L., Kustanovich, V., Minassian, S. L., Stone, J. L., Ogdie, M. N., McGough, J. J., McCracken, J. T., MacPhie, I. L., Francks, C., Fisher, S. E., Cantor, R. M., Monaco, A. P., & Nelson, S. F. (2002). Genetic linkage of Attention-Deficit/Hyperactivity Disorder on chromosome 16p13, in a region implicated in autism. American Journal of Human Genetics, 71(4), 959-963. doi:10.1086/342732.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD) is the most commonly diagnosed behavioral disorder in childhood and likely represents an extreme of normal behavior. ADHD significantly impacts learning in school-age children and leads to impaired functioning throughout the life span. There is strong evidence for a genetic etiology of the disorder, although putative alleles, principally in dopamine-related pathways suggested by candidate-gene studies, have very small effect sizes. We use affected-sib-pair analysis in 203 families to localize the first major susceptibility locus for ADHD to a 12-cM region on chromosome 16p13 (maximum LOD score 4.2; P=.000005), building upon an earlier genomewide scan of this disorder. The region overlaps that highlighted in three genome scans for autism, a disorder in which inattention and hyperactivity are common, and physically maps to a 7-Mb region on 16p13. These findings suggest that variations in a gene on 16p13 may contribute to common deficits found in both ADHD and autism.
  • Smits, R., Warner, N., McQueen, J. M., & Cutler, A. (2003). Unfolding of phonetic information over time: A database of Dutch diphone perception. Journal of the Acoustical Society of America, 113(1), 563-574. doi:10.1121/1.1525287.

    Abstract

    We present the results of a large-scale study on speech perception, assessing the number and type of perceptual hypotheses which listeners entertain about possible phoneme sequences in their language. Dutch listeners were asked to identify gated fragments of all 1179 diphones of Dutch, providing a total of 488,520 phoneme categorizations. The results manifest orderly uptake of acoustic information in the signal. Differences across phonemes in the rate at which fully correct recognition was achieved arose as a result of whether or not potential confusions could occur with other phonemes of the language (long with short vowels, affricates with their initial components, etc.). These data can be used to improve models of how acoustic-phonetic information is mapped onto the mental lexicon during speech comprehension.
  • Smits, R., Sereno, J., & Jongman, A. (2006). Categorization of sounds. Journal of Experimental Psychology: Human Perception and Performance, 32(3), 733-754. doi:10.1037/0096-1523.32.3.733.

    Abstract

    The authors conducted 4 experiments to test the decision-bound, prototype, and distribution theories for the categorization of sounds. They used as stimuli sounds varying in either resonance frequency or duration. They created different experimental conditions by varying the variance and overlap of 2 stimulus distributions used in a training phase and varying the size of the stimulus continuum used in the subsequent test phase. When resonance frequency was the stimulus dimension, the pattern of categorization-function slopes was in accordance with the decision-bound theory. When duration was the stimulus dimension, however, the slope pattern gave partial support for the decision-bound and distribution theories. The authors introduce a new categorization model combining aspects of decision-bound and distribution theories that gives a superior account of the slope patterns across the 2 stimulus dimensions.
  • Smits, R. (2000). Temporal distribution of information for human consonant recognition in VCV utterances. Journal of Phonetics, 28, 111-135. doi:10.1006/jpho.2000.0107.

    Abstract

    The temporal distribution of perceptually relevant information for consonant recognition in British English VCVs is investigated. The information distribution in the vicinity of consonantal closure and release was measured by presenting initial and final portions, respectively, of naturally produced VCV utterances to listeners for categorization. A multidimensional scaling analysis of the results provided highly interpretable, four-dimensional geometrical representations of the confusion patterns in the categorization data. In addition, transmitted information as a function of truncation point was calculated for the features manner, place, and voicing. The effects of speaker, vowel context, stress, and distinctive feature on the resulting information distributions were tested statistically. It was found that, although all factors are significant, the location and spread of the distributions depend principally on the distinctive feature; that is, the temporal distribution of perceptually relevant information is very different for the features manner, place, and voicing.
  • Soheili-Nezhad, S., Ibáñez-Solé, O., Izeta, A., Hoeijmakers, J. H. J., & Stoeger, T. (2024). Time is ticking faster for long genes in aging. Trends in Genetics, 40(4), 299-312. doi:10.1016/j.tig.2024.01.009.

    Abstract

    Recent studies of aging organisms have identified a systematic phenomenon, characterized by a negative correlation between gene length and their expression in various cell types, species, and diseases. We term this phenomenon gene-length-dependent transcription decline (GLTD) and suggest that it may represent a bottleneck in the transcription machinery and thereby significantly contribute to aging as an etiological factor. We review potential links between GLTD and key aging processes such as DNA damage and explore their potential in identifying disease modification targets. Notably, in Alzheimer’s disease, GLTD spotlights extremely long synaptic genes at chromosomal fragile sites (CFSs) and their vulnerability to postmitotic DNA damage. We suggest that GLTD is an integral element of biological aging.
  • Sommers, R. P. (2024). Neurobiology of reference. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Spinelli, E., Cutler, A., & McQueen, J. M. (2002). Resolution of liaison for lexical access in French. Revue Française de Linguistique Appliquée, 7, 83-96.

    Abstract

    Spoken word recognition involves automatic activation of lexical candidates compatible with the perceived input. In running speech, words abut one another without intervening gaps, and syllable boundaries can mismatch with word boundaries. For instance, liaison in ’petit agneau’ creates a syllable beginning with a consonant although ’agneau’ begins with a vowel. In two cross-modal priming experiments we investigate how French listeners recognise words in liaison environments. These results suggest that the resolution of liaison in part depends on acoustic cues which distinguish liaison from non-liaison consonants, and in part on the availability of lexical support for a liaison interpretation.
  • Spinelli, E., McQueen, J. M., & Cutler, A. (2003). Processing resyllabified words in French. Journal of Memory and Language, 48(2), 233-254. doi:10.1016/S0749-596X(02)00513-2.
  • Sprenger, S. A. (2003). Fixed expressions and the production of idioms. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.57562.
  • Sprenger, S. A., Levelt, W. J. M., & Kempen, G. (2006). Lexical access during the production of idiomatic phrases. Journal of Memory and Language, 54(2), 161-184. doi:10.1016/j.jml.2005.11.001.

    Abstract

    In three experiments we test the assumption that idioms have their own lexical entry, which is linked to its constituent lemmas (Cutting & Bock, 1997). Speakers produced idioms or literal phrases (Experiment 1), completed idioms (Experiment 2), or switched between idiom completion and naming (Experiment 3). The results of Experiment 1 show that identity priming speeds up idiom production more effectively than literal phrase production, indicating a hybrid representation of idioms. In Experiment 2, we find effects of both phonological and semantic priming. Thus, elements of an idiom can not only be primed via their wordform, but also via the conceptual level. The results of Experiment 3 show that preparing the last word of an idiom primes naming of both phonologically and semantically related targets, indicating that literal word meanings become active during idiom production. The results are discussed within the framework of the hybrid model of idiom representation.
  • Stärk, K. (2024). The company language keeps: How distributional cues influence statistical learning for language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Stehouwer, H. (2006). Cue phrase selection methods for textual classification problems. Master Thesis, Twente University, Enschede.

    Abstract

    The classification of texts and pieces of texts uses the occurrence of words, and combinations of words, as an important indicator. Not every word or combination of words gives a clear indication of the classification of a piece of text. Research has been done on methods that select words or combinations of words that are more indicative of the type of a piece of text. These words or combinations of words are selected from the words and word groups as they occur in the texts, and we call the more indicative ones 'cue phrases'. The goal of these methods is to select the most indicative cue phrases first. The collection of selected words and/or combinations thereof can then be used for training the classification system. To test these selection methods, a number of experiments were done on a corpus containing cookbook recipes and on a corpus of four-participant meetings; a computer program was written to perform them. On the recipe corpus we looked at classifying the sentences into different types, such as 'requirement' and 'instruction'. On the four-person meeting corpus we tried to learn, using only lexical features, whether a sentence is addressed to an individual or a group. The experiments on the recipe corpus produced good results, showing that a number of the cue-phrase selection methods used are suitable for feature selection. The experiments on the four-person meeting corpus were less successful in terms of performance on the classification task. We did see comparable patterns in the selection methods, and considering the results of Jovanovic we can conclude that different features are needed for this particular classification task. One of the original goals was to look at 'addressee' in discussions: are sentences more often addressed to individuals inside discussions than outside discussions? However, in order to accomplish this, we must first identify the segments of the text that are discussions. It proved hard to come to a reliable specification of discussions, and our initial definition wasn't sufficient.
  • Stivers, T. (2006). Treatment decisions: negotiations between doctors and parents in acute care encounters. In J. Heritage, & D. W. Maynard (Eds.), Communication in medical care: Interaction between primary care physicians and patients (pp. 279-312). Cambridge: Cambridge University Press.
  • Stivers, T. (2002). 'Symptoms only' and 'Candidate diagnoses': Presenting the problem in pediatric encounters. Health Communication, 14(3), 299-338.
  • Stivers, T., & Robinson, J. D. (2006). A preference for progressivity in interaction. Language in Society, 35(3), 367-392. doi:10.1017/S0047404506060179.

    Abstract

    This article investigates two types of preference organization in interaction: in response to a question that selects a next speaker in multi-party interaction, the preference for answers over non-answer responses as a category of response; and the preference for selected next speakers to respond. It is asserted that the turn allocation rule specified by Sacks, Schegloff & Jefferson (1974), which states that a response by the selected next speaker is relevant at the transition-relevance place, is affected by these two preferences once beyond a normal transition space. It is argued that a “second-order” organization is present such that interactants prioritize a preference for answers over a preference for a response by the selected next speaker. This analysis reveals an observable preference for progressivity in interaction.
  • Stivers, T. (2002). Overt parent pressure for antibiotic medication in pediatric encounters. Social Science and Medicine, 54(7), 1111-1130.
  • Stivers, T., Mangione-Smith, R., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Why do physicians think parents expect antibiotics? What parents report vs what physicians believe. Journal of Family Practice, 52(2), 140-147.
  • Stivers, T., Chalfoun, A., & Rossi, G. (2024). To err is human but to persist is diabolical: Toward a theory of interactional policing. Frontiers in Sociology: Sociological Theory, 9: 1369776. doi:10.3389/fsoc.2024.1369776.

    Abstract

    Social interaction is organized around norms and preferences that guide our construction of actions and our interpretation of those of others, creating a reflexive moral order. Sociological theory suggests two possibilities for the type of moral order that underlies the policing of interactional norm and preference violations: a morality that focuses on the nature of violations themselves and a morality that focuses on the positioning of actors as they keep their conduct comprehensible, even when they depart from norms and preferences. We find that actors are more likely to reproach interactional violations for which an account is not provided by the transgressor, and that actors weakly reproach or let pass first offenses while more strongly policing violators who persist in bad behavior. Based on these findings, we outline a theory of interactional policing that rests not on the nature of the violation but rather on actors' moral positioning.
  • Swaab, T., Brown, C. M., & Hagoort, P. (2003). Understanding words in sentence contexts: The time course of ambiguity resolution. Brain and Language, 86(2), 326-343. doi:10.1016/S0093-934X(02)00547-3.

    Abstract

    Spoken language comprehension requires rapid integration of information from multiple linguistic sources. In the present study we addressed the temporal aspects of this integration process by focusing on the time course of the selection of the appropriate meaning of lexical ambiguities (“bank”) in sentence contexts. Successful selection of the contextually appropriate meaning of the ambiguous word is dependent upon the rapid binding of the contextual information in the sentence to the appropriate meaning of the ambiguity. We used the N400 to identify the time course of this binding process. The N400 was measured to target words that followed three types of context sentences. In the concordant context, the sentence biased the meaning of the sentence-final ambiguous word so that it was related to the target. In the discordant context, the sentence context biased the meaning so that it was not related to the target. In the unrelated control condition, the sentences ended in an unambiguous noun that was unrelated to the target. Half of the concordant sentences biased the dominant meaning, and the other half biased the subordinate meaning of the sentence-final ambiguous words. The ISI between onset of the target word and offset of the sentence-final word of the context sentence was 100 ms in one version of the experiment, and 1250 ms in the second version. We found that (i) the lexically dominant meaning is always partly activated, independent of context, (ii) initially both dominant and subordinate meaning are (partly) activated, which suggests that contextual and lexical factors both contribute to sentence interpretation without context completely overriding lexical information, and (iii) strong lexical influences remain present for a relatively long period of time.
  • Swingley, D. (2003). Phonetic detail in the developing lexicon. Language and Speech, 46(3), 265-294.

    Abstract

    Although infants show remarkable sensitivity to linguistically relevant phonetic variation in speech, young children sometimes appear not to make use of this sensitivity. Here, children's knowledge of the sound-forms of familiar words was assessed using a visual fixation task. Dutch 19-month-olds were shown pairs of pictures and heard correct pronunciations and mispronunciations of familiar words naming one of the pictures. Mispronunciations were word-initial in Experiment 1 and word-medial in Experiment 2, and in both experiments involved substituting one segment with [d] (a common sound in Dutch) or [g] (a rare sound). In both experiments, word recognition performance was better for correct pronunciations than for mispronunciations involving either substituted consonant. These effects did not depend upon children's knowledge of lexical or nonlexical phonological neighbors of the tested words. The results indicate the encoding of phonetic detail in words at 19 months.
  • Swingley, D., & Fernald, A. (2002). Recognition of words referring to present and absent objects by 24-month-olds. Journal of Memory and Language, 46(1), 39-56. doi:10.1006/jmla.2001.2799.

    Abstract

    Three experiments tested young children's efficiency in recognizing words in speech referring to absent objects. Seventy-two 24-month-olds heard sentences containing target words denoting objects that were or were not present in a visual display. Children's eye movements were monitored as they heard the sentences. Three distinct patterns of response were shown. Children hearing a familiar word that was an appropriate label for the currently fixated picture maintained their gaze. Children hearing a familiar word that could not apply to the currently fixated picture rapidly shifted their gaze to the alternative picture, whether that alternative was the named target or not, and then continued to search for an appropriate referent. Finally, children hearing an unfamiliar word shifted their gaze slowly and irregularly. This set of outcomes is interpreted as evidence that by 24 months, rapid activation in word recognition does not depend on the presence of the words' referents. Rather, very young children are capable of quickly and efficiently interpreting words in the absence of visual supporting context.
  • Swingley, D., & Aslin, R. N. (2002). Lexical neighborhoods and the word-form representations of 14-month-olds. Psychological Science, 13(5), 480-484. doi:10.1111/1467-9280.00485.

    Abstract

    The degree to which infants represent phonetic detail in words has been a source of controversy in phonology and developmental psychology. One prominent hypothesis holds that infants store words in a vague or inaccurate form until the learning of similar-sounding neighbors forces attention to subtle phonetic distinctions. In the experiment reported here, we used a visual fixation task to assess word recognition. We present the first evidence indicating that, in fact, the lexical representations of 14- and 15-month-olds are encoded in fine detail, even when this detail is not functionally necessary for distinguishing similar words in the infant’s vocabulary. Exposure to words is sufficient for well-specified lexical representations, even well before the vocabulary spurt. These results suggest developmental continuity in infants’ representations of speech: As infants begin to build a vocabulary and learn word meanings, they use the perceptual abilities previously demonstrated in tasks testing the discrimination and categorization of meaningless syllables.
  • Swingley, D., & Aslin, R. N. (2000). Spoken word recognition and lexical representation in very young children. Cognition, 76, 147-166. doi:10.1016/S0010-0277(00)00081-0.

    Abstract

    Although children's knowledge of the sound patterns of words has been a focus of debate for many years, little is known about the lexical representations very young children use in word recognition. In particular, researchers have questioned the degree of specificity encoded in early lexical representations. The current study addressed this issue by presenting 18–23-month-olds with object labels that were either correctly pronounced, or mispronounced. Mispronunciations involved replacement of one segment with a similar segment, as in ‘baby–vaby’. Children heard sentences containing these words while viewing two pictures, one of which was the referent of the sentence. Analyses of children's eye movements showed that children recognized the spoken words in both conditions, but that recognition was significantly poorer when words were mispronounced. The effects of mispronunciation on recognition were unrelated to age or to spoken vocabulary size. The results suggest that children's representations of familiar words are phonetically well-specified, and that this specification may not be a consequence of the need to differentiate similar words in production.
  • Takashima, A., Petersson, K. M., Rutters, F., Tendolkar, I., Jensen, O., Zwarts, M. J., McNaughton, B. L., & Fernández, G. (2006). Declarative memory consolidation in humans: A prospective functional magnetic resonance imaging study. Proceedings of the National Academy of Sciences of the United States of America [PNAS], 103(3), 756-761.

    Abstract

    Retrieval of recently acquired declarative memories depends on the hippocampus, but with time, retrieval is increasingly sustainable by neocortical representations alone. This process has been conceptualized as system-level consolidation. Using functional magnetic resonance imaging, we assessed over the course of three months how consolidation affects the neural correlates of memory retrieval. The duration of slow-wave sleep during a nap/rest period after the initial study session and before the first scan session on day 1 correlated positively with recognition memory performance for items studied before the nap and negatively with hippocampal activity associated with correct confident recognition. Over the course of the entire study, hippocampal activity for correct confident recognition continued to decrease, whereas activity in a ventral medial prefrontal region increased. These findings, together with data obtained in rodents, may prompt a revision of classical consolidation theory, incorporating a transfer of putative linking nodes from hippocampal to prelimbic prefrontal areas.
  • Takashima, A., Carota, F., Schoots, V., Redmann, A., Jehee, J., & Indefrey, P. (2024). Tomatoes are red: The perception of achromatic objects elicits retrieval of associated color knowledge. Journal of Cognitive Neuroscience, 36(1), 24-45. doi:10.1162/jocn_a_02068.

    Abstract

    When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named or not. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented in black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but show different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri (“Human V4”) correlated with a representational model encoding the red–green distinction weighted by the importance of color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself have to be named.
  • Tamaoka, K., Yu, S., Zhang, J., Otsuka, Y., Lim, H., Koizumi, M., & Verdonschot, R. G. (2024). Syntactic structures in motion: Investigating word order variations in verb-final (Korean) and verb-initial (Tongan) languages. Frontiers in Psychology, 15: 1360191. doi:10.3389/fpsyg.2024.1360191.

    Abstract

    This study explored sentence processing in two typologically distinct languages: Korean, a verb-final language, and Tongan, a verb-initial language. The first experiment revealed that in Korean, sentences arranged in the scrambled OSV (Object, Subject, Verb) order were processed more slowly than those in the canonical SOV order, highlighting a scrambling effect. It also found that sentences with subject topicalization in the SOV order were processed as swiftly as those in the canonical form, whereas sentences with object topicalization in the OSV order were processed with speeds and accuracy comparable to scrambled sentences. However, since topicalization and scrambling in Korean use the same OSV order, independently distinguishing the effects of topicalization is challenging. In contrast, Tongan allows for a clear separation of word orders for topicalization and scrambling, facilitating an independent evaluation of topicalization effects. The second experiment, employing a maze task, confirmed that Tongan’s canonical VSO order was processed more efficiently than the VOS scrambled order, thereby verifying a scrambling effect. The third experiment investigated the effects of both scrambling and topicalization in Tongan, finding that the canonical VSO order was processed most efficiently in terms of speed and accuracy, unlike the VOS scrambled and SVO topicalized orders. Notably, the OVS object-topicalized order was processed as efficiently as the VSO canonical order, while the SVO subject-topicalized order was slower than VSO but faster than VOS. By independently assessing the effects of topicalization apart from scrambling, this study demonstrates that both subject and object topicalization in Tongan facilitate sentence processing, contradicting the predictions based on movement-based anticipation.

    Additional information

    appendix 1-3
  • Tanenhaus, M. K., Magnuson, J. S., Dahan, D., & Chambers, C. (2000). Eye movements and lexical access in spoken-language comprehension: Evaluating a linking hypothesis between fixations and linguistic processing. Journal of Psycholinguistic Research, 29, 557-580. doi:10.1023/A:1026464108329.

    Abstract

    A growing number of researchers in the sentence processing community are using eye movements to address issues in spoken language comprehension. Experiments using this paradigm have shown that visually presented referential information, including properties of referents relevant to specific actions, influences even the earliest moments of syntactic processing. Methodological concerns about task-specific strategies and the linking hypothesis between eye movements and linguistic processing are identified and discussed. These concerns are addressed in a review of recent studies of spoken word recognition which introduce and evaluate a detailed linking hypothesis between eye movements and lexical access. The results provide evidence about the time course of lexical activation that resolves some important theoretical issues in spoken-word recognition. They also demonstrate that fixations are sensitive to properties of the normal language-processing system that cannot be attributed to task-specific strategies.
  • Ten Bosch, L., Baayen, R. H., & Ernestus, M. (2006). On speech variation and word type differentiation by articulatory feature representations. In Proceedings of Interspeech 2006 (pp. 2230-2233).

    Abstract

    This paper describes ongoing research aiming at the description of variation in speech as represented by asynchronous articulatory features. We will first illustrate how distances in the articulatory feature space can be used for event detection along speech trajectories in this space. The temporal structure imposed by the cosine distance in articulatory feature space coincides to a large extent with the manual segmentation on phone level. The analysis also indicates that the articulatory feature representation provides better such alignments than the MFCC representation does. Secondly, we will present first results that indicate that articulatory features can be used to probe for acoustic differences in the onsets of Dutch singulars and plurals.
  • Ten Bosch, L., Hämäläinen, A., Scharenborg, O., & Boves, L. (2006). Acoustic scores and symbolic mismatch penalties in phone lattices. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing [ICASSP 2006]. IEEE.

    Abstract

    This paper builds on previous work that aims at unraveling the structure of the speech signal by means of using probabilistic representations. The context of this work is a multi-pass speech recognition system in which a phone lattice is created and used as a basis for a lexical search in which symbolic mismatches are allowed at certain costs. The focus is on the optimization of the costs of phone insertions, deletions and substitutions that are used in the lexical decoding pass. Two optimization approaches are presented, one related to a multi-pass computational model for human speech recognition, the other based on a decoding in which Bayes’ risks are minimized. In the final section, the advantages of these optimization methods are discussed and compared.
  • Ten Oever, S., & Martin, A. E. (2024). Interdependence of “what” and “when” in the brain. Journal of Cognitive Neuroscience, 36(1), 167-186. doi:10.1162/jocn_a_02067.

    Abstract

    From a brain's-eye-view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We here argue that neural temporal dynamics can influence what is perceived, and in turn, stimulus content can influence the time at which perception is achieved. This computational principle results from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding—and, minimally, modeling—this temporal variability is key for theories of how the brain generates unified and consistent neural representations and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what and when interactions in the brain, demonstrate via simulations how temporal variability can result in misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
  • Ten Oever, S., Titone, L., te Rietmolen, N., & Martin, A. E. (2024). Phase-dependent word perception emerges from region-specific sensitivity to the statistics of language. Proceedings of the National Academy of Sciences of the United States of America, 121(3): e2320489121. doi:10.1073/pnas.2320489121.

    Abstract

    Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modelling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and medial temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.
  • Ter Keurs, M., Brown, C. M., & Hagoort, P. (2002). Lexical processing of vocabulary class in patients with Broca's aphasia: An event-related brain potential study on agrammatic comprehension. Neuropsychologia, 40(9), 1547-1561. doi:10.1016/S0028-3932(02)00025-8.

    Abstract

    This paper presents electrophysiological evidence of an impairment in the on-line processing of word class information in patients with Broca’s aphasia with agrammatic comprehension. Event-related brain potentials (ERPs) were recorded from the scalp while Broca patients and non-aphasic control subjects read open- and closed-class words that appeared one at a time on a PC screen. Separate waveforms were computed for open- and closed-class words. The non-aphasic control subjects showed a modulation of an early left anterior negativity in the 210–325 ms as a function of vocabulary class (VC), and a late left anterior negative shift to closed-class words in the 400–700 ms epoch. An N400 effect was present in both control subjects and Broca patients. We have taken the early electrophysiological differences to reflect the first availability of word-category information from the mental lexicon. The late differences can be related to post-lexical processing. In contrast to the control subjects, the Broca patients showed no early VC effect and no late anterior shift to closed-class words. The results support the view that an incomplete and/or delayed availability of word-class information might be an important factor in Broca’s agrammatic comprehension.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Terporten, R., Huizeling, E., Heidlmayr, K., Hagoort, P., & Kösem, A. (2024). The interaction of context constraints and predictive validity during sentence reading. Journal of Cognitive Neuroscience, 36(2), 225-238. doi:10.1162/jocn_a_02082.

    Abstract

    Words are not processed in isolation; instead, they are commonly embedded in phrases and sentences. The sentential context influences the perception and processing of a word. However, how this is achieved by brain processes and whether predictive mechanisms underlie this process remain a debated topic. Here, we employed an experimental paradigm in which we orthogonalized sentence context constraints and predictive validity, which was defined as the ratio of congruent to incongruent sentence endings within the experiment. While recording electroencephalography, participants read sentences with three levels of sentential context constraints (high, medium, and low). Participants were also separated into two groups that differed in their ratio of valid congruent to incongruent target words that could be predicted from the sentential context. For both groups, we investigated modulations of alpha power before, and N400 amplitude modulations after target word onset. The results reveal that the N400 amplitude gradually decreased with higher context constraints and cloze probability. In contrast, alpha power was not significantly affected by context constraint. Neither the N400 nor alpha power were significantly affected by changes in predictive validity.
  • Terrill, A. (2002). Systems of nominal classification in East Papuan languages. Oceanic Linguistics, 41(1), 63-88.

    Abstract

    The existence of nominal classification systems has long been thought of as one of the defining features of the Papuan languages of island New Guinea. However, while almost all of these languages do have nominal classification systems, they are, in fact, extremely divergent from each other. This paper examines these systems in the East Papuan languages in order to examine the question of the relationship between these Papuan outliers. Nominal classification systems are often archaic, preserving older features lost elsewhere in a language. Also, evidence shows that they are not easily borrowed into languages (although they can be). For these reasons, it is useful to consider nominal classification systems as a tool for exploring ancient historical relationships between languages. This paper finds little evidence of relationship between the nominal classification systems of the East Papuan languages as a whole. It argues that the mere existence of nominal classification systems cannot be used as evidence that the East Papuan languages form a genetic family. The simplest hypothesis is that either the systems were inherited so long ago as to obscure the genetic evidence, or else the appearance of nominal classification systems in these languages arose through borrowing of grammatical systems rather than of morphological forms.
  • Terrill, A., & Dunn, M. (2006). Semantic transference: Two preliminary case studies from the Solomon Islands. In C. Lefebvre, L. White, & C. Jourdan (Eds.), L2 acquisition and Creole genesis: Dialogues (pp. 67-85). Amsterdam: Benjamins.
  • Terrill, A., & Dunn, M. (2003). Orthographic design in the Solomon Islands: The social, historical, and linguistic situation of Touo (Baniata). Written Language and Literacy, 6(2), 177-192. doi:10.1075/wll.6.2.03ter.

    Abstract

    This paper discusses the development of an orthography for the Touo language (Solomon Islands). Various orthographies have been proposed for this language in the past, and the paper discusses why they are perceived by the community to have failed. Current opinion about orthography development within the Touo-speaking community is divided along religious, political, and geographical grounds; and the development of a successful orthography must take into account a variety of opinions. The paper examines the social, historical, and linguistic obstacles that have hitherto prevented the development of an accepted Touo orthography, and presents a new proposal which has thus far gained acceptance with community leaders. The fundamental issue is that creating an orthography for a language takes place in a social, political, and historical context; and for an orthography to be acceptable for the speakers of a language, all these factors must be taken into account.
  • Terrill, A. (2002). Why make books for people who can't read? A perspective on documentation of an endangered language from Solomon Islands. International Journal of the Sociology of Language, 155/156(1), 205-219. doi:10.1515/ijsl.2002.029.

    Abstract

    This paper explores the issue of documenting an endangered language from the perspective of a community with low levels of literacy. I first discuss the background of the language community with whom I work, the Lavukal people of Solomon Islands, and discuss whether, and to what extent, Lavukaleve is an endangered language. I then go on to discuss the documentation project. My main point is that while low literacy levels and a nonreading culture would seem to make documentation a strange choice as a tool for language maintenance, in fact both serve as powerful cultural symbols of the importance and prestige of Lavukaleve. It is well known that a common reason for language death is that speakers choose not to transmit their language to the next generation (e.g. Winter 1993). Lavukaleve is particularly vulnerable in this respect. By utilizing cultural symbols of status and prestige, the standing of Lavukaleve can be enhanced, thus helping to ensure the transmission of Lavukaleve to future generations.
  • Terrill, A. (2002). [Review of the book The Interface between syntax and discourse in Korafe, a Papuan language of Papua New Guinea by Cynthia J. M. Farr]. Linguistic Typology, 6(1), 110-116. doi:10.1515/lity.2002.004.
  • Terrill, A. (2002). Dharumbal: The language of Rockhampton, Australia. Canberra: Pacific Linguistics.
  • Terrill, A. (2003). A grammar of Lavukaleve. Berlin: Mouton de Gruyter.
  • Terrill, A. (2006). Central Solomon languages. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 2) (pp. 279-280). Amsterdam: Elsevier.

    Abstract

    The Papuan languages of the central Solomon Islands are a negatively defined areal grouping: They are those four or possibly five languages in the central Solomon Islands that do not belong to the Austronesian family. Bilua (Vella Lavella), Touo (Rendova), Lavukaleve (Russell Islands), Savosavo (Savo Island) and possibly Kazukuru (New Georgia) have been identified as non-Austronesian since the early 20th century. However, their affiliations both to each other and to other languages still remain a mystery. Heterogeneous and until recently largely undescribed, they present an interesting departure from what is known both of Austronesian languages in the region and of the Papuan languages of the mainland of New Guinea.
  • Terrill, A. (2006). Body part terms in Lavukaleve, a Papuan language of the Solomon Islands. Language Sciences, 28(2-3), 304-322. doi:10.1016/j.langsci.2005.11.008.

    Abstract

    This paper explores body part terms in Lavukaleve, a Papuan isolate spoken in the Solomon Islands. The full set of body part terms collected so far is presented, and their grammatical properties are explained. It is argued that Lavukaleve body part terms do not enter into partonomic relations with each other, and that a hierarchical structure of body part terms does not apply for Lavukaleve. It is shown too that some universal claims which have been made about the expression of terms relating to limbs are contradicted in Lavukaleve, which has only one general term covering arm, hand, leg and (for some people) foot.
  • Terrill, A. (2003). Linguistic stratigraphy in the central Solomon Islands: Lexical evidence of early Papuan/Austronesian interaction. Journal of the Polynesian Society, 112(4), 369-401.

    Abstract

    The extent to which linguistic borrowing can be used to shed light on the existence and nature of early contact between Papuan and Oceanic speakers is examined. The question is addressed by taking one Papuan language, Lavukaleve, spoken in the Russell Islands, central Solomon Islands and examining lexical borrowings between it and nearby Oceanic languages, and with reconstructed forms of Proto Oceanic. Evidence from ethnography, culture history and archaeology, when added to the linguistic evidence provided in this study, indicates long-standing cultural links between other (non-Russell) islands. The composite picture is one of a high degree of cultural contact with little linguistic mixing, i.e., little or no changes affecting the structure of the languages and actually very little borrowed vocabulary.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2002). Going, going, gone: The acquisition of the verb ‘go’. Journal of Child Language, 29(4), 783-811. doi:10.1017/S030500090200538X.

    Abstract

    This study investigated different accounts of early argument structure acquisition and verb paradigm building through the detailed examination of the acquisition of the verb Go. Data from 11 children followed longitudinally between the ages of 2;0 and 3;0 were examined. Children's uses of the different forms of Go were compared with respect to syntactic structure and the semantics encoded. The data are compatible with the suggestion that the children were not operating with a single verb representation that differentiated between different forms of Go but rather that their knowledge of the relationship between the different forms of Go varied depending on the structure produced and the meaning encoded. However, a good predictor of the children's use of different forms of Go in particular structures and to express particular meanings was the frequency of use of those structures and meanings with particular forms of Go in the input. The implications of these findings for theories of syntactic category formation and abstract rule-based descriptions of grammar are discussed.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2006). Note of clarification on the coding of light verbs in ‘Semantic generality, input frequency and the acquisition of syntax’ (Journal of Child Language 31, 61–99). Journal of Child Language, 33(1), 191-197. doi:10.1017/S0305000905007178.

    Abstract

    In our recent paper, ‘Semantic generality, input frequency and the acquisition of syntax’ (Journal of Child Language 31, 61–99), we presented data from two-year-old children to examine the question of whether the semantic generality of verbs contributed to their ease and stage of acquisition over and above the effects of their typically high frequency in the language to which children are exposed. We adopted two different categorization schemes to determine whether individual verbs should be considered to be semantically general, or ‘light’, or whether they encoded more specific semantics. These categorization schemes were based on previous work in the literature on the role of semantically general verbs in early verb acquisition, and were designed, in the first case, to be a conservative estimate of semantic generality, including only verbs designated as semantically general by a number of other researchers (e.g. Clark, 1978; Pinker, 1989; Goldberg, 1998), and, in the second case, to be a more inclusive estimate of semantic generality based on Ninio's (1999a,b) suggestion that grammaticalizing verbs encode the semantics associated with semantically general verbs. Under this categorization scheme, a much larger number of verbs were included as semantically general verbs.
  • Thothathiri, M., Basnakova, J., Lewis, A. G., & Briand, J. M. (2024). Fractionating difficulty during sentence comprehension using functional neuroimaging. Cerebral Cortex, 34(2): bhae032. doi:10.1093/cercor/bhae032.

    Abstract

    Sentence comprehension is highly practiced and largely automatic, but this belies the complexity of the underlying processes. We used functional neuroimaging to investigate garden-path sentences that cause difficulty during comprehension, in order to unpack the different processes used to support sentence interpretation. By investigating garden-path and other types of sentences within the same individuals, we functionally profiled different regions within the temporal and frontal cortices in the left hemisphere. The results revealed that different aspects of comprehension difficulty are handled by left posterior temporal, left anterior temporal, ventral left frontal, and dorsal left frontal cortices. The functional profiles of these regions likely lie along a spectrum of specificity to generality, including language-specific processing of linguistic representations, more general conflict resolution processes operating over linguistic representations, and processes for handling difficulty in general. These findings suggest that difficulty is not unitary and that there is a role for a variety of linguistic and non-linguistic processes in supporting comprehension.

    Additional information

    Supplementary information
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Troncarelli, M. C., & Drude, S. (2002). Awytyza Ti'ingku. Livro para alfabetização na língua aweti: Awytyza Ti’ingku. Alphabetisierungs‐Fibel der Awetí‐Sprache. São Paulo: Instituto Sócio-Ambiental.
  • Trujillo, J. P. (2024). Motion-tracking technology for the study of gesture. In A. Cienki (Ed.), The Cambridge Handbook of Gesture Studies. Cambridge: Cambridge University Press.
  • Trujillo, J. P., & Holler, J. (2024). Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Scientific Reports, 14: 2286. doi:10.1038/s41598-024-52589-0.

    Abstract

    Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meaning compared to utterances accompanied by the single visual signals. However, responses to combinations of signals were more similar to the responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.

    Additional information

    41598_2024_52589_MOESM1_ESM.docx
  • Trujillo, J. P., & Holler, J. (2024). Information distribution patterns in naturalistic dialogue differ across languages. Psychonomic Bulletin & Review, 31, 1723-1734. doi:10.3758/s13423-024-02452-0.

    Abstract

    The natural ecology of language is conversation, with individuals taking turns speaking to communicate in a back-and-forth fashion. Language in this context involves strings of words that a listener must process while simultaneously planning their own next utterance. It would thus be highly advantageous if language users distributed information within an utterance in a way that may facilitate this processing–planning dynamic. While some studies have investigated how information is distributed at the level of single words or clauses, or in written language, little is known about how information is distributed within spoken utterances produced during naturalistic conversation. It also is not known how information distribution patterns of spoken utterances may differ across languages. We used a set of matched corpora (CallHome) containing 898 telephone conversations conducted in six different languages (Arabic, English, German, Japanese, Mandarin, and Spanish), analyzing more than 58,000 utterances, to assess whether there is evidence of distinct patterns of information distributions at the utterance level, and whether these patterns are similar or differed across the languages. We found that English, Spanish, and Mandarin typically show a back-loaded distribution, with higher information (i.e., surprisal) in the last half of utterances compared with the first half, while Arabic, German, and Japanese showed front-loaded distributions, with higher information in the first half compared with the last half. Additional analyses suggest that these patterns may be related to word order and rate of noun and verb usage. We additionally found that back-loaded languages have longer turn transition times (i.e., time between speaker turns).

    Additional information

    Data availability
  • Tuinman, A. (2006). Overcompensation of /t/ reduction in Dutch by German/Dutch bilinguals. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 101-102).
  • Ullman, M. T., Bulut, T., & Walenski, M. (2024). Hijacking limitations of working memory load to test for composition in language. Cognition, 251: 105875. doi:10.1016/j.cognition.2024.105875.

    Abstract

    Although language depends on storage and composition, just what is stored or (de)composed remains unclear. We leveraged working memory load limitations to test for composition, hypothesizing that decomposed forms should particularly tax working memory. We focused on a well-studied paradigm, English inflectional morphology. We predicted that (compositional) regulars should be harder to maintain in working memory than (non-compositional) irregulars, using a 3-back production task. Frequency, phonology, orthography, and other potentially confounding factors were controlled for. Compared to irregulars, regulars and their accompanying −s/−ing-affixed filler items yielded more errors. Underscoring the decomposition of only regulars, regulars yielded more bare-stem (e.g., walk) and stem affixation errors (walks/walking) than irregulars, whereas irregulars yielded more past-tense-form affixation errors (broughts/tolded). In line with previous evidence that regulars can be stored under certain conditions, the regular-irregular difference held specifically for phonologically consistent (not inconsistent) regulars, in particular for both low and high frequency consistent regulars in males, but only for low frequency consistent regulars in females. Sensitivity analyses suggested the findings were robust. The study further elucidates the computation of inflected forms, and introduces a simple diagnostic for linguistic composition.

    Additional information

    Data availability
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 432-436).

    Abstract

    The human brain deals with the infinite variability of speech through multiple mechanisms. Some of them rely solely on information in the speech input (i.e., signal-driven) whereas some rely on linguistic or real-world knowledge (i.e., knowledge-driven). Many signal-driven perceptual processes rely on the enhancement of acoustic differences between incoming speech sounds, producing contrastive adjustments. For instance, when an ambiguous voiceless fricative is preceded by a high fundamental frequency (f0) sentence, the fricative is perceived as having a lower spectral center of gravity (CoG). However, it is not clear whether knowledge of a talker’s typical f0 can lead to similar contrastive effects. This study investigated a possible talker f0 effect on fricative CoG perception. In the exposure phase, two groups of participants (N=16 each) heard the same talker at high or low f0 for 20 minutes. Later, in the test phase, participants rated fixed-f0 /?ɔk/ tokens as being /sɔk/ (i.e., high CoG) or /ʃɔk/ (i.e., low CoG), where /?/ represents a fricative from a 5-step /s/-/ʃ/ continuum. Surprisingly, the data revealed the opposite of our contrastive hypothesis, whereby hearing high f0 instead biased perception towards high CoG. Thus, we demonstrated that talker f0 information affects fricative CoG perception.
  • Van Staden, M., Bowerman, M., & Verhelst, M. (2006). Some properties of spatial description in Dutch. In S. C. Levinson, & D. Wilkins (Eds.), Grammars of Space (pp. 475-511). Cambridge: Cambridge University Press.
  • Van Turennout, M., Bielamowicz, L., & Martin, A. (2003). Modulation of neural activity during object naming: Effects of time and practice. Cerebral Cortex, 13(4), 381-391.

    Abstract

    Repeated exposure to objects improves our ability to identify and name them, even after a long delay. Previous brain imaging studies have demonstrated that this experience-related facilitation of object naming is associated with neural changes in distinct brain regions. We used event-related functional magnetic resonance imaging (fMRI) to examine the modulation of neural activity in the object naming system as a function of experience and time. Pictures of common objects were presented repeatedly for naming at different time intervals (1 h, 6 h and 3 days) before scanning, or at 30 s intervals during scanning. The results revealed that as objects became more familiar with experience, activity in occipitotemporal and left inferior frontal regions decreased while activity in the left insula and basal ganglia increased. In posterior regions, reductions in activity as a result of multiple repetitions did not interact with time, whereas in left inferior frontal cortex larger decreases were observed when repetitions were spaced out over time. This differential modulation of activity in distinct brain regions provides support for the idea that long-lasting object priming is mediated by two neural mechanisms. The first mechanism may involve changes in object-specific representations in occipitotemporal cortices, the second may be a form of procedural learning involving a reorganization in brain circuitry that leads to more efficient name retrieval.
  • Van Alphen, P. M., & McQueen, J. M. (2006). The effect of voice onset time differences on lexical access in Dutch. Journal of Experimental Psychology: Human Perception and Performance, 32(1), 178-196. doi:10.1037/0096-1523.32.1.178.

    Abstract

    Effects on spoken-word recognition of prevoicing differences in Dutch initial voiced plosives were examined. In 2 cross-modal identity-priming experiments, participants heard prime words and nonwords beginning with voiced plosives with 12, 6, or 0 periods of prevoicing or matched items beginning with voiceless plosives and made lexical decisions to visual tokens of those items. Six-period primes had the same effect as 12-period primes. Zero-period primes had a different effect, but only when their voiceless counterparts were real words. Listeners could nevertheless discriminate the 6-period primes from the 12- and 0-period primes. Phonetic detail appears to influence lexical access only to the extent that it is useful: In Dutch, presence versus absence of prevoicing is more informative than amount of prevoicing.
  • Van den Brink, D., Brown, C. M., & Hagoort, P. (2006). The cascaded nature of lexical selection and integration in auditory sentence processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(3), 364-372. doi:10.1037/0278-7393.32.3.364.

    Abstract

    An event-related brain potential experiment was carried out to investigate the temporal relationship between lexical selection and semantic integration in auditory sentence processing. Participants were presented with spoken sentences that ended with a word that was either semantically congruent or anomalous. Information about the moment in which a sentence-final word could uniquely be identified, its isolation point (IP), was compared with the onset of the elicited N400 congruity effect, reflecting semantic integration processing. The results revealed that the onset of the N400 effect occurred prior to the IP of the sentence-final words. Moreover, the factor early or late IP did not affect the onset of the N400. These findings indicate that lexical selection and semantic integration are cascading processes, in that semantic integration processing can start before the acoustic information allows the selection of a unique candidate and seems to be attempted in parallel for multiple candidates that are still compatible with the bottom–up acoustic input.
  • Van Berkum, J. J. A., Zwitserlood, P., Hagoort, P., & Brown, C. M. (2003). When and how do listeners relate a sentence to the wider discourse? Evidence from the N400 effect. Cognitive Brain Research, 17(3), 701-718. doi:10.1016/S0926-6410(03)00196-4.

    Abstract

    In two ERP experiments, we assessed the impact of discourse-level information on the processing of an unfolding spoken sentence. Subjects listened to sentences like Jane told her brother that he was exceptionally quick/slow, designed such that the alternative critical words were equally acceptable within the local sentence context. In Experiment 1, these sentences were embedded in a discourse that rendered one of the critical words anomalous (e.g. because Jane’s brother had in fact done something very quickly). Relative to the coherent alternative, these discourse-anomalous words elicited a standard N400 effect that started at 150–200 ms after acoustic word onset. Furthermore, when the same sentences were heard in isolation in Experiment 2, the N400 effect disappeared. The results demonstrate that our listeners related the unfolding spoken words to the wider discourse extremely rapidly, after having heard the first two or three phonemes only, and in many cases well before the end of the word. In addition, the identical nature of discourse- and sentence-dependent N400 effects suggests that from the perspective of the word-elicited comprehension process indexed by the N400, the interpretive context delineated by a single unfolding sentence and a larger discourse is functionally identical.
  • Van Turennout, M., Schmitt, B., & Hagoort, P. (2003). When words come to mind: Electrophysiological insights on the time course of speaking and understanding words. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 241-278). Berlin: Mouton de Gruyter.
  • van Staden, M., & Majid, A. (2003). Body colouring task 2003. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 66-68). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877666.

    Abstract

    This Field Manual entry has been superseded by the published version: Van Staden, M., & Majid, A. (2006). Body colouring task. Language Sciences, 28(2-3), 158-161. doi:10.1016/j.langsci.2005.11.004.

    Additional information

    2003_body_model_large.pdf

  • Van Ooijen, B., Cutler, A., & Bertinetto, P. M. (1993). Click detection in Italian and English. In Eurospeech 93: Vol. 1 (pp. 681-684). Berlin: ESCA.

    Abstract

    We report four experiments in which English and Italian monolinguals detected clicks in continuous speech in their native language. Two of the experiments used an off-line location task, and two used an on-line reaction time task. Despite there being large differences between English and Italian with respect to rhythmic characteristics, very similar response patterns were found for the two language groups. It is concluded that the process of click detection operates independently from language-specific differences in perceptual processing at the sublexical level.
  • Van Berkum, J. J. A., Brown, C. M., Hagoort, P., & Zwitserlood, P. (2003). Event-related brain potentials reflect discourse-referential ambiguity in spoken language comprehension. Psychophysiology, 40(2), 235-248. doi:10.1111/1469-8986.00025.

    Abstract

    In two experiments, we explored the use of event-related brain potentials to selectively track the processes that establish reference during spoken language comprehension. Subjects listened to stories in which a particular noun phrase like "the girl" either uniquely referred to a single referent mentioned in the earlier discourse, or ambiguously referred to two equally suitable referents. Referentially ambiguous nouns ("the girl" with two girls introduced in the discourse context) elicited a frontally dominant and sustained negative shift in brain potentials, emerging within 300–400 ms after acoustic noun onset. The early onset of this effect reveals that reference to a discourse entity can be established very rapidly. Its morphology and distribution suggest that at least some of the processing consequences of referential ambiguity may involve an increased demand on memory resources. Furthermore, because this referentially induced ERP effect is very different from that of well-known ERP effects associated with the semantic (N400) and syntactic (e.g., P600/SPS) aspects of language comprehension, it suggests that ERPs can be used to selectively keep track of three major processes involved in the comprehension of an unfolding piece of discourse.
  • Van Gijn, E. (2006). A grammar of Yurakaré. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Abstract

    This book provides an overview of the grammatical structure of the language Yurakaré, an unclassified and previously undescribed language of central Bolivia. It consists of 8 chapters, each describing different aspects of the language. Chapter 1 is an introduction to the Yurakaré people and their language. Chapter 2 describes the phonology of the language, from the individual sounds to the stress system. In chapter 3 the morphology of Yurakaré is introduced, i.e. the parts of speech, and the different morphological processes. Chapter 4 is a description of the noun phrase and contains information about nouns, adjectives, postpositions and quantifiers. It also discusses the categories associated with the noun phrase in Yurakaré, such as number, possession, collectivity/distributivity, diminutive. In chapter 5, called 'Verbal agreement, voice and valency' there is a description of the argument structure of predicates, how arguments are expressed and how argument structure can be altered by means of voice and valency-changing operations such as applicatives, causative and middle voice. In chapter 6 there is an overview of verbal morphology, apart from the morphology associated with voice, valency and cross-reference discussed in chapter 5. There is also a description of adverbs in the language in this chapter. Chapter 7 discusses formal and functional properties of modal and aspectual enclitics. In chapter 8, finally, the structure of the clause (both simplex and complex) is discussed, including the switch-reference system and word order. The book ends with two text samples.
  • Van Staden, M., & Majid, A. (2006). Body colouring task. Language Sciences, 28(2-3), 158-161. doi:10.1016/j.langsci.2005.11.004.

    Abstract

    This paper outlines a method for collecting information on the extensional meanings of body part terms using a colouring in task.
  • Van Gompel, R. P., & Majid, A. (2003). Antecedent frequency effects during the processing of pronouns. Cognition, 90(3), 255-264. doi:10.1016/S0010-0277(03)00161-6.

    Abstract

    An eye-movement reading experiment investigated whether the ease with which pronouns are processed is affected by the lexical frequency of their antecedent. Reading times following pronouns with infrequent antecedents were faster than following pronouns with frequent antecedents. We argue that this is consistent with a saliency account, according to which infrequent antecedents are more salient than frequent antecedents. The results are not predicted by accounts which claim that readers access all or part of the lexical properties of the antecedent during the processing of pronouns.
  • Van Turennout, M. (2002). Het benoemen van een object veroorzaakt langdurige veranderingen in het brein. Neuropraxis, 6(3), 77-81.
  • Van Valin Jr., R. D. (2000). Focus structure or abstract syntax? A role and reference grammar account of some ‘abstract’ syntactic phenomena. In Z. Estrada Fernández, & I. Barreras Aguilar (Eds.), Memorias del V Encuentro Internacional de Lingüística en el Noroeste: (2 v.) Estudios morfosintácticos (pp. 39-62). Hermosillo: Editorial Unison.
  • Van den Bos, E. J., & Poletiek, F. H. (2006). Implicit artificial grammar learning in adults and children. In R. Sun (Ed.), Proceedings of the 28th Annual Conference of the Cognitive Science Society (CogSci 2006) (pp. 2619). Austin, TX, USA: Cognitive Science Society.
  • Van Valin Jr., R. D. (2003). Minimalism and explanation. In J. Moore, & M. Polinsky (Eds.), The nature of explanation in linguistic theory (pp. 281-297). University of Chicago Press.
  • van de Beek, D., Weisfelt, M., Hoogman, M., de Gans, J., & Schmand, B. (2006). Neuropsychological sequelae of bacterial meningitis: The influence of alcoholism and adjunctive dexamethasone therapy [Letter to the editor]. Brain, 129, E46. doi:10.1093/brain/awl052.

    Abstract

    The article by Schmidt and colleagues (2006) reported neuropsychological sequelae of bacterial and viral meningitis. In a retrospective study, they carefully selected patients and excluded those with concomitant conditions such as alcoholism after Streptococcus pneumoniae meningitis (Schmidt et al., 2006). The authors should be complimented for their solid work; however, some questions can be raised.
  • van Geenhoven, V. (2002). Raised Possessors and Noun Incorporation in West Greenlandic. Natural Language & Linguistic Theory, 20(4), 759-821.

    Abstract

    This paper addresses the question of whether noun incorporation is a syntactically base-generated or a syntactically derived construction. Focusing on so-called 'raised possessors' in West Greenlandic noun incorporating constructions and presenting some new data, I discuss some problems that arise if we use the derivational framework of Bittner and Hale (1996) to analyze them. I show that if we make the predication relations in noun incorporating constructions overt in their syntax and if we adopt a dynamic approach to semantics, a base-generated syntactic input enriched with a coindexation system is all that we need to arrive at an adequate semantic interpretation of these constructions.
  • Van Valin Jr., R. D. (2006). Some universals of verb semantics. In R. Mairal, & J. Gil (Eds.), Linguistic universals (pp. 155-178). Cambridge: Cambridge University Press.
