Publications: Language and Comprehension
Displaying 1-20 of 836 publications
Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2022). Acoustic correlates of Dutch lexical stress re-examined: Spectral tilt is not always more reliable than intensity. Talk presented at Speech Prosody 2022. Lisbon, Portugal. 2022-05-23 - 2022-05-26.
Cho, T. (2022). The Phonetics-Prosody Interface and Prosodic Strengthening in Korean. In S. Cho, & J. Whitman (Eds.), Cambridge handbook of Korean linguistics (pp. 248-293). Cambridge: Cambridge University Press.
Choi, J., Broersma, M., & Cutler, A. (2018). Phonetic learning is not enhanced by sequential exposure to more than one language. Linguistic Research, 35(3), 567-581. doi:10.17250/khisli.35.3.201812.006.
Abstract
Several studies have documented that international adoptees, who in early years have experienced a change from a language used in their birth country to a new language in an adoptive country, benefit from the limited early exposure to the birth language when relearning that language’s sounds later in life. The adoptees’ relearning advantages have been argued to be conferred by lasting birth-language knowledge obtained from the early exposure. However, it is also plausible to assume that the advantages may arise from adoptees’ superior ability to learn language sounds in general, as a result of their unusual linguistic experience, i.e., exposure to multiple languages in sequence early in life. If this is the case, then the adoptees’ relearning benefits should generalize to previously unheard language sounds, rather than be limited to their birth-language sounds. In the present study, adult Korean adoptees in the Netherlands and matched Dutch-native controls were trained on identifying a Japanese length distinction to which they had never been exposed before. The adoptees and Dutch controls did not differ on any test carried out before, during, or after the training, indicating that observed adoptee advantages for birth-language relearning do not generalize to novel, previously unheard language sounds. The finding thus fails to support the suggestion that birth-language relearning advantages may arise from enhanced ability to learn language sounds in general conferred by early experience in multiple languages. Rather, our finding supports the original contention that such advantages involve memory traces obtained before adoption.
Tsuji, S., Fikkert, P., Yamane, N., & Mazuka, R. (2016). Language-general biases and language-specific experience contribute to phonological detail in toddlers' word representations. Developmental Psychology, 52, 379-390. doi:10.1037/dev0000093.
Abstract
Although toddlers in their 2nd year of life generally have phonologically detailed representations of words, a consistent lack of sensitivity to certain kinds of phonological changes has been reported. The origin of these insensitivities is poorly understood, and uncovering their cause is crucial for obtaining a complete picture of early phonological development. The present study explored the origins of the insensitivity to the change from coronal to labial consonants. In cross-linguistic research, we assessed to what extent this insensitivity is language-specific (or would show both in learners of Dutch and a very different language like Japanese), and contrast/direction-specific to the coronal-to-labial change (or would also extend to the coronal-to-dorsal change). We measured Dutch and Japanese 18-month-old toddlers' sensitivity to labial and dorsal mispronunciations of newly learned coronal-initial words. Both Dutch and Japanese toddlers showed reduced sensitivity to the coronal-to-labial change, although this effect was more pronounced in Dutch toddlers. The lack of sensitivity was also specific to the coronal-to-labial change because toddlers from both language backgrounds were highly sensitive to dorsal mispronunciations. Combined with results from previous studies, the present outcomes are most consistent with an early, language-general bias specific to the coronal-to-labial change, which is modified by the properties of toddlers' early, language-specific lexicon.
Tsuji, S., Mazuka, R., Cristia, A., & Fikkert, P. (2015). Even at 4 months, a labial is a good enough coronal, but not vice versa. Cognition, 134, 252-256. doi:10.1016/j.cognition.2014.10.009.
Abstract
Numerous studies have revealed an asymmetry tied to the perception of coronal place of articulation: participants accept a labial mispronunciation of a coronal target, but not vice versa. Whether or not this asymmetry is based on language-general properties or arises from language-specific experience has been a matter of debate. The current study suggests a bias of the first type by documenting an early, cross-linguistic asymmetry related to coronal place of articulation. Japanese and Dutch 4- and 6-month-old infants showed evidence of discrimination if they were habituated to a labial and then tested on a coronal sequence, but not vice versa. This finding has important implications for both phonological theories and infant speech perception research.
Additional information
Tsuji_etal_suppl_2014.xlsx
Zhou, W. (2015). Assessing birth language memory in young adoptees. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository
Tromp, J., Hagoort, P., & Meyer, A. S. (2015). Indirect request comprehension requires additional processing effort: A pupillometry study. Poster presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015), Paphos, Cyprus.
Tromp, J., Meyer, A. S., & Hagoort, P. (2015). Pupillometry reveals increased processing demands for indirect request comprehension. Poster presented at the 21st Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2015), Valletta, Malta.
Abstract
Fluctuations in pupil size have been shown to reflect variations in processing demands during language comprehension. Increases in pupil diameter have been observed as a consequence of syntactic anomalies (Schluroff 1982), increased syntactic complexity (Just & Carpenter 1993) and lexical ambiguity (Ben-Nun 1986). An issue that has not received attention is whether pupil size also varies due to pragmatic manipulations. In a pupillometry experiment, we investigated whether pupil diameter is sensitive to increased processing demands as a result of comprehending an indirect request versus a statement. During natural conversation, communication is often indirect. For example, in an appropriate context, "It's cold in here" is a request to shut the window, rather than a statement about room temperature (Holtgraves 1994). We tested 49 Dutch participants (mean age = 20.8). They were presented with 120 picture-sentence combinations that could either be interpreted as an indirect request (a picture of a window with the sentence "it's hot here") or as a statement (a picture of a window with the sentence "it's nice here"). The indirect requests were non-conventional, i.e. they did not contain directive propositional content and were not directly related to the underlying felicity conditions (Holtgraves 2002). In order to verify that the indirect requests were recognized, participants were asked to decide after each combination whether or not they heard a request. Based on the hypothesis that understanding this type of indirect utterance requires additional inferences to be made on the part of the listener (e.g., Holtgraves 2002; Searle 1975; Van Ackeren et al. 2012), we predicted a larger pupil diameter for indirect requests than statements. The data were analyzed using linear mixed-effects models in R, which allow for simultaneous inclusion of participants and items as random factors (Baayen, Davidson, & Bates 2008). The results revealed a larger mean pupil size and a larger peak pupil size for indirect requests as compared to statements. In line with previous studies on pupil size and language comprehension (e.g., Just & Carpenter 1993), this difference was observed within a 1.5 second window after critical word onset. We suggest that the increase in pupil size reflects additional on-line processing demands for the comprehension of non-conventional indirect requests as compared to statements. This supports the idea that comprehending this type of indirect request requires capacity-demanding inferencing on the part of the listener. In addition, this study demonstrates the usefulness of pupillometry as a tool for experimental research in pragmatics.
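Illustrative sketch (not the authors' analysis code): the abstract describes linear mixed-effects models fit with lme4 in R; the Python version below fits a comparable model with statsmodels on simulated data. All column names, values, and the restriction to participant-only random intercepts (the original analysis also included items as crossed random effects) are assumptions.

```python
# Minimal sketch of a mixed-effects model of pupil size by utterance type.
# Data are simulated; the original study used lme4 in R with crossed random
# effects for participants and items, which this simplified version omits.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_items = 30, 40
participant = np.repeat(np.arange(n_participants), n_items)
item = np.tile(np.arange(n_items), n_participants)
condition = np.where(item % 2 == 0, "request", "statement")
mean_pupil = (rng.normal(4.0, 0.3, size=participant.size)
              + 0.05 * (condition == "request"))   # small simulated request effect

df = pd.DataFrame({"participant": participant, "item": item,
                   "condition": condition, "mean_pupil": mean_pupil})

# Fixed effect of condition; random intercepts for participants.
model = smf.mixedlm("mean_pupil ~ condition", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```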
Tromp, J., Meyer, A. S., & Hagoort, P. (2015). Pupillometry reveals increased processing demands for indirect request comprehension. Poster presented at the 14th International Pragmatics Conference, Antwerp, Belgium.
Choi, J. (2014). Rediscovering a forgotten language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository
Acheson, D. J., Veenstra, A., Meyer, A. S., & Hagoort, P. (2014). EEG pattern classification of semantic and syntactic influences on subject-verb agreement in production. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam.
Abstract
Subject-verb agreement is one of the most common grammatical encoding operations in language production. In many languages, morphological inflection on verbs codes for the number of the head noun of a subject phrase (e.g., The key to the cabinets is rusty). Despite the relative ease with which subject-verb agreement is accomplished, people sometimes make agreement errors (e.g., The key to the cabinets are rusty). Such errors offer a window into the early stages of production planning. Agreement errors are influenced by both syntactic and semantic factors, and are more likely to occur when a sentence contains either conceptual or syntactic number mismatches. Little is known about the timecourse of these influences, however, and some controversy exists as to whether they are independent. The current study was designed to address these two issues using EEG. Semantic and syntactic factors influencing number mismatch were factorially manipulated in a forced-choice sentence completion paradigm. To avoid EEG artifact associated with speaking, participants (N=20) were presented with a noun-phrase, and pressed a button to indicate which version of the verb ‘to be’ (is/are) should continue the sentence. Semantic number was manipulated using preambles that were semantically integrated or unintegrated. Semantic integration refers to the semantic relationship between nouns in a noun-phrase, with integrated items promoting conceptual singularity. The syntactic manipulation was the number (singular/plural) of the local noun preceding the decision. This led to preambles such as “The pizza with the yummy topping(s)...” (integrated) vs. “The pizza with the tasty beverage(s)...” (unintegrated). Behavioral results showed effects of both Local Noun Number and Semantic Integration, with more errors and longer reaction times occurring in the mismatching conditions (i.e., plural local nouns; unintegrated subject phrases). Classic ERP analyses locked to the local noun (0-700 ms) and to the time preceding the response (-600 to 0 ms) showed no systematic differences between conditions. Despite this result, we assessed whether differences might emerge using multivariate pattern analysis (MVPA). Using the same epochs as above, support-vector machines with a radial basis function were trained on the single-trial level to classify the difference between Local Noun Number and Semantic Integration conditions across time and channels. Results revealed that both conditions could be reliably classified at the single-subject level, and that classification accuracy was strongest in the epoch preceding the response. Classification accuracy was at chance when a classifier trained to dissociate Local Noun Number was used to predict Semantic Integration (and vice versa), providing some evidence of the independence of the two effects. Significant inter-subject variability was present in the channels and time-points that were critical for classification, but earlier time-points were more often important for classifying Local Noun Number than Semantic Integration. One result of this variability is that classification performed across subjects was at chance, which may explain the failure to find standard ERP effects. This study thus provides an important first test of semantic and syntactic influences on subject-verb agreement with EEG, and demonstrates that where classic ERP analyses fail, MVPA can reliably distinguish differences at the neurophysiological level.
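For readers unfamiliar with the MVPA approach mentioned above, the sketch below shows, on synthetic data, how an RBF-kernel support-vector machine can be trained to classify single-trial EEG epochs by condition with scikit-learn; the epoch dimensions, labels, and cross-validation setup are assumptions, not the authors' pipeline.

```python
# Illustrative single-trial EEG pattern classification with an RBF-kernel SVM.
# The data are synthetic; shapes and preprocessing are assumptions only.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 150   # hypothetical epoch dimensions
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)          # e.g. singular vs. plural local noun

# Flatten channels x time into one feature vector per trial.
X_flat = X.reshape(n_trials, -1)

# Standardize features, then classify; accuracy is estimated with
# 5-fold cross-validation within a (single, simulated) subject.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X_flat, y, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```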
Lahey, M., & Ernestus, M. (2014). Schwa reduction in spontaneous infant-directed speech. Poster presented at the 14th Conference on Laboratory Phonology (LabPhon 14), Tokyo, Japan.
Gonzalez Gomez, N., Hayashi, A., Tsuji, S., Mazuka, R., & Nazzi, T. (2014). The role of the input on the development of the LC bias: A crosslinguistic comparison. Cognition, 132(3), 301-311. doi:10.1016/j.cognition.2014.04.004.
Abstract
Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (i.e. “bat”) than the opposite CL pattern (i.e. “tap”). This bias was initially interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese IDS and ADS revealed the existence of an overall LC bias, except for plosive sequences in ADS, which show a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to a linguistic input.
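To make the notion of an LC versus CL count concrete, here is a toy sketch that tallies the two patterns over a small, hypothetical word list using orthographic plosives only; the actual study worked on phonetically transcribed Japanese and French corpora.

```python
# Toy LC vs. CL count in the spirit of the corpus analysis described above.
# The word list and the (orthographic, plosive-only) consonant classes are
# illustrative only, not the study's transcription scheme.
LABIAL = set("pb")
CORONAL = set("td")

def lc_cl_label(word):
    """Return 'LC', 'CL', or None based on the word's first two plosives."""
    consonants = [ch for ch in word.lower() if ch in LABIAL | CORONAL]
    if len(consonants) < 2:
        return None
    first, second = consonants[0], consonants[1]
    if first in LABIAL and second in CORONAL:
        return "LC"
    if first in CORONAL and second in LABIAL:
        return "CL"
    return None

words = ["bat", "tap", "pat", "tub", "deep", "bead"]  # hypothetical mini-corpus
counts = {"LC": 0, "CL": 0}
for w in words:
    label = lc_cl_label(w)
    if label:
        counts[label] += 1
print(counts)  # e.g. {'LC': 3, 'CL': 3}
```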
Mazuka, R., Hasegawa, M., & Tsuji, S. (2014). Development of non-native vowel discrimination: Improvement without exposure. Developmental Psychobiology, 56(2), 192-209. doi:10.1002/dev.21193.
Abstract
The present study tested Japanese 4.5- and 10-month-old infants' ability to discriminate three German vowel pairs, none of which are contrastive in Japanese, using a visual habituation–dishabituation paradigm. Japanese adults' discrimination of the same pairs was also tested. The results revealed that Japanese 4.5-month-old infants discriminated the German /bu:k/-/by:k/ contrast, but they showed no evidence of discriminating the /bi:k/-/be:k/ or /bu:k/-/bo:k/ contrasts. Japanese 10-month-old infants, on the other hand, discriminated the German /bi:k/-/be:k/ contrast, while they showed no evidence of discriminating the /bu:k/-/by:k/ or /bu:k/-/bo:k/ contrasts. Japanese adults, in contrast, were highly accurate in their discrimination of all of the pairs. The results indicate that discrimination of non-native contrasts is not always easy even for young infants, and that their ability to discriminate non-native contrasts can improve with age even when they receive no exposure to a language in which the given contrast is phonemic.
Tsuji, S. (2014). The road to native listening. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository
Poellmann, K., McQueen, J. M., Baayen, R. H., & Mitterer, H. (2013). Adaptation to reductions: Challenges of regional variation. Talk presented at the Tagung experimentell arbeitender Psychologen [TeaP 2013]. Vienna, Austria. 2013-03-24 - 2013-03-27.
Brouwer, S., Mitterer, H., & Huettig, F. (2013). Discourse context and the recognition of reduced and canonical spoken words. Applied Psycholinguistics, 34, 519-539. doi:10.1017/S0142716411000853.
Abstract
In two eye-tracking experiments we examined whether wider discourse information helps the recognition of reduced pronunciations (e.g., 'puter') more than the recognition of canonical pronunciations of spoken words (e.g., 'computer'). Dutch participants listened to sentences from a casual speech corpus containing canonical and reduced target words. Target word recognition was assessed by measuring eye fixation proportions to four printed words on a visual display: the target, a "reduced form" competitor, a "canonical form" competitor and an unrelated distractor. Target sentences were presented in isolation or with a wider discourse context. Experiment 1 revealed that target recognition was facilitated by wider discourse information. Importantly, the recognition of reduced forms improved significantly when preceded by strongly rather than by weakly supportive discourse contexts. This was not the case for canonical forms: listeners' target word recognition was not dependent on the degree of supportive context. Experiment 2 showed that the differential context effects in Experiment 1 were not due to an additional amount of speaker information. Thus, these data suggest that in natural settings a strongly supportive discourse context is more important for the recognition of reduced forms than the recognition of canonical forms.
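As a rough illustration of how fixation proportions to the four printed words might be computed, the sketch below aggregates sample-level looks within an analysis window; the data-frame layout, interest-area labels, and the 200-1000 ms window are assumptions, not the authors' procedure.

```python
# Rough sketch: proportion of fixation samples on each of the four printed
# words within a hypothetical 200-1000 ms analysis window.
import pandas as pd

samples = pd.DataFrame({
    "trial":         [1, 1, 1, 1, 2, 2, 2, 2],
    "time_ms":       [250, 400, 600, 900, 300, 500, 700, 950],
    "interest_area": ["target", "target", "reduced_competitor", "target",
                      "canonical_competitor", "target", "distractor", "target"],
})

window = samples[(samples["time_ms"] >= 200) & (samples["time_ms"] <= 1000)]
proportions = (window.groupby("interest_area").size() / len(window)).reindex(
    ["target", "reduced_competitor", "canonical_competitor", "distractor"],
    fill_value=0.0,
)
print(proportions)
```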
Kooijman, V., Junge, C., Johnson, E. K., Hagoort, P., & Cutler, A. (2013). Predictive brain signals of linguistic development. Frontiers in Psychology, 4: 25. doi:10.3389/fpsyg.2013.00025.
Abstract
The ability to extract word forms from continuous speech is a prerequisite for constructing a vocabulary and emerges in the first year of life. Electrophysiological (ERP) studies of speech segmentation by 9- to 12-month-old listeners in several languages have found a left-localized negativity linked to word onset as a marker of word detection. We report an ERP study showing significant evidence of speech segmentation in Dutch-learning 7-month-olds. In contrast to the left-localized negative effect reported with older infants, the observed overall mean effect had a positive polarity. Inspection of individual results revealed two participant sub-groups: a majority showing a positive-going response, and a minority showing the left negativity observed in older age groups. We retested participants at age three, on vocabulary comprehension and word and sentence production. On every test, children who at 7 months had shown the negativity associated with segmentation of words from speech outperformed those who had produced positive-going brain responses to the same input. The earlier that infants show the left-localized brain responses typically indicating detection of words in speech, the better their early childhood language skills.
Cutler, A., & Bruggeman, L. (2013). Vocabulary structure and spoken-word recognition: Evidence from French reveals the source of embedding asymmetry. In Proceedings of INTERSPEECH: 14th Annual Conference of the International Speech Communication Association (pp. 2812-2816).
Abstract
Vocabularies contain hundreds of thousands of words built from only a handful of phonemes, so that inevitably longer words tend to contain shorter ones. In many languages (but not all) such embedded words occur more often word-initially than word-finally, and this asymmetry, if present, has far-reaching consequences for spoken-word recognition. Prior research had ascribed the asymmetry to suffixing or to effects of stress (in particular, final syllables containing the vowel schwa). Analyses of the standard French vocabulary here reveal an effect of suffixing, as predicted by this account, and further analyses of an artificial variety of French reveal that extensive final schwa has an independent and additive effect in promoting the embedding asymmetry.
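To illustrate the kind of vocabulary analysis summarized above, the toy sketch below counts word-initial versus word-final embeddings in a small hypothetical word list; the real analyses used phonemic forms of the full French vocabulary, not orthography.

```python
# Toy embedding-asymmetry count: how often do shorter words appear at the
# beginning vs. the end of longer words? The word list is hypothetical and
# orthographic, standing in for a phonemically transcribed lexicon.
words = ["can", "candle", "scan", "in", "inlet", "cabin", "dle"]
vocabulary = set(words)

initial = final = 0
for long_word in vocabulary:
    for short_word in vocabulary:
        if short_word == long_word or len(short_word) >= len(long_word):
            continue
        if long_word.startswith(short_word):
            initial += 1
        if long_word.endswith(short_word):
            final += 1

print(f"word-initial embeddings: {initial}, word-final embeddings: {final}")
```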
Van der Zande, P., Jesse, A., & Cutler, A. (2013). Lexically guided retuning of visual phonetic categories. Journal of the Acoustical Society of America, 134, 562-571. doi:10.1121/1.4807814.
Abstract
Listeners retune the boundaries between phonetic categories to adjust to individual speakers' productions. Lexical information, for example, indicates what an unusual sound is supposed to be, and boundary retuning then enables the speaker's sound to be included in the appropriate auditory phonetic category. In this study, it was investigated whether lexical knowledge that is known to guide the retuning of auditory phonetic categories, can also retune visual phonetic categories. In Experiment 1, exposure to a visual idiosyncrasy in ambiguous audiovisually presented target words in a lexical decision task indeed resulted in retuning of the visual category boundary based on the disambiguating lexical context. In Experiment 2 it was tested whether lexical information retunes visual categories directly, or indirectly through the generalization from retuned auditory phonetic categories. Here, participants were exposed to auditory-only versions of the same ambiguous target words as in Experiment 1. Auditory phonetic categories were retuned by lexical knowledge, but no shifts were observed for the visual phonetic categories. Lexical knowledge can therefore guide retuning of visual phonetic categories, but lexically guided retuning of auditory phonetic categories is not generalized to visual categories. Rather, listeners adjust auditory and visual phonetic categories to talker idiosyncrasies separately.