Bunce, J., Soderstrom, M., Bergelson, E., Rosemberg, C., Stein, A., Alam, F., Migdalek, M. J., & Casillas, M. (2024). A cross-linguistic examination of young children’s everyday language experiences. Journal of Child Language. Advance online publication. doi:10.1017/S030500092400028X.
Abstract
We present an exploratory cross-linguistic analysis of the quantity of target-child-directed speech and adult-directed speech in North American English (US & Canadian), United Kingdom English, Argentinian Spanish, Tseltal (Tenejapa, Mayan), and Yélî Dnye (Rossel Island, Papuan), using annotations from 69 children aged 2–36 months. Using a novel methodological approach, our cross-linguistic and cross-cultural findings support prior work suggesting that target-child-directed speech quantities are stable across early development, while adult-directed speech decreases. A preponderance of speech from women was found to a similar degree across groups, with less target-child-directed speech from men and children in the North American samples than elsewhere. Consistently across groups, children also heard more adult-directed than target-child-directed speech. Finally, the numbers of talkers present in any given clip strongly impacted children’s moment-to-moment input quantities. These findings illustrate how the structure of home life impacts patterns of early language exposure across diverse developmental contexts.
Additional information: S030500092400028Xsup001.pdf
Casillas, M., Foushee, R., Méndez Girón, J., Polian, G., & Brown, P. (2024). Little evidence for a noun bias in Tseltal spontaneous speech. First Language, 44(6), 600-628. doi:10.1177/01427237231216571.
Abstract
This study examines whether children acquiring Tseltal (Mayan) demonstrate a noun bias – an overrepresentation of nouns in their early vocabularies. Nouns, specifically concrete and animate nouns, are argued to universally predominate in children’s early vocabularies because their referents are naturally available as bounded concepts to which linguistic labels can be mapped. This early advantage for noun learning has been documented using multiple methods and across a diverse collection of language populations. However, past evidence bearing on a noun bias in Tseltal learners has been mixed. Tseltal grammatical features and child–caregiver interactional patterns dampen the salience of nouns and heighten the salience of verbs, leading to the prediction of a diminished noun bias and perhaps even an early predominance of verbs. We here analyze the use of noun and verb stems in children’s spontaneous speech from egocentric daylong recordings of 29 Tseltal learners between 0;9 and 4;4. We find weak to no evidence for a noun bias using two separate analytical approaches on the same data; one analysis yields a preliminary suggestion of a flipped outcome (i.e. a verb bias). We discuss the implications of these findings for broader theories of learning bias in early lexical development.
Lutzenberger, H., Casillas, M., Fikkert, P., Crasborn, O., & De Vos, C. (2024). More than looks: Exploring methods to test phonological discrimination in the sign language Kata Kolok. Language Learning and Development, 20(4), 297-323. doi:10.1080/15475441.2023.2277472.
Abstract
The lack of diversity in the language sciences has increasingly been criticized as it holds the potential for producing flawed theories. Research on (i) geographically diverse language communities and (ii) sign languages is necessary to corroborate, sharpen, and extend existing theories. This study contributes a case study of adapting a well-established paradigm to study the acquisition of sign phonology in Kata Kolok, a sign language of rural Bali, Indonesia. We conducted an experiment modeled after the familiarization paradigm with child signers of Kata Kolok. Traditional analyses of looking time did not yield significant differences between signing and non-signing children. Yet, additional behavioral analyses (attention, eye contact, hand behavior) suggest that children who are signers and those who are non-signers, as well as those who are hearing and those who are deaf, interact differently with the task. This study suggests limitations of the paradigm due to the ecology of sign languages and the sociocultural characteristics of the sample, calling for a mixed-methods approach. Ultimately, this paper aims to elucidate the diversity of adaptations necessary for experimental design, procedure, and analysis, and to offer a critical reflection on the contribution of similar efforts and the diversification of the field.
Additional information: materials, source code, export queries, and scripts
Casillas, M., Bergelson, E., Warlaumont, A. S., Cristia, A., Soderstrom, M., VanDam, M., & Sloetjes, H. (2017). A New Workflow for Semi-automatized Annotations: Tests with Long-Form Naturalistic Recordings of Children’s Language Environments. In Proceedings of Interspeech 2017 (pp. 2098-2102). doi:10.21437/Interspeech.2017-1418.
Abstract
Interoperable annotation formats are fundamental to the utility, expansion, and sustainability of collective data repositories. In language development research, shared annotation schemes have been critical to facilitating the transition from raw acoustic data to searchable, structured corpora. Current schemes typically require comprehensive and manual annotation of utterance boundaries and orthographic speech content, with an additional, optional range of tags of interest. These schemes have been enormously successful for datasets on the scale of dozens of recording hours but are untenable for long-format recording corpora, which routinely contain hundreds to thousands of audio hours. Long-format corpora would benefit greatly from (semi-)automated analyses, both on the earliest steps of annotation—voice activity detection, utterance segmentation, and speaker diarization—and on later steps—e.g., classification-based codes such as child-vs-adult-directed speech, and speech recognition to produce phonetic/orthographic representations. We present an annotation workflow specifically designed for long-format corpora which can be tailored by individual researchers and which interfaces with the current dominant scheme for short-format recordings. The workflow allows semi-automated annotation and analyses at higher linguistic levels. We give one example of how the workflow has been successfully implemented in a large cross-database project.
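To make the kind of clip-based, semi-automated approach described above more concrete, here is a minimal, hypothetical Python sketch; the region labels, clip length, and sampling scheme are assumptions for illustration, not the published workflow.

```python
# Hypothetical illustration: given automatically detected speech regions from a
# daylong recording, draw a handful of fixed-length clips for manual annotation.
# All names and numbers are invented for the example.
import random
from dataclasses import dataclass

@dataclass
class Region:
    start_s: float   # onset of detected voice activity, in seconds
    end_s: float     # offset, in seconds
    speaker: str     # diarization label, e.g. "FEM1" or "CHI"

def sample_clips(regions, n_clips=15, clip_len_s=120, day_len_s=16 * 3600, seed=1):
    """Sample non-overlapping clips and list the detected regions inside each."""
    rng = random.Random(seed)
    grid = list(range(0, day_len_s - clip_len_s + 1, clip_len_s))
    starts = sorted(rng.sample(grid, n_clips))
    clips = []
    for s in starts:
        e = s + clip_len_s
        inside = [r for r in regions if r.start_s < e and r.end_s > s]
        clips.append({"clip_start_s": s, "clip_end_s": e, "regions": inside})
    return clips

# Two fake detected regions; sample three clips to hand-annotate.
detected = [Region(3600.0, 3602.5, "FEM1"), Region(3601.0, 3601.8, "CHI")]
for clip in sample_clips(detected, n_clips=3):
    print(clip["clip_start_s"], clip["clip_end_s"], len(clip["regions"]))
```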
Casillas, M., & Frank, M. C. (2017). The development of children's ability to track and predict turn structure in conversation. Journal of Memory and Language, 92, 234-253. doi:10.1016/j.jml.2016.06.013.
Abstract
Children begin developing turn-taking skills in infancy but take several years to fluidly integrate their growing knowledge of language into their turn-taking behavior. In two eye-tracking experiments, we measured children’s anticipatory gaze to upcoming responders while controlling linguistic cues to turn structure. In Experiment 1, we showed English and non-English conversations to English-speaking adults and children. In Experiment 2, we phonetically controlled lexicosyntactic and prosodic cues in English-only speech. Children spontaneously made anticipatory gaze switches by age two and continued improving through age six. In both experiments, children and adults made more anticipatory switches after hearing questions. Consistent with prior findings on adult turn prediction, prosodic information alone did not increase children’s anticipatory gaze shifts. But, unlike prior work with adults, lexical information alone was not sufficient either—children’s performance was best overall with lexicosyntax and prosody together. Our findings support an account in which turn tracking and turn prediction emerge in infancy and then gradually become integrated with children’s online linguistic processing.
Casillas, M., Amatuni, A., Seidl, A., Soderstrom, M., Warlaumont, A., & Bergelson, E. (2017). What do Babies hear? Analyses of Child- and Adult-Directed Speech. In Proceedings of Interspeech 2017 (pp. 2093-2097). doi:10.21437/Interspeech.2017-1409.
Abstract
Child-directed speech is argued to facilitate language development, and is found cross-linguistically and cross-culturally to varying degrees. However, previous research has generally focused on short samples of child-caregiver interaction, often in the lab or with experimenters present. We test the generalizability of this phenomenon with an initial descriptive analysis of the speech heard by young children in a large, unique collection of naturalistic, daylong home recordings. Trained annotators coded automatically-detected adult speech 'utterances' from 61 homes across 4 North American cities, gathered from children (age 2-24 months) wearing audio recorders during a typical day. Coders marked the speaker gender (male/female) and intended addressee (child/adult), yielding 10,886 addressee and gender tags from 2,523 minutes of audio (cf. HB-CHAAC Interspeech ComParE challenge; Schuller et al., in press). Automated speaker-diarization (LENA) incorrectly gender-tagged 30% of male adult utterances, compared to manually-coded consensus. Furthermore, we find effects of SES and gender on child-directed and overall speech, an increase in child-directed speech with child age, and interactions of speaker gender, child gender, and child age: female caretakers increased their child-directed speech more with age than male caretakers did, but only for male infants. Implications for language acquisition and existing classification algorithms are discussed.
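As a rough illustration of the tallying behind such addressee and speaker-gender analyses, here is a minimal Python sketch; the tag labels and example data are invented, not the study's coding scheme or results.

```python
# Illustrative only: count (speaker gender, addressee) tags and report the share
# of child-directed speech per speaker gender. Data are made up for the example.
from collections import Counter

# Each coded utterance is a (speaker_gender, addressee) pair.
tags = [
    ("female", "child"), ("female", "adult"), ("male", "adult"),
    ("female", "child"), ("male", "child"), ("female", "adult"),
]

counts = Counter(tags)
for gender in ("female", "male"):
    child_directed = counts[(gender, "child")]
    total = child_directed + counts[(gender, "adult")]
    share = child_directed / total if total else float("nan")
    print(f"{gender}: {child_directed} of {total} utterances child-directed ({share:.0%})")
```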
Schuller, B., Steidl, S., Batliner, A., Bergelson, E., Krajewski, J., Janott, C., Amatuni, A., Casillas, M., Seidl, A., Soderstrom, M., Warlaumont, A. S., Hidalgo, G., Schnieder, S., Heiser, C., Hohenhorst, W., Herzog, M., Schmitt, M., Qian, K., Zhang, Y., Trigeorgis, G., Tzirakis, P., & Zafeiriou, S. (2017). The INTERSPEECH 2017 computational paralinguistics challenge: Addressee, cold & snoring. In Proceedings of Interspeech 2017 (pp. 3442-3446). doi:10.21437/Interspeech.2017-43.
Abstract
The INTERSPEECH 2017 Computational Paralinguistics Challenge addresses three different problems for the first time in a research competition under well-defined conditions: In the Addressee sub-challenge, it has to be determined whether speech produced by an adult is directed towards another adult or towards a child; in the Cold sub-challenge, speech under cold has to be told apart from ‘healthy’ speech; and in the Snoring sub-challenge, four different types of snoring have to be classified. In this paper, we describe these sub-challenges, their conditions, and the baseline feature extraction and classifiers, which include data-learnt feature representations by end-to-end learning with convolutional and recurrent neural networks, and bag-of-audio-words for the first time in the challenge series.
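The bag-of-audio-words representation mentioned among the baselines can be sketched roughly as below; the feature dimensions, cluster count, and synthetic data are illustrative assumptions, not the challenge's actual configuration.

```python
# Minimal sketch of the bag-of-audio-words idea: cluster frame-level acoustic
# features into a codebook, then represent each recording as a histogram of its
# frames' cluster assignments. Dimensions and random data are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend frame-level features (e.g. MFCC-like vectors): 500 frames x 13 dims.
train_frames = rng.normal(size=(500, 13))

codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(train_frames)

def bag_of_audio_words(frames, codebook):
    """Histogram of codeword assignments, normalised to sum to 1."""
    assignments = codebook.predict(frames)
    hist = np.bincount(assignments, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# One 'recording' of 120 frames becomes a fixed-length vector for any classifier.
recording = rng.normal(size=(120, 13))
print(bag_of_audio_words(recording, codebook))
```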