Publications

  • Jesse, A., Reinisch, E., & Nygaard, L. C. (2010). Learning of adjectival word meaning through tone of voice [Abstract]. Journal of the Acoustical Society of America, 128, 2475.

    Abstract

    Speakers express word meaning through systematic but non-canonical acoustic variation of tone of voice (ToV), i.e., variation of speaking rate, pitch, vocal effort, or loudness. Words are, for example, pronounced at a higher pitch when referring to small than to big referents. In the present study, we examined whether listeners can use ToV to learn the meaning of novel adjectives (e.g., “blicket”). During training, participants heard sentences such as “Can you find the blicket one?” spoken with ToV representing hot-cold, strong-weak, and big-small. Participants’ eye movements to two simultaneously shown objects with properties representing the relevant two endpoints (e.g., an elephant and an ant for big-small) were monitored. Assignment of novel adjectives to endpoints was counterbalanced across participants. During test, participants heard the sentences spoken with a neutral ToV, while seeing old or novel picture pairs varying along the same dimensions (e.g., a truck and a car for big-small). Participants had to click on the adjective’s referent. As evident from eye movements, participants did not infer the intended meaning during first exposure, but learned the meaning with the help of ToV during training. At test, listeners applied this knowledge to old and novel items even in the absence of informative ToV.
  • Junge, C., Hagoort, P., Kooijman, V., & Cutler, A. (2010). Brain potentials for word segmentation at seven months predict later language development. In K. Franich, K. M. Iserman, & L. L. Keil (Eds.), Proceedings of the 34th Annual Boston University Conference on Language Development. Volume 1 (pp. 209-220). Somerville, MA: Cascadilla Press.
  • Junge, C., Cutler, A., & Hagoort, P. (2010). Ability to segment words from speech as a precursor of later language development: Insights from electrophysiological responses in the infant brain. In M. Burgess, J. Davey, C. Don, & T. McMinn (Eds.), Proceedings of 20th International Congress on Acoustics, ICA 2010. Incorporating Proceedings of the 2010 annual conference of the Australian Acoustical Society (pp. 3727-3732). Australian Acoustical Society, NSW Division.
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2021). Prediction in bilingual children: The missing piece of the puzzle. In E. Kaan, & T. Grüter (Eds.), Prediction in Second Language Processing and Learning (pp. 116-137). Amsterdam: Benjamins.

    Abstract

    A wealth of studies has shown that more proficient monolingual speakers are better at predicting upcoming information during language comprehension. Similarly, prediction skills of adult second language (L2) speakers in their L2 have also been argued to be modulated by their L2 proficiency. How exactly language proficiency and prediction are linked, however, is yet to be systematically investigated. One group of language users which has the potential to provide invaluable insights into this link is bilingual children. In this paper, we compare bilingual children’s prediction skills with those of monolingual children and adult L2 speakers, and show how investigating bilingual children’s prediction skills may contribute to our understanding of how predictive processing works.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Ozyurek, A. (2021). Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 672-678). Vienna: Cognitive Science Society.

    Abstract

    There is a strong relation between children’s exposure to spatial terms and their later memory accuracy. In the current study, we tested whether the production of spatial terms by children themselves predicts memory accuracy and whether and how the language modality of these encodings modulates memory accuracy differently. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and were later tested for their memory accuracy of these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions in sign, speech, or speech-plus-gesture did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding.
  • Kempen, G., Anbeek, G., Desain, P., Konst, L., & De Smedt, K. (1987). Author environments: Fifth generation text processors. In Commission of the European Communities. Directorate-General for Telecommunications, Information Industries, and Innovation (Ed.), Esprit'86: Results and achievements (pp. 365-372). Amsterdam: Elsevier Science Publishers.
  • Kempen, G. (1999). Visual Grammar: Multimedia for grammar and spelling instruction in primary education. In K. Cameron (Ed.), CALL: Media, design, and applications (pp. 223-238). Lisse: Swets & Zeitlinger.
  • Kemps-Snijders, M., Koller, T., Sloetjes, H., & Verweij, H. (2010). LAT bridge: Bridging tools for annotation and exploration of rich linguistic data. In N. Calzolari, B. Maegaard, J. Mariani, J. Odijk, K. Choukri, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10) (pp. 2648-2651). European Language Resources Association (ELRA).

    Abstract

    We present a software module, the LAT Bridge, which enables bidirectional communication between the annotation and exploration tools developed at the Max Planck Institute for Psycholinguistics as part of our Language Archiving Technology (LAT) tool suite. These existing annotation and exploration tools enable the annotation, enrichment, exploration and archive management of linguistic resources. The user community has expressed the desire to use different combinations of LAT tools in conjunction with each other. The LAT Bridge is designed to cater for a number of basic data interaction scenarios between the LAT annotation and exploration tools. These interaction scenarios (e.g. bootstrapping a wordlist, searching for annotation examples or lexical entries) have been identified in collaboration with researchers at our institute. We had to take into account that the LAT tools for annotation and exploration represent a heterogeneous application scenario with desktop-installed and web-based tools. Additionally, the LAT Bridge has to work in situations where the Internet is not available or only in an unreliable manner (i.e. with a slow connection or with frequent interruptions). As a result, the LAT Bridge’s architecture supports both online and offline communication between the LAT annotation and exploration tools.
  • Khetarpal, N., Majid, A., Malt, B. C., Sloman, S., & Regier, T. (2010). Similarity judgments reflect both language and cross-language tendencies: Evidence from two semantic domains. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 358-363). Austin, TX: Cognitive Science Society.

    Abstract

    Many theories hold that semantic variation in the world’s languages can be explained in terms of a universal conceptual space that is partitioned differently by different languages. Recent work has supported this view in the semantic domain of containers (Malt et al., 1999), and assumed it in the domain of spatial relations (Khetarpal et al., 2009), based in both cases on similarity judgments derived from pile-sorting of stimuli. Here, we reanalyze data from these two studies and find a more complex picture than these earlier studies suggested. In both cases we find that sorting is similar across speakers of different languages (in line with the earlier studies), but nonetheless reflects the sorter’s native language (in contrast with the earlier studies). We conclude that there are cross-culturally shared conceptual tendencies that can be revealed by pile-sorting, but that these tendencies may be modulated to some extent by language. We discuss the implications of these findings for accounts of semantic variation.
  • Kita, S., Ozyurek, A., Allen, S., & Ishizuka, T. (2010). Early links between iconic gestures and sound symbolic words: Evidence for multimodal protolanguage. In A. D. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Proceedings of the 8th International conference on the Evolution of Language (EVOLANG 8) (pp. 429-430). Singapore: World Scientific.
  • Kita, S., & Ozyurek, A. (1999). Semantische Koordination zwischen Sprache und spontanen ikonischen Gesten: Eine sprachvergleichende Untersuchung. In Max-Planck-Gesellschaft (Ed.), Jahrbuch 1998 (pp. 388-391). Göttingen: Vandenhoeck & Ruprecht.
  • Klein, W. (2021). Das „Heidelberger Forschungsprojekt Pidgin-Deutsch“ und die Folgen. In B. Ahrenholz, & M. Rost-Roth (Eds.), Ein Blick zurück nach vorn: Frühe deutsche Forschung zu Zweitspracherwerb, Migration, Mehrsprachigkeit und zweitsprachbezogener Sprachdidaktik sowie ihre Bedeutung heute (pp. 50-95). Berlin: De Gruyter.
  • Klein, W., & Geyken, A. (2010). Das Digitale Wörterbuch der Deutschen Sprache (DWDS). In U. Heid, S. Schierholz, W. Schweickard, H. E. Wiegand, R. H. Gouws, & W. Wolski (Eds.), Lexicographica: International annual for lexicography (pp. 79-96). Berlin, New York: De Gruyter.

    Abstract

    No area in the study of human languages has a longer history and a higher practical significance than lexicography. The advent of the computer has dramatically changed this discipline in ways which go far beyond the digitisation of materials in combination with efficient search tools, or the transfer of an existing dictionary onto the computer. These developments allow the stepwise elaboration of what is called here Digital Lexical Systems, i.e., computerized systems in which the underlying data - in the form of an extendable corpus - and descriptions of lexical properties on various levels can be efficiently combined. This paper discusses the range of these possibilities and describes the present form of the German „Digital Lexical System of the Academy“, a project of the Berlin-Brandenburg Academy of Sciences (www.dwds.de).
  • Klein, W. (2010). Der mühselige Weg zur Erforschung des Schönen. In S. Walther, G. Staupe, & T. Macho (Eds.), Was ist schön? Begleitbuch zur Ausstellung (pp. 124-131). Göttingen: Wallstein.
  • Klein, W. (1999). Die Lehren des Zweitspracherwerbs. In N. Dittmar, & A. Ramat (Eds.), Grammatik und Diskurs: Studien zum Erwerb des Deutschen und des Italienischen (pp. 279-290). Tübingen: Stauffenberg.
  • Klein, W. (1987). L'espressione della temporalita in una varieta elementare di L2. In A. Ramat (Ed.), L'apprendimento spontaneo di una seconda lingua (pp. 131-146). Bologna: Molino.
  • Klein, W. (2010). Typen und Konzepte des Spracherwerbs. In L. Hoffmann (Ed.), Sprachwissenschaft, ein Reader (pp. 902-924). Berlin: De Gruyter Studium.
  • Klein, W. (2010). Über die zwänglerische Befolgung sprachlicher Normen. In P. Eisenberg (Ed.), Der Jugend zuliebe: Literarische Texte, für die Schule verändert (pp. 77-87). Göttingen: Wallstein.
  • Koutamanis, E., Kootstra, G. J., Dijkstra, T., & Unsworth, S. (2021). Lexical priming as evidence for language-nonselective access in the simultaneous bilingual child's lexicon. In D. Dionne, & L.-A. Vidal Covas (Eds.), BUCLD 45: Proceedings of the 45th annual Boston University Conference on Language Development (pp. 413-430). Somerville, MA: Cascadilla Press.
  • Kung, C., Chwilla, D. J., Gussenhoven, C., Bögels, S., & Schriefers, H. (2010). What did you say just now, bitterness or wife? An ERP study on the interaction between tone, intonation and context in Cantonese Chinese. In Proceedings of Speech Prosody 2010 (pp. 1-4).

    Abstract

    Previous studies on Cantonese Chinese showed that rising question intonation contours on low-toned words lead to frequent misperceptions of the tones. Here we explored the processing consequences of this interaction between tone and intonation by comparing the processing and identification of monosyllabic critical words at the end of questions and statements, using a tone identification task, and ERPs as an online measure of speech comprehension. Experiment 1 yielded higher error rates for the identification of low tones at the end of questions and a larger N400-P600 pattern, reflecting processing difficulty and reanalysis, compared to other conditions. In Experiment 2, we investigated the effect of immediate lexical context on the tone by intonation interaction. Increasing contextual constraints led to a reduction in errors and the disappearance of the P600 effect. These results indicate that there is an immediate interaction between tone, intonation, and context in online speech comprehension. The difference in performance and activation patterns between the two experiments highlights the significance of context in understanding a tone language like Cantonese Chinese.
  • Kupisch, T., Pereira Soares, S. M., Puig-Mayenco, E., & Rothman, J. (2021). Multilingualism and Chomsky's Generative Grammar. In N. Allott (Ed.), A companion to Chomsky (pp. 232-242). doi:10.1002/9781119598732.ch15.

    Abstract

    Just as Einstein's general theory of relativity is concerned with explaining the basics of an observable experience – i.e., gravity – most people take for granted that Chomsky's theory of generative grammar (GG) is concerned with the basic nature of language. This chapter highlights a mere subset of central constructs in GG, showing how they have featured prominently and thus shaped formal linguistic studies in multilingualism. Because multilingualism includes a wide range of nonmonolingual populations, the constructs are divided across child bilingualism and adult third language acquisition for greater coverage. In the case of the former, the chapter examines how poverty of the stimulus has been investigated. Using the nascent field of L3/Ln acquisition as the backdrop, it discusses how the GG constructs of I-language versus E-language sit at the core of debates regarding the very notion of what linguistic transfer and mental representations should be taken to be.
  • Kuzla, C., Ernestus, M., & Mitterer, H. (2010). Compensation for assimilatory devoicing and prosodic structure in German fricative perception. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory Phonology 10 (pp. 731-757). Berlin: De Gruyter.
  • Lai, J., & Poletiek, F. H. (2010). The impact of starting small on the learnability of recursion. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (CogSci 2010) (pp. 1387-1392). Austin, TX, USA: Cognitive Science Society.
  • Levelt, W. J. M. (1999). Language. In G. Adelman, & B. H. Smith (Eds.), Elsevier's encyclopedia of neuroscience (2nd enlarged and revised edition) (pp. 1005-1008). Amsterdam: Elsevier Science.
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (pp. 55-55).
  • Levelt, W. J. M. (1987). Hochleistung in Millisekunden - Sprechen und Sprache verstehen. In Jahrbuch der Max-Planck-Gesellschaft (pp. 61-77). Göttingen: Vandenhoeck & Ruprecht.
  • Levelt, W. J. M. (1999). Producing spoken language: A blueprint of the speaker. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 83-122). Oxford University Press.
  • Levelt, W. J. M., & Plomp, R. (1968). The appreciation of musical intervals. In J. M. M. Aler (Ed.), Proceedings of the fifth International Congress of Aesthetics, Amsterdam 1964 (pp. 901-904). The Hague: Mouton.
  • Levelt, W. J. M., & Flores d'Arcais, G. B. (1987). Snelheid en uniciteit bij lexicale toegang. In H. Crombag, L. Van der Kamp, & C. Vlek (Eds.), De psychologie voorbij: Ontwikkelingen rond model, metriek en methode in de gedragswetenschappen (pp. 55-68). Lisse: Swets & Zeitlinger.
  • Levelt, W. J. M., & Schriefers, H. (1987). Stages of lexical access. In G. A. Kempen (Ed.), Natural language generation: new results in artificial intelligence, psychology and linguistics (pp. 395-404). Dordrecht: Nijhoff.
  • Levinson, S. C. (1999). Deixis. In K. Brown, & J. Miller (Eds.), Concise encyclopedia of grammatical categories (pp. 132-136). Oxford: Elsevier.
  • Levinson, S. C. (1999). Deixis and Demonstratives. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 29-40). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2573810.

    Abstract

    Demonstratives are key items in understanding how a language constructs and interprets spatial relationships. They are also multi-functional, with applications to non-spatial deictic fields such as time, perception, person and discourse, and uses in anaphora and affect marking. This item consists of an overview of theoretical distinctions in demonstrative systems, followed by a set of practical queries and elicitation suggestions for demonstratives in “table top” space, wider spatial fields, and naturalistic data.
  • Levinson, S. C. (1999). General Questions About Topological Relations in Adpositions and Cases. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 57-68). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2615829.

    Abstract

    The world’s languages encode a diverse range of topological relations. However, cross-linguistic investigation suggests that the relations IN, AT and ON are especially fundamental to the grammaticised expression of space. The purpose of this questionnaire is to collect information about adpositions, case markers, and spatial nominals that are involved in the expression of core IN/AT/ON meanings. The task explores the more general parts of a language’s topological system, with a view to testing certain hypotheses about the packaging of spatial concepts. The questionnaire consists of target translation sentences that focus on a number of dimensions including animacy, caused location and motion.
  • Levinson, S. C. (2010). Generalized conversational implicature. In L. Cummings (Ed.), The pragmatics encyclopedia (pp. 201-203). London: Routledge.
  • Levinson, S. C. (1999). Hypotheses concerning basic locative constructions and the verbal elements within them. In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 55-56). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002711.

    Abstract

    Languages differ widely in terms of how they encode the fundamental concepts of location and position. For some languages, verbs have an important role to play in describing situations (e.g., whether a bottle is standing or lying on the table); for others, verbs are not used in describing location at all. This item outlines certain hypotheses concerning four “types” of languages: those that have verbless basic locatives; those that use a single verb; those that have several verbs available to express location; and those that use positional verbs. The document was originally published as an appendix to the 'Picture series for positional verbs' (https://doi.org/10.17617/2.2573831).
  • Levinson, S. C. (1987). Minimization and conversational inference. In M. Bertuccelli Papi, & J. Verschueren (Eds.), The pragmatic perspective: Selected papers from the 1985 International Pragmatics Conference (pp. 61-129). Benjamins.
  • Levinson, S. C. (1999). Language and culture. In R. Wilson, & F. Keil (Eds.), MIT encyclopedia of the cognitive sciences (pp. 438-440). Cambridge: MIT press.
  • Levshina, N. (2021). Conditional inference trees and random forests. In M. Paquot, & T. Gries (Eds.), Practical Handbook of Corpus Linguistics (pp. 611-643). New York: Springer.
  • Liszkowski, U. (2010). Before L1: A differentiated perspective on infant gestures. In M. Gullberg, & K. De Bot (Eds.), Gestures in language development (pp. 35-51). Amsterdam: Benjamins.
  • Majid, A. (2010). Words for parts of the body. In B. C. Malt, & P. Wolff (Eds.), Words and the Mind: How words capture human experience (pp. 58-71). New York: Oxford University Press.
  • Mak, M., & Willems, R. M. (2021). Mental simulation during literary reading. In D. Kuiken, & A. M. Jacobs (Eds.), Handbook of empirical literary studies (pp. 63-84). Berlin: De Gruyter.

    Abstract

    Readers experience a number of sensations during reading. They do not – or do not only – process words and sentences in a detached, abstract manner. Instead they “perceive” what they read about. They see descriptions of scenery, feel what characters feel, and hear the sounds in a story. These sensations tend to be grouped under the umbrella terms “mental simulation” and “mental imagery.” This chapter provides an overview of empirical research on the role of mental simulation during literary reading. Our chapter also discusses what mental simulation is and how it relates to mental imagery. Moreover, it explores how mental simulation plays a role in leading models of literary reading and investigates under what circumstances mental simulation occurs during literature reading. Finally, the effect of mental simulation on the literary reader’s experience is discussed, and suggestions and unresolved issues in this field are formulated.
  • Mamus, E., Speed, L. J., Ozyurek, A., & Majid, A. (2021). Sensory modality of input influences encoding of motion events in speech but not co-speech gestures. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 376-382). Vienna: Cognitive Science Society.

    Abstract

    Visual and auditory channels have different affordances and this is mirrored in what information is available for linguistic encoding. The visual channel has high spatial acuity, whereas the auditory channel has better temporal acuity. These differences may lead to different conceptualizations of events and affect multimodal language production. Previous studies of motion events typically present visual input to elicit speech and gesture. The present study compared events presented as audio-only, visual-only, or multimodal (visual+audio) input and assessed speech and co-speech gesture for path and manner of motion in Turkish. Speakers with audio-only input mentioned path more and manner less in verbal descriptions, compared to speakers who had visual input. There was no difference in the type or frequency of gestures across conditions, and gestures were dominated by path-only gestures. This suggests that input modality influences speakers’ encoding of path and manner of motion events in speech, but not in co-speech gestures.
  • Matic, D. (2010). Discourse and syntax in linguistic change: Decline of postverbal topical subjects in Serbo-Croat. In G. Ferraresi, & R. Lühr (Eds.), Diachronic studies on information structure: Language acquisition and change (pp. 117-142). Berlin: Mouton de Gruyter.
  • Mazzone, M., & Campisi, E. (2010). Embodiment, metafore, comunicazione. In G. P. Storari, & E. Gola (Eds.), Forme e formalizzazioni. Atti del XVI congresso nazionale. Cagliari: CUEC.
  • Mazzone, M., & Campisi, E. (2010). Are there communicative intentions? In L. A. Pérez Miranda, & A. I. Madariaga (Eds.), Advances in cognitive science. IWCogSc-10. Proceedings of the ILCLI International Workshop on Cognitive Science (pp. 307-322). Bilbao, Spain: The University of the Basque Country.

    Abstract

    Grice in pragmatics and Levelt in psycholinguistics have proposed models of human communication where the starting point of communicative action is an individual intention. This assumption, though, has to face serious objections with regard to the alleged existence of explicit representations of the communicative goals to be pursued. Here evidence is surveyed which shows that in fact speaking may ordinarily be a quite automatic activity prompted by contextual cues and driven by behavioural schemata abstracted away from social regularities. On the one hand, this means that there could exist no intentions in the sense of explicit representations of communicative goals, following from deliberate reasoning and triggering the communicative action. On the other hand, however, there are reasons to allow for a weaker notion of intention than this, according to which communication is an intentional affair, after all. Communicative action is said to be intentional in this weaker sense to the extent that it is subject to a double mechanism of control, with respect both to present-directed and future-directed intentions.
  • McQueen, J. M., & Cutler, A. (2010). Cognitive processes in speech perception. In W. J. Hardcastle, J. Laver, & F. E. Gibbon (Eds.), The handbook of phonetic sciences (2nd ed., pp. 489-520). Oxford: Blackwell.
  • Merkx, D., & Frank, S. L. (2021). Human sentence processing: Recurrence or attention? In E. Chersoni, N. Hollenstein, C. Jacobs, Y. Oseki, L. Prévot, & E. Santus (Eds.), Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2021) (pp. 12-22). Stroudsburg, PA, USA: Association for Computational Linguistics (ACL). doi:10.18653/v1/2021.cmcl-1.2.

    Abstract

    Recurrent neural networks (RNNs) have long been an architecture of interest for computational models of human sentence processing. The recently introduced Transformer architecture outperforms RNNs on many natural language processing tasks but little is known about its ability to model human language processing. We compare Transformer- and RNN-based language models’ ability to account for measures of human reading effort. Our analysis shows Transformers to outperform RNNs in explaining self-paced reading times and neural activity during reading English sentences, challenging the widely held idea that human sentence processing involves recurrent and immediate processing, and providing evidence for cue-based retrieval.
  • Merkx, D., Frank, S. L., & Ernestus, M. (2021). Semantic sentence similarity: Size does not always matter. In Proceedings of Interspeech 2021 (pp. 4393-4397). doi:10.21437/Interspeech.2021-1464.

    Abstract

    This study addresses the question whether visually grounded speech recognition (VGS) models learn to capture sentence semantics without access to any prior linguistic knowledge. We produce synthetic and natural spoken versions of a well known semantic textual similarity database and show that our VGS model produces embeddings that correlate well with human semantic similarity judgements. Our results show that a model trained on a small image-caption database outperforms two models trained on much larger databases, indicating that database size is not all that matters. We also investigate the importance of having multiple captions per image and find that this is indeed helpful even if the total number of images is lower, suggesting that paraphrasing is a valuable learning signal. While the general trend in the field is to create ever larger datasets to train models on, our findings indicate that other characteristics of the database can be just as important.
  • Mudd, K., Lutzenberger, H., De Vos, C., & De Boer, B. (2021). Social structure and lexical uniformity: A case study of gender differences in the Kata Kolok community. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2692-2698). Vienna: Cognitive Science Society.

    Abstract

    Language emergence is characterized by a high degree of lexical variation. It has been suggested that the speed at which lexical conventionalization occurs depends partially on social structure. In large communities, individuals receive input from many sources, creating a pressure for lexical convergence. In small, insular communities, individuals can remember idiolects and share common ground with interlocutors, allowing these communities to retain a high degree of lexical variation. We look at lexical variation in Kata Kolok, a sign language which emerged six generations ago in a Balinese village, where women tend to have more tightly-knit social networks than men. We test if there are differing degrees of lexical uniformity between women and men by reanalyzing a picture description task in Kata Kolok. We find that women’s productions exhibit less lexical uniformity than men’s. One possible explanation of this finding is that women’s more tightly-knit social networks allow for remembering idiolects, alleviating the pressure for lexical convergence, but social network data from the Kata Kolok community is needed to support this explanation.
  • Munro, R., Bethard, S., Kuperman, V., Lai, V. T., Melnick, R., Potts, C., Schnoebelen, T., & Tily, H. (2010). Crowdsourcing and language studies: The new generation of linguistic data. In Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk. Proceedings of the Workshop (pp. 122-130). Stroudsburg, PA: Association for Computational Linguistics.
  • Nijhof, S., & Zwitserlood, I. (1999). Pluralization in Sign Language of the Netherlands (NGT). In J. Don, & T. Sanders (Eds.), OTS Yearbook 1998-1999 (pp. 58-78). Utrecht: UiL OTS.
  • Norcliffe, E., Enfield, N. J., Majid, A., & Levinson, S. C. (2010). The grammar of perception. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 7-16). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Otake, T., McQueen, J. M., & Cutler, A. (2010). Competition in the perception of spoken Japanese words. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 114-117).

    Abstract

    Japanese listeners detected Japanese words embedded at the end of nonsense sequences (e.g., kaba 'hippopotamus' in gyachikaba). When the final portion of the preceding context together with the initial portion of the word (e.g., here, the sequence chika) was compatible with many lexical competitors, recognition of the embedded word was more difficult than when such a sequence was compatible with few competitors. This clear effect of competition, established here for preceding context in Japanese, joins similar demonstrations, in other languages and for following contexts, to underline that the functional architecture of the human spoken-word recognition system is a universal one.
  • Ozyurek, A., & Kita, S. (1999). Expressing manner and path in English and Turkish: Differences in speech, gesture, and conceptualization. In M. Hahn, & S. C. Stoness (Eds.), Proceedings of the Twenty-first Annual Conference of the Cognitive Science Society (pp. 507-512). London: Erlbaum.
• Ozyurek, A. (2010). The role of iconic gestures in production and comprehension of language: Evidence from brain and behavior. In S. Kopp, & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction: 8th International Gesture Workshop, GW 2009, Bielefeld, Germany, February 25-27, 2009. Revised selected papers (pp. 1-10). Berlin: Springer.
• Petrich, P., Piedrasanta, R., Figuerola, H., & Le Guen, O. (2010). Variantes y variaciones en la percepción de los antepasados entre los Mayas. In A. Monod Becquelin, A. Breton, & M. H. Ruz (Eds.), Figuras Mayas de la diversidad (pp. 255-275). Mérida, Mexico: Universidad Autónoma de México.
  • Pluymaekers, M., Ernestus, M., Baayen, R. H., & Booij, G. (2010). Morphological effects on fine phonetic detail: The case of Dutch -igheid. In C. Fougeron, B. Kühnert, M. D'Imperio, & N. Vallée (Eds.), Laboratory Phonology 10 (pp. 511-532). Berlin: De Gruyter.
• Pouw, W., Wit, J., Bögels, S., Rasenberg, M., Milivojevic, B., & Ozyurek, A. (2021). Semantically related gestures move alike: Towards a distributional semantics of gesture kinematics. In V. G. Duffy (Ed.), Digital human modeling and applications in health, safety, ergonomics and risk management. Human body, motion and behavior: 12th International Conference, DHM 2021, Held as Part of the 23rd HCI International Conference, HCII 2021 (pp. 269-287). Berlin: Springer. doi:10.1007/978-3-030-77817-0_20.
  • Rapold, C. J. (2010). Beneficiary and other roles of the dative in Tashelhiyt. In F. Zúñiga, & S. Kittilä (Eds.), Benefactives and malefactives: Typological perspectives and case studies (pp. 351-376). Amsterdam: Benjamins.

    Abstract

    This paper explores the semantics of the dative in Tashelhiyt, a Berber language from Morocco. After a brief morphosyntactic overview of the dative in this language, I identify a wide range of its semantic roles, including possessor, experiencer, distributive and unintending causer. I arrange these roles in a semantic map and propose semantic links between the roles such as metaphorisation and generalisation. In the light of the Tashelhiyt data, the paper also proposes additions to previous semantic maps of the dative (Haspelmath 1999, 2003) and to Kittilä’s 2005 typology of beneficiary coding.
• Rapold, C. J. (2010). Defining converbs ten years on - A hitchhikers' guide. In S. Völlmin, A. Amha, C. J. Rapold, & S. Zaugg-Coretti (Eds.), Converbs, medial verbs, clause chaining and related issues (pp. 7-30). Köln: Rüdiger Köppe Verlag.
  • Reesink, G. (2010). The difference a word makes. In K. A. McElhannon, & G. Reesink (Eds.), A mosaic of languages and cultures: Studies celebrating the career of Karl J. Franklin (pp. 434-446). Dallas, TX: SIL International.

    Abstract

This paper offers some thoughts on the question of what effect language has on the understanding, and hence the behavior, of a human being. It reviews some issues of linguistic relativity, known as the “Sapir-Whorf hypothesis,” suggesting that the culture we grow up in is reflected in our language and that our cognition (and our worldview) is shaped or colored by the conventions developed by our ancestors and peers. This raises questions about the degree of translatability, illustrated by the comparison of two poems by a Dutch poet who spent most of his life in the USA. Mutual understanding, I claim, is possible because we have the cognitive apparatus that allows us to enter different emic systems.
  • Reesink, G. (2010). Prefixation of arguments in West Papuan languages. In M. Ewing, & M. Klamer (Eds.), East Nusantara, typological and areal analyses (pp. 71-95). Canberra: Pacific Linguistics.
• Reinisch, E., Jesse, A., & Nygaard, L. C. (2010). Tone of voice helps learning the meaning of novel adjectives [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (p. 114). York: University of York.

    Abstract

    To understand spoken words listeners have to cope with seemingly meaningless variability in the speech signal. Speakers vary, for example, their tone of voice (ToV) by changing speaking rate, pitch, vocal effort, and loudness. This variation is independent of "linguistic prosody" such as sentence intonation or speech rhythm. The variation due to ToV, however, is not random. Speakers use, for example, higher pitch when referring to small objects than when referring to large objects and importantly, adult listeners are able to use these non-lexical ToV cues to distinguish between the meanings of antonym pairs (e.g., big-small; Nygaard, Herold, & Namy, 2009). In the present study, we asked whether listeners infer the meaning of novel adjectives from ToV and subsequently interpret these adjectives according to the learned meaning even in the absence of ToV. Moreover, if listeners actually acquire these adjectival meanings, then they should generalize these word meanings to novel referents. ToV would thus be a semantic cue to lexical acquisition. This hypothesis was tested in an exposure-test paradigm with adult listeners. In the experiment listeners' eye movements to picture pairs were monitored. The picture pairs represented the endpoints of the adjectival dimensions big-small, hot-cold, and strong-weak (e.g., an elephant and an ant represented big-small). Four picture pairs per category were used. While viewing the pictures participants listened to lexically unconstraining sentences containing novel adjectives, for example, "Can you find the foppick one?" During exposure, the sentences were spoken in infant-directed speech with the intended adjectival meaning expressed by ToV. Word-meaning pairings were counterbalanced across participants. Each word was repeated eight times. Listeners had no explicit task. 
To guide listeners' attention to the relation between the words and pictures, three sets of filler trials were included that contained real English adjectives (e.g., full-empty). In the subsequent test phase participants heard the novel adjectives in neutral adult-directed ToV. Test sentences were recorded before the speaker was informed about intended word meanings. Participants had to choose which of two pictures on the screen the speaker referred to. Picture pairs that were presented during the exposure phase and four new picture pairs per category that varied along the critical dimensions were tested. During exposure listeners did not spontaneously direct their gaze to the intended referent at the first presentation. But as indicated by listeners' fixation behavior, they quickly learned the relationship between ToV and word meaning over only two exposures. Importantly, during test participants consistently identified the intended referent object even in the absence of informative ToV. Learning was found for all three tested categories and did not depend on whether the picture pairs had been presented during exposure. Listeners thus use ToV not only to distinguish between antonym pairs but they are able to extract word meaning from ToV and assign this meaning to novel words. The newly learned word meanings can then be generalized to novel referents even in the absence of ToV cues. These findings suggest that ToV can be used as a semantic cue to lexical acquisition. References: Nygaard, L. C., Herold, D. S., & Namy, L. L. (2009). The semantics of prosody: Acoustic and perceptual evidence of prosodic correlates to word meaning. Cognitive Science, 33, 127-146.
  • Reis, A., Petersson, K. M., & Faísca, L. (2010). Neuroplasticidade: Os efeitos de aprendizagens específicas no cérebro humano. In C. Nunes, & S. N. Jesus (Eds.), Temas actuais em Psicologia (pp. 11-26). Faro: Universidade do Algarve.
  • Reis, A., Faísca, L., Castro, S.-L., & Petersson, K. M. (2010). Preditores da leitura ao longo da escolaridade: Um estudo com alunos do 1 ciclo do ensino básico. In Actas do VII simpósio nacional de investigação em psicologia (pp. 3117-3132).

    Abstract

Reading acquisition proceeds through several stages, from the moment the child first encounters the alphabet until the moment he or she becomes a competent reader, able to read accurately and fluently. Understanding the development of this skill through an analysis of how the weight of predictor variables of reading changes makes it possible to theorize about the cognitive mechanisms involved in the different phases of reading development. We conducted a cross-sectional study with 568 students from the second to the fourth year of primary education, in which we assessed the impact of phonological processing abilities, rapid naming, letter-sound knowledge and vocabulary, as well as more general cognitive abilities (non-verbal intelligence and working memory), on reading accuracy and speed. Overall, the results showed that, although phonological awareness remains the most important predictor of reading accuracy and fluency, its weight decreases as schooling progresses. We also observed that, as the contribution of phonological awareness to the explanation of reading speed diminished, the contribution of other variables more associated with automaticity and lexical recognition, such as rapid naming and vocabulary, increased. In sum, over the course of schooling there is a dynamic shift in the cognitive processes underlying reading, suggesting that the child evolves from a reading strategy anchored in sub-lexical processing, and as such more dependent on phonological processing, to a strategy based on the orthographic recognition of words.
  • Roberts, L. (2010). Parsing the L2 input, an overview: Investigating L2 learners’ processing of syntactic ambiguities and dependencies in real-time comprehension. In G. D. Véronique (Ed.), Language, Interaction and Acquisition [Special issue] (pp. 189-205). Amsterdam: Benjamins.

    Abstract

The acquisition of second language (L2) syntax has been central to the study of L2 acquisition, but recently there has been an interest in how learners apply their L2 syntactic knowledge to the input in real-time comprehension. Investigating L2 learners’ moment-by-moment syntactic analysis during listening or reading of a sentence as it unfolds — their parsing of the input — is important, because language learning involves both the acquisition of knowledge and the ability to use it in real time. Using methods employed in monolingual processing research, investigations often focus on the processing of temporary syntactic ambiguities and structural dependencies. Investigating ambiguities involves examining parsing decisions at points in a sentence where there is a syntactic choice, and this can offer insights into the nature of the parsing mechanism, and in particular, its processing preferences. Studying the establishment of syntactic dependencies at the critical point in the input allows for an investigation of how and when different kinds of information (e.g., syntactic, semantic, pragmatic) are put to use in real-time interpretation. Within an L2 context, further questions are of interest and familiar from traditional L2 acquisition research. Specifically, how native-like are the parsing procedures that L2 learners apply when processing the L2 input? What is the role of the learner’s first language (L1)? And, what are the effects of individual factors such as age, proficiency/dominance and working memory on L2 parsing? In the current paper I will provide an overview of the findings of some experimental research designed to investigate these questions.
  • Rossi, G. (2021). Conversation analysis (CA). In J. Stanlaw (Ed.), The International Encyclopedia of Linguistic Anthropology. Wiley-Blackwell. doi:10.1002/9781118786093.iela0080.

    Abstract

    Conversation analysis (CA) is an approach to the study of language and social interaction that puts at center stage its sequential development. The chain of initiating and responding actions that characterizes any interaction is a source of internal evidence for the meaning of social behavior as it exposes the understandings that participants themselves give of what one another is doing. Such an analysis requires the close and repeated inspection of audio and video recordings of naturally occurring interaction, supported by transcripts and other forms of annotation. Distributional regularities are complemented by a demonstration of participants' orientation to deviant behavior. CA has long maintained a constructive dialogue and reciprocal influence with linguistic anthropology. This includes a recent convergence on the cross-linguistic and cross-cultural study of social interaction.
  • Rossi, G. (2010). Interactive written discourse: Pragmatic aspects of SMS communication. In G. Garzone, P. Catenaccio, & C. Degano (Eds.), Diachronic perspectives on genres in specialized communication. Conference Proceedings (pp. 135-138). Milano: CUEM.
  • Sadakata, M., Van der Zanden, L., & Sekiyama, K. (2010). Influence of musical training on perception of L2 speech. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 118-121).

    Abstract

    The current study reports specific cases in which a positive transfer of perceptual ability from the music domain to the language domain occurs. We tested whether musical training enhances discrimination and identification performance of L2 speech sounds (timing features, nasal consonants and vowels). Native Dutch and Japanese speakers with different musical training experience, matched for their estimated verbal IQ, participated in the experiments. Results indicated that musical training strongly increases one’s ability to perceive timing information in speech signals. We also found a benefit of musical training on discrimination performance for a subset of the tested vowel contrasts.
  • San Roque, L., & Norcliffe, E. (2010). Knowledge asymmetries in grammar and interaction. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 37-44). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529153.
• Sauter, D. (2010). Non-verbal emotional vocalizations across cultures [Abstract]. In E. Zimmermann, & E. Altenmüller (Eds.), Evolution of emotional communication: From sounds in nonhuman mammals to speech and music in man (p. 15). Hannover: University of Veterinary Medicine Hannover.

    Abstract

    Despite differences in language, culture, and ecology, some human characteristics are similar in people all over the world, while other features vary from one group to the next. These similarities and differences can inform arguments about what aspects of the human mind are part of our shared biological heritage and which are predominantly products of culture and language. I will present data from a cross-cultural project investigating the recognition of non-verbal vocalizations of emotions, such as screams and laughs, across two highly different cultural groups. English participants were compared to individuals from remote, culturally isolated Namibian villages. Vocalizations communicating the so-called “basic emotions” (anger, disgust, fear, joy, sadness, and surprise) were bidirectionally recognised. In contrast, a set of additional positive emotions was only recognised within, but not across, cultural boundaries. These results indicate that a number of primarily negative emotions are associated with vocalizations that can be recognised across cultures, while at least some positive emotions are communicated with culture-specific signals. I will discuss these findings in the context of accounts of emotions at differing levels of analysis, with an emphasis on the often-neglected positive emotions.
• Sauter, D., Crasborn, O., & Haun, D. B. M. (2010). The role of perceptual learning in emotional vocalizations [Abstract]. In C. Douilliez, & C. Humez (Eds.), Third European Conference on Emotion 2010. Proceedings (p. 39). Lille: Université de Lille.

    Abstract

    Many studies suggest that emotional signals can be recognized across cultures and modalities. But to what extent are these signals innate and to what extent are they learned? This study investigated whether auditory learning is necessary for the production of recognizable emotional vocalizations by examining the vocalizations produced by people born deaf. Recordings were made of eight congenitally deaf Dutch individuals, who produced non-verbal vocalizations of a range of negative and positive emotions. Perception was examined in a forced-choice task with hearing Dutch listeners (n = 25). Considerable variability was found across emotions, suggesting that auditory learning is more important for the acquisition of certain types of vocalizations than for others. In particular, achievement and surprise sounds were relatively poorly recognized. In contrast, amusement and disgust vocalizations were well recognized, suggesting that for some emotions, recognizable vocalizations can develop without any auditory learning. The implications of these results for models of emotional communication are discussed, and other routes of social learning available to the deaf individuals are considered.
  • Sauter, D., Crasborn, O., & Haun, D. B. M. (2010). The role of perceptual learning in emotional vocalizations [Abstract]. Journal of the Acoustical Society of America, 128, 2476.

    Abstract

Vocalizations like screams and laughs are used to communicate affective states, but what acoustic cues in these signals require vocal learning and which ones are innate? This study investigated the role of auditory learning in the production of non-verbal emotional vocalizations by examining the vocalizations produced by people born deaf. Recordings were made of congenitally deaf Dutch individuals and matched hearing controls, who produced non-verbal vocalizations of a range of negative and positive emotions. Perception was examined in a forced-choice task with hearing Dutch listeners (n = 25), and judgments were analyzed together with acoustic cues, including envelope, pitch, and spectral measures. Considerable variability was found across emotions and acoustic cues, and the two types of information were related for a subset of the emotion categories. These results suggest that auditory learning is less important for the acquisition of certain types of vocalizations than for others (particularly amusement and relief), and they also point to a less central role for auditory learning of some acoustic features in affective non-verbal vocalizations. The implications of these results for models of vocal emotional communication are discussed.
  • Schäfer, M., & Haun, D. B. M. (2010). Sharing among children across cultures. In E. Norcliffe, & N. J. Enfield (Eds.), Field manual volume 13 (pp. 45-49). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.529154.
• Schiller, N. O., Van Lieshout, P. H. H. M., Meyer, A. S., & Levelt, W. J. M. (1999). Does the syllable affiliation of intervocalic consonants have an articulatory basis? Evidence from electromagnetic midsagittal articulography. In B. Maassen, & P. Groenen (Eds.), Pathologies of speech and language. Advances in clinical phonetics and linguistics (pp. 342-350). London: Whurr Publishers.
  • Schuppler, B., Ernestus, M., Van Dommelen, W., & Koreman, J. (2010). Predicting human perception and ASR classification of word-final [t] by its acoustic sub-segmental properties. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 2466-2469).

    Abstract

    This paper presents a study on the acoustic sub-segmental properties of word-final /t/ in conversational standard Dutch and how these properties contribute to whether humans and an ASR system classify the /t/ as acoustically present or absent. In general, humans and the ASR system use the same cues (presence of a constriction, a burst, and alveolar frication), but the ASR system is also less sensitive to fine cues (weak bursts, smoothly starting friction) than human listeners and misled by the presence of glottal vibration. These data inform the further development of models of human and automatic speech processing.
• Senft, G. (2021). A very special letter. In T. Szczerbowski (Ed.), Language "as round as an orange"... In memory of Professor Krystyna Pisarkowa on the 90th anniversary of her birth (p. 367). Krakow: Uniwersytetu Pedagogicznego.
  • Senft, G. (1999). Bronislaw Kasper Malinowski. In J. Verschueren, J.-O. Östman, J. Blommaert, & C. Bulcaen (Eds.), Handbook of pragmatics: 1997 installment. Amsterdam: Benjamins.
  • Senft, G. (2010). Culture change - language change: Missionaries and moribund varieties of Kilivila. In G. Senft (Ed.), Endangered Austronesian and Australian Aboriginal languages: Essays on language documentation, archiving, and revitalization (pp. 69-95). Canberra: Pacific Linguistics.
  • Senft, G. (2010). Introduction. In G. Senft (Ed.), Endangered Austronesian and Australian Aboriginal languages: Essays on language documentation, archiving, and revitalization (pp. 1-13). Canberra: Pacific Linguistics.
• Senghas, A., Ozyurek, A., & Goldin-Meadow, S. (2010). The evolution of segmentation and sequencing: Evidence from homesign and Nicaraguan Sign Language. In A. D. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Proceedings of the 8th International Conference on the Evolution of Language (EVOLANG 8) (pp. 279-289). Singapore: World Scientific.
  • Seuren, P. A. M. (2010). Donkey sentences. In A. Barber, & R. J. Stainton (Eds.), Concise encyclopedia of philosophy of language and linguistics (pp. 169-171). Amsterdam: Elsevier.
  • Seuren, P. A. M. (2010). Aristotle and linguistics. In A. Barber, & R. J. Stainton (Eds.), Concise encyclopedia of philosophy of language and linguistics (pp. 25-27). Amsterdam: Elsevier.

    Abstract

    Aristotle's importance in the professional study of language consists first of all in the fact that he demythologized language and made it an object of rational investigation. In the context of his theory of truth as correspondence, he also provided the first semantic analysis of propositions in that he distinguished two main constituents, the predicate, which expresses a property, and the remainder of the proposition, referring to a substance to which the property is assigned. That assignment is either true or false. Later, the ‘remainder’ was called subject term, and the Aristotelian predicate was identified with the verb in the sentence. The Aristotelian predicate, however, is more like what is now called the ‘comment,’ whereas his remainder corresponds to the topic. Aristotle, furthermore, defined nouns and verbs as word classes. In addition, he introduced the term ‘case’ for paradigmatic morphological variation.
  • Seuren, P. A. M. (2010). Meaning: Cognitive dependency of lexical meaning. In A. Barber, & R. J. Stainton (Eds.), Concise encyclopedia of philosophy of language and linguistics (pp. 424-426). Amsterdam: Elsevier.
  • Seuren, P. A. M. (2010). Presupposition. In A. Barber, & R. J. Stainton (Eds.), Concise encyclopedia of philosophy of language and linguistics (pp. 589-596). Amsterdam: Elsevier.
  • Seuren, P. A. M. (1999). The subject-predicate debate X-rayed. In D. Cram, A. Linn, & E. Nowak (Eds.), History of Linguistics 1996: Selected papers from the Seventh International Conference on the History of the Language Sciences (ICHOLS VII), Oxford, 12-17 September 1996. Volume 1: Traditions in Linguistics Worldwide (pp. 41-55). Amsterdam: Benjamins.
  • Seuren, P. A. M. (1999). Topic and comment. In C. F. Justus, & E. C. Polomé (Eds.), Language Change and Typological Variation: Papers in Honor of Winfred P. Lehmann on the Occasion of His 83rd Birthday. Vol. 2: Grammatical universals and typology (pp. 348-373). Washington, DC: Institute for the Study of Man.
• Shattuck-Hufnagel, S., & Cutler, A. (1999). The prosody of speech error corrections revisited. In J. Ohala, Y. Hasegawa, M. Ohala, D. Granville, & A. Bailey (Eds.), Proceedings of the Fourteenth International Congress of Phonetic Sciences: Vol. 2 (pp. 1483-1486). Berkeley: University of California.

    Abstract

A corpus of digitized speech errors is used to compare the prosody of correction patterns for word-level vs. sound-level errors. Results for both peak F0 and perceived prosodic markedness confirm that speakers are more likely to mark corrections of word-level errors than corrections of sound-level errors, and that errors ambiguous between word-level and sound-level (such as boat for moat) show correction patterns like those for sound-level errors. This finding increases the plausibility of the claim that word-sound-ambiguous errors arise at the same level of processing as sound errors that do not form words.
• Li, Y., Wu, S., Shi, S., Tong, S., Zhang, Y., & Guo, X. (2021). Enhanced inter-brain connectivity between children and adults during cooperation: A dual EEG study. In 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) (pp. 6289-6292). doi:10.1109/EMBC46164.2021.9630330.

    Abstract

Previous fNIRS studies have suggested that adult-child cooperation is accompanied by increased inter-brain synchrony. However, its reflection in electrophysiological synchrony remains unclear. In this study, we designed a naturalistic and well-controlled adult-child interaction paradigm using a tangram solving video game, and recorded dual-EEG from child and adult dyads during cooperative and individual conditions. By calculating the directed inter-brain connectivity in the theta and alpha bands, we found that the inter-brain frontal network was more densely connected and stronger in strength during the cooperative than the individual condition when the adult was watching the child playing. Moreover, the inter-brain network across different dyads shared more common information flows from the player to the observer during cooperation, but was more individually different in solo play. The results suggest an enhancement in inter-brain EEG interactions during adult-child cooperation, although the enhancement was not uniform across cooperative cases but partly depended on the role of the participants.
• Sikveland, A., Öttl, A., Amdal, I., Ernestus, M., Svendsen, T., & Edlund, J. (2010). Spontal-N: A Corpus of Interactional Spoken Norwegian. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10) (pp. 2986-2991). Paris: European Language Resources Association (ELRA).

    Abstract

Spontal-N is a corpus of spontaneous, interactional Norwegian. To our knowledge, it is the first corpus of Norwegian in which the majority of speakers have spent significant parts of their lives in Sweden, and in which the recorded speech displays varying degrees of interference from Swedish. The corpus consists of studio quality audio- and video-recordings of four 30-minute free conversations between acquaintances, and a manual orthographic transcription of the entire material. On the basis of the orthographic transcriptions, we automatically annotated approximately 50 percent of the material on the phoneme level, by means of a forced alignment between the acoustic signal and pronunciations listed in a dictionary. Approximately seven percent of the automatic transcription was manually corrected. Taking the manual correction as a gold standard, we evaluated several sources of pronunciation variants for the automatic transcription. Spontal-N is intended as a general purpose speech resource that is also suitable for investigating phonetic detail.
• Simon, E., Escudero, P., & Broersma, M. (2010). Learning minimally different words in a third language: L2 proficiency as a crucial predictor of accuracy in an L3 word learning task. In K. Dziubalska-Kolaczyk, M. Wrembel, & M. Kul (Eds.), Proceedings of the Sixth International Symposium on the Acquisition of Second Language Speech (New Sounds 2010).
• Skiba, R. (2010). Polnisch. In S. Colombo-Scheffold, P. Fenn, S. Jeuk, & J. Schäfer (Eds.), Ausländisch für Deutsche. Sprachen der Kinder - Sprachen im Klassenzimmer (2nd corrected and expanded ed., pp. 165-176). Freiburg: Fillibach.
  • De Smedt, K., & Kempen, G. (1987). Incremental sentence production, self-correction, and coordination. In G. Kempen (Ed.), Natural language generation: New results in artificial intelligence, psychology and linguistics (pp. 365-376). Dordrecht: Nijhoff.
• Spilková, H., Brenner, D., Öttl, A., Vondřička, P., Van Dommelen, W., & Ernestus, M. (2010). The Kachna L1/L2 picture replication corpus. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, & D. Tapias (Eds.), Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10) (pp. 2432-2436). Paris: European Language Resources Association (ELRA).

    Abstract

This paper presents the Kachna corpus of spontaneous speech, in which ten Czech and ten Norwegian speakers were recorded both in their native language and in English. The dialogues are elicited using a picture replication task that requires active cooperation and interaction of speakers by asking them to produce a drawing as close to the original as possible. The corpus is appropriate for the study of interactional features and speech reduction phenomena across native and second languages. The combination of productions in non-native English and in speakers’ native language is advantageous for the investigation of L2 issues while providing an L1 behaviour reference from all the speakers. The corpus consists of 20 dialogues comprising 12 hours 53 minutes of recording, and was collected in 2008. Preparation of the transcriptions, including a manual orthographic transcription and an automatically generated phonetic transcription, is currently in progress. The phonetic transcriptions are automatically generated by aligning acoustic models with the speech signal on the basis of the orthographic transcriptions and a dictionary of pronunciation variants compiled for the relevant language. Upon completion the corpus will be made available via the European Language Resources Association (ELRA).
  • Staum Casasanto, L., Jasmin, K., & Casasanto, D. (2010). Virtually accommodating: Speech rate accommodation to a virtual interlocutor. In S. Ohlsson, & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 127-132). Austin, TX: Cognitive Science Society.

    Abstract

    Why do people accommodate to each other’s linguistic behavior? Studies of natural interactions (Giles, Taylor & Bourhis, 1973) suggest that speakers accommodate to achieve interactional goals, influencing what their interlocutor thinks or feels about them. But is this the only reason speakers accommodate? In real-world conversations, interactional motivations are ubiquitous, making it difficult to assess the extent to which they drive accommodation. Do speakers still accommodate even when interactional goals cannot be achieved, for instance, when their interlocutor cannot interpret their accommodation behavior? To find out, we asked participants to enter an immersive virtual reality (VR) environment and to converse with a virtual interlocutor. Participants accommodated to the speech rate of their virtual interlocutor even though he could not interpret their linguistic behavior, and thus accommodation could not possibly help them to achieve interactional goals. Results show that accommodation does not require explicit interactional goals, and suggest other social motivations for accommodation.
  • Stehouwer, H., & van Zaanen, M. (2010). Enhanced suffix arrays as language models: Virtual k-testable languages. In J. M. Sempere, & P. García (Eds.), Grammatical inference: Theoretical results and applications 10th International Colloquium, ICGI 2010, Valencia, Spain, September 13-16, 2010. Proceedings (pp. 305-308). Berlin: Springer.

    Abstract

    In this article, we propose the use of suffix arrays to efficiently implement n-gram language models with practically unlimited size n. This approach, which is used with synchronous back-off, allows us to distinguish between alternative sequences using large contexts. We also show that we can build this kind of model with additional information for each symbol, such as part-of-speech tags and dependency information. The approach can also be viewed as a collection of virtual k-testable automata. Once built, we can directly access the results of any k-testable automaton generated from the input training data. Synchronous back-off automatically identifies the k-testable automaton with the largest feasible k. We have used this approach in several classification tasks.
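    The core lookup idea can be sketched with a plain (un-enhanced) suffix array. This is an illustrative toy, not the authors' implementation: the corpus, function names, and back-off scheme here are assumptions, and Python ≥ 3.10 is assumed for the `key` parameter of `bisect`.

    ```python
    # Toy sketch: a suffix array over a token sequence, used to find the
    # longest attested context (in the spirit of synchronous back-off).
    # Illustrative only; the paper uses enhanced suffix arrays.
    from bisect import bisect_left, bisect_right

    def build_suffix_array(tokens):
        # Sort all suffix start positions lexicographically (O(n^2 log n) toy).
        return sorted(range(len(tokens)), key=lambda i: tokens[i:])

    def occurrences(tokens, sa, pattern):
        # Binary-search the suffix array for suffixes starting with `pattern`.
        key = lambda i: tokens[i:i + len(pattern)]
        lo = bisect_left(sa, pattern, key=key)   # Python >= 3.10
        hi = bisect_right(sa, pattern, key=key)
        return [sa[i] for i in range(lo, hi)]

    def longest_context_counts(tokens, sa, context):
        # Back off to the longest suffix of `context` seen in the training
        # data, and count the words that follow it there.
        for k in range(len(context), -1, -1):
            ctx = context[len(context) - k:]
            nexts = {}
            for p in occurrences(tokens, sa, ctx):
                j = p + k
                if j < len(tokens):
                    nexts[tokens[j]] = nexts.get(tokens[j], 0) + 1
            if nexts:
                return k, nexts
        return 0, {}

    corpus = "the cat sat on the mat the cat ran".split()
    sa = build_suffix_array(corpus)
    k, counts = longest_context_counts(corpus, sa, ["the", "cat"])
    # "the cat" is attested twice, followed once each by "sat" and "ran"
    ```

    Because the suffix array keeps every suffix of the training data sorted, the same structure answers queries for any context length, which is what makes the "practically unlimited n" of the abstract cheap to support.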
  • Stehouwer, H., & Van Zaanen, M. (2010). Finding patterns in strings using suffix arrays. In M. Ganzha, & M. Paprzycki (Eds.), Proceedings of the International Multiconference on Computer Science and Information Technology, October 18–20, 2010. Wisła, Poland (pp. 505-511). IEEE.

    Abstract

    Finding regularities in large data sets requires implementations of systems that are efficient in both time and space requirements. Here, we describe a newly developed system that exploits the internal structure of the enhanced suffix array to find significant patterns in a large collection of sequences. The system searches exhaustively for all significantly compressing patterns, where patterns may consist of symbols and skips or wildcards. We demonstrate a possible application of the system by detecting interesting patterns in a Dutch and an English corpus.
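    A minimal illustration of why suffix arrays suit this task: any pattern that occurs twice is a shared prefix of two adjacent sorted suffixes, so the LCP (longest common prefix) array exposes repeats directly. This sketch uses a plain suffix array on an assumed toy corpus; the system's enhanced suffix array, significance scoring, and skip/wildcard search are not reproduced here.

    ```python
    # Toy sketch: the LCP array of a suffix array reveals repeated patterns,
    # since any repeated pattern is a common prefix of two adjacent sorted
    # suffixes. Corpus and names are illustrative assumptions.

    def build_suffix_array(tokens):
        # Sort all suffix start positions lexicographically (toy version).
        return sorted(range(len(tokens)), key=lambda i: tokens[i:])

    def lcp_array(tokens, sa):
        # lcp[i] = length of the common prefix of suffixes sa[i-1] and sa[i].
        lcp = [0] * len(sa)
        for i in range(1, len(sa)):
            a, b = sa[i - 1], sa[i]
            n = 0
            while (a + n < len(tokens) and b + n < len(tokens)
                   and tokens[a + n] == tokens[b + n]):
                n += 1
            lcp[i] = n
        return lcp

    corpus = "a b c a b c a b".split()
    sa = build_suffix_array(corpus)
    lcp = lcp_array(corpus, sa)
    i = max(range(len(lcp)), key=lcp.__getitem__)
    longest_repeat = corpus[sa[i]:sa[i] + lcp[i]]  # longest repeated pattern
    ```

    Scanning the LCP array once yields every maximal repeat and its frequency, which is the starting point for scoring how much a candidate pattern compresses the corpus.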
  • Stehouwer, H., & van Zaanen, M. (2010). Using suffix arrays as language models: Scaling the n-gram. In Proceedings of the 22st Benelux Conference on Artificial Intelligence (BNAIC 2010), October 25-26, 2010.

    Abstract

    In this article, we propose the use of suffix arrays to implement n-gram language models with practically unlimited size n. These unbounded n-grams are called ∞-grams. This approach allows us to use large contexts efficiently to distinguish between different alternative sequences while applying synchronous back-off. From a practical point of view, the approach has been applied within the context of spelling confusibles, verb and noun agreement and prenominal adjective ordering. These initial experiments show promising results and we relate the performance to the size of the n-grams used for disambiguation.
