Publications

  • Seuren, P. A. M. (2006). Aristotle and linguistics. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 1) (pp. 469-471). Amsterdam: Elsevier.

    Abstract

    Aristotle's importance in the professional study of language consists first of all in the fact that he demythologized language and made it an object of rational investigation. In the context of his theory of truth as correspondence, he also provided the first semantic analysis of propositions in that he distinguished two main constituents, the predicate, which expresses a property, and the remainder of the proposition, referring to a substance to which the property is assigned. That assignment is either true or false. Later, the ‘remainder’ was called subject term, and the Aristotelian predicate was identified with the verb in the sentence. The Aristotelian predicate, however, is more like what is now called the ‘comment,’ whereas his remainder corresponds to the topic. Aristotle, furthermore, defined nouns and verbs as word classes. In addition, he introduced the term ‘case’ for paradigmatic morphological variation.
  • Seuren, P. A. M. (2006). Meaning, the cognitive dependency of lexical meaning. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 7) (pp. 575-577). Amsterdam: Elsevier.

    Abstract

    There is a growing awareness among theoretical linguists and philosophers of language that the linguistic definition of lexical meanings, which must be learned when one learns a language, underdetermines not only full utterance interpretation but also sentence meaning. The missing information must be provided by cognition – that is, either general encyclopedic or specific situational knowledge. This fact crucially shows the basic insufficiency of current standard model-theoretic semantics as a paradigm for the analysis and description of linguistic meaning.
  • Seuren, P. A. M. (2006). Lexical conditions. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 7) (pp. 77-79). Amsterdam: Elsevier.

    Abstract

    The lexical conditions, also known as satisfaction conditions, of a predicate P are the conditions that must be satisfied by the term referents of P for P applied to these term referents to yield a true sentence. In view of presupposition theory it makes sense to distinguish two categories of lexical conditions, the preconditions that must be satisfied for the sentence to be usable in any given discourse, and the update conditions which must be satisfied for the sentence to yield truth.
  • Seuren, P. A. M. (2006). Multivalued logics. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 8) (pp. 387-390). Amsterdam: Elsevier.

    Abstract

    The widely prevailing view that standard bivalent logic is the only possible sound logical system, imposed by metaphysical necessity, has been shattered by the development of multivalent logics during the 20th century. It is now clear that standard bivalent logic is merely the minimal representative of a wide variety of viable logics with any number of truth values. These viable logics can be subdivided into families. In this article, the Kleene family and the PPCn family are subjected to special examination, as they appear to be most relevant for the study of the logical properties of human language.
  • Seuren, P. A. M. (1994). Factivity. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 3) (p. 1205). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Function, set-theoretical. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 3) (p. 1314). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Incrementation. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 3) (p. 1646). Oxford: Pergamon Press.
  • Seuren, P. A. M. (2009). Hesseling, Dirk Christiaan. In H. Stammerjohann (Ed.), Lexicon Grammaticorum: A bio-bibliographical companion to the history of linguistics. Volume 1. (2nd ed.) (pp. 649-650). Berlin: DeGruyter.
  • Seuren, P. A. M. (1994). Lexical conditions. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 4) (pp. 2140-2141). Oxford: Pergamon Press.
  • Seuren, P. A. M. (2009). Logical systems and natural logical intuitions. In Current issues in unity and diversity of languages: Collection of the papers selected from the CIL 18, held at Korea University in Seoul on July 21-26, 2008. http://www.cil18.org (pp. 53-60).

    Abstract

    The present paper is part of a large research programme investigating the nature and properties of the predicate logic inherent in natural language. The general hypothesis is that natural speakers start off with a basic-natural logic, based on natural cognitive functions, including the basic-natural way of dealing with plural objects. As culture spreads, functional pressure leads to greater generalization and mathematical correctness, yielding ever more refined systems until the apogee of standard modern predicate logic. Four systems of predicate calculus are considered: Basic-Natural Predicate Calculus (BNPC), Aristotelian-Abelardian Predicate Calculus (AAPC), Aristotelian-Boethian Predicate Calculus (ABPC), also known as the classic Square of Opposition, and Standard Modern Predicate Calculus (SMPC). (ABPC is logically faulty owing to its Undue Existential Import (UEI), but that fault is repaired by the addition of a presuppositional component to the logic.) All four systems are checked against seven natural logical intuitions. It appears that BNPC scores best (five out of seven), followed by ABPC (three out of seven). AAPC and SMPC finish ex aequo with two out of seven.
  • Seuren, P. A. M. (1994). Existence predicate (discourse semantics). In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 3) (pp. 1190-1191). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Existential presupposition. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 3) (pp. 1191-1192). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Presupposition. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 6) (pp. 3311-3320). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Projection problem. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 6) (pp. 3358-3360). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). The computational lexicon: All lexical content is predicate. In Z. Yusoff (Ed.), Proceedings of the International Conference on Linguistic Applications 26-28 July 1994 (pp. 211-216). Penang: Universiti Sains Malaysia, Unit Terjemahan Melalui Komputer (UTMK).
  • Seuren, P. A. M. (1994). Sign. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 7) (pp. 3885-3888). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1994). Syntax and semantics: Relationship. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 8) (pp. 4494-4500). Oxford: Pergamon Press.
  • Seuren, P. A. M. (1982). Riorientamenti metodologici nello studio della variabilità linguistica. In D. Gambarara, & A. D'Atri (Eds.), Ideologia, filosofia e linguistica: Atti del Convegno Internazionale di Studi, Rende (CS) 15-17 Settembre 1978 (pp. 499-515). Roma: Bulzoni.
  • Seuren, P. A. M. (1985). Predicate raising and semantic transparency in Mauritian Creole. In N. Boretzky, W. Enninger, & T. Stolz (Eds.), Akten des 2. Essener Kolloquiums über "Kreolsprachen und Sprachkontakte", 29-30 Nov. 1985 (pp. 203-229). Bochum: Brockmeyer.
  • Seuren, P. A. M. (1994). Prediction and retrodiction. In R. E. Asher, & J. M. Y. Simpson (Eds.), The Encyclopedia of Language and Linguistics (vol. 6) (pp. 3302-3303). Oxford: Pergamon Press.
  • Seuren, P. A. M. (2013). The logico-philosophical tradition. In K. Allan (Ed.), The Oxford handbook of the history of linguistics (pp. 537-554). Oxford: Oxford University Press.
  • Seuren, P. A. M. (2009). Voorhoeve, Jan. In H. Stammerjohann (Ed.), Lexicon Grammaticorum: A bio-bibliographical companion to the history of linguistics. Volume 2. (2nd ed.) (pp. 1593-1594). Berlin: DeGruyter.
  • Seuren, P. A. M. (1998). Towards a discourse-semantic account of donkey anaphora. In S. Botley, & T. McEnery (Eds.), New Approaches to Discourse Anaphora: Proceedings of the Second Colloquium on Discourse Anaphora and Anaphor Resolution (DAARC2) (pp. 212-220). Lancaster: University Centre for Computer Corpus Research on Language, Lancaster University.
  • Seuren, P. A. M. (1994). Translation relations in semantic syntax. In G. Bouma, & G. Van Noord (Eds.), CLIN IV: Papers from the Fourth CLIN Meeting (pp. 149-162). Groningen: Vakgroep Alfa-informatica, Rijksuniversiteit Groningen.
  • Shayan, S., Moreira, A., Windhouwer, M., Koenig, A., & Drude, S. (2013). LEXUS 3 - a collaborative environment for multimedia lexica. In Proceedings of the Digital Humanities Conference 2013 (pp. 392-395).
  • Li, Y., Wu, S., Shi, S., Tong, S., Zhang, Y., & Guo, X. (2021). Enhanced inter-brain connectivity between children and adults during cooperation: a dual EEG study. In 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) (pp. 6289-6292). doi:10.1109/EMBC46164.2021.9630330.

    Abstract

    Previous fNIRS studies have suggested that adult-child cooperation is accompanied by increased inter-brain synchrony. However, its reflection in the electrophysiological synchrony remains unclear. In this study, we designed a naturalistic and well-controlled adult-child interaction paradigm using a tangram solving video game, and recorded dual-EEG from child and adult dyads during cooperative and individual conditions. By calculating the directed inter-brain connectivity in the theta and alpha bands, we found that the inter-brain frontal network was more densely connected and stronger in strength during the cooperative than the individual condition when the adult was watching the child playing. Moreover, the inter-brain network across different dyads shared more common information flows from the player to the observer during cooperation, but was more individually different in solo play. The results suggest an enhancement in inter-brain EEG interactions during adult-child cooperation. However, the enhancement was not evident in all cooperative cases but partly depended on the role of the participants.
  • Sicoli, M. A., Majid, A., & Levinson, S. C. (2009). The language of sound: II. In A. Majid (Ed.), Field manual volume 12 (pp. 14-19). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446294.

    Abstract

    The task is designed to elicit vocabulary for simple sounds. The primary goal is to establish how people describe sound and what resources the language provides generally for encoding this domain. More specifically: (1) whether there is dedicated vocabulary for encoding simple sound contrasts and (2) how much consistency there is within a community in descriptions. This builds on the materials used in the earlier task 'The language of sound'.
  • Skiba, R. (2006). Computeranalyse/Computer Analysis. In U. Ammon, N. Dittmar, K. Mattheier, & P. Trudgill (Eds.), Sociolinguistics: An international handbook of the science of language and society [2nd completely revised and extended edition] (pp. 1187-1197). Berlin, New York: de Gruyter.
  • Sloetjes, H. (2013). The ELAN annotation tool. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 193-198). Frankfurt a/M: Lang.
  • Sloetjes, H. (2013). Step by step introduction in NEUROGES coding with ELAN. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 201-212). Frankfurt a/M: Lang.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Modelling the effects of formal literacy training on language mediated visual attention. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3420-3425). Austin, TX: Cognitive Science Society.

    Abstract

    Recent empirical evidence suggests that language-mediated eye gaze is partly determined by level of formal literacy training. Huettig, Singh and Mishra (2011) showed that high-literate individuals' eye gaze was closely time-locked to phonological overlap between a spoken target word and items presented in a visual display. In contrast, low-literate individuals' eye gaze was not related to phonological overlap, but was instead strongly influenced by semantic relationships between items. Our present study tests the hypothesis that this behavior is an emergent property of an increased ability to extract phonological structure from the speech signal, as in the case of high-literates, with low-literates more reliant on coarser-grained structure. This hypothesis was tested using a neural network model that integrates linguistic information extracted from the speech signal with visual and semantic information within a central resource. We demonstrate that contrasts in fixation behavior similar to those observed between high and low literates emerge when models are trained on speech signals of contrasting granularity.
  • Snowdon, C. T., & Cronin, K. A. (2009). Comparative cognition and neuroscience. In G. Berntson, & J. Cacioppo (Eds.), Handbook of neuroscience for the behavioral sciences (pp. 32-55). Hoboken, NJ: Wiley.
  • Stehouwer, H., & van Zaanen, M. (2009). Language models for contextual error detection and correction. In Proceedings of the EACL 2009 Workshop on Computational Linguistic Aspects of Grammatical Inference (pp. 41-48). Association for Computational Linguistics.

    Abstract

    The problem of identifying and correcting confusibles, i.e. context-sensitive spelling errors, in text is typically tackled using specifically trained machine learning classifiers. For each different set of confusibles, a specific classifier is trained and tuned. In this research, we investigate a more generic approach to context-sensitive confusible correction. Instead of using specific classifiers, we use one generic classifier based on a language model. This measures the likelihood of sentences with different possible solutions of a confusible in place. The advantage of this approach is that all confusible sets are handled by a single model. Preliminary results show that the performance of the generic classifier approach is only slightly worse than that of the specific classifier approach.
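    As an illustrative aside, the generic strategy this abstract describes, scoring each confusible alternative in context with a single language model and keeping the most likely sentence, can be sketched as follows. The toy corpus, the add-one-smoothed bigram model, and the `correct_confusible` helper are assumptions made only for illustration; they are not the classifier, model, or data used in the paper.

```python
# Hypothetical sketch: pick the most likely filling of a confusible slot by
# comparing sentence likelihoods under a toy bigram language model with
# add-one smoothing. Not the paper's actual model or data.
import math
from collections import Counter
from itertools import chain

toy_corpus = [
    "the results are better than expected".split(),
    "we train the model and then evaluate it".split(),
    "the model is smaller than the baseline".split(),
]

unigram_counts = Counter(chain.from_iterable(toy_corpus))
bigram_counts = Counter(
    (w1, w2) for sent in toy_corpus for w1, w2 in zip(sent, sent[1:])
)
vocab_size = len(unigram_counts)

def sentence_logprob(words):
    """Sum of add-one-smoothed log bigram probabilities for a tokenized sentence."""
    total = 0.0
    for w1, w2 in zip(words, words[1:]):
        prob = (bigram_counts[(w1, w2)] + 1) / (unigram_counts[w1] + vocab_size)
        total += math.log(prob)
    return total

def correct_confusible(template, alternatives):
    """Fill the '___' slot with each alternative and return the most likely sentence."""
    candidates = [template.replace("___", alt).split() for alt in alternatives]
    return " ".join(max(candidates, key=sentence_logprob))

# The same scoring loop handles any confusible set, which is the point of a
# generic, language-model-based classifier.
print(correct_confusible("the results are better ___ expected", ["then", "than"]))
```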
  • Stehouwer, H., & Van Zaanen, M. (2009). Token merging in language model-based confusible disambiguation. In T. Calders, K. Tuyls, & M. Pechenizkiy (Eds.), Proceedings of the 21st Benelux Conference on Artificial Intelligence (pp. 241-248).

    Abstract

    In the context of confusible disambiguation (spelling correction that requires context), the synchronous back-off strategy combined with traditional n-gram language models performs well. However, when alternatives consist of a different number of tokens, this classification technique cannot be applied directly, because the computation of the probabilities is skewed. Previous work already showed that probabilities based on different order n-grams should not be compared directly. In this article, we propose new probability metrics in which the size of the n is varied according to the number of tokens of the confusible alternative. This requires access to n-grams of variable length. Results show that the synchronous back-off method is extremely robust. We discuss the use of suffix trees as a technique to store variable length n-gram information efficiently.
  • Stivers, T. (2006). Treatment decisions: negotiations between doctors and parents in acute care encounters. In J. Heritage, & D. W. Maynard (Eds.), Communication in medical care: Interaction between primary care physicians and patients (pp. 279-312). Cambridge: Cambridge University Press.
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Acquisition of locative expressions in children learning Turkish Sign Language (TİD) and Turkish. In E. Arik (Ed.), Current directions in Turkish Sign Language research (pp. 243-272). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    In sign languages, where space is often used to talk about space, expressions of spatial relations (e.g., ON, IN, UNDER, BEHIND) may rely on analogue mappings of real space onto signing space. In contrast, spoken languages express space in mostly categorical ways (e.g. adpositions). This raises interesting questions about the role of language modality in the acquisition of expressions of spatial relations. However, whether and to what extent modality influences the acquisition of spatial language is controversial – mostly due to the lack of direct comparisons of Deaf children to Deaf adults and to age-matched hearing children in similar tasks. Furthermore, the previous studies have taken English as the only model for spoken language development of spatial relations.
    Therefore, we present a balanced study in which spatial expressions by deaf and hearing children in two different age-matched groups (preschool children and school-age children) are systematically compared, as well as compared to the spatial expressions of adults. All participants performed the same tasks, describing angular (LEFT, RIGHT, FRONT, BEHIND) and non-angular spatial configurations (IN, ON, UNDER) of different objects (e.g. apple in box; car behind box).
    The analysis of the descriptions with non-angular spatial relations does not show an effect of modality on the development of locative expressions in TİD and Turkish. However, preliminary results of the analysis of expressions of angular spatial relations suggest that signers provide angular information in their spatial descriptions more frequently than Turkish speakers in all three age groups, thus showing a potentially different developmental pattern in this domain. Implications of the findings with regard to the development of relations in spatial language and cognition will be discussed.
  • Sumner, M., Kurumada, C., Gafter, R., & Casillas, M. (2013). Phonetic variation and the recognition of words with pronunciation variants. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3486-3492). Austin, TX: Cognitive Science Society.
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Ten Bosch, L., Baayen, R. H., & Ernestus, M. (2006). On speech variation and word type differentiation by articulatory feature representations. In Proceedings of Interspeech 2006 (pp. 2230-2233).

    Abstract

    This paper describes ongoing research aiming at the description of variation in speech as represented by asynchronous articulatory features. We will first illustrate how distances in the articulatory feature space can be used for event detection along speech trajectories in this space. The temporal structure imposed by the cosine distance in articulatory feature space coincides to a large extent with the manual segmentation on phone level. The analysis also indicates that the articulatory feature representation provides better alignments of this kind than the MFCC representation does. Secondly, we will present first results that indicate that articulatory features can be used to probe for acoustic differences in the onsets of Dutch singulars and plurals.
  • ten Bosch, L., Hämäläinen, A., Scharenborg, O., & Boves, L. (2006). Acoustic scores and symbolic mismatch penalties in phone lattices. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing [ICASSP 2006]. IEEE.

    Abstract

    This paper builds on previous work that aims at unraveling the structure of the speech signal by means of using probabilistic representations. The context of this work is a multi-pass speech recognition system in which a phone lattice is created and used as a basis for a lexical search in which symbolic mismatches are allowed at certain costs. The focus is on the optimization of the costs of phone insertions, deletions and substitutions that are used in the lexical decoding pass. Two optimization approaches are presented, one related to a multi-pass computational model for human speech recognition, the other based on a decoding in which Bayes’ risks are minimized. In the final section, the advantages of these optimization methods are discussed and compared.
  • Ten Bosch, L., Boves, L., & Ernestus, M. (2013). Towards an end-to-end computational model of speech comprehension: simulating a lexical decision task. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 2822-2826).

    Abstract

    This paper describes a computational model of speech comprehension that takes the acoustic signal as input and predicts reaction times as observed in an auditory lexical decision task. By doing so, we explore a new generation of end-to-end computational models that are able to simulate the behaviour of human subjects participating in a psycholinguistic experiment. So far, nearly all computational models of speech comprehension do not start from the speech signal itself, but from abstract representations of the speech signal, while the few existing models that do start from the acoustic signal cannot directly model reaction times as obtained in comprehension experiments. The main functional components in our model are the perception stage, which is compatible with the psycholinguistic model Shortlist B and is implemented with techniques from automatic speech recognition, and the decision stage, which is based on the linear ballistic accumulation decision model. We successfully tested our model against data from 20 participants performing a large-scale auditory lexical decision experiment. Analyses show that the model is a good predictor for the average judgment and reaction time for each word.
  • Terrill, A., & Dunn, M. (2006). Semantic transference: Two preliminary case studies from the Solomon Islands. In C. Lefebvre, L. White, & C. Jourdan (Eds.), L2 acquisition and Creole genesis: Dialogues (pp. 67-85). Amsterdam: Benjamins.
  • Terrill, A. (2006). Central Solomon languages. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 2) (pp. 279-280). Amsterdam: Elsevier.

    Abstract

    The Papuan languages of the central Solomon Islands are a negatively defined areal grouping: They are those four or possibly five languages in the central Solomon Islands that do not belong to the Austronesian family. Bilua (Vella Lavella), Touo (Rendova), Lavukaleve (Russell Islands), Savosavo (Savo Island) and possibly Kazukuru (New Georgia) have been identified as non-Austronesian since the early 20th century. However, their affiliations both to each other and to other languages still remain a mystery. Heterogeneous and until recently largely undescribed, they present an interesting departure from what is known both of Austronesian languages in the region and of the Papuan languages of the mainland of New Guinea.
  • Thompson-Schill, S., Hagoort, P., Dominey, P. F., Honing, H., Koelsch, S., Ladd, D. R., Lerdahl, F., Levinson, S. C., & Steedman, M. (2013). Multiple levels of structure in language and music. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 289-303). Cambridge, MA: MIT Press.

    Abstract

    A forum devoted to the relationship between music and language begins with an implicit assumption: There is at least one common principle that is central to all human musical systems and all languages, but that is not characteristic of (most) other domains. Why else should these two categories be paired together for analysis? We propose that one candidate for a common principle is their structure. In this chapter, we explore the nature of that structure—and its consequences for psychological and neurological processing mechanisms—within and across these two domains.
  • Timmer, K., Ganushchak, L. Y., Mitlina, Y., & Schiller, N. O. (2013). Choosing first or second language phonology in 125 ms [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 164.

    Abstract

    We are often in a bilingual situation (e.g., overhearing a conversation in the train). We investigated whether first (L1) and second language (L2) phonologies are automatically activated. A masked priming paradigm was used, with Russian words as targets and either Russian or English words as primes. Event-related potentials (ERPs) were recorded while Russian (L1) – English (L2) bilinguals read aloud L1 target words (e.g. РЕЙС /reis/ ‘flight’) primed with either L1 (e.g. РАНА /rana/ ‘wound’) or L2 words (e.g. PACK). Target words were read faster when they were preceded by phonologically related L1 primes but not by orthographically related L2 primes. ERPs showed orthographic priming in the 125-200 ms time window. Thus, both L1 and L2 phonologies are simultaneously activated during L1 reading. The results provide support for non-selective models of bilingual reading, which assume automatic activation of the non-target language phonology even when it is not required by the task.
  • Torreira, F., & Ernestus, M. (2009). Probabilistic effects on French [t] duration. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 448-451). Causal Productions Pty Ltd.

    Abstract

    The present study shows that [t] consonants are affected by probabilistic factors in a syllable-timed language such as French, and in spontaneous as well as in journalistic speech. Study 1 showed a word bigram frequency effect in spontaneous French, but its exact nature depended on the corpus on which the probabilistic measures were based. Study 2 investigated journalistic speech and showed an effect of the joint frequency of the test word and its following word. We discuss the possibility that these probabilistic effects are due to the speaker’s planning of upcoming words, and to the speaker’s adaptation to the listener’s needs.
  • Trujillo, J. P., Levinson, S. C., & Holler, J. (2021). Visual information in computer-mediated interaction matters: Investigating the association between the availability of gesture and turn transition timing in conversation. In M. Kurosu (Ed.), Human-Computer Interaction. Design and User Experience Case Studies. HCII 2021 (pp. 643-657). Cham: Springer. doi:10.1007/978-3-030-78468-3_44.

    Abstract

    Natural human interaction involves the fast-paced exchange of speaker turns. Crucially, if a next speaker waited with planning their turn until the current speaker was finished, language production models would predict much longer turn transition times than what we observe. Next speakers must therefore prepare their turn in parallel to listening. Visual signals likely play a role in this process, for example by helping the next speaker to process the ongoing utterance and thus prepare an appropriately-timed response.

    To understand how visual signals contribute to the timing of turn-taking, and to move beyond the mostly qualitative studies of gesture in conversation, we examined unconstrained, computer-mediated conversations between 20 pairs of participants while systematically manipulating speaker visibility. Using motion tracking and manual gesture annotation, we assessed 1) how visibility affected the timing of turn transitions, and 2) whether use of co-speech gestures and 3) the communicative kinematic features of these gestures were associated with changes in turn transition timing.

    We found that 1) decreased visibility was associated with less tightly timed turn transitions, and 2) the presence of gestures was associated with more tightly timed turn transitions across visibility conditions. Finally, 3) structural and salient kinematics contributed to gesture’s facilitatory effect on turn transition times.

    Our findings suggest that speaker visibility--and especially the presence and kinematic form of gestures--during conversation contributes to the temporal coordination of conversational turns in computer-mediated settings. Furthermore, our study demonstrates that it is possible to use naturalistic conversation and still obtain controlled results.
  • Tuinman, A. (2006). Overcompensation of /t/ reduction in Dutch by German/Dutch bilinguals. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 101-102).
  • Uddén, J., Araújo, S., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2009). A matter of time: Implicit acquisition of recursive sequence structures. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society (pp. 2444-2449).

    Abstract

    A dominant hypothesis in empirical research on the evolution of language is the following: the fundamental difference between animal and human communication systems is captured by the distinction between regular and more complex non-regular grammars. Studies reporting successful artificial grammar learning of nested recursive structures and imaging studies of the same have methodological shortcomings since they typically allow explicit problem solving strategies and this has been shown to account for the learning effect in subsequent behavioral studies. The present study overcomes these shortcomings by using subtle violations of agreement structure in a preference classification task. In contrast to the studies conducted so far, we use an implicit learning paradigm, allowing the time needed for both abstraction processes and consolidation to take place. Our results demonstrate robust implicit learning of recursively embedded structures (context-free grammar) and recursive structures with cross-dependencies (context-sensitive grammar) in an artificial grammar learning task spanning 9 days. Keywords: Implicit artificial grammar learning; centre embedded; cross-dependency; implicit learning; context-sensitive grammar; context-free grammar; regular grammar; non-regular grammar
  • Ünal, E., & Papafragou, A. (2013). Linguistic and conceptual representations of inference as a knowledge source. In S. Baiz, N. Goldman, & R. Hawkes (Eds.), Proceedings of the 37th Annual Boston University Conference on Language Development (BUCLD 37) (pp. 433-443). Boston: Cascadilla Press.
  • Vainio, M., Suni, A., Raitio, T., Nurminen, J., Järvikivi, J., & Alku, P. (2009). New method for delexicalization and its application to prosodic tagging for text-to-speech synthesis. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1703-1706).

    Abstract

    This paper describes a new flexible delexicalization method based on a glottal-excited parametric speech synthesis scheme. The system utilizes inverse-filtered glottal flow and all-pole modelling of the vocal tract. The method provides a possibility to retain and manipulate all relevant prosodic features of any kind of speech. Most importantly, the features include voice quality, which has not been properly modeled in earlier delexicalization methods. The functionality of the new method was tested in a prosodic tagging experiment aimed at providing word prominence data for a text-to-speech synthesis system. The experiment confirmed the usefulness of the method and further corroborated earlier evidence that linguistic factors influence the perception of prosodic prominence.
  • Van Staden, M., Bowerman, M., & Verhelst, M. (2006). Some properties of spatial description in Dutch. In S. C. Levinson, & D. Wilkins (Eds.), Grammars of Space (pp. 475-511). Cambridge: Cambridge University Press.
  • Van Berkum, J. J. A. (2009). The neuropragmatics of 'simple' utterance comprehension: An ERP review. In U. Sauerland, & K. Yatsushiro (Eds.), Semantics and pragmatics: From experiment to theory (pp. 276-316). Basingstoke: Palgrave Macmillan.

    Abstract

    In this chapter, I review my EEG research on comprehending sentences in context from a pragmatics-oriented perspective. The review is organized around four questions: (1) When and how do extra-sentential factors such as the prior text, identity of the speaker, or value system of the comprehender affect the incremental sentence interpretation processes indexed by the so-called N400 component of the ERP? (2) When and how do people identify the referents for expressions such as “he” or “the review”, and how do referential processes interact with sense and syntax? (3) How directly pragmatic are the interpretation-relevant ERP effects reported here? (4) Do readers and listeners anticipate upcoming information? One important claim developed in the chapter is that the well-known N400 component, although often associated with ‘semantic integration’, only indirectly reflects the sense-making involved in structure-sensitive dynamic composition of the type studied in semantics and pragmatics. According to the multiple-cause intensified retrieval (MIR) account -- essentially an extension of the memory retrieval account proposed by Kutas and colleagues -- the amplitude of the word-elicited N400 reflects the computational resources used in retrieving the relatively invariant coded meaning stored in semantic long-term memory for, and made available by, the word at hand. Such retrieval becomes more resource-intensive when the coded meanings cued by this word do not match with expectations raised by the relevant interpretive context, but also when certain other relevance signals, such as strong affective connotation or a marked delivery, indicate the need for deeper processing. The most important consequence of this account is that pragmatic modulations of the N400 come about not because the N400 at hand directly reflects a rich compositional-semantic and/or Gricean analysis to make sense of the word’s coded meaning in this particular context, but simply because the semantic and pragmatic implications of the preceding words have already been computed, and now define a less or more helpful interpretive background within which to retrieve coded meaning for the critical word.
  • Van Valin Jr., R. D. (2009). Case in role and reference grammar. In A. Malchukov, & A. Spencer (Eds.), The Oxford handbook of case (pp. 102-120). Oxford University Press.
  • Van Berkum, J. J. A. (2009). Does the N400 directly reflect compositional sense-making? Psychophysiology, Special Issue: Society for Psychophysiological Research Abstracts for the Forty-Ninth Annual Meeting, 46(Suppl. 1), s2.

    Abstract

    A not uncommon assumption in psycholinguistics is that the N400 directly indexes high-level semantic integration, the compositional, word-driven construction of sentence- and discourse-level meaning in some language-relevant unification space. The various discourse- and speaker-dependent modulations of the N400 uncovered by us and others are often taken to support this 'compositional integration' position. In my talk, I will argue that these N400 modulations are probably better interpreted as only indirectly reflecting compositional sense-making. The account that I will advance for these N400 effects is a variant of the classic Kutas and Federmeier (2002, TICS) memory retrieval account in which context effects on the word-elicited N400 are taken to reflect contextual priming of LTM access. It differs from the latter in making more explicit that the contextual cues that prime access to a word's meaning in LTM can range from very simple (e.g., a single concept) to very complex ones (e.g., a structured representation of the current discourse). Furthermore, it incorporates the possibility, suggested by recent N400 findings, that semantic retrieval can also be intensified in response to certain ‘relevance signals’, such as strong value-relevance, or a marked delivery (linguistic focus, uncommon choice of words, etc). In all, the perspective I'll draw is that in the context of discourse-level language processing, N400 effects reflect an 'overlay of technologies', with the construction of discourse-level representations riding on top of more ancient sense-making technology.
  • Van Wijk, C., & Kempen, G. (1985). From sentence structure to intonation contour: An algorithm for computing pitch contours on the basis of sentence accents and syntactic structure. In B. Müller (Ed.), Sprachsynthese: Zur Synthese von natürlich gesprochener Sprache aus Texten und Konzepten (pp. 157-182). Hildesheim: Georg Olms.
  • Van den Bos, E. J., & Poletiek, F. H. (2006). Implicit artificial grammar learning in adults and children. In R. Sun (Ed.), Proceedings of the 28th Annual Conference of the Cognitive Science Society (CogSci 2006) (p. 2619). Austin, TX, USA: Cognitive Science Society.
  • Van Wijk, C., & Kempen, G. (1982). Kost zinsbouw echt tijd? In R. Stuip, & W. Zwanenberg (Eds.), Handelingen van het zevenendertigste Nederlands Filologencongres (pp. 223-231). Amsterdam: APA-Holland University Press.
  • Van Valin Jr., R. D. (2013). Head-marking languages and linguistic theory. In B. Bickel, L. A. Grenoble, D. A. Peterson, & A. Timberlake (Eds.), Language typology and historical contingency: In honor of Johanna Nichols (pp. 91-124). Amsterdam: Benjamins.

    Abstract

    In her path-breaking 1986 paper, Johanna Nichols proposed a typological contrast between head-marking and dependent-marking languages. Nichols argues that even though the syntactic relations between the head and its dependents are the same in both types of language, the syntactic “bond” between them is not the same; in dependent-marking languages it is one of government, whereas in head-marking languages it is one of apposition. This distinction raises an important question for linguistic theory: How can this contrast – government versus apposition – which can show up in all of the major phrasal types in a language, be captured? The purpose of this paper is to explore the various approaches that have been taken in an attempt to capture the difference between head-marked and dependent-marked syntax in different linguistic theories. The basic problem that head-marking languages pose for syntactic theory will be presented, and then generative approaches will be discussed. The analysis of head-marked structure in Role and Reference Grammar will be presented
  • Van Valin Jr., R. D. (2013). Lexical representation, co-composition, and linking syntax and semantics. In J. Pustejovsky, P. Bouillon, H. Isahara, K. Kanzaki, & C. Lee (Eds.), Advances in generative lexicon theory (pp. 67-107). Dordrecht: Springer.
  • Van Gijn, R., & Gipper, S. (2009). Irrealis in Yurakaré and other languages: On the cross-linguistic consistency of an elusive category. In L. Hogeweg, H. De Hoop, & A. Malchukov (Eds.), Cross-linguistic semantics of tense, aspect, and modality (pp. 155-178). Amsterdam: Benjamins.

    Abstract

    The linguistic category of irrealis does not show stable semantics across languages. This makes it difficult to formulate general statements about this category, and it has led some researchers to reject irrealis as a cross-linguistically valid category. In this paper we look at the semantics of the irrealis category of Yurakaré, an unclassified language spoken in central Bolivia, and compare it to irrealis semantics of a number of other languages. Languages differ with respect to the subcategories they subsume under the heading of irrealis. The variable subcategories are future tense, imperatives, negatives, and habitual aspect. We argue that the cross-linguistic variation is not random, and can be stated in terms of an implicational scale.
  • Van Valin Jr., R. D. (1994). Extraction restrictions, competing theories and the argument from the poverty of the stimulus. In S. D. Lima, R. Corrigan, & G. K. Iverson (Eds.), The reality of linguistic rules (pp. 243-259). Amsterdam: Benjamins.
  • Van Geenhoven, V. (1998). On the Argument Structure of some Noun Incorporating Verbs in West Greenlandic. In M. Butt, & W. Geuder (Eds.), The Projection of Arguments - Lexical and Compositional Factors (pp. 225-263). Stanford, CA, USA: CSLI Publications.
  • Van Valin Jr., R. D. (2009). Privileged syntactic arguments, pivots and controllers. In L. Guerrero, S. Ibáñez, & V. A. Belloro (Eds.), Studies in role and reference grammar (pp. 45-68). Mexico: Universidad Nacional Autónoma de México.
  • Van Valin Jr., R. D. (1998). The acquisition of WH-questions and the mechanisms of language acquisition. In M. Tomasello (Ed.), The new psychology of language: Cognitive and functional approaches to language structure (pp. 221-249). Mahwah, New Jersey: Erlbaum.
  • Van Valin Jr., R. D. (2006). Some universals of verb semantics. In R. Mairal, & J. Gil (Eds.), Linguistic universals (pp. 155-178). Cambridge: Cambridge University Press.
  • Van Valin Jr., R. D. (2009). Role and reference grammar. In F. Brisard, J.-O. Östman, & J. Verschueren (Eds.), Grammar, meaning, and pragmatics (pp. 239-249). Amsterdam: Benjamins.
  • Van de Ven, M., Tucker, B. V., & Ernestus, M. (2009). Semantic context effects in the recognition of acoustically unreduced and reduced words. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (pp. 1867-1870). Causal Productions Pty Ltd.

    Abstract

    Listeners require context to understand the casual pronunciation variants of words that are typical of spontaneous speech (Ernestus et al., 2002). The present study reports two auditory lexical decision experiments, investigating listeners' use of semantic contextual information in the comprehension of unreduced and reduced words. We found a strong semantic priming effect for low frequency unreduced words, whereas there was no such effect for reduced words. Word frequency was facilitatory for all words. These results show that semantic context is relevant especially for the comprehension of unreduced words, which is unexpected given the listener driven explanation of reduction in spontaneous speech.
  • Van Valin Jr., R. D. (2006). Semantic macroroles and language processing. In I. Bornkessel, M. Schlesewsky, B. Comrie, & A. Friederici (Eds.), Semantic role universals and argument linking: Theoretical, typological and psycho-/neurolinguistic perspectives (pp. 263-302). Berlin: Mouton de Gruyter.
  • van Hell, J. G., & Witteman, M. J. (2009). The neurocognition of switching between languages: A review of electrophysiological studies. In L. Isurin, D. Winford, & K. de Bot (Eds.), Multidisciplinary approaches to code switching (pp. 53-84). Philadelphia: John Benjamins.

    Abstract

    The seemingly effortless switching between languages and the merging of two languages into a coherent utterance is a hallmark of bilingual language processing, and reveals the flexibility of human speech and skilled cognitive control. That skill appears to be available not only to speakers when they produce language-switched utterances, but also to listeners and readers when presented with mixed language information. In this chapter, we review electrophysiological studies in which Event-Related Potentials (ERPs) are derived from recordings of brain activity to examine the neurocognitive aspects of comprehending and producing mixed language. Topics we discuss include the time course of brain activity associated with language switching between single stimuli and language switching of words embedded in a meaningful sentence context. The majority of ERP studies report that switching between languages incurs neurocognitive costs, but, more interestingly, ERP patterns differ as a function of L2 proficiency and the amount of daily experience with language switching, the direction of switching (switching into L2 is typically associated with higher switching costs than switching into L1), the type of language switching task, and the predictability of the language switch. Finally, we outline some future directions for this relatively new approach to the study of language switching.
  • Van Putten, S. (2013). The meaning of the Avatime additive particle tsye. In M. Balbach, L. Benz, S. Genzel, M. Grubic, A. Renans, S. Schalowski, M. Stegenwallner, & A. Zeldes (Eds.), Information structure: Empirical perspectives on theory (pp. 55-74). Potsdam: Universitätsverlag Potsdam. Retrieved from http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:de:kobv:517-opus-64804.
  • Verhagen, J. (2009). Light verbs and the acquisition of finiteness and negation in Dutch as a second language. In C. Dimroth, & P. Jordens (Eds.), Functional categories in learner language (pp. 203-234). Berlin: Mouton de Gruyter.
  • Verkerk, A. (2009). A semantic map of secondary predication. In B. Botma, & J. Van Kampen (Eds.), Linguistics in the Netherlands 2009 (pp. 115-126).
  • Vernes, S. C., & Fisher, S. E. (2013). Genetic pathways implicated in speech and language. In S. Helekar (Ed.), Animal models of speech and language disorders (pp. 13-40). New York: Springer. doi:10.1007/978-1-4614-8400-4_2.

    Abstract

    Disorders of speech and language are highly heritable, providing strong support for a genetic basis. However, the underlying genetic architecture is complex, involving multiple risk factors. This chapter begins by discussing genetic loci associated with common multifactorial language-related impairments and goes on to detail the only gene (known as FOXP2) to be directly implicated in a rare monogenic speech and language disorder. Although FOXP2 was initially uncovered in humans, model systems have been invaluable in progressing our understanding of the function of this gene and its associated pathways in language-related areas of the brain. Research in species from mouse to songbird has revealed effects of this gene on relevant behaviours including acquisition of motor skills and learned vocalisations and demonstrated a role for Foxp2 in neuronal connectivity and signalling, particularly in the striatum. Animal models have also facilitated the identification of wider neurogenetic networks thought to be involved in language development and disorder and allowed the investigation of new candidate genes for disorders involving language, such as CNTNAP2 and FOXP1. Ongoing work in animal models promises to yield new insights into the genetic and neural mechanisms underlying human speech and language.
  • Von Stutterheim, C., Carroll, M., & Klein, W. (2009). New perspectives in analyzing aspectual distinctions across languages. In W. Klein, & P. Li (Eds.), The expression of time (pp. 195-216). Berlin: Mouton de Gruyter.
  • Weber, A. (1998). Listening to nonnative language which violates native assimilation rules. In D. Duez (Ed.), Proceedings of the European Scientific Communication Association workshop: Sound patterns of Spontaneous Speech (pp. 101-104).

    Abstract

    Recent studies using phoneme detection tasks have shown that spoken-language processing is neither facilitated nor interfered with by optional assimilation, but is inhibited by violation of obligatory assimilation. Interpretation of these results depends on an assessment of their generality, specifically, whether they also obtain when listeners are processing nonnative language. Two separate experiments are presented in which native listeners of German and native listeners of Dutch had to detect a target fricative in legal monosyllabic Dutch nonwords. All of the nonwords were correct realisations in standard Dutch. For German listeners, however, half of the nonwords contained phoneme strings which violate the German fricative assimilation rule. Whereas the Dutch listeners showed no significant effects, German listeners detected the target fricative faster when the German fricative assimilation was violated than when no violation occurred. The results might suggest that violation of assimilation rules does not have to make processing more difficult per se.
  • Weber, A. (2009). The role of linguistic experience in lexical recognition [Abstract]. Journal of the Acoustical Society of America, 125, 2759.

    Abstract

    Lexical recognition is typically slower in L2 than in L1. Part of the difficulty comes from insufficiently precise processing of L2 phonemes. Consequently, L2 listeners fail to eliminate candidate words that L1 listeners can exclude from competing for recognition. For instance, the inability to distinguish /r/ from /l/ in rocket and locker makes both words possible candidates for Japanese listeners when hearing their onset (e.g., Cutler, Weber, and Otake, 2006). The L2 disadvantage can, however, be dispelled: For L2 listeners, but not L1 listeners, L2 speech from a non-native talker with the same language background is known to be as intelligible as L2 speech from a native talker (e.g., Bent and Bradlow, 2003). A reason for this may be that L2 listeners have ample experience with segmental deviations that are characteristic of their own accent. On this account, only phonemic deviations that are typical of the listeners’ own accent will cause spurious lexical activation in L2 listening (e.g., English magic pronounced as megic for Dutch listeners). In this talk, I will present evidence from cross-modal priming studies with a variety of L2 listener groups, showing how the processing of phonemic deviations is accent-specific but withstands fine phonetic differences.
  • Widlok, T. (2006). Two ways of looking at a Mangetti grove. In A. Takada (Ed.), Proceedings of the workshop: Landscape and society (pp. 11-16). Kyoto: 21st Century Center of Excellence Program.
  • Windhouwer, M., Petro, J., Newskaya, I., Drude, S., Aristar-Dry, H., & Gippert, J. (2013). Creating a serialization of LMF: The experience of the RELISH project. In G. Francopoulo (Ed.), LMF - Lexical Markup Framework (pp. 215-226). London: Wiley.
  • Windhouwer, M., & Wright, S. E. (2013). LMF and the Data Category Registry: Principles and application. In G. Francopoulo (Ed.), LMF: Lexical Markup Framework (pp. 41-50). London: Wiley.
  • Wittek, A. (1998). Learning verb meaning via adverbial modification: Change-of-state verbs in German and the adverb "wieder" again. In A. Greenhill, M. Hughes, H. Littlefield, & H. Walsh (Eds.), Proceedings of the 22nd Annual Boston University Conference on Language Development (pp. 779-790). Somerville, MA: Cascadilla Press.
  • Wittenburg, P., Brugman, H., Russel, A., Klassmann, A., & Sloetjes, H. (2006). ELAN: a professional framework for multimodality research. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1556-1559).

    Abstract

    Utilization of computer tools in linguistic research has gained importance with the maturation of media frameworks for the handling of digital audio and video. The increased use of these tools in gesture, sign language and multimodal interaction studies has led to stronger requirements on the flexibility, the efficiency and in particular the time accuracy of annotation tools. This paper describes the efforts made to make ELAN a tool that meets these requirements, with special attention to the developments in the area of time accuracy. In subsequent sections an overview will be given of other enhancements in the latest versions of ELAN, that make it a useful tool in multimodality research.
  • Wittenburg, P., Broeder, D., Klein, W., Levinson, S. C., & Romary, L. (2006). Foundations of modern language resource archives. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 625-628).

    Abstract

    A number of serious reasons will convince an increasing number of researchers to store their relevant material in centers which we will call "language resource archives". They combine the duty of taking care of long-term preservation as well as the task to give access to their material to different user groups. Access here is meant in the sense that an active interaction with the data will be made possible to support the integration of new data, new versions or commentaries of all sorts. Modern Language Resource Archives will have to adhere to a number of basic principles to fulfill all requirements, and they will have to be involved in federations to create joint language resource domains, making it even simpler for researchers to access the data. This paper makes an attempt to formulate the essential pillars language resource archives have to adhere to.
  • Wittenburg, P., & Ringersma, J. (2013). Metadata description for lexicons. In R. H. Gouws, U. Heid, W. Schweickard, & H. E. Wiegand (Eds.), Dictionaries: An international encyclopedia of lexicography: Supplementary volume: Recent developments with focus on electronic and computational lexicography (pp. 1329-1335). Berlin: Mouton de Gruyter.
  • Wood, N. (2009). Field recording for dummies. In A. Majid (Ed.), Field manual volume 12 (p. V). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Wright, S. E., Windhouwer, M., Schuurman, I., & Kemps-Snijders, M. (2013). Community efforts around the ISOcat Data Category Registry. In I. Gurevych, & J. Kim (Eds.), The People's Web meets NLP: Collaboratively constructed language resources (pp. 349-374). New York: Springer.

    Abstract

    The ISOcat Data Category Registry provides a community computing environment for creating, storing, retrieving, harmonizing and standardizing data category specifications (DCs), used to register linguistic terms used in various fields. This chapter recounts the history of DC documentation in TC 37, beginning from paper-based lists created for lexicographers and terminologists and progressing to the development of a web-based resource for a much broader range of users. While describing the considerable strides that have been made to collect a very large comprehensive collection of DCs, it also outlines difficulties that have arisen in developing a fully operative web-based computing environment for achieving consensus on data category names, definitions, and selections and describes efforts to overcome some of the present shortcomings and to establish positive working procedures designed to engage a wide range of people involved in the creation of language resources.
  • Xiao, M., Kong, X., Liu, J., & Ning, J. (2009). TMBF: Bloom filter algorithms of time-dependent multi bit-strings for incremental set. In Proceedings of the 2009 International Conference on Ultra Modern Telecommunications & Workshops.

    Abstract

    Sets are widely used as a basic data structure. However, when a set is used for large-scale data, the costs of storage, search, and transport become considerable. The Bloom filter uses a fixed-size bit string to represent the elements of a static set, which reduces storage space and keeps the search cost at a fixed constant. This time-space efficiency is achieved at the cost of a small probability of false positives in membership queries. For many applications, however, the space savings and low locating time outweigh this drawback. The dynamic Bloom filter (DBF) supports concise representation and approximate membership queries of dynamic sets instead of static sets. It has been proved that DBF not only possesses the advantages of the standard Bloom filter, but also has better properties when dealing with dynamic sets. This paper proposes a time-dependent multiple bit-strings Bloom filter (TMBF), which is rooted in the DBF and targets dynamic incremental sets. TMBF uses multiple bit strings in time order to represent a dynamically increasing set and uses backward searching to test whether an element is in the set. Based on system logs from a real P2P file sharing system, the evaluation shows a 20% reduction in search cost compared to DBF.
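    As an illustrative aside, the general idea described here, chaining fixed-size Bloom filters over time for a growing set and querying them newest-first ("backward searching"), can be sketched as below. The class names, filter size, hash scheme, and capacity threshold are assumptions chosen for illustration only; this is not the TMBF implementation evaluated in the paper.

```python
# Hypothetical sketch of time-ordered Bloom filters for an incremental set,
# queried newest-first ("backward searching"). Not the paper's TMBF code.
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = bytearray(size)   # fixed-size bit string
        self.count = 0                # number of inserted elements

    def _positions(self, item):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1
        self.count += 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

class TimeOrderedBloom:
    """Chain of fixed-size filters appended over time as the set grows."""
    def __init__(self, capacity_per_filter=100):
        self.capacity = capacity_per_filter
        self.filters = [BloomFilter()]

    def add(self, item):
        if self.filters[-1].count >= self.capacity:
            self.filters.append(BloomFilter())   # start a new bit string
        self.filters[-1].add(item)

    def __contains__(self, item):
        # Backward search: check the most recent bit strings first.
        return any(item in f for f in reversed(self.filters))

store = TimeOrderedBloom(capacity_per_filter=2)
for name in ("fileA", "fileB", "fileC"):
    store.add(name)
print("fileC" in store, "fileZ" in store)  # True False (up to false positives)
```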
  • Zeshan, U. (2006). Sign languages of the world. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 11) (pp. 358-365). Amsterdam: Elsevier.

    Abstract

    Although sign language-using communities exist in all areas of the world, few sign languages have been documented in detail. Sign languages occur in a variety of sociocultural contexts, ranging from sign languages used in closed village communities to officially recognized national sign languages. They may be grouped into language families on historical grounds or may participate in various language contact situations. Systematic cross-linguistic comparison reveals both significant structural similarities and important typological differences between sign languages. Focusing on information from non-Western countries, this article provides an overview of the sign languages of the world.
  • Zhang, Y., Ding, R., Frassinelli, D., Tuomainen, J., Klavinskis-Whiting, S., & Vigliocco, G. (2021). Electrophysiological signatures of second language multimodal comprehension. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2971-2977). Vienna: Cognitive Science Society.

    Abstract

    Language is multimodal: non-linguistic cues, such as prosody, gestures and mouth movements, are always present in face-to-face communication and interact to support processing. In this paper, we ask whether and how multimodal cues affect L2 processing by recording EEG for highly proficient bilinguals when watching naturalistic materials. For each word, we quantified surprisal and the informativeness of prosody, gestures, and mouth movements. We found that each cue modulates the N400: prosodic accentuation, meaningful gestures, and informative mouth movements all reduce the N400. Further, effects of meaningful gestures but not mouth informativeness are enhanced by prosodic accentuation, whereas effects of mouth movements are enhanced by meaningful gestures but reduced by beat gestures. Compared with L1, L2 participants benefit less from cues and their interactions, except for meaningful gestures and mouth movements. Thus, in real-world language comprehension, L2 comprehenders use multimodal cues just as L1 speakers do, albeit to a lesser extent.
  • Zhang, Y., Amatuni, A., Cain, E., Wang, X., Crandall, D., & Yu, C. (2021). Human learners integrate visual and linguistic information in cross-situational verb learning. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2267-2273). Vienna: Cognitive Science Society.

    Abstract

    Learning verbs is challenging because it is difficult to infer the precise meaning of a verb when there are a multitude of relations that one can derive from a single event. To study this verb learning challenge, we used children's egocentric view collected from naturalistic toy-play interaction as learning materials and investigated how visual and linguistic information provided in individual naming moments as well as cross-situational information provided from multiple learning moments can help learners resolve this mapping problem using the Human Simulation Paradigm. Our results show that learners benefit from seeing children's egocentric views compared to third-person observations. In addition, linguistic information can help learners identify the correct verb meaning by eliminating possible meanings that do not belong to the linguistic category. Learners are also able to integrate visual and linguistic information both within and across learning situations to reduce the ambiguity in the space of possible verb meanings.
  • Zimianiti, E., Dimitrakopoulou, M., & Tsangalidis, A. (2021). Thematic roles in dementia: The case of psychological verbs. In A. Botinis (Ed.), ExLing 2021: Proceedings of the 12th International Conference of Experimental Linguistics (pp. 269-272). Athens, Greece: ExLing Society.

    Abstract

    This study investigates the difficulty of people with Mild Cognitive Impairment (MCI), mild and moderate Alzheimer’s disease (AD) in the production and comprehension of psychological verbs, as thematic realization may involve both the canonical and non-canonical realization of arguments. More specifically, we aim to examine whether there is a deficit in the mapping of syntactic and semantic representations in psych-predicates regarding Greek-speaking individuals with MCI and AD, and whether the linguistic abilities associated with θ-role assignment decrease as the disease progresses. Moreover, given the decline of cognitive abilities in people with MCI and AD, we explore the effects of components of memory (Semantic, Episodic, and Working Memory) on the assignment of thematic roles in constructions with psychological verbs.
  • Zwitserlood, I., & Van Gijn, I. (2006). Agreement phenomena in Sign Language of the Netherlands. In P. Ackema (Ed.), Arguments and Agreement (pp. 195-229). Oxford: Oxford University Press.
  • Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Expression of multiple entities in Turkish Sign Language (TİD). In E. Arik (Ed.), Current Directions in Turkish Sign Language Research (pp. 272-302). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    This paper reports on an exploration of the ways in which multiple entities are expressed in Turkish Sign Language (TİD). The (descriptive and quantitative) analyses provided are based on a corpus of both spontaneous data and specifically elicited data, in order to provide as comprehensive an account as possible. We have found several devices in TİD for expression of multiple entities, in particular localization, spatial plural predicate inflection, and a specific form used to express multiple entities that are side by side in the same configuration (not reported for any other sign language to date), as well as numerals and quantifiers. In contrast to some other signed languages, TİD does not appear to have a productive system of plural reduplication. We argue that none of the devices encountered in the TİD data is a genuine plural marking device and that the plural interpretation of multiple entity localizations and plural predicate inflections is a by-product of the use of space to indicate the existence or the involvement in an event of multiple entities.
