Publications

  • Seuren, P. A. M. (1980). Dreiwertige Logik und die Semantik natürlicher Sprache. In J. Ballweg, & H. Glinz (Eds.), Grammatik und Logik: Jahrbuch 1979 des Instituts für deutsche Sprache (pp. 72-103). Düsseldorf: Pädagogischer Verlag Schwann.
  • Seuren, P. A. M. (2006). Factivity. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 4) (pp. 423-424). Amsterdam: Elsevier.

    Abstract

    Some predicates are ‘factive’ in that they induce the presupposition that what is said in their subordinate that-clause is true.
  • Seuren, P. A. M. (2006). Donkey sentences. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 3) (pp. 763-766). Amsterdam: Elsevier.

    Abstract

    The term ‘donkey sentences’ derives from the medieval philosopher Walter Burleigh, whose example sentences contain mention of donkeys. The modern philosopher Peter Geach rediscovered Burleigh's sentences and the associated problem. The problem is that natural language anaphoric pronouns are sometimes used in a way that cannot be accounted for in terms of modern predicate calculus. The solution lies in establishing a separate category of anaphoric pronouns that refer via the intermediary of a contextually given antecedent, possibly an existentially quantified expression.
  • Seuren, P. A. M. (2006). Early formalization tendencies in 20th-century American linguistics. In S. Auroux, E. Koerner, H.-J. Niederehe, & K. Versteegh (Eds.), History of the Language Sciences: An International Handbook on the Evolution of the Study of Language from the Beginnings to the Present (pp. 2026-2034). Berlin: Walter de Gruyter.
  • Seuren, P. A. M. (2006). Discourse domain. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 1) (pp. 638-639). Amsterdam: Elsevier.

    Abstract

    A discourse domain D is a form of middle-term memory for the storage of the information embodied in the discourse at hand. The information carried by a new utterance u is added to D (u is incremented to D). The processes involved and the specific structure of D are a matter of ongoing research.
  • Seuren, P. A. M. (2006). Discourse semantics. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 3) (pp. 669-677). Amsterdam: Elsevier.

    Abstract

    Discourse semantics (DSx) is based on the fact that the interpretation of uttered sentences is dependent on and co-determined by the information stored in a specialized middle-term cognitive memory called discourse domain (D). DSx studies the structure and dynamics of Ds and the conditions to be fulfilled by D for proper interpretation. It does so in the light of the truth-conditional criteria for semantics, with an emphasis on intensionality phenomena. It requires the assumption of virtual entities and virtual facts. Any model-theoretic interpretation holds between discourse structures and pre-established verification domains.
  • Seuren, P. A. M. (2006). Aristotle and linguistics. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 1) (pp. 469-471). Amsterdam: Elsevier.

    Abstract

    Aristotle's importance in the professional study of language consists first of all in the fact that he demythologized language and made it an object of rational investigation. In the context of his theory of truth as correspondence, he also provided the first semantic analysis of propositions in that he distinguished two main constituents, the predicate, which expresses a property, and the remainder of the proposition, referring to a substance to which the property is assigned. That assignment is either true or false. Later, the ‘remainder’ was called subject term, and the Aristotelian predicate was identified with the verb in the sentence. The Aristotelian predicate, however, is more like what is now called the ‘comment,’ whereas his remainder corresponds to the topic. Aristotle, furthermore, defined nouns and verbs as word classes. In addition, he introduced the term ‘case’ for paradigmatic morphological variation.
  • Seuren, P. A. M. (2006). Meaning, the cognitive dependency of lexical meaning. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 7) (pp. 575-577). Amsterdam: Elsevier.

    Abstract

    There is a growing awareness among theoretical linguists and philosophers of language that the linguistic definition of lexical meanings, which must be learned when one learns a language, underdetermines not only full utterance interpretation but also sentence meaning. The missing information must be provided by cognition – that is, either general encyclopedic or specific situational knowledge. This fact crucially shows the basic insufficiency of current standard model-theoretic semantics as a paradigm for the analysis and description of linguistic meaning.
  • Seuren, P. A. M. (2006). Lexical conditions. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 7) (pp. 77-79). Amsterdam: Elsevier.

    Abstract

    The lexical conditions, also known as satisfaction conditions, of a predicate P are the conditions that must be satisfied by the term referents of P for P applied to these term referents to yield a true sentence. In view of presupposition theory it makes sense to distinguish two categories of lexical conditions, the preconditions that must be satisfied for the sentence to be usable in any given discourse, and the update conditions which must be satisfied for the sentence to yield truth.
  • Seuren, P. A. M. (2006). Multivalued logics. In K. Brown (Ed.), Encyclopedia of Language and Linguistics (vol. 8) (pp. 387-390). Amsterdam: Elsevier.

    Abstract

    The widely prevailing view that standard bivalent logic is the only possible sound logical system, imposed by metaphysical necessity, has been shattered by the development of multivalent logics during the 20th century. It is now clear that standard bivalent logic is merely the minimal representative of a wide variety of viable logics with any number of truth values. These viable logics can be subdivided into families. In this article, the Kleene family and the PPCn family are subjected to special examination, as they appear to be most relevant for the study of the logical properties of human language.
  • Seuren, P. A. M. (2003). Logic, language and thought. In H. J. Ribeiro (Ed.), Encontro nacional de filosofia analítica (pp. 259-276). Coimbra, Portugal: Faculdade de Letras.
  • Seuren, P. A. M. (1990). Filosofie van de taalwetenschappen. Leiden: Martinus Nijhoff.
  • Seuren, P. A. M. (1990). Serial verb constructions. In B. D. Joseph, & A. M. Zwicky (Eds.), When verbs collide: Papers from the 1990 Ohio State Mini-Conference on Serial Verbs (pp. 14-33). Columbus, OH: The Ohio State University, Department of Linguistics.
  • Seuren, P. A. M. (1998). Western linguistics: An historical introduction. Oxford: Blackwell.
  • Seuren, P. A. M. (1998). Towards a discourse-semantic account of donkey anaphora. In S. Botley, & T. McEnery (Eds.), New Approaches to Discourse Anaphora: Proceedings of the Second Colloquium on Discourse Anaphora and Anaphor Resolution (DAARC2) (pp. 212-220). Lancaster: University Centre for Computer Corpus Research on Language, Lancaster University.
  • Seuren, P. A. M. (1980). Variabele competentie: Linguïstiek en sociolinguïstiek anno 1980. In Handelingen van het 36e Nederlands Filologencongres: Gehouden te Groningen op woensdag 9, donderdag 10 en vrijdag 11 April 1980 (pp. 41-56). Amsterdam: Holland University Press.
  • Shi, R., Werker, J., & Cutler, A. (2003). Function words in early speech perception. In Proceedings of the 15th International Congress of Phonetic Sciences (pp. 3009-3012).

    Abstract

    Three experiments examined whether infants recognise functors in phrases, and whether their representations of functors are phonetically well specified. Eight- and 13-month-old English infants heard monosyllabic lexical words preceded by real functors (e.g., the, his) versus nonsense functors (e.g., kuh); the latter were minimally modified segmentally (but not prosodically) from real functors. Lexical words were constant across conditions; thus recognition of functors would appear as longer listening time to sequences with real functors. Eight-month-olds' listening times to sequences with real versus nonsense functors did not significantly differ, suggesting that they did not recognise real functors, or functor representations lacked phonetic specification. However, 13-month-olds listened significantly longer to sequences with real functors. Thus, somewhere between 8 and 13 months of age infants learn familiar functors and represent them with segmental detail. We propose that accumulated frequency of functors in input in general passes a critical threshold during this time.
  • Li, Y., Wu, S., Shi, S., Tong, S., Zhang, Y., & Guo, X. (2021). Enhanced inter-brain connectivity between children and adults during cooperation: a dual EEG study. In 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) (pp. 6289-6292). doi:10.1109/EMBC46164.2021.9630330.

    Abstract

    Previous fNIRS studies have suggested that adult-child cooperation is accompanied by increased inter-brain synchrony. However, its reflection in the electrophysiological synchrony remains unclear. In this study, we designed a naturalistic and well-controlled adult-child interaction paradigm using a tangram solving video game, and recorded dual-EEG from child and adult dyads during cooperative and individual conditions. By calculating the directed inter-brain connectivity in the theta and alpha bands, we found that the inter-brain frontal network was more densely connected and stronger in strength during the cooperative than the individual condition when the adult was watching the child playing. Moreover, the inter-brain network across different dyads shared more common information flows from the player to the observer during cooperation, but was more individually different in solo play. The results suggest an enhancement in inter-brain EEG interactions during adult-child cooperation. However, while the enhancement was evident in all cooperative cases, it partly depended on the role of the participants.
  • Skiba, R. (2006). Computeranalyse/Computer Analysis. In U. Amon, N. Dittmar, K. Mattheier, & P. Trudgill (Eds.), Sociolinguistics: An international handbook of the science of language and society [2nd completely revised and extended edition] (pp. 1187-1197). Berlin, New York: de Gruyter.
  • Skiba, R. (2003). Computer analysis: Corpus-based language research. In U. Amon, N. Dittmar, K. Mattheier, & P. Trudgill (Eds.), Handbook "Sociolinguistics" (2nd ed.) (pp. 1250-1260). Berlin: de Gruyter.
  • Skiba, R. (1998). Fachsprachenforschung in wissenschaftstheoretischer Perspektive. Tübingen: Gunter Narr.
  • Skiba, R. (1990). Steinbruch-Datenbanken: Materialien für „Deutsch als Zweitsprache für Kinder und Jugendliche" und „Deutsch als Fachsprache". In Lehr- und Lernmittel-Datenbanken für den Fremdsprachenunterricht (pp. 15-20). Zürich: Eurocentres - Learning Service.
  • De Smedt, K., & Kempen, G. (1990). Discontinuous constituency in Segment Grammar. In Proceedings of the Symposium on Discontinuous Constituency. Tilburg: University of Brabant.
  • Kita, S., & Dickey, L. W. (Eds.). (1998). Max Planck Institute for Psycholinguistics: Annual report 1998. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Stivers, T. (2006). Treatment decisions: negotiations between doctors and parents in acute care encounters. In J. Heritage, & D. W. Maynard (Eds.), Communication in medical care: Interaction between primary care physicians and patients (pp. 279-312). Cambridge: Cambridge University Press.
  • Stolker, C. J. J. M., & Poletiek, F. H. (1998). Smartengeld - Wat zijn we eigenlijk aan het doen? Naar een juridische en psychologische evaluatie. In F. Stadermann (Ed.), Bewijs en letselschade (pp. 71-86). Lelystad, The Netherlands: Koninklijke Vermande.
  • Suppes, P., Böttner, M., & Liang, L. (1998). Machine Learning of Physics Word Problems: A Preliminary Report. In A. Aliseda, R. van Glabbeek, & D. Westerståhl (Eds.), Computing Natural Language (pp. 141-154). Stanford, CA, USA: CSLI Publications.
  • Ten Bosch, L., Baayen, R. H., & Ernestus, M. (2006). On speech variation and word type differentiation by articulatory feature representations. In Proceedings of Interspeech 2006 (pp. 2230-2233).

    Abstract

    This paper describes ongoing research aiming at the description of variation in speech as represented by asynchronous articulatory features. We will first illustrate how distances in the articulatory feature space can be used for event detection along speech trajectories in this space. The temporal structure imposed by the cosine distance in articulatory feature space coincides to a large extent with the manual segmentation on phone level. The analysis also indicates that the articulatory feature representation provides better such alignments than the MFCC representation does. Secondly, we will present first results that indicate that articulatory features can be used to probe for acoustic differences in the onsets of Dutch singulars and plurals.
  • Ten Bosch, L., Hämäläinen, A., Scharenborg, O., & Boves, L. (2006). Acoustic scores and symbolic mismatch penalties in phone lattices. In Proceedings of the 2006 IEEE International Conference on Acoustics, Speech and Signal Processing [ICASSP 2006]. IEEE.

    Abstract

    This paper builds on previous work that aims at unraveling the structure of the speech signal by means of using probabilistic representations. The context of this work is a multi-pass speech recognition system in which a phone lattice is created and used as a basis for a lexical search in which symbolic mismatches are allowed at certain costs. The focus is on the optimization of the costs of phone insertions, deletions and substitutions that are used in the lexical decoding pass. Two optimization approaches are presented, one related to a multi-pass computational model for human speech recognition, the other based on a decoding in which Bayes’ risks are minimized. In the final section, the advantages of these optimization methods are discussed and compared.
  • Terrill, A., & Dunn, M. (2006). Semantic transference: Two preliminary case studies from the Solomon Islands. In C. Lefebvre, L. White, & C. Jourdan (Eds.), L2 acquisition and Creole genesis: Dialogues (pp. 67-85). Amsterdam: Benjamins.
  • Terrill, A. (1998). Biri. München: Lincom Europa.

    Abstract

    This work presents a salvage grammar of the Biri language of Eastern Central Queensland, a Pama-Nyungan language belonging to the large Maric subgroup. As the language is no longer used, the grammatical description is based on old written sources and on recordings made by linguists in the 1960s and 1970s. Biri is in many ways typical of the Pama-Nyungan languages of Southern Queensland. It has split case marking systems, marking nouns according to an ergative/absolutive system and pronouns according to a nominative/accusative system. Unusually for its area, Biri also has bound pronouns on its verb, cross-referencing the person, number and case of core participants. As far as it is possible, the grammatical discussion is ‘theory neutral’. The first four chapters deal with the phonology, morphology, and syntax of the language. The last two chapters contain a substantial discussion of Biri’s place in the Pama-Nyungan family. In chapter 6 the numerous dialects of the Biri language are discussed. In chapter 7 the close linguistic relationship between Biri and the surrounding languages is examined.
  • Terrill, A. (2003). A grammar of Lavukaleve. Berlin: Mouton de Gruyter.
  • Terrill, A. (2006). Central Solomon languages. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 2) (pp. 279-280). Amsterdam: Elsevier.

    Abstract

    The Papuan languages of the central Solomon Islands are a negatively defined areal grouping: They are those four or possibly five languages in the central Solomon Islands that do not belong to the Austronesian family. Bilua (Vella Lavella), Touo (Rendova), Lavukaleve (Russell Islands), Savosavo (Savo Island) and possibly Kazukuru (New Georgia) have been identified as non-Austronesian since the early 20th century. However, their affiliations both to each other and to other languages still remain a mystery. Heterogeneous and until recently largely undescribed, they present an interesting departure from what is known both of Austronesian languages in the region and of the Papuan languages of the mainland of New Guinea.
  • Trujillo, J. P., Levinson, S. C., & Holler, J. (2021). Visual information in computer-mediated interaction matters: Investigating the association between the availability of gesture and turn transition timing in conversation. In M. Kurosu (Ed.), Human-Computer Interaction. Design and User Experience Case Studies. HCII 2021 (pp. 643-657). Cham: Springer. doi:10.1007/978-3-030-78468-3_44.

    Abstract

    Natural human interaction involves the fast-paced exchange of speaker turns. Crucially, if a next speaker waited with planning their turn until the current speaker was finished, language production models would predict much longer turn transition times than what we observe. Next speakers must therefore prepare their turn in parallel to listening. Visual signals likely play a role in this process, for example by helping the next speaker to process the ongoing utterance and thus prepare an appropriately-timed response.

    To understand how visual signals contribute to the timing of turn-taking, and to move beyond the mostly qualitative studies of gesture in conversation, we examined unconstrained, computer-mediated conversations between 20 pairs of participants while systematically manipulating speaker visibility. Using motion tracking and manual gesture annotation, we assessed 1) how visibility affected the timing of turn transitions, and 2) whether use of co-speech gestures and 3) the communicative kinematic features of these gestures were associated with changes in turn transition timing.

    We found that 1) decreased visibility was associated with less tightly timed turn transitions, and 2) the presence of gestures was associated with more tightly timed turn transitions across visibility conditions. Finally, 3) structural and salient kinematics contributed to gesture’s facilitatory effect on turn transition times.

    Our findings suggest that speaker visibility--and especially the presence and kinematic form of gestures--during conversation contributes to the temporal coordination of conversational turns in computer-mediated settings. Furthermore, our study demonstrates that it is possible to use naturalistic conversation and still obtain controlled results.
  • Tuinman, A. (2006). Overcompensation of /t/ reduction in Dutch by German/Dutch bilinguals. In Variation, detail and representation: 10th Conference on Laboratory Phonology (pp. 101-102).
  • Van Staden, M., Bowerman, M., & Verhelst, M. (2006). Some properties of spatial description in Dutch. In S. C. Levinson, & D. Wilkins (Eds.), Grammars of Space (pp. 475-511). Cambridge: Cambridge University Press.
  • Van Turennout, M., Schmitt, B., & Hagoort, P. (2003). When words come to mind: Electrophysiological insights on the time course of speaking and understanding words. In N. O. Schiller, & A. S. Meyer (Eds.), Phonetics and phonology in language comprehension and production: Differences and similarities (pp. 241-278). Berlin: Mouton de Gruyter.
  • Van Staden, M., & Majid, A. (2003). Body colouring task 2003. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 66-68). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877666.

    Abstract

    This Field Manual entry has been superseded by the published version: Van Staden, M., & Majid, A. (2006). Body colouring task. Language Sciences, 28(2-3), 158-161. doi:10.1016/j.langsci.2005.11.004.

    Additional information

    2003_body_model_large.pdf

  • Van den Bos, E. J., & Poletiek, F. H. (2006). Implicit artificial grammar learning in adults and children. In R. Sun (Ed.), Proceedings of the 28th Annual Conference of the Cognitive Science Society (CogSci 2006) (p. 2619). Austin, TX, USA: Cognitive Science Society.
  • Van Valin Jr., R. D. (2003). Minimalism and explanation. In J. Moore, & M. Polinsky (Eds.), The nature of explanation in linguistic theory (pp. 281-297). University of Chicago Press.
  • Van Geenhoven, V. (1998). On the Argument Structure of some Noun Incorporating Verbs in West Greenlandic. In M. Butt, & W. Geuder (Eds.), The Projection of Arguments - Lexical and Compositional Factors (pp. 225-263). Stanford, CA, USA: CSLI Publications.
  • Van Valin Jr., R. D. (1998). The acquisition of WH-questions and the mechanisms of language acquisition. In M. Tomasello (Ed.), The new psychology of language: Cognitive and functional approaches to language structure (pp. 221-249). Mahwah, New Jersey: Erlbaum.
  • Van Valin Jr., R. D. (2006). Some universals of verb semantics. In R. Mairal, & J. Gil (Eds.), Linguistic universals (pp. 155-178). Cambridge: Cambridge University Press.
  • Van Valin Jr., R. D. (2006). Semantic macroroles and language processing. In I. Bornkessel, M. Schlesewsky, B. Comrie, & A. Friederici (Eds.), Semantic role universals and argument linking: Theoretical, typological and psycho-/neurolinguistic perspectives (pp. 263-302). Berlin: Mouton de Gruyter.
  • Von Stutterheim, C., Carroll, M., & Klein, W. (2003). Two ways of construing complex temporal structures. In F. Lenz (Ed.), Deictic conceptualization of space, time and person (pp. 97-133). Amsterdam: Benjamins.
  • Vonk, W., & Cozijn, R. (2003). On the treatment of saccades and regressions in eye movement measures of reading time. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eye: Cognitive and applied aspects of eye movement research (pp. 291-312). Amsterdam: Elsevier.
  • De Vos, C. (2006). Mixed signals: Combining affective and linguistic functions of eyebrows in Sign Language of the Netherlands (Master's thesis). Nijmegen: Department of Linguistics, Radboud University.

    Abstract

    Sign Language of the Netherlands (NGT) is a visual-gestural language in which linguistic information is conveyed through manual as well as non-manual channels; not only the hands, but also body position, head position and facial expression are important for the language structure. Facial expressions serve grammatical functions in the marking of topics, yes/no questions, and wh-questions (Coerts, 1992). Furthermore, facial expression is used nonlinguistically in the expression of affect (Ekman, 1979). Consequently, at the phonetic level obligatory marking of grammar using facial expression may conflict with the expression of affect. In this study, I investigated the interplay of linguistic and affective functions of brow movements in NGT. Three hypotheses were tested in this thesis. The first is that the affective markers of eyebrows would dominate over the linguistic markers. The second hypothesis predicts that the grammatical markers dominate over the affective brow movements. A third possibility is that a Phonetic Sum would occur in which both functions are combined simultaneously. I elicited sentences combining grammatical and affective functions of eyebrows using a randomised design. Five sentence types were included: declarative sentences, topic sentences, yes-no questions, wh-questions with the wh-sign sentence-final, and wh-questions with the wh-sign sentence-initial. These sentences were combined with neutral, surprised, angry, and distressed affect. The brow movements were analysed using the Facial Action Coding System (Ekman, Friesen, & Hager, 2002a). In these sentences, the eyebrows serve a linguistic function, an affective function, or both. Surprisingly, it was found that a Phonetic Sum occurs in which the phonetic weight of Action Unit 4 appears to play an important role. The results show that affect displays may alter question signals in NGT.
  • Wagner, A., & Braun, A. (2003). Is voice quality language-dependent? Acoustic analyses based on speakers of three different languages. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 651-654). Adelaide: Causal Productions.
  • Warner, N. (2003). Rapid perceptibility as a factor underlying universals of vowel inventories. In A. Carnie, H. Harley, & M. Willie (Eds.), Formal approaches to function in grammar, in honor of Eloise Jelinek (pp. 245-261). Amsterdam: Benjamins.
  • Weber, A., & Smits, R. (2003). Consonant and vowel confusion patterns by American English listeners. In M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003) (pp. 1437-1440). Adelaide: Causal Productions.

    Abstract

    This study investigated the perception of American English phonemes by native listeners. Listeners identified either the consonant or the vowel in all possible English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). Effects of syllable position, signal-to-noise ratio, and articulatory features on vowel and consonant identification are discussed. The results constitute the largest source of data that is currently available on phoneme confusion patterns of American English phonemes by native listeners.
  • Weber, A. (1998). Listening to nonnative language which violates native assimilation rules. In D. Duez (Ed.), Proceedings of the European Scientific Communication Association workshop: Sound patterns of Spontaneous Speech (pp. 101-104).

    Abstract

    Recent studies using phoneme detection tasks have shown that spoken-language processing is neither facilitated nor interfered with by optional assimilation, but is inhibited by violation of obligatory assimilation. Interpretation of these results depends on an assessment of their generality, specifically, whether they also obtain when listeners are processing nonnative language. Two separate experiments are presented in which native listeners of German and native listeners of Dutch had to detect a target fricative in legal monosyllabic Dutch nonwords. All of the nonwords were correct realisations in standard Dutch. For German listeners, however, half of the nonwords contained phoneme strings which violate the German fricative assimilation rule. Whereas the Dutch listeners showed no significant effects, German listeners detected the target fricative faster when the German fricative assimilation was violated than when no violation occurred. The results might suggest that violation of assimilation rules does not have to make processing more difficult per se.
  • Wender, K. F., Haun, D. B. M., Rasch, B. H., & Blümke, M. (2003). Context effects in memory for routes. In C. Freksa, W. Brauer, C. Habel, & K. F. Wender (Eds.), Spatial cognition III: Routes and navigation, human memory and learning, spatial representation and spatial learning (pp. 209-231). Berlin: Springer.
  • Widlok, T. (2006). Two ways of looking at a Mangetti grove. In A. Takada (Ed.), Proceedings of the workshop: Landscape and society (pp. 11-16). Kyoto: 21st Century Center of Excellence Program.
  • Wittek, A. (1998). Learning verb meaning via adverbial modification: Change-of-state verbs in German and the adverb "wieder" again. In A. Greenhill, M. Hughes, H. Littlefield, & H. Walsh (Eds.), Proceedings of the 22nd Annual Boston University Conference on Language Development (pp. 779-790). Somerville, MA: Cascadilla Press.
  • Wittenburg, P., Brugman, H., Russel, A., Klassmann, A., & Sloetjes, H. (2006). ELAN: a professional framework for multimodality research. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1556-1559).

    Abstract

    Utilization of computer tools in linguistic research has gained importance with the maturation of media frameworks for the handling of digital audio and video. The increased use of these tools in gesture, sign language and multimodal interaction studies has led to stronger requirements on the flexibility, the efficiency and in particular the time accuracy of annotation tools. This paper describes the efforts made to make ELAN a tool that meets these requirements, with special attention to the developments in the area of time accuracy. Subsequent sections give an overview of other enhancements in the latest versions of ELAN that make it a useful tool in multimodality research.
  • Wittenburg, P., Broeder, D., Klein, W., Levinson, S. C., & Romary, L. (2006). Foundations of modern language resource archives. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 625-628).

    Abstract

    A number of serious reasons will convince an increasing number of researchers to store their relevant material in centers which we will call "language resource archives". These combine the duty of taking care of long-term preservation with the task of giving different user groups access to their material. Access here is meant in the sense that an active interaction with the data will be made possible to support the integration of new data, new versions or commentaries of all sorts. Modern language resource archives will have to adhere to a number of basic principles to fulfill all requirements, and they will have to be involved in federations to create joint language resource domains, making it even simpler for researchers to access the data. This paper makes an attempt to formulate the essential pillars language resource archives have to adhere to.
  • Zeshan, U. (2006). Sign languages of the world. In K. Brown (Ed.), Encyclopedia of language and linguistics (vol. 11) (pp. 358-365). Amsterdam: Elsevier.

    Abstract

    Although sign language-using communities exist in all areas of the world, few sign languages have been documented in detail. Sign languages occur in a variety of sociocultural contexts, ranging from sign languages used in closed village communities to officially recognized national sign languages. They may be grouped into language families on historical grounds or may participate in various language contact situations. Systematic cross-linguistic comparison reveals both significant structural similarities and important typological differences between sign languages. Focusing on information from non-Western countries, this article provides an overview of the sign languages of the world.
  • Zeshan, U. (Ed.). (2006). Interrogative and negative constructions in sign languages. Nijmegen: Ishara Press.
  • Zhang, Y., Ding, R., Frassinelli, D., Tuomainen, J., Klavinskis-Whiting, S., & Vigliocco, G. (2021). Electrophysiological signatures of second language multimodal comprehension. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2971-2977). Vienna: Cognitive Science Society.

    Abstract

    Language is multimodal: non-linguistic cues, such as prosody, gestures and mouth movements, are always present in face-to-face communication and interact to support processing. In this paper, we ask whether and how multimodal cues affect L2 processing by recording EEG for highly proficient bilinguals when watching naturalistic materials. For each word, we quantified surprisal and the informativeness of prosody, gestures, and mouth movements. We found that each cue modulates the N400: prosodic accentuation, meaningful gestures, and informative mouth movements all reduce N400. Further, effects of meaningful gestures but not mouth informativeness are enhanced by prosodic accentuation, whereas effects of mouth are enhanced by meaningful gestures but reduced by beat gestures. Compared with L1, L2 participants benefit less from cues and their interactions, except for meaningful gestures and mouth movements. Thus, in real-world language comprehension, L2 comprehenders use multimodal cues just as L1 speakers do, albeit to a lesser extent.
  • Zhang, Y., Amatuni, A., Cain, E., Wang, X., Crandall, D., & Yu, C. (2021). Human learners integrate visual and linguistic information in cross-situational verb learning. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 2267-2273). Vienna: Cognitive Science Society.

    Abstract

    Learning verbs is challenging because it is difficult to infer the precise meaning of a verb when there are a multitude of relations that one can derive from a single event. To study this verb learning challenge, we used children's egocentric view collected from naturalistic toy-play interaction as learning materials and investigated how visual and linguistic information provided in individual naming moments as well as cross-situational information provided from multiple learning moments can help learners resolve this mapping problem using the Human Simulation Paradigm. Our results show that learners benefit from seeing children's egocentric views compared to third-person observations. In addition, linguistic information can help learners identify the correct verb meaning by eliminating possible meanings that do not belong to the linguistic category. Learners are also able to integrate visual and linguistic information both within and across learning situations to reduce the ambiguity in the space of possible verb meanings.
  • Zimianiti, E., Dimitrakopoulou, M., & Tsangalidis, A. (2021). Thematic roles in dementia: The case of psychological verbs. In A. Botinis (Ed.), ExLing 2021: Proceedings of the 12th International Conference of Experimental Linguistics (pp. 269-272). Athens, Greece: ExLing Society.

    Abstract

    This study investigates the difficulty of people with Mild Cognitive Impairment (MCI), mild and moderate Alzheimer’s disease (AD) in the production and comprehension of psychological verbs, as thematic realization may involve both the canonical and non-canonical realization of arguments. More specifically, we aim to examine whether there is a deficit in the mapping of syntactic and semantic representations in psych-predicates regarding Greek-speaking individuals with MCI and AD, and whether the linguistic abilities associated with θ-role assignment decrease as the disease progresses. Moreover, given the decline of cognitive abilities in people with MCI and AD, we explore the effects of components of memory (Semantic, Episodic, and Working Memory) on the assignment of thematic roles in constructions with psychological verbs.
  • Zwitserlood, I., & Van Gijn, I. (2006). Agreement phenomena in Sign Language of the Netherlands. In P. Ackema (Ed.), Arguments and Agreement (pp. 195-229). Oxford: Oxford University Press.
  • Zwitserlood, P. (1990). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.11 1990. Nijmegen: MPI for Psycholinguistics.
  • Zwitserlood, I. (2003). Word formation below and above little x: Evidence from Sign Language of the Netherlands. In Proceedings of SCL 19. Nordlyd Tromsø University Working Papers on Language and Linguistics (pp. 488-502).

    Abstract

    Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which both types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages we find that many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but have a different morphological structure by means of that framework.