Publications

  • Kempen, G. (1992). Generation. In W. Bright (Ed.), International encyclopedia of linguistics (pp. 59-61). New York: Oxford University Press.
  • Kempen, G. (1992). Language technology and language instruction: Computational diagnosis of word level errors. In M. Swartz, & M. Yazdani (Eds.), Intelligent tutoring systems for foreign language learning: The bridge to international communication (pp. 191-198). Berlin: Springer.
  • Kempen, G. (1978). Sentence construction by a psychologically plausible formulator. In R. N. Campbell, & P. T. Smith (Eds.), Recent advances in the psychology of language: Formal and experimental approaches. Volume 2 (pp. 103-124). New York: Plenum Press.
  • Kempen, G. (1998). Sentence parsing. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 213-228). Berlin: Springer.
  • Kempen, G. (1992). Second language acquisition as a hybrid learning process. In F. Engel, D. Bouwhuis, T. Bösser, & G. d'Ydewalle (Eds.), Cognitive modelling and interactive environments in language learning (pp. 139-144). Berlin: Springer.
  • Kita, S., van Gijn, I., & van der Hulst, H. (1998). Movement phases in signs and co-speech gestures, and their transcription by human coders. In Gesture and Sign-Language in Human-Computer Interaction (Lecture Notes in Artificial Intelligence - LNCS Subseries, Vol. 1371) (pp. 23-35). Berlin, Germany: Springer-Verlag.

    Abstract

    The previous literature has suggested that the hand movement in co-speech gestures and signs consists of a series of phases with qualitatively different dynamic characteristics. In this paper, we propose a syntagmatic rule system for movement phases that applies to both co-speech gestures and signs. Descriptive criteria for the rule system were developed for the analysis of video-recorded continuous production of signs and gestures. The analysis involves segmenting a stream of body movement into phases and identifying different phase types. Two human coders used the criteria to analyze signs and co-speech gestures produced in natural discourse. The criteria yielded good inter-coder reliability. They can be used in the automatic recognition of signs and co-speech gestures, in order to segment continuous production and identify the potentially meaning-bearing phases.
  • Klein, W. (2000). Changing concepts of the nature-nurture debate. In R. Hide, J. Mittelstrass, & W. Singer (Eds.), Changing concepts of nature at the turn of the millennium: Proceedings plenary session of the Pontifical academy of sciences, 26-29 October 1998 (pp. 289-299). Vatican City: Pontificia Academia Scientiarum.
  • Klein, W. (1992). Der Fall Horten gegen Delius, oder: Der Laie, der Fachmann und das Recht. In G. Grewendorf (Ed.), Rechtskultur als Sprachkultur: Zur forensischen Funktion der Sprachanalyse (pp. 284-313). Frankfurt am Main: Suhrkamp.
  • Klein, W. (2000). Der Mythos vom Sprachverfall. In Berlin-Brandenburgische Akademie der Wissenschaften (Ed.), Jahrbuch 1999: Berlin-Brandenburgische Akademie der Wissenschaften (pp. 139-158). Berlin: Akademie Verlag.
  • Klein, W. (1998). Ein Blick zurück auf die Varietätengrammatik. In U. Ammon, K. Mattheier, & P. Nelde (Eds.), Sociolinguistica: Internationales Jahrbuch für europäische Soziolinguistik (pp. 22-38). Tübingen: Niemeyer.
  • Klein, W., & Heidelberger Forschungsprojekt "Pidgin - Deutsch" (1978). Aspekte der ungesteuerten Erlernung des Deutschen durch ausländische Arbeiter. In C. Molony, H. Zobl, & W. Stölting (Eds.), German in contact with other languages / Deutsch im Kontakt mit anderen Sprachen (pp. 147-183). Wiesbaden: Scriptor.
  • Klein, W. (1998). Assertion and finiteness. In N. Dittmar, & Z. Penner (Eds.), Issues in the theory of language acquisition: Essays in honor of Jürgen Weissenborn (pp. 225-245). Bern: Peter Lang.
  • Klein, W., & Perdue, C. (1992). Framework. In W. Klein, & C. Perdue (Eds.), Utterance structure: Developing grammars again (pp. 11-59). Amsterdam: Benjamins.
  • Klein, W. (2000). Prozesse des Zweitspracherwerbs. In H. Grimm (Ed.), Enzyklopädie der Psychologie: Vol. 3 (pp. 538-570). Göttingen: Hogrefe.
  • Klein, W., & Carroll, M. (1992). The acquisition of German. In W. Klein, & C. Perdue (Eds.), Utterance structure: Developing grammars again (pp. 123-188). Amsterdam: Benjamins.
  • Klein, W. (1978). The acquisition of German syntax by foreign migrant workers. In D. Sankoff (Ed.), Linguistic variation: models and methods (pp. 1-22). New York: Academic Press.
  • Klein, W. (1978). Soziolinguistik. In H. Balmer (Ed.), Die Psychologie des 20. Jahrhunderts: Vol. 7. Piaget und die Folgen (pp. 1130-1147). Zürich: Kindler.
  • Klein, W., & Vater, H. (1998). The perfect in English and German. In L. Kulikov, & H. Vater (Eds.), Typology of verbal categories: Papers presented to Vladimir Nedjalkov on the occasion of his 70th birthday (pp. 215-235). Tübingen: Niemeyer.
  • Klein, W. (1980). Verbal planning in route directions. In H. Dechert, & M. Raupach (Eds.), Temporal variables in speech (pp. 159-168). Den Haag: Mouton.
  • Kuijpers, C. T., Coolen, R., Houston, D., & Cutler, A. (1998). Using the head-turning technique to explore cross-linguistic performance differences. In C. Rovee-Collier, L. Lipsitt, & H. Hayne (Eds.), Advances in infancy research: Vol. 12 (pp. 205-220). Stamford: Ablex.
  • Lansner, A., Sandberg, A., Petersson, K. M., & Ingvar, M. (2000). On forgetful attractor network memories. In H. Malmgren, M. Borga, & L. Niklasson (Eds.), Artificial neural networks in medicine and biology: Proceedings of the ANNIMAB-1 Conference, Göteborg, Sweden, 13-16 May 2000 (pp. 54-62). Heidelberg: Springer Verlag.

    Abstract

    A recurrently connected attractor neural network with a Hebbian learning rule is currently our best ANN analogy for a piece of cortex. Functionally, biological memory operates on a spectrum of time scales with regard to induction and retention, and it is modulated in complex ways by sub-cortical neuromodulatory systems. Moreover, biological memory networks are commonly believed to be highly distributed and to engage many co-operating cortical areas. Here we focus on the temporal aspects of induction and retention of memory in a connectionist-type attractor memory model of a piece of cortex. A continuous-time, forgetful Bayesian-Hebbian learning rule is described and compared to the characteristics of LTP and LTD seen experimentally. More generally, an attractor network implementing this learning rule can operate as a long-term, intermediate-term, or short-term memory. Modulation of the print-now signal of the learning rule replicates some experimental memory phenomena, such as the von Restorff effect.
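The trade-off between retention and forgetting described in this abstract can be illustrated with a toy update rule. This is a plain exponentially forgetful Hebbian step, not the Bayesian-Hebbian rule of the paper; the `rate` parameter loosely stands in for the print-now signal:

```python
def forgetful_hebbian_step(w, pre, post, rate):
    # One discrete update of a forgetful Hebbian weight: the weight decays
    # toward the current co-activation pre * post, so a higher `rate` means
    # both faster learning and faster forgetting of older traces.
    return w + rate * (pre * post - w)

# With sustained co-activation the weight converges toward 1.0;
# if activity stops (pre * post = 0), the same rule decays it back toward 0.
w = 0.0
for _ in range(10):
    w = forgetful_hebbian_step(w, pre=1.0, post=1.0, rate=0.5)
print(round(w, 3))  # 0.999
```

A single parameter thus places the same network anywhere on the short-term to long-term memory spectrum, which is the point the abstract makes about modulating the learning rule.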
  • Lev-Ari, S. (2019). The influence of social network properties on language processing and use. In M. S. Vitevitch (Ed.), Network Science in Cognitive Psychology (pp. 10-29). New York, NY: Routledge.

    Abstract

    Language is a social phenomenon. People learn, process, and use it in social contexts. In other words, the social environment shapes our linguistic knowledge and our use of that knowledge. To a degree, this is trivial. A child exposed to Japanese will become fluent in Japanese, whereas a child exposed only to Spanish will not understand Japanese but will master the sounds, vocabulary, and grammar of Spanish. Language is a structured system. Sounds and words do not occur randomly but are characterized by regularities. Learners are sensitive to these regularities and exploit them when learning language. People differ in the sizes of their social networks. Some people tend to interact with only a few people, whereas others might interact with a wide range of people. This is reflected in people’s holiday greeting habits: some people might send cards to only a few people, whereas others would send greeting cards to more than 350 people.
  • Levelt, W. J. M., Sinclair, A., & Jarvella, R. J. (1978). Causes and functions of linguistic awareness in language acquisition: Some introductory remarks. In A. Sinclair, R. Jarvella, & W. J. M. Levelt (Eds.), The child's conception of language (pp. 1-14). Heidelberg: Springer.
  • Levelt, W. J. M. (1978). A survey of studies in sentence perception: 1970-1976. In W. J. M. Levelt, & G. Flores d'Arcais (Eds.), Studies in the perception of language (pp. 1-74). New York: Wiley.
  • Levelt, W. J. M. (2000). Introduction Section VII: Language. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences; 2nd ed. (pp. 843-844). Cambridge: MIT Press.
  • Levelt, W. J. M. (1980). On-line processing constraints on the properties of signed and spoken language. In U. Bellugi, & M. Studdert-Kennedy (Eds.), Signed and spoken language: Biological constraints on linguistic form (pp. 141-160). Weinheim: Verlag Chemie.

    Abstract

    It is argued that the dominantly successive nature of language is largely mode-independent and holds equally for sign and for spoken language. A preliminary distinction is made between what is simultaneous or successive in the signal, and what is in the process; these need not coincide, and it is the successiveness of the process that is at stake. It is then discussed extensively for the word/sign level, and in a more preliminary fashion for the clause and discourse level that online processes are parallel in that they can simultaneously draw on various sources of knowledge (syntactic, semantic, pragmatic), but successive in that they can work at the interpretation of only one unit at a time. This seems to hold for both sign and spoken language. In the final section, conjectures are made about possible evolutionary explanations for these properties of language processing.
  • Levelt, W. J. M. (1992). Psycholinguistics: An overview. In W. Bright (Ed.), International encyclopedia of linguistics (Vol. 3) (pp. 290-294). Oxford: Oxford University Press.
  • Levelt, W. J. M. (2000). Psychology of language. In K. Pawlik, & M. R. Rosenzweig (Eds.), International handbook of psychology (pp. 151-167). London: SAGE publications.
  • Levelt, W. J. M. (2000). Speech production. In A. E. Kazdin (Ed.), Encyclopedia of psychology (pp. 432-433). Oxford University Press.
  • Levelt, W. J. M., Schreuder, R., & Hoenkamp, E. (1978). Structure and use of verbs of motion. In R. N. Campbell, & P. T. Smith (Eds.), Recent advances in the psychology of language: Vol 2. Formal and experimental approaches (pp. 137-162). New York: Plenum Press.
  • Levelt, W. J. M., & Indefrey, P. (2000). The speaking mind/brain: Where do spoken words come from? In A. Marantz, Y. Miyashita, & W. O'Neil (Eds.), Image, language, brain: Papers from the First Mind Articulation Project Symposium (pp. 77-94). Cambridge, Mass.: MIT Press.
  • Levelt, W. J. M. (1980). Toegepaste aspecten van het taal-psychologisch onderzoek: Enkele inleidende overwegingen. In J. Matter (Ed.), Toegepaste aspekten van de taalpsychologie (pp. 3-11). Amsterdam: VU Boekhandel.
  • Levinson, S. C. (1992). Space in Australian Languages Questionnaire. In S. C. Levinson (Ed.), Space stimuli kit 1.2 (pp. 29-40). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3512641.

    Abstract

    This questionnaire is designed to explore how spatial relations are encoded in Australian languages, but may be of interest to researchers further afield.
  • Levinson, S. C. (1992). Activity types and language. In P. Drew, & J. Heritage (Eds.), Talk at work: Interaction in institutional settings (pp. 66-100). Cambridge University Press.
  • Levinson, S. C. (1998). Deixis. In J. L. Mey (Ed.), Concise encyclopedia of pragmatics (pp. 200-204). Amsterdam: Elsevier.
  • Levinson, S. C., Brown, P., Danzinger, E., De León, L., Haviland, J. B., Pederson, E., & Senft, G. (1992). Man and Tree & Space Games. In S. C. Levinson (Ed.), Space stimuli kit 1.2 (pp. 7-14). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.2458804.

    Abstract

    These classic tasks can be used to explore spatial reference in field settings. They provide a language-independent metric for eliciting spatial language, using a “director-matcher” paradigm. The Man and Tree task deals with location on the horizontal plane with both featured (man) and non-featured (e.g., tree) objects. The Space Games depict various objects (e.g. bananas, lemons) and elicit spatial contrasts not obviously lexicalisable in English.
  • Levinson, S. C. (1998). Minimization and conversational inference. In A. Kasher (Ed.), Pragmatics: Vol. 4 Presupposition, implicature and indirect speech acts (pp. 545-612). London: Routledge.
  • Levinson, S. C., & Toni, I. (2019). Key issues and future directions: Interactional foundations of language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 257-261). Cambridge, MA: MIT Press.
  • Levinson, S. C. (2000). Language as nature and language as art. In J. Mittelstrass, & W. Singer (Eds.), Proceedings of the Symposium on ‘Changing concepts of nature at the turn of the Millennium’ (pp. 257-287). Vatican City: Pontificae Academiae Scientiarium Scripta Varia.
  • Levinson, S. C. (2000). H.P. Grice on location on Rossel Island. In S. S. Chang, L. Liaw, & J. Ruppenhofer (Eds.), Proceedings of the 25th Annual Meeting of the Berkeley Linguistic Society (pp. 210-224). Berkeley: Berkeley Linguistic Society.
  • Levinson, S. C. (2019). Interactional foundations of language: The interaction engine hypothesis. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 189-200). Cambridge, MA: MIT Press.
  • Levinson, S. C. (2019). Natural forms of purposeful interaction among humans: What makes interaction effective? In K. A. Gluck, & J. E. Laird (Eds.), Interactive task learning: Humans, robots, and agents acquiring new tasks through natural interactions (pp. 111-126). Cambridge, MA: MIT Press.
  • Levinson, S. C., & Annamalai, E. (1992). Why presuppositions aren't conventional. In R. N. Srivastava (Ed.), Language and text: Studies in honour of Ashok R. Kelkar (pp. 227-242). Delhi: Kalinga Publications.
  • Liu, S., & Zhang, Y. (2019). Why some verbs are harder to learn than others – A micro-level analysis of everyday learning contexts for early verb learning. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2173-2178). Montreal, QC: Cognitive Science Society.

    Abstract

    Verb learning is important for young children. While most previous research has focused on linguistic and conceptual challenges in early verb learning (e.g. Gentner, 1982, 2006), the present paper examined early verb learning at the attentional level and quantified the input for early verb learning by measuring verb-action co-occurrence statistics in parent-child interaction from the learner’s perspective. To do so, we used head-mounted eye tracking to record fine-grained multimodal behaviors during parent-infant joint play, and analyzed parent speech, parent and infant action, and infant attention at the moments when parents produced verb labels. Our results show great variability across different action verbs, in terms of frequency of verb utterances, frequency of corresponding actions related to verb meanings, and infants’ attention to verbs and actions, providing new insights into why some verbs are harder to learn than others.
  • Mai, F., Galke, L., & Scherp, A. (2019). CBOW is not all you need: Combining CBOW with the compositional matrix space model. In Proceedings of the Seventh International Conference on Learning Representations (ICLR 2019). OpenReview.net.

    Abstract

    Continuous Bag of Words (CBOW) is a powerful text embedding method. Due to its strong capabilities to encode word content, CBOW embeddings perform well on a wide range of downstream tasks while being efficient to compute. However, CBOW is not capable of capturing the word order. The reason is that the computation of CBOW's word embeddings is commutative, i.e., embeddings of XYZ and ZYX are the same. In order to address this shortcoming, we propose a learning algorithm for the Continuous Matrix Space Model, which we call Continual Multiplication of Words (CMOW). Our algorithm is an adaptation of word2vec, so that it can be trained on large quantities of unlabeled text. We empirically show that CMOW better captures linguistic properties, but it is inferior to CBOW in memorizing word content. Motivated by these findings, we propose a hybrid model that combines the strengths of CBOW and CMOW. Our results show that the hybrid CBOW-CMOW model retains CBOW's strong ability to memorize word content while at the same time substantially improving its ability to encode other linguistic information by 8%. As a result, the hybrid also performs better on 8 out of 11 supervised downstream tasks with an average improvement of 1.2%.
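The commutativity point is easy to see concretely: summing word vectors, as CBOW-style composition does, ignores order, while multiplying word matrices, as CMOW does, preserves it. A minimal sketch with invented toy embeddings (not the trained embeddings from the paper):

```python
# Toy demonstration: order-invariance of additive (CBOW-style) composition
# versus order-sensitivity of multiplicative (CMOW-style) composition.
vecs = {"X": [1, 2], "Y": [3, 4], "Z": [5, 6]}
mats = {"X": [[1, 1], [0, 1]], "Y": [[1, 0], [1, 1]], "Z": [[2, 0], [0, 1]]}

def cbow(seq):
    # CBOW-style composition: element-wise sum of word vectors (commutative).
    out = [0, 0]
    for w in seq:
        out = [a + b for a, b in zip(out, vecs[w])]
    return out

def matmul(a, b):
    # Plain 2x2 matrix multiplication.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cmow(seq):
    # CMOW-style composition: matrix product of word matrices (order-sensitive).
    out = [[1, 0], [0, 1]]  # 2x2 identity
    for w in seq:
        out = matmul(out, mats[w])
    return out

print(cbow("XYZ") == cbow("ZYX"))  # True: addition ignores word order
print(cmow("XYZ") == cmow("ZYX"))  # False: matrix product preserves order
```

This is exactly the asymmetry the hybrid model exploits: the additive channel memorizes content, the multiplicative channel encodes order.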
  • Majid, A. (2019). Preface. In L. J. Speed, C. O'Meara, L. San Roque, & A. Majid (Eds.), Perception Metaphors (pp. vii-viii). Amsterdam: Benjamins.
  • Mamus, E., Rissman, L., Majid, A., & Ozyurek, A. (2019). Effects of blindfolding on verbal and gestural expression of path in auditory motion events. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2275-2281). Montreal, QC: Cognitive Science Society.

    Abstract

    Studies have claimed that blind people’s spatial representations are different from sighted people’s, and that blind people display superior auditory processing. Due to the nature of auditory and haptic information, it has been proposed that blind people’s spatial representations are more sequential than sighted people’s. Even the temporary loss of sight—such as through blindfolding—can affect spatial representations, but not much research has been done on this topic. We compared blindfolded and sighted people’s linguistic spatial expressions and non-linguistic localization accuracy to test how blindfolding affects the representation of path in auditory motion events. We found that blindfolded people were as good as sighted people when localizing simple sounds, but they outperformed sighted people when localizing auditory motion events. Blindfolded people’s path-related speech also included more sequential and fewer holistic elements. Our results indicate that even temporary loss of sight influences spatial representations of auditory motion events.
  • Marcoux, K., & Ernestus, M. (2019). Differences between native and non-native Lombard speech in terms of pitch range. In M. Ochmann, M. Vorländer, & J. Fels (Eds.), Proceedings of the ICA 2019 and EAA Euroregio. 23rd International Congress on Acoustics, integrating 4th EAA Euroregio 2019 (pp. 5713-5720). Berlin: Deutsche Gesellschaft für Akustik.

    Abstract

    Lombard speech, speech produced in noise, is acoustically different from speech produced in quiet (plain speech) in several ways, including having a higher and wider F0 range (pitch). Extensive research on native Lombard speech does not consider that non-natives experience a higher cognitive load while producing speech, and that the native language may influence the non-native speech. We investigated pitch range in plain and Lombard speech in native and non-native speakers. Dutch and American-English speakers read contrastive question-answer pairs in quiet and in noise in English, while the Dutch also read Dutch sentence pairs. We found that Lombard speech is characterized by a wider pitch range than plain speech for all speakers (native English, non-native English, and native Dutch). This shows that non-natives also widen their pitch range in Lombard speech. In sentences with early focus, we see the same increase in pitch range when going from plain to Lombard speech in native and non-native English, but a smaller increase in native Dutch. In sentences with late focus, we see the biggest increase for the native English speakers, followed by non-native English and then native Dutch. Together, these results indicate an effect of the native language on non-native Lombard speech.
  • Marcoux, K., & Ernestus, M. (2019). Pitch in native and non-native Lombard speech. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 2605-2609). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    Lombard speech, speech produced in noise, is typically produced with a higher fundamental frequency (F0, pitch) compared to speech in quiet. This paper examined the potential differences in native and non-native Lombard speech by analyzing median pitch in sentences with early or late focus produced in quiet and in noise. We found an increase in pitch in late-focus sentences in noise for Dutch speakers in both English and Dutch, and for American-English speakers in English. These results show that non-native speakers produce Lombard speech, despite their higher cognitive load. For the early-focus sentences, we found a difference between the Dutch and the American-English speakers: whereas the Dutch showed an increased F0 in noise in English and Dutch, the American-English speakers did not in English. Together, these results suggest that some acoustic characteristics of Lombard speech, such as pitch, may be language-specific, potentially resulting in the native language influencing the non-native Lombard speech.
  • McDonough, L., Choi, S., Bowerman, M., & Mandler, J. M. (1998). The use of preferential looking as a measure of semantic development. In C. Rovee-Collier, L. P. Lipsitt, & H. Hayne (Eds.), Advances in Infancy Research. Volume 12. (pp. 336-354). Stamford, CT: Ablex Publishing.
  • McQueen, J. M., & Cutler, A. (1998). Morphology in word recognition. In A. M. Zwicky, & A. Spencer (Eds.), The handbook of morphology (pp. 406-427). Oxford: Blackwell.
  • McQueen, J. M., & Meyer, A. S. (2019). Key issues and future directions: Towards a comprehensive cognitive architecture for language use. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 85-96). Cambridge, MA: MIT Press.
  • McQueen, J. M., & Cutler, A. (1998). Spotting (different kinds of) words in (different kinds of) context. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2791-2794). Sydney: ICSLP.

    Abstract

    The results of a word-spotting experiment are presented in which Dutch listeners tried to spot different types of bisyllabic Dutch words embedded in different types of nonsense contexts. Embedded verbs were not reliably harder to spot than embedded nouns; this suggests that nouns and verbs are recognised via the same basic processes. Iambic words were no harder to spot than trochaic words, suggesting that trochaic words are not in principle easier to recognise than iambic words. Words were harder to spot in consonantal contexts (i.e., contexts which themselves could not be words) than in longer contexts which contained at least one vowel (i.e., contexts which, though not words, were possible words of Dutch). A control experiment showed that this difference was not due to acoustic differences between the words in each context. The results support the claim that spoken-word recognition is sensitive to the viability of sound sequences as possible words.
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Positive and negative influences of the lexicon on phonemic decision-making. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 3 (pp. 778-781). Beijing: China Military Friendship Publish.

    Abstract

    Lexical knowledge influences how human listeners make decisions about speech sounds. Positive lexical effects (faster responses to target sounds in words than in nonwords) are robust across several laboratory tasks, while negative effects (slower responses to targets in more word-like nonwords than in less word-like nonwords) have been found in phonetic decision tasks but not phoneme monitoring tasks. The present experiments tested whether negative lexical effects are therefore a task-specific consequence of the forced choice required in phonetic decision. We compared phoneme monitoring and phonetic decision performance using the same Dutch materials in each task. In both experiments there were positive lexical effects, but no negative lexical effects. We observe that in all studies showing negative lexical effects, the materials were made by cross-splicing, which meant that they contained perceptual evidence supporting the lexically-consistent phonemes. Lexical knowledge seems to influence phonemic decision-making only when there is evidence for the lexically-consistent phoneme in the speech signal.
  • McQueen, J. M., Cutler, A., & Norris, D. (2000). Why Merge really is autonomous and parsimonious. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 47-50). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    We briefly describe the Merge model of phonemic decision-making, and, in the light of general arguments about the possible role of feedback in spoken-word recognition, defend Merge's feedforward structure. Merge not only accounts adequately for the data, without invoking feedback connections, but does so in a parsimonious manner.
  • McQueen, J. M., & Cutler, A. (1992). Words within words: Lexical statistics and lexical access. In J. Ohala, T. Neary, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing: Vol. 1 (pp. 221-224). Alberta: University of Alberta.

    Abstract

    This paper presents lexical statistics on the pattern of occurrence of words embedded in other words. We report the results of an analysis of 25000 words, varying in length from two to six syllables, extracted from a phonetically-coded English dictionary (The Longman Dictionary of Contemporary English). Each syllable, and each string of syllables within each word was checked against the dictionary. Two analyses are presented: the first used a complete list of polysyllables, with look-up on the entire dictionary; the second used a sublist of content words, counting only embedded words which were themselves content words. The results have important implications for models of human speech recognition. The efficiency of these models depends, in different ways, on the number and location of words within words.
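The dictionary look-up described here can be sketched in a few lines. The syllable-coded mini-lexicon below is invented for illustration and stands in for the phonetically coded Longman dictionary used in the paper:

```python
# Toy sketch of the embedded-word count: each word is a string of
# (phonetically coded) syllables, and every contiguous proper substring
# of syllables is checked against the dictionary.
lexicon = {
    ("kar",): "car",
    ("go",): "go",
    ("kar", "go"): "cargo",
}

def embedded_words(syllables, lexicon):
    # Return (word, start_syllable) for every dictionary word embedded
    # in the carrier word, excluding the carrier word itself.
    found = []
    n = len(syllables)
    for start in range(n):
        for end in range(start + 1, n + 1):
            chunk = tuple(syllables[start:end])
            if chunk in lexicon and chunk != tuple(syllables):
                found.append((lexicon[chunk], start))
    return found

print(embedded_words(["kar", "go"], lexicon))  # [('car', 0), ('go', 1)]
```

The counts and positions this yields over a full lexicon are the statistics the paper reports, and the "content words only" analysis corresponds to filtering the lexicon before look-up.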
  • Merkx, D., Frank, S., & Ernestus, M. (2019). Language learning using speech to image retrieval. In Proceedings of Interspeech 2019 (pp. 1841-1845). doi:10.21437/Interspeech.2019-3067.

    Abstract

    Humans learn language by interaction with their environment and listening to other humans. It should also be possible for computational models to learn language directly from speech but so far most approaches require text. We improve on existing neural network approaches to create visually grounded embeddings for spoken utterances. Using a combination of a multi-layer GRU, importance sampling, cyclic learning rates, ensembling and vectorial self-attention our results show a remarkable increase in image-caption retrieval performance over previous work. Furthermore, we investigate which layers in the model learn to recognise words in the input. We find that deeper network layers are better at encoding word presence, although the final layer has slightly lower performance. This shows that our visually grounded sentence encoder learns to recognise words from the input even though it is not explicitly trained for word recognition.
  • Moisik, S. R., Zhi Yun, D. P., & Dediu, D. (2019). Active adjustment of the cervical spine during pitch production compensates for shape: The ArtiVarK study. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 864-868). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    The anterior lordosis of the cervical spine is thought to contribute to pitch (fo) production by influencing cricoid rotation as a function of larynx height. This study examines the matter of inter-individual variation in cervical spine shape and whether this has an influence on how fo is produced along increasing or decreasing scales, using the ArtiVarK dataset, which contains real-time MRI pitch production data. We find that the cervical spine actively participates in fo production, but the amount of displacement depends on individual shape. In general, anterior spine motion (tending toward cervical lordosis) occurs for low fo, while posterior movement (tending towards cervical kyphosis) occurs for high fo.
  • Nijveld, A., Ten Bosch, L., & Ernestus, M. (2019). ERP signal analysis with temporal resolution using a time window bank. In Proceedings of Interspeech 2019 (pp. 1208-1212). doi:10.21437/Interspeech.2019-2729.

    Abstract

    In order to study the cognitive processes underlying speech comprehension, neuro-physiological measures (e.g., EEG and MEG), or behavioural measures (e.g., reaction times and response accuracy) can be applied. Compared to behavioural measures, EEG signals can provide a more fine-grained and complementary view of the processes that take place during the unfolding of an auditory stimulus.

    EEG signals are often analysed after having chosen specific time windows, which are usually based on the temporal structure of ERP components expected to be sensitive to the experimental manipulation. However, as the timing of ERP components may vary between experiments, trials, and participants, such a priori defined analysis time windows may significantly hamper the exploratory power of the analysis of components of interest. In this paper, we explore a wide-window analysis method applied to EEG signals collected in an auditory repetition priming experiment.

    This approach is based on a bank of temporal filters arranged along the time axis in combination with linear mixed effects modelling. Crucially, it permits a temporal decomposition of effects in a single comprehensive statistical model which captures the entire EEG trace.
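The windowing step of such a time window bank can be sketched as follows. This toy illustration covers only the slicing of a trace into overlapping windows, not the subsequent linear mixed effects modelling, and the parameter names are invented:

```python
def time_window_bank(trace, width, step):
    # Slide a window of `width` samples along a single-trial ERP trace
    # in increments of `step` samples, returning (window_onset, mean
    # amplitude) per window. Each window mean would then enter the
    # statistical model as a separate temporally localized predictor.
    windows = []
    for start in range(0, len(trace) - width + 1, step):
        chunk = trace[start:start + width]
        windows.append((start, sum(chunk) / width))
    return windows

# With step < width the windows overlap, giving a smooth temporal
# decomposition of the whole trace.
print(time_window_bank([0, 1, 2, 3, 4, 5], width=4, step=2))
# [(0, 1.5), (2, 3.5)]
```

Because every window contributes to one comprehensive model, effects are localized in time without committing to a single a priori analysis window.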
  • Noordman, L. G., & Vonk, W. (1998). Discourse comprehension. In A. D. Friederici (Ed.), Language comprehension: a biological perspective (pp. 229-262). Berlin: Springer.

    Abstract

    The human language processor is conceived as a system that consists of several interrelated subsystems. Each subsystem performs a specific task in the complex process of language comprehension and production. A subsystem receives a particular input, performs certain specific operations on this input and yields a particular output. The subsystems can be characterized in terms of the transformations that relate the input representations to the output representations. An important issue in describing the language processing system is to identify the subsystems and to specify the relations between the subsystems. These relations can be conceived in two different ways. In one conception the subsystems are autonomous. They are related to each other only by the input-output channels. The operations in one subsystem are not affected by another system. The subsystems are modular, that is, they are independent. In the other conception, the different subsystems influence each other. A subsystem affects the processes in another subsystem. In this conception there is an interaction between the subsystems.
  • Norris, D., Cutler, A., McQueen, J. M., Butterfield, S., & Kearns, R. K. (2000). Language-universal constraints on the segmentation of English. In A. Cutler, J. M. McQueen, & R. Zondervan (Eds.), Proceedings of SWAP (Workshop on Spoken Word Access Processes) (pp. 43-46). Nijmegen: Max-Planck-Institute for Psycholinguistics.

    Abstract

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) [1] is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and a known boundary. The experiments examined cases where the residue was either a CV syllable with a lax vowel, or a CVC syllable with a schwa. Although neither syllable context is a possible word in English, word-spotting in both contexts was easier than with a context consisting of a single consonant. The PWC appears to be language-universal rather than language-specific.
  • Norris, D., Van Ooijen, B., & Cutler, A. (1992). Speeded detection of vowels and steady-state consonants. In J. Ohala, T. Nearey, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing; Vol. 2 (pp. 1055-1058). Alberta: University of Alberta.

    Abstract

    We report two experiments in which vowels and steady-state consonants served as targets in a speeded detection task. In the first experiment, two vowels were compared with one voiced and one unvoiced fricative. Response times (RTs) to the vowels were longer than to the fricatives. The error rate was higher for the consonants. Consonants in word-final position produced the shortest RTs. For the vowels, RT correlated negatively with target duration. In the second experiment, the same two vowel targets were compared with two nasals. This time there was no significant difference in RTs, but the error rate was still significantly higher for the consonants. Error rate and length correlated negatively for the vowels only. We conclude that RT differences between phonemes are independent of vocalic or consonantal status. Instead, we argue that the process of phoneme detection reflects more finely grained differences in acoustic/articulatory structure within the phonemic repertoire.
  • Norris, D., Cutler, A., & McQueen, J. M. (2000). The optimal architecture for simulating spoken-word recognition. In C. Davis, T. Van Gelder, & R. Wales (Eds.), Cognitive Science in Australia, 2000: Proceedings of the Fifth Biennial Conference of the Australasian Cognitive Science Society. Adelaide: Causal Productions.

    Abstract

    Simulations explored the inability of the TRACE model of spoken-word recognition to model the effects on human listening of subcategorical mismatch in word forms. The source of TRACE's failure lay not in interactive connectivity, not in the presence of inter-word competition, and not in the use of phonemic representations, but in the need for continuously optimised interpretation of the input. When an analogue of TRACE was allowed to cycle to asymptote on every slice of input, an acceptable simulation of the subcategorical mismatch data was achieved. Even then, however, the simulation was not as close as that produced by the Merge model, which has inter-word competition, phonemic representations and continuous optimisation (but no interactive connectivity).
  • O'Meara, C., Speed, L. J., San Roque, L., & Majid, A. (2019). Perception Metaphors: A view from diversity. In L. J. Speed, C. O'Meara, L. San Roque, & A. Majid (Eds.), Perception Metaphors (pp. 1-16). Amsterdam: Benjamins.

    Abstract

    Our bodily experiences play an important role in the way that we think and speak. Abstract language is, however, difficult to reconcile with this body-centred view, unless we appreciate the role metaphors play. To explore the role of the senses across semantic domains, we focus on perception metaphors, and examine their realisation across diverse languages, methods, and approaches. To what extent do mappings in perception metaphor adhere to predictions based on our biological propensities; and to what extent is there space for cross-linguistic and cross-cultural variation? We find that while some metaphors have widespread commonality, there is more diversity attested than should be comfortable for universalist accounts.
  • Otake, T., & Cutler, A. (2000). A set of Japanese word cohorts rated for relative familiarity. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the Sixth International Conference on Spoken Language Processing: Vol. 3 (pp. 766-769). Beijing: China Military Friendship Publish.

    Abstract

    A database is presented of relative familiarity ratings for 24 sets of Japanese words, each set comprising words overlapping in the initial portions. These ratings are useful for the generation of material sets for research in the recognition of spoken words.
  • Ozyurek, A. (2000). Differences in spatial conceptualization in Turkish and English discourse: Evidence from both speech and gesture. In A. Goksel, & C. Kerslake (Eds.), Studies on Turkish and Turkic languages (pp. 263-272). Wiesbaden: Harrassowitz.
  • Ozyurek, A. (1998). An analysis of the basic meaning of Turkish demonstratives in face-to-face conversational interaction. In S. Santi, I. Guaitella, C. Cave, & G. Konopczynski (Eds.), Oralite et gestualite: Communication multimodale, interaction: actes du colloque ORAGE 98 (pp. 609-614). Paris: L'Harmattan.
  • Ozyurek, A., & Woll, B. (2019). Language in the visual modality: Cospeech gesture and sign language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 67-83). Cambridge, MA: MIT Press.
  • Ozyurek, A., & Ozcaliskan, S. (2000). How do children learn to conflate manner and path in their speech and gestures? Differences in English and Turkish. In E. V. Clark (Ed.), The proceedings of the Thirtieth Child Language Research Forum (pp. 77-85). Stanford: CSLI Publications.
  • Ozyurek, A. (2000). The influence of addressee location on spatial language and representational gestures of direction. In D. McNeill (Ed.), Language and gesture (pp. 64-83). Cambridge: Cambridge University Press.
  • Parhammer*, S. I., Ebersberg*, M., Tippmann*, J., Stärk*, K., Opitz, A., Hinger, B., & Rossi, S. (2019). The influence of distraction on speech processing: How selective is selective attention? In Proceedings of Interspeech 2019 (pp. 3093-3097). doi:10.21437/Interspeech.2019-2699.

    Abstract

    (* indicates shared first authorship)
    The present study investigated the effects of selective attention on the processing of morphosyntactic errors in unattended parts of speech. Two groups of German native (L1) speakers participated in the present study. Participants listened to sentences in which irregular verbs were manipulated in three different conditions (correct, incorrect but attested ablaut pattern, incorrect and crosslinguistically unattested ablaut pattern). In order to track fast dynamic neural reactions to the stimuli, electroencephalography was used. After each sentence, participants in Experiment 1 performed a semantic judgement task, which deliberately distracted the participants from the syntactic manipulations and directed their attention to the semantic content of the sentence. In Experiment 2, participants carried out a syntactic judgement task, which directed their attention to the critical stimuli. The use of two different attentional tasks allowed for investigating the impact of selective attention on speech processing and whether morphosyntactic processing steps are performed automatically. In Experiment 2, the incorrect attested condition elicited a larger N400 component compared to the correct condition, whereas in Experiment 1 no differences between conditions were found. These results suggest that the processing of morphosyntactic violations in irregular verbs is not entirely automatic but seems to be strongly affected by selective attention.
  • Perdue, C., & Klein, W. (1992). Conclusions. In W. Klein, & C. Perdue (Eds.), Utterance structure: Developing grammars again (pp. 301-337). Amsterdam: Benjamins.
  • Perdue, C., & Klein, W. (1992). Introduction. In W. Klein, & C. Perdue (Eds.), Utterance structure: Developing grammars again (pp. 1-10). Amsterdam: Benjamins.
  • Piai, V., & Zheng, X. (2019). Speaking waves: Neuronal oscillations in language production. In K. D. Federmeier (Ed.), Psychology of Learning and Motivation (pp. 265-302). Elsevier.

    Abstract

    Language production involves the retrieval of information from memory, the planning of an articulatory program, and executive control and self-monitoring. These processes can be related to the domains of long-term memory, motor control, and executive control. Here, we argue that studying neuronal oscillations provides an important opportunity to understand how general neuronal computational principles support language production, also helping elucidate relationships between language and other domains of cognition. For each relevant domain, we provide a brief review of the findings in the literature with respect to neuronal oscillations. Then, we show how similar patterns are found in the domain of language production, both through review of previous literature and novel findings. We conclude that neurophysiological mechanisms, as reflected in modulations of neuronal oscillations, may act as a fundamental basis for bringing together and enriching the fields of language and cognition.
  • Pouw, W., Paxton, A., Harrison, S. J., & Dixon, J. A. (2019). Acoustic specification of upper limb movement in voicing. In A. Grimminger (Ed.), Proceedings of the 6th Gesture and Speech in Interaction – GESPIN 6 (pp. 68-74). Paderborn: Universitaetsbibliothek Paderborn. doi:10.17619/UNIPB/1-812.
  • Pouw, W., & Dixon, J. A. (2019). Quantifying gesture-speech synchrony. In A. Grimminger (Ed.), Proceedings of the 6th Gesture and Speech in Interaction – GESPIN 6 (pp. 75-80). Paderborn: Universitaetsbibliothek Paderborn. doi:10.17619/UNIPB/1-812.

    Abstract

    Spontaneously occurring speech is often seamlessly accompanied by hand gestures. Detailed observations of video data suggest that speech and gesture are tightly synchronized in time, consistent with a dynamic interplay between body and mind. However, spontaneous gesture-speech synchrony has rarely been objectively quantified beyond analyses of video data, which do not allow for identification of kinematic properties of gestures. Consequently, the point in gesture which is held to couple with speech, the so-called moment of “maximum effort”, has been variably equated with the peak velocity, peak acceleration, peak deceleration, or the onset of the gesture. In the current exploratory report, we provide novel evidence from motion-tracking and acoustic data that peak velocity is closely aligned with, and shortly leads, the peak pitch (F0) of speech.

    Additional information

    https://osf.io/9843h/
  • Ravignani, A., Chiandetti, C., & Kotz, S. (2019). Rhythm and music in animal signals. In J. Choe (Ed.), Encyclopedia of Animal Behavior (vol. 1) (2nd ed., pp. 615-622). Amsterdam: Elsevier.
  • Rissman, L., & Majid, A. (2019). Agency drives category structure in instrumental events. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 2661-2667). Montreal, QC: Cognitive Science Society.

    Abstract

    Thematic roles such as Agent and Instrument have a long-standing place in theories of event representation. Nonetheless, the structure of these categories has been difficult to determine. We investigated how instrumental events, such as someone slicing bread with a knife, are categorized in English. Speakers described a variety of typical and atypical instrumental events, and we determined the similarity structure of their descriptions using correspondence analysis. We found that events where the instrument is an extension of an intentional agent were most likely to elicit similar language, highlighting the importance of agency in structuring instrumental categories.
  • Rojas-Berscia, L. M. (2019). Nominalization in Shawi/Chayahuita. In R. Zariquiey, M. Shibatani, & D. W. Fleck (Eds.), Nominalization in languages of the Americas (pp. 491-514). Amsterdam: Benjamins.

    Abstract

    This paper deals with the Shawi nominalizing suffixes -su’~-ru’~-nu’ ‘general nominalizer’, -napi/-te’/-tun ‘performer/agent nominalizer’, -pi’ ‘patient nominalizer’, and -nan ‘instrument nominalizer’. The goal of this article is to provide a description of nominalization in Shawi. Throughout this paper I apply the Generalized Scale Model (GSM) (Malchukov, 2006) to Shawi verbal nominalizations, with the intention of presenting a formal representation that will provide a basis for future areal and typological studies of nominalization. In addition, I engage with Shibatani’s model to see how the loss or gain of categories correlates with the lexical or grammatical nature of nominalizations: strong nominalization in Shawi correlates with lexical nominalization, whereas weak nominalization correlates with grammatical nominalization. A typology which takes into account the productivity of the nominalizers is also discussed.
  • Rowland, C. F., & Kidd, E. (2019). Key issues and future directions: How do children acquire language? In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 181-185). Cambridge, MA: MIT Press.
  • Rubio-Fernández, P. (2019). Theory of mind. In C. Cummins, & N. Katsos (Eds.), The Handbook of Experimental Semantics and Pragmatics (pp. 524-536). Oxford: Oxford University Press.
  • Sandberg, A., Lansner, A., Petersson, K. M., & Ekeberg, Ö. (2000). A palimpsest memory based on an incremental Bayesian learning rule. In J. M. Bower (Ed.), Computational Neuroscience: Trends in Research 2000 (pp. 987-994). Amsterdam: Elsevier.
  • Scharenborg, O., Bouwman, G., & Boves, L. (2000). Connected digit recognition with class specific word models. In Proceedings of the COST249 Workshop on Voice Operated Telecom Services workshop (pp. 71-74).

    Abstract

    This work focuses on efficient use of the training material by selecting the optimal set of model topologies. We do this by training multiple word models of each word class, based on a subclassification according to a priori knowledge of the training material. We examine classification criteria with respect to duration of the word, gender of the speaker, position of the word in the utterance, pauses in the vicinity of the word, and combinations of these. Comparative experiments were carried out on a corpus consisting of Dutch spoken connected digit strings and isolated digits, which were recorded in a wide variety of acoustic conditions. The results show that classification based on gender of the speaker, position of the digit in the string, pauses in the vicinity of the training tokens, and models based on a combination of these criteria perform significantly better than the set with single models per digit.
  • Schoenmakers, G.-J., & De Swart, P. (2019). Adverbial hurdles in Dutch scrambling. In A. Gattnar, R. Hörnig, M. Störzer, & S. Featherston (Eds.), Proceedings of Linguistic Evidence 2018: Experimental Data Drives Linguistic Theory (pp. 124-145). Tübingen: University of Tübingen.

    Abstract

    This paper addresses the role of the adverb in Dutch direct object scrambling constructions. We report four experiments in which we investigate whether the structural position and the scope sensitivity of the adverb affect acceptability judgments of scrambling constructions and native speakers' tendency to scramble definite objects. We conclude that the type of adverb plays a key role in Dutch word ordering preferences.
  • Schuerman, W. L., McQueen, J. M., & Meyer, A. S. (2019). Speaker statistical averageness modulates word recognition in adverse listening conditions. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 1203-1207). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    We tested whether statistical averageness (SA) at the level of the individual speaker could predict a speaker’s intelligibility. 28 female and 21 male speakers of Dutch were recorded producing 336 sentences, each containing two target nouns. Recordings were compared to those of all other same-sex speakers using dynamic time warping (DTW). For each sentence, the DTW distance constituted a metric of phonetic distance from one speaker to all other speakers. SA comprised the average of these distances. Later, the same participants performed a word recognition task on the target nouns in the same sentences, under three degraded listening conditions. In all three conditions, accuracy increased with SA. This held even when participants listened to their own utterances. These findings suggest that listeners process speech with respect to the statistical properties of the language spoken in their community, rather than using their own speech as a reference.
  • Seidlmayer, E., Galke, L., Melnychuk, T., Schultz, C., Tochtermann, K., & Förstner, K. U. (2019). Take it personally - A Python library for data enrichment for infometrical applications. In M. Alam, R. Usbeck, T. Pellegrini, H. Sack, & Y. Sure-Vetter (Eds.), Proceedings of the Posters and Demo Track of the 15th International Conference on Semantic Systems co-located with 15th International Conference on Semantic Systems (SEMANTiCS 2019).

    Abstract

    Like every other social sphere, science is influenced by individual characteristics of researchers. However, for investigations on scientific networks, only little data about the social background of researchers, e.g. social origin, gender, affiliation etc., is available. This paper introduces “Take it personally - TIP”, a conceptual model and library currently under development, which aims to support the semantic enrichment of publication databases with semantically related background information that resides elsewhere in the (semantic) web, such as Wikidata. The supplementary information enriches the original information in the publication databases and thus facilitates the creation of complex scientific knowledge graphs. Such enrichment helps to improve the scientometric analysis of scientific publications, as it can take the social backgrounds of researchers into account and aid the understanding of social structure in research communities.
  • Seijdel, N., Sakmakidis, N., De Haan, E. H. F., Bohte, S. M., & Scholte, H. S. (2019). Implicit scene segmentation in deeper convolutional neural networks. In Proceedings of the 2019 Conference on Cognitive Computational Neuroscience (pp. 1059-1062). doi:10.32470/CCN.2019.1149-0.

    Abstract

    Feedforward deep convolutional neural networks (DCNNs) are matching and even surpassing human performance on object recognition. This performance suggests that activation of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Recent findings in humans, however, suggest that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations ('routines') that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to performance of DCNNs with increasing depth, we here explored if, how, and when object information is differentiated from the backgrounds they appear on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence and systematically occluding parts of the image. Results indicated less distinction between object and background features for more shallow networks. For those networks, we observed a benefit of training on segmented objects (as compared to unsegmented objects). Overall, deeper networks trained on natural (unsegmented) scenes seem to perform implicit 'segmentation' of the objects from their background, possibly by improved selection of relevant features.
  • Senft, G., & Labov, W. (1980). Einige Prinzipien linguistischer Methodologie [transl. from English by Gunter Senft]. In N. Dittmar, & B. O. Rieck (Eds.), William Labov: Sprache im sozialen Kontext (pp. 1-24). Königstein: Athenäum FAT.
  • Senft, G., & Labov, W. (1978). Einige Prinzipien linguistischer Methodologie [transl. from English by Gunter Senft]. In N. Dittmar, & B. O. Rieck (Eds.), William Labov: Sprache im sozialen Kontext, vol. 2 (pp. 187-207). Königstein: Scriptor.
  • Senft, G. (2000). COME and GO in Kilivila. In B. Palmer, & P. Geraghty (Eds.), SICOL. Proceedings of the second international conference on Oceanic linguistics: Volume 2, Historical and descriptive studies (pp. 105-136). Canberra: Pacific Linguistics.
  • Senft, G. (1992). As time goes by..: Changes observed in Trobriand Islanders' culture and language, Milne Bay Province, Papua New Guinea. In T. Dutton (Ed.), Culture change, language change: Case studies from Melanesia (pp. 67-89). Canberra: Pacific Linguistics.
  • Senft, G. (1998). 'Noble Savages' and the 'Islands of Love': Trobriand Islanders in 'Popular Publications'. In J. Wassmann (Ed.), Pacific answers to Western hegemony: Cultural practices of identity construction (pp. 119-140). Oxford: Berg Publishers.
  • Senft, G., & Labov, W. (1980). Hyperkorrektheit der unteren Mittelschicht als Faktor im Sprachwandel; [transl. from English by Gunter Senft]. In N. Dittmar, & B. O. Rieck (Eds.), William Labov: Sprache im sozialen Kontext (pp. 77-94). Königstein: Athenäum FAT.
  • Senft, G., & Labov, W. (1978). Hyperkorrektheit der unteren Mittelschicht als Faktor im Sprachwandel; [transl. from English by Gunter Senft]. In N. Dittmar, & B. O. Rieck (Eds.), William Labov: Sprache im sozialen Kontext, Vol.2 (pp. 129-146). Königstein: Scriptor.
  • Senft, G. (2000). Introduction. In G. Senft (Ed.), Systems of nominal classification (pp. 1-10). Cambridge University Press.
  • Senft, G. (2019). Rituelle Kommunikation. In F. Liedtke, & A. Tuchen (Eds.), Handbuch Pragmatik (pp. 423-430). Stuttgart: J. B. Metzler. doi:10.1007/978-3-476-04624-6_41.

    Abstract

    [Abstract translated from German:] Linguistics adopted the term and the concept of 'ritual communication' from comparative ethology. Human ethologists distinguish a number of so-called 'expressive movements', which are manifested in facial expression, gesture, interpersonal distance (proxemics), and body posture (kinesics). Many of these expressive movements have developed into specific signals. Ethologists define ritualization as the modification of behaviours in the service of signal formation. Behaviours that have been ritualized into signals are rituals. In principle, any behaviour can become a signal, either in the course of evolution or through conventions that hold in a particular community which has culturally developed such signals, and which are passed on and learned by its members.
  • Senft, G. (2000). What do we really know about nominal classification systems? In Conference handbook. The 18th national conference of the English Linguistic Society of Japan. 18-19 November, 2000, Konan University (pp. 225-230). Kobe: English Linguistic Society of Japan.
  • Senft, G. (2000). What do we really know about nominal classification systems? In G. Senft (Ed.), Systems of nominal classification (pp. 11-49). Cambridge University Press.
  • Senft, G. (1998). Zeichenkonzeptionen in Ozeanien. In R. Posner, T. Robering, & T. A. Sebeok (Eds.), Semiotics: A handbook on the sign-theoretic foundations of nature and culture (Vol. 2) (pp. 1971-1976). Berlin: de Gruyter.
