Publications

  • Ameka, F. K. (2013). Possessive constructions in Likpe (Sɛkpɛlé). In A. Aikhenvald, & R. Dixon (Eds.), Possession and ownership: A crosslinguistic typology (pp. 224-242). Oxford: Oxford University Press.
  • Bauer, B. L. M. (2013). Impersonal verbs. In G. K. Giannakis (Ed.), Encyclopedia of Ancient Greek Language and Linguistics Online (pp. 197-198). Leiden: Brill. doi:10.1163/2214-448X_eagll_SIM_00000481.

    Abstract

    Impersonal verbs in Greek ‒ as in the other Indo-European languages ‒ exclusively feature 3rd person singular finite forms and convey one of three types of meaning: (a) meteorological conditions; (b) emotional and physical state/experience; (c) modality. In Greek, impersonal verbs predominantly convey meteorological conditions and modality.

  • Bögels, S., Barr, D., Garrod, S., & Kessler, K. (2013). "Are we still talking about the same thing?" MEG reveals perspective-taking in response to pragmatic violations, but not in anticipation. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 215-220). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0066/index.html.

    Abstract

    The current study investigates whether mentalizing, or taking the perspective of your interlocutor, plays an essential role throughout a conversation or whether it is mostly used in reaction to misunderstandings. This study is the first to use a brain-imaging method, MEG, to answer this question. In a first phase of the experiment, MEG participants interacted "live" with a confederate who set naming precedents for certain pictures. In a later phase, these precedents were sometimes broken by a speaker who named the same picture in a different way. This could be done by the same speaker, who set the precedent, or by a different speaker. Source analysis of MEG data showed that in the 800 ms before the naming, when the picture was already on the screen, episodic memory and language areas were activated, but no mentalizing areas, suggesting that the speaker's naming intentions were not anticipated by the listener on the basis of shared experiences. Mentalizing areas only became activated after the same speaker had broken a precedent, which we interpret as a reaction to the violation of conversational pragmatics.
  • Bone, D., Ramanarayanan, V., Narayanan, S., Hoedemaker, R. S., & Gordon, P. C. (2013). Analyzing eye-voice coordination in rapid automatized naming. In F. Bimbot, C. Cerisara, C. Fougeron, G. Gravier, L. Lamel, F. Pellegrino, & P. Perrier (Eds.), INTERSPEECH-2013: 14th Annual Conference of the International Speech Communication Association (pp. 2425-2429). ISCA Archive. Retrieved from http://www.isca-speech.org/archive/interspeech_2013/i13_2425.html.

    Abstract

    Rapid Automatized Naming (RAN) is a powerful tool for predicting future reading skill. A person’s ability to quickly name symbols as they scan a table is related to higher-level reading proficiency in adults and is predictive of future literacy gains in children. However, noticeable differences are present in the strategies or patterns within groups having similar task completion times. Thus, a further stratification of RAN dynamics may lead to better characterization and later intervention to support reading skill acquisition. In this work, we analyze the dynamics of the eyes, voice, and the coordination between the two during performance. It is shown that fast performers are more similar to each other than to slow performers in their patterns, but not vice versa. Further insights are provided about the patterns of more proficient subjects. For instance, fast performers tended to exhibit smoother behavior contours, suggesting a more stable perception-production process.
  • Bosker, H. R. (2013). Juncture (prosodic). In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 432-434). Leiden: Brill.

    Abstract

    Prosodic juncture concerns the compartmentalization and partitioning of syntactic entities in spoken discourse by means of prosody. It has been argued that the Intonation Unit, defined by internal criteria and prosodic boundary phenomena (e.g., final lengthening, pitch reset, pauses), encapsulates the basic structural unit of spoken Modern Hebrew.
  • Bosker, H. R. (2013). Sibilant consonants. In G. Khan (Ed.), Encyclopedia of Hebrew Language and Linguistics (pp. 557-561). Leiden: Brill.

    Abstract

    Fricative consonants in Hebrew can be divided into bgdkpt and sibilants (ז, ס, צ, שׁ, שׂ). Hebrew sibilants have been argued to stem from Proto-Semitic affricates, laterals, interdentals and /s/. In standard Israeli Hebrew the sibilants are pronounced as [s] (ס and שׂ), [ʃ] (שׁ), [z] (ז), [ʦ] (צ).
  • Brown, P. (2013). La estructura conversacional y la adquisición del lenguaje: El papel de la repetición en el habla de los adultos y niños tzeltales. In L. de León Pasquel (Ed.), Nuevos senderos en el estudio de la adquisición de lenguas mesoamericanas: Estructura, narrativa y socialización (pp. 35-82). Mexico: CIESAS-UNAM.

    Abstract

    This is a translation of the Brown 1998 article in Journal of Linguistic Anthropology, 'Conversational structure and language acquisition: The role of repetition in Tzeltal adult and child speech'.

  • Brown, P., Pfeiler, B., de León, L., & Pye, C. (2013). The acquisition of agreement in four Mayan languages. In E. Bavin, & S. Stoll (Eds.), The acquisition of ergativity (pp. 271-306). Amsterdam: Benjamins.

    Abstract

    This paper presents results of a comparative project documenting the development of verbal agreement inflections in children learning four different Mayan languages: K’iche’, Tzeltal, Tzotzil, and Yukatek. These languages have similar inflectional paradigms: they have a generally agglutinative morphology, with transitive verbs obligatorily marked with separate cross-referencing inflections for the two core arguments (‘ergative’ and ‘absolutive’). Verbs are also inflected for aspect and mood, and they carry a ‘status suffix’ which generally marks verb transitivity and mood. At a more detailed level, the four languages differ strikingly in the realization of cross-reference marking. For each language, we examined longitudinal language production data from two children at around 2;0, 2;6, 3;0, and 3;6 years of age. We relate differences in the acquisition patterns of verbal morphology in the languages to 1) the placement of affixes, 2) phonological and prosodic prominence, 3) language-specific constraints on the various forms of the affixes, and 4) consistent vs. split ergativity, and conclude that prosodic salience accounts provide the best explanation for the acquisition patterns in these four languages.

  • Casillas, M., & Frank, M. C. (2013). The development of predictive processes in children’s discourse understanding. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 299-304). Austin, TX: Cognitive Science Society.

    Abstract

    We investigate children’s online predictive processing as it occurs naturally, in conversation. We showed 1–7 year-olds short videos of improvised conversation between puppets, controlling for available linguistic information through phonetic manipulation. Even one- and two-year-old children made accurate and spontaneous predictions about when a turn-switch would occur: they gazed at the upcoming speaker before they heard a response begin. This predictive skill relies on both lexical and prosodic information together, and is not tied to either type of information alone. We suggest that children integrate prosodic, lexical, and visual information to effectively predict upcoming linguistic material in conversation.
  • Clifton, C. J., Meyer, A. S., Wurm, L. H., & Treiman, R. (2013). Language comprehension and production. In A. F. Healy, & R. W. Proctor (Eds.), Handbook of Psychology, Volume 4, Experimental Psychology. 2nd Edition (pp. 523-547). Hoboken, NJ: Wiley.

    Abstract

    In this chapter, we survey the processes of recognizing and producing words and of understanding and creating sentences. Theory and research on these topics have been shaped by debates about how various sources of information are integrated in these processes, and about the role of language structure, as analyzed in the discipline of linguistics. In this chapter, we describe current views of fluent language users' comprehension of spoken and written language and their production of spoken language. We review what we consider to be the most important findings and theories in psycholinguistics, returning again and again to the questions of modularity and the importance of linguistic knowledge. Although we acknowledge the importance of social factors in language use, our focus is on core processes such as parsing and word retrieval that are not necessarily affected by such factors. We do not have space to say much about the important fields of developmental psycholinguistics, which deals with the acquisition of language by children, or applied psycholinguistics, which encompasses such topics as language disorders and language teaching. Although we recognize that there is burgeoning interest in the measurement of brain activity during language processing and how language is represented in the brain, space permits only occasional pointers to work in neuropsychology and the cognitive neuroscience of language. For treatment of these topics, and others, the interested reader could begin with two recent handbooks of psycholinguistics (Gaskell, 2007; Traxler & Gernsbacher, 2006) and a handbook of cognitive neuroscience (Gazzaniga, 2004).
  • Cutler, A., & Bruggeman, L. (2013). Vocabulary structure and spoken-word recognition: Evidence from French reveals the source of embedding asymmetry. In Proceedings of INTERSPEECH: 14th Annual Conference of the International Speech Communication Association (pp. 2812-2816).

    Abstract

    Vocabularies contain hundreds of thousands of words built from only a handful of phonemes, so that inevitably longer words tend to contain shorter ones. In many languages (but not all) such embedded words occur more often word-initially than word-finally, and this asymmetry, if present, has far-reaching consequences for spoken-word recognition. Prior research had ascribed the asymmetry to suffixing or to effects of stress (in particular, final syllables containing the vowel schwa). Analyses of the standard French vocabulary here reveal an effect of suffixing, as predicted by this account, and further analyses of an artificial variety of French reveal that extensive final schwa has an independent and additive effect in promoting the embedding asymmetry.
  • Dediu, D., Cysouw, M., Levinson, S. C., Baronchelli, A., Christiansen, M. H., Croft, W., Evans, N., Garrod, S., Gray, R., Kandler, A., & Lieven, E. (2013). Cultural evolution of language. In P. J. Richerson, & M. H. Christiansen (Eds.), Cultural evolution: Society, technology, language, and religion. Strüngmann Forum Reports, vol. 12 (pp. 303-332). Cambridge, MA: MIT Press.

    Abstract

    This chapter argues that an evolutionary cultural approach to language not only has already proven fruitful, but it probably holds the key to understanding many puzzling aspects of language, its change and origins. The chapter begins by highlighting several still common misconceptions about language that might seem to call into question a cultural evolutionary approach. It explores the antiquity of language and sketches a general evolutionary approach discussing the aspects of function, fitness, replication, and selection, as well as the relevant units of linguistic evolution. In this context, the chapter looks at some fundamental aspects of linguistic diversity such as the nature of the design space, the mechanisms generating it, and the shape and fabric of language. Given that biology is another evolutionary system, its complex coevolution with language needs to be understood in order to have a proper theory of language. Throughout the chapter, various challenges are identified and discussed, sketching promising directions for future research. The chapter ends by listing the necessary data, methods, and theoretical developments required for a grounded evolutionary approach to language.
  • Dediu, D. (2013). Genes: Interactions with language on three levels — Inter-individual variation, historical correlations and genetic biasing. In P.-M. Binder, & K. Smith (Eds.), The language phenomenon: Human communication from milliseconds to millennia (pp. 139-161). Berlin: Springer. doi:10.1007/978-3-642-36086-2_7.

    Abstract

    The complex inter-relationships between genetics and linguistics encompass all four scales highlighted by the contributions to this book and, together with cultural transmission, the genetics of language holds the promise to offer a unitary understanding of this fascinating phenomenon. There are inter-individual differences in genetic makeup which contribute to the obvious fact that we are not identical in the way we understand and use language and, by studying them, we will be able to both better treat and enhance ourselves. There are correlations between the genetic configuration of human groups and their languages, reflecting the historical processes shaping them, and there also seem to exist genes which can influence some characteristics of language, biasing it towards or against certain states by altering the way language is transmitted across generations. Besides the joys of pure knowledge, the understanding of these three aspects of genetics relevant to language will potentially trigger advances in medicine, linguistics, psychology or the understanding of our own past and, last but not least, a profound change in the way we regard one of the emblems of being human: our capacity for language.
  • Dingemanse, M. (2013). Wie wir mit Sprache malen - How to paint with language. Forschungsbericht 2013 - Max-Planck-Institut für Psycholinguistik. In Max-Planck-Gesellschaft Jahrbuch 2013. München: Max Planck Society for the Advancement of Science. Retrieved from http://www.mpg.de/6683977/Psycholinguistik_JB_2013.

    Abstract

    Words evolve not as blobs of ink on paper but in face-to-face interaction. The nature of language as fundamentally interactive and multimodal is shown by the study of ideophones, vivid sensory words that thrive in conversations around the world. The ways in which these Lautbilder enable precise communication about sensory knowledge have for the first time been studied in detail. It turns out that we can paint with language, and that the onomatopoeia we sometimes classify as childish might be a subset of a much richer toolkit for depiction in speech, available to us all.
  • Dolscheid, S., Graver, C., & Casasanto, D. (2013). Spatial congruity effects reveal metaphors, not markedness. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2213-2218). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0405/index.html.

    Abstract

    Spatial congruity effects have often been interpreted as evidence for metaphorical thinking, but an alternative markedness-based account challenges this view. In two experiments, we directly compared metaphor and markedness explanations for spatial congruity effects, using musical pitch as a testbed. English speakers who talk about pitch in terms of spatial height were tested in speeded space-pitch compatibility tasks. To determine whether space-pitch congruency effects could be elicited by any marked spatial continuum, participants were asked to classify high- and low-frequency pitches as 'high' and 'low' or as 'front' and 'back' (both pairs of terms constitute cases of marked continuums). We found congruency effects in high/low conditions but not in front/back conditions, indicating that markedness is not sufficient to account for congruity effects (Experiment 1). A second experiment showed that congruency effects were specific to spatial words that cued a vertical schema (tall/short), and that congruity effects were not an artifact of polysemy (e.g., 'high' referring both to space and pitch). Together, these results suggest that congruency effects reveal metaphorical uses of spatial schemas, not markedness effects.
  • Durco, M., & Windhouwer, M. (2013). Semantic Mapping in CLARIN Component Metadata. In Proceedings of MTSR 2013, the 7th Metadata and Semantics Research Conference (pp. 163-168). New York: Springer.

    Abstract

    In recent years, large scale initiatives like CLARIN set out to overcome the notorious heterogeneity of metadata formats in the domain of language resources. The CLARIN Component Metadata Infrastructure established means for flexible resource descriptions for the domain of language resources. The Data Category Registry ISOcat and the accompanying Relation Registry foster semantic interoperability within the growing heterogeneous collection of metadata records. This paper describes the CMD Infrastructure, focusing on the facilities for semantic mapping, and also gives an overview of the current status in the joint component metadata domain.
  • Enfield, N. J. (2013). A ‘Composite Utterances’ approach to meaning. In C. Müller, E. Fricke, S. Ladewig, A. Cienki, D. McNeill, & S. Teßendorf (Eds.), Handbook Body – Language – Communication. Volume 1 (pp. 689-706). Berlin: Mouton de Gruyter.
  • Enfield, N. J. (2013). Doing fieldwork on the body, language, and communication. In C. Müller, E. Fricke, S. Ladewig, A. Cienki, D. McNeill, & S. Teßendorf (Eds.), Handbook Body – Language – Communication. Volume 1 (pp. 974-981). Berlin: Mouton de Gruyter.
  • Enfield, N. J. (2013). Hippie, interrupted. In J. Barker, & J. Lindquist (Eds.), Figures of Southeast Asian modernity (pp. 101-103). Honolulu: University of Hawaii Press.
  • Enfield, N. J., Dingemanse, M., Baranova, J., Blythe, J., Brown, P., Dirksmeyer, T., Drew, P., Floyd, S., Gipper, S., Gisladottir, R. S., Hoymann, G., Kendrick, K. H., Levinson, S. C., Magyari, L., Manrique, E., Rossi, G., San Roque, L., & Torreira, F. (2013). Huh? What? – A first survey in 21 languages. In M. Hayashi, G. Raymond, & J. Sidnell (Eds.), Conversational repair and human understanding (pp. 343-380). New York: Cambridge University Press.

    Abstract

    Introduction A comparison of conversation in twenty-one languages from around the world reveals commonalities and differences in the way that people do open-class other-initiation of repair (Schegloff, Jefferson, and Sacks, 1977; Drew, 1997). We find that speakers of all of the spoken languages in the sample make use of a primary interjection strategy (in English it is Huh?), where the phonetic form of the interjection is strikingly similar across the languages: a monosyllable featuring an open non-back vowel [a, æ, ə, ʌ], often nasalized, usually with rising intonation and sometimes an [h-] onset. We also find that most of the languages have another strategy for open-class other-initiation of repair, namely the use of a question word (usually “what”). Here we find significantly more variation across the languages. The phonetic form of the question word involved is completely different from language to language: e.g., English [wɑt] versus Cha'palaa [ti] versus Duna [aki]. Furthermore, the grammatical structure in which the repair-initiating question word can or must be expressed varies within and across languages. In this chapter we present data on these two strategies – primary interjections like Huh? and question words like What? – with discussion of possible reasons for the similarities and differences across the languages. We explore some implications for the notion of repair as a system, in the context of research on the typology of language use. The general outline of this chapter is as follows. We first discuss repair as a system across languages and then introduce the focus of the chapter: open-class other-initiation of repair. A discussion of the main findings follows, where we identify two alternative strategies in the data: an interjection strategy (Huh?) and a question word strategy (What?). Formal features and possible motivations are discussed for the interjection strategy and the question word strategy in order. 
A final section discusses bodily behavior including posture, eyebrow movements and eye gaze, both in spoken languages and in a sign language.
  • Enfield, N. J. (2013). Reference in conversation. In J. Sidnell, & T. Stivers (Eds.), The handbook of conversation analysis (pp. 433-454). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch21.

    Abstract

    This chapter contains sections titled: Introduction; Lexical Selection in Reference: Introductory Examples of Reference to Times; Multiple “Preferences”; Future Directions; Conclusion.
  • Fisher, S. E. (2013). Building bridges between genes, brains and language. In J. J. Bolhuis, & M. Everaert (Eds.), Birdsong, speech and language: Exploring the evolution of mind and brain (pp. 425-454). Cambridge, MA: MIT Press.
  • Flecken, M., & Gerwien, J. (2013). Grammatical aspect modulates event duration estimations: findings from Dutch. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2309-2314). Austin, TX: Cognitive Science Society.
  • Floyd, S. (2013). Semantic transparency and cultural calquing in the Northwest Amazon. In P. Epps, & K. Stenzel (Eds.), Upper Rio Negro: Cultural and linguistic interaction in northwestern Amazonia (pp. 271-308). Rio de Janeiro: Museu do Indio. Retrieved from http://www.museunacional.ufrj.br/ppgas/livros_ele.html.

    Abstract

    The ethnographic literature has sometimes described parts of the northwest Amazon as areas of shared culture across linguistic groups. This paper illustrates how a principle of semantic transparency across languages is a key means of establishing elements of a common regional culture through practices like the calquing of ethnonyms and toponyms so that they are semantically, but not phonologically, equivalent across languages. It places the upper Rio Negro area of the northwest Amazon in a general discussion of cross-linguistic naming practices in South America and considers the extent to which a preference for semantic transparency can be linked to cases of widespread cultural ‘calquing’, in which culturally-important meanings are kept similar across different linguistic systems. It also addresses the principle of semantic transparency beyond specific referential phrases and into larger discourse structures. It concludes that an attention to semiotic practices in multilingual settings can provide new and more complex ways of thinking about the idea of shared culture.
  • Gebre, B. G., Wittenburg, P., & Heskes, T. (2013). Automatic sign language identification. In Proceedings of the 20th IEEE International Conference on Image Processing (ICIP) (pp. 2626-2630).

    Abstract

    We propose a Random-Forest based sign language identification system. The system uses low-level visual features and is based on the hypothesis that sign languages have varying distributions of phonemes (hand-shapes, locations and movements). We evaluated the system on two sign languages -- British SL and Greek SL, both taken from a publicly available corpus, called Dicta Sign Corpus. Achieved average F1 scores are about 95% - indicating that sign languages can be identified with high accuracy using only low-level visual features.
  • Gebre, B. G., Wittenburg, P., & Heskes, T. (2013). Automatic signer diarization - the mover is the signer approach. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2013 IEEE Conference on (pp. 283-287). doi:10.1109/CVPRW.2013.49.

    Abstract

    We present a vision-based method for signer diarization -- the task of automatically determining "who signed when?" in a video. This task has similar motivations and applications as speaker diarization but has received little attention in the literature. In this paper, we motivate the problem and propose a method for solving it. The method is based on the hypothesis that signers make more movements than their interlocutors. Experiments on four videos (a total of 1.4 hours and each consisting of two signers) show the applicability of the method. The best diarization error rate (DER) obtained is 0.16.
  • Gebre, B. G., Zampieri, M., Wittenburg, P., & Heskes, T. (2013). Improving Native Language Identification with TF-IDF weighting. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications (pp. 216-223).

    Abstract

    This paper presents a Native Language Identification (NLI) system based on TF-IDF weighting schemes and using linear classifiers - support vector machines, logistic regressions and perceptrons. The system was one of the participants of the 2013 NLI Shared Task in the closed-training track, achieving 0.814 overall accuracy for a set of 11 native languages. This accuracy was only 2.2 percentage points lower than the winner's performance. Furthermore, with subsequent evaluations using 10-fold cross-validation (as given by the organizers) on the combined training and development data, the best average accuracy obtained is 0.8455 and the features that contributed to this accuracy are the TF-IDF of the combined unigrams and bigrams of words.
  • Gebre, B. G., Wittenburg, P., & Heskes, T. (2013). The gesturer is the speaker. In Proceedings of the 38th International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2013) (pp. 3751-3755).

    Abstract

    We present and solve the speaker diarization problem in a novel way. We hypothesize that the gesturer is the speaker and that identifying the gesturer can be taken as identifying the active speaker. We provide evidence in support of the hypothesis from gesture literature and audio-visual synchrony studies. We also present a vision-only diarization algorithm that relies on gestures (i.e. upper body movements). Experiments carried out on 8.9 hours of a publicly available dataset (the AMI meeting data) show that diarization error rates as low as 15% can be achieved.
  • Gijssels, T., Bottini, R., Rueschemeyer, S.-A., & Casasanto, D. (2013). Space and time in the parietal cortex: fMRI Evidence for a neural asymmetry. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 495-500). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0113/index.html.

    Abstract

    How are space and time related in the brain? This study contrasts two proposals that make different predictions about the interaction between spatial and temporal magnitudes. Whereas ATOM implies that space and time are symmetrically related, Metaphor Theory claims they are asymmetrically related. Here we investigated whether space and time activate the same neural structures in the inferior parietal cortex (IPC) and whether the activation is symmetric or asymmetric across domains. We measured participants’ neural activity while they made temporal and spatial judgments on the same visual stimuli. The behavioral results replicated earlier observations of a space-time asymmetry: Temporal judgments were more strongly influenced by irrelevant spatial information than vice versa. The BOLD fMRI data indicated that space and time activated overlapping clusters in the IPC and that, consistent with Metaphor Theory, this activation was asymmetric: The shared region of IPC was activated more strongly during temporal judgments than during spatial judgments. We consider three possible interpretations of this neural asymmetry, based on 3 possible functions of IPC.
  • Gussenhoven, C., & Zhou, W. (2013). Revisiting pitch slope and height effects on perceived duration. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 1365-1369).

    Abstract

    The shape of pitch contours has been shown to have an effect on the perceived duration of vowels. For instance, vowels with high level pitch and vowels with falling contours sound longer than vowels with low level pitch. Depending on whether the comparison is between level pitches or between level and dynamic contours, these findings have been interpreted in two ways. For inter-level comparisons, where the duration results are the reverse of production results, a hypercorrection strategy in production has been proposed [1]. By contrast, for comparisons between level pitches and dynamic contours, the longer production data for dynamic contours have been held responsible. We report an experiment with Dutch and Chinese listeners which aimed to show that production data and perception data are each other’s opposites for high, low, falling and rising contours. We explain the results, which are consistent with earlier findings, in terms of the compensatory listening strategy of [2], arguing that the perception effects are due to a perceptual compensation of articulatory strategies and constraints, rather than that differences in production compensate for psycho-acoustic perception effects.
  • Hagoort, P., & Poeppel, D. (2013). The infrastructure of the language-ready brain. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 233-255). Cambridge, MA: MIT Press.

    Abstract

    This chapter sketches in very general terms the cognitive architecture of both language comprehension and production, as well as the neurobiological infrastructure that makes the human brain ready for language. Focus is on spoken language, since that compares most directly to processing music. It is worth bearing in mind that humans can also interface with language as a cognitive system using sign and text (visual) as well as Braille (tactile); that is to say, the system can connect with input/output processes in any sensory modality. Language processing consists of a complex and nested set of subroutines to get from sound to meaning (in comprehension) or meaning to sound (in production), with remarkable speed and accuracy. The first section outlines a selection of the major constituent operations, from fractionating the input into manageable units to combining and unifying information in the construction of meaning. The next section addresses the neurobiological infrastructure hypothesized to form the basis for language processing. Principal insights are summarized by building on the notion of “brain networks” for speech–sound processing, syntactic processing, and the construction of meaning, bearing in mind that such a neat three-way subdivision overlooks important overlap and shared mechanisms in the neural architecture subserving language processing. Finally, in keeping with the spirit of the volume, some possible relations are highlighted between language and music that arise from the infrastructure developed here. Our characterization of language and its neurobiological foundations is necessarily selective and brief. Our aim is to identify for the reader critical questions that require an answer to have a plausible cognitive neuroscience of language processing.
  • Hammarström, H., & O'Connor, L. (2013). Dependency sensitive typological distance. In L. Borin, & A. Saxena (Eds.), Approaches to measuring linguistic differences (pp. 337-360). Berlin: Mouton de Gruyter.
  • Hammarström, H. (2013). Noun class parallels in Kordofanian and Niger-Congo: Evidence of genealogical inheritance? In T. C. Schadeberg, & R. M. Blench (Eds.), Nuba Mountain Language Studies (pp. 549-570). Köln: Köppe.
  • Haun, D. B. M., & Over, H. (2013). Like me: A homophily-based account of human culture. In P. J. Richerson, & M. H. Christiansen (Eds.), Cultural Evolution: Society, technology, language, and religion (pp. 75-85). Cambridge, MA: MIT Press.
  • Hayano, K. (2013). Question design in conversation. In J. Sidnell, & T. Stivers (Eds.), The handbook of conversation analysis (pp. 395-414). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch19.

    Abstract

    This chapter contains sections titled: Introduction Questions Questioning and the Epistemic Gradient Presuppositions, Agenda Setting and Preferences Social Actions Implemented by Questions Questions as Building Blocks of Institutional Activities Future Directions
  • Hofmeister, P., & Norcliffe, E. (2013). Does resumption facilitate sentence comprehension? In P. Hofmeister, & E. Norcliffe (Eds.), The core and the periphery: Data-driven perspectives on syntax inspired by Ivan A. Sag (pp. 225-246). Stanford, CA: CSLI Publications.
  • Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2560-2565). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0463/index.html.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from these different modalities, and how perceived communicative intentions, often signaled through visual signals, such as eye gaze, may influence this processing. We address this question by simulating a triadic communication context in which a speaker alternated her gaze between two different recipients. Participants thus viewed speech-only or speech+gesture object-related utterances when being addressed (direct gaze) or unaddressed (averted gaze). Two object images followed each message and participants’ task was to choose the object that matched the message. Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to a level identical to that of addressees. That is, when speech processing suffers due to not being addressed, gesture processing remains intact and enhances the comprehension of a speaker’s message.
  • Huettig, F. (2013). Young children’s use of color information during language-vision mapping. In B. R. Kar (Ed.), Cognition and brain development: Converging evidence from various methodologies (pp. 368-391). Washington, DC: American Psychological Association Press.
  • Irvine, L., Roberts, S. G., & Kirby, S. (2013). A robustness approach to theory building: A case study of language evolution. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2614-2619). Retrieved from http://mindmodeling.org/cogsci2013/papers/0472/index.html.

    Abstract

    Models of cognitive processes often include simplifications, idealisations, and fictionalisations, so how should we learn about cognitive processes from such models? Particularly in cognitive science, when many features of the target system are unknown, it is not always clear which simplifications, idealisations, and so on, are appropriate for a research question, and which are highly misleading. Here we use a case-study from studies of language evolution, and ideas from philosophy of science, to illustrate a robustness approach to learning from models. Robust properties are those that arise across a range of models, simulations and experiments, and can be used to identify key causal structures in the models, and the phenomenon, under investigation. For example, in studies of language evolution, the emergence of compositional structure is a robust property across models, simulations and experiments of cultural transmission, but only under pressures for learnability and expressivity. This arguably illustrates the principles underlying real cases of language evolution. We provide an outline of the robustness approach, including its limitations, and suggest that this methodology can be productively used throughout cognitive science. Perhaps of most importance, it suggests that different modelling frameworks should be used as tools to identify the abstract properties of a system, rather than being definitive expressions of theories.
  • De Jong, N. H., & Bosker, H. R. (2013). Choosing a threshold for silent pauses to measure second language fluency. In R. Eklund (Ed.), Proceedings of the 6th Workshop on Disfluency in Spontaneous Speech (DiSS) (pp. 17-20).

    Abstract

    Second language (L2) research often involves analyses of acoustic measures of fluency. The studies investigating fluency, however, have been difficult to compare because the measures of fluency that were used differed widely. One of the differences between studies concerns the lower cut-off point for silent pauses, which has been set anywhere between 100 ms and 1000 ms. The goal of this paper is to find an optimal cut-off point. We calculate acoustic measures of fluency using different pause thresholds and then relate these measures to a measure of L2 proficiency and to ratings on fluency.
  • Jordan, F. (2013). Comparative phylogenetic methods and the study of pattern and process in kinship. In P. McConvell, I. Keen, & R. Hendery (Eds.), Kinship systems: Change and reconstruction (pp. 43-58). Salt Lake City, UT: University of Utah Press.

    Abstract

    Anthropology began by comparing aspects of kinship across cultures, while linguists interested in semantic domains such as kinship necessarily compare across languages. In this chapter I show how phylogenetic comparative methods from evolutionary biology can be used to study evolutionary processes relating to kinship and kinship terminologies across language and culture.
  • Jordan, F. M., van Schaik, C. P., Francois, P., Gintis, H., Haun, D. B. M., Hruschka, D. H., Janssen, M. A., Kitts, J. A., Lehmann, L., Mathew, S., Richerson, P. J., Turchin, P., & Wiessner, P. (2013). Cultural evolution of the structure of human groups. In P. J. Richerson, & M. H. Christiansen (Eds.), Cultural Evolution: Society, technology, language, and religion (pp. 87-116). Cambridge, MA: MIT Press.
  • Jordens, P. (2013). Dummies and auxiliaries in the acquisition of L1 and L2 Dutch. In E. Blom, I. Van de Craats, & J. Verhagen (Eds.), Dummy Auxiliaries in First and Second Language Acquisition (pp. 341-368). Berlin: Mouton de Gruyter.
  • Kallmeyer, L., Osswald, R., & Van Valin Jr., R. D. (2013). Tree wrapping for Role and Reference Grammar. In G. Morrill, & M.-J. Nederhof (Eds.), Formal grammar: 17th and 18th International Conferences, FG 2012/2013, Opole, Poland, August 2012: revised Selected Papers, Düsseldorf, Germany, August 2013: proceedings (pp. 175-190). Heidelberg: Springer.
  • Khetarpal, N., Neveu, G., Majid, A., Michael, L., & Regier, T. (2013). Spatial terms across languages support near-optimal communication: Evidence from Peruvian Amazonia, and computational analyses. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (pp. 764-769). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0158/index.html.

    Abstract

    Why do languages have the categories they do? It has been argued that spatial terms in the world’s languages reflect categories that support highly informative communication, and that this accounts for the spatial categories found across languages. However, this proposal has been tested against only nine languages, and in a limited fashion. Here, we consider two new languages: Maijɨki, an under-documented language of Peruvian Amazonia, and English. We analyze spatial data from these two new languages and the original nine, using thorough and theoretically targeted computational tests. The results support the hypothesis that spatial terms across dissimilar languages enable near-optimally informative communication, over an influential competing hypothesis.
  • Kidd, E., Bavin, S. L., & Brandt, S. (2013). The role of the lexicon in the development of the language processor. In D. Bittner, & N. Ruhlig (Eds.), Lexical bootstrapping: The role of lexis and semantics in child language development (pp. 217-244). Berlin: De Gruyter Mouton.
  • Klein, W. (2013). Basic variety. In P. Robinson (Ed.), The Routledge encyclopedia of second language acquisition (pp. 64-65). New York: Routledge.
  • Klein, W. (2013). European Science Foundation (ESF) Project. In P. Robinson (Ed.), The Routledge encyclopedia of second language acquisition (pp. 220-221). New York: Routledge.
  • Klein, W. (2013). L'effettivo declino e la crescita potenziale della lessicografia tedesca. In N. Maraschio, D. De Martiono, & G. Stanchina (Eds.), L'italiano dei vocabolari: Atti di La piazza delle lingue 2012 (pp. 11-20). Firenze: Accademia della Crusca.
  • Klein, W. (2013). Von Reichtum und Armut des deutschen Wortschatzes. In Deutsche Akademie für Sprache und Dichtung, & Union der deutschen Akademien der Wissenschaften (Eds.), Reichtum und Armut der deutschen Sprache (pp. 15-55). Boston: de Gruyter.
  • Kristoffersen, J. H., Troelsgard, T., & Zwitserlood, I. (2013). Issues in sign language lexicography. In H. Jackson (Ed.), The Bloomsbury companion to lexicography (pp. 259-283). London: Bloomsbury.
  • Ladd, D. R., & Dediu, D. (2013). Genes and linguistic tone. In H. Pashler (Ed.), Encyclopedia of the mind (pp. 372-373). London: Sage Publications.

    Abstract

    It is usually assumed that the language spoken by a human community is independent of the community's genetic makeup, an assumption supported by an overwhelming amount of evidence. However, the possibility that language is influenced by its speakers' genes cannot be ruled out a priori, and a recently discovered correlation between the geographic distribution of tone languages and two human genes seems to point to a genetically influenced bias affecting language. This entry describes this specific correlation and highlights its major implications. Voice pitch has a variety of communicative functions. Some of these are probably universal, such as conveying information about the speaker's sex, age, and emotional state. In many languages, including the European languages, voice pitch also conveys certain sentence-level meanings such as signaling that an utterance is a question or an exclamation; these uses of pitch are known as intonation. Some languages, however, known as tone languages, ...
  • Lausberg, H., & Sloetjes, H. (2013). NEUROGES in combination with the annotation tool ELAN. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 199-200). Frankfurt a/M: Lang.
  • Lenkiewicz, A., & Drude, S. (2013). Automatic annotation of linguistic 2D and Kinect recordings with the Media Query Language for Elan. In Proceedings of Digital Humanities 2013 (pp. 276-278).

    Abstract

    Research in body language with use of gesture recognition and speech analysis has gained much attention in the recent times, influencing disciplines related to image and speech processing. This study aims to design the Media Query Language (MQL) (Lenkiewicz, et al. 2012) combined with the Linguistic Media Query Interface (LMQI) for Elan (Wittenburg, et al. 2006). The system integrated with the new achievements in audio-video recognition will allow querying media files with predefined gesture phases (or motion primitives) and speech characteristics as well as combinations of both. For the purpose of this work the predefined motions and speech characteristics are called patterns for atomic elements and actions for a sequence of patterns. The main assumption is that a user-customized library of patterns and actions and automated media annotation with LMQI will reduce annotation time, hence decreasing costs of creation of annotated corpora. Increase of the number of annotated data should influence the speed and number of possible research in disciplines in which human multimodal interaction is a subject of interest and where annotated corpora are required.
  • Levelt, W. J. M., & Plomp, R. (1962). Musical consonance and critical bandwidth. In Proceedings of the 4th International Congress on Acoustics (p. 55).
  • Levelt, W. J. M. (1962). Motion breaking and the perception of causality. In A. Michotte (Ed.), Causalité, permanence et réalité phénoménales: Etudes de psychologie expérimentale (pp. 244-258). Louvain: Publications Universitaires.
  • Levelt, W. J. M. (1966). The perceptual conflict in binocular rivalry. In M. A. Bouman (Ed.), Studies in perception: Dedicated to M.A. Bouman (pp. 47-60). Soesterberg: Institute for Perception RVO-TNO.
  • Levinson, S. C. (2013). Action formation and ascription. In T. Stivers, & J. Sidnell (Eds.), The handbook of conversation analysis (pp. 103-130). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch6.

    Abstract

    Since the core matrix for language use is interaction, the main job of language is not to express propositions or abstract meanings, but to deliver actions. For in order to respond in interaction we have to ascribe to the prior turn a primary ‘action’ – variously thought of as an ‘illocution’, ‘speech act’, ‘move’, etc. – to which we then respond. The analysis of interaction also relies heavily on attributing actions to turns, so that, e.g., sequences can be characterized in terms of actions and responses. Yet the process of action ascription remains way understudied. We don’t know much about how it is done, when it is done, nor even what kind of inventory of possible actions might exist, or the degree to which they are culturally variable. The study of action ascription remains perhaps the primary unfulfilled task in the study of language use, and it needs to be tackled from conversation-analytic, psycholinguistic, cross-linguistic and anthropological perspectives. In this talk I try to take stock of what we know, and derive a set of goals for and constraints on an adequate theory. Such a theory is likely to employ, I will suggest, a top-down plus bottom-up account of action perception, and a multi-level notion of action which may resolve some of the puzzles that have repeatedly arisen.
  • Levinson, S. C. (2013). Cross-cultural universals and communication structures. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 67-80). Cambridge, MA: MIT Press.

    Abstract

    Given the diversity of languages, it is unlikely that the human capacity for language resides in rich universal syntactic machinery. More likely, it resides centrally in the capacity for vocal learning combined with a distinctive ethology for communicative interaction, which together (no doubt with other capacities) make diverse languages learnable. This chapter focuses on face-to-face communication, which is characterized by the mapping of sounds and multimodal signals onto speech acts and which can be deeply recursively embedded in interaction structure, suggesting an interactive origin for complex syntax. These actions are recognized through Gricean intention recognition, which is a kind of “mirroring” or simulation distinct from the classic mirror neuron system. The multimodality of conversational interaction makes evident the involvement of body, hand, and mouth, where the burden on these can be shifted, as in the use of speech and gesture, or hands and face in sign languages. Such shifts have taken place during the course of human evolution. All this suggests a slightly different approach to the mystery of music, whose origins should also be sought in joint action, albeit with a shift from turn-taking to simultaneous expression, and with an affective quality that may tap ancient sources residual in primate vocalization. The deep connection of language to music can best be seen in the only universal form of music, namely song.
  • Levinson, S. C., & Dediu, D. (2013). The interplay of genetic and cultural factors in ongoing language evolution. In P. J. Richerson, & M. H. Christiansen (Eds.), Cultural evolution: Society, technology, language, and religion. Strüngmann Forum Reports, vol. 12 (pp. 219-232). Cambridge, Mass: MIT Press.
  • Majid, A. (2013). Olfactory language and cognition. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th annual meeting of the Cognitive Science Society (CogSci 2013) (p. 68). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0025/index.html.

    Abstract

    Since the cognitive revolution, a widely held assumption has been that—whereas content may vary across cultures—cognitive processes would be universal, especially those on the more basic levels. Even if scholars do not fully subscribe to this assumption, they often conceptualize, or tend to investigate, cognition as if it were universal (Henrich, Heine, & Norenzayan, 2010). The insight that universality must not be presupposed but scrutinized is now gaining ground, and cognitive diversity has become one of the hot (and controversial) topics in the field (Norenzayan & Heine, 2005). We argue that, for scrutinizing the cultural dimension of cognition, taking an anthropological perspective is invaluable, not only for the task itself, but for attenuating the home-field disadvantages that are inescapably linked to cross-cultural research (Medin, Bennis, & Chandler, 2010).
  • Majid, A. (2013). Psycholinguistics. In J. L. Jackson (Ed.), Oxford Bibliographies Online: Anthropology. Oxford: Oxford University Press.
  • Mishra, R. K., Olivers, C. N. L., & Huettig, F. (2013). Spoken language and the decision to move the eyes: To what extent are language-mediated eye movements automatic? In V. S. C. Pammi, & N. Srinivasan (Eds.), Progress in Brain Research: Decision making: Neural and behavioural approaches (pp. 135-149). New York: Elsevier.

    Abstract

    Recent eye-tracking research has revealed that spoken language can guide eye gaze very rapidly (and closely time-locked to the unfolding speech) toward referents in the visual world. We discuss whether, and to what extent, such language-mediated eye movements are automatic rather than subject to conscious and controlled decision-making. We consider whether language-mediated eye movements adhere to four main criteria of automatic behavior, namely, whether they are fast and efficient, unintentional, unconscious, and overlearned (i.e., arrived at through extensive practice). Current evidence indicates that language-driven oculomotor behavior is fast but not necessarily always efficient. It seems largely unintentional though there is also some evidence that participants can actively use the information in working memory to avoid distraction in search. Language-mediated eye movements appear to be for the most part unconscious and have all the hallmarks of an overlearned behavior. These data are suggestive of automatic mechanisms linking language to potentially referred-to visual objects, but more comprehensive and rigorous testing of this hypothesis is needed.
  • Ortega, G., & Ozyurek, A. (2013). Gesture-sign interface in hearing non-signers' first exposure to sign. In Proceedings of the Tilburg Gesture Research Meeting [TiGeR 2013].

    Abstract

    Natural sign languages and gestures are complex communicative systems that allow the incorporation of features of a referent into their structure. They differ, however, in that signs are more conventionalised because they consist of meaningless phonological parameters. There is some evidence that despite non-signers finding iconic signs more memorable they can have more difficulty at articulating their exact phonological components. In the present study, hearing non-signers took part in a sign repetition task in which they had to imitate as accurately as possible a set of iconic and arbitrary signs. Their renditions showed that iconic signs were articulated significantly less accurately than arbitrary signs. Participants were recalled six months later to take part in a sign generation task. In this task, participants were shown the English translation of the iconic signs they imitated six months prior. For each word, participants were asked to generate a sign (i.e., an iconic gesture). The handshapes produced in the sign repetition and sign generation tasks were compared to detect instances in which both renditions presented the same configuration. There was a significant correlation between articulation accuracy in the sign repetition task and handshape overlap. These results suggest some form of gestural interference in the production of iconic signs by hearing non-signers. We also suggest that in some instances non-signers may deploy their own conventionalised gesture when producing some iconic signs. These findings are interpreted as evidence that non-signers process iconic signs as gestures and that in production, only when sign and gesture have overlapping features will they be capable of producing the phonological components of signs accurately.
  • Osswald, R., & Van Valin Jr., R. D. (2013). FrameNet, frame structure and the syntax-semantics interface. In T. Gamerschlag, D. Gerland, R. Osswald, & W. Petersen (Eds.), Frames and concept types: Applications in language and philosophy. Heidelberg: Springer.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.

    Abstract

    In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
  • Piai, V., Roelofs, A., Jensen, O., Schoffelen, J.-M., & Bonnefond, M. (2013). Distinct patterns of brain activity characterize lexical activation and competition in speech production [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 106.

    Abstract

    A fundamental ability of speakers is to quickly retrieve words from long-term memory. According to a prominent theory, concepts activate multiple associated words, which enter into competition for selection. Previous electrophysiological studies have provided evidence for the activation of multiple alternative words, but did not identify brain responses reflecting competition. We report a magnetoencephalography study examining the timing and neural substrates of lexical activation and competition. The degree of activation of competing words was manipulated by presenting pictures (e.g., dog) simultaneously with distractor words. The distractors were semantically related to the picture name (cat), unrelated (pin), or identical (dog). Semantic distractors are stronger competitors to the picture name, because they receive additional activation from the picture, whereas unrelated distractors do not. Picture naming times were longer with semantic than with unrelated and identical distractors. The patterns of phase-locked and non-phase-locked activity were distinct but temporally overlapping. Phase-locked activity in left middle temporal gyrus, peaking at 400 ms, was larger on unrelated than semantic and identical trials, suggesting differential effort in processing the alternative words activated by the picture-word stimuli. Non-phase-locked activity in the 4-10 Hz range between 400-650 ms in left superior frontal gyrus was larger on semantic than unrelated and identical trials, suggesting different degrees of effort in resolving the competition among the alternative words, as reflected in the naming times. These findings characterize distinct patterns of brain activity associated with lexical activation and competition respectively, and their temporal relation, supporting the theory that words are selected by competition.
  • Plomp, R., & Levelt, W. J. M. (1966). Perception of tonal consonance. In M. A. Bouman (Ed.), Studies in Perception - dedicated to M.A. Bouman (pp. 105-118). Soesterberg: Institute for Perception RVO-TNO.
  • Ravignani, A., Gingras, B., Asano, R., Sonnweber, R., Matellan, V., & Fitch, W. T. (2013). The evolution of rhythmic cognition: New perspectives and technologies in comparative research. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Conference of the Cognitive Science Society (pp. 1199-1204). Austin, TX: Cognitive Science Society.

    Abstract

    Music is a pervasive phenomenon in human culture, and musical rhythm is virtually present in all musical traditions. Research on the evolution and cognitive underpinnings of rhythm can benefit from a number of approaches. We outline key concepts and definitions, allowing fine-grained analysis of rhythmic cognition in experimental studies. We advocate comparative animal research as a useful approach to answer questions about human music cognition and review experimental evidence from different species. Finally, we suggest future directions for research on the cognitive basis of rhythm. Apart from research in semi-natural setups, possibly allowed by “drum set for chimpanzees” prototypes presented here for the first time, mathematical modeling and systematic use of circular statistics may allow promising advances.
  • Roberts, S. G. (2013). A Bottom-up approach to the cultural evolution of bilingualism. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1229-1234). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0236/index.html.

    Abstract

    The relationship between individual cognition and cultural phenomena at the society level can be transformed by cultural transmission (Kirby, Dowman, & Griffiths, 2007). Top-down models of this process have typically assumed that individuals only adopt a single linguistic trait. Recent extensions include ‘bilingual’ agents, able to adopt multiple linguistic traits (Burkett & Griffiths, 2010). However, bilingualism is more than variation within an individual: it involves the conditional use of variation with different interlocutors. That is, bilingualism is a property of a population that emerges from use. A bottom-up simulation is presented where learners are sensitive to the identity of other speakers. The simulation reveals that dynamic social structures are a key factor for the evolution of bilingualism in a population, a feature that was abstracted away in the top-down models. Top-down and bottom-up approaches may lead to different answers, but can work together to reveal and explore important features of the cultural transmission process.
  • Roberts, L. (2013). Discourse processing. In P. Robinson (Ed.), The Routledge encyclopedia of second language acquisition (pp. 190-194). New York: Routledge.
  • Roberts, L. (2013). Sentence processing in bilinguals. In R. Van Gompel (Ed.), Sentence processing. London: Psychology Press.
  • Rossano, F. (2013). Gaze in conversation. In J. Sidnell, & T. Stivers (Eds.), The handbook of conversation analysis (pp. 308-329). Malden, MA: Wiley-Blackwell. doi:10.1002/9781118325001.ch15.

    Abstract

    This chapter contains sections titled: Introduction Background: The Gaze “Machinery” Gaze “Machinery” in Social Interaction Future Directions
  • Rumsey, A., San Roque, L., & Schieffelin, B. (2013). The acquisition of ergative marking in Kaluli, Ku Waru and Duna (Trans New Guinea). In E. L. Bavin, & S. Stoll (Eds.), The acquisition of ergativity (pp. 133-182). Amsterdam: Benjamins.

    Abstract

    In this chapter we present material on the acquisition of ergative marking on noun phrases in three languages of Papua New Guinea: Kaluli, Ku Waru, and Duna. The expression of ergativity in all the languages is broadly similar, but sensitive to language-specific features, and this pattern of similarity and difference is reflected in the available acquisition data. Children acquire adult-like ergative marking at about the same pace, reaching similar levels of mastery by 3;00 despite considerable differences in morphological complexity of ergative marking among the languages. What may be more important – as a factor in accounting for the relative uniformity of acquisition in this respect – are the similarities in patterns of interactional scaffolding that emerge from a comparison of the three cases.
  • Sauppe, S., Norcliffe, E., Konopka, A. E., Van Valin Jr., R. D., & Levinson, S. C. (2013). Dependencies first: Eye tracking evidence from sentence production in Tagalog. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1265-1270). Austin, TX: Cognitive Science Society.

    Abstract

    We investigated the time course of sentence formulation in Tagalog, a verb-initial language in which the verb obligatorily agrees with one of its arguments. Eye-tracked participants described pictures of transitive events. Fixations to the two characters in the events were compared across sentences differing in agreement marking and post-verbal word order. Fixation patterns show evidence for two temporally dissociated phases in Tagalog sentence production. The first, driven by verb agreement, involves early linking of concepts to syntactic functions; the second, driven by word order, involves incremental lexical encoding of these concepts. These results suggest that even the earliest stages of sentence formulation may be guided by a language's grammatical structure.
  • Scharenborg, O., & Janse, E. (2013). Changes in the role of intensity as a cue for fricative categorisation. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 3147-3151).

    Abstract

    Older listeners with high-frequency hearing loss rely more on intensity for categorisation of /s/ than normal-hearing older listeners. This study addresses the question whether this increased reliance comes about immediately when the need arises, i.e., in the face of a spectrally-degraded signal. A phonetic categorisation task was carried out using intensity-modulated fricatives in a clean and a low-pass filtered condition with two younger and two older listener groups. When high-frequency information was removed from the speech signal, younger listeners started using intensity as a cue. The older adults on the other hand, when presented with the low-pass filtered speech, did not rely on intensity differences for fricative identification. These results suggest that the reliance on intensity shown by the older hearing-impaired adults may have been acquired only gradually with longer exposure to a degraded speech signal.
  • Schepens, J., Van der Slik, F., & Van Hout, R. (2013). The effect of linguistic distance across Indo-European mother tongues on learning Dutch as a second language. In L. Borin, & A. Saxena (Eds.), Approaches to measuring linguistic differences (pp. 199-230). Berlin: Mouton de Gruyter.
  • Scott, K., Sakkalou, E., Ellis-Davies, K., Hilbrink, E., Hahn, U., & Gattis, M. (2013). Infant contributions to joint attention predict vocabulary development. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3384-3389). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0602/index.html.

    Abstract

    Joint attention has long been accepted as constituting a privileged circumstance in which word learning prospers. Consequently, research has investigated the role that maternal responsiveness to infant attention plays in predicting language outcomes. However, there has been a recent expansion in research implicating similar predictive effects from individual differences in infant behaviours. Emerging from the foundations of such work comes an interesting question: do the relative contributions of the mother and infant to joint attention episodes impact upon language learning? In an attempt to address this, two joint attention behaviours were assessed as predictors of vocabulary attainment (as measured by OCDI Production Scores). These predictors were: mothers encouraging attention to an object given their infant was already attending to an object (maternal follow-in); and infants looking to an object given their mother's encouragement of attention to an object (infant follow-in). In a sample of 14-month-old children (N=36) we compared the predictive power of these maternal and infant follow-in variables on concurrent and later language performance. Results using Growth Curve Analysis provided evidence that while both maternal follow-in and infant follow-in variables contributed to production scores, infant follow-in was a stronger predictor. Consequently, it does appear to matter whose final contribution establishes joint attention episodes. Infants who more often follow in on their mothers' encouragement of attention have larger, faster-growing vocabularies between 14 and 18 months of age.
  • Scott, S. K., McGettigan, C., & Eisner, F. (2013). The neural basis of links and dissociations between speech perception and production. In J. J. Bolhuis, & M. Everaert (Eds.), Birdsong, speech and language: Exploring the evolution of mind and brain (pp. 277-294). Cambridge, Mass: MIT Press.
  • Senft, G. (2013). Ethnolinguistik. In B. Beer, & H. Fischer (Eds.), Ethnologie - Einführung und Überblick. (8. Auflage, pp. 271-286). Berlin: Reimer.
  • Senghas, A., Ozyurek, A., & Goldin-Meadow, S. (2013). Homesign as a way-station between co-speech gesture and sign language: The evolution of segmenting and sequencing. In R. Botha, & M. Everaert (Eds.), The evolutionary emergence of language: Evidence and inference (pp. 62-77). Oxford: Oxford University Press.
  • Seuren, P. A. M. (1966). Het probleem van de woorddefinitie. In Handelingen van het 29ste Nederlands Filologencongres (pp. 103-108).
  • Seuren, P. A. M. (2013). The logico-philosophical tradition. In K. Allan (Ed.), The Oxford handbook of the history of linguistics (pp. 537-554). Oxford: Oxford University Press.
  • Shayan, S., Moreira, A., Windhouwer, M., Koenig, A., & Drude, S. (2013). LEXUS 3 - a collaborative environment for multimedia lexica. In Proceedings of the Digital Humanities Conference 2013 (pp. 392-395).
  • Sloetjes, H. (2013). Step by step introduction in NEUROGES coding with ELAN. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 201-212). Frankfurt a/M: Lang.
  • Sloetjes, H. (2013). The ELAN annotation tool. In H. Lausberg (Ed.), Understanding body movement: A guide to empirical research on nonverbal behaviour with an introduction to the NEUROGES coding system (pp. 193-198). Frankfurt a/M: Lang.
  • Smith, A. C., Monaghan, P., & Huettig, F. (2013). Modelling the effects of formal literacy training on language mediated visual attention. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3420-3425). Austin, TX: Cognitive Science Society.

    Abstract

    Recent empirical evidence suggests that language-mediated eye gaze is partly determined by level of formal literacy training. Huettig, Singh and Mishra (2011) showed that high-literate individuals' eye gaze was closely time-locked to phonological overlap between a spoken target word and items presented in a visual display. In contrast, low-literate individuals' eye gaze was not related to phonological overlap, but was instead strongly influenced by semantic relationships between items. Our present study tests the hypothesis that this behavior is an emergent property of an increased ability to extract phonological structure from the speech signal, as in the case of high-literates, with low-literates more reliant on coarser-grained structure. This hypothesis was tested using a neural network model that integrates linguistic information extracted from the speech signal with visual and semantic information within a central resource. We demonstrate that contrasts in fixation behavior similar to those observed between high and low literates emerge when models are trained on speech signals of contrasting granularity.
  • Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Acquisition of locative expressions in children learning Turkish Sign Language (TİD) and Turkish. In E. Arik (Ed.), Current directions in Turkish Sign Language research (pp. 243-272). Newcastle upon Tyne: Cambridge Scholars Publishing.

    Abstract

    In sign languages, where space is often used to talk about space, expressions of spatial relations (e.g., ON, IN, UNDER, BEHIND) may rely on analogue mappings of real space onto signing space. In contrast, spoken languages express space in mostly categorical ways (e.g. adpositions). This raises interesting questions about the role of language modality in the acquisition of expressions of spatial relations. However, whether and to what extent modality influences the acquisition of spatial language is controversial – mostly due to the lack of direct comparisons of Deaf children to Deaf adults and to age-matched hearing children in similar tasks. Furthermore, previous studies have taken English as the only model for spoken language development of spatial relations. Therefore, we present a balanced study in which spatial expressions by deaf and hearing children in two different age-matched groups (preschool children and school-age children) are systematically compared, as well as compared to the spatial expressions of adults. All participants performed the same tasks, describing angular (LEFT, RIGHT, FRONT, BEHIND) and non-angular spatial configurations (IN, ON, UNDER) of different objects (e.g. apple in box; car behind box). The analysis of the descriptions with non-angular spatial relations does not show an effect of modality on the development of locative expressions in TİD and Turkish. However, preliminary results of the analysis of expressions of angular spatial relations suggest that signers provide angular information in their spatial descriptions more frequently than Turkish speakers in all three age groups, thus showing a potentially different developmental pattern in this domain. Implications of the findings with regard to the development of relations in spatial language and cognition will be discussed.
  • Sumner, M., Kurumada, C., Gafter, R., & Casillas, M. (2013). Phonetic variation and the recognition of words with pronunciation variants. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 3486-3492). Austin, TX: Cognitive Science Society.
  • Ten Bosch, L., Boves, L., & Ernestus, M. (2013). Towards an end-to-end computational model of speech comprehension: simulating a lexical decision task. In Proceedings of INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association (pp. 2822-2826).

    Abstract

    This paper describes a computational model of speech comprehension that takes the acoustic signal as input and predicts reaction times as observed in an auditory lexical decision task. By doing so, we explore a new generation of end-to-end computational models that are able to simulate the behaviour of human subjects participating in a psycholinguistic experiment. So far, nearly all computational models of speech comprehension start not from the speech signal itself but from abstract representations of the speech signal, while the few existing models that do start from the acoustic signal cannot directly model reaction times as obtained in comprehension experiments. The main functional components in our model are the perception stage, which is compatible with the psycholinguistic model Shortlist B and is implemented with techniques from automatic speech recognition, and the decision stage, which is based on the linear ballistic accumulator decision model. We successfully tested our model against data from 20 participants performing a large-scale auditory lexical decision experiment. Analyses show that the model is a good predictor of the average judgment and reaction time for each word.
  • Thompson-Schill, S., Hagoort, P., Dominey, P. F., Honing, H., Koelsch, S., Ladd, D. R., Lerdahl, F., Levinson, S. C., & Steedman, M. (2013). Multiple levels of structure in language and music. In M. A. Arbib (Ed.), Language, music, and the brain: A mysterious relationship (pp. 289-303). Cambridge, MA: MIT Press.

    Abstract

    A forum devoted to the relationship between music and language begins with an implicit assumption: There is at least one common principle that is central to all human musical systems and all languages, but that is not characteristic of (most) other domains. Why else should these two categories be paired together for analysis? We propose that one candidate for a common principle is their structure. In this chapter, we explore the nature of that structure—and its consequences for psychological and neurological processing mechanisms—within and across these two domains.
  • Timmer, K., Ganushchak, L. Y., Mitlina, Y., & Schiller, N. O. (2013). Choosing first or second language phonology in 125 ms [Abstract]. Journal of Cognitive Neuroscience, 25 Suppl., 164.

    Abstract

    We are often in a bilingual situation (e.g., overhearing a conversation in the train). We investigated whether first (L1) and second language (L2) phonologies are automatically activated. A masked priming paradigm was used, with Russian words as targets and either Russian or English words as primes. Event-related potentials (ERPs) were recorded while Russian (L1) – English (L2) bilinguals read aloud L1 target words (e.g. РЕЙС /reis/ ‘flight’) primed with either L1 (e.g. РАНА /rana/ ‘wound’) or L2 words (e.g. PACK). Target words were read faster when they were preceded by phonologically related L1 primes but not by orthographically related L2 primes. ERPs showed orthographic priming in the 125-200 ms time window. Thus, both L1 and L2 phonologies are simultaneously activated during L1 reading. The results provide support for non-selective models of bilingual reading, which assume automatic activation of the non-target language phonology even when it is not required by the task.
  • Ünal, E., & Papafragou, A. (2013). Linguistic and conceptual representations of inference as a knowledge source. In S. Baiz, N. Goldman, & R. Hawkes (Eds.), Proceedings of the 37th Annual Boston University Conference on Language Development (BUCLD 37) (pp. 433-443). Boston: Cascadilla Press.
  • Van Valin Jr., R. D. (2013). Head-marking languages and linguistic theory. In B. Bickel, L. A. Grenoble, D. A. Peterson, & A. Timberlake (Eds.), Language typology and historical contingency: In honor of Johanna Nichols (pp. 91-124). Amsterdam: Benjamins.

    Abstract

    In her path-breaking 1986 paper, Johanna Nichols proposed a typological contrast between head-marking and dependent-marking languages. Nichols argues that even though the syntactic relations between the head and its dependents are the same in both types of language, the syntactic “bond” between them is not the same; in dependent-marking languages it is one of government, whereas in head-marking languages it is one of apposition. This distinction raises an important question for linguistic theory: How can this contrast – government versus apposition – which can show up in all of the major phrasal types in a language, be captured? The purpose of this paper is to explore the various approaches that have been taken in an attempt to capture the difference between head-marked and dependent-marked syntax in different linguistic theories. The basic problem that head-marking languages pose for syntactic theory will be presented, and then generative approaches will be discussed. The analysis of head-marked structure in Role and Reference Grammar will be presented.
  • Van Valin Jr., R. D. (2013). Lexical representation, co-composition, and linking syntax and semantics. In J. Pustejovsky, P. Bouillon, H. Isahara, K. Kanzaki, & C. Lee (Eds.), Advances in generative lexicon theory (pp. 67-107). Dordrecht: Springer.
  • Van Putten, S. (2013). The meaning of the Avatime additive particle tsye. In M. Balbach, L. Benz, S. Genzel, M. Grubic, A. Renans, S. Schalowski, M. Stegenwallner, & A. Zeldes (Eds.), Information structure: Empirical perspectives on theory (pp. 55-74). Potsdam: Universitätsverlag Potsdam. Retrieved from http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:de:kobv:517-opus-64804.
  • Vernes, S. C., & Fisher, S. E. (2013). Genetic pathways implicated in speech and language. In S. Helekar (Ed.), Animal models of speech and language disorders (pp. 13-40). New York: Springer. doi:10.1007/978-1-4614-8400-4_2.

    Abstract

    Disorders of speech and language are highly heritable, providing strong support for a genetic basis. However, the underlying genetic architecture is complex, involving multiple risk factors. This chapter begins by discussing genetic loci associated with common multifactorial language-related impairments and goes on to detail the only gene (known as FOXP2) to be directly implicated in a rare monogenic speech and language disorder. Although FOXP2 was initially uncovered in humans, model systems have been invaluable in progressing our understanding of the function of this gene and its associated pathways in language-related areas of the brain. Research in species from mouse to songbird has revealed effects of this gene on relevant behaviours including acquisition of motor skills and learned vocalisations, and has demonstrated a role for Foxp2 in neuronal connectivity and signalling, particularly in the striatum. Animal models have also facilitated the identification of wider neurogenetic networks thought to be involved in language development and disorder and allowed the investigation of new candidate genes for disorders involving language, such as CNTNAP2 and FOXP1. Ongoing work in animal models promises to yield new insights into the genetic and neural mechanisms underlying human speech and language.
  • Windhouwer, M., Petro, J., Newskaya, I., Drude, S., Aristar-Dry, H., & Gippert, J. (2013). Creating a serialization of LMF: The experience of the RELISH project. In G. Francopoulo (Ed.), LMF - Lexical Markup Framework (pp. 215-226). London: Wiley.
  • Windhouwer, M., & Wright, S. E. (2013). LMF and the Data Category Registry: Principles and application. In G. Francopoulo (Ed.), LMF: Lexical Markup Framework (pp. 41-50). London: Wiley.
  • Wittenburg, P., & Ringersma, J. (2013). Metadata description for lexicons. In R. H. Gouws, U. Heid, W. Schweickard, & H. E. Wiegand (Eds.), Dictionaries: An international encyclopedia of lexicography: Supplementary volume: Recent developments with focus on electronic and computational lexicography (pp. 1329-1335). Berlin: Mouton de Gruyter.