Publications

  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A. (2003). The perception of speech: Psycholinguistic aspects. In W. Frawley (Ed.), International encyclopaedia of linguistics (pp. 154-157). Oxford: Oxford University Press.
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On Speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus. (pp. 42-45). Amsterdam: Boom.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A. (1989). Auditory lexical access: Where do we start? In W. Marslen-Wilson (Ed.), Lexical representation and process (pp. 342-356). Cambridge, MA: MIT Press.

    Abstract

    The lexicon, considered as a component of the process of recognizing speech, is a device that accepts a sound image as input and outputs meaning. Lexical access is the process of formulating an appropriate input and mapping it onto an entry in the lexicon's store of sound images matched with their meanings. This chapter addresses the problems of auditory lexical access from continuous speech. The central argument to be proposed is that utterance prosody plays a crucial role in the access process. Continuous listening faces problems that are not present in visual recognition (reading) or in noncontinuous recognition (understanding isolated words). Aspects of utterance prosody offer a solution to these particular problems.
  • Cutler, A. (1972). A note on a reference by J.D. McCawley to adjectives denoting temperature. Linguistics, 87.
  • Cutler, A., & Clifton, Jr., C. (1999). Comprehending spoken language: A blueprint of the listener. In C. M. Brown, & P. Hagoort (Eds.), The neurocognition of language (pp. 123-166). Oxford University Press.
  • Cutler, A. (1985). Cross-language psycholinguistics. Linguistics, 23, 659-667.
  • Cutler, A. (1972). Describing a semantic field. ITL Review of Applied Linguistics, 15, 67-73.
  • Cutler, A., Mister, E., Norris, D., & Sebastián-Gallés, N. (2004). La perception de la parole en espagnol: Un cas particulier? In L. Ferrand, & J. Grainger (Eds.), Psycholinguistique cognitive: Essais en l'honneur de Juan Segui (pp. 57-74). Brussels: De Boeck.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A. (1999). Foreword. In Slips of the Ear: Errors in the Perception of Casual Conversation (pp. xiii-xv). New York City, NY, USA: Academic Press.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another (a coup stick snot with standing). The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A., & McQueen, J. M. (2014). How prosody is both mandatory and optional. In J. Caspers, Y. Chen, W. Heeren, J. Pacilly, N. O. Schiller, & E. Van Zanten (Eds.), Above and Beyond the Segments: Experimental linguistics and phonetics (pp. 71-82). Amsterdam: Benjamins.

    Abstract

    Speech signals originate as a sequence of linguistic units selected by speakers, but these units are necessarily realised in the suprasegmental dimensions of time, frequency and amplitude. For this reason prosodic structure has been viewed as a mandatory target of language processing by both speakers and listeners. In apparent contradiction, however, prosody has also been argued to be ancillary rather than core linguistic structure, making processing of prosodic structure essentially optional. In the present tribute to one of the luminaries of prosodic research for the past quarter century, we review evidence from studies of the processing of lexical stress and focal accent which reconciles these views and shows that both claims are, each in their own way, fully true.
  • Cutler, A. (2014). In thrall to the vocabulary. Acoustics Australia, 42, 84-89.

    Abstract

    Vocabularies contain hundreds of thousands of words built from only a handful of phonemes; longer words inevitably tend to contain shorter ones. Recognising speech thus requires distinguishing intended words from accidentally present ones. Acoustic information in speech is used wherever it contributes significantly to this process; but as this review shows, its contribution differs across languages, with the consequences of this including: identical and equivalently present information distinguishing the same phonemes being used in Polish but not in German, or in English but not in Italian; identical stress cues being used in Dutch but not in English; expectations about likely embedding patterns differing across English, French, Japanese.
  • Cutler, A., Howard, D., & Patterson, K. E. (1989). Misplaced stress on prosody: A reply to Black and Byng. Cognitive Neuropsychology, 6, 67-83.

    Abstract

    The recent claim by Black and Byng (1986) that lexical access in reading is subject to prosodic constraints is examined and found to be unsupported. The evidence from impaired reading which Black and Byng report is based on poorly controlled stimulus materials and is inadequately analysed and reported. An alternative explanation of their findings is proposed, and new data are reported for which this alternative explanation can account but their model cannot. Finally, their proposal is shown to be theoretically unmotivated and in conflict with evidence from normal reading.
  • Cutler, A. (1976). High-stress words are easier to perceive than low-stress words, even when they are equally stressed. Texas Linguistic Forum, 2, 53-57.
  • Cutler, A., & Butterfield, S. (1989). Natural speech cues to word segmentation under difficult listening conditions. In J. Tubach, & J. Mariani (Eds.), Proceedings of Eurospeech 89: European Conference on Speech Communication and Technology: Vol. 2 (pp. 372-375). Edinburgh: CEP Consultants.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In three experiments, we examined how word boundaries are produced in deliberately clear speech. We found that speakers do indeed attempt to mark word boundaries; moreover, they differentiate between word boundaries in a way which suggests they are sensitive to listener needs. Application of heuristic segmentation strategies makes word boundaries before strong syllables easiest for listeners to perceive; but under difficult listening conditions speakers pay more attention to marking word boundaries before weak syllables, i.e. they mark those boundaries which are otherwise particularly hard to perceive.
  • Cutler, A., & Pearson, M. (1985). On the analysis of prosodic turn-taking cues. In C. Johns-Lewis (Ed.), Intonation in discourse (pp. 139-155). London: Croom Helm.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A. (1985). Performance measures of lexical complexity. In G. Hoppenbrouwers, P. A. Seuren, & A. Weijters (Eds.), Meaning and the lexicon (pp. 75). Dordrecht: Foris.
  • Cutler, A. (1976). Phoneme-monitoring reaction time as a function of preceding intonation contour. Perception and Psychophysics, 20, 55-60. Retrieved from http://www.psychonomic.org/search/view.cgi?id=18194.

    Abstract

    An acoustically invariant one-word segment occurred in two versions of one syntactic context. In one version, the preceding intonation contour indicated that a stress would fall at the point where this word occurred. In the other version, the preceding contour predicted reduced stress at that point. Reaction time to the initial phoneme of the word was faster in the former case, despite the fact that no acoustic correlates of stress were present. It is concluded that a part of the sentence comprehension process is the prediction of upcoming sentence accents.
  • Cutler, A. (1986). Phonological structure in speech recognition. Phonology Yearbook, 3, 161-178. Retrieved from http://www.jstor.org/stable/4615397.

    Abstract

    Two bodies of recent research from experimental psycholinguistics are summarised, each of which is centred upon a concept from phonology: LEXICAL STRESS and the SYLLABLE. The evidence indicates that neither construct plays a role in prelexical representations during speech recognition. Both constructs, however, are well supported by other performance evidence. Testing phonological claims against performance evidence from psycholinguistics can be difficult, since the results of studies designed to test processing models are often of limited relevance to phonological theory.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1999). Prosodische Struktur und Worterkennung bei gesprochener Sprache. In A. D. Friederici (Ed.), Enzyklopädie der Psychologie: Sprachrezeption (pp. 49-83). Göttingen: Hogrefe.
  • Cutler, A. (1999). Prosody and intonation, processing issues. In R. A. Wilson, & F. C. Keil (Eds.), MIT encyclopedia of the cognitive sciences (pp. 682-683). Cambridge, MA: MIT Press.
  • Cutler, A., & Swinney, D. A. (1986). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.

    Abstract

    Four studies are reported in which young children’s response time to detect word targets was measured. Children under about six years of age did not show the response time advantage for accented target words which adult listeners show. When the semantic focus of the target word was manipulated independently of accent, children of about five years of age showed an adult-like response time advantage for focussed targets, but children younger than five did not. It is argued that the processing advantage for accented words reflects the semantic role of accent as an expression of sentence focus. Processing advantages for accented words depend on the prior development of representations of sentence semantic structure, including the concept of focus. The previous literature on the development of prosodic competence shows an apparent anomaly in that young children’s productive skills appear to outstrip their receptive skills; however, this anomaly disappears if very young children’s prosody is assumed to be produced without an underlying representation of the relationship between prosody and semantics.
  • Cutler, A., & Norris, D. (1999). Sharpening Ockham’s razor (Commentary on W.J.M. Levelt, A. Roelofs & A.S. Meyer: A theory of lexical access in speech production). Behavioral and Brain Sciences, 22, 40-41.

    Abstract

    Language production and comprehension are intimately interrelated; and models of production and comprehension should, we argue, be constrained by common architectural guidelines. Levelt et al.'s target article adopts as guiding principle Ockham's razor: the best model of production is the simplest one. We recommend adoption of the same principle in comprehension, with consequent simplification of some well-known types of models.
  • Cutler, A. (1999). Spoken-word recognition. In R. A. Wilson, & F. C. Keil (Eds.), MIT encyclopedia of the cognitive sciences (pp. 796-798). Cambridge, MA: MIT Press.
  • Cutler, A. (1989). Straw modules [Commentary/Massaro: Speech perception]. Behavioral and Brain Sciences, 12, 760-762.
  • Cutler, A. (1984). Stress and accent in language production and understanding. In D. Gibbon, & H. Richter (Eds.), Intonation, accent and rhythm: Studies in discourse phonology (pp. 77-90). Berlin: de Gruyter.
  • Cutler, A., & Otake, T. (1999). Pitch accent in spoken-word recognition in Japanese. Journal of the Acoustical Society of America, 105, 1877-1888.

    Abstract

    Three experiments addressed the question of whether pitch-accent information may be exploited in the process of recognizing spoken words in Tokyo Japanese. In a two-choice classification task, listeners judged from which of two words, differing in accentual structure, isolated syllables had been extracted (e.g., ka from baka HL or gaka LH); most judgments were correct, and listeners’ decisions were correlated with the fundamental frequency characteristics of the syllables. In a gating experiment, listeners heard initial fragments of words and guessed what the words were; their guesses overwhelmingly had the same initial accent structure as the gated word even when only the beginning CV of the stimulus (e.g., na- from nagasa HLL or nagashi LHH) was presented. In addition, listeners were more confident in guesses with the same initial accent structure as the stimulus than in guesses with different accent. In a lexical decision experiment, responses to spoken words (e.g., ame HL) were speeded by previous presentation of the same word (e.g., ame HL) but not by previous presentation of a word differing only in accent (e.g., ame LH). Together these findings provide strong evidence that accentual information constrains the activation and selection of candidates for spoken-word recognition.
  • Cutler, A. (1989). The new Victorians. New Scientist, (1663), 66.
  • Cutler, A., & Butterfield, S. (1986). The perceptual integrity of initial consonant clusters. In R. Lawrence (Ed.), Speech and Hearing: Proceedings of the Institute of Acoustics (pp. 31-36). Edinburgh: Institute of Acoustics.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université d'Aix-en-Provence.
  • Cutler, A., Hawkins, J. A., & Gilligan, G. (1985). The suffixing preference: A processing explanation. Linguistics, 23, 723-758.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400. doi:10.1016/0749-596X(86)90033-1.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A., Van Ooijen, B., & Norris, D. (1999). Vowels, consonants, and lexical activation. In J. Ohala, Y. Hasegawa, M. Ohala, D. Granville, & A. Bailey (Eds.), Proceedings of the Fourteenth International Congress of Phonetic Sciences: Vol. 3 (pp. 2053-2056). Berkeley: University of California.

    Abstract

    Two lexical decision studies examined the effects of single-phoneme mismatches on lexical activation in spoken-word recognition. One study was carried out in English, and involved spoken primes and visually presented lexical decision targets. The other study was carried out in Dutch, and primes and targets were both presented auditorily. Facilitation was found only for spoken targets preceded immediately by spoken primes; no facilitation occurred when targets were presented visually, or when intervening input occurred between prime and target. The effects of vowel mismatches and consonant mismatches were equivalent.
  • Cutler, A., & Clifton, Jr., C. (1984). The use of prosodic information in word recognition. In H. Bouma, & D. G. Bouwhuis (Eds.), Attention and performance X: Control of language processes (pp. 183-196). London: Erlbaum.

    Abstract

    In languages with variable stress placement, lexical stress patterns can convey information about word identity. The experiments reported here address the question of whether lexical stress information can be used in word recognition. The results allow the following conclusions: 1. Prior information as to the number of syllables and lexical stress patterns of words and nonwords does not facilitate lexical decision responses (Experiment 1). 2. The strong correspondences between grammatical category membership and stress pattern in bisyllabic English words (strong-weak stress being associated primarily with nouns, weak-strong with verbs) are not exploited in the recognition of isolated words (Experiment 2). 3. When a change in lexical stress also involves a change in vowel quality, i.e., a segmental as well as a suprasegmental alteration, effects on word recognition are greater than when no segmental correlates of suprasegmental changes are involved (Experiments 2 and 3). 4. Despite the above finding, when all other factors are controlled, lexical stress information per se can indeed be shown to play a part in the word-recognition process (Experiment 3).
  • Cutler, A. (1986). Why readers of this newsletter should run cross-linguistic experiments. European Psycholinguistics Association Newsletter, 13, 4-8.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dalli, A., Tablan, V., Bontcheva, K., Wilks, Y., Broeder, D., Brugman, H., & Wittenburg, P. (2004). Web services architecture for language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 365-368). Paris: ELRA - European Language Resources Association.
  • Damian, M. F., & Abdel Rahman, R. (2003). Semantic priming in the naming of objects and famous faces. British Journal of Psychology, 94(4), 517-527.

    Abstract

    Researchers interested in face processing have recently debated whether access to the name of a known person occurs in parallel with retrieval of semantic-biographical codes, rather than in a sequential fashion. Recently, Schweinberger, Burton, and Kelly (2001) took a failure to obtain a semantic context effect in a manual syllable judgment task on names of famous faces as support for this position. In two experiments, we compared the effects of visually presented categorically related prime words with either objects (e.g. prime: animal; target: dog) or faces of celebrities (e.g. prime: actor; target: Bruce Willis) as targets. Targets were either manually categorized with regard to the number of syllables (as in Schweinberger et al.), or they were overtly named. For neither objects nor faces was semantic priming obtained in syllable decisions; crucially, however, priming was obtained when objects and faces were overtly named. These results suggest that both face and object naming are susceptible to semantic context effects
  • Dautriche, I., Cristia, A., Brusini, P., Yuan, S., Fisher, C., & Christophe, A. (2014). Toddlers default to canonical surface-to-meaning mapping when learning verbs. Child Development, 85(3), 1168-1180. doi:10.1111/cdev.12183.

    Abstract

    This work was supported by grants from the French Agence Nationale de la Recherche (ANR-2010-BLAN-1901) and from French Fondation de France to Anne Christophe, from the National Institute of Child Health and Human Development (HD054448) to Cynthia Fisher, Fondation Fyssen and Ecole de Neurosciences de Paris to Alex Cristia, and a PhD fellowship from the Direction Générale de l'Armement (DGA, France) supported by the PhD program FdV (Frontières du Vivant) to Isabelle Dautriche. We thank Isabelle Brunet for the recruitment, Michel Dutat for the technical support, and Hernan Anllo for his puppet mastery skill. We are grateful to the families that participated in this study. We also thank two anonymous reviewers for their comments on an earlier draft of this manuscript.
  • Declerck, T., Cunningham, H., Saggion, H., Kuper, J., Reidsma, D., & Wittenburg, P. (2003). MUMIS - Advanced information extraction for multimedia indexing and searching digital media - Processing for multimedia interactive services. 4th European Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), 553-556.
  • Dediu, D., & Graham, S. A. (2014). Genetics and Language. In M. Aronoff (Ed.), Oxford Bibliographies in Linguistics. New York: Oxford University Press. Retrieved from http://www.oxfordbibliographies.com/view/document/obo-9780199772810/obo-9780199772810-0184.xml.

    Abstract

    This article surveys what is currently known about the complex interplay between genetics and the language sciences. It focuses not only on the genetic architecture of language and speech, but also on their interactions on the cultural and evolutionary timescales. Given the complexity of these issues and their current state of flux and high dynamism, this article surveys the main findings and topics of interest while also briefly introducing the main relevant methods, thus allowing the interested reader to fully appreciate and understand them in their proper context. Of course, not all the relevant publications and resources are mentioned, but this article aims to select the most relevant, promising, or accessible for nonspecialists.

  • Dediu, D. (2014). Language and biology: The multiple interactions between genetics and language. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 686-707). Cambridge: Cambridge University Press.
  • Dediu, D., & Levinson, S. C. (2014). Language and speech are old: A review of the evidence and consequences for modern linguistic diversity. In E. A. Cartmill, S. G. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 421-422). Singapore: World Scientific.
  • Dediu, D., & Levinson, S. C. (2014). The time frame of the emergence of modern language and its implications. In D. Dor, C. Knight, & J. Lewis (Eds.), The social origins of language (pp. 184-195). Oxford: Oxford University Press.
  • Defina, R. (2014). Arbil: Free tool for creating, editing and searching metadata. Language Documentation and Conservation, 8, 307-314.
  • Den Os, E., & Boves, L. (2004). Natural multimodal interaction for design applications. In P. Cunningham (Ed.), Adoption and the knowledge economy (pp. 1403-1410). Amsterdam: IOS Press.
  • Deriziotis, P., O'Roak, B. J., Graham, S. A., Estruch, S. B., Dimitropoulou, D., Bernier, R. A., Gerdts, J., Shendure, J., Eichler, E. E., & Fisher, S. E. (2014). De novo TBR1 mutations in sporadic autism disrupt protein functions. Nature Communications, 5: 4954. doi:10.1038/ncomms5954.

    Abstract

    Next-generation sequencing recently revealed that recurrent disruptive mutations in a few genes may account for 1% of sporadic autism cases. Coupling these novel genetic data to empirical assays of protein function can illuminate crucial molecular networks. Here we demonstrate the power of the approach, performing the first functional analyses of TBR1 variants identified in sporadic autism. De novo truncating and missense mutations disrupt multiple aspects of TBR1 function, including subcellular localization, interactions with co-regulators and transcriptional repression. Missense mutations inherited from unaffected parents did not disturb function in our assays. We show that TBR1 homodimerizes, that it interacts with FOXP2, a transcription factor implicated in speech/language disorders, and that this interaction is disrupted by pathogenic mutations affecting either protein. These findings support the hypothesis that de novo mutations in sporadic autism have severe functional consequences. Moreover, they uncover neurogenetic mechanisms that bridge different neurodevelopmental disorders involving language deficits.
  • Deriziotis, P., Graham, S. A., Estruch, S. B., & Fisher, S. E. (2014). Investigating protein-protein interactions in live cells using Bioluminescence Resonance Energy Transfer. Journal of visualized experiments, 87: e51438. doi:10.3791/51438.

    Abstract

    Assays based on Bioluminescence Resonance Energy Transfer (BRET) provide a sensitive and reliable means to monitor protein-protein interactions in live cells. BRET is the non-radiative transfer of energy from a ‘donor’ luciferase enzyme to an ‘acceptor’ fluorescent protein. In the most common configuration of this assay, the donor is Renilla reniformis luciferase and the acceptor is Yellow Fluorescent Protein (YFP). Because the efficiency of energy transfer is strongly distance-dependent, observation of the BRET phenomenon requires that the donor and acceptor be in close proximity. To test for an interaction between two proteins of interest in cultured mammalian cells, one protein is expressed as a fusion with luciferase and the second as a fusion with YFP. An interaction between the two proteins of interest may bring the donor and acceptor sufficiently close for energy transfer to occur. Compared to other techniques for investigating protein-protein interactions, the BRET assay is sensitive, requires little hands-on time and few reagents, and is able to detect interactions which are weak, transient, or dependent on the biochemical environment found within a live cell. It is therefore an ideal approach for confirming putative interactions suggested by yeast two-hybrid or mass spectrometry proteomics studies, and in addition it is well-suited for mapping interacting regions, assessing the effect of post-translational modifications on protein-protein interactions, and evaluating the impact of mutations identified in patient DNA.

  • Deutsch, W., & Frauenfelder, U. (1985). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.6 1985. Nijmegen: MPI for Psycholinguistics.
  • Devanna, P., & Vernes, S. C. (2014). A direct molecular link between the autism candidate gene RORa and the schizophrenia candidate MIR137. Scientific Reports, 4: 3994. doi:10.1038/srep03994.

    Abstract

    Retinoic acid-related orphan receptor alpha gene (RORa) and the microRNA MIR137 have both recently been identified as novel candidate genes for neuropsychiatric disorders. RORa encodes a ligand-dependent orphan nuclear receptor that acts as a transcriptional regulator and miR-137 is a brain enriched small non-coding RNA that interacts with gene transcripts to control protein levels. Given the mounting evidence for RORa in autism spectrum disorders (ASD) and MIR137 in schizophrenia and ASD, we investigated if there was a functional biological relationship between these two genes. Herein, we demonstrate that miR-137 targets the 3'UTR of RORa in a site specific manner. We also provide further support for MIR137 as an autism candidate by showing that a large number of previously implicated autism genes are also putatively targeted by miR-137. This work supports the role of MIR137 as an ASD candidate and demonstrates a direct biological link between these previously unrelated autism candidate genes
  • Devanna, P., Middelbeek, J., & Vernes, S. C. (2014). FOXP2 drives neuronal differentiation by interacting with retinoic acid signaling pathways. Frontiers in Cellular Neuroscience, 8: 305. doi:10.3389/fncel.2014.00305.

    Abstract

    FOXP2 was the first gene shown to cause a Mendelian form of speech and language disorder. Although developmentally expressed in many organs, loss of a single copy of FOXP2 leads to a phenotype that is largely restricted to orofacial impairment during articulation and linguistic processing deficits. Why perturbed FOXP2 function affects specific aspects of the developing brain remains elusive. We investigated the role of FOXP2 in neuronal differentiation and found that FOXP2 drives molecular changes consistent with neuronal differentiation in a human model system. We identified a network of FOXP2 regulated genes related to retinoic acid signaling and neuronal differentiation. FOXP2 also produced phenotypic changes associated with neuronal differentiation including increased neurite outgrowth and reduced migration. Crucially, cells expressing FOXP2 displayed increased sensitivity to retinoic acid exposure. This suggests a mechanism by which FOXP2 may be able to increase the cellular differentiation response to environmental retinoic acid cues for specific subsets of neurons in the brain. These data demonstrate that FOXP2 promotes neuronal differentiation by interacting with the retinoic acid signaling pathway and regulates key processes required for normal circuit formation such as neuronal migration and neurite outgrowth. In this way, FOXP2, which is found only in specific subpopulations of neurons in the brain, may drive precise neuronal differentiation patterns and/or control localization and connectivity of these FOXP2 positive cells
  • Dietrich, R., & Klein, W. (1986). Simple language. Interdisciplinary Science Reviews, 11(2), 110-117.
  • Dijkstra, T., & Kempen, G. (1984). Taal in uitvoering: Inleiding tot de psycholinguistiek. Groningen: Wolters-Noordhoff.
  • Dimroth, C., & Starren, M. (Eds.). (2003). Information structure and the dynamics of language acquisition. Amsterdam: John Benjamins.

    Abstract

    The papers in this volume focus on the impact of information structure on language acquisition, thereby taking different linguistic approaches into account. They start from an empirical point of view, and examine data from natural first and second language acquisition, which cover a wide range of varieties, from early learner language to native speaker production and from gesture to Creole prototypes. The central theme is the interplay between principles of information structure and linguistic structure and its impact on the functioning and development of the learner's system. The papers examine language-internal explanatory factors and in particular the communicative and structural forces that push and shape the acquisition process, and its outcome. On the theoretical level, the approach adopted appeals both to formal and communicative constraints on a learner’s language in use. Two empirical domains provide a 'testing ground' for the respective weight of grammatical versus functional determinants in the acquisition process: (1) the expression of finiteness and scope relations at the utterance level and (2) the expression of anaphoric relations at the discourse level.
  • Dimroth, C., Gretsch, P., Jordens, P., Perdue, C., & Starren, M. (2003). Finiteness in Germanic languages: A stage-model for first and second language development. In C. Dimroth, & M. Starren (Eds.), Information structure and the dynamics of language acquisition (pp. 65-94). Amsterdam: Benjamins.
  • Dimroth, C. (2004). Fokuspartikeln und Informationsgliederung im Deutschen. Tübingen: Stauffenburg.
  • Dimroth, C., & Starren, M. (2003). Introduction. In C. Dimroth, & M. Starren (Eds.), Information structure and the dynamics of language acquisition (pp. 1-14). Amsterdam: John Benjamins.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dingemanse, M., & Enfield, N. J. (2014). Ongeschreven regels van de taal. Psyche en Brein, 6, 6-11.

    Abstract

    If you listen to conversations around the world, you notice that human dialogue follows universal rules. These rules guide and enrich our social interaction.
  • Dingemanse, M., & Floyd, S. (2014). Conversation across cultures. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 447-480). Cambridge: Cambridge University Press.
  • Dingemanse, M., Torreira, F., & Enfield, N. J. (2014). Conversational infrastructure and the convergent evolution of linguistic items. In E. A. Cartmill, S. G. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 425-426). Singapore: World Scientific.
  • Dingemanse, M., Blythe, J., & Dirksmeyer, T. (2014). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. Studies in Language, 38, 5-43. doi:10.1075/sl.38.1.01din.

    Abstract

    In conversation, people have to deal with problems of speaking, hearing, and understanding. We report on a cross-linguistic investigation of the conversational structure of other-initiated repair (also known as collaborative repair, feedback, requests for clarification, or grounding sequences). We take stock of formats for initiating repair across languages (comparable to English huh?, who?, y’mean X?, etc.) and find that different languages make available a wide but remarkably similar range of linguistic resources for this function. We exploit the patterned variation as evidence for several underlying concerns addressed by repair initiation: characterising trouble, managing responsibility, and handling knowledge. The concerns do not always point in the same direction and thus provide participants in interaction with alternative principles for selecting one format over possible others. By comparing conversational structures across languages, this paper contributes to pragmatic typology: the typology of systems of language use and the principles that shape them
  • Dingemanse, M. (2014). Making new ideophones in Siwu: Creative depiction in conversation. Pragmatics and Society, 5(3), 384-405. doi:10.1075/ps.5.3.04din.

    Abstract

    Ideophones are found in many of the world’s languages. Though they are a major word class on a par with nouns and verbs, their origins are ill-understood, and the question of ideophone creation has been a source of controversy. This paper studies ideophone creation in naturally occurring speech. New, unconventionalised ideophones are identified using native speaker judgements, and are studied in context to understand the rules and regularities underlying their production and interpretation. People produce and interpret new ideophones with the help of the semiotic infrastructure that underlies the use of existing ideophones: foregrounding frames certain stretches of speech as depictive enactments of sensory imagery, and various types of iconicity link forms and meanings. As with any creative use of linguistic resources, context and common ground also play an important role in supporting rapid ‘good enough’ interpretations of new material. The making of new ideophones is a special case of a more general phenomenon of creative depiction: the art of presenting verbal material in such a way that the interlocutor recognises and interprets it as a depiction.
  • Dingemanse, M., & Enfield, N. J. (2014). Let's talk: Universal social rules underlie languages. Scientific American Mind, 25, 64-69. doi:10.1038/scientificamericanmind0914-64.

    Abstract

    Recent developments in the science of language signal the emergence of a new paradigm for language study: a social approach to the fundamental questions of what language is like, how much languages really have in common, and why only our species has it. The key to these developments is a new appreciation of the need to study everyday spoken language, with all its complications and ‘imperfections’, in a systematic way. The work reviewed in this article —on turn-taking, timing, and other-initiated repair in languages around the world— has important implications for our understanding of human sociality and sheds new light on the social shape of language. For the first time in the history of linguistics, we are no longer tied to what can be written down or thought up. Rather, we look at language as a biologist would: as it occurs in nature.
  • Dingemanse, M., Verhoef, T., & Roberts, S. G. (2014). The role of iconicity in the cultural evolution of communicative signals. In B. De Boer, & T. Verhoef (Eds.), Proceedings of Evolang X, Workshop on Signals, Speech, and Signs (pp. 11-15).
  • Dolscheid, S., Hunnius, S., Casasanto, D., & Majid, A. (2014). Prelinguistic infants are sensitive to space-pitch associations found across cultures. Psychological Science, 25(6), 1256-1261. doi:10.1177/0956797614528521.

    Abstract

    People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be “high” or “low” (i.e., height-pitch association), whereas in other languages, pitches are described as “thin” or “thick” (i.e., thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings means that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants.
  • Dolscheid, S., Willems, R. M., Hagoort, P., & Casasanto, D. (2014). The relation of space and musical pitch in the brain. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 421-426). Austin, Tx: Cognitive Science Society.

    Abstract

    Numerous experiments show that space and musical pitch are closely linked in people's minds. However, the exact nature of space-pitch associations and their neuronal underpinnings are not well understood. In an fMRI experiment we investigated different types of spatial representations that may underlie musical pitch. Participants judged stimuli that varied in spatial height in both the visual and tactile modalities, as well as auditory stimuli that varied in pitch height. In order to distinguish between unimodal and multimodal spatial bases of musical pitch, we examined whether pitch activations were present in modality-specific (visual or tactile) versus multimodal (visual and tactile) regions active during spatial height processing. Judgments of musical pitch were found to activate unimodal visual areas, suggesting that space-pitch associations may involve modality-specific spatial representations, supporting a key assumption of embodied theories of metaphorical mental representation.
  • Drolet, M., & Kempen, G. (1985). IPG: A cognitive approach to sentence generation. CCAI: The Journal for the Integrated Study of Artificial Intelligence, Cognitive Science and Applied Epistemology, 2, 37-61.
  • Dronkers, N. F., Wilkins, D. P., Van Valin Jr., R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145-177. doi:10.1016/j.cognition.2003.11.002.

    Abstract

    The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which lesioned brain areas might affect language comprehension. Sixty-four chronic left hemisphere stroke patients were evaluated on 11 subtests of the Curtiss–Yamada Comprehensive Language Evaluation – Receptive (CYCLE-R; Curtiss, S., & Yamada, J. (1988). Curtiss–Yamada Comprehensive Language Evaluation. Unpublished test, UCLA). Eight right hemisphere stroke patients and 15 neurologically normal older controls also participated. Patients were required to select a single line drawing from an array of three or four choices that best depicted the content of an auditorily-presented sentence. Patients' lesions obtained from structural neuroimaging were reconstructed onto templates and entered into a voxel-based lesion-symptom mapping (VLSM; Bates, E., Wilson, S., Saygin, A. P., Dick, F., Sereno, M., Knight, R. T., & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5), 448–450.) analysis along with the behavioral data. VLSM is a brain–behavior mapping technique that evaluates the relationships between areas of injury and behavioral performance in all patients on a voxel-by-voxel basis, similar to the analysis of functional neuroimaging data. Results indicated that lesions to five left hemisphere brain regions affected performance on the CYCLE-R, including the posterior middle temporal gyrus and underlying white matter, the anterior superior temporal gyrus, the superior temporal sulcus and angular gyrus, mid-frontal cortex in Brodmann's area 46, and Brodmann's area 47 of the inferior frontal gyrus. Lesions to Broca's and Wernicke's areas were not found to significantly alter language comprehension on this particular measure. Further analysis suggested that the middle temporal gyrus may be more important for comprehension at the word level, while the other regions may play a greater role at the level of the sentence. These results are consistent with those seen in recent functional neuroimaging studies and offer complementary data in the effort to understand the brain areas underlying language comprehension.
  • Drozd, K. F. (1998). No as a determiner in child English: A summary of categorical evidence. In A. Sorace, C. Heycock, & R. Shillcock (Eds.), Proceedings of the Gala '97 Conference on Language Acquisition (pp. 34-39). Edinburgh, UK: Edinburgh University Press.

    Abstract

    This paper summarizes the results of a descriptive syntactic category analysis of child English no which reveals that young children use and represent no as a determiner and negatives like no pen as NPs, contra standard analyses.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2014). Phoneme category retuning in a non-native language. In Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 553-557).

    Abstract

    Previous studies have demonstrated that native listeners modify their interpretation of a speech sound when a talker produces an ambiguous sound in order to quickly tune into a speaker, but there is hardly any evidence that non-native listeners employ a similar mechanism when encountering ambiguous pronunciations. So far, one study demonstrated this lexically-guided perceptual learning effect for non-natives, using phoneme categories similar in the native language of the listeners and the non-native language of the stimulus materials. The present study investigates the question whether phoneme category retuning is possible in a non-native language for a contrast, /l/-/r/, which is phonetically differently embedded in the native (Dutch) and non-native (English) languages involved. Listening experiments indeed showed a lexically-guided perceptual learning effect. Assuming that Dutch listeners have different phoneme categories for the native Dutch and non-native English /r/, as marked differences between the languages exist for /r/, these results, for the first time, seem to suggest that listeners are not only able to retune their native phoneme categories but also their non-native phoneme categories to include ambiguous pronunciations.
  • Drude, S., Trilsbeek, P., Sloetjes, H., & Broeder, D. (2014). Best practices in the creation, archiving and dissemination of speech corpora at the Language Archive. In S. Ruhi, M. Haugh, T. Schmidt, & K. Wörner (Eds.), Best Practices for Spoken Corpora in Linguistic Research (pp. 183-207). Newcastle upon Tyne: Cambridge Scholars Publishing.
  • Drude, S. (2003). Advanced glossing: A language documentation format and its implementation with Shoebox. In Proceedings of the 2002 International Conference on Language Resources and Evaluation (LREC 2002). Paris: ELRA.

    Abstract

    This paper presents Advanced Glossing, a proposal for a general glossing format designed for language documentation, and a specific setup for the Shoebox program that implements Advanced Glossing to a large extent. Advanced Glossing (AG) goes beyond the traditional Interlinear Morphemic Translation, keeping syntactic and morphological information apart from each other in separate glossing tables. AG provides specific lines for different kinds of annotation – phonetic, phonological, orthographical, prosodic, categorial, structural, relational, and semantic – and it allows for gradual and successive, incomplete, and partial filling in cases where some information is irrelevant, unknown or uncertain. The implementation of AG in Shoebox sets up several databases. Each documented text is represented as a file of syntactic glossings. The morphological glossings are kept in a separate database. As an additional feature, interaction with lexical databases is possible. The implementation makes use of the interlinearizing automatism provided by Shoebox, thus obtaining the table format for the alignment of lines in cells, and for semi-automatic filling-in of information in glossing tables which has been extracted from databases.
  • Drude, S. (2003). Digitizing and annotating texts and field recordings in the Awetí project. In Proceedings of the EMELD Language Digitization Project Conference 2003. Workshop on Digitizing and Annotating Text and Field Recordings, LSA Institute, Michigan State University, July 11th -13th.

    Abstract

    Given that several initiatives worldwide currently explore the new field of documentation of endangered languages, the E-MELD project proposes to survey and unite procedures, techniques and results in order to achieve its main goal, "the formulation and promulgation of best practice in linguistic markup of texts and lexicons". In this context, this year's workshop deals with the processing of recorded texts. I assume the most valuable contribution I could make to the workshop is to show the procedures and methods used in the Awetí Language Documentation Project. The procedures applied in the Awetí Project are not necessarily representative of all the projects in the DOBES program, and they may very well fall short in several respects of being best practice, but I hope they might provide a good and concrete starting point for comparison, criticism and further discussion. The procedures to be presented include: taping with digital devices; digitizing (preliminarily in the field, later definitively by the TIDEL team at the Max Planck Institute in Nijmegen); segmenting and transcribing, using the Transcriber computer program; translating (on paper, or while transcribing); adding more specific annotation, using the Shoebox program; and converting the annotation to the ELAN format developed by the TIDEL team, and doing annotation with ELAN. Focus will be on the different types of annotation. In particular, I will present, justify and discuss Advanced Glossing, a text annotation format developed by H.-H. Lieb and myself and designed for language documentation. It will be shown how Advanced Glossing can be applied using the Shoebox program. The Shoebox setup used in the Awetí Project will be shown in greater detail, including lexical databases and semi-automatic interaction between different database types (jumping, interlinearization). (Freie Universität Berlin and Museu Paraense Emílio Goeldi, with funding from the Volkswagen Foundation.)
  • Drude, S. (2014). Reduplication as a tool for morphological and phonological analysis in Awetí. In G. G. Gómez, & H. Van der Voort (Eds.), Reduplication in Indigenous languages of South America (pp. 185-216). Leiden: Brill.
  • Drude, S., Broeder, D., & Trilsbeek, P. (2014). The Language Archive and its solutions for sustainable endangered languages corpora. Book 2.0, 4, 5-20. doi:10.1386/btwo.4.1-2.5_1.

    Abstract

    Since the late 1990s, the technical group at the Max-Planck-Institute for Psycholinguistics has worked on solutions for important challenges in building sustainable data archives, in particular how to guarantee long-term availability of digital research data for future research. The support for the well-known DOBES (Documentation of Endangered Languages) programme has greatly inspired and advanced this work, and led to the ongoing development of a whole suite of tools for annotating, cataloguing and archiving multi-media data. At the core of the LAT (Language Archiving Technology) tools is the IMDI metadata schema, now being integrated into a larger network of digital resources in the European CLARIN project. The multi-media annotator ELAN (with its web-based cousin ANNEX) is now well known, and not only among documentary linguists. We aim at presenting an overview of the solutions, both achieved and in development, for creating and exploiting sustainable digital data, in particular in the area of documenting languages and cultures, and their interfaces with other related developments.
  • Drude, S. (2004). Wörterbuchinterpretation: Integrative Lexikographie am Beispiel des Guaraní. Tübingen: Niemeyer.

    Abstract

    This study provides an answer to the question of how dictionaries should be read. For this purpose, articles taken from an outline for a Guaraní-German dictionary geared to established lexicographic practice are provided with standardized interpretations. Each article is systematically assigned a formal sentence making its meaning explicit both for content words (including polysemes) and functional words or affixes. Integrative Linguistics proves its theoretical and practical value both for the description of Guaraní (indigenous Indian language spoken in Paraguay, Argentina and Brazil) and in metalexicographic terms.
  • Duffield, N., & Matsuo, A. (2003). Factoring out the parallelism effect in ellipsis: An interactional approach? In J. Chilar, A. Franklin, D. Keizer, & I. Kimbara (Eds.), Proceedings of the 39th Annual Meeting of the Chicago Linguistic Society (CLS) (pp. 591-603). Chicago: Chicago Linguistic Society.

    Abstract

    Traditionally, three standard assumptions have been made about the Parallelism Effect on VP-ellipsis: that the effect is categorical, that it applies asymmetrically, and that it is due solely to syntactic factors. Based on the results of a series of experiments involving online and offline tasks, it will be argued that the Parallelism Effect is instead noncategorical and interactional. The factors investigated include construction type, conceptual and morpho-syntactic recoverability, finiteness and anaphor type (to test VP-anaphora). The results show that parallelism is gradient rather than categorical, affects both VP-ellipsis and VP-anaphora, and is influenced by both structural and non-structural factors.
  • Dunn, M. (2003). Pioneers of Island Melanesia project. Oceania Newsletter, 30/31, 1-3.
  • Dunn, M. (2014). [Review of the book Evolutionary Linguistics by April McMahon and Robert McMahon]. American Anthropologist, 116(3), 690-691.
  • Dunn, M. (2014). Gender determined dialect variation. In G. G. Corbett (Ed.), The expression of gender (pp. 39-68). Berlin: De Gruyter.
  • Dunn, M. (2014). Language phylogenies. In C. Bowern, & B. Evans (Eds.), The Routledge handbook of historical linguistics (pp. 190-211). London: Routledge.
  • Dunn, M., & Terrill, A. (2004). Lexical comparison between Papuan languages: Inland bird and tree species. In A. Majid (Ed.), Field Manual Volume 9 (pp. 65-69). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492942.

    Abstract

    The Pioneers project seeks to uncover relationships between the Papuan languages of Island Melanesia. One basic way to uncover linguistic relationships, whether due to contact or common descent, is through lexical comparison. We have seen very few shared words between our Papuan languages and any other languages, either Oceanic or Papuan, and most of the words that are shared are common borrowings from Oceanic languages. This task is aimed at enabling fieldworkers to collect terms for inland bird and tree species. In the past it has proved very difficult for non-experts to identify plant and bird species, so the task consists of a booklet of colour pictures of some of the more common species, with information on the range and habits of each species, as well as some information on their cultural uses, which should enable better identification. It is intended that fieldworkers will show this booklet to consultants and use it as an elicitation aid.
  • Dunn, M., Levinson, S. C., Lindström, E., Reesink, G., & Terrill, A. (2003). Island Melanesia elicitation materials. Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.885547.

    Abstract

    The Island Melanesia project was initiated to collect data on the little-known Papuan languages of Island Melanesia, and to explore the origins of and relationships between these languages. The project materials from the 2003 field season focus on language related to cultural domains (e.g., material culture) and on targeted grammatical description. Five tasks are included: Proto-Oceanic lexicon, Grammatical questionnaire and lexicon, Kinship questionnaire, Domains of likely pre-Austronesian terminology, and Botanical collection questionnaire.
  • Eaves, L. J., St Pourcain, B., Smith, G. D., York, T. P., & Evans, D. M. (2014). Resolving the Effects of Maternal and Offspring Genotype on Dyadic Outcomes in Genome Wide Complex Trait Analysis (“M-GCTA”). Behavior Genetics, 44(5), 445-455. doi:10.1007/s10519-014-9666-6.

    Abstract

    Genome wide complex trait analysis (GCTA) is extended to include environmental effects of the maternal genotype on offspring phenotype (“maternal effects”, M-GCTA). The model includes parameters for the direct effects of the offspring genotype, maternal effects and the covariance between direct and maternal effects. Analysis of simulated data, conducted in OpenMx, confirmed that model parameters could be recovered by full information maximum likelihood (FIML) and evaluated the biases that arise in conventional GCTA when indirect genetic effects are ignored. Estimates derived from FIML in OpenMx showed very close agreement to those obtained by restricted maximum likelihood using the published algorithm for GCTA. The method was also applied to illustrative perinatal phenotypes from ~4,000 mother-offspring pairs from the Avon Longitudinal Study of Parents and Children. The relative merits of extended GCTA in contrast to quantitative genetic approaches based on analyzing the phenotypic covariance structure of kinships are considered.
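    As a rough sketch of the kind of model the abstract describes (the notation below is illustrative and is not taken from the paper), the phenotypic covariance in such a maternal-effects extension of GCTA can be written as

        \[
        \operatorname{Var}(\mathbf{y}) \;=\;
        \sigma^{2}_{C}\,\mathbf{G}_{C}
        \;+\; \sigma^{2}_{M}\,\mathbf{G}_{M}
        \;+\; \sigma_{CM}\,\bigl(\mathbf{G}_{CM} + \mathbf{G}_{CM}^{\top}\bigr)
        \;+\; \sigma^{2}_{E}\,\mathbf{I},
        \]

    where \(\mathbf{G}_{C}\) and \(\mathbf{G}_{M}\) are genomic relationship matrices computed from the offspring and maternal genotypes, \(\mathbf{G}_{CM}\) is their cross-relationship matrix, \(\sigma^{2}_{C}\) captures the direct effects of the offspring genotype, \(\sigma^{2}_{M}\) the maternal genetic effect on the offspring phenotype, \(\sigma_{CM}\) the covariance between direct and maternal effects, and \(\sigma^{2}_{E}\) the residual variance.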
  • Edlinger, G., Bastiaansen, M. C. M., Brunia, C., Neuper, C., & Pfurtscheller, G. (1999). Cortical oscillatory activity assessed by combined EEG and MEG recordings and high resolution ERD methods. Biomedizinische Technik, 44(2), 131-134.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1998). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. In Ethnologie - Humanethologische Begleitpublikationen von I. Eibl-Eibesfeldt und Mitarbeitern. Sammelband I, 1985-1987. Göttingen: Institut für den Wissenschaftlichen Film.
  • Eisenbeiss, S., McGregor, B., & Schmidt, C. M. (1999). Story book stimulus for the elicitation of external possessor constructions and dative constructions ('the circle of dirt'). In D. Wilkins (Ed.), Manual for the 1999 Field Season (pp. 140-144). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.3002750.

    Abstract

    How involved in an event is a person that possesses one of the event participants? Some languages can treat such “external possessors” as very closely involved, even marking them on the verb along with core roles such as subject and object. Other languages only allow possessors to be expressed as non-core participants. This task explores possibilities for the encoding of possessors and other related roles such as beneficiaries. The materials consist of a sequence of thirty drawings designed to elicit target construction types.

    Additional information

    1999_Story_book_booklet.pdf
  • Emmorey, K., & Ozyurek, A. (2014). Language in our hands: Neural underpinnings of sign language and co-speech gesture. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 657-666). Cambridge, MA: MIT Press.
  • Enfield, N. J. (2003). Producing and editing diagrams using co-speech gesture: Spatializing non-spatial relations in explanations of kinship in Laos. Journal of Linguistic Anthropology, 13(1), 7-50. doi:10.1525/jlin.2003.13.1.7.

    Abstract

    This article presents a description of two sequences of talk by urban speakers of Lao (a southwestern Tai language spoken in Laos) in which co-speech gesture plays a central role in explanations of kinship relations and terminology. The speakers spontaneously use hand gestures and gaze to spatially diagram relationships that have no inherent spatial structure. The descriptive sections of the article are prefaced by a discussion of the semiotic complexity of illustrative gestures and gesture diagrams. Gestured signals feature iconic, indexical, and symbolic components, usually in combination, as well as using motion and three-dimensional space to convey meaning. Such diagrams show temporal persistence and structural integrity despite having been projected in midair by evanescent signals (i.e., hand movements and directed gaze). Speakers sometimes need or want to revise these spatial representations without destroying their structural integrity. The need to "edit" gesture diagrams involves such techniques as hold-and-drag, hold-and-work-with-free-hand, reassignment-of-old-chunk-to-new-chunk, and move-body-into-new-space.
  • Enfield, N. J. (2004). On linear segmentation and combinatorics in co-speech gesture: A symmetry-dominance construction in Lao fish trap descriptions. Semiotica, 149(1/4), 57-123. doi:10.1515/semi.2004.038.
  • Enfield, N. J. (2003). The definition of WHAT-d'you-call-it: Semantics and pragmatics of 'recognitional deixis'. Journal of Pragmatics, 35(1), 101-117. doi:10.1016/S0378-2166(02)00066-8.

    Abstract

    Words such as what-d'you-call-it raise issues at the heart of the semantics/pragmatics interface. Expressions of this kind are conventionalised and have meanings which, while very general, are explicitly oriented to the interactional nature of the speech context, drawing attention to a speaker's assumption that the listener can figure out what the speaker is referring to. The details of such meanings can account for functional contrast among similar expressions, in a single language as well as cross-linguistically. The English expressions what-d'you-call-it and you-know-what are compared, along with a comparable Lao expression meaning, roughly, ‘that thing’. Proposed definitions of the meanings of these expressions account for their different patterns of use. These definitions include reference to the speech act participants, a point which supports the view that what-d'you-call-it words can be considered deictic. Issues arising from the descriptive section of this paper include the question of how such terms are derived, as well as their degree of conventionality.
  • Enfield, N. J., Levinson, S. C., De Ruiter, J. P., & Stivers, T. (2004). Building a corpus of multimodal interaction in your field site. In A. Majid (Ed.), Field Manual Volume 9 (pp. 32-36). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506951.

    Abstract

    This Field Manual entry has been superseded by the 2007 version:
    https://doi.org/10.17617/2.468728
