Publications

  • Crago, M. B., & Allen, S. E. M. (1998). Acquiring Inuktitut. In O. L. Taylor, & L. Leonard (Eds.), Language Acquisition Across North America: Cross-Cultural And Cross-Linguistic Perspectives (pp. 245-279). San Diego, CA, USA: Singular Publishing Group, Inc.
  • Crago, M. B., & Allen, S. E. M. (1997). Linguistic and cultural aspects of simplicity and complexity in Inuktitut child directed speech. In E. Hughes, M. Hughes, & A. Greenhill (Eds.), Proceedings of the 21st annual Boston University Conference on Language Development (pp. 91-102).
  • Crago, M. B., Allen, S. E. M., & Pesco, D. (1998). Issues of Complexity in Inuktitut and English Child Directed Speech. In Proceedings of the twenty-ninth Annual Stanford Child Language Research Forum (pp. 37-46).
  • Crago, M. B., Allen, S. E. M., & Hough-Eyamie, W. P. (1997). Exploring innateness through cultural and linguistic variation. In M. Gopnik (Ed.), The inheritance and innateness of grammars (pp. 70-90). New York City, NY, USA: Oxford University Press, Inc.
  • Crago, M. B., Chen, C., Genesee, F., & Allen, S. E. M. (1998). Power and deference. Journal for a Just and Caring Education, 4(1), 78-95.
  • Cristia, A., Dupoux, E., Hakuno, Y., Lloyd-Fox, S., Schuetze, M., Kivits, J., Bergvelt, T., Van Gelder, M., Filippin, L., Charron, S., & Minagawa-Kawai, Y. (2013). An online database of infant functional Near InfraRed Spectroscopy studies: A community-augmented systematic review. PLoS One, 8(3): e58906. doi:10.1371/journal.pone.0058906.

    Abstract

    Until recently, imaging the infant brain was very challenging. Functional Near InfraRed Spectroscopy (fNIRS) is a promising, relatively novel technique, whose use is rapidly expanding. As an emergent field, it is particularly important to share methodological knowledge to ensure replicable and robust results. In this paper, we present a community-augmented database which will facilitate precisely this exchange. We tabulated articles and theses reporting empirical fNIRS research carried out on infants below three years of age along several methodological variables. The resulting spreadsheet has been uploaded in a format allowing individuals to continue adding new results, and download the most recent version of the table. Thus, this database is ideal to carry out systematic reviews. We illustrate its academic utility by focusing on the factors affecting three key variables: infant attrition, the reliability of oxygenated and deoxygenated responses, and signal-to-noise ratios. We then discuss strengths and weaknesses of the DBIfNIRS, and conclude by suggesting a set of simple guidelines aimed to facilitate methodological convergence through the standardization of reports.
  • Cristia, A. (2013). Input to language: The phonetics of infant-directed speech. Language and Linguistics Compass, 7, 157-170. doi:10.1111/lnc3.12015.

    Abstract

    Over the first year of life, infant perception changes radically as the child learns the phonology of the ambient language from the speech she is exposed to. Since infant-directed speech attracts the child's attention more than other registers, it is necessary to describe that input in order to understand language development, and to address questions of learnability. In this review, evidence from corpora analyses, experimental studies, and observational paradigms is brought together to outline the first comprehensive empirical picture of infant-directed speech and its effects on language acquisition. The ensuing landscape suggests that infant-directed speech provides an emotionally and linguistically rich input to language acquisition.

    Additional information

    Cristia_Suppl_Material.xls
  • Cristia, A., Mielke, J., Daland, R., & Peperkamp, S. (2013). Similarity in the generalization of implicitly learned sound patterns. Journal of Laboratory Phonology, 4(2), 259-285.

    Abstract

    A core property of language is the ability to generalize beyond observed examples. In two experiments, we explore how listeners generalize implicitly learned sound patterns to new nonwords and to new sounds, with the goal of shedding light on how similarity affects treatment of potential generalization targets. During the exposure phase, listeners heard nonwords whose onset consonant was restricted to a subset of a natural class (e.g., /d g v z Z/). During the test phase, listeners were presented with new nonwords and asked to judge how frequently they had been presented before; some of the test items began with a consonant from the exposure set (e.g., /d/), and some began with novel consonants with varying relations to the exposure set (e.g., /b/, which is highly similar to all onsets in the training set; /t/, which is highly similar to one of the training onsets; and /p/, which is less similar than the other two). The exposure onset was rated most frequent, indicating that participants encoded onset attestation in the exposure set, and generalized it to new nonwords. Participants also rated novel consonants as somewhat frequent, indicating generalization to onsets that did not occur in the exposure phase. While generalization could be accounted for in terms of featural distance, it was insensitive to natural class structure. Generalization to new sounds was predicted better by models requiring prior linguistic knowledge (either traditional distinctive features or articulatory phonetic information) than by a model based on a linguistically naïve measure of acoustic similarity.
  • Cronin, K. A., Kurian, A. V., & Snowdon, C. T. (2005). Cooperative problem solving in a cooperatively breeding primate. Animal Behaviour, 69, 133-142. doi:10.1016/j.anbehav.2004.02.024.

    Abstract

    We investigated cooperative problem solving in unrelated pairs of the cooperatively breeding cottontop tamarin, Saguinus oedipus, to assess the cognitive basis of cooperative behaviour in this species and to compare abilities with other apes and monkeys. A transparent apparatus was used that required extension of two handles at opposite ends of the apparatus for access to rewards. Resistance was applied to both handles so that two tamarins had to act simultaneously in order to receive rewards. In contrast to several previous studies of cooperation, both tamarins received rewards as a result of simultaneous pulling. The results from two experiments indicated that the cottontop tamarins (1) had a much higher success rate and efficiency of pulling than many of the other species previously studied, (2) adjusted pulling behaviour to the presence or absence of a partner, and (3) spontaneously developed sustained pulling techniques to solve the task. These findings suggest that cottontop tamarins understand the role of the partner in this cooperative task, a cognitive ability widely ascribed only to great apes. The cooperative social system of tamarins, the intuitive design of the apparatus, and the provision of rewards to both participants may explain the performance of the tamarins.
  • Cronin, K. A. (2013). [Review of the book Chimpanzees of the Lakeshore: Natural history and culture at Mahale by Toshisada Nishida]. Animal Behaviour, 85, 685-686. doi:10.1016/j.anbehav.2013.01.001.

    Abstract

    First paragraph: Motivated by his quest to characterize the society of the last common ancestor of humans and other great apes, Toshisada Nishida set out as a graduate student to the Mahale Mountains on the eastern shore of Lake Tanganyika, Tanzania. This book is a story of his 45 years with the Mahale chimpanzees, or as he calls it, their ethnography. Beginning with his accounts of meeting the Tongwe people and the challenges of provisioning the chimpanzees for habituation, Nishida reveals how he slowly unravelled the unit group and community basis of chimpanzee social organization. The book begins and ends with a feeling of chronological order, starting with his arrival at Mahale and ending with an eye towards the future, with concrete recommendations for protecting wild chimpanzees. However, the bulk of the book is topically organized with chapters on feeding behaviour, growth and development, play and exploration, communication, life histories, sexual strategies, politics and culture.
  • Cutler, A., Norris, D., & Sebastián-Gallés, N. (2004). Phonemic repertoire and similarity within the vocabulary. In S. Kin, & M. J. Bae (Eds.), Proceedings of the 8th International Conference on Spoken Language Processing (Interspeech 2004-ICSLP) (pp. 65-68). Seoul: Sunjin Printing Co.

    Abstract

    Language-specific differences in the size and distribution of the phonemic repertoire can have implications for the task facing listeners in recognising spoken words. A language with more phonemes will allow shorter words and reduced embedding of short words within longer ones, decreasing the potential for spurious lexical competitors to be activated by speech signals. We demonstrate that this is the case via comparative analyses of the vocabularies of English and Spanish. A language which uses suprasegmental as well as segmental contrasts, however, can substantially reduce the extent of spurious embedding.
  • Cutler, A., & Broersma, M. (2005). Phonetic precision in listening. In W. J. Hardcastle, & J. M. Beck (Eds.), A figure of speech: A Festschrift for John Laver (pp. 63-91). Mahwah, NJ: Erlbaum.
  • Cutler, A. (2004). Segmentation of spoken language by normal adult listeners. In R. Kent (Ed.), MIT encyclopedia of communication sciences and disorders (pp. 392-395). Cambridge, MA: MIT Press.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., Klein, W., & Levinson, S. C. (2005). The cornerstones of twenty-first century psycholinguistics. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 1-20). Mahwah, NJ: Erlbaum.
  • Cutler, A. (2005). The lexical statistics of word recognition problems caused by L2 phonetic confusion. In Proceedings of the 9th European Conference on Speech Communication and Technology (pp. 413-416).
  • Cutler, A., McQueen, J. M., & Norris, D. (2005). The lexical utility of phoneme-category plasticity. In Proceedings of the ISCA Workshop on Plasticity in Speech Perception (PSP2005) (pp. 103-107).
  • Cutler, A., & Henton, C. G. (2004). There's many a slip 'twixt the cup and the lip. In H. Quené, & V. Van Heuven (Eds.), On speech and Language: Studies for Sieb G. Nooteboom (pp. 37-45). Utrecht: Netherlands Graduate School of Linguistics.

    Abstract

    The retiring academic may look back upon, inter alia, years of conference attendance. Speech error researchers are uniquely fortunate because they can collect data in any situation involving communication; accordingly, the retiring speech error researcher will have collected data at those conferences. We here address the issue of whether error data collected in situations involving conviviality (such as at conferences) is representative of error data in general. Our approach involved a comparison, across three levels of linguistic processing, between a specially constructed Conviviality Sample and the largest existing source of speech error data, the newly available Fromkin Speech Error Database. The results indicate that there are grounds for regarding the data in the Conviviality Sample as a better than average reflection of the true population of all errors committed. These findings encourage us to recommend further data collection in collaboration with like-minded colleagues.
  • Cutler, A. (2004). Twee regels voor academische vorming. In H. Procee (Ed.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus. (pp. 42-45). Amsterdam: Boom.
  • Cutler, A. (Ed.). (2005). Twenty-first century psycholinguistics: Four cornerstones. Mahwah, NJ: Erlbaum.
  • Cutler, A., Smits, R., & Cooper, N. (2005). Vowel perception: Effects of non-native language vs. non-native dialect. Speech Communication, 47(1-2), 32-42. doi:10.1016/j.specom.2005.02.001.

    Abstract

    Three groups of listeners identified the vowel in CV and VC syllables produced by an American English talker. The listeners were (a) native speakers of American English, (b) native speakers of Australian English (different dialect), and (c) native speakers of Dutch (different language). The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0 dB, 8 dB, and 16 dB). The identification performance of native listeners was significantly better than that of listeners with another language but did not significantly differ from the performance of listeners with another dialect. Dialect differences did however affect the type of perceptual confusions which listeners made; in particular, the Australian listeners’ judgements of vowel tenseness were more variable than the American listeners’ judgements, which may be ascribed to cross-dialectal differences in this vocalic feature. Although listening difficulty can result when speech input mismatches the native dialect in terms of the precise cues for and boundaries of phonetic categories, the difficulty is very much less than that which arises when speech input mismatches the native language in terms of the repertoire of phonemic categories available.
  • Cutler, A. (2005). Why is it so hard to understand a second language in noise? Newsletter, American Association of Teachers of Slavic and East European Languages, 48, 16-16.
  • Cutler, A., & Otake, T. (1998). Assimilation of place in Japanese and Dutch. In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: vol. 5 (pp. 1751-1754). Sydney: ICSLP.

    Abstract

    Assimilation of place of articulation across a nasal and a following stop consonant is obligatory in Japanese, but not in Dutch. In four experiments the processing of assimilated forms by speakers of Japanese and Dutch was compared, using a task in which listeners blended pseudo-word pairs such as ranga-serupa. An assimilated blend of this pair would be rampa, an unassimilated blend rangpa. Japanese listeners produced significantly more assimilated than unassimilated forms, both with pseudo-Japanese and pseudo-Dutch materials, while Dutch listeners produced significantly more unassimilated than assimilated forms in each materials set. This suggests that Japanese listeners, whose native-language phonology involves obligatory assimilation constraints, represent the assimilated nasals in nasal-stop sequences as unmarked for place of articulation, while Dutch listeners, who are accustomed to hearing unassimilated forms, represent the same nasal segments as marked for place of articulation.
  • Cutler, A., & Fear, B. D. (1991). Categoricality in acceptability judgements for strong versus weak vowels. In J. Llisterri (Ed.), Proceedings of the ESCA Workshop on Phonetics and Phonology of Speaking Styles (pp. 18.1-18.5). Barcelona, Catalonia: Universitat Autonoma de Barcelona.

    Abstract

    A distinction between strong and weak vowels can be drawn on the basis of vowel quality, of stress, or of both factors. An experiment was conducted in which sets of contextually matched word-initial vowels ranging from clearly strong to clearly weak were cross-spliced, and the naturalness of the resulting words was rated by listeners. The ratings showed that in general cross-spliced words were only significantly less acceptable than unspliced words when schwa was not involved; this supports a categorical distinction based on vowel quality.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1983). A language-specific comprehension strategy [Letters to Nature]. Nature, 304, 159-160. doi:10.1038/304159a0.

    Abstract

    Infants acquire whatever language is spoken in the environment into which they are born. The mental capability of the newborn child is not biased in any way towards the acquisition of one human language rather than another. Because psychologists who attempt to model the process of language comprehension are interested in the structure of the human mind, rather than in the properties of individual languages, strategies which they incorporate in their models are presumed to be universal, not language-specific. In other words, strategies of comprehension are presumed to be characteristic of the human language processing system, rather than, say, the French, English, or Igbo language processing systems. We report here, however, on a comprehension strategy which appears to be used by native speakers of French but not by native speakers of English.
  • Cutler, A., & Otake, T. (1997). Contrastive studies of spoken-language processing. Journal of the Phonetic Society of Japan, 1, 4-13.
  • Cutler, A. (1992). Cross-linguistic differences in speech segmentation. MRC News, 56, 8-9.
  • Cutler, A., & Norris, D. (1992). Detection of vowels and consonants with minimal acoustic variation. Speech Communication, 11, 101-108. doi:10.1016/0167-6393(92)90004-Q.

    Abstract

    Previous research has shown that, in a phoneme detection task, vowels produce longer reaction times than consonants, suggesting that they are harder to perceive. One possible explanation for this difference is based upon their respective acoustic/articulatory characteristics. Another way of accounting for the findings would be to relate them to the differential functioning of vowels and consonants in the syllabic structure of words. In this experiment, we examined the second possibility. Targets were two pairs of phonemes, each containing a vowel and a consonant with similar phonetic characteristics. Subjects heard lists of English words and had to press a response key upon detecting the occurrence of a pre-specified target. This time, the phonemes which functioned as vowels in syllabic structure yielded shorter reaction times than those which functioned as consonants. This rules out an explanation for the response time difference between vowels and consonants in terms of function in syllable structure. Instead, we propose that consonantal and vocalic segments differ with respect to variability of tokens, both in the acoustic realisation of targets and in the representation of targets by listeners.
  • Cutler, A. (2005). Lexical stress. In D. B. Pisoni, & R. E. Remez (Eds.), The handbook of speech perception (pp. 264-289). Oxford: Blackwell.
  • Cutler, A., Mister, E., Norris, D., & Sebastián-Gallés, N. (2004). La perception de la parole en espagnol: Un cas particulier? In L. Ferrand, & J. Grainger (Eds.), Psycholinguistique cognitive: Essais en l'honneur de Juan Segui (pp. 57-74). Brussels: De Boeck.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A. (1998). How listeners find the right words. In Proceedings of the Sixteenth International Congress on Acoustics: Vol. 2 (pp. 1377-1380). Melville, NY: Acoustical Society of America.

    Abstract

    Languages contain tens of thousands of words, but these are constructed from a tiny handful of phonetic elements. Consequently, words resemble one another, or can be embedded within one another, a coup stick snot with standing. The process of spoken-word recognition by human listeners involves activation of multiple word candidates consistent with the input, and direct competition between activated candidate words. Further, human listeners are sensitive, at an early, prelexical, stage of speech processing, to constraints on what could potentially be a word of the language.
  • Cutler, A. (1982). Idioms: the older the colder. Linguistic Inquiry, 13(2), 317-320. Retrieved from http://www.jstor.org/stable/4178278?origin=JSTOR-pdf.
  • Cutler, A. (1983). Lexical complexity and sentence processing. In G. B. Flores d'Arcais, & R. J. Jarvella (Eds.), The process of language understanding (pp. 43-79). Chichester, Sussex: Wiley.
  • Cutler, A., & Chen, H.-C. (1997). Lexical tone in Cantonese spoken-word processing. Perception and Psychophysics, 59, 165-179. Retrieved from http://www.psychonomic.org/search/view.cgi?id=778.

    Abstract

    In three experiments, the processing of lexical tone in Cantonese was examined. Cantonese listeners more often accepted a nonword as a word when the only difference between the nonword and the word was in tone, especially when the F0 onset difference between correct and erroneous tone was small. Same–different judgments by these listeners were also slower and less accurate when the only difference between two syllables was in tone, and this was true whether the F0 onset difference between the two tones was large or small. Listeners with no knowledge of Cantonese produced essentially the same same-different judgment pattern as that produced by the native listeners, suggesting that the results display the effects of simple perceptual processing rather than of linguistic knowledge. It is argued that the processing of lexical tone distinctions may be slowed, relative to the processing of segmental distinctions, and that, in speeded-response tasks, tone is thus more likely to be misprocessed than is segmental structure.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1988). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    SPEECH, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research [1, 2], however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
  • Cutler, A. (1991). Linguistic rhythm and speech segmentation. In J. Sundberg, L. Nord, & R. Carlson (Eds.), Music, language, speech and brain (pp. 157-166). London: Macmillan.
  • Cutler, A., Kearns, R., Norris, D., & Scott, D. (1992). Listeners’ responses to extraneous signals coincident with English and French speech. In J. Pittam (Ed.), Proceedings of the 4th Australian International Conference on Speech Science and Technology (pp. 666-671). Canberra: Australian Speech Science and Technology Association.

    Abstract

    English and French listeners performed two tasks - click location and speeded click detection - with both English and French sentences, closely matched for syntactic and phonological structure. Clicks were located more accurately in open- than in closed-class words in both English and French; they were detected more rapidly in open- than in closed-class words in English, but not in French. The two listener groups produced the same pattern of responses, suggesting that higher-level linguistic processing was not involved in these tasks.
  • Cutler, A., & Fay, D. A. (1982). One mental lexicon, phonologically arranged: Comments on Hurford’s comments. Linguistic Inquiry, 13, 107-113. Retrieved from http://www.jstor.org/stable/4178262.
  • Cutler, A., Treiman, R., & Van Ooijen, B. (1998). Orthografik inkoncistensy ephekts in foneme detektion? In R. Mannell, & J. Robert-Ribes (Eds.), Proceedings of the Fifth International Conference on Spoken Language Processing: Vol. 6 (pp. 2783-2786). Sydney: ICSLP.

    Abstract

    The phoneme detection task is widely used in spoken word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realised. Listeners detected the target sounds [b,m,t,f,s,k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b,m,t], which have consistent word-initial spelling, than to the targets [f,s,k], which are inconsistently spelled, but only when listeners’ attention was drawn to spelling by the presence in the experiment of many irregularly spelled fillers. Within the inconsistent targets [f,s,k], there was no significant difference between responses to targets in words with majority and minority spellings. We conclude that performance in the phoneme detection task is not necessarily sensitive to orthographic effects, but that salient orthographic manipulation can induce such sensitivity.
  • Cutler, A. (1986). Phonological structure in speech recognition. Phonology Yearbook, 3, 161-178. Retrieved from http://www.jstor.org/stable/4615397.

    Abstract

    Two bodies of recent research from experimental psycholinguistics are summarised, each of which is centred upon a concept from phonology: LEXICAL STRESS and the SYLLABLE. The evidence indicates that neither construct plays a role in prelexical representations during speech recognition. Both constructs, however, are well supported by other performance evidence. Testing phonological claims against performance evidence from psycholinguistics can be difficult, since the results of studies designed to test processing models are often of limited relevance to phonological theory.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A. (1992). Proceedings with confidence. New Scientist, (1825), 54.
  • Cutler, A. (1992). Processing constraints of the native phonological repertoire on the native language. In Y. Tohkura, E. Vatikiotis-Bateson, & Y. Sagisaka (Eds.), Speech perception, production and linguistic structure (pp. 275-278). Tokyo: Ohmsha.
  • Cutler, A. (1998). Prosodic structure and word recognition. In A. D. Friederici (Ed.), Language comprehension: A biological perspective (pp. 41-70). Heidelberg: Springer.
  • Cutler, A. (1982). Prosody and sentence perception in English. In J. Mehler, E. C. Walker, & M. Garrett (Eds.), Perspectives on mental representation: Experimental and theoretical studies of cognitive processes and capacities (pp. 201-216). Hillsdale, N.J: Erlbaum.
  • Cutler, A., & Swinney, D. A. (1986). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.

    Abstract

    Four studies are reported in which young children’s response time to detect word targets was measured. Children under about six years of age did not show the response time advantage for accented target words which adult listeners show. When the semantic focus of the target word was manipulated independently of accent, children of about five years of age showed an adult-like response time advantage for focussed targets, but children younger than five did not. It is argued that the processing advantage for accented words reflects the semantic role of accent as an expression of sentence focus. Processing advantages for accented words depend on the prior development of representations of sentence semantic structure, including the concept of focus. The previous literature on the development of prosodic competence shows an apparent anomaly in that young children’s productive skills appear to outstrip their receptive skills; however, this anomaly disappears if very young children’s prosody is assumed to be produced without an underlying representation of the relationship between prosody and semantics.
  • Cutler, A. (1997). Prosody and the structure of the message. In Y. Sagisaka, N. Campbell, & N. Higuchi (Eds.), Computing prosody: Computational models for processing spontaneous speech (pp. 63-66). Heidelberg: Springer.
  • Cutler, A. (1991). Prosody in situations of communication: Salience and segmentation. In Proceedings of the Twelfth International Congress of Phonetic Sciences: Vol. 1 (pp. 264-270). Aix-en-Provence: Université de Provence, Service des publications.

    Abstract

    Speakers and listeners have a shared goal: to communicate. The processes of speech perception and of speech production interact in many ways under the constraints of this communicative goal; such interaction is as characteristic of prosodic processing as of the processing of other aspects of linguistic structure. Two of the major uses of prosodic information in situations of communication are to encode salience and segmentation, and these themes unite the contributions to the symposium introduced by the present review.
  • Cutler, A., Dahan, D., & Van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40, 141-201.

    Abstract

    Research on the exploitation of prosodic information in the recognition of spoken language is reviewed. The research falls into three main areas: the use of prosody in the recognition of spoken words, in which most attention has been paid to the question of whether the prosodic structure of a word plays a role in initial contact with stored lexical representations; the use of prosody in the computation of syntactic structure, in which the resolution of global and local ambiguities has formed the central focus; and the role of prosody in the processing of discourse structure, in which there has been a preponderance of work on the contribution of accentuation and deaccentuation to integration of concepts with an existing discourse model. The review reveals that in each area progress has been made towards new conceptions of prosody's role in processing, and in particular this has involved abandonment of previously held deterministic views of the relationship between prosodic structure and other aspects of linguistic structure.
  • Cutler, A., & Ladd, D. R. (Eds.). (1983). Prosody: Models and measurements. Heidelberg: Springer.
  • Cutler, A. (1992). Psychology and the segment. In G. Docherty, & D. Ladd (Eds.), Papers in laboratory phonology II: Gesture, segment, prosody (pp. 290-295). Cambridge: Cambridge University Press.
  • Cutler, A. (1997). The comparative perspective on spoken-language processing. Speech Communication, 21, 3-15. doi:10.1016/S0167-6393(96)00075-1.

    Abstract

    Psycholinguists strive to construct a model of human language processing in general. But this does not imply that they should confine their research to universal aspects of linguistic structure, and avoid research on language-specific phenomena. First, even universal characteristics of language structure can only be accurately observed cross-linguistically. This point is illustrated here by research on the role of the syllable in spoken-word recognition, on the perceptual processing of vowels versus consonants, and on the contribution of phonetic assimilation phenomena to phoneme identification. In each case, it is only by looking at the pattern of effects across languages that it is possible to understand the general principle. Second, language-specific processing can certainly shed light on the universal model of language comprehension. This second point is illustrated by studies of the exploitation of vowel harmony in the lexical segmentation of Finnish, of the recognition of Dutch words with and without vowel epenthesis, and of the contribution of different kinds of lexical prosodic structure (tone, pitch accent, stress) to the initial activation of candidate words in lexical access. In each case, aspects of the universal processing model are revealed by analysis of these language-specific effects. In short, the study of spoken-language processing by human listeners requires cross-linguistic comparison.
  • Cutler, A. (1983). Semantics, syntax and sentence accent. In M. Van den Broecke, & A. Cohen (Eds.), Proceedings of the Tenth International Congress of Phonetic Sciences (pp. 85-91). Dordrecht: Foris.
  • Cutler, A. (Ed.). (1982). Slips of the tongue and language production. The Hague: Mouton.
  • Cutler, A. (1983). Speakers’ conceptions of the functions of prosody. In A. Cutler, & D. R. Ladd (Eds.), Prosody: Models and measurements (pp. 79-91). Heidelberg: Springer.
  • Cutler, A. (1982). Speech errors: A classified bibliography. Bloomington: Indiana University Linguistics Club.
  • Cutler, A., & Robinson, T. (1992). Response time as a metric for comparison of speech recognition by humans and machines. In J. Ohala, T. Nearey, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing: Vol. 1 (pp. 189-192). Alberta: University of Alberta.

    Abstract

    The performance of automatic speech recognition systems is usually assessed in terms of error rate. Human speech recognition produces few errors, but relative difficulty of processing can be assessed via response time techniques. We report the construction of a measure analogous to response time in a machine recognition system. This measure may be compared directly with human response times. We conducted a trial comparison of this type at the phoneme level, including both tense and lax vowels and a variety of consonant classes. The results suggested similarities between human and machine processing in the case of consonants, but differences in the case of vowels.
  • Cutler, A., & Butterfield, S. (1992). Rhythmic cues to speech segmentation: Evidence from juncture misperception. Journal of Memory and Language, 31, 218-236. doi:10.1016/0749-596X(92)90012-M.

    Abstract

    Segmentation of continuous speech into its component words is a nontrivial task for listeners. Previous work has suggested that listeners develop heuristic segmentation procedures based on experience with the structure of their language; for English, the heuristic is that strong syllables (containing full vowels) are most likely to be the initial syllables of lexical words, whereas weak syllables (containing central, or reduced, vowels) are nonword-initial, or, if word-initial, are grammatical words. This hypothesis is here tested against natural and laboratory-induced missegmentations of continuous speech. Precisely the expected pattern is found: listeners erroneously insert boundaries before strong syllables but delete them before weak syllables; boundaries inserted before strong syllables produce lexical words, while boundaries inserted before weak syllables produce grammatical words.
  • Cutler, A. (1992). The perception of speech: Psycholinguistic aspects. In W. Bright (Ed.), International encyclopedia of language: Vol. 3 (pp. 181-183). New York: Oxford University Press.
  • Cutler, A., & Butterfield, S. (1986). The perceptual integrity of initial consonant clusters. In R. Lawrence (Ed.), Speech and Hearing: Proceedings of the Institute of Acoustics (pp. 31-36). Edinburgh: Institute of Acoustics.
  • Cutler, A. (1988). The perfect speech error. In L. Hyman, & C. Li (Eds.), Language, speech and mind: Studies in honor of Victoria A. Fromkin (pp. 209-223). London: Croom Helm.
  • Cutler, A. (1992). The production and perception of word boundaries. In Y. Tohkura, E. Vatikiotis-Bateson, & Y. Sagisaka (Eds.), Speech perception, production and linguistic structure (pp. 419-425). Tokyo: Ohmsha.
  • Cutler, A. (1998). The recognition of spoken words with variable representations. In D. Duez (Ed.), Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech (pp. 83-92). Aix-en-Provence: Université d'Aix-en-Provence.
  • Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121. doi:10.1037/0096-1523.14.1.113.

    Abstract

    A model of speech segmentation in a stress language is proposed, according to which the occurrence of a strong syllable triggers segmentation of the speech signal, whereas occurrence of a weak syllable does not trigger segmentation. We report experiments in which listeners detected words embedded in nonsense bisyllables more slowly when the bisyllable had two strong syllables than when it had a strong and a weak syllable; mint was detected more slowly in mintayve than in mintesh. According to our proposed model, this result is an effect of segmentation: When the second syllable is strong, it is segmented from the first syllable, and successful detection of the embedded word therefore requires assembly of speech material across a segmentation position. Speech recognition models involving phonemic or syllabic recoding, or based on strictly left-to-right processes, do not predict this result. It is argued that segmentation at strong syllables in continuous speech recognition serves the purpose of detecting the most efficient locations at which to initiate lexical access.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400. doi:10.1016/0749-596X(86)90033-1.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A. (1997). The syllable’s role in the segmentation of stress languages. Language and Cognitive Processes, 12, 839-845. doi:10.1080/016909697386718.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1992). The monolingual nature of speech segmentation by bilinguals. Cognitive Psychology, 24, 381-410.

    Abstract

    Monolingual French speakers employ a syllable-based procedure in speech segmentation; monolingual English speakers use a stress-based segmentation procedure and do not use the syllable-based procedure. In the present study French-English bilinguals participated in segmentation experiments with English and French materials. Their results as a group did not simply mimic the performance of English monolinguals with English language materials and of French monolinguals with French language materials. Instead, the bilinguals formed two groups, defined by forced choice of a dominant language. Only the French-dominant group showed syllabic segmentation and only with French language materials. The English-dominant group showed no syllabic segmentation in either language. However, the English-dominant group showed stress-based segmentation with English language materials; the French-dominant group did not. We argue that rhythmically based segmentation procedures are mutually exclusive, as a consequence of which speech segmentation by bilinguals is, in one respect at least, functionally monolingual.
  • Cutler, A. (1992). Why not abolish psycholinguistics? In W. Dressler, H. Luschützky, O. Pfeiffer, & J. Rennison (Eds.), Phonologica 1988 (pp. 77-87). Cambridge: Cambridge University Press.
  • Cutler, A. (1986). Why readers of this newsletter should run cross-linguistic experiments. European Psycholinguistics Association Newsletter, 13, 4-8.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Cutler, A. (Ed.). (2005). Twenty-first century psycholinguistics: Four cornerstones. Hillsdale, NJ: Erlbaum.
  • Cutler, A., & Bruggeman, L. (2013). Vocabulary structure and spoken-word recognition: Evidence from French reveals the source of embedding asymmetry. In Proceedings of INTERSPEECH: 14th Annual Conference of the International Speech Communication Association (pp. 2812-2816).

    Abstract

    Vocabularies contain hundreds of thousands of words built from only a handful of phonemes, so that inevitably longer words tend to contain shorter ones. In many languages (but not all) such embedded words occur more often word-initially than word-finally, and this asymmetry, if present, has far-reaching consequences for spoken-word recognition. Prior research had ascribed the asymmetry to suffixing or to effects of stress (in particular, final syllables containing the vowel schwa). Analyses of the standard French vocabulary here reveal an effect of suffixing, as predicted by this account, and further analyses of an artificial variety of French reveal that extensive final schwa has an independent and additive effect in promoting the embedding asymmetry.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Dahan, D., & Tanenhaus, M. K. (2005). Looking at the rope when looking for the snake: Conceptually mediated eye movements during spoken-word recognition. Psychonomic Bulletin & Review, 12(3), 453-459.

    Abstract

    Participants' eye movements to four objects displayed on a computer screen were monitored as the participants clicked on the object named in a spoken instruction. The display contained pictures of the referent (e.g., a snake), a competitor that shared features with the visual representation associated with the referent's concept (e.g., a rope), and two distractor objects (e.g., a couch and an umbrella). As the first sounds of the referent's name were heard, the participants were more likely to fixate the visual competitor than to fixate either of the distractor objects. Moreover, this effect was not modulated by the visual similarity between the referent and competitor pictures, independently estimated in a visual similarity rating task. Because the name of the visual competitor did not overlap with the phonetic input, eye movements reflected word-object matching at the level of lexically activated perceptual features and not merely at the level of preactivated sound forms.
  • D'Alessandra, Y., Carena, M. C., Spazzafumo, L., Martinelli, F., Bassetti, B., Devanna, P., Rubino, M., Marenzi, G., Colombo, G. I., Achilli, F., Maggiolini, S., Capogrossi, M. C., & Pompilio, G. (2013). Diagnostic Potential of Plasmatic MicroRNA Signatures in Stable and Unstable Angina. PLoS ONE, 8(11), e80345. doi:10.1371/journal.pone.0080345.

    Abstract

    PURPOSE: We examined circulating miRNA expression profiles in plasma of patients with coronary artery disease (CAD) vs. matched controls, with the aim of identifying novel discriminating biomarkers of Stable (SA) and Unstable (UA) angina. METHODS: An exploratory analysis of plasmatic expression profile of 367 miRNAs was conducted in a group of SA and UA patients and control donors, using TaqMan microRNA Arrays. Screening confirmation and expression analysis were performed by qRT-PCR: all miRNAs found dysregulated were examined in the plasma of troponin-negative UA (n=19) and SA (n=34) patients and control subjects (n=20), matched for sex, age, and cardiovascular risk factors. In addition, the expression of 14 known CAD-associated miRNAs was also investigated. RESULTS: Out of 178 miRNAs consistently detected in plasma samples, 3 showed positive modulation by CAD when compared to controls: miR-337-5p, miR-433, and miR-485-3p. Further, miR-1, -122, -126, -133a, -133b, and miR-199a were positively modulated in both UA and SA patients, while miR-337-5p and miR-145 showed a positive modulation only in SA or UA patients, respectively. ROC curve analyses showed a good diagnostic potential (AUC ≥ 0.85) for miR-1, -126, and -483-5p in SA and for miR-1, -126, and -133a in UA patients vs. controls, respectively. No discriminating AUC values were observed comparing SA vs. UA patients. Hierarchical cluster analysis showed that the combination of miR-1, -133a, and -126 in UA and of miR-1, -126, and -485-3p in SA correctly classified patients vs. controls with an efficiency ≥ 87%. No combination of miRNAs was able to reliably discriminate patients with UA from patients with SA. CONCLUSIONS: This work showed that specific plasmatic miRNA signatures have the potential to accurately discriminate patients with angiographically documented CAD from matched controls. We failed to identify a plasmatic miRNA expression pattern capable to differentiate SA from UA patients.
  • Dalli, A., Tablan, V., Bontcheva, K., Wilks, Y., Broeder, D., Brugman, H., & Wittenburg, P. (2004). Web services architecture for language resources. In M. Lino, M. Xavier, F. Ferreira, R. Costa, & R. Silva (Eds.), Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC2004) (pp. 365-368). Paris: ELRA - European Language Resources Association.
  • Dastjerdi, M., Ozker, M., Foster, B. L., Rangarajan, V., & Parvizi, J. (2013). Numerical processing in the human parietal cortex during experimental and natural conditions. Nature Communications, 4: 2528. doi:10.1038/ncomms3528.

    Abstract

    Human cognition is traditionally studied in experimental conditions wherein confounding complexities of the natural environment are intentionally eliminated. Thus, it remains unknown how a brain region involved in a particular experimental condition is engaged in natural conditions. Here we use electrocorticography to address this uncertainty in three participants implanted with intracranial electrodes and identify activations of neuronal populations within the intraparietal sulcus region during an experimental arithmetic condition. In a subsequent analysis, we report that the same intraparietal sulcus neural populations are activated when participants, engaged in social conversations, refer to objects with numerical content. Our prototype approach provides a means for both exploring human brain dynamics as they unfold in complex social settings and reconstructing natural experiences from recorded brain signals.
  • Davidson, D., & Martin, A. E. (2013). Modeling accuracy as a function of response time with the generalized linear mixed effects model. Acta Psychologica, 144(1), 83-96. doi:10.1016/j.actpsy.2013.04.016.

    Abstract

    In psycholinguistic studies using error rates as a response measure, response times (RT) are most often analyzed independently of the error rate, although it is widely recognized that they are related. In this paper we present a mixed effects logistic regression model for the error rate that uses RT as a trial-level fixed- and random-effect regression input. Production data from a translation–recall experiment are analyzed as an example. Several model comparisons reveal that RT improves the fit of the regression model for the error rate. Two simulation studies then show how the mixed effects regression model can identify individual participants for whom (a) faster responses are more accurate, (b) faster responses are less accurate, or (c) there is no relation between speed and accuracy. These results show that this type of model can serve as a useful adjunct to traditional techniques, allowing psycholinguistic researchers to examine more closely the relationship between RT and accuracy in individual subjects and better account for the variability which may be present, as well as a preliminary step to more advanced RT–accuracy modeling.
  • Davis, M. H., Johnsrude, I. S., Hervais-Adelman, A., Taylor, K., & McGettigan, C. (2005). Lexical information drives perceptual learning of distorted speech: Evidence from the comprehension of noise-vocoded sentences. Journal of Experimental Psychology-General, 134(2), 222-241. doi:10.1037/0096-3445.134.2.222.

    Abstract

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, listeners' reporting improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was enhanced if listeners heard distorted sentences while they knew the identity of the undistorted target (Experiments 2 and 3). Learning was absent when listeners were trained with nonword sentences (Experiments 4 and 5), although the meaning of the training sentences did not affect learning (Experiment 5). Perceptual learning of noise-vocoded speech depends on higher level information, consistent with top-down, lexically driven learning. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
  • Debreslioska, S., Ozyurek, A., Gullberg, M., & Perniss, P. M. (2013). Gestural viewpoint signals referent accessibility. Discourse Processes, 50(7), 431-456. doi:10.1080/0163853x.2013.824286.

    Abstract

    The tracking of entities in discourse is known to be a bimodal phenomenon. Speakers achieve cohesion in speech by alternating between full lexical forms, pronouns, and zero anaphora as they track referents. They also track referents in co-speech gestures. In this study, we explored how viewpoint is deployed in reference tracking, focusing on representations of animate entities in German narrative discourse. We found that gestural viewpoint systematically varies depending on discourse context. Speakers predominantly use character viewpoint in maintained contexts and observer viewpoint in reintroduced contexts. Thus, gestural viewpoint seems to function as a cohesive device in narrative discourse. The findings expand on and provide further evidence for the coordination between speech and gesture on the discourse level that is crucial to understanding the tight link between the two modalities.
  • Dediu, D., Cysouw, M., Levinson, S. C., Baronchelli, A., Christiansen, M. H., Croft, W., Evans, N., Garrod, S., Gray, R., Kandler, A., & Lieven, E. (2013). Cultural evolution of language. In P. J. Richerson, & M. H. Christiansen (Eds.), Cultural evolution: Society, technology, language, and religion. Strüngmann Forum Reports, vol. 12 (pp. 303-332). Cambridge, Mass: MIT Press.

    Abstract

    This chapter argues that an evolutionary cultural approach to language not only has already proven fruitful, but it probably holds the key to understanding many puzzling aspects of language, its change and origins. The chapter begins by highlighting several still common misconceptions about language that might seem to call into question a cultural evolutionary approach. It explores the antiquity of language and sketches a general evolutionary approach discussing the aspects of function, fitness, replication, and selection, as well as the relevant units of linguistic evolution. In this context, the chapter looks at some fundamental aspects of linguistic diversity such as the nature of the design space, the mechanisms generating it, and the shape and fabric of language. Given that biology is another evolutionary system, its complex coevolution with language needs to be understood in order to have a proper theory of language. Throughout the chapter, various challenges are identified and discussed, sketching promising directions for future research. The chapter ends by listing the necessary data, methods, and theoretical developments required for a grounded evolutionary approach to language.
  • Dediu, D. (2013). Genes: Interactions with language on three levels — Inter-individual variation, historical correlations and genetic biasing. In P.-M. Binder, & K. Smith (Eds.), The language phenomenon: Human communication from milliseconds to millennia (pp. 139-161). Berlin: Springer. doi:10.1007/978-3-642-36086-2_7.

    Abstract

    The complex inter-relationships between genetics and linguistics encompass all four scales highlighted by the contributions to this book and, together with cultural transmission, the genetics of language holds the promise to offer a unitary understanding of this fascinating phenomenon. There are inter-individual differences in genetic makeup which contribute to the obvious fact that we are not identical in the way we understand and use language and, by studying them, we will be able to both better treat and enhance ourselves. There are correlations between the genetic configuration of human groups and their languages, reflecting the historical processes shaping them, and there also seem to exist genes which can influence some characteristics of language, biasing it towards or against certain states by altering the way language is transmitted across generations. Besides the joys of pure knowledge, the understanding of these three aspects of genetics relevant to language will potentially trigger advances in medicine, linguistics, psychology or the understanding of our own past and, last but not least, a profound change in the way we regard one of the emblems of being human: our capacity for language.
  • Dediu, D., & Levinson, S. C. (2013). On the antiquity of language: The reinterpretation of Neandertal linguistic capacities and its consequences. Frontiers in Language Sciences, 4: 397. doi:10.3389/fpsyg.2013.00397.

    Abstract

    It is usually assumed that modern language is a recent phenomenon, coinciding with the emergence of modern humans themselves. Many assume as well that this is the result of a single, sudden mutation giving rise to the full “modern package”. However, we argue here that recognizably modern language is likely an ancient feature of our genus pre-dating at least the common ancestor of modern humans and Neandertals about half a million years ago. To this end, we adduce a broad range of evidence from linguistics, genetics, palaeontology and archaeology clearly suggesting that Neandertals shared with us something like modern speech and language. This reassessment of the antiquity of modern language, from the usually quoted 50,000-100,000 years to half a million years, has profound consequences for our understanding of our own evolution in general and especially for the sciences of speech and language. As such, it argues against a saltationist scenario for the evolution of language, and towards a gradual process of culture-gene co-evolution extending to the present day. Another consequence is that the present-day linguistic diversity might better reflect the properties of the design space for language and not just the vagaries of history, and could also contain traces of the languages spoken by other human forms such as the Neandertals.
  • Dediu, D., & Cysouw, M. A. (2013). Some structural aspects of language are more stable than others: A comparison of seven methods. PLoS One, 8: e55009. doi:10.1371/journal.pone.0055009.

    Abstract

    Understanding the patterns and causes of differential structural stability is an area of major interest for the study of language change and evolution. It is still debated whether structural features have intrinsic stabilities across language families and geographic areas, or if the processes governing their rate of change are completely dependent upon the specific context of a given language or language family. We conducted an extensive literature review and selected seven different approaches to conceptualising and estimating the stability of structural linguistic features, aiming at comparing them using the same dataset, the World Atlas of Language Structures. We found that, despite profound conceptual and empirical differences between these methods, they tend to agree in classifying some structural linguistic features as being more stable than others. This suggests that there are intrinsic properties of such structural features influencing their stability across methods, language families and geographic areas. This finding is a major step towards understanding the nature of structural linguistic features and their interaction with idiosyncratic, lineage- and area-specific factors during language change and evolution.
  • Den Os, E., & Boves, L. (2004). Natural multimodal interaction for design applications. In P. Cunningham (Ed.), Adoption and the knowledge economy (pp. 1403-1410). Amsterdam: IOS Press.
  • den Hoed, M., Eijgelsheim, M., Esko, T., Brundel, B. J. J. M., Peal, D. S., Evans, D. M., Nolte, I. M., Segrè, A. V., Holm, H., Handsaker, R. E., Westra, H.-J., Johnson, T., Isaacs, A., Yang, J., Lundby, A., Zhao, J. H., Kim, Y. J., Go, M. J., Almgren, P., Bochud, M., Boucher, G., Cornelis, M. C., Gudbjartsson, D., Hadley, D., van der Harst, P., Hayward, C., den Heijer, M., Igl, W., Jackson, A. U., Kutalik, Z., Luan, J., Kemp, J. P., Kristiansson, K., Ladenvall, C., Lorentzon, M., Montasser, M. E., Njajou, O. T., O'Reilly, P. F., Padmanabhan, S., St Pourcain, B., Rankinen, T., Salo, P., Tanaka, T., Timpson, N. J., Vitart, V., Waite, L., Wheeler, W., Zhang, W., Draisma, H. H. M., Feitosa, M. F., Kerr, K. F., Lind, P. A., Mihailov, E., Onland-Moret, N. C., Song, C., Weedon, M. N., Xie, W., Yengo, L., Absher, D., Albert, C. M., Alonso, A., Arking, D. E., de Bakker, P. I. W., Balkau, B., Barlassina, C., Benaglio, P., Bis, J. C., Bouatia-Naji, N., Brage, S., Chanock, S. J., Chines, P. S., Chung, M., Darbar, D., Dina, C., Dörr, M., Elliott, P., Felix, S. B., Fischer, K., Fuchsberger, C., de Geus, E. J. C., Goyette, P., Gudnason, V., Harris, T. B., Hartikainen, A.-L., Havulinna, A. S., Heckbert, S. R., Hicks, A. A., Hofman, A., Holewijn, S., Hoogstra-Berends, F., Hottenga, J.-J., Jensen, M. K., Johansson, A., Junttila, J., Kääb, S., Kanon, B., Ketkar, S., Khaw, K.-T., Knowles, J. W., Kooner, A. S., Kors, J. A., Kumari, M., Milani, L., Laiho, P., Lakatta, E. G., Langenberg, C., Leusink, M., Liu, Y., Luben, R. N., Lunetta, K. L., Lynch, S. N., Markus, M. R. P., Marques-Vidal, P., Mateo Leach, I., McArdle, W. L., McCarroll, S. A., Medland, S. E., Miller, K. A., Montgomery, G. W., Morrison, A.
C., Müller-Nurasyid, M., Navarro, P., Nelis, M., O'Connell, J. R., O'Donnell, C. J., Ong, K. K., Newman, A. B., Peters, A., Polasek, O., Pouta, A., Pramstaller, P. P., Psaty, B. M., Rao, D. C., Ring, S. M., Rossin, E. J., Rudan, D., Sanna, S., Scott, R. A., Sehmi, J. S., Sharp, S., Shin, J. T., Singleton, A. B., Smith, A. V., Soranzo, N., Spector, T. D., Stewart, C., Stringham, H. M., Tarasov, K. V., Uitterlinden, A. G., Vandenput, L., Hwang, S.-J., Whitfield, J. B., Wijmenga, C., Wild, S. H., Willemsen, G., Wilson, J. F., Witteman, J. C. M., Wong, A., Wong, Q., Jamshidi, Y., Zitting, P., Boer, J. M. A., Boomsma, D. I., Borecki, I. B., van Duijn, C. M., Ekelund, U., Forouhi, N. G., Froguel, P., Hingorani, A., Ingelsson, E., Kivimaki, M., Kronmal, R. A., Kuh, D., Lind, L., Martin, N. G., Oostra, B. A., Pedersen, N. L., Quertermous, T., Rotter, J. I., van der Schouw, Y. T., Verschuren, W. M. M., Walker, M., Albanes, D., Arnar, D. O., Assimes, T. L., Bandinelli, S., Boehnke, M., de Boer, R. A., Bouchard, C., Caulfield, W. L. M., Chambers, J. C., Curhan, G., Cusi, D., Eriksson, J., Ferrucci, L., van Gilst, W. H., Glorioso, N., de Graaf, J., Groop, L., Gyllensten, U., Hsueh, W.-C., Hu, F. B., Huikuri, H. V., Hunter, D. J., Iribarren, C., Isomaa, B., Jarvelin, M.-R., Jula, A., Kähönen, M., Kiemeney, L. A., van der Klauw, M. M., Kooner, J. S., Kraft, P., Iacoviello, L., Lehtimäki, T., Lokki, M.-L.-L., Mitchell, B. D., Navis, G., Nieminen, M. S., Ohlsson, C., Poulter, N. R., Qi, L., Raitakari, O. T., Rimm, E. B., Rioux, J. D., Rizzi, F., Rudan, I., Salomaa, V., Sever, P. S., Shields, D. C., Shuldiner, A. R., Sinisalo, J., Stanton, A. V., Stolk, R. P., Strachan, D. P., Tardif, J.-C., Thorsteinsdottir, U., Tuomilehto, J., van Veldhuisen, D. J., Virtamo, J., Viikari, J., Vollenweider, P., Waeber, G., Widen, E., Cho, Y. S., Olsen, J. V., Visscher, P. M., Willer, C., Franke, L., Erdmann, J., Thompson, J. R., Pfeufer, A., Sotoodehnia, N., Newton-Cheh, C., Ellinor, P. 
T., Stricker, B. H. C., Metspalu, A., Perola, M., Beckmann, J. S., Smith, G. D., Stefansson, K., Wareham, N. J., Munroe, P. B., Sibon, O. C. M., Milan, D. J., Snieder, H., Samani, N. J., Loos, R. J. F., Global BPgen Consortium, CARDIoGRAM Consortium, PR GWAS Consortium, QRS GWAS Consortium, QT-IGC Consortium, & CHARGE-AF Consortium (2013). Identification of heart rate-associated loci and their effects on cardiac conduction and rhythm disorders. Nature Genetics, 45(6), 621-631. doi:10.1038/ng.2610.

    Abstract

    Elevated resting heart rate is associated with greater risk of cardiovascular disease and mortality. In a 2-stage meta-analysis of genome-wide association studies in up to 181,171 individuals, we identified 14 new loci associated with heart rate and confirmed associations with all 7 previously established loci. Experimental downregulation of gene expression in Drosophila melanogaster and Danio rerio identified 20 genes at 11 loci that are relevant for heart rate regulation and highlight a role for genes involved in signal transmission, embryonic cardiac development and the pathophysiology of dilated cardiomyopathy, congenital heart failure and/or sudden cardiac death. In addition, genetic susceptibility to increased heart rate is associated with altered cardiac conduction and reduced risk of sick sinus syndrome, and both heart rate-increasing and heart rate-decreasing variants associate with risk of atrial fibrillation. Our findings provide fresh insights into the mechanisms regulating heart rate and identify new therapeutic targets.
  • Deriziotis, P., & Fisher, S. E. (2013). Neurogenomics of speech and language disorders: The road ahead. Genome Biology, 14: 204. doi:10.1186/gb-2013-14-4-204.

    Abstract

    Next-generation sequencing is set to transform the discovery of genes underlying neurodevelopmental disorders, and so offer important insights into the biological bases of spoken language. Success will depend on functional assessments in neuronal cell lines, animal models and humans themselves.
  • Devaraju, K., Barnabé-Heider, F., Kokaia, Z., & Lindvall, O. (2013). FoxJ1-expressing cells contribute to neurogenesis in forebrain of adult rats: Evidence from in vivo electroporation combined with piggyBac transposon. Experimental Cell Research, 319(18), 2790-2800. doi:10.1016/j.yexcr.2013.08.028.

    Abstract

    Ependymal cells in the lateral ventricular wall are considered to be post-mitotic but can give rise to neuroblasts and astrocytes after stroke in adult mice due to insult-induced suppression of Notch signaling. The transcription factor FoxJ1, which has been used to characterize mouse ependymal cells, is also expressed by a subset of astrocytes. Cells expressing FoxJ1, which drives the expression of motile cilia, contribute to early postnatal neurogenesis in mouse olfactory bulb. The distribution and progeny of FoxJ1-expressing cells in rat forebrain are unknown. Here we show using immunohistochemistry that the overall majority of FoxJ1-expressing cells in the lateral ventricular wall of adult rats are ependymal cells with a minor population being astrocytes. To allow for long-term fate mapping of FoxJ1-derived cells, we used the piggyBac system for in vivo gene transfer with electroporation. Using this method, we found that FoxJ1-expressing cells, presumably the astrocytes, give rise to neuroblasts and mature neurons in the olfactory bulb both in intact and stroke-damaged brain of adult rats. No significant contribution of FoxJ1-derived cells to stroke-induced striatal neurogenesis was detected. These data indicate that in the adult rat brain, FoxJ1-expressing cells contribute to the formation of new neurons in the olfactory bulb but are not involved in the cellular repair after stroke.
  • Dietrich, R., & Klein, W. (1986). Simple language. Interdisciplinary Science Reviews, 11(2), 110-117.
  • Dijkstra, T., Moscoso del Prado Martín, F., Schulpen, B., Schreuder, R., & Baayen, R. H. (2005). A roommate in cream: Morphological family size effects on interlingual homograph recognition. Language and Cognitive Processes, 20, 7-41. doi:10.1080/01690960444000124.
  • Dijkstra, T., & Kempen, G. (1997). Het taalgebruikersmodel. In H. Hulshof, & T. Hendrix (Eds.), De taalcentrale. Amsterdam: Bulkboek.
  • Dimroth, C., & Lindner, K. (2005). Was langsame Lerner uns zeigen können: der Erwerb der Finitheit im Deutschen durch einsprachige Kinder mit spezifischer Sprachentwicklungsstörung und durch Zweitsprachlerner. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 40-61.
  • Dimroth, C., & Watorek, M. (2005). Additive scope particles in advanced learner and native speaker discourse. In H. Hendriks (Ed.), The structure of learner varieties (pp. 461-488). Berlin: Mouton de Gruyter.
  • Dimroth, C. (2004). Fokuspartikeln und Informationsgliederung im Deutschen. Tübingen: Stauffenburg.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dingemanse, M. (2013). Wie wir mit Sprache malen - How to paint with language. Forschungsbericht 2013 - Max-Planck-Institut für Psycholinguistik. In Max-Planck-Gesellschaft Jahrbuch 2013. München: Max Planck Society for the Advancement of Science. Retrieved from http://www.mpg.de/6683977/Psycholinguistik_JB_2013.

    Abstract

    Words evolve not as blobs of ink on paper but in face-to-face interaction. The nature of language as fundamentally interactive and multimodal is shown by the study of ideophones, vivid sensory words that thrive in conversations around the world. The ways in which these Lautbilder enable precise communication about sensory knowledge have been studied in detail for the first time. It turns out that we can paint with language, and that the onomatopoeia we sometimes classify as childish might be a subset of a much richer toolkit for depiction in speech, available to us all.
  • Dingemanse, M. (2013). Ideophones and gesture in everyday speech. Gesture, 13, 143-165. doi:10.1075/gest.13.2.02din.

    Abstract

    This article examines the relation between ideophones and gestures in a corpus of everyday discourse in Siwu, a richly ideophonic language spoken in Ghana. The overall frequency of ideophone-gesture couplings in everyday speech is lower than previously suggested, but two findings shed new light on the relation between ideophones and gesture. First, discourse type makes a difference: ideophone-gesture couplings are more frequent in narrative contexts, a finding that explains earlier claims, which were based not on everyday language use but on elicited narratives. Second, there is a particularly strong coupling between ideophones and one type of gesture: iconic gestures. This coupling allows us to better understand iconicity in relation to the affordances of meaning and modality. Ultimately, the connection between ideophones and iconic gestures is explained by reference to the depictive nature of both. Ideophone and iconic gesture are two aspects of the process of depiction.
  • Dingemanse, M., Torreira, F., & Enfield, N. J. (2013). Is “Huh?” a universal word? Conversational infrastructure and the convergent evolution of linguistic items. PLoS One, 8(11): e78273. doi:10.1371/journal.pone.0078273.

    Abstract

    A word like Huh? (used as a repair initiator when, for example, one has not clearly heard what someone just said) is found in roughly the same form and function in spoken languages across the globe. We investigate it in naturally occurring conversations in ten languages and present evidence and arguments for two distinct claims: that Huh? is universal, and that it is a word. In support of the first, we show that the similarities in form and function of this interjection across languages are much greater than expected by chance. In support of the second claim we show that it is a lexical, conventionalised form that has to be learnt, unlike grunts or emotional cries. We discuss possible reasons for the cross-linguistic similarity and propose an account in terms of convergent evolution. Huh? is a universal word not because it is innate but because it is shaped by selective pressures in an interactional environment that all languages share: that of other-initiated repair. Our proposal enhances evolutionary models of language change by suggesting that conversational infrastructure can drive the convergent cultural evolution of linguistic items.