Publications

Displaying 101 - 200 of 771
  • Cronin, K. A., Mitchell, M. A., Lonsdorf, E. V., & Thompson, S. D. (2006). One year later: Evaluation of PMC-Recommended births and transfers. Zoo Biology, 25, 267-277. doi:10.1002/zoo.20100.

    Abstract

    To meet their exhibition, conservation, education, and scientific goals, members of the American Zoo and Aquarium Association (AZA) collaborate to manage their living collections as single species populations. These cooperative population management programs, Species Survival Plans (SSPs) and Population Management Plans (PMPs), issue specimen-by-specimen recommendations aimed at perpetuating captive populations by maintaining genetic diversity and demographic stability. Species Survival Plans and PMPs differ in that SSP participants agree to complete recommendations, whereas PMP participants need only take recommendations under advisement. We evaluated the effect of program type and the number of participating institutions on the success of actions recommended by the Population Management Center (PMC): transfers of specimens between institutions, breeding, and target number of offspring. We analyzed AZA studbook databases for the occurrence of recommended or unrecommended transfers and births during the 1-year period after the distribution of standard AZA Breeding-and-Transfer Plans. We had three major findings: 1) on average, both SSPs and PMPs fell about 25% short of their target; however, as the number of participating institutions increased so too did the likelihood that programs met or exceeded their target; 2) SSPs exhibited significantly greater transfer success than PMPs, although transfer success for both program types was below 50%; and 3) SSPs exhibited significantly greater breeding success than PMPs, although breeding success for both program types was below 20%. Together, these results indicate that the science and sophistication behind genetic and demographic management of captive populations may be compromised by the challenges of implementation.
  • Cutler, A., Weber, A., Smits, R., & Cooper, N. (2004). Patterns of English phoneme confusions by native and non-native listeners. Journal of the Acoustical Society of America, 116(6), 3668-3678. doi:10.1121/1.1810292.

    Abstract

    Native American English and non-native (Dutch) listeners identified either the consonant or the vowel in all possible American English CV and VC syllables. The syllables were embedded in multispeaker babble at three signal-to-noise ratios (0, 8, and 16 dB). The phoneme identification performance of the non-native listeners was less accurate than that of the native listeners. All listeners were adversely affected by noise. With these isolated syllables, initial segments were harder to identify than final segments. Crucially, the effects of language background and noise did not interact; the performance asymmetry between the native and non-native groups was not significantly different across signal-to-noise ratios. It is concluded that the frequently reported disproportionate difficulty of non-native listening under disadvantageous conditions is not due to a disproportionate increase in phoneme misidentifications.
  • Cutler, A. (2004). On spoken-word recognition in a second language. Newsletter, American Association of Teachers of Slavic and East European Languages, 47, 15-15.
  • Cutler, A., & Norris, D. (2016). Bottoms up! How top-down pitfalls ensnare speech perception researchers too. Commentary on C. Firestone & B. Scholl: Cognition does not affect perception: Evaluating the evidence for 'top-down' effects. Behavioral and Brain Sciences, e236. doi:10.1017/S0140525X15002745.

    Abstract

    Not only can the pitfalls that Firestone & Scholl (F&S) identify be generalised across multiple studies within the field of visual perception, but also they have general application outside the field wherever perceptual and cognitive processing are compared. We call attention to the widespread susceptibility of research on the perception of speech to versions of the same pitfalls.
  • Cutler, A., Norris, D., & Williams, J. (1987). A note on the role of phonological expectations in speech segmentation. Journal of Memory and Language, 26, 480-487. doi:10.1016/0749-596X(87)90103-3.

    Abstract

    Word-initial CVC syllables are detected faster in words beginning consonant-vowel-consonant-vowel (CVCV-) than in words beginning consonant-vowel-consonant-consonant (CVCC-). This effect was reported independently by M. Taft and G. Hambly (1985, Journal of Memory and Language, 24, 320–335) and by A. Cutler, J. Mehler, D. Norris, and J. Segui (1986, Journal of Memory and Language, 25, 385–400). Taft and Hambly explained the effect in terms of lexical factors. This explanation cannot account for Cutler et al.'s results, in which the effect also appeared with nonwords and foreign words. Cutler et al. suggested that CVCV-sequences might simply be easier to perceive than CVCC-sequences. The present study confirms this suggestion, and explains it as a reflection of listener expectations constructed on the basis of distributional characteristics of the language.
  • Cutler, A., & Otake, T. (1997). Contrastive studies of spoken-language processing. Journal of the Phonetic Society of Japan, 1, 4-13.
  • Cutler, A., Weber, A., & Otake, T. (2006). Asymmetric mapping from phonetic to lexical representations in second-language listening. Journal of Phonetics, 34(2), 269-284. doi:10.1016/j.wocn.2005.06.002.

    Abstract

    The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
  • Cutler, A. (1986). Forbear is a homophone: Lexical prosody does not constrain lexical access. Language and Speech, 29, 201-220.

    Abstract

    Because stress can occur in any position within an English word, lexical prosody could serve as a minimal distinguishing feature between pairs of words. However, most pairs of English words with stress pattern opposition also differ vocalically: OBject and obJECT, CONtent and conTENT have different vowels in their first syllables as well as different stress patterns. To test whether prosodic information is made use of in auditory word recognition independently of segmental phonetic information, it is necessary to examine pairs like FORbear – forBEAR or TRUSty – trusTEE, semantically unrelated words which exhibit stress pattern opposition but no segmental difference. In a cross-modal priming task, such words produce the priming effects characteristic of homophones, indicating that lexical prosody is not used in the same way as segmental structure to constrain lexical access.
  • Cutler, A., & Chen, H.-C. (1997). Lexical tone in Cantonese spoken-word processing. Perception and Psychophysics, 59, 165-179. Retrieved from http://www.psychonomic.org/search/view.cgi?id=778.

    Abstract

    In three experiments, the processing of lexical tone in Cantonese was examined. Cantonese listeners more often accepted a nonword as a word when the only difference between the nonword and the word was in tone, especially when the F0 onset difference between correct and erroneous tone was small. Same–different judgments by these listeners were also slower and less accurate when the only difference between two syllables was in tone, and this was true whether the F0 onset difference between the two tones was large or small. Listeners with no knowledge of Cantonese produced essentially the same same-different judgment pattern as that produced by the native listeners, suggesting that the results display the effects of simple perceptual processing rather than of linguistic knowledge. It is argued that the processing of lexical tone distinctions may be slowed, relative to the processing of segmental distinctions, and that, in speeded-response tasks, tone is thus more likely to be misprocessed than is segmental structure.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1988). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    Speech, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research [1, 2], however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1987). Phoneme identification and the lexicon. Cognitive Psychology, 19, 141-177. doi:10.1016/0010-0285(87)90010-7.
  • Cutler, A. (1986). Phonological structure in speech recognition. Phonology Yearbook, 3, 161-178. Retrieved from http://www.jstor.org/stable/4615397.

    Abstract

    Two bodies of recent research from experimental psycholinguistics are summarised, each of which is centred upon a concept from phonology: LEXICAL STRESS and the SYLLABLE. The evidence indicates that neither construct plays a role in prelexical representations during speech recognition. Both constructs, however, are well supported by other performance evidence. Testing phonological claims against performance evidence from psycholinguistics can be difficult, since the results of studies designed to test processing models are often of limited relevance to phonological theory.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A., & Swinney, D. A. (1986). Prosody and the development of comprehension. Journal of Child Language, 14, 145-167.

    Abstract

    Four studies are reported in which young children’s response time to detect word targets was measured. Children under about six years of age did not show the response time advantage for accented target words which adult listeners show. When the semantic focus of the target word was manipulated independently of accent, children of about five years of age showed an adult-like response time advantage for focussed targets, but children younger than five did not. It is argued that the processing advantage for accented words reflects the semantic role of accent as an expression of sentence focus. Processing advantages for accented words depend on the prior development of representations of sentence semantic structure, including the concept of focus. The previous literature on the development of prosodic competence shows an apparent anomaly, in that young children’s productive skills appear to outstrip their receptive skills; however, this anomaly disappears if very young children’s prosody is assumed to be produced without an underlying representation of the relationship between prosody and semantics.
  • Cutler, A., Dahan, D., & Van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40, 141-201.

    Abstract

    Research on the exploitation of prosodic information in the recognition of spoken language is reviewed. The research falls into three main areas: the use of prosody in the recognition of spoken words, in which most attention has been paid to the question of whether the prosodic structure of a word plays a role in initial contact with stored lexical representations; the use of prosody in the computation of syntactic structure, in which the resolution of global and local ambiguities has formed the central focus; and the role of prosody in the processing of discourse structure, in which there has been a preponderance of work on the contribution of accentuation and deaccentuation to integration of concepts with an existing discourse model. The review reveals that in each area progress has been made towards new conceptions of prosody's role in processing, and in particular this has involved abandonment of previously held deterministic views of the relationship between prosodic structure and other aspects of linguistic structure.
  • Cutler, A. (1997). The comparative perspective on spoken-language processing. Speech Communication, 21, 3-15. doi:10.1016/S0167-6393(96)00075-1.

    Abstract

    Psycholinguists strive to construct a model of human language processing in general. But this does not imply that they should confine their research to universal aspects of linguistic structure, and avoid research on language-specific phenomena. First, even universal characteristics of language structure can only be accurately observed cross-linguistically. This point is illustrated here by research on the role of the syllable in spoken-word recognition, on the perceptual processing of vowels versus consonants, and on the contribution of phonetic assimilation phenomena to phoneme identification. In each case, it is only by looking at the pattern of effects across languages that it is possible to understand the general principle. Second, language-specific processing can certainly shed light on the universal model of language comprehension. This second point is illustrated by studies of the exploitation of vowel harmony in the lexical segmentation of Finnish, of the recognition of Dutch words with and without vowel epenthesis, and of the contribution of different kinds of lexical prosodic structure (tone, pitch accent, stress) to the initial activation of candidate words in lexical access. In each case, aspects of the universal processing model are revealed by analysis of these language-specific effects. In short, the study of spoken-language processing by human listeners requires cross-linguistic comparison.
  • Cutler, A., Butterfield, S., & Williams, J. (1987). The perceptual integrity of syllabic onsets. Journal of Memory and Language, 26, 406-418. doi:10.1016/0749-596X(87)90099-4.
  • Cutler, A., & Carter, D. (1987). The predominance of strong initial syllables in the English vocabulary. Computer Speech and Language, 2, 133-142. doi:10.1016/0885-2308(87)90004-0.

    Abstract

    Studies of human speech processing have provided evidence for a segmentation strategy in the perception of continuous speech, whereby a word boundary is postulated, and a lexical access procedure initiated, at each metrically strong syllable. The likely success of this strategy was here estimated against the characteristics of the English vocabulary. Two computerized dictionaries were found to list approximately three times as many words beginning with strong syllables (i.e. syllables containing a full vowel) as beginning with weak syllables (i.e. syllables containing a reduced vowel). Consideration of frequency of lexical word occurrence reveals that words beginning with strong syllables occur on average more often than words beginning with weak syllables. Together, these findings motivate an estimate for everyday speech recognition that approximately 85% of lexical words (i.e. excluding function words) will begin with strong syllables. This estimate was tested against a corpus of 190 000 words of spontaneous British English conversation. In this corpus, 90% of lexical words were found to begin with strong syllables. This suggests that a strategy of postulating word boundaries at the onset of strong syllables would have a high success rate in that few actual lexical word onsets would be missed.
  • Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121. doi:10.1037/0096-1523.14.1.113.

    Abstract

    A model of speech segmentation in a stress language is proposed, according to which the occurrence of a strong syllable triggers segmentation of the speech signal, whereas occurrence of a weak syllable does not trigger segmentation. We report experiments in which listeners detected words embedded in nonsense bisyllables more slowly when the bisyllable had two strong syllables than when it had a strong and a weak syllable; mint was detected more slowly in mintayve than in mintesh. According to our proposed model, this result is an effect of segmentation: When the second syllable is strong, it is segmented from the first syllable, and successful detection of the embedded word therefore requires assembly of speech material across a segmentation position. Speech recognition models involving phonemic or syllabic recoding, or based on strictly left-to-right processes, do not predict this result. It is argued that segmentation at strong syllables in continuous speech recognition serves the purpose of detecting the most efficient locations at which to initiate lexical access.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1986). The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language, 25, 385-400. doi:10.1016/0749-596X(86)90033-1.

    Abstract

    Speech segmentation procedures may differ in speakers of different languages. Earlier work based on French speakers listening to French words suggested that the syllable functions as a segmentation unit in speech processing. However, while French has relatively regular and clearly bounded syllables, other languages, such as English, do not. No trace of syllabifying segmentation was found in English listeners listening to English words, French words, or nonsense words. French listeners, however, showed evidence of syllabification even when they were listening to English words. We conclude that alternative segmentation routines are available to the human language processor. In some cases speech segmentation may involve the operation of more than one procedure.
  • Cutler, A. (1997). The syllable’s role in the segmentation of stress languages. Language and Cognitive Processes, 12, 839-845. doi:10.1080/016909697386718.
  • Cutler, A. (1987). The task of the speaker and the task of the hearer [Commentary/Sperber & Wilson: Relevance]. Behavioral and Brain Sciences, 10, 715-716.
  • Cutler, A. (1986). Why readers of this newsletter should run cross-linguistic experiments. European Psycholinguistics Association Newsletter, 13, 4-8.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Dahan, D., & Tanenhaus, M. K. (2004). Continuous mapping from sound to meaning in spoken-language comprehension: Immediate effects of verb-based thematic constraints. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2), 498-513. doi:10.1037/0278-7393.30.2.498.

    Abstract

    The authors used 2 “visual-world” eye-tracking experiments to examine lexical access using Dutch constructions in which the verb did or did not place semantic constraints on its subsequent subject noun phrase. In Experiment 1, fixations to the picture of a cohort competitor (overlapping with the onset of the referent’s name, the subject) did not differ from fixations to a distractor in the constraining-verb condition. In Experiment 2, cross-splicing introduced phonetic information that temporarily biased the input toward the cohort competitor. Fixations to the cohort competitor temporarily increased in both the neutral and constraining conditions. These results favor models in which mapping from the input onto meaning is continuous over models in which contextual effects follow access of an initial form-based competitor set.
  • Davidson, D. J. (2006). Strategies for longitudinal neurophysiology [commentary on Osterhout et al.]. Language Learning, 56(suppl. 1), 231-234. doi:10.1111/j.1467-9922.2006.00362.x.
  • Dediu, D. (2016). A multi-layered problem. IEEE CDS Newsletter, 13, 14-15.

    Abstract

    A response to "Moving Beyond Nature-Nurture: A Problem of Science or Communication?" by John Spencer, Mark Blumberg, and David Shenk.
  • Dediu, D., & de Boer, B. (2016). Language evolution needs its own journal. Journal of Language Evolution, 1, 1-6. doi:10.1093/jole/lzv001.

    Abstract

    Interest in the origins and evolution of language has been around for as long as language has been around. However, only recently has the empirical study of language come of age. We argue that the field has sufficiently advanced that it now needs its own journal—the Journal of Language Evolution.
  • Dediu, D., & Christiansen, M. H. (2016). Language evolution: Constraints and opportunities from modern genetics. Topics in Cognitive Science, 8, 361-370. doi:10.1111/tops.12195.

    Abstract

    Our understanding of language, its origins and subsequent evolution (including language change) is shaped not only by data and theories from the language sciences, but also fundamentally by the biological sciences. Recent developments in genetics and evolutionary theory offer both very strong constraints on which scenarios of language evolution are possible and probable, and exciting opportunities for understanding otherwise puzzling phenomena. Due to the breathtaking rate of advancement in these fields, and to the complexity, subtlety and sometimes apparent non-intuitiveness of the phenomena discovered, some of these recent developments have either been completely missed by language scientists, or misperceived and misrepresented. In this short paper, we offer an update on some of these findings and theoretical developments through a selection of illustrative examples and discussions that cast new light on current debates in the language sciences. The main message of our paper is that life is much more complex and nuanced than anybody could have predicted even a few decades ago, and that we need to be flexible in our theorizing instead of embracing a priori dogmas and trying to patch paradigms that are no longer satisfactory.
  • Dediu, D. (2016). Typology for the masses. Linguistic Typology, 20(3), 579-581. doi:10.1515/lingty-2016-0029.
  • Defina, R. (2016). Do serial verb constructions describe single events? A study of co-speech gestures in Avatime. Language, 92(4), 890-910. doi:10.1353/lan.2016.0076.

    Abstract

    Serial verb constructions have often been said to refer to single conceptual events. However, evidence to support this claim has been elusive. This article introduces co-speech gestures as a new way of investigating the relationship. The alignment patterns of gestures with serial verb constructions and other complex clauses were compared in Avatime (Ka-Togo, Kwa, Niger-Congo). Serial verb constructions tended to occur with single gestures overlapping the entire construction. In contrast, other complex clauses were more likely to be accompanied by distinct gestures overlapping individual verbs. This pattern of alignment suggests that serial verb constructions are in fact used to describe single events.

    Additional information

    https://doi.org/10.1353/lan.2016.0069
  • Defina, R. (2016). Serial verb constructions and their subtypes in Avatime. Studies in Language, 40(3), 648-680. doi:10.1075/sl.40.3.07def.
  • Desmet, T., De Baecke, C., Drieghe, D., Brysbaert, M., & Vonk, W. (2006). Relative clause attachment in Dutch: On-line comprehension corresponds to corpus frequencies when lexical variables are taken into account. Language and Cognitive Processes, 21(4), 453-485. doi:10.1080/01690960400023485.

    Abstract

    Desmet, Brysbaert, and De Baecke (2002a) showed that the production of relative clauses following two potential attachment hosts (e.g., ‘Someone shot the servant of the actress who was on the balcony’) was influenced by the animacy of the first host. These results were important because they refuted evidence from Dutch against experience-based accounts of syntactic ambiguity resolution, such as the tuning hypothesis. However, Desmet et al. did not provide direct evidence in favour of tuning, because their study focused on production and did not include reading experiments. In the present paper this line of research was extended. A corpus analysis and an eye-tracking experiment revealed that when taking into account lexical properties of the NP host sites (i.e., animacy and concreteness) the frequency pattern and the on-line comprehension of the relative clause attachment ambiguity do correspond. The implications for exposure-based accounts of sentence processing are discussed.
  • Dias, C., Estruch, S. B., Graham, S. A., McRae, J., Sawiak, S. J., Hurst, J. A., Joss, S. K., Holder, S. E., Morton, J. E., Turner, C., Thevenon, J., Mellul, K., Sánchez-Andrade, G., Ibarra-Soria, X., Derizioti, P., Santos, R. F., Lee, S.-C., Faivre, L., Kleefstra, T., Liu, P., Hurles, M. E., DDD Study, Fisher, S. E., & Logan, D. W. (2016). BCL11A haploinsufficiency causes an intellectual disability syndrome and dysregulates transcription. The American Journal of Human Genetics, 99(2), 253-274. doi:10.1016/j.ajhg.2016.05.030.

    Abstract

    Intellectual disability (ID) is a common condition with considerable genetic heterogeneity. Next-generation sequencing of large cohorts has identified an increasing number of genes implicated in ID, but their roles in neurodevelopment remain largely unexplored. Here we report an ID syndrome caused by de novo heterozygous missense, nonsense, and frameshift mutations in BCL11A, encoding a transcription factor that is a putative member of the BAF swi/snf chromatin-remodeling complex. Using a comprehensive integrated approach to ID disease modeling, involving human cellular analyses coupled to mouse behavioral, neuroanatomical, and molecular phenotyping, we provide multiple lines of functional evidence for phenotypic effects. The etiological missense variants cluster in the amino-terminal region of human BCL11A, and we demonstrate that they all disrupt its localization, dimerization, and transcriptional regulatory activity, consistent with a loss of function. We show that Bcl11a haploinsufficiency in mice causes impaired cognition, abnormal social behavior, and microcephaly in accordance with the human phenotype. Furthermore, we identify shared aberrant transcriptional profiles in the cortex and hippocampus of these mouse models. Thus, our work implicates BCL11A haploinsufficiency in neurodevelopmental disorders and defines additional targets regulated by this gene, with broad relevance for our understanding of ID and related syndromes.
  • Diaz, B., Mitterer, H., Broersma, M., Escera, C., & Sebastián-Gallés, N. (2016). Variability in L2 phonemic learning originates from speech-specific capabilities: An MMN study on late bilinguals. Bilingualism: Language and Cognition, 19(5), 955-970. doi:10.1017/S1366728915000450.

    Abstract

    People differ in their ability to perceive second language (L2) sounds. In early bilinguals the variability in learning L2 phonemes stems from speech-specific capabilities (Díaz, Baus, Escera, Costa & Sebastián-Gallés, 2008). The present study addresses whether speech-specific capabilities similarly explain variability in late bilinguals. Event-related potentials were recorded (using a design similar to Díaz et al., 2008) in two groups of late Dutch–English bilinguals who were good or poor in overtly discriminating the L2 English vowels /ε-æ/. The mismatch negativity, an index of discrimination sensitivity, was similar between the groups in conditions involving pure tones (of different length, frequency, and presentation order) but was attenuated in poor L2 perceivers for native, unknown, and L2 phonemes. These results suggest that variability in L2 phonemic learning originates from speech-specific capabilities and imply a continuity of L2 phonemic learning mechanisms throughout the lifespan.
  • Dietrich, R., & Klein, W. (1986). Simple language. Interdisciplinary Science Reviews, 11(2), 110-117.
  • Dima, A. L., & Dediu, D. (2016). Computation of Adherence to Medications and Visualization of Medication Histories in R with AdhereR: Towards Transparent and Reproducible Use of Electronic Healthcare Data. PLoS One, 12(4): e0174426. doi:10.1371/journal.pone.0174426.

    Abstract

    Adherence to medications is an important indicator of the quality of medication management and impacts on health outcomes and cost-effectiveness of healthcare delivery. Electronic healthcare data (EHD) are increasingly used to estimate adherence in research and clinical practice, yet standardization and transparency of data processing are still a concern. Comprehensive and flexible open-source algorithms can facilitate the development of high-quality, consistent, and reproducible evidence in this field. Some EHD-based clinical decision support systems (CDSS) include visualization of medication histories, but this is rarely integrated in adherence analyses and not easily accessible for data exploration or implementation in new clinical settings. We introduce AdhereR, a package for the widely used open-source statistical environment R, designed to support researchers in computing EHD-based adherence estimates and in visualizing individual medication histories and adherence patterns. AdhereR implements a set of functions that are consistent with current adherence guidelines, definitions and operationalizations. We illustrate the use of AdhereR with an example dataset of 2-year records of 100 patients and describe the various analysis choices possible and how they can be adapted to different health conditions and types of medications. The package is freely available for use and its implementation facilitates the integration of medication history visualizations in open-source CDSS platforms.
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2016). Beat that word: How listeners integrate beat gesture and focus in multimodal speech discourse. Journal of Cognitive Neuroscience, 28(9), 1255-1269. doi:10.1162/jocn_a_00963.

    Abstract

    Communication is facilitated when listeners allocate their attention to important information (focus) in the message, a process called "information structure." Linguistic cues like the preceding context and pitch accent help listeners to identify focused information. In multimodal communication, relevant information can be emphasized by nonverbal cues like beat gestures, which represent rhythmic nonmeaningful hand movements. Recent studies have found that linguistic and nonverbal attention cues are integrated independently in single sentences. However, it is possible that these two cues interact when information is embedded in context, because context allows listeners to predict what information is important. In an ERP study, we tested this hypothesis and asked listeners to view videos capturing a dialogue. In the critical sentence, focused and nonfocused words were accompanied by beat gestures, grooming hand movements, or no gestures. ERP results showed that focused words are processed more attentively than nonfocused words as reflected in an N1 and P300 component. Hand movements also captured attention and elicited a P300 component. Importantly, beat gesture and focus interacted in a late time window of 600-900 msec relative to target word onset, giving rise to a late positivity when nonfocused words were accompanied by beat gestures. Our results show that listeners integrate beat gesture with the focus of the message and that integration costs arise when beat gesture falls on nonfocused information. This suggests that beat gestures fulfill a unique focusing function in multimodal discourse processing and that they have to be integrated with the information structure of the message.
  • Dimroth, C. (1998). Indiquer la portée en allemand L2: Une étude longitudinale de l'acquisition des particules de portée. AILE (Acquisition et Interaction en Langue étrangère), 11, 11-34.
  • Dingemanse, M., Kendrick, K. H., & Enfield, N. J. (2016). A Coding Scheme for Other-Initiated Repair across Languages. Open Linguistics, 2, 35-46. doi:10.1515/opli-2016-0002.

    Abstract

    We provide an annotated coding scheme for other-initiated repair, along with guidelines for building collections and aggregating cases based on interactionally relevant similarities and differences. The questions and categories of the scheme are grounded in inductive observations of conversational data and connected to a rich body of work on other-initiated repair in conversation analysis. The scheme is developed and tested in a 12-language comparative project and can serve as a stepping stone for future work on other-initiated repair and the systematic comparative study of conversational structures.
  • Dingemanse, M., Schuerman, W. L., Reinisch, E., Tufvesson, S., & Mitterer, H. (2016). What sound symbolism can and cannot do: Testing the iconicity of ideophones from five languages. Language, 92(2), e117-e133. doi:10.1353/lan.2016.0034.

    Abstract

    Sound symbolism is a phenomenon with broad relevance to the study of language and mind, but there has been a disconnect between its investigations in linguistics and psychology. This study tests the sound-symbolic potential of ideophones—words described as iconic—in an experimental task that improves over prior work in terms of ecological validity and experimental control. We presented 203 ideophones from five languages to eighty-two Dutch listeners in a binary-choice task, in four versions: original recording, full diphone resynthesis, segments-only resynthesis, and prosody-only resynthesis. Listeners guessed the meaning of all four versions above chance, confirming the iconicity of ideophones and showing the viability of speech synthesis as a way of controlling for segmental and suprasegmental properties in experimental studies of sound symbolism. The success rate was more modest than prior studies using pseudowords like bouba/kiki, implying that assumptions based on such words cannot simply be transferred to natural languages. Prosody and segments together drive the effect: neither alone is sufficient, showing that segments and prosody work together as cues supporting iconic interpretations. The findings cast doubt on attempts to ascribe iconic meanings to segments alone and support a view of ideophones as words that combine arbitrariness and iconicity. We discuss the implications for theory and methods in the empirical study of sound symbolism and iconicity.

    Additional information

    https://muse.jhu.edu/article/619540
  • Djemie, T., Weckhuysen, S., von Spiczak, S., Carvill, G. L., Jaehn, J., Anttonen, A. K., Brilstra, E., Caglayan, H. S., De Kovel, C. G. F., Depienne, C., Gaily, E., Gennaro, E., Giraldez, B. G., Gormley, P., Guerrero-Lopez, R., Guerrini, R., Hamalainen, E., Hartmann, `., Hernandez-Hernandez, L., Hjalgrim, H., Koeleman, B. P., Leguern, E., Lehesjoki, A. E., Lemke, J. R., Leu, C., Marini, C., McMahon, J. M., Mei, D., Moller, R. S., Muhle, H., Myers, C. T., Nava, C., Serratosa, J. M., Sisodiya, S. M., Stephani, U., Striano, P., van Kempen, M. J., Verbeek, N. E., Usluer, S., Zara, F., Palotie, A., Mefford, H. C., Scheffer, I. E., De Jonghe, P., Helbig, I., & Suls, A. (2016). Pitfalls in genetic testing: the story of missed SCN1A mutations. Molecular Genetics & Genomic Medicine, 4(4), 457-464. doi:10.1002/mgg3.217.

    Abstract

    Background: Sanger sequencing, still the standard technique for genetic testing in most diagnostic laboratories and until recently widely used in research, is gradually being complemented by next-generation sequencing (NGS). However, no single mutation detection technique is perfect in identifying all mutations. We therefore wondered to what extent inconsistencies between Sanger sequencing and NGS affect the molecular diagnosis of patients. Since mutations in SCN1A, the major gene implicated in epilepsy, are found in the majority of Dravet syndrome (DS) patients, we focused on missed SCN1A mutations. Methods: We sent out a survey to 16 genetic centers performing SCN1A testing. Results: We collected data on 28 mutations initially missed using Sanger sequencing. All patients were falsely reported as SCN1A mutation-negative, due both to technical limitations and to human error. Conclusion: We illustrate the pitfalls of Sanger sequencing and, most importantly, provide evidence that SCN1A mutations are an even more frequent cause of DS than already anticipated.
  • Doherty, M., & Klein, W. (Eds.). (1991). Übersetzung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (84).
  • Drijvers, L., Mulder, K., & Ernestus, M. (2016). Alpha and gamma band oscillations index differential processing of acoustically reduced and full forms. Brain and Language, 153-154, 27-37. doi:10.1016/j.bandl.2016.01.003.

    Abstract

    Reduced forms like yeshay for yesterday often occur in conversations. Previous behavioral research reported a processing advantage for full over reduced forms. The present study investigated whether this processing advantage is reflected in a modulation of alpha (8–12 Hz) and gamma (30+ Hz) band activity. In three electrophysiological experiments, participants listened to full and reduced forms in isolation (Experiment 1), sentence-final position (Experiment 2), or mid-sentence position (Experiment 3). Alpha power was larger in response to reduced forms than to full forms, but only in Experiments 1 and 2. We interpret these increases in alpha power as reflections of higher auditory cognitive load. In all experiments, gamma power only increased in response to full forms, which we interpret as showing that lexical activation spreads more quickly through the semantic network for full than for reduced forms. These results confirm a processing advantage for full forms, especially in non-medial sentence position.
  • Dronkers, N. F., Wilkins, D. P., Van Valin Jr., R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92, 145-177. doi:10.1016/j.cognition.2003.11.002.

    Abstract

    The cortical regions of the brain traditionally associated with the comprehension of language are Wernicke's area and Broca's area. However, recent evidence suggests that other brain regions might also be involved in this complex process. This paper describes the opportunity to evaluate a large number of brain-injured patients to determine which lesioned brain areas might affect language comprehension. Sixty-four chronic left hemisphere stroke patients were evaluated on 11 subtests of the Curtiss–Yamada Comprehensive Language Evaluation – Receptive (CYCLE-R; Curtiss, S., & Yamada, J. (1988). Curtiss–Yamada Comprehensive Language Evaluation. Unpublished test, UCLA). Eight right hemisphere stroke patients and 15 neurologically normal older controls also participated. Patients were required to select a single line drawing from an array of three or four choices that best depicted the content of an auditorily-presented sentence. Patients' lesions obtained from structural neuroimaging were reconstructed onto templates and entered into a voxel-based lesion-symptom mapping (VLSM; Bates, E., Wilson, S., Saygin, A. P., Dick, F., Sereno, M., Knight, R. T., & Dronkers, N. F. (2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5), 448–450.) analysis along with the behavioral data. VLSM is a brain–behavior mapping technique that evaluates the relationships between areas of injury and behavioral performance in all patients on a voxel-by-voxel basis, similar to the analysis of functional neuroimaging data. Results indicated that lesions to five left hemisphere brain regions affected performance on the CYCLE-R, including the posterior middle temporal gyrus and underlying white matter, the anterior superior temporal gyrus, the superior temporal sulcus and angular gyrus, mid-frontal cortex in Brodmann's area 46, and Brodmann's area 47 of the inferior frontal gyrus. Lesions to Broca's and Wernicke's areas were not found to significantly alter language comprehension on this particular measure. Further analysis suggested that the middle temporal gyrus may be more important for comprehension at the word level, while the other regions may play a greater role at the level of the sentence. These results are consistent with those seen in recent functional neuroimaging studies and offer complementary data in the effort to understand the brain areas underlying language comprehension.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2016). Lexically-guided perceptual learning in non-native listening. Bilingualism: Language and Cognition, 19(5), 914-920. doi:10.1017/S136672891600002X.

    Abstract

    There is ample evidence that native and non-native listeners use lexical knowledge to retune their native phonetic categories following ambiguous pronunciations. The present study investigates whether a non-native ambiguous sound can retune non-native phonetic categories. After a brief exposure to an ambiguous British English [l/ɹ] sound, Dutch listeners demonstrated retuning. This retuning was, however, asymmetrical: the non-native listeners seemed to show (more) retuning of the /ɹ/ category than of the /l/ category, suggesting that non-native listeners can retune non-native phonetic categories. This asymmetry is argued to be related to the large phonetic variability of /r/ in both Dutch and English.
  • Drude, S. (2006). Documentação lingüística: O formato de anotação de textos. Cadernos de Estudos Lingüísticos, 35, 27-51.

    Abstract

    This paper presents the methods of language documentation as applied in the Awetí Language Documentation Project, one of the projects in the Documentation of Endangered Languages Programme (DOBES). It describes the steps of how a large digital corpus of annotated multi-media data is built. Special attention is devoted to the format of annotation of linguistic data. The Advanced Glossing format is presented and justified.
  • Dunn, M. (2006). [Review of the book Comparative Chukotko-Kamchatkan dictionary by Michael Fortescue]. Anthropological Linguistics, 48(3), 296-298.
  • Edmunds, R., L'Hours, H., Rickards, L., Trilsbeek, P., Vardigan, M., & Mokrane, M. (2016). Core trustworthy data repositories requirements. Zenodo, 168411. doi:10.5281/zenodo.168411.

    Abstract

    The Core Trustworthy Data Repository Requirements were developed by the DSA–WDS Partnership Working Group on Repository Audit and Certification, a Working Group (WG) of the Research Data Alliance. The goal of the effort was to create a set of harmonized common requirements for certification of repositories at the core level, drawing from criteria already put in place by the Data Seal of Approval (DSA: www.datasealofapproval.org) and the ICSU World Data System (ICSU-WDS: https://www.icsu-wds.org/services/certification). An additional goal of the project was to develop common procedures to be implemented by both DSA and ICSU-WDS. Ultimately, the DSA and ICSU-WDS plan to collaborate on a global framework for repository certification that moves from the core to the extended (nestor-Seal DIN 31644), to the formal (ISO 16363) level.
  • Eibl-Eibesfeldt, I., Senft, B., & Senft, G. (1987). Trobriander (Ost-Neuguinea, Trobriand Inseln, Kaile'una) Fadenspiele 'ninikula'. Publikation zu Wissenschaftlichen Filmen, Sektion Ethnologie, 25, 1-15.
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kaile'una) Tänze zur Einleitung des Erntefeier-Rituals. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina); Ausschnitte aus einem Erntefesttanz. Film E 3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eising, E., Huisman, S. M., Mahfouz, A., Vijfhuizen, L. S., Anttila, V., Winsvold, B. S., Kurth, T., Ikram, M. A., Freilinger, T., Kaprio, J., Boomsma, D. I., van Duijn, C. M., Järvelin, M.-R.-R., Zwart, J.-A., Quaye, L., Strachan, D. P., Kubisch, C., Dichgans, M., Davey Smith, G., Stefansson, K., Palotie, A., Chasman, D. I., Ferrari, M. D., Terwindt, G. M., de Vries, B., Nyholt, D. R., Lelieveldt, B. P., van den Maagdenberg, A. M., & Reinders, M. J. (2016). Gene co-expression analysis identifies brain regions and cell types involved in migraine pathophysiology: a GWAS-based study using the Allen Human Brain Atlas. Human Genetics, 135(4), 425-439. doi:10.1007/s00439-016-1638-x.

    Abstract

    Migraine is a common disabling neurovascular brain disorder typically characterised by attacks of severe headache and associated with autonomic and neurological symptoms. Migraine is caused by an interplay of genetic and environmental factors. Genome-wide association studies (GWAS) have identified over a dozen genetic loci associated with migraine. Here, we integrated migraine GWAS data with high-resolution spatial gene expression data of normal adult brains from the Allen Human Brain Atlas to identify specific brain regions and molecular pathways that are possibly involved in migraine pathophysiology. To this end, we used two complementary methods. In GWAS data from 23,285 migraine cases and 95,425 controls, we first studied modules of co-expressed genes that were calculated based on human brain expression data for enrichment of genes that showed association with migraine. Enrichment of a migraine GWAS signal was found for five modules that suggest involvement in migraine pathophysiology of: (i) neurotransmission, protein catabolism and mitochondria in the cortex; (ii) transcription regulation in the cortex and cerebellum; and (iii) oligodendrocytes and mitochondria in subcortical areas. Second, we used the high-confidence genes from the migraine GWAS as a basis to construct local migraine-related co-expression gene networks. Signatures of all brain regions and pathways that were prominent in the first method also surfaced in the second method, thus providing support that these brain regions and pathways are indeed involved in migraine pathophysiology.
  • Eising, E., De Leeuw, C., Min, J. L., Anttila, V., Verheijen, M. H. G., Terwindt, G. M., Dichgans, M., Freilinger, T., Kubisch, C., Ferrari, M. D., Smit, A. B., De Vries, B., Palotie, A., Van Den Maagdenberg, A. M. J. M., & Posthuma, D. (2016). Involvement of astrocyte and oligodendrocyte gene sets in migraine. Cephalalgia, 36(7), 640-647. doi:10.1177/0333102415618614.

    Abstract

    Migraine is a common episodic brain disorder characterized by recurrent attacks of severe unilateral headache and additional neurological symptoms. Two main migraine types can be distinguished based on the presence of aura symptoms that can accompany the headache: migraine with aura and migraine without aura. Multiple genetic and environmental factors confer disease susceptibility. Recent genome-wide association studies (GWAS) indicate that migraine susceptibility genes are involved in various pathways, including neurotransmission, which have already been implicated in genetic studies of monogenic familial hemiplegic migraine, a subtype of migraine with aura. Methods: To further explore the genetic background of migraine, we performed a gene set analysis of migraine GWAS data of 4954 clinic-based patients with migraine, as well as 13,390 controls. Curated sets of synaptic genes and sets of genes predominantly expressed in three glial cell types (astrocytes, microglia and oligodendrocytes) were investigated. Discussion: Our results show that gene sets containing astrocyte- and oligodendrocyte-related genes are associated with migraine, which is especially true for gene sets involved in protein modification and signal transduction. Observed differences between migraine with aura and migraine without aura indicate that both migraine types, at least in part, seem to have a different genetic background.
  • Eisner, F., & McQueen, J. M. (2006). Perceptual learning in speech: Stability over time (L). Journal of the Acoustical Society of America, 119(4), 1950-1953. doi:10.1121/1.2178721.

    Abstract

    Perceptual representations of phonemes are flexible and adapt rapidly to accommodate idiosyncratic articulation in the speech of a particular talker. This letter addresses whether such adjustments remain stable over time and under exposure to other talkers. During exposure to a story, listeners learned to interpret an ambiguous sound as [f] or [s]. Perceptual adjustments measured after 12 h were as robust as those measured immediately after learning. Equivalent effects were found when listeners heard speech from other talkers in the 12 h interval, and when they had the opportunity to consolidate learning during sleep.
  • Enfield, N. J. (2004). On linear segmentation and combinatorics in co-speech gesture: A symmetry-dominance construction in Lao fish trap descriptions. Semiotica, 149(1/4), 57-123. doi:10.1515/semi.2004.038.
  • Enfield, N. J., Majid, A., & Van Staden, M. (2006). Cross-linguistic categorisation of the body: Introduction. Language Sciences, 28(2-3), 137-147. doi:10.1016/j.langsci.2005.11.001.

    Abstract

    The domain of the human body is an ideal focus for semantic typology, since the body is a physical universal and all languages have terms referring to its parts. Previous research on body part terms has depended on secondary sources (e.g. dictionaries), and has lacked sufficient detail or clarity for a thorough understanding of these terms’ semantics. The present special issue is the outcome of a collaborative project aimed at improving approaches to investigating the semantics of body part terms, by developing materials to elicit information that provides for cross-linguistic comparison. The articles in this volume are original fieldwork-based descriptions of terminology for parts of the body in ten languages. Also included are an elicitation guide and experimental protocol used in gathering data. The contributions provide inventories of body part terms in each language, with analysis of both intensional and extensional aspects of meaning, differences in morphological complexity, semantic relations among terms, and discussion of partonomic structure within the domain.
  • Enfield, N. J. (2006). Elicitation guide on parts of the body. Language Sciences, 28(2-3), 148-157. doi:10.1016/j.langsci.2005.11.003.

    Abstract

    This document is intended for use as an elicitation guide for the field linguist consulting with native speakers in collecting terms for parts of the body, and in the exploration of their semantics.
  • Enfield, N. J. (2006). [Review of the book A grammar of Semelai by Nicole Kruspe]. Linguistic Typology, 10(3), 452-455. doi:10.1515/LINGTY.2006.014.
  • Enfield, N. J. (2006). Languages as historical documents: The endangered archive in Laos. South East Asia Research, 14(3), 471-488.

    Abstract

    This paper reviews current discussion of the issue of just what is lost when a language dies. Special reference is made to the current situation in Laos, a country renowned for its considerable cultural and linguistic diversity. It focuses on the historical, anthropological and ecological knowledge that a language can encode, and the social and cultural consequences of the loss of such traditional knowledge when a language is no longer passed on. Finally, the article points out the paucity of studies and obstacles to field research on minority languages in Laos, which seriously hamper their documentation.
  • Enfield, N. J. (2006). Lao body part terms. Language Sciences, 28(2-3), 181-200. doi:10.1016/j.langsci.2005.11.011.

    Abstract

    This article presents a description of nominal expressions for parts of the human body conventionalised in Lao, a Southwestern Tai language spoken in Laos, Northeast Thailand, and Northeast Cambodia. An inventory of around 170 Lao expressions is listed, with commentary where some notability is determined, usually based on explicit comparison to the metalanguage, English. Notes on aspects of the grammatical and semantic structure of the set of body part terms are provided, including a discussion of semantic relations pertaining among members of the set of body part terms. I conclude that the semantic relations which pertain between terms for different parts of the body not only include part/whole relations, but also relations of location, connectedness, and general association. Calling the whole system a ‘partonomy’ attributes greater centrality to the part/whole relation than is warranted.
  • Enfield, N. J. (2004). Nominal classification in Lao: A sketch. Sprachtypologie und Universalienforschung, 57(2/3), 117-143.
  • Enfield, N. J. (1997). Review of 'Give: a cognitive linguistic study', by John Newman. Australian Journal of Linguistics, 17(1), 89-92. doi:10.1080/07268609708599546.
  • Enfield, N. J. (1997). Review of 'Plastic glasses and church fathers: semantic extension from the ethnoscience tradition', by David Kronenfeld. Anthropological Linguistics, 39(3), 459-464. Retrieved from http://www.jstor.org/stable/30028999.
  • Erard, M. (2016). Solving Australia's language puzzle. Science, 353(6306), 1357-1359. doi:10.1126/science.353.6306.1357.
  • Ernestus, M. (2006). Statistically gradient generalizations for contrastive phonological features. The Linguistic Review, 23(3), 217-233. doi:10.1515/TLR.2006.008.

    Abstract

    In mainstream phonology, contrastive properties, like stem-final voicing, are simply listed in the lexicon. This article reviews experimental evidence that such contrastive properties may be predictable to some degree and that the relevant statistically gradient generalizations form an inherent part of the grammar. The evidence comes from the underlying voice specification of stem-final obstruents in Dutch. Contrary to received wisdom, this voice specification is partly predictable from the obstruent’s manner and place of articulation and from the phonological properties of the preceding segments. The degree of predictability, which depends on the exact contents of the lexicon, directs speakers’ guesses of underlying voice specifications. Moreover, existing words that disobey the generalizations are disadvantaged by being recognized and produced more slowly and less accurately, also under natural conditions. We discuss how these observations can be accounted for in two different types of approaches to grammar, Stochastic Optimality Theory and exemplar-based modeling.
  • Ernestus, M., & Mak, W. M. (2004). Distinctive phonological features differ in relevance for both spoken and written word recognition. Brain and Language, 90(1-3), 378-392. doi:10.1016/S0093-934X(03)00449-8.

    Abstract

    This paper discusses four experiments on Dutch which show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the relevant word position. If one feature value is uninformative, all values of that feature are less relevant for word recognition, with the least informative feature being the least relevant. Features differ in their relevance both in spoken and written word recognition, though the differences are more pronounced in auditory lexical decision than in self-paced reading.
  • Ernestus, M., & Baayen, R. H. (2004). Analogical effects in regular past tense production in Dutch. Linguistics, 42(5), 873-903. doi:10.1515/ling.2004.031.

    Abstract

    This study addresses the question to what extent the production of regular past tense forms in Dutch is affected by analogical processes. We report an experiment in which native speakers of Dutch listened to existing regular verbs over headphones, and had to indicate which of the past tense allomorphs, te or de, was appropriate for these verbs. According to generative analyses, the choice between the two suffixes is completely regular and governed by the underlying [voice]-specification of the stem-final segment. In this approach, no analogical effects are expected. In connectionist and analogical approaches, by contrast, the phonological similarity structure in the lexicon is expected to affect lexical processing. Our experimental results support the latter approach: all participants created more nonstandard past tense forms, produced more inconsistency errors, and responded more slowly for verbs with stronger analogical support for the nonstandard form.
  • Ernestus, M., Lahey, M., Verhees, F., & Baayen, R. H. (2006). Lexical frequency and voice assimilation. Journal of the Acoustical Society of America, 120(2), 1040-1051. doi:10.1121/1.2211548.

    Abstract

    Acoustic duration and degree of vowel reduction are known to correlate with a word’s frequency of occurrence. The present study broadens the research on the role of frequency in speech production to voice assimilation. The test case was regressive voice assimilation in Dutch. Clusters from a corpus of read speech were more often perceived as unassimilated in lower-frequency words and as either completely voiced regressive assimilation or, unexpectedly, as completely voiceless progressive assimilation in higher-frequency words. Frequency did not predict the voice classifications over and above important acoustic cues to voicing, suggesting that the frequency effects on the classifications were carried exclusively by the acoustic signal. The duration of the cluster and the period of glottal vibration during the cluster decreased while the duration of the release noises increased with frequency. This indicates that speakers reduce articulatory effort for higher-frequency words, with some acoustic cues signaling more voicing and others less voicing. A higher frequency leads not only to acoustic reduction but also to more assimilation.
  • Ernestus, M., & Baayen, R. H. (2004). Kuchde, tobte, en turfte: Lekkage in 't kofschip. Onze Taal, 73(12), 360-361.
  • Ernestus, M., Giezenaar, G., & Dikmans, M. (2016). Ikfstajezotuuknie: Half uitgesproken woorden in alledaagse gesprekken. Les, 199, 7-9.

    Abstract

    In informal conversations, Amsterdam often sounds like Amsdam and Rotterdam like Rodam, without most native speakers being aware of it. In everyday situations, a considerable proportion of the sounds is dropped. In addition, many sounds are articulated more weakly (for example, a d as a j, when the mouth is not fully closed). It seems likely that these half-pronounced words pose a problem for second language learners, since reduced forms can deviate strongly from the forms these learners have been taught. Whether this is really the case is what the authors investigated in two studies. Before discussing these two studies, they first briefly describe the different types of reduction that occur. [Abstract translated from Dutch.]
  • Estruch, S. B., Graham, S. A., Chinnappa, S. M., Deriziotis, P., & Fisher, S. E. (2016). Functional characterization of rare FOXP2 variants in neurodevelopmental disorder. Journal of Neurodevelopmental Disorders, 8: 44. doi:10.1186/s11689-016-9177-2.
  • Estruch, S. B., Graham, S. A., Deriziotis, P., & Fisher, S. E. (2016). The language-related transcription factor FOXP2 is post-translationally modified with small ubiquitin-like modifiers. Scientific Reports, 6: 20911. doi:10.1038/srep20911.

    Abstract

    Mutations affecting the transcription factor FOXP2 cause a rare form of severe speech and language disorder. Although it is clear that sufficient FOXP2 expression is crucial for normal brain development, little is known about how this transcription factor is regulated. To investigate post-translational mechanisms for FOXP2 regulation, we searched for protein interaction partners of FOXP2, and identified members of the PIAS family as novel FOXP2 interactors. PIAS proteins mediate post-translational modification of a range of target proteins with small ubiquitin-like modifiers (SUMOs). We found that FOXP2 can be modified with all three human SUMO proteins and that PIAS1 promotes this process. An aetiological FOXP2 mutation found in a family with speech and language disorder markedly reduced FOXP2 SUMOylation. We demonstrate that FOXP2 is SUMOylated at a single major site, which is conserved in all FOXP2 vertebrate orthologues and in the paralogues FOXP1 and FOXP4. Abolishing this site did not lead to detectable changes in FOXP2 subcellular localization, stability, dimerization or transcriptional repression in cellular assays, but the conservation of this site suggests a potential role for SUMOylation in regulating FOXP2 activity in vivo.

    Additional information

    srep20911-s1.pdf
  • Ho, Y. Y. W., Evans, D. M., Montgomery, G. W., Henders, A. K., Kemp, J. P., Timpson, N. J., St Pourcain, B., Heath, A. C., Madden, P. A. F., Loesch, D. Z., McNevin, D., Daniel, R., Davey-Smith, G., Martin, N. G., & Medland, S. E. (2016). Common genetic variants influence whorls in fingerprint patterns. Journal of Investigative Dermatology, 136(4), 859-862. doi:10.1016/j.jid.2015.10.062.
  • Everaerd, D., Klumpers, F., Zwiers, M., Guadalupe, T., Franke, B., Van Oostrum, I., Schene, A., Fernandez, G., & Tendolkar, I. (2016). Childhood abuse and deprivation are associated with distinct sex-dependent differences in brain morphology. Neuropsychopharmacology, 41, 1716-1723. doi:10.1038/npp.2015.344.

    Abstract

    Childhood adversity (CA) has been associated with long-term structural brain alterations and an increased risk for psychiatric disorders. Evidence is emerging that subtypes of CA, varying in the dimensions of threat and deprivation, lead to distinct neural and behavioral outcomes. However, these specific associations have yet to be established without potential confounders such as psychopathology. Moreover, differences in neural development and psychopathology necessitate the exploration of sexual dimorphism. Young healthy adult subjects were selected based on history of CA from a large database to assess gray matter (GM) differences associated with specific subtypes of adversity. We compared voxel-based morphometry data of subjects reporting specific childhood exposure to abuse (n = 127) or deprivation (n = 126) and a similar sized group of controls (n = 129) without reported CA. Subjects were matched on age, gender, and educational level. Differences between CA subtypes were found in the fusiform gyrus and middle occipital gyrus, where subjects with a history of deprivation showed reduced GM compared with subjects with a history of abuse. An interaction between sex and CA subtype was found. Women showed less GM in the visual posterior precuneal region after both subtypes of CA than controls. Men had less GM in the postcentral gyrus after childhood deprivation compared with abuse. Our results suggest that even in a healthy population, CA subtypes are related to specific alterations in brain structure, which are modulated by sex. These findings may help understand neurodevelopmental consequences related to CA.
  • Everett, C., Blasi, D. E., & Roberts, S. G. (2016). Language evolution and climate: The case of desiccation and tone. Journal of Language Evolution, 1, 33-46. doi:10.1093/jole/lzv004.

    Abstract

    We make the case that, contra standard assumption in linguistic theory, the sound systems of human languages are adapted to their environment. While not conclusive, this plausible case rests on several points discussed in this work: First, human behavior is generally adaptive and the assumption that this characteristic does not extend to linguistic structure is empirically unsubstantiated. Second, animal communication systems are well known to be adaptive within species across a variety of phyla and taxa. Third, research in laryngology demonstrates clearly that ambient desiccation impacts the performance of the human vocal cords. The latter point motivates a clear, testable hypothesis with respect to the synchronic global distribution of language types. Fourth, this hypothesis is supported in our own previous work, and here we discuss new approaches being developed to further explore the hypothesis. We conclude by suggesting that the time has come to more substantively examine the possibility that linguistic sound systems are adapted to their physical ecology.
  • Everett, C., Blasi, D., & Roberts, S. G. (2016). Response: Climate and language: has the discourse shifted? Journal of Language Evolution, 1(1), 83-87. doi:10.1093/jole/lzv013.

    Abstract

    We begin by thanking the respondents for their thoughtful comments and insightful leads. The overall impression we are left with by this exchange is one of progress, even if no consensus remains about the particular hypothesis we raise. To date, there has been a failure to seriously engage with the possibility that humans might adapt their communication to ecological factors. In these exchanges, we see signs of serious engagement with that possibility. Most respondents expressed agreement with the notion that our central premise—that language is ecologically adaptive—requires further exploration and may in fact be operative. We are pleased to see this shift in discourse, and to witness a heightening appreciation of possible ecological constraints on language evolution. It is that shift in discourse that represents progress in our view. Our hope is that future work will continue to explore these issues, paying careful attention to the fact that the human larynx is clearly sensitive to characteristics of ambient air. More generally, we think this exchange is indicative of the growing realization that inquiries into language development must consider potential external factors (see Dediu 2015)...

    Additional information

    AppendixResponseToHammarstrom.pdf
  • Fan, Q., Guo, X., Tideman, J. W. L., Williams, K. M., Yazar, S., Hosseini, S. M., Howe, L. D., St Pourcain, B., Evans, D. M., Timpson, N. J., McMahon, G., Hysi, P. G., Krapohl, E., Wang, Y. X., Jonas, J. B., Baird, P. N., Wang, J. J., Cheng, C. Y., Teo, Y. Y., Wong, T. Y., Ding, X., Wojciechowski, R., Young, T. L., Parssinen, O., Oexle, K., Pfeiffer, N., Bailey-Wilson, J. E., Paterson, A. D., Klaver, C. C. W., Plomin, R., Hammond, C. J., Mackey, D. A., He, M. G., Saw, S. M., Williams, C., Guggenheim, J. A., & CREAM Consortium (2016). Childhood gene-environment interactions and age-dependent effects of genetic variants associated with refractive error and myopia: The CREAM Consortium. Scientific Reports, 6: 25853. doi:10.1038/srep25853.

    Abstract

    Myopia, currently at epidemic levels in East Asia, is a leading cause of untreatable visual impairment. Genome-wide association studies (GWAS) in adults have identified 39 loci associated with refractive error and myopia. Here, the age-of-onset of association between genetic variants at these 39 loci and refractive error was investigated in 5200 children assessed longitudinally across ages 7-15 years, along with gene-environment interactions involving the major environmental risk-factors, nearwork and time outdoors. Specific variants could be categorized as showing evidence of: (a) early-onset effects remaining stable through childhood, (b) early-onset effects that progressed further with increasing age, or (c) onset later in childhood (N = 10, 5 and 11 variants, respectively). A genetic risk score (GRS) for all 39 variants explained 0.6% (P = 6.6E-08) and 2.3% (P = 6.9E-21) of the variance in refractive error at ages 7 and 15, respectively, supporting increased effects from these genetic variants at older ages. Replication in multi-ancestry samples (combined N = 5599) yielded evidence of childhood onset for 6 of 12 variants present in both Asians and Europeans. There was no indication that variant or GRS effects altered depending on time outdoors, however 5 variants showed nominal evidence of interactions with nearwork (top variant, rs7829127 in ZMAT4; P = 6.3E-04).

    Additional information

    srep25853-s1.pdf
  • Fan, Q., Verhoeven, V. J., Wojciechowski, R., Barathi, V. A., Hysi, P. G., Guggenheim, J. A., Höhn, R., Vitart, V., Khawaja, A. P., Yamashiro, K., Hosseini, S. M., Lehtimäki, T., Lu, Y., Haller, T., Xie, J., Delcourt, C., Pirastu, M., Wedenoja, J., Gharahkhani, P., Venturini, C., Miyake, M., Hewitt, A. W., Guo, X., Mazur, J., Huffman, J. E., Williams, K. M., Polasek, O., Campbell, H., Rudan, I., Vatavuk, Z., Wilson, J. F., Joshi, P. K., McMahon, G., St Pourcain, B., Evans, D. M., Simpson, C. L., Schwantes-An, T.-H., Igo, R. P., Mirshahi, A., Cougnard-Gregoire, A., Bellenguez, C., Blettner, M., Raitakari, O., Kähönen, M., Seppälä, I., Zeller, T., Meitinger, T., Ried, J. S., Gieger, C., Portas, L., Van Leeuwen, E. M., Amin, N., Uitterlinden, A. G., Rivadeneira, F., Hofman, A., Vingerling, J. R., Wang, Y. X., Wang, X., Boh, E.-T.-H., Ikram, M. K., Sabanayagam, C., Gupta, P., Tan, V., Zhou, L., Ho, C. E., Lim, W., Beuerman, R. W., Siantar, R., Tai, E.-S., Vithana, E., Mihailov, E., Khor, C.-C., Hayward, C., Luben, R. N., Foster, P. J., Klein, B. E., Klein, R., Wong, H.-S., Mitchell, P., Metspalu, A., Aung, T., Young, T. L., He, M., Pärssinen, O., Van Duijn, C. M., Wang, J. J., Williams, C., Jonas, J. B., Teo, Y.-Y., Mackey, D. A., Oexle, K., Yoshimura, N., Paterson, A. D., Pfeiffer, N., Wong, T.-Y., Baird, P. N., Stambolian, D., Bailey-Wilson, J. E., Cheng, C.-Y., Hammond, C. J., Klaver, C. C., Saw, S.-M., & Consortium for Refractive Error and Myopia (CREAM) (2016). Meta-analysis of gene–environment-wide association scans accounting for education level identifies additional loci for refractive error. Nature Communications, 7: 11008. doi:10.1038/ncomms11008.

    Abstract

    Myopia is the most common human eye disorder and it results from complex genetic and environmental causes. The rapidly increasing prevalence of myopia poses a major public health challenge. Here, the CREAM consortium performs a joint meta-analysis to test single-nucleotide polymorphism (SNP) main effects and SNP × education interaction effects on refractive error in 40,036 adults from 25 studies of European ancestry and 10,315 adults from 9 studies of Asian ancestry. In European ancestry individuals, we identify six novel loci (FAM150B-ACP1, LINC00340, FBN1, DIS3L-MAP2K1, ARID2-SNAT1 and SLC14A2) associated with refractive error. In Asian populations, three genome-wide significant loci AREG, GABRR1 and PDE10A also exhibit strong interactions with education (P<8.5 × 10−5), whereas the interactions are less evident in Europeans. The discovery of these loci represents an important advance in understanding how gene and environment interactions contribute to the heterogeneity of myopia.

    Additional information

    Fan_etal_2016sup.pdf
  • Fedorenko, E., Morgan, A., Murray, E., Cardinaux, A., Mei, C., Tager-Flusberg, H., Fisher, S. E., & Kanwisher, N. (2016). A highly penetrant form of childhood apraxia of speech due to deletion of 16p11.2. European Journal of Human Genetics, 24(2), 302-306. doi:10.1038/ejhg.2015.149.

    Abstract

    Individuals with heterozygous 16p11.2 deletions reportedly suffer from a variety of difficulties with speech and language. Indeed, recent copy-number variant screens of children with childhood apraxia of speech (CAS), a specific and rare motor speech disorder, have identified three unrelated individuals with 16p11.2 deletions. However, the nature and prevalence of speech and language disorders in general, and CAS in particular, is unknown for individuals with 16p11.2 deletions. Here we took a genotype-first approach, conducting detailed and systematic characterization of speech abilities in a group of 11 unrelated children ascertained on the basis of 16p11.2 deletions. To obtain the most precise and replicable phenotyping, we included tasks that are highly diagnostic for CAS, and we tested children under the age of 18 years, an age group where CAS has been best characterized. Two individuals were largely nonverbal, preventing detailed speech analysis, whereas the remaining nine met the standard accepted diagnostic criteria for CAS. These results link 16p11.2 deletions to a highly penetrant form of CAS. Our findings underline the need for further precise characterization of speech and language profiles in larger groups of affected individuals, which will also enhance our understanding of how genetic pathways contribute to human communication disorders.
  • Ferreri, L., & Verga, L. (2016). Benefits of music on verbal learning and memory: How and when does it work? Music Perception, 34(2), 167-182. doi:10.1525/mp.2016.34.2.167.

    Abstract

    A long-standing debate in cognitive neurosciences concerns the effect of music on verbal learning and memory. Research in this field has largely provided conflicting results in both clinical as well as non-clinical populations. Although several studies have shown a positive effect of music on the encoding and retrieval of verbal stimuli, music has also been suggested to hinder mnemonic performance by dividing attention. In an attempt to explain this conflict, we review the most relevant literature on the effects of music on verbal learning and memory. Furthermore, we specify several mechanisms through which music may modulate these cognitive functions. We suggest that the extent to which music boosts these cognitive functions relies on experimental factors, such as the relative complexity of musical and verbal stimuli employed. These factors should be carefully considered in further studies, in order to reliably establish how and when music boosts verbal memory and learning. The answers to these questions are not only crucial for our knowledge of how music influences cognitive and brain functions, but may have important clinical implications. Considering the increasing number of approaches using music as a therapeutic tool, the importance of understanding exactly how music works can no longer be underestimated.
  • Filippi, P. (2016). Emotional and Interactional Prosody across Animal Communication Systems: A Comparative Approach to the Emergence of Language. Frontiers in Psychology, 7: 1393. doi:10.3389/fpsyg.2016.01393.

    Abstract

    Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody – and perhaps also of music, continuing to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in non-human primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.
  • Filippi, P., Jadoul, Y., Ravignani, A., Thompson, B., & de Boer, B. (2016). Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages. Frontiers in Human Neuroscience, 10: 586. doi:10.3389/fnhum.2016.00586.

    Abstract

    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units, and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find across languages (using clearly defined acoustic, rather than orthographic, measures), temporal predictability in the speech signal which could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis, and a simple distributional measure. Second, we model higher-order temporal structure—regularities arising in an ordered series of syllable timings—testing the hypothesis that non-adjacent temporal structures may explain the gap between subjectively-perceived temporal regularities, and the absence of universally-accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to reliably infer. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
  • Fisher, S. E., & Francks, C. (2006). Genes, cognition and dyslexia: Learning to read the genome. Trends in Cognitive Sciences, 10, 250-257. doi:10.1016/j.tics.2006.04.003.

    Abstract

    Studies of dyslexia provide vital insights into the cognitive architecture underpinning both disordered and normal reading. It is well established that inherited factors contribute to dyslexia susceptibility, but only very recently has evidence emerged to implicate specific candidate genes. In this article, we provide an accessible overview of four prominent examples--DYX1C1, KIAA0319, DCDC2 and ROBO1--and discuss their relevance for cognition. In each case correlations have been found between genetic variation and reading impairments, but precise risk variants remain elusive. Although none of these genes is specific to reading-related neuronal circuits, or even to the human brain, they have intriguing roles in neuronal migration or connectivity. Dissection of cognitive mechanisms that subserve reading will ultimately depend on an integrated approach, uniting data from genetic investigations, behavioural studies and neuroimaging.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • Fisher, S. E. (2006). Tangled webs: Tracing the connections between genes and cognition. Cognition, 101, 270-297. doi:10.1016/j.cognition.2006.04.004.

    Abstract

    The rise of molecular genetics is having a pervasive influence in a wide variety of fields, including research into neurodevelopmental disorders like dyslexia, speech and language impairments, and autism. There are many studies underway which are attempting to determine the roles of genetic factors in the aetiology of these disorders. Beyond the obvious implications for diagnosis, treatment and understanding, success in these efforts promises to shed light on the links between genes and aspects of cognition and behaviour. However, the deceptive simplicity of finding correlations between genetic and phenotypic variation has led to a common misconception that there exist straightforward linear relationships between specific genes and particular behavioural and/or cognitive outputs. The problem is exacerbated by the adoption of an abstract view of the nature of the gene, without consideration of molecular, developmental or ontogenetic frameworks. To illustrate the limitations of this perspective, I select two cases from recent research into the genetic underpinnings of neurodevelopmental disorders. First, I discuss the proposal that dyslexia can be dissected into distinct components specified by different genes. Second, I review the story of the FOXP2 gene and its role in human speech and language. In both cases, adoption of an abstract concept of the gene can lead to erroneous conclusions, which are incompatible with current knowledge of molecular and developmental systems. Genes do not specify behaviours or cognitive processes; they make regulatory factors, signalling molecules, receptors, enzymes, and so on, that interact in highly complex networks, modulated by environmental influences, in order to build and maintain the brain. I propose that it is necessary for us to fully embrace the complexity of biological systems, if we are ever to untangle the webs that link genes to cognition.
  • Fisher, S. E., & Marcus, G. (2006). The eloquent ape: Genes, brains and the evolution of language. Nature Reviews Genetics, 7, 9-20. doi:10.1038/nrg1747.

    Abstract

    The human capacity to acquire complex language seems to be without parallel in the natural world. The origins of this remarkable trait have long resisted adequate explanation, but advances in fields that range from molecular genetics to cognitive neuroscience offer new promise. Here we synthesize recent developments in linguistics, psychology and neuroimaging with progress in comparative genomics, gene-expression profiling and studies of developmental disorders. We argue that language should be viewed not as a wholesale innovation, but as a complex reconfiguration of ancestral systems that have been adapted in evolutionarily novel ways.
  • Fisher, S. E., Ciccodicola, A., Tanaka, K., Curci, A., Desicato, S., D'urso, M., & Craig, I. W. (1997). Sequence-based exon prediction around the synaptophysin locus reveals a gene-rich area containing novel genes in human proximal Xp. Genomics, 45, 340-347. doi:10.1006/geno.1997.4941.

    Abstract

    The human Xp11.23-p11.22 interval has been implicated in several inherited diseases including Wiskott-Aldrich syndrome; three forms of X-linked hypercalciuric nephrolithiaisis; and the eye disorders retinitis pigmentosa 2, congenital stationary night blindness, and Aland Island eye disease. In constructing YAC contigs spanning Xp11. 23-p11.22, we have previously shown that the region around the synaptophysin (SYP) gene is refractory to cloning in YACs, but highly stable in cosmids. Preliminary analysis of the latter suggested that this might reflect a high density of coding sequences and we therefore undertook the complete sequencing of a SYP-containing cosmid. Sequence data were extensively analyzed using computer programs such as CENSOR (to mask repeats), BLAST (for homology searches), and GRAIL and GENE-ID (to predict exons). This revealed the presence of 29 putative exons, organized into three genes, in addition to the 7 exons of the complete SYP coding region, all mapping within a 44-kb interval. Two genes are novel, one (CACNA1F) showing high homology to alpha1 subunits of calcium channels, the other (LMO6) encoding a product with significant similarity to LIM-domain proteins. RT-PCR and Northern blot studies confirmed that these loci are indeed transcribed. The third locus is the previously described, but not previously localized, A4 differentiation-dependent gene. Given that the intron-exon boundaries predicted by the analysis are consistent with previous information where available, we have been able to suggest the genomic organization of the novel genes with some confidence. The region has an elevated GC content (>53%), and we identified CpG islands associated with the 5' ends of SYP, A4, and LMO6. The order of loci was Xpter-A4-LMO6-SYP-CACNA1F-Xcen, with intergenic distances ranging from approximately 300 bp to approximately 5 kb. The density of transcribed sequences in this area (>80%) is comparable to that found in the highly gene-rich chromosomal band Xq28. Further studies may aid our understanding of the long-range organization surrounding such gene-enriched regions.
  • FitzPatrick, I., & Indefrey, P. (2016). Accessing Conceptual Representations for Speaking [Editorial]. Frontiers in Psychology, 7: 1216. doi:10.3389/fpsyg.2016.01216.

    Abstract

    Systematic investigations into the role of semantics in the speech production process have remained elusive. This special issue aims at moving forward toward a more detailed account of how precisely conceptual information is used to access the lexicon in speaking and what corresponding format of conceptual representations needs to be assumed. The studies presented in this volume investigated effects of conceptual processing on different processing stages of language production, including sentence formulation, lemma selection, and word form access.
  • Floyd, S. (2016). [Review of the book Fluent Selves: Autobiography, Person, and History in Lowland South America ed. by Suzanne Oakdale and Magnus Course]. Journal of Linguistic Anthropology, 26(1), 110-111. doi:10.1111/jola.12112.
  • Floyd, S. (2016). Modally hybrid grammar? Celestial pointing for time-of-day reference in Nheengatú. Language, 92(1), 31-64. doi:10.1353/lan.2016.0013.

    Abstract

    From the study of sign languages we know that the visual modality robustly supports the encoding of conventionalized linguistic elements, yet while the same possibility exists for the visual bodily behavior of speakers of spoken languages, such practices are often referred to as ‘gestural’ and are not usually described in linguistic terms. This article describes a practice of speakers of the Brazilian indigenous language Nheengatú of pointing to positions along the east-west axis of the sun’s arc for time-of-day reference, and illustrates how it satisfies any of the common criteria for linguistic elements, as a system of standardized and productive form-meaning pairings whose contributions to propositional meaning remain stable across contexts. First, examples from a video corpus of natural speech demonstrate these conventionalized properties of Nheengatú time reference across multiple speakers. Second, a series of video-based elicitation stimuli test several dimensions of its conventionalization for nine participants. The results illustrate why modality is not an a priori reason that linguistic properties cannot develop in the visual practices that accompany spoken language. The conclusion discusses different possible morphosyntactic and pragmatic analyses for such conventionalized visual elements and asks whether they might be more crosslinguistically common than we presently know.
  • Floyd, S., Manrique, E., Rossi, G., & Torreira, F. (2016). Timing of visual bodily behavior in repair sequences: Evidence from three languages. Discourse Processes, 53(3), 175-204. doi:10.1080/0163853X.2014.992680.

    Abstract

    This article expands the study of other-initiated repair in conversation—when one party signals a problem with producing or perceiving another’s turn at talk—into the domain of visual bodily behavior. It presents one primary cross-linguistic finding about the timing of visual bodily behavior in repair sequences: if the party who initiates repair accompanies their turn with a “hold”—when relatively dynamic movements are temporarily and meaningfully held static—this position will not be disengaged until the problem is resolved and the sequence closed. We base this finding on qualitative and quantitative analysis of corpora of conversational interaction from three unrelated languages representing two different modalities: Northern Italian, the Cha’palaa language of Ecuador, and Argentine Sign Language. The cross-linguistic similarities uncovered by this comparison suggest that visual bodily practices have been semiotized for similar interactive functions across different languages and modalities due to common pressures in face-to-face interaction.
  • Forkstam, C., Hagoort, P., Fernandez, G., Ingvar, M., & Petersson, K. M. (2006). Neural correlates of artificial syntactic structure classification. NeuroImage, 32(2), 956-967. doi:10.1016/j.neuroimage.2006.03.057.

    Abstract

    The human brain supports acquisition mechanisms that extract structural regularities implicitly from experience without the induction of an explicit model. It has been argued that the capacity to generalize to new input is based on the acquisition of abstract representations, which reflect underlying structural regularities in the input ensemble. In this study, we explored the outcome of this acquisition mechanism, and to this end, we investigated the neural correlates of artificial syntactic classification using event-related functional magnetic resonance imaging. The participants engaged once a day during an 8-day period in a short-term memory acquisition task in which consonant-strings generated from an artificial grammar were presented in a sequential fashion without performance feedback. They performed reliably above chance on the grammaticality classification tasks on days 1 and 8 which correlated with a corticostriatal processing network, including frontal, cingulate, inferior parietal, and middle occipital/occipitotemporal regions as well as the caudate nucleus. Part of the left inferior frontal region (BA 45) was specifically related to syntactic violations and showed no sensitivity to local substring familiarity. In addition, the head of the caudate nucleus correlated positively with syntactic correctness on day 8 but not day 1, suggesting that this region contributes to an increase in cognitive processing fluency.
  • Francks, C., Paracchini, S., Smith, S. D., Richardson, A. J., Scerri, T. S., Cardon, L. R., Marlow, A. J., MacPhie, I. L., Walter, J., Pennington, B. F., Fisher, S. E., Olson, R. K., DeFries, J. C., Stein, J. F., & Monaco, A. P. (2004). A 77-kilobase region of chromosome 6p22.2 is associated with dyslexia in families from the United Kingdom and from the United States. American Journal of Human Genetics, 75(6), 1046-1058. doi:10.1086/426404.

    Abstract

    Several quantitative trait loci (QTLs) that influence developmental dyslexia (reading disability [RD]) have been mapped to chromosome regions by linkage analysis. The most consistently replicated area of linkage is on chromosome 6p23-21.3. We used association analysis in 223 siblings from the United Kingdom to identify an underlying QTL on 6p22.2. Our association study implicates a 77-kb region spanning the gene TTRAP and the first four exons of the neighboring uncharacterized gene KIAA0319. The region of association is also directly upstream of a third gene, THEM2. We found evidence of these associations in a second sample of siblings from the United Kingdom, as well as in an independent sample of twin-based sibships from Colorado. One main RD risk haplotype that has a frequency of ∼12% was found in both the U.K. and U.S. samples. The haplotype is not distinguished by any protein-coding polymorphisms, and, therefore, the functional variation may relate to gene expression. The QTL influences a broad range of reading-related cognitive abilities but has no significant impact on general cognitive performance in these samples. In addition, the QTL effect may be largely limited to the severe range of reading disability.
  • Frank, S. L., & Fitz, H. (2016). Reservoir computing and the Sooner-is-Better bottleneck [Commentary on Christiansen & Chater]. Behavioral and Brain Sciences, 39: e73. doi:10.1017/S0140525X15000783.

    Abstract

    Prior language input is not lost but integrated with the current input. This principle is demonstrated by “reservoir computing”: Untrained recurrent neural networks project input sequences onto a random point in high-dimensional state space. Earlier inputs can be retrieved from this projection, albeit less reliably so as more input is received. The bottleneck is therefore not “Now-or-Never” but “Sooner-is-Better.”
  • Franke, B., Stein, J. L., Ripke, S., Anttila, V., Hibar, D. P., Van Hulzen, K. J. E., Arias-Vasquez, A., Smoller, J. W., Nichols, T. E., Neale, M. C., McIntosh, A. M., Lee, P., McMahon, F. J., Meyer-Lindenberg, A., Mattheisen, M., Andreassen, O. A., Gruber, O., Sachdev, P. S., Roiz-Santiañez, R., Saykin, A. J., Ehrlich, S., Mather, K. A., Turner, J. A., Schwarz, E., Thalamuthu, A., Yao, Y., Ho, Y. Y. W., Martin, N. G., Wright, M. J., Guadalupe, T., Fisher, S. E., Francks, C., Schizophrenia Working Group of the Psychiatric Genomics Consortium, ENIGMA Consortium, O’Donovan, M. C., Thompson, P. M., Neale, B. M., Medland, S. E., & Sullivan, P. F. (2016). Genetic influences on schizophrenia and subcortical brain volumes: large-scale proof of concept. Nature Neuroscience, 19, 420-431. doi:10.1038/nn.4228.

    Abstract

    Schizophrenia is a devastating psychiatric illness with high heritability. Brain structure and function differ, on average, between people with schizophrenia and healthy individuals. As common genetic associations are emerging for both schizophrenia and brain imaging phenotypes, we can now use genome-wide data to investigate genetic overlap. Here we integrated results from common variant studies of schizophrenia (33,636 cases, 43,008 controls) and volumes of several (mainly subcortical) brain structures (11,840 subjects). We did not find evidence of genetic overlap between schizophrenia risk and subcortical volume measures either at the level of common variant genetic architecture or for single genetic markers. These results provide a proof of concept (albeit based on a limited set of structural brain measures) and define a roadmap for future studies investigating the genetic covariance between structural or functional brain phenotypes and risk for psychiatric disorders.

    Additional information

    Franke_etal_2016_supp1.pdf
  • Freunberger, D., & Nieuwland, M. S. (2016). Incremental comprehension of spoken quantifier sentences: Evidence from brain potentials. Brain Research, 1646, 475-481. doi:10.1016/j.brainres.2016.06.035.

    Abstract

    Do people incrementally incorporate the meaning of quantifier expressions to understand an unfolding sentence? Most previous studies concluded that quantifiers do not immediately influence how a sentence is understood, based on the observation that online N400-effects differed from offline plausibility judgments. Those studies, however, used serial visual presentation (SVP), which involves unnatural reading. In the current ERP-experiment, we presented spoken positive and negative quantifier sentences (“Practically all/practically no postmen prefer delivering mail, when the weather is good/bad during the day”). Different from results obtained in a previously reported SVP-study (Nieuwland, 2016), sentence truth-value N400 effects occurred in positive and negative quantifier sentences alike, reflecting fully incremental quantifier comprehension. This suggests that the prosodic information available during spoken language comprehension supports the generation of online predictions for upcoming words and that, at least for quantifier sentences, comprehension of spoken language may proceed more incrementally than comprehension during SVP reading.
  • Friederici, A. D., & Levelt, W. J. M. (1986). Cognitive processes of spatial coordinate assignment: On weighting perceptual cues. Naturwissenschaften, 73, 455-458.
  • Friederici, A. D., & Levelt, W. J. M. (1987). Resolving perceptual conflicts: The cognitive mechanism of spatial orientation. Aviation, Space, and Environmental Medicine, 58(9), A164-A169.
  • Frost, R. L. A., & Monaghan, P. (2016). Simultaneous segmentation and generalisation of non-adjacent dependencies from continuous speech. Cognition, 147, 70-74. doi:10.1016/j.cognition.2015.11.010.

    Abstract

    Language learning requires mastering multiple tasks, including segmenting speech to identify words, and learning the syntactic role of these words within sentences. A key question in language acquisition research is the extent to which these tasks are sequential or simultaneous, and consequently whether they may be driven by distinct or similar computations. We explored a classic artificial language learning paradigm, where the language structure is defined in terms of non-adjacent dependencies. We show that participants are able to use the same statistical information at the same time to segment continuous speech both to identify words and to generalise over the structure, when the generalisations were over novel speech that the participants had not previously experienced. We suggest that, in the absence of evidence to the contrary, the most economical explanation for the effects is that speech segmentation and grammatical generalisation are dependent on similar statistical processing mechanisms.
  • Gaby, A. R. (2006). The Thaayorre 'true man': Lexicon of the human body in an Australian language. Language Sciences, 28(2-3), 201-220. doi:10.1016/j.langsci.2005.11.006.

    Abstract

    Segmentation (and, indeed, definition) of the human body in Kuuk Thaayorre (a Paman language of Cape York Peninsula, Australia) is in some respects typologically unusual, while at other times it conforms to cross-linguistic patterns. The process of deriving complex body part terms from monolexemic items is revealing of metaphorical associations between parts of the body. Associations between parts of the body and entities and phenomena in the broader environment are evidenced by the ubiquity of body part terms (in their extended uses) throughout Thaayorre speech. Understanding the categorisation of the body is therefore prerequisite to understanding the Thaayorre language and worldview.