Publications

  • Enfield, N. J., & Sidnell, J. (2014). Language presupposes an enchronic infrastructure for social interaction. In D. Dor, C. Knight, & J. Lewis (Eds.), The social origins of language (pp. 92-104). Oxford: Oxford University Press.
  • Enfield, N. J. (2009). Language: Social motives for syntax [Review of the book Origins of human communication by Michael Tomasello]. Science, 324(5923), 39. doi:10.1126/science.1172660.
  • Enfield, N. J., Kockelman, P., & Sidnell, J. (2014). Interdisciplinary perspectives. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 599-602). Cambridge: Cambridge University Press.
  • Enfield, N. J., Kockelman, P., & Sidnell, J. (2014). Introduction: Directions in the anthropology of language. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 1-24). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2014). Natural causes of language: Frames, biases and cultural transmission. Berlin: Language Science Press. Retrieved from http://langsci-press.org/catalog/book/48.

    Abstract

    What causes a language to be the way it is? Some features are universal, some are inherited, others are borrowed, and yet others are internally innovated. But no matter where a bit of language is from, it will only exist if it has been diffused and kept in circulation through social interaction in the history of a community. This book makes the case that a proper understanding of the ontology of language systems has to be grounded in the causal mechanisms by which linguistic items are socially transmitted, in communicative contexts. A biased transmission model provides a basis for understanding why certain things and not others are likely to develop, spread, and stick in languages. Because bits of language are always parts of systems, we also need to show how it is that items of knowledge and behavior become structured wholes. The book argues that to achieve this, we need to see how causal processes apply in multiple frames or 'time scales' simultaneously, and we need to understand and address each and all of these frames in our work on language. This forces us to confront implications that are not always comfortable: for example, that "a language" is not a real thing but a convenient fiction, that language-internal and language-external processes have a lot in common, and that tree diagrams are poor conceptual tools for understanding the history of languages. By exploring avenues for clear solutions to these problems, this book suggests a conceptual framework for ultimately explaining, in causal terms, what languages are like and why they are like that.
  • Enfield, N. J. (2009). Everyday ritual in the residential world. In G. Senft, & E. B. Basso (Eds.), Ritual communication (pp. 51-80). Oxford: Berg.
  • Enfield, N. J., & Diffloth, G. (2009). Phonology and sketch grammar of Kri, a Vietic language of Laos. Cahiers de Linguistique - Asie Orientale (CLAO), 38(1), 3-69.
  • Enfield, N. J. (2009). Relationship thinking and human pragmatics. Journal of Pragmatics, 41, 60-78. doi:10.1016/j.pragma.2008.09.007.

    Abstract

    The approach to pragmatics explored in this article focuses on elements of social interaction which are of universal relevance, and which may provide bases for a comparative approach. The discussion is anchored by reference to a fragment of conversation from a video-recording of Lao speakers during a home visit in rural Laos. The following points are discussed. First, an understanding of the full richness of context is indispensable for a proper understanding of any interaction. Second, human relationships are a primary locus of social organization, and as such constitute a key focus for pragmatics. Third, human social intelligence forms a universal cognitive under-carriage for interaction, and requires careful cross-cultural study. Fourth, a neo-Peircean framework for a general understanding of semiotic processes gives us a way of stepping away from language as our basic analytical frame. It is argued that in order to get a grip on pragmatics across human groups, we need to take a comparative approach in the biological sense—i.e. with reference to other species as well. From this perspective, human pragmatics is about using semiotic resources to try to meet goals in the realm of social relationships.
  • Enfield, N. J. (2009). The anatomy of meaning: Speech, gesture, and composite utterances. Cambridge: Cambridge University Press.
  • Enfield, N. J., Kockelman, P., & Sidnell, J. (Eds.). (2014). The Cambridge handbook of linguistic anthropology. Cambridge: Cambridge University Press.
  • Enfield, N. J., Levinson, S. C., & Stivers, T. (2009). Social action formulation: A "10-minutes" task. In A. Majid (Ed.), Field manual volume 12 (pp. 54-55). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883564.

    Abstract

    Human actions in the social world – like greeting, requesting, complaining, accusing, asking, confirming, etc. – are recognised through the interpretation of signs. Language is where much of the action is, but gesture, facial expression and other bodily actions matter as well. The goal of this task is to establish a maximally rich description of a representative, good quality piece of conversational interaction, which will serve as a reference point for comparative exploration of the status of social actions and their formulation across languages.
  • Enfield, N. J., Sidnell, J., & Kockelman, P. (2014). System and function. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 25-28). Cambridge: Cambridge University Press.
  • Enfield, N. J., Stivers, T., Brown, P., Englert, C., Harjunpää, K., Hayashi, M., Heinemann, T., Hoymann, G., Keisanen, T., Rauniomaa, M., Raymond, C. W., Rossano, F., Yoon, K.-E., Zwitserlood, I., & Levinson, S. C. (2019). Polar answers. Journal of Linguistics, 55(2), 277-304. doi:10.1017/S0022226718000336.

    Abstract

    How do people answer polar questions? In this fourteen-language study of answers to questions in conversation, we compare the two main strategies; first, interjection-type answers such as uh-huh (or equivalents yes, mm, head nods, etc.), and second, repetition-type answers that repeat some or all of the question. We find that all languages offer both options, but that there is a strong asymmetry in their frequency of use, with a global preference for interjection-type answers. We propose that this preference is motivated by the fact that the two options are not equivalent in meaning. We argue that interjection-type answers are intrinsically suited to be the pragmatically unmarked, and thus more frequent, strategy for confirming polar questions, regardless of the language spoken. Our analysis is based on the semantic-pragmatic profile of the interjection-type and repetition-type answer strategies, in the context of certain asymmetries inherent to the dialogic speech act structure of question–answer sequences, including sequential agency and thematic agency. This allows us to see possible explanations for the outlier distributions found in ǂĀkhoe Haiǁom and Tzeltal.
  • Enfield, N. J. (2014). The item/system problem. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 48-77). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2014). Transmission biases in the cultural evolution of language: Towards an explanatory framework. In D. Dor, C. Knight, & J. Lewis (Eds.), The social origins of language (pp. 325-335). Oxford: Oxford University Press.
  • Erard, M. (2009). How Many Languages? Linguists Discover New Tongues in China. Science, 324(5925), 332-333. doi:10.1126/science.324.5925.332a.
  • Erard, M. (2019). Language aptitude: Insights from hyperpolyglots. In Z. Wen, P. Skehan, A. Biedroń, S. Li, & R. L. Sparks (Eds.), Language aptitude: Advancing theory, testing, research and practice (pp. 153-167). Abingdon, UK: Taylor & Francis.

    Abstract

    Over the decades, high-intensity language learners scattered over the globe referred to as “hyperpolyglots” have undertaken a natural experiment into the limits of learning and acquiring proficiencies in multiple languages. This chapter details several ways in which hyperpolyglots are relevant to research on aptitude. First, historical hyperpolyglots Cardinal Giuseppe Mezzofanti, Emil Krebs, Elihu Burritt, and Lomb Kató are described in terms of how they viewed their own exceptional outcomes. Next, I draw on results from an online survey with 390 individuals to explore how contemporary hyperpolyglots consider the explanatory value of aptitude. Third, the challenges involved in studying the genetic basis of hyperpolyglottism (and by extension of language aptitude) are discussed. This mosaic of data is meant to inform the direction of future aptitude research that takes hyperpolyglots, one type of exceptional language learner and user, into account.
  • Ernestus, M., Baayen, R. H., & Schreuder, R. (2002). The recognition of reduced word forms. Brain and Language, 81(1-3), 162-173. doi:10.1006/brln.2001.2514.

    Abstract

    This article addresses the recognition of reduced word forms, which are frequent in casual speech. We describe two experiments on Dutch showing that listeners only recognize highly reduced forms well when these forms are presented in their full context and that the probability that a listener recognizes a word form in limited context is strongly correlated with the degree of reduction of the form. Moreover, we show that the effect of degree of reduction can only partly be interpreted as the effect of the intelligibility of the acoustic signal, which is negatively correlated with degree of reduction. We discuss the consequences of our findings for models of spoken word recognition and especially for the role that storage plays in these models.
  • Ernestus, M. (2014). Acoustic reduction and the roles of abstractions and exemplars in speech processing. Lingua, 142, 27-41. doi:10.1016/j.lingua.2012.12.006.

    Abstract

    Acoustic reduction refers to the frequent phenomenon in conversational speech that words are produced with fewer or lenited segments compared to their citation forms. The few published studies on the production and comprehension of acoustic reduction have important implications for the debate on the relevance of abstractions and exemplars in speech processing. This article discusses these implications. It first briefly introduces the key assumptions of simple abstractionist and simple exemplar-based models. It then discusses the literature on acoustic reduction and draws the conclusion that both types of models need to be extended to explain all findings. The ultimate model should allow for the storage of different pronunciation variants, but also reserve an important role for phonetic implementation. Furthermore, the recognition of a highly reduced pronunciation variant requires top-down information and leads to activation of the corresponding unreduced variant, the variant that reaches listeners’ consciousness. These findings are best accounted for in hybrid models, assuming both abstract representations and exemplars. None of the hybrid models formulated so far can account for all data on reduced speech and we need further research for obtaining detailed insight into how speakers produce and listeners comprehend reduced speech.
  • Ernestus, M., & Giezenaar, G. (2014). Een goed verstaander heeft maar een half woord nodig. In B. Bossers (Ed.), Vakwerk 9: Achtergronden van de NT2-lespraktijk: Lezingen conferentie Hoeven 2014 (pp. 81-92). Amsterdam: BV NT2.
  • Ernestus, M., Kočková-Amortová, L., & Pollak, P. (2014). The Nijmegen corpus of casual Czech. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 365-370).

    Abstract

    This article introduces a new speech corpus, the Nijmegen Corpus of Casual Czech (NCCCz), which contains more than 30 hours of high-quality recordings of casual conversations in Common Czech, among ten groups of three male and ten groups of three female friends. All speakers were native speakers of Czech, raised in Prague or in the region of Central Bohemia, and were between 19 and 26 years old. Every group of speakers consisted of one confederate, who was instructed to keep the conversations lively, and two speakers naive to the purposes of the recordings. The naive speakers were engaged in conversations for approximately 90 minutes, while the confederate joined them for approximately the last 72 minutes. The corpus was orthographically annotated by experienced transcribers and this orthographic transcription was aligned with the speech signal. In addition, the conversations were videotaped. This corpus can form the basis for all types of research on casual conversations in Czech, including phonetic research and research on how to improve automatic speech recognition. The corpus will be freely available.
  • Ernestus, M. (2009). The roles of reconstruction and lexical storage in the comprehension of regular pronunciation variants. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1875-1878). Causal Productions Pty Ltd.

    Abstract

    This paper investigates how listeners process regular pronunciation variants, resulting from simple general reduction processes. Study 1 shows that when listeners are presented with new words, they store the pronunciation variants presented to them, whether these are unreduced or reduced. Listeners thus store information on word-specific pronunciation variation. Study 2 suggests that if participants are presented with regularly reduced pronunciations, they also reconstruct and store the corresponding unreduced pronunciations. These unreduced pronunciations apparently have special status. Together the results support hybrid models of speech processing, assuming roles for both exemplars and abstract representations.
  • Evans, N., & Levinson, S. C. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(5), 429-492. doi:10.1017/S0140525X0999094X.

    Abstract

    Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective. This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once we honestly confront the diversity offered to us by the world's 6,000 to 8,000 languages. After surveying the various uses of “universal,” we illustrate the ways languages vary radically in sound, meaning, and syntactic organization, and then we examine in more detail the core grammatical machinery of recursion, constituency, and grammatical relations. Although there are significant recurrent patterns in organization, these are better explained as stable engineering solutions satisfying multiple design constraints, reflecting both cultural-historical factors and the constraints of human cognition.
  • Evans, S., McGettigan, C., Agnew, Z., Rosen, S., Cesar, L., Boebinger, D., Ostarek, M., Chen, S. H., Richards, A., Meekins, S., & Scott, S. K. (2014). The neural basis of informational and energetic masking effects in the perception and production of speech [abstract]. The Journal of the Acoustical Society of America, 136(4), 2243. doi:10.1121/1.4900096.

    Abstract

    When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing demands on cognitive resources. In this talk, I will present data from a series of functional magnetic resonance imaging (fMRI) studies in which the informational properties of background sounds have been manipulated to make them more or less similar to speech. I will demonstrate the neural effects associated with speaking over and listening to these sounds, and demonstrate how in perception these effects are modulated by the age of the listener. The results will be interpreted within a framework of auditory processing developed from primate neurophysiology and human functional imaging work (Rauschecker and Scott 2009).
  • Evans, N., & Levinson, S. C. (2009). With diversity in mind: Freeing the language sciences from universal grammar [Author's response]. Behavioral and Brain Sciences, 32(5), 472-484. doi:10.1017/S0140525X09990525.

    Abstract

    Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing crosslinguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intentionattributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.
  • Everett, D., & Majid, A. (2009). Adventures in the jungle of language [Interview by Asifa Majid and Jon Sutton]. The Psychologist, 22(4), 312-313. Retrieved from http://www.thepsychologist.org.uk/archive/archive_home.cfm?volumeID=22&editionID=174&ArticleID=1494.

    Abstract

    Daniel Everett has spent his career in the Amazon, challenging some fundamental ideas about language and thought. Asifa Majid and Jon Sutton pose the questions.
  • Fairs, A. (2019). Linguistic dual-tasking: Understanding temporal overlap between production and comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Faller, M. (2002). Remarks on evidential hierarchies. In D. I. Beaver, L. D. C. Martinez, B. Z. Clark., & S. Kaufmann (Eds.), The construction of meaning (pp. 89-111). Stanford: CSLI Publications.
  • Faller, M. (2002). The evidential and validational licensing conditions for the Cusco Quechua enclitic-mi. Belgian Journal of Linguistics, 16, 7-21. doi:10.1075/bjl.16.02fa.
  • Favier, S., Wright, A., Meyer, A. S., & Huettig, F. (2019). Proficiency modulates between- but not within-language structural priming. Journal of Cultural Cognitive Science, 3(suppl. 1), 105-124. doi:10.1007/s41809-019-00029-1.

    Abstract

    The oldest of the Celtic language family, Irish differs considerably from English, notably with respect to word order and case marking. In spite of differences in surface constituent structure, less restricted accounts of bilingual shared syntax predict that processing datives and passives in Irish should prime the production of their English equivalents. Furthermore, this cross-linguistic influence should be sensitive to L2 proficiency, if shared structural representations are assumed to develop over time. In Experiment 1, we investigated cross-linguistic structural priming from Irish to English in 47 bilingual adolescents who are educated through Irish. Testing took place in a classroom setting, using written primes and written sentence generation. We found that priming for prepositional-object (PO) datives was predicted by self-rated Irish (L2) proficiency, in line with previous studies. In Experiment 2, we presented translations of the materials to an English-educated control group (n=54). We found a within-language priming effect for PO datives, which was not modulated by English (L1) proficiency. Our findings are compatible with current theories of bilingual language processing and L2 syntactic acquisition.
  • Fedor, A., Pléh, C., Brauer, J., Caplan, D., Friederici, A. D., Gulyás, B., Hagoort, P., Nazir, T., & Singer, W. (2009). What are the brain mechanisms underlying syntactic operations? In D. Bickerton, & E. Szathmáry (Eds.), Biological foundations and origin of syntax (pp. 299-324). Cambridge, MA: MIT Press.

    Abstract

    This chapter summarizes the extensive discussions that took place during the Forum as well as the subsequent months thereafter. It assesses current understanding of the neuronal mechanisms that underlie syntactic structure and processing.... It is posited that to understand the neurobiology of syntax, it might be worthwhile to shift the balance from comprehension to syntactic encoding in language production.
  • Fedorenko, E., Patel, A., Casasanto, D., Winawer, J., & Gibson, E. (2009). Structural integration in language and music: Evidence for a shared system. Memory & Cognition, 37, 1-9. doi:10.3758/MC.37.1.1.

    Abstract

    In this study, we investigate whether language and music share cognitive resources for structural processing. We report an experiment that used sung materials and manipulated linguistic complexity (subject-extracted relative clauses, object-extracted relative clauses) and musical complexity (in-key critical note, out-of-key critical note, auditory anomaly on the critical note involving a loudness increase). The auditory-anomaly manipulation was included in order to test whether the difference between in-key and out-of-key conditions might be due to any salient, unexpected acoustic event. The critical dependent measure involved comprehension accuracies to questions about the propositional content of the sentences asked at the end of each trial. The results revealed an interaction between linguistic and musical complexity such that the difference between the subject- and object-extracted relative clause conditions was larger in the out-of-key condition than in the in-key and auditory-anomaly conditions. These results provide evidence for an overlap in structural processing between language and music.
  • Felker, E. R., Ernestus, M., & Broersma, M. (2019). Evaluating dictation task measures for the study of speech perception. In S. Calhoun, P. Escudero, M. Tabain, & P. Warren (Eds.), Proceedings of the 19th International Congress of Phonetic Sciences (ICPhS 2019) (pp. 383-387). Canberra, Australia: Australasian Speech Science and Technology Association Inc.

    Abstract

    This paper shows that the dictation task, a well-known testing instrument in language education, has untapped potential as a research tool for studying speech perception. We describe how transcriptions can be scored on measures of lexical, orthographic, phonological, and semantic similarity to target phrases to provide comprehensive information about accuracy at different processing levels. The former three measures are automatically extractable, increasing objectivity, and the middle two are gradient, providing finer-grained information than traditionally used. We evaluate the measures in an English dictation task featuring phonetically reduced continuous speech. Whereas the lexical and orthographic measures emphasize listeners’ word identification difficulties, the phonological measure demonstrates that listeners can often still recover phonological features, and the semantic measure captures their ability to get the gist of the utterances. Correlational analyses and a discussion of practical and theoretical considerations show that combining multiple measures improves the dictation task’s utility as a research tool.
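
    To make the scoring approach described above concrete, here is a minimal sketch (my illustration, not the authors' implementation) of two such measures in Python: a lexical measure based on word overlap and a gradient orthographic measure based on normalized edit distance. The function names and example phrases are invented for illustration.

    ```python
    # Illustrative sketch of dictation-task scoring: a lexical measure
    # (proportion of target words reproduced) and a gradient orthographic
    # measure (1 minus normalized character edit distance).
    # Not the authors' implementation; names and examples are hypothetical.

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    def lexical_score(response: str, target: str) -> float:
        """Proportion of target words that appear in the response."""
        target_words = target.lower().split()
        response_words = set(response.lower().split())
        return sum(w in response_words for w in target_words) / len(target_words)

    def orthographic_score(response: str, target: str) -> float:
        """Gradient similarity: 1 - normalized edit distance over characters."""
        dist = edit_distance(response.lower(), target.lower())
        return 1 - dist / max(len(response), len(target), 1)

    print(lexical_score("he walk to the store", "he walked to the store"))       # 0.8
    print(orthographic_score("he walk to the store", "he walked to the store"))  # ~0.91
    ```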
  • Felker, E. R., Ernestus, M., & Broersma, M. (2019). Lexically guided perceptual learning of a vowel shift in an interactive L2 listening context. In Proceedings of Interspeech 2019 (pp. 3123-3127). doi:10.21437/Interspeech.2019-1414.

    Abstract

    Lexically guided perceptual learning has traditionally been studied with ambiguous consonant sounds to which native listeners are exposed in a purely receptive listening context. To extend previous research, we investigate whether lexically guided learning applies to a vowel shift encountered by non-native listeners in an interactive dialogue. Dutch participants played a two-player game in English in either a control condition, which contained no evidence for a vowel shift, or a lexically constraining condition, in which onscreen lexical information required them to re-interpret their interlocutor’s /ɪ/ pronunciations as representing /ε/. A phonetic categorization pre-test and post-test were used to assess whether the game shifted listeners’ phonemic boundaries such that more of the /ε/-/ɪ/ continuum came to be perceived as /ε/. Both listener groups showed an overall post-test shift toward /ɪ/, suggesting that vowel perception may be sensitive to directional biases related to properties of the speaker’s vowel space. Importantly, listeners in the lexically constraining condition made relatively more post-test /ε/ responses than the control group, thereby exhibiting an effect of lexically guided adaptation. The results thus demonstrate that non-native listeners can adjust their phonemic boundaries on the basis of lexical information to accommodate a vowel shift learned in interactive conversation.
  • Felker, E. R., Klockmann, H. E., & De Jong, N. H. (2019). How conceptualizing influences fluency in first and second language speech production. Applied Psycholinguistics, 40(1), 111-136. doi:10.1017/S0142716418000474.

    Abstract

    When speaking in any language, speakers must conceptualize what they want to say before they can formulate and articulate their message. We present two experiments employing a novel experimental paradigm in which the formulating and articulating stages of speech production were kept identical across conditions of differing conceptualizing difficulty. We tracked the effect of difficulty in conceptualizing during the generation of speech (Experiment 1) and during the abandonment and regeneration of speech (Experiment 2) on speaking fluency by Dutch native speakers in their first (L1) and second (L2) language (English). The results showed that abandoning and especially regenerating a speech plan taxes the speaker, leading to disfluencies. For most fluency measures, the increases in disfluency were similar across L1 and L2. However, a significant interaction revealed that abandoning and regenerating a speech plan increases the time needed to solve conceptual difficulties while speaking in the L2 to a greater degree than in the L1. This finding supports theories in which cognitive resources for conceptualizing are shared with those used for later stages of speech planning. Furthermore, a practical implication for language assessment is that increasing the conceptual difficulty of speaking tasks should be considered with caution.
  • Fields, E. C., Weber, K., Stillerman, B., Delaney-Busch, N., & Kuperberg, G. (2019). Functional MRI reveals evidence of a self-positivity bias in the medial prefrontal cortex during the comprehension of social vignettes. Social Cognitive and Affective Neuroscience, 14(6), 613-621. doi:10.1093/scan/nsz035.

    Abstract

    A large literature in social neuroscience has associated the medial prefrontal cortex (mPFC) with the processing of self-related information. However, only recently have social neuroscience studies begun to consider the large behavioral literature showing a strong self-positivity bias, and these studies have mostly focused on its correlates during self-related judgments and decision making. We carried out a functional MRI (fMRI) study to ask whether the mPFC would show effects of the self-positivity bias in a paradigm that probed participants’ self-concept without any requirement of explicit self-judgment. We presented social vignettes that were either self-relevant or non-self-relevant with a neutral, positive, or negative outcome described in the second sentence. In previous work using event-related potentials, this paradigm has shown evidence of a self-positivity bias that influences early stages of semantically processing incoming stimuli. In the present fMRI study, we found evidence for this bias within the mPFC: an interaction between self-relevance and valence, with only positive scenarios showing a self vs other effect within the mPFC. We suggest that the mPFC may play a role in maintaining a positively-biased self-concept and discuss the implications of these findings for the social neuroscience of the self and the role of the mPFC.

  • Filippi, P. (2014). Linguistic animals: understanding language through a comparative approach. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 74-81). doi:10.1142/9789814603638_0082.

    Abstract

    With the aim of clarifying the definition of humans as “linguistic animals”, in the present paper I functionally distinguish three types of language competences: i) language as a general biological tool for communication, ii) “perceptual syntax”, iii) propositional language. Following this terminological distinction, I review pivotal findings on animals' communication systems, which constitute useful evidence for the investigation of the nature of three core components of humans' faculty of language: semantics, syntax, and theory of mind. In fact, although the capacity to process and share utterances with an open-ended structure is uniquely human, some isolated components of our linguistic competence are shared with nonhuman animals. Therefore, as I argue in the present paper, the investigation of animals' communicative competence provides crucial insights into the range of cognitive constraints underlying humans' capacity for language, enabling at the same time the analysis of its phylogenetic path as well as of the selective pressures that have led to its emergence.
  • Filippi, P., Gingras, B., & Fitch, W. T. (2014). The effect of pitch enhancement on spoken language acquisition. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 437-438). doi:10.1142/9789814603638_0082.

    Abstract

    The aim of this study is to investigate the word-learning phenomenon utilizing a new model that integrates three processes: a) extracting a word out of a continuous sound sequence, b) inducing referential meanings, c) mapping a word onto its intended referent, with the possibility to extend the acquired word over a potentially infinite set of objects of the same semantic category, and over not-previously-heard utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. In order to examine the multilayered word-learning task, we integrate these two strands of investigation into a single approach. We have conducted the study on adults and included six different experimental conditions, each including specific perceptual manipulations of the signal. In condition 1, the only cue to word-meaning mapping was the co-occurrence between words and referents (“statistical cue”). This cue was present in all the conditions. In condition 2, we added infant-directed-speech (IDS) typical pitch enhancement as a marker of the target word and of the statistical cue. In condition 3 we placed IDS typical pitch enhancement on random words of the utterances, i.e. inconsistently matching the statistical cue. In conditions 4, 5 and 6 we manipulated respectively duration, a non-prosodic acoustic cue and a visual cue as markers of the target word and of the statistical cue. Systematic comparisons between learning performance in condition 1 with the other conditions revealed that the word-learning process is facilitated only when pitch prominence consistently marks the target word and the statistical cue…
  • Filippi, P., Gingras, B., & Fitch, W. T. (2014). Pitch enhancement facilitates word learning across visual contexts. Frontiers in Psychology, 5: 1468. doi:10.3389/fpsyg.2014.01468.

    Abstract

    This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution.
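
    The “statistical cue” of condition 1, word-referent co-occurrence, can be illustrated with a minimal cross-situational tally. This is a sketch of the general mechanism only; the words, categories, and trial structure below are invented and are not the study's stimuli.

    ```python
    from collections import defaultdict

    # Each trial pairs an utterance (a list of word forms) with the semantic
    # category of the picture in view; all items here are hypothetical.
    trials = [
        (["bim", "kuta", "lor"], "animal"),
        (["kuta", "fep", "nal"], "animal"),
        (["bim", "dax", "lor"], "vehicle"),
        (["dax", "fep", "nal"], "vehicle"),
    ]

    # Tally how often each word co-occurs with each referent category.
    counts = defaultdict(lambda: defaultdict(int))
    for words, referent in trials:
        for w in words:
            counts[w][referent] += 1

    # The word that co-occurs most consistently with a category is its best
    # candidate label: "kuta" for animals and "dax" for vehicles; the other
    # words remain ambiguous because they occur with both categories.
    for word, refs in counts.items():
        print(word, dict(refs), "->", max(refs, key=refs.get))
    ```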
  • Fisher, S. E., & Tilot, A. K. (2019). Bridging senses: Novel insights from synaesthesia. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374: 20190022. doi:10.1098/rstb.2019.0022.
  • Fisher, S. E., & Tilot, A. K. (Eds.). (2019). Bridging senses: Novel insights from synaesthesia [Special Issue]. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 374.
  • Fisher, S. E., Francks, C., McCracken, J. T., McGough, J. J., Marlow, A. J., MacPhie, I. L., Newbury, D. F., Crawford, L. R., Palmer, C. G. S., Woodward, J. A., Del’Homme, M., Cantwell, D. P., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2002). A genomewide scan for loci involved in Attention-Deficit/Hyperactivity Disorder. American Journal of Human Genetics, 70(5), 1183-1196. doi:10.1086/340112.

    Abstract

    Attention deficit/hyperactivity disorder (ADHD) is a common heritable disorder with a childhood onset. Molecular genetic studies of ADHD have previously focused on examining the roles of specific candidate genes, primarily those involved in dopaminergic pathways. We have performed the first systematic genomewide linkage scan for loci influencing ADHD in 126 affected sib pairs, using a ∼10-cM grid of microsatellite markers. Allele-sharing linkage methods enabled us to exclude any loci with a λs of ⩾3 from 96% of the genome and those with a λs of ⩾2.5 from 91%, indicating that there is unlikely to be a major gene involved in ADHD susceptibility in our sample. Under a strict diagnostic scheme we could exclude all screened regions of the X chromosome for a locus-specific λs of ⩾2 in brother-brother pairs, demonstrating that the excess of affected males with ADHD is probably not attributable to a major X-linked effect. Qualitative trait maximum LOD score analyses pointed to a number of chromosomal sites that may contain genetic risk factors of moderate effect. None exceeded genomewide significance thresholds, but LOD scores were >1.5 for regions on 5p12, 10q26, 12q23, and 16p13. Quantitative-trait analysis of ADHD symptom counts implicated a region on 12p13 (maximum LOD 2.6) that also yielded a LOD >1 when qualitative methods were used. A survey of regions containing 36 genes that have been proposed as candidates for ADHD indicated that 29 of these genes, including DRD4 and DAT1, could be excluded for a λs of 2. Only three of the candidates—DRD5, 5HTT, and CALCYON—coincided with sites of positive linkage identified by our screen. Two of the regions highlighted in the present study, 2q24 and 16p13, coincided with the top linkage peaks reported by a recent genome-scan study of autistic sib pairs.
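
    As background for the linkage statistics cited here (and in the dyslexia genome scans below), the LOD ("logarithm of the odds") score compares the likelihood of the observed family data under linkage at recombination fraction θ with the likelihood under no linkage (θ = 1/2). This is the standard textbook definition, not a formula taken from the paper:

    ```latex
    \mathrm{LOD}(\theta) = \log_{10}
      \frac{L(\text{data} \mid \theta)}{L\left(\text{data} \mid \theta = \tfrac{1}{2}\right)}
    ```

    A LOD score of 3 therefore corresponds to odds of about 1000:1 in favour of linkage at that position.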
  • Fisher, S. E., & DeFries, J. C. (2002). Developmental dyslexia: Genetic dissection of a complex cognitive trait. Nature Reviews Neuroscience, 3, 767-780. doi:10.1038/nrn936.

    Abstract

    Developmental dyslexia, a specific impairment of reading ability despite adequate intelligence and educational opportunity, is one of the most frequent childhood disorders. Since the first documented cases at the beginning of the last century, it has become increasingly apparent that the reading problems of people with dyslexia form part of a heritable neurobiological syndrome. As for most cognitive and behavioural traits, phenotypic definition is fraught with difficulties and the genetic basis is complex, making the isolation of genetic risk factors a formidable challenge. Against such a background, it is notable that several recent studies have reported the localization of genes that influence dyslexia and other language-related traits. These investigations exploit novel research approaches that are relevant to many areas of human neurogenetics.
  • Fisher, S. E., & Scharff, C. (2009). FOXP2 as a molecular window into speech and language [Review article]. Trends in Genetics, 25, 166-177. doi:10.1016/j.tig.2009.03.002.

    Abstract

    Rare mutations of the FOXP2 transcription factor gene cause a monogenic syndrome characterized by impaired speech development and linguistic deficits. Recent genomic investigations indicate that its downstream neural targets make broader impacts on common language impairments, bridging clinically distinct disorders. Moreover, the striking conservation of both FoxP2 sequence and neural expression in different vertebrates facilitates the use of animal models to study ancestral pathways that have been recruited towards human speech and language. Intriguingly, reduced FoxP2 dosage yields abnormal synaptic plasticity and impaired motor-skill learning in mice, and disrupts vocal learning in songbirds. Converging data indicate that Foxp2 is important for modulating the plasticity of relevant neural circuits. This body of research represents the first functional genetic forays into neural mechanisms contributing to human spoken language.
  • Fisher, S. E. (2019). Human genetics: The evolving story of FOXP2. Current Biology, 29(2), R65-R67. doi:10.1016/j.cub.2018.11.047.

    Abstract

    FOXP2 mutations cause a speech and language disorder, raising interest in potential roles of this gene in human evolution. A new study re-evaluates genomic variation at the human FOXP2 locus but finds no evidence of recent adaptive evolution.
  • Fisher, S. E., Francks, C., Marlow, A. J., MacPhie, I. L., Newbury, D. F., Cardon, L. R., Ishikawa-Brush, Y., Richardson, A. J., Talcott, J. B., Gayán, J., Olson, R. K., Pennington, B. F., Smith, S. D., DeFries, J. C., Stein, J. F., & Monaco, A. P. (2002). Independent genome-wide scans identify a chromosome 18 quantitative-trait locus influencing dyslexia. Nature Genetics, 30(1), 86-91. doi:10.1038/ng792.

    Abstract

    Developmental dyslexia is defined as a specific and significant impairment in reading ability that cannot be explained by deficits in intelligence, learning opportunity, motivation or sensory acuity. It is one of the most frequently diagnosed disorders in childhood, representing a major educational and social problem. It is well established that dyslexia is a significantly heritable trait with a neurobiological basis. The etiological mechanisms remain elusive, however, despite being the focus of intensive multidisciplinary research. All attempts to map quantitative-trait loci (QTLs) influencing dyslexia susceptibility have targeted specific chromosomal regions, so that inferences regarding genetic etiology have been made on the basis of very limited information. Here we present the first two complete QTL-based genome-wide scans for this trait, in large samples of families from the United Kingdom and United States. Using single-point analysis, linkage to marker D18S53 was independently identified as being one of the most significant results of the genome in each scan (P ≤ 0.0004 for single word-reading ability in each family sample). Multipoint analysis gave increased evidence of 18p11.2 linkage for single-word reading, yielding top empirical P values of 0.00001 (UK) and 0.0004 (US). Measures related to phonological and orthographic processing also showed linkage at this locus. We replicated linkage to 18p11.2 in a third independent sample of families (from the UK), in which the strongest evidence came from a phoneme-awareness measure (most significant P value = 0.00004). A combined analysis of all UK families confirmed that this newly discovered 18p QTL is probably a general risk factor for dyslexia, influencing several reading-related processes. This is the first report of QTL-based genome-wide scanning for a human cognitive trait.
  • Fisher, S. E. (2019). Key issues and future directions: Genes and language. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 609-620). Cambridge, MA: MIT Press.
  • Fisher, S. E. (2002). Isolation of the genetic factors underlying speech and language disorders. In R. Plomin, J. C. DeFries, I. W. Craig, & P. McGuffin (Eds.), Behavioral genetics in the postgenomic era (pp. 205-226). Washington, DC: American Psychological Association.

    Abstract

    This chapter highlights the research in isolating genetic factors underlying specific language impairment (SLI), or developmental dysphasia, which exploits newly developed genotyping technology, novel statistical methodology, and DNA sequence data generated by the Human Genome Project. The author begins with an overview of results from family, twin, and adoption studies supporting genetic involvement and then goes on to outline progress in a number of genetic mapping efforts that have been recently completed or are currently under way. It has been possible for genetic researchers to pinpoint the specific mutation responsible for some speech and language disorders, providing an example of how the availability of human genomic sequence data can greatly accelerate the pace of disease gene discovery. Finally, the author discusses future prospects on how molecular genetics may offer new insight into the etiology underlying speech and language disorders, leading to improvements in diagnosis and treatment.
  • Fitz, H. (2014). Computermodelle für Spracherwerb und Sprachproduktion. Forschungsbericht 2014 - Max-Planck-Institut für Psycholinguistik. In Max-Planck-Gesellschaft Jahrbuch 2014. München: Max Planck Society for the Advancement of Science. Retrieved from http://www.mpg.de/7850678/Psycholinguistik_JB_2014?c=8236817.

    Abstract

    Relative clauses are a syntactic device to create complex sentences and they make language structurally productive. Despite a considerable number of experimental studies, it is still largely unclear how children learn relative clauses and how these are processed in the language system. Researchers at the MPI for Psycholinguistics used a computational learning model to gain novel insights into these issues. The model explains the differential development of relative clauses in English as well as cross-linguistic differences.
  • Fitz, H., & Chang, F. (2019). Language ERPs reflect learning through prediction error propagation. Cognitive Psychology, 111, 15-52. doi:10.1016/j.cogpsych.2019.03.002.

    Abstract

    Event-related potentials (ERPs) provide a window into how the brain is processing language. Here, we propose a theory that argues that ERPs such as the N400 and P600 arise as side effects of an error-based learning mechanism that explains linguistic adaptation and language learning. We instantiated this theory in a connectionist model that can simulate data from three studies on the N400 (amplitude modulation by expectancy, contextual constraint, and sentence position), five studies on the P600 (agreement, tense, word category, subcategorization and garden-path sentences), and a study on the semantic P600 in role reversal anomalies. Since ERPs are learning signals, this account explains adaptation of ERP amplitude to within-experiment frequency manipulations and the way ERP effects are shaped by word predictability in earlier sentences. Moreover, it predicts that ERPs can change over language development. The model provides an account of the sensitivity of ERPs to expectation mismatch, the relative timing of the N400 and P600, the semantic nature of the N400, the syntactic nature of the P600, and the fact that ERPs can change with experience. This approach suggests that comprehension ERPs are related to sentence production and language acquisition mechanisms.
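
    As a rough illustration of the error-based learning signal this account appeals to (a generic sketch, not the authors' connectionist model), the "error" at each incoming word can be quantified as its surprisal under the model's predicted next-word distribution; the probabilities below are invented:

    ```python
    import math

    # Hypothetical next-word distribution produced by a model before the
    # upcoming word is heard (illustrative numbers only).
    predicted = {"dog": 0.6, "cat": 0.3, "pen": 0.1}

    def prediction_error(word: str) -> float:
        """Surprisal of the actual word: small if expected, large if not.
        On an error-based learning account, this mismatch both updates the
        model's weights and surfaces in ERP components."""
        return -math.log(predicted[word])

    print(f"expected word:   {prediction_error('dog'):.2f} nats")  # ~0.51
    print(f"unexpected word: {prediction_error('pen'):.2f} nats")  # ~2.30
    ```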
  • Fitz, H. (2009). Neural syntax. PhD Thesis, Universiteit van Amsterdam, Institute for Logic, Language, and Computation.

    Abstract

    Children learn their mother tongue spontaneously and effortlessly through communicative interaction with their environment; they do not have to be taught explicitly or learn how to learn first. The ambient language to which children are exposed, however, is highly variable and arguably deficient with regard to the learning target. Nonetheless, most normally developing children learn their native language rapidly and with ease. To explain this accomplishment, many theories of acquisition posit innate constraints on learning, or even a biological endowment for language which is specific to language. Usage-based theories, on the other hand, place more emphasis on the role of experience and domain-general learning mechanisms than on innate language-specific knowledge. But languages are lexically open and combinatorial in structure, so no amount of experience covers their expressivity. Usage-based theories therefore have to explain how children can generalize the properties of their linguistic input to an adult-like grammar. In this thesis I provide an explicit computational mechanism with which usage-based theories of language can be tested and evaluated. The focus of my work lies on complex syntax and the human ability to form sentences which express more than one proposition by means of relativization. This `capacity for recursion' is a hallmark of an adult grammar and, as some have argued, the human language faculty itself. The manuscript is organized as follows. In the second chapter, I give an overview of results that characterize the properties of neural networks as mathematical objects and review previous attempts at modelling the acquisition of complex syntax with such networks. The chapter introduces the conceptual landscape in which the current work is located. In the third chapter, I argue that the construction and use of meaning is essential in child language acquisition and adult processing. Neural network models need to incorporate this dimension of human linguistic behavior. I introduce the Dual-path model of sentence production and syntactic development which is able to represent semantics and learns from exposure to sentences paired with their meaning (cf. Chang et al. 2006). I explain the architecture of this model, motivate critical assumptions behind its design, and discuss existing research using this model. The fourth chapter describes and compares several extensions of the basic architecture to accommodate the processing of multi-clause utterances. These extensions are evaluated against computational desiderata, such as good learning and generalization performance and the parsimony of input representations. A single-best solution for encoding the meaning of complex sentences with restrictive relative clauses is identified, which forms the basis for all subsequent simulations. Chapter five analyzes the learning dynamics in more detail. I first examine the model's behavior for different relative clause types. Syntactic alternations prove to be particularly difficult to learn because they complicate the meaning-to-form mapping the model has to acquire. In the second part, I probe the internal representations the model has developed during learning. It is argued that the model acquires the argument structure of the construction types in its input language and represents the hierarchical organization of distinct multi-clause utterances. The juice of this thesis is contained in chapters six to eight. 
In chapter six, I test the Dual-path model's generalization capacities in a variety of tasks. I show that its syntactic representations are sufficiently transparent to allow structural generalization to novel complex utterances. Semantic similarities between novel and familiar sentence types play a critical role in this task. The Dual-path model also has a capacity for generalizing familiar words to novel slots in novel constructions (strong semantic systematicity). Moreover, I identify learning conditions under which the model displays recursive productivity. It is argued that the model's behavior is consistent with human behavior in that production accuracy degrades with depth of embedding, and right-branching is learned faster than center-embedding recursion. In chapter seven, I address the issue of learning complex polar interrogatives in the absence of positive exemplars in the input. I show that the Dual-path model can acquire the syntax of these questions from simpler and similar structures which are warranted in a child's linguistic environment. The model's errors closely match children's errors, and it is suggested that children might not require an innate learning bias to acquire auxiliary fronting. Since the model does not implement a traditional kind of language-specific universal grammar, these results are relevant to the poverty of the stimulus debate. English relative clause constructions give rise to similar performance orderings in adult processing and child language acquisition. This pattern matches the typological universal called the noun phrase accessibility hierarchy. I propose an input-based explanation of this data in chapter eight. The Dual-path model displays this ordering in syntactic development when exposed to plausible input distributions. But it is possible to manipulate and completely remove the ordering by varying properties of the input from which the model learns. This indicates, I argue, that patterns of interference and facilitation among input structures can explain the hierarchy when all structures are simultaneously learned and represented over a single set of connection weights. Finally, I draw conclusions from this work, address some unanswered questions, and give a brief outlook on how this research might be continued.

    Additional information

    http://dare.uva.nl/record/328271
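
    For readers unfamiliar with the connectionist machinery discussed in this thesis, the snippet below sketches a single forward step of a simple recurrent (Elman-style) network in NumPy. It is a generic illustration only; the Dual-path model itself additionally routes a separate meaning ("message") pathway into production, which is not shown here, and all sizes are arbitrary toy values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    vocab, hidden = 8, 16                         # toy sizes, chosen arbitrarily
    W_in = rng.normal(0, 0.1, (hidden, vocab))    # input word -> hidden
    W_rec = rng.normal(0, 0.1, (hidden, hidden))  # previous hidden (context) -> hidden
    W_out = rng.normal(0, 0.1, (vocab, hidden))   # hidden -> next-word scores

    def step(word_onehot, context):
        """One timestep: combine the current word with the previous hidden
        state (context) and output a probability distribution over the next word."""
        h = np.tanh(W_in @ word_onehot + W_rec @ context)
        logits = W_out @ h
        probs = np.exp(logits) / np.exp(logits).sum()
        return probs, h

    context = np.zeros(hidden)
    word = np.eye(vocab)[3]             # one-hot vector for an arbitrary word
    probs, context = step(word, context)
    print(probs.round(3), probs.sum())  # next-word probabilities, summing to 1
    ```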
  • Fitz, H., & Chang, F. (2009). Syntactic generalization in a connectionist model of sentence production. In J. Mayor, N. Ruh, & K. Plunkett (Eds.), Connectionist models of behaviour and cognition II: Proceedings of the 11th Neural Computation and Psychology Workshop (pp. 289-300). River Edge, NJ: World Scientific Publishing.

    Abstract

    We present a neural-symbolic learning model of sentence production which displays strong semantic systematicity and recursive productivity. Using this model, we provide evidence for the data-driven learnability of complex yes/no questions.
  • FitzPatrick, I., & Indefrey, P. (2014). Head start for target language in bilingual listening. Brain Research, 1542, 111-130. doi:10.1016/j.brainres.2013.10.014.

    Abstract

    In this study we investigated the availability of non-target language semantic features in bilingual speech processing. We recorded EEG from Dutch-English bilinguals who listened to spoken sentences in their L2 (English) or L1 (Dutch). In Experiments 1 and 3 the sentences contained an interlingual homophone. The sentence context was either biased towards the target language meaning of the homophone (target biased), the non-target language meaning (non-target biased), or neither meaning of the homophone (fully incongruent). These conditions were each compared to a semantically congruent control condition. In L2 sentences we observed an N400 in the non-target biased condition that had an earlier offset than the N400 to fully incongruent homophones. In the target biased condition, a negativity emerged that was later than the N400 to fully incongruent homophones. In L1 contexts, neither target biased nor non-target biased homophones yielded significant N400 effects (compared to the control condition). In Experiments 2 and 4 the sentences contained a language switch to a non-target language word that could be semantically congruent or incongruent. Semantically incongruent words (switched, and non-switched) elicited an N400 effect. The N400 to semantically congruent language-switched words had an earlier offset than the N400 to incongruent words. Both congruent and incongruent language switches elicited a Late Positive Component (LPC). These findings show that bilinguals activate both meanings of interlingual homophones irrespective of their contextual fit. In L2 contexts, the target-language meaning of the homophone has a head start over the non-target language meaning. The target-language head start is also evident for language switches from both L2-to-L1 and L1-to-L2.
  • Flecken, M., von Stutterheim, C., & Carroll, M. (2014). Grammatical aspect influences motion event perception: Evidence from a cross-linguistic non-verbal recognition task. Language and Cognition, 6(1), 45-78. doi:10.1017/langcog.2013.2.

    Abstract

    Using eye-tracking as a window on cognitive processing, this study investigates language effects on attention to motion events in a non-verbal task. We compare gaze allocation patterns by native speakers of German and Modern Standard Arabic (MSA), two languages that differ with regard to the grammaticalization of temporal concepts. Findings of the non-verbal task, in which speakers watch dynamic event scenes while performing an auditory distracter task, are compared to gaze allocation patterns which were obtained in an event description task, using the same stimuli. We investigate whether differences in the grammatical aspectual systems of German and MSA affect the extent to which endpoints of motion events are linguistically encoded and visually processed in the two tasks. In the linguistic task, we find clear language differences in endpoint encoding and in the eye-tracking data (attention to event endpoints) as well: German speakers attend to and linguistically encode endpoints more frequently than speakers of MSA. The fixation data in the non-verbal task show similar language effects, providing relevant insights with regard to the language-and-thought debate. The present study is one of the few studies that focus explicitly on language effects related to grammatical concepts, as opposed to lexical concepts.
  • Floyd, S. (2014). 'We' as social categorization in Cha’palaa: A language of Ecuador. In T.-S. Pavlidou (Ed.), Constructing collectivity: 'We' across languages and contexts (pp. 135-158). Amsterdam: Benjamins.

    Abstract

    This chapter connects the grammar of the first person collective pronoun in the Cha’palaa language of Ecuador with its use in interaction for collective reference and social category membership attribution, addressing the problem posed by the fact that non-singular pronouns do not have distributional semantics (“speakers”) but are rather associational (“speaker and relevant associates”). It advocates a cross-disciplinary approach that jointly considers elements of linguistic form, situated usages of those forms in instances of interaction, and the broader ethnographic context of those instances. Focusing on large-scale and relatively stable categories such as racial and ethnic groups, it argues that looking at how speakers categorize themselves and others in the speech situation by using pronouns provides empirical data on the status of macro-social categories for members of a society.

  • Floyd, S. (2014). [Review of the book Flexible word classes: Typological studies of underspecified parts of speech ed. by Jan Rijkhoff and Eva van Lier]. Linguistics, 52, 1499-1502. doi:10.1515/ling-2014-0027.
  • Floyd, S. (2014). Four types of reduplication in the Cha'palaa language of Ecuador. In H. van der Voort, & G. Goodwin Gómez (Eds.), Reduplication in Indigenous Languages of South America (pp. 77-114). Leiden: Brill.
  • Floyd, S. (2009). Nexos históricos, gramaticales y culturales de los números en cha'palaa [Historical, grammatical and cultural connections of Cha'palaa numerals]. In Proceedings of the Conference on Indigenous Languages of Latin America (CILLA) -IV.

    Abstract

    The South American languages have diverse types of numeral systems, from systems of just two or three terms in some Amazonian languages to systems extending into the thousands. A look at the system of the Cha'palaa language of Ecuador demonstrates base-2, base-5, base-10 and base-20 features, linked to different stages of change, development and language contact. Learning about these stages permits us to propose some correlations between them and what we know about the history of cultural contact in the region.
  • Foley, W., & Van Valin Jr., R. D. (2009). Functional syntax and universal grammar (Repr.). Cambridge University Press.

    Abstract

    The key argument of this book, originally published in 1984, is that when human beings communicate with each other by means of a natural language they typically do not do so in simple sentences but rather in connected discourse - complex expressions made up of a number of clauses linked together in various ways. A necessary precondition for intelligible discourse is the speaker’s ability to signal the temporal relations between the events that are being discussed and to refer to the participants in those events in such a way that it is clear who is being talked about. A great deal of the grammatical machinery in a language is devoted to this task, and Functional Syntax and Universal Grammar explores how different grammatical systems accomplish it. This book is an important attempt to integrate the study of linguistic form with the study of language use and meaning. It will be of particular interest to field linguists and those concerned with typology and language universals, and also to anthropologists involved in the study of language function.
  • Folia, V., & Petersson, K. M. (2014). Implicit structured sequence learning: An fMRI study of the structural mere-exposure effect. Frontiers in Psychology, 5: 41. doi:10.3389/fpsyg.2014.00041.

    Abstract

    In this event-related FMRI study we investigated the effect of five days of implicit acquisition on preference classification by means of an artificial grammar learning (AGL) paradigm based on the structural mere-exposure effect and preference classification using a simple right-linear unification grammar. This allowed us to investigate implicit AGL in a proper learning design by including baseline measurements prior to grammar exposure. After five days of implicit acquisition, the FMRI results showed activations in a network of brain regions including the inferior frontal (centered on BA 44/45) and the medial prefrontal regions (centered on BA 8/32). Importantly, and central to this study, the inclusion of a naive preference FMRI baseline measurement allowed us to conclude that these FMRI findings were the intrinsic outcomes of the learning process itself and not a reflection of a preexisting functionality recruited during classification, independent of acquisition. Support for the implicit nature of the knowledge utilized during preference classification on day 5 comes from the fact that the basal ganglia, associated with implicit procedural learning, were activated during classification, while the medial temporal lobe system, associated with explicit declarative memory, was consistently deactivated. Thus, preference classification in combination with structural mere-exposure can be used to investigate structural sequence processing (syntax) in unsupervised AGL paradigms with proper learning designs.
  • Folia, V., Forkstam, C., Hagoort, P., & Petersson, K. M. (2009). Language comprehension: The interplay between form and content. In N. Taatgen, & H. van Rijn (Eds.), Proceedings of the 31th Annual Conference of the Cognitive Science Society (pp. 1686-1691). Austin, TX: Cognitive Science Society.

    Abstract

    In a 2x2 event-related FMRI study we find support for the idea that the inferior frontal cortex, centered on Broca’s region and its homologue, is involved in constructive unification operations during the structure-building process in parsing for comprehension. Tentatively, we provide evidence for a role of the dorsolateral prefrontal cortex centered on BA 9/46 in the control component of the language system. Finally, the left temporo-parietal cortex, in the vicinity of Wernicke’s region, supports the interaction between the syntax of gender agreement and sentence-level semantics.
  • Forkel, S. J., Thiebaut de Schotten, M., Dell’Acqua, F., Kalra, L., Murphy, D. G. M., Williams, S. C. R., & Catani, M. (2014). Anatomical predictors of aphasia recovery: a tractography study of bilateral perisylvian language networks. Brain, 137, 2027-2039. doi:10.1093/brain/awu113.

    Abstract

    Stroke-induced aphasia is associated with adverse effects on quality of life and the ability to return to work. For patients and clinicians the possibility of relying on valid predictors of recovery is an important asset in the clinical management of stroke-related impairment. Age, level of education, type and severity of initial symptoms are established predictors of recovery. However, anatomical predictors are still poorly understood. In this prospective longitudinal study, we intended to assess anatomical predictors of recovery derived from diffusion tractography of the perisylvian language networks. Our study focused on the arcuate fasciculus, a language pathway composed of three segments connecting Wernicke’s to Broca’s region (i.e. long segment), Wernicke’s to Geschwind’s region (i.e. posterior segment) and Broca’s to Geschwind’s region (i.e. anterior segment). In our study we were particularly interested in understanding how lateralization of the arcuate fasciculus impacts on severity of symptoms and their recovery. Sixteen patients (10 males; mean age 60 ± 17 years, range 28–87 years) underwent post stroke language assessment with the Revised Western Aphasia Battery and neuroimaging scanning within a fortnight from symptoms onset. Language assessment was repeated at 6 months. Backward elimination analysis identified a subset of predictor variables (age, sex, lesion size) to be introduced to further regression analyses. A hierarchical regression was conducted with the longitudinal aphasia severity as the dependent variable. The first model included the subset of variables as previously defined. The second model additionally introduced the left and right arcuate fasciculus (separate analysis for each segment). Lesion size was identified as the only independent predictor of longitudinal aphasia severity in the left hemisphere [beta = −0.630, t(−3.129), P = 0.011]. For the right hemisphere, age [beta = −0.678, t(–3.087), P = 0.010] and volume of the long segment of the arcuate fasciculus [beta = 0.730, t(2.732), P = 0.020] were predictors of longitudinal aphasia severity. Adding the volume of the right long segment to the first-level model increased the overall predictive power of the model from 28% to 57% [F(1,11) = 7.46, P = 0.02]. These findings suggest that different predictors of recovery are at play in the left and right hemisphere. The right hemisphere language network seems to be important in aphasia recovery after left hemispheric stroke.

    Additional information

    supplementary information
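
    As a rough illustration of the kind of hierarchical (nested-model) regression reported in the Forkel et al. (2014) abstract above, the sketch below compares two nested linear models with statsmodels. The column names, the synthetic data, and the model terms are placeholders invented for the example; they are not the study's variables, values, or exact analysis.

```python
# Sketch of a hierarchical (nested-model) regression of the kind described in
# the abstract above. All column names, values, and model terms are invented
# placeholders, not the study's variables or data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 16
df = pd.DataFrame({
    "aphasia_severity": rng.normal(size=n),        # longitudinal outcome (placeholder)
    "age": rng.normal(60, 17, size=n),
    "sex": rng.integers(0, 2, size=n),
    "lesion_size": rng.normal(size=n),
    "right_long_segment": rng.normal(size=n),      # tract volume (placeholder)
})

# Model 1: demographic and lesion covariates only.
m1 = ols("aphasia_severity ~ age + sex + lesion_size", data=df).fit()
# Model 2: additionally includes the right long-segment volume.
m2 = ols("aphasia_severity ~ age + sex + lesion_size + right_long_segment", data=df).fit()

# F-test on the change in explained variance between the nested models.
print(sm.stats.anova_lm(m1, m2))
print(f"R-squared change: {m2.rsquared - m1.rsquared:.3f}")
```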
  • Forkel, S. J. (2014). Identification of anatomical predictors of language recovery after stroke with diffusion tensor imaging. PhD Thesis, King's College London, London.

    Abstract

    Background: Stroke-induced aphasia is associated with adverse effects on quality of life and the ability to return to work. However, the predictors of recovery are still poorly understood. Anatomical variability of the arcuate fasciculus, connecting Broca’s and Wernicke’s areas, has been reported in the healthy population using diffusion tensor imaging tractography. In about 40% of the population the arcuate fasciculus is bilateral and this pattern is advantageous for certain language related functions, such as auditory verbal learning (Catani et al. 2007). Methods: In this prospective longitudinal study, anatomical predictors of post-stroke aphasia recovery were investigated using diffusion tractography and arterial spin labelling. Patients: A cohort of 18 aphasia patients with first-ever unilateral left hemispheric middle cerebral artery infarcts underwent post-stroke language (mean 5±5 days) and neuroimaging (mean 10±6 days) assessments and neuropsychological follow-up at six months. Ten of these patients were available for reassessment one year after symptom onset. Aphasia was assessed with the Western Aphasia Battery, which provides a global measure of severity (Aphasia Quotient, AQ). Results: Better recovery from aphasia was observed in patients with a right arcuate fasciculus [beta=.730, t(2.732), p=.020] (tractography) and increased fractional anisotropy in the right hemisphere (p<0.05) (tract-based spatial statistics). Further, an increase in left hemisphere perfusion was observed after one year (p<0.01) (perfusion). Lesion analysis identified maximal overlay in the periinsular white matter (WM). Lesion-symptom mapping identified damage to periinsular structures as predictive of overall aphasia severity and damage to frontal lobe white matter as predictive of repetition deficits. Conclusion: These findings suggest an important role for the right hemisphere language network in recovery from aphasia after left hemispheric stroke.

    Additional information

    Link to repository
  • Forkel, S. J., Thiebaut de Schotten, M., Kawadler, J. M., Dell'Acqua, F., Danek, A., & Catani, M. (2014). The anatomy of fronto-occipital connections from early blunt dissections to contemporary tractography. Cortex, 56, 73-84. doi:10.1016/j.cortex.2012.09.005.

    Abstract

    The occipital and frontal lobes are anatomically distant yet functionally highly integrated to generate some of the most complex behaviour. A series of long associative fibres, such as the fronto-occipital networks, mediate this integration via rapid feed-forward propagation of visual input to anterior frontal regions and direct top–down modulation of early visual processing.

    Despite the vast number of anatomical investigations, a general consensus on the anatomy of fronto-occipital connections is not forthcoming. For example, the existence in the monkey of an equivalent of the human ‘inferior fronto-occipital fasciculus’ (iFOF) has not been demonstrated. Conversely, a ‘superior fronto-occipital fasciculus’ (sFOF), also referred to as ‘subcallosal bundle’ by some authors, is reported in monkey axonal tracing studies but not in human dissections.

    In this study our aim is twofold. First, we use diffusion tractography to delineate the in vivo anatomy of the sFOF and the iFOF in 30 healthy subjects and three acallosal brains. Second, we provide a comprehensive review of the post-mortem and neuroimaging studies of the fronto-occipital connections published over the last two centuries, together with the first integral translation of Onufrowicz's original description of a human fronto-occipital fasciculus (1887) and Muratoff's report of the ‘subcallosal bundle’ in animals (1893).

    Our tractography dissections suggest that in the human brain (i) the iFOF is a bilateral association pathway connecting ventro-medial occipital cortex to orbital and polar frontal cortex, (ii) the sFOF overlaps with branches of the superior longitudinal fasciculus (SLF) and probably represents an ‘occipital extension’ of the SLF, (iii) the subcallosal bundle of Muratoff is probably a complex tract encompassing ascending thalamo-frontal and descending fronto-caudate connections and is therefore a projection rather than an associative tract.

    In conclusion, our experimental findings and review of the literature suggest that a ventral pathway in humans, namely the iFOF, mediates a direct communication between occipital and frontal lobes. Whether the iFOF represents a unique human pathway awaits further ad hoc investigations in animals.
  • Forkstam, C., Jansson, A., Ingvar, M., & Petersson, K. M. (2009). Modality transfer of acquired structural regularities: A preference for an acoustic route. In N. Taatgen, & H. Van Rijn (Eds.), Proceedings of the 31th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.

    Abstract

    Human implicit learning can be investigated with implicit artificial grammar learning, a simple model for aspects of natural language acquisition. In this paper we investigate the remaining effect of modality transfer in syntactic classification of an acquired grammatical sequence structure after implicit grammar acquisition. Participants practiced either on acoustically presented syllable sequences or visually presented consonant letter sequences. During classification we independently manipulated the statistical frequency-based and rule-based characteristics of the classification stimuli. Participants performed reliably above chance on the within modality classification task although more so for those working on syllable sequence acquisition. These subjects were also the only group that kept a significant performance level in transfer classification. We speculate that this finding is of particular relevance in consideration of an ecological validity in the input signal in the use of artificial grammar learning and in language learning paradigms at large.
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2014). Audiovisual temporal sensitivity in typical and dyslexic adult readers. In Proceedings of the 15th Annual Conference of the International Speech Communication Association (INTERSPEECH 2014) (pp. 2575-2579).

    Abstract

    Reading is an audiovisual process that requires the learning of systematic links between graphemes and phonemes. It is thus possible that reading impairments reflect an audiovisual processing deficit. In this study, we compared audiovisual processing in adults with developmental dyslexia and adults without reading difficulties. We focused on differences in cross-modal temporal sensitivity both for speech and for non-speech events. When compared to adults without reading difficulties, adults with developmental dyslexia presented a wider temporal window in which unsynchronized speech events were perceived as synchronized. No differences were found between groups for the non-speech events. These results suggest a deficit in dyslexia in the perception of cross-modal temporal synchrony for speech events.
  • Francks, C., Fisher, S. E., MacPhie, I. L., Richardson, A. J., Marlow, A. J., Stein, J. F., & Monaco, A. P. (2002). A genomewide linkage screen for relative hand skill in sibling pairs. American Journal of Human Genetics, 70(3), 800-805. doi:10.1086/339249.

    Abstract

    Genomewide quantitative-trait locus (QTL) linkage analysis was performed using a continuous measure of relative hand skill (PegQ) in a sample of 195 reading-disabled sibling pairs from the United Kingdom. This was the first genomewide screen for any measure related to handedness. The mean PegQ in the sample was equivalent to that of normative data, and PegQ was not correlated with tests of reading ability (correlations between −0.13 and 0.05). Relative hand skill could therefore be considered normal within the sample. A QTL on chromosome 2p11.2-12 yielded strong evidence for linkage to PegQ (empirical P=.00007), and another suggestive QTL on 17p11-q23 was also identified (empirical P=.002). The 2p11.2-12 locus was further analyzed in an independent sample of 143 reading-disabled sibling pairs, and this analysis yielded an empirical P=.13. Relative hand skill therefore is probably a complex multifactorial phenotype with a heterogeneous background, but nevertheless is amenable to QTL-based gene-mapping approaches.
  • Francks, C. (2009). 13 - LRRTM1: A maternally suppressed genetic effect on handedness and schizophrenia. In I. E. C. Sommer, & R. S. Kahn (Eds.), Cerebral lateralization and psychosis (pp. 181-196). Cambridge: Cambridge University Press.

    Abstract

    The molecular, developmental, and evolutionary bases of human brain asymmetry are almost completely unknown. Genetic linkage and association mapping have pin-pointed a gene called LRRTM1 (leucine-rich repeat transmembrane neuronal 1) that may contribute to variability in human handedness. Here I describe how LRRTM1's involvement in handedness was discovered, and also the latest knowledge of its functions in brain development and disease. The association of LRRTM1 with handedness was derived entirely from the paternally inherited gene, and follow-up analysis of gene expression confirmed that LRRTM1 is one of a small number of genes that are imprinted in the human genome, for which the maternally inherited copy is suppressed. The same variation at LRRTM1 that was associated paternally with mixed-/left-handedness was also over-transmitted paternally to schizophrenic patients in a large family study.
    LRRTM1 is expressed in specific regions of the developing and adult forebrain by post-mitotic neurons, and the protein may be involved in axonal trafficking. Thus LRRTM1 has a probable role in neurodevelopment, and its association with handedness suggests that one of its functions may be in establishing or consolidating human brain asymmetry.
    LRRTM1 is the first gene for which allelic variation has been associated with human handedness. The genetic data also suggest indirectly that the epigenetic regulation of this gene may yet prove more important than DNA sequence variation for influencing brain development and disease.
    Intriguingly, the parent-of-origin activity of LRRTM1 suggests that men and women have had conflicting interests in relation to the outcome of lateralized brain development in their offspring.
  • Francks, C., Fisher, S. E., Olson, R. K., Pennington, B. F., Smith, S. D., DeFries, J. C., & Monaco, A. P. (2002). Fine mapping of the chromosome 2p12-16 dyslexia susceptibility locus: Quantitative association analysis and positional candidate genes SEMA4F and OTX1. Psychiatric Genetics, 12(1), 35-41.

    Abstract

    A locus on chromosome 2p12-16 has been implicated in dyslexia susceptibility by two independent linkage studies, including our own study of 119 nuclear twin-based families, each with at least one reading-disabled child. Nonetheless, no variant of any gene has been reported to show association with dyslexia, and no consistent clinical evidence exists to identify candidate genes with any strong a priori logic. We used 21 microsatellite markers spanning 2p12-16 to refine our 1-LOD unit linkage support interval to 12cM between D2S337 and D2S286. Then, in quantitative association analysis, two microsatellites yielded P values<0.05 across a range of reading-related measures (D2S2378 and D2S2114). The exon/intron borders of two positional candidate genes within the region were characterized, and the exons were screened for polymorphisms. The genes were Semaphorin4F (SEMA4F), which encodes a protein involved in axonal growth cone guidance, and OTX1, encoding a homeodomain transcription factor involved in forebrain development. Two non-synonymous single nucleotide polymorphisms were found in SEMA4F, each with a heterozygosity of 0.03. One intronic single nucleotide polymorphism between exons 12 and 13 of SEMA4F was tested for quantitative association, but no significant association was found. Only one single nucleotide polymorphism was found in OTX1, which was exonic but silent. Our data therefore suggest that linkage with reading disability at 2p12-16 is not caused by coding variants of SEMA4F or OTX1. Our study outlines the approach necessary for the identification of genetic variants causing dyslexia susceptibility in an epidemiological population of dyslexics.
  • Francks, C. (2019). In search of the biological roots of typical and atypical human brain asymmetry. Physics of Life Reviews, 30, 22-24. doi:10.1016/j.plrev.2019.07.004.
  • Francks, C. (2019). The genetic bases of brain lateralization. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 595-608). Cambridge, MA: MIT Press.
  • Francks, C., MacPhie, I. L., & Monaco, A. P. (2002). The genetic basis of dyslexia. The Lancet Neurology, 1(8), 483-490. doi:10.1016/S1474-4422(02)00221-1.

    Abstract

    Dyslexia, a disorder of reading and spelling, is a heterogeneous neurological syndrome with a complex genetic and environmental aetiology. People with dyslexia differ in their individual profiles across a range of cognitive, physiological, and behavioural measures related to reading disability. Some or all of the subtypes of dyslexia might have partly or wholly distinct genetic causes. An understanding of the role of genetics in dyslexia could help to diagnose and treat susceptible children more effectively and rapidly than is currently possible and in ways that account for their individual disabilities. This knowledge will also give new insights into the neurobiology of reading and language cognition. Genetic linkage analysis has identified regions of the genome that might harbour inherited variants that cause reading disability. In particular, loci on chromosomes 6 and 18 have shown strong and replicable effects on reading abilities. These genomic regions contain tens or hundreds of candidate genes, and studies aimed at the identification of the specific causal genetic variants are underway.
  • Francks, C. (2009). Understanding the genetics of behavioural and psychiatric traits will only be achieved through a realistic assessment of their complexity. Laterality: Asymmetries of Body, Brain and Cognition, 14(1), 11-16. doi:10.1080/13576500802536439.

    Abstract

    Francks et al. (2007) performed a recent study in which the first putative genetic effect on human handedness was identified (the imprinted locus LRRTM1 on human chromosome 2). In this issue of Laterality, Tim Crow and colleagues present a critique of that study. The present paper presents a personal response to that critique which argues that Francks et al. (2007) published a substantial body of evidence implicating LRRTM1 in handedness and schizophrenia. Progress will now be achieved by others trying to validate, refute, or extend those findings, rather than by further armchair discussion.
  • Frank, S. L., Monaghan, P., & Tsoukala, C. (2019). Neural network models of language acquisition and processing. In P. Hagoort (Ed.), Human language: From genes and brain to behavior (pp. 277-293). Cambridge, MA: MIT Press.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2019). Consistency influences altered auditory feedback processing. Quarterly Journal of Experimental Psychology, 72(10), 2371-2379. doi:10.1177/1747021819838939.

    Abstract

    Previous research on the effect of perturbed auditory feedback in speech production has focused on two types of responses. In the short term, speakers generate compensatory motor commands in response to unexpected perturbations. In the longer term, speakers adapt feedforward motor programmes in response to feedback perturbations, to avoid future errors. The current study investigated the relation between these two types of responses to altered auditory feedback. Specifically, it was hypothesised that consistency in previous feedback perturbations would influence whether speakers adapt their feedforward motor programmes. In an altered auditory feedback paradigm, formant perturbations were applied either across all trials (the consistent condition) or only to some trials, whereas the others remained unperturbed (the inconsistent condition). The results showed that speakers’ responses were affected by feedback consistency, with stronger speech changes in the consistent condition compared with the inconsistent condition. Current models of speech-motor control can explain this consistency effect. However, the data also suggest that compensation and adaptation are distinct processes, which are not in line with all current models.
  • Fransson, P., Merboldt, K.-D., Petersson, K. M., Ingvar, M., & Frahm, J. (2002). On the effects of spatial filtering — A comparative fMRI study of episodic memory encoding at high and low resolution. NeuroImage, 16(4), 977-984. doi:10.1006/nimg.2002.1079.

    Abstract

    The effects of spatial filtering in functional magnetic resonance imaging were investigated by reevaluating the data of a previous study of episodic memory encoding at 2 × 2 × 4 mm³ resolution with use of an SPM99 analysis involving a Gaussian kernel of 8-mm full width at half maximum. In addition, a multisubject analysis of activated regions was performed by normalizing the functional images to an approximate Talairach brain atlas. In individual subjects, spatial filtering merged activations in anatomically separated brain regions. Moreover, small foci of activated pixels which originated from veins became blurred and hence indistinguishable from parenchymal responses. The multisubject analysis resulted in activation of the hippocampus proper, a finding which could not be confirmed by the activation maps obtained at high resolution. It is concluded that the validity of multisubject fMRI analyses can be considerably improved by first analyzing individual data sets at optimum resolution to assess the effects of spatial filtering and minimize the risk of signal contamination by macroscopically visible vessels.
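
    The spatial filtering discussed in the abstract above amounts to convolving each volume with a Gaussian kernel of a given full width at half maximum (FWHM). The sketch below, which assumes a SciPy-based pipeline rather than SPM99 itself and uses a synthetic volume, shows how an 8 mm FWHM translates into per-axis filter widths for 2 × 2 × 4 mm voxels.

```python
# Sketch of Gaussian spatial filtering at a fixed FWHM, as in SPM-style fMRI
# preprocessing. The volume is synthetic noise; only the FWHM and voxel sizes
# are taken from the abstract above.
import numpy as np
from scipy.ndimage import gaussian_filter

fwhm_mm = 8.0                                  # kernel full width at half maximum
voxel_size_mm = np.array([2.0, 2.0, 4.0])      # 2 x 2 x 4 mm acquisition

# FWHM = 2 * sqrt(2 * ln 2) * sigma, so convert to a per-axis sigma in voxels.
sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
sigma_vox = sigma_mm / voxel_size_mm

volume = np.random.default_rng(0).normal(size=(96, 96, 24))
smoothed = gaussian_filter(volume, sigma=sigma_vox)
print(np.round(sigma_vox, 2))                  # approx. [1.70 1.70 0.85] voxels
```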
  • Frega, M., Linda, K., Keller, J. M., Gümüş-Akay, G., Mossink, B., Van Rhijn, J. R., Negwer, M., Klein Gunnewiek, T., Foreman, K., Kompier, N., Schoenmaker, C., Van den Akker, W., Van der Werf, I., Oudakker, A., Zhou, H., Kleefstra, T., Schubert, D., Van Bokhoven, H., & Nadif Kasri, N. (2019). Neuronal network dysfunction in a model for Kleefstra syndrome mediated by enhanced NMDAR signaling. Nature Communications, 10: 4928. doi:10.1038/s41467-019-12947-3.

    Abstract

    Kleefstra syndrome (KS) is a neurodevelopmental disorder caused by mutations in the histone methyltransferase EHMT1. To study the impact of decreased EHMT1 function in human cells, we generated excitatory cortical neurons from induced pluripotent stem (iPS) cells derived from KS patients. Neuronal networks of patient-derived cells exhibit network bursting with a reduced rate, longer duration, and increased temporal irregularity compared to control networks. We show that these changes are mediated by upregulation of NMDA receptor (NMDAR) subunit 1 correlating with reduced deposition of the repressive H3K9me2 mark, the catalytic product of EHMT1, at the GRIN1 promoter. In mice EHMT1 deficiency leads to similar neuronal network impairments with increased NMDAR function. Finally, we rescue the KS patient-derived neuronal network phenotypes by pharmacological inhibition of NMDARs. Summarized, we demonstrate a direct link between EHMT1 deficiency and NMDAR hyperfunction in human neurons, providing a potential basis for more targeted therapeutic approaches for KS.

    Additional information

    supplementary information
  • French, C. A., Vinueza Veloz, M. F., Zhou, K., Peter, S., Fisher, S. E., Costa, R. M., & De Zeeuw, C. I. (2019). Differential effects of Foxp2 disruption in distinct motor circuits. Molecular Psychiatry, 24, 447-462. doi:10.1038/s41380-018-0199-x.

    Abstract

    Disruptions of the FOXP2 gene cause a speech and language disorder involving difficulties in sequencing orofacial movements. FOXP2 is expressed in cortico-striatal and cortico-cerebellar circuits important for fine motor skills, and affected individuals show abnormalities in these brain regions. We selectively disrupted Foxp2 in the cerebellar Purkinje cells, striatum or cortex of mice and assessed the effects on skilled motor behaviour using an operant lever-pressing task. Foxp2 loss in each region impacted behaviour differently, with striatal and Purkinje cell disruptions affecting the variability and the speed of lever-press sequences, respectively. Mice lacking Foxp2 in Purkinje cells showed a prominent phenotype involving slowed lever pressing as well as deficits in skilled locomotion. In vivo recordings from Purkinje cells uncovered an increased simple spike firing rate and decreased modulation of firing during limb movements. This was caused by increased intrinsic excitability rather than changes in excitatory or inhibitory inputs. Our findings show that Foxp2 can modulate different aspects of motor behaviour in distinct brain regions, and uncover an unknown role for Foxp2 in the modulation of Purkinje cell activity that severely impacts skilled movements.
  • French, C. A., & Fisher, S. E. (2014). What can mice tell us about Foxp2 function? Current Opinion in Neurobiology, 28, 72-79. doi:10.1016/j.conb.2014.07.003.

    Abstract

    Disruptions of the FOXP2 gene cause a rare speech and language disorder, a discovery that has opened up novel avenues for investigating the relevant neural pathways. FOXP2 shows remarkably high conservation of sequence and neural expression in diverse vertebrates, suggesting that studies in other species are useful in elucidating its functions. Here we describe how investigations of mice that carry disruptions of Foxp2 provide insights at multiple levels: molecules, cells, circuits and behaviour. Work thus far has implicated the gene in key processes including neurite outgrowth, synaptic plasticity, sensorimotor integration and motor-skill learning.
  • Friedlaender, J., Hunley, K., Dunn, M., Terrill, A., Lindström, E., Reesink, G., & Friedlaender, F. (2009). Linguistics more robust than genetics [Letter to the editor]. Science, 324, 464-465. doi:10.1126/science.324_464c.
  • Frost, R. L. A., Isbilen, E. S., Christiansen, M. H., & Monaghan, P. (2019). Testing the limits of non-adjacent dependency learning: Statistical segmentation and generalisation across domains. In A. K. Goel, C. M. Seifert, & C. Freksa (Eds.), Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci 2019) (pp. 1787-1793). Montreal, QB: Cognitive Science Society.

    Abstract

    Achieving linguistic proficiency requires identifying words from speech, and discovering the constraints that govern the way those words are used. In a recent study of non-adjacent dependency learning, Frost and Monaghan (2016) demonstrated that learners may perform these tasks together, using similar statistical processes - contrary to prior suggestions. However, in their study, non-adjacent dependencies were marked by phonological cues (plosive-continuant-plosive structure), which may have influenced learning. Here, we test the necessity of these cues by comparing learning across three conditions; fixed phonology, which contains these cues, varied phonology, which omits them, and shapes, which uses visual shape sequences to assess the generality of statistical processing for these tasks. Participants segmented the sequences and generalized the structure in both auditory conditions, but learning was best when phonological cues were present. Learning was around chance on both tasks for the visual shapes group, indicating statistical processing may critically differ across domains.
  • Frost, R. L. A., Monaghan, P., & Christiansen, M. H. (2019). Mark my words: High frequency marker words impact early stages of language learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 45(10), 1883-1898. doi:10.1037/xlm0000683.

    Abstract

    High frequency words have been suggested to benefit both speech segmentation and grammatical categorization of the words around them. Despite utilizing similar information, these tasks are usually investigated separately in studies examining learning. We determined whether including high frequency words in continuous speech could support categorization when words are being segmented for the first time. We familiarized learners with continuous artificial speech comprising repetitions of target words, which were preceded by high-frequency marker words. Crucially, marker words distinguished targets into 2 distributionally defined categories. We measured learning with segmentation and categorization tests and compared performance against a control group that heard the artificial speech without these marker words (i.e., just the targets, with no cues for categorization). Participants segmented the target words from speech in both conditions, but critically when the marker words were present, they influenced acquisition of word-referent mappings in a subsequent transfer task, with participants demonstrating better early learning for mappings that were consistent (rather than inconsistent) with the distributional categories. We propose that high-frequency words may assist early grammatical categorization, while speech segmentation is still being learned.

    Additional information

    Supplemental Material
  • Frost, R. (2014). Learning grammatical structures with and without sleep. PhD Thesis, Lancaster University, Lancaster.
  • Fuhrmann, D., Ravignani, A., Marshall-Pescini, S., & Whiten, A. (2014). Synchrony and motor mimicking in chimpanzee observational learning. Scientific Reports, 4: 5283. doi:10.1038/srep05283.

    Abstract

    Cumulative tool-based culture underwrote our species' evolutionary success and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.

    Additional information

    Supplementary Information
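
    A minimal sketch of a lagged cross-correlation between two movement time series, in the spirit of the frame-by-frame analysis described in the Fuhrmann et al. (2014) abstract above. The signals, frame rate, and lag below are synthetic placeholders; the authors' actual quantification and permutation statistics are not reproduced.

```python
# Sketch of a lagged cross-correlation between two movement time series
# (model vs. observer). The signals and frame rate are synthetic placeholders,
# not the study's data or pipeline.
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(1)
fps = 25
model = rng.normal(size=500)                                  # model's movement per frame
observer = np.roll(model, 10) + 0.5 * rng.normal(size=500)    # observer lags by 10 frames

# Z-score both series so the peak value behaves like a correlation coefficient.
m = (model - model.mean()) / model.std()
o = (observer - observer.mean()) / observer.std()

xcorr = correlate(o, m, mode="full") / len(m)
lags = correlation_lags(len(o), len(m), mode="full")
best_lag = lags[np.argmax(xcorr)]
print(f"peak r = {xcorr.max():.2f} at lag {best_lag} frames ({best_lag / fps:.2f} s); "
      "a positive lag means the observer follows the model")
```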
  • Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.

    Abstract

    Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life.
  • Galbiati, A., Verga, L., Giora, E., Zucconi, M., & Ferini-Strambi, L. (2019). The risk of neurodegeneration in REM sleep behavior disorder: A systematic review and meta-analysis of longitudinal studies. Sleep Medicine Reviews, 43, 37-46. doi:10.1016/j.smrv.2018.09.008.

    Abstract

    Several studies report an association between REM Sleep Behavior Disorder (RBD) and neurodegenerative diseases, in particular synucleinopathies. Interestingly, the onset of RBD precedes the development of neurodegeneration by several years. This review and meta-analysis aims to establish the rate of conversion of RBD into neurodegenerative diseases. Longitudinal studies were searched from the PubMed, Web of Science, and SCOPUS databases. Using random-effect modeling, we performed a meta-analysis on the rate of RBD conversions into neurodegeneration. Furthermore, we fitted a Kaplan-Meier analysis and compared the differences between survival curves of different diseases with log-rank tests. The risk for developing neurodegenerative diseases was 33.5% at five years follow-up, 82.4% at 10.5 years and 96.6% at 14 years. The average conversion rate was 31.95% after a mean duration of follow-up of 4.75 ± 2.43 years. The majority of RBD patients converted to Parkinson's Disease (43%), followed by Dementia with Lewy Bodies (25%). The estimated risk for RBD patients to develop a neurodegenerative disease over a long-term follow-up is more than 90%. Future studies should include a control group for the evaluation of REM sleep without atonia as a marker for neurodegeneration in non-clinical populations as well, and should target RBD as a precursor of neurodegeneration in order to develop protective trials.
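
    The survival analyses mentioned in the abstract above (Kaplan-Meier curves and log-rank tests) can be sketched with the lifelines package as below. The follow-up durations, events, and group labels are invented placeholders, not the meta-analysis data.

```python
# Sketch of a Kaplan-Meier estimate and log-rank test of the kind mentioned in
# the abstract above, using the lifelines package. Follow-up times, events,
# and group labels are invented placeholders.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)

# Years from RBD diagnosis to conversion (event = 1) or censoring (event = 0).
t_a = rng.exponential(6.0, size=40)     # illustrative group A (e.g., converted to PD)
t_b = rng.exponential(9.0, size=25)     # illustrative group B (e.g., converted to DLB)
e_a = rng.integers(0, 2, size=40)       # some follow-ups censored
e_b = rng.integers(0, 2, size=25)

kmf = KaplanMeierFitter()
kmf.fit(t_a, event_observed=e_a, label="group A")
print(kmf.survival_function_.tail())    # estimated probability of remaining disease-free

# Log-rank test for a difference between the two survival curves.
result = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
print(f"log-rank p = {result.p_value:.3f}")
```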
  • Galke, L., Vagliano, I., & Scherp, A. (2019). Can graph neural networks go „online“? An analysis of pretraining and inference. In Proceedings of the Representation Learning on Graphs and Manifolds: ICLR2019 Workshop.

    Abstract

    Large-scale graph data in real-world applications is often not static but dynamic, i.e., new nodes and edges appear over time. Current graph convolution approaches are promising, especially when all the graph’s nodes and edges are available during training. When unseen nodes and edges are inserted after training, it is not yet evaluated whether up-training or re-training from scratch is preferable. We construct an experimental setup, in which we insert previously unseen nodes and edges after training and conduct a limited amount of inference epochs. In this setup, we compare adapting pretrained graph neural networks against retraining from scratch. Our results show that pretrained models yield high accuracy scores on the unseen nodes and that pretraining is preferable over retraining from scratch. Our experiments represent a first step to evaluate and develop truly online variants of graph neural networks.
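
    A minimal PyTorch Geometric sketch of the comparison described in the abstract above: pretrain a small graph convolutional network, then either up-train it for a few epochs after new nodes and edges are inserted or retrain from scratch with the same small budget. The toy graph, model size, and epoch counts are illustrative assumptions; the sketch only mirrors the structure of the comparison, not the paper's datasets, architecture, or evaluation protocol.

```python
# Sketch: adapt a pretrained graph neural network to newly inserted nodes/edges
# with a few "inference epochs", versus retraining from scratch with the same
# small budget. Toy graph and hyperparameters are illustrative placeholders.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

def train(model, data, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(data.x, data.edge_index), data.y)
        loss.backward()
        opt.step()
    return model

torch.manual_seed(0)
x = torch.randn(100, 16)
edge_index = torch.randint(0, 100, (2, 400))
y = torch.randint(0, 4, (100,))

# Pretraining graph: the first 80 nodes and only the edges among them.
old_edges = (edge_index < 80).all(dim=0)
data_old = Data(x=x[:80], edge_index=edge_index[:, old_edges], y=y[:80])
# Full graph after 20 previously unseen nodes and their edges are inserted.
data_full = Data(x=x, edge_index=edge_index, y=y)
new_nodes = torch.arange(100) >= 80

# (a) Pretrain on the old graph, then up-train briefly on the full graph.
uptrained = train(train(GCN(16, 32, 4), data_old, epochs=100), data_full, epochs=5)
# (b) Retrain from scratch on the full graph with the same small epoch budget.
scratch = train(GCN(16, 32, 4), data_full, epochs=5)

for name, model in [("up-trained", uptrained), ("from scratch", scratch)]:
    pred = model(data_full.x, data_full.edge_index).argmax(dim=1)
    acc = (pred[new_nodes] == data_full.y[new_nodes]).float().mean().item()
    print(f"{name}: accuracy on new nodes = {acc:.2f}")
```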
  • Galke, L., Melnychuk, T., Seidlmayer, E., Trog, S., Foerstner, K., Schultz, C., & Tochtermann, K. (2019). Inductive learning of concept representations from library-scale bibliographic corpora. In K. David, K. Geihs, M. Lange, & G. Stumme (Eds.), Informatik 2019: 50 Jahre Gesellschaft für Informatik - Informatik für Gesellschaft (pp. 219-232). Bonn: Gesellschaft für Informatik e.V. doi:10.18420/inf2019_26.
  • Ganushchak, L. Y., & Schiller, N. O. (2009). Speaking in one’s second language under time pressure: An ERP study on verbal self-monitoring in German-Dutch bilinguals. Psychophysiology, 46, 410-419. doi:10.1111/j.1469-8986.2008.00774.x.

    Abstract

    This study addresses how verbal self-monitoring and the Error-Related Negativity (ERN) are affected by time pressure when a task is performed in a second language as opposed to performance in the native language. German–Dutch bilinguals were required to perform a phoneme-monitoring task in Dutch with and without a time pressure manipulation. We obtained an ERN following verbal errors that showed an atypical increase in amplitude under time pressure. This finding is taken to suggest that under time pressure participants had more interference from their native language, which in turn led to a greater response conflict and thus enhancement of the amplitude of the ERN. This result demonstrates once more that the ERN is sensitive to psycholinguistic manipulations and suggests that the functioning of the verbal self-monitoring system during speaking is comparable to other performance monitoring, such as action monitoring.
  • Ganushchak, L., Konopka, A. E., & Chen, Y. (2014). What the eyes say about planning of focused referents during sentence formulation: a cross-linguistic investigation. Frontiers in Psychology, 5: 1124. doi:10.3389/fpsyg.2014.01124.

    Abstract

    This study investigated how sentence formulation is influenced by a preceding discourse context. In two eye-tracking experiments, participants described pictures of two-character transitive events in Dutch (Experiment 1) and Chinese (Experiment 2). Focus was manipulated by presenting questions before each picture. In the Neutral condition, participants first heard ‘What is happening here?’ In the Object or Subject Focus conditions, the questions asked about the Object or Subject character (What is the policeman stopping? Who is stopping the truck?). The target response was the same in all conditions (The policeman is stopping the truck). In both experiments, sentence formulation in the Neutral condition showed the expected pattern of speakers fixating the subject character (policeman) before the object character (truck). In contrast, in the focus conditions speakers rapidly directed their gaze preferentially only to the character they needed to encode to answer the question (the new, or focused, character). The timing of gaze shifts to the new character varied by language group (Dutch vs. Chinese): shifts to the new character occurred earlier when information in the question can be repeated in the response with the same syntactic structure (in Chinese but not in Dutch). The results show that discourse affects the timecourse of linguistic formulation in simple sentences and that these effects can be modulated by language-specific linguistic structures such as parallels in the syntax of questions and declarative sentences.
  • Ganushchak, L. Y., & Acheson, D. J. (Eds.). (2014). What's to be learned from speaking aloud? - Advances in the neurophysiological measurement of overt language production. [Research topic] [Special Issue]. Frontiers in Language Sciences. Retrieved from http://www.frontiersin.org/Language_Sciences/researchtopics/What_s_to_be_Learned_from_Spea/1671.

    Abstract

    Researchers have long avoided neurophysiological experiments of overt speech production due to the suspicion that artifacts caused by muscle activity may lead to a bad signal-to-noise ratio in the measurements. However, the need to actually produce speech may influence earlier processing and qualitatively change speech production processes and what we can infer from neurophysiological measures thereof. Recently, however, overt speech has been successfully investigated using EEG, MEG, and fMRI. The aim of this Research Topic is to draw together recent research on the neurophysiological basis of language production, with the aim of developing and extending theoretical accounts of the language production process. In this Research Topic of Frontiers in Language Sciences, we invite both experimental and review papers, as well as those about the latest methods in acquisition and analysis of overt language production data. All aspects of language production are welcome: i.e., from conceptualization to articulation during native as well as multilingual language production. Focus should be placed on using the neurophysiological data to inform questions about the processing stages of language production. In addition, emphasis should be placed on the extent to which the identified components of the electrophysiological signal (e.g., ERP/ERF, neuronal oscillations, etc.), brain areas or networks are related to language comprehension and other cognitive domains. By bringing together electrophysiological and neuroimaging evidence on language production mechanisms, a more complete picture of the locus of language production processes and their temporal and neurophysiological signatures will emerge.
  • Gao, Y., Zheng, L., Liu, X., Nichols, E. S., Zhang, M., Shang, L., Ding, G., Meng, Z., & Liu, L. (2019). First and second language reading difficulty among Chinese–English bilingual children: The prevalence and influences from demographic characteristics. Frontiers in Psychology, 10: 2544. doi:10.3389/fpsyg.2019.02544.

    Abstract

    Learning to read a second language (L2) can pose a great challenge for children who have already been struggling to read in their first language (L1). Moreover, it is not clear whether, to what extent, and under what circumstances L1 reading difficulty increases the risk of L2 reading difficulty. This study investigated Chinese (L1) and English (L2) reading skills in a large representative sample of 1,824 Chinese–English bilingual children in Grades 4 and 5 from both urban and rural schools in Beijing. We examined the prevalence of reading difficulty in Chinese only (poor Chinese readers, PC), English only (poor English readers, PE), and both Chinese and English (poor bilingual readers, PB) and calculated the co-occurrence, that is, the chances of becoming a poor reader in English given that the child was already a poor reader in Chinese. We then conducted a multinomial logistic regression analysis and compared the prevalence of PC, PE, and PB between children in Grade 4 versus Grade 5, in urban versus rural areas, and in boys versus girls. Results showed that compared to girls, boys demonstrated significantly higher risk of PC, PE, and PB. Meanwhile, compared to the 5th graders, the 4th graders demonstrated significantly higher risk of PC and PB. In addition, children enrolled in the urban schools were more likely to become better second language readers, thus leading to a concerning rural–urban gap in the prevalence of L2 reading difficulty. Finally, among these Chinese–English bilingual children, regardless of sex and school location, poor reading skill in Chinese significantly increased the risk of also being a poor English reader, with a considerable and stable co-occurrence of approximately 36%. In sum, this study suggests that despite striking differences between alphabetic and logographic writing systems, L1 reading difficulty still significantly increases the risk of L2 reading difficulty. This indicates the shared meta-linguistic skills in reading different writing systems and the importance of understanding the universality and the interdependent relationship of reading between different writing systems. Furthermore, the male disadvantage (in both L1 and L2) and the urban–rural gap (in L2) found in the prevalence of reading difficulty calls for special attention to disadvantaged populations in educational practice.
  • Gao, X., Dera, J., Nijhoff, A. D., & Willems, R. M. (2019). Is less readable liked better? The case of font readability in poetry appreciation. PLoS One, 14(12): e0225757. doi:10.1371/journal.pone.0225757.

    Abstract

    Previous research shows conflicting findings for the effect of font readability on comprehension and memory for language. It has been found that, perhaps counterintuitively, a hard-to-read font can be beneficial for language comprehension, especially for difficult language. Here we test how font readability influences the subjective experience of poetry reading. In three experiments we tested the influence of poem difficulty and font readability on the subjective experience of poems. We specifically predicted that font readability would have opposite effects on the subjective experience of easy versus difficult poems. Participants read poems which could be more or less difficult in terms of conceptual or structural aspects, and which were presented in a font that was either easy or more difficult to read. Participants read existing poems and subsequently rated their subjective experience (measured through four dependent variables: overall liking, perceived flow of the poem, perceived topic clarity, and perceived structure). In line with previous literature we observed a Poem Difficulty x Font Readability interaction effect for subjective measures of poetry reading. We found that participants rated easy poems as nicer when presented in an easy-to-read font, as compared to when presented in a hard-to-read font. Despite the presence of the interaction effect, we did not observe the predicted opposite effect for more difficult poems. We conclude that font readability can influence reading of easy and more difficult poems differentially, with strongest effects for easy poems.

    Additional information

    https://osf.io/jwcqt/
  • Garcia, N., Lenkiewicz, P., Freire, M., & Monteiro, P. (2009). A new architecture for optical burst switching networks based on cooperative control. In Proceedings of the 8th IEEE International Symposium on Network Computing and Applications (IEEE NCA09) (pp. 310-313).

    Abstract

    This paper presents a new architecture for optical burst switched networks where the control plane of the network functions in a cooperative manner. Each node interprets the data conveyed by the control packet and forwards it to the next nodes, making the control plane of the network distribute the relevant information to all the nodes in the network. A cooperation transmission tree is used, thus allowing all the nodes to store the information related to the traffic management in the network, and enabling better network resource planning at each node. A model of this network architecture is proposed, and its performance is evaluated.
  • Garcia, R., Roeser, J., & Höhle, B. (2019). Thematic role assignment in the L1 acquisition of Tagalog: Use of word order and morphosyntactic markers. Language Acquisition, 26(3), 235-261. doi:10.1080/10489223.2018.1525613.

    Abstract

    It is a common finding across languages that young children have problems in understanding patient-initial sentences. We used Tagalog, a verb-initial language with a reliable voice-marking system and highly frequent patient voice constructions, to test the predictions of several accounts that have been proposed to explain this difficulty: the frequency account, the Competition Model, and the incremental processing account. Study 1 presents an analysis of Tagalog child-directed speech, which showed that the dominant argument order is agent-before-patient and that morphosyntactic markers are highly valid cues to thematic role assignment. In Study 2, we used a combined self-paced listening and picture verification task to test how Tagalog-speaking adults and 5- and 7-year-old children process reversible transitive sentences. Results showed that adults performed well in all conditions, while children’s accuracy and listening times for the first noun phrase indicated more difficulty in interpreting patient-initial sentences in the agent voice compared to the patient voice. The patient voice advantage is partly explained by both the frequency account and incremental processing account.
  • Garrido, L., Eisner, F., McGettigan, C., Stewart, L., Sauter, D., Hanley, J. R., Schweinberger, S. R., Warren, J. D., & Duchaine, B. (2009). Developmental phonagnosia: A selective deficit of vocal identity recognition. Neuropsychologia, 47(1), 123-131. doi:10.1016/j.neuropsychologia.2008.08.003.

    Abstract

    Phonagnosia, the inability to recognize familiar voices, has been studied in brain-damaged patients but no cases due to developmental problems have been reported. Here we describe the case of KH, a 60-year-old active professional woman who reports that she has always experienced severe voice recognition difficulties. Her hearing abilities are normal, and an MRI scan showed no evidence of brain damage in regions associated with voice or auditory perception. To better understand her condition and to assess models of voice and high-level auditory processing, we tested KH on behavioural tasks measuring voice recognition, recognition of vocal emotions, face recognition, speech perception, and processing of environmental sounds and music. KH was impaired on tasks requiring the recognition of famous voices and the learning and recognition of new voices. In contrast, she performed well on nearly all other tasks. Her case is the first report of developmental phonagnosia, and the results suggest that the recognition of a speaker’s vocal identity depends on separable mechanisms from those used to recognize other information from the voice or non-vocal auditory stimuli.
  • Gaskell, M. G., Warker, J., Lindsay, S., Frost, R. L. A., Guest, J., Snowdon, R., & Stackhouse, A. (2014). Sleep Underpins the Plasticity of Language Production. Psychological Science, 25(7), 1457-1465. doi:10.1177/0956797614535937.

    Abstract

    The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep.

    Additional information

    https://osf.io/zqg9y/
  • Gast, V., & Levshina, N. (2014). Motivating w(h)-Clefts in English and German: A hypothesis-driven parallel corpus study. In A.-M. De Cesare (Ed.), Frequency, Forms and Functions of Cleft Constructions in Romance and Germanic: Contrastive, Corpus-Based Studies (pp. 377-414). Berlin: De Gruyter.
  • Gazendam, L., Wartena, C., Malaise, V., Schreiber, G., De Jong, A., & Brugman, H. (2009). Automatic annotation suggestions for audiovisual archives: Evaluation aspects. Interdisciplinary Science Reviews, 34(2/3), 172-188. doi:10.1179/174327909X441090.

    Abstract

    In the context of large and ever growing archives, generating annotation suggestions automatically from textual resources related to the documents to be archived is an interesting option in theory. It could save a lot of work in the time consuming and expensive task of manual annotation and it could help cataloguers attain a higher inter-annotator agreement. However, some questions arise in practice: what is the quality of the automatically produced annotations? How do they compare with manual annotations and with the requirements for annotation that were defined in the archive? If different from the manual annotations, are the automatic annotations wrong? In the CHOICE project, partially hosted at the Netherlands Institute for Sound and Vision, the Dutch public archive for audiovisual broadcasts, we automatically generate annotation suggestions for cataloguers. In this paper, we define three types of evaluation of these annotation suggestions: (1) a classic and strict evaluation measure expressing the overlap between automatically generated keywords and the manual annotations, (2) a loosened evaluation measure for which semantically very similar annotations are also considered as relevant matches, and (3) an in-use evaluation of the usefulness of manual versus automatic annotations in the context of serendipitous browsing. During serendipitous browsing, the annotations (manual or automatic) are used to retrieve and visualize semantically related documents.
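
    The "classic and strict" overlap evaluation mentioned in the Gazendam et al. (2009) abstract above can be illustrated as set overlap between suggested and manual keywords, as in the sketch below. The keyword lists are invented placeholders; the project's actual thesaurus-based loosened and in-use evaluations are not reproduced here.

```python
# Tiny sketch of the strict overlap evaluation described above: precision,
# recall, and F1 between automatically suggested keywords and a cataloguer's
# manual annotations. The keyword sets are invented placeholders.
def keyword_overlap(automatic, manual):
    auto, gold = set(automatic), set(manual)
    hits = auto & gold
    precision = len(hits) / len(auto) if auto else 0.0
    recall = len(hits) / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if hits else 0.0
    return precision, recall, f1

auto_keywords = ["elections", "parliament", "The Hague", "interview"]
manual_keywords = ["elections", "politics", "The Hague"]
print(keyword_overlap(auto_keywords, manual_keywords))  # roughly (0.50, 0.67, 0.57)
```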
  • Gebre, B. G., Wittenburg, P., Heskes, T., & Drude, S. (2014). Motion history images for online speaker/signer diarization. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (pp. 1537-1541). Piscataway, NJ: IEEE.

    Abstract

    We present a solution to the problem of online speaker/signer diarization - the task of determining "who spoke/signed when?". Our solution is based on the idea that gestural activity (hands and body movement) is highly correlated with uttering activity. This correlation is necessarily true for sign languages and mostly true for spoken languages. The novel part of our solution is the use of motion history images (MHI) as a likelihood measure for probabilistically detecting uttering activities. MHI is an efficient representation of where and how motion occurred for a fixed period of time. We conducted experiments on 4.9 hours of a publicly available dataset (the AMI meeting data) and 1.4 hours of sign language dataset (Kata Kolok data). The best performance obtained is 15.70% for sign language and 31.90% for spoken language (measurements are in DER). These results show that our solution is applicable in real-world applications like video conferences.
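
    A minimal NumPy sketch of the motion history image (MHI) idea described in the Gebre et al. (2014) abstract above: consecutive frames are differenced, thresholded into motion silhouettes, and stamped into a decaying history image whose recency can serve as an uttering-activity likelihood. The fake video, threshold, and history duration are illustrative assumptions, not the paper's implementation.

```python
# Minimal NumPy sketch of a motion history image (MHI). Frame source,
# threshold, and history duration are illustrative placeholders.
import numpy as np

def update_mhi(mhi, prev_frame, frame, timestamp, duration=2.0, threshold=30):
    """Stamp current motion into the MHI and let older motion decay."""
    silhouette = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > threshold
    mhi[silhouette] = timestamp                              # where motion occurred now
    mhi[~silhouette & (timestamp - mhi > duration)] = 0.0    # forget old motion
    return mhi

# Fake grayscale video: 50 frames of 64x64 pixels at 25 fps, with a moving patch.
rng = np.random.default_rng(0)
frames = rng.integers(0, 20, size=(50, 64, 64), dtype=np.uint8)
for t in range(50):
    frames[t, 10:20, t:t + 10] = 200                         # a bright patch sweeping right

mhi = np.zeros((64, 64), dtype=np.float32)
fps = 25.0
for t in range(1, 50):
    mhi = update_mhi(mhi, frames[t - 1], frames[t], timestamp=t / fps)

# Mean recency of motion per frame (or per speaker region) could then be
# compared across participants as a diarization likelihood.
print(float(mhi.mean()))
```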

    Files private

    Request files
