Publications

  • Enfield, N. J. (2014). Causal dynamics of language. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 325-342). Cambridge: Cambridge University Press.
  • Enfield, N. J., & Majid, A. (2008). Constructions in 'language and perception'. In A. Majid (Ed.), Field Manual Volume 11 (pp. 11-17). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492949.

    Abstract

    This field guide is for eliciting information about grammatical resources used in describing perceptual events and perception-based properties and states. A list of leading questions outlines an underlying semantic space for events/states of perception, against which language-specific constructions may be defined. It should be used as an entry point into a flexible exploration of the structures and constraints which are specific to the language you are working on. The goal is to provide a cross-linguistically comparable description of the constructions of a language used in describing perceptual events and states. The core focus is to discover any sensory asymmetries, i.e., ways in which different sensory modalities are treated differently with respect to these constructions.
  • Enfield, N. J. (2008). Common ground as a resource for social affiliation. In I. Kecskes, & J. L. Mey (Eds.), Intention, common ground and the egocentric speaker-hearer (pp. 223-254). Berlin: Mouton de Gruyter.
  • Enfield, N. J. (2004). Adjectives in Lao. In R. M. W. Dixon, & A. Y. Aikhenvald (Eds.), Adjective classes: A cross-linguistic typology (pp. 323-347). Oxford: Oxford University Press.
  • Enfield, N. J. (2004). Areal grammaticalisation of postverbal 'acquire' in mainland Southeast Asia. In S. Burusphat (Ed.), Proceedings of the 11th Southeast Asia Linguistics Society Meeting (pp. 275-296). Tempe: Arizona State University.
  • Enfield, N. J. (2008). [Review of the book Constructions at work: The nature of generalization in language by Adele E. Goldberg]. Linguistic Typology, 12(1), 155-159. doi:10.1515/LITY.2008.034.
  • Enfield, N. J. (2008). It's a leopard [Review of the book The origin of speech by Peter F. MacNeilage]. Times Literary Supplement, September 12, 2008, 12-13.
  • Enfield, N. J. (2008). Linguistic categories and their utilities: The case of Lao landscape terms. Language Sciences, 30(2/3), 227-255. doi:10.1016/j.langsci.2006.12.030.

    Abstract

    Different domains of concrete referential semantics have provided testing grounds for investigation of the differential roles of perception, cognition, language, and culture in human categorization. A vast literature on semantics of biological classification, color, shape and topological relations, artifacts, and more, raises a range of theoretical and analytical debates. This article uses landscape terms to address a key debate from within research on ethnobiological classification: the opposition between so-called utilitarian and intellectualist accounts for patterns of lexicalization of the natural world [Berlin, B., 1992. Ethnobiological Classification: Principles of Categorization of Plants and Animals in Traditional Societies. Princeton University Press, Princeton, NJ]. ‘Utilitarianists’ argue that lexical categories reflect practical consequences of knowing certain category distinctions, related to cultural practice and functional affordances of referents. ‘Intellectualists’ argue that lexical categories reflect people’s innate interest in the natural world, combined with the perceptual discontinuities supplied by ‘Nature’s Plan’. The debate is generalizable to other domains, including landscape terminology, the topic of this special issue. This article brings landscape terminology into this larger debate, arguing in favor of a utilitarian account of linguistic categories in the domain of landscape, but proposing a significant revision to the concept of utility in linguistic categorization. The proposal is that for linguistic categorization, what is at issue is not (primarily) the utility of the referent (e.g. a river), but the utility of the word (e.g. the English word river). By considering how landscape terms are actually used in conversation, we see that they are deployed in communicative contexts which fit a rich, ‘functionalist’ semantics. A landscape term is not employed for mere referring, but functions to bring particular associated ideas into social discourse. In turn, language use reveals a range of evidence for the semantic content of any such term, of utility both to the language learner and to the semanticist. This kind of evidence can be argued to underlie the acquisition of semantic categories in language learning. The arguments are illustrated with examples from Lao, a Tai language of mainland Southeast Asia.
  • Enfield, N. J. (2008). Language as shaped by social interaction [Commentary on Christiansen and Chater]. Behavioral and Brain Sciences, 31(5), 519-520. doi:10.1017/S0140525X08005104.

    Abstract

    Language is shaped by its environment, which includes not only the brain, but also the public context in which speech acts are effected. To fully account for why language has the shape it has, we need to examine the constraints imposed by language use as a sequentially organized joint activity, and as the very conduit for linguistic diffusion and change.
  • Enfield, N. J. (2008). Lao linguistics in the 20th century and since. In Y. Goudineau, & M. Lorrillard (Eds.), Recherches nouvelles sur le Laos (pp. 435-452). Paris: EFEO.
  • Enfield, N. J. (2004). Nominal classification in Lao: A sketch. Sprachtypologie und Universalienforschung, 57(2/3), 117-143.
  • Enfield, N. J. (2014). Human agency and the infrastructure for requests. In P. Drew, & E. Couper-Kuhlen (Eds.), Requesting in social interaction (pp. 35-50). Amsterdam: John Benjamins.

    Abstract

    This chapter discusses some of the elements of human sociality that serve as the social and cognitive infrastructure or preconditions for the use of requests and other kinds of recruitments in interaction. The notion of an agent with goals is a canonical starting point, though importantly agency tends not to be wholly located in individuals, but rather is socially distributed. This is well illustrated in the case of requests, in which the person or group that has a certain goal is not necessarily the one who carries out the behavior towards that goal. The chapter focuses on the role of semiotic (mostly linguistic) resources in negotiating the distribution of agency with request-like actions, with examples from video-recorded interaction in Lao, a language spoken in Laos and nearby countries. The examples illustrate five hallmarks of requesting in human interaction, which show some ways in which our ‘manipulation’ of other people is quite unlike our manipulation of tools: (1) that even though B is being manipulated, B wants to help; (2) that while A is manipulating B now, A may be manipulated in return later; (3) that the goal of the behavior may be shared between A and B; (4) that B may not comply, or may comply differently than requested, due to actual or potential contingencies; and (5) that A and B are accountable to one another; reasons may be asked for, and/or given, for the request. These hallmarks of requesting are grounded in a prosocial framework of human agency.
  • Enfield, N., Kelly, A., & Sprenger, S. (2004). Max-Planck-Institute for Psycholinguistics: Annual Report 2004. Nijmegen: MPI for Psycholinguistics.
  • Enfield, N. J., & Levinson, S. C. (2008). Metalanguage for speech acts. In A. Majid (Ed.), Field manual volume 11 (pp. 77-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492937.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J., & Sidnell, J. (2014). Language presupposes an enchronic infrastructure for social interaction. In D. Dor, C. Knight, & J. Lewis (Eds.), The social origins of language (pp. 92-104). Oxford: Oxford University Press.
  • Enfield, N. J., Kockelman, P., & Sidnell, J. (2014). Interdisciplinary perspectives. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 599-602). Cambridge: Cambridge University Press.
  • Enfield, N. J., Kockelman, P., & Sidnell, J. (2014). Introduction: Directions in the anthropology of language. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 1-24). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2014). Natural causes of language: Frames, biases and cultural transmission. Berlin: Language Science Press. Retrieved from http://langsci-press.org/catalog/book/48.

    Abstract

    What causes a language to be the way it is? Some features are universal, some are inherited, others are borrowed, and yet others are internally innovated. But no matter where a bit of language is from, it will only exist if it has been diffused and kept in circulation through social interaction in the history of a community. This book makes the case that a proper understanding of the ontology of language systems has to be grounded in the causal mechanisms by which linguistic items are socially transmitted, in communicative contexts. A biased transmission model provides a basis for understanding why certain things and not others are likely to develop, spread, and stick in languages. Because bits of language are always parts of systems, we also need to show how it is that items of knowledge and behavior become structured wholes. The book argues that to achieve this, we need to see how causal processes apply in multiple frames or 'time scales' simultaneously, and we need to understand and address each and all of these frames in our work on language. This forces us to confront implications that are not always comfortable: for example, that "a language" is not a real thing but a convenient fiction, that language-internal and language-external processes have a lot in common, and that tree diagrams are poor conceptual tools for understanding the history of languages. By exploring avenues for clear solutions to these problems, this book suggests a conceptual framework for ultimately explaining, in causal terms, what languages are like and why they are like that.
  • Enfield, N. J. (2004). Repair sequences in interaction. In A. Majid (Ed.), Field Manual Volume 9 (pp. 48-52). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492945.

    Abstract

    This Field Manual entry has been superseded by the 2007 version: https://doi.org/10.17617/2.468724

  • Enfield, N. J., Kockelman, P., & Sidnell, J. (Eds.). (2014). The Cambridge handbook of linguistic anthropology. Cambridge: Cambridge University Press.
  • Enfield, N. J., Levinson, S. C., & Stivers, T. (2008). Social action formulation: A "10-minutes" task. In A. Majid (Ed.), Field manual volume 11 (pp. 80-81). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492939.

    Abstract

    This Field Manual entry has been superseded by the 2009 version: https://doi.org/10.17617/2.883564

  • Enfield, N. J., Sidnell, J., & Kockelman, P. (2014). System and function. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 25-28). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2014). The item/system problem. In N. J. Enfield, P. Kockelman, & J. Sidnell (Eds.), The Cambridge handbook of linguistic anthropology (pp. 48-77). Cambridge: Cambridge University Press.
  • Enfield, N. J. (2014). Transmission biases in the cultural evolution of language: Towards an explanatory framework. In D. Dor, C. Knight, & J. Lewis (Eds.), The social origins of language (pp. 325-335). Oxford: Oxford University Press.
  • Ernestus, M., & Neijt, A. (2008). Word length and the location of primary word stress in Dutch, German, and English. Linguistics, 46(3), 507-540. doi:10.1515/LING.2008.017.

    Abstract

    This study addresses the extent to which the location of primary stress in Dutch, German, and English monomorphemic words is affected by the syllables preceding the three final syllables. We present analyses of the monomorphemic words in the CELEX lexical database, which showed that penultimate primary stress is less frequent in Dutch and English trisyllabic than quadrisyllabic words. In addition, we discuss paper-and-pencil experiments in which native speakers assigned primary stress to pseudowords. These experiments provided evidence that in all three languages penultimate stress is more likely in quadrisyllabic than in trisyllabic words. We explain this length effect with the preferences in these languages for word-initial stress and for alternating patterns of stressed and unstressed syllables. The experimental data also showed important intra- and interspeaker variation, and they thus form a challenging test case for theories of language variation.
  • Ernestus, M. (2014). Acoustic reduction and the roles of abstractions and exemplars in speech processing. Lingua, 142, 27-41. doi:10.1016/j.lingua.2012.12.006.

    Abstract

    Acoustic reduction refers to the frequent phenomenon in conversational speech that words are produced with fewer or lenited segments compared to their citation forms. The few published studies on the production and comprehension of acoustic reduction have important implications for the debate on the relevance of abstractions and exemplars in speech processing. This article discusses these implications. It first briefly introduces the key assumptions of simple abstractionist and simple exemplar-based models. It then discusses the literature on acoustic reduction and draws the conclusion that both types of models need to be extended to explain all findings. The ultimate model should allow for the storage of different pronunciation variants, but also reserve an important role for phonetic implementation. Furthermore, the recognition of a highly reduced pronunciation variant requires top-down information and leads to activation of the corresponding unreduced variant, the variant that reaches listeners’ consciousness. These findings are best accounted for in hybrid models, assuming both abstract representations and exemplars. None of the hybrid models formulated so far can account for all data on reduced speech, and further research is needed to obtain detailed insight into how speakers produce and listeners comprehend reduced speech.
  • Ernestus, M., & Giezenaar, G. (2014). Een goed verstaander heeft maar een half woord nodig. In B. Bossers (Ed.), Vakwerk 9: Achtergronden van de NT2-lespraktijk: Lezingen conferentie Hoeven 2014 (pp. 81-92). Amsterdam: BV NT2.
  • Ernestus, M., & Mak, W. M. (2004). Distinctive phonological features differ in relevance for both spoken and written word recognition. Brain and Language, 90(1-3), 378-392. doi:10.1016/S0093-934X(03)00449-8.

    Abstract

    This paper discusses four experiments on Dutch which show that distinctive phonological features differ in their relevance for word recognition. The relevance of a feature for word recognition depends on its phonological stability, that is, the extent to which that feature is generally realized in accordance with its lexical specification in the relevant word position. If one feature value is uninformative, all values of that feature are less relevant for word recognition, with the least informative feature being the least relevant. Features differ in their relevance both in spoken and written word recognition, though the differences are more pronounced in auditory lexical decision than in self-paced reading.
  • Ernestus, M., & Baayen, R. H. (2004). Analogical effects in regular past tense production in Dutch. Linguistics, 42(5), 873-903. doi:10.1515/ling.2004.031.

    Abstract

    This study addresses the question to what extent the production of regular past tense forms in Dutch is affected by analogical processes. We report an experiment in which native speakers of Dutch listened to existing regular verbs over headphones, and had to indicate which of the past tense allomorphs, te or de, was appropriate for these verbs. According to generative analyses, the choice between the two suffixes is completely regular and governed by the underlying [voice]-specification of the stem-final segment. In this approach, no analogical effects are expected. In connectionist and analogical approaches, by contrast, the phonological similarity structure in the lexicon is expected to affect lexical processing. Our experimental results support the latter approach: all participants created more nonstandard past tense forms, produced more inconsistency errors, and responded more slowly for verbs with stronger analogical support for the nonstandard form.
  • Ernestus, M., & Baayen, R. H. (2004). Kuchde, tobte, en turfte: Lekkage in 't kofschip. Onze Taal, 73(12), 360-361.
  • Ernestus, M., Kočková-Amortová, L., & Pollak, P. (2014). The Nijmegen corpus of casual Czech. In N. Calzolari, K. Choukri, T. Declerck, H. Loftsson, B. Maegaard, J. Mariani, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of LREC 2014: 9th International Conference on Language Resources and Evaluation (pp. 365-370).

    Abstract

    This article introduces a new speech corpus, the Nijmegen Corpus of Casual Czech (NCCCz), which contains more than 30 hours of high-quality recordings of casual conversations in Common Czech, among ten groups of three male and ten groups of three female friends. All speakers were native speakers of Czech, raised in Prague or in the region of Central Bohemia, and were between 19 and 26 years old. Every group of speakers consisted of one confederate, who was instructed to keep the conversations lively, and two speakers naive to the purposes of the recordings. The naive speakers were engaged in conversations for approximately 90 minutes, while the confederate joined them for approximately the last 72 minutes. The corpus was orthographically annotated by experienced transcribers and this orthographic transcription was aligned with the speech signal. In addition, the conversations were videotaped. This corpus can form the basis for all types of research on casual conversations in Czech, including phonetic research and research on how to improve automatic speech recognition. The corpus will be freely available.
  • Escudero, P., Hayes-Harb, R., & Mitterer, H. (2008). Novel second-language words and asymmetric lexical access. Journal of Phonetics, 36(2), 345-360. doi:10.1016/j.wocn.2007.11.002.

    Abstract

    The lexical and phonetic mapping of auditorily confusable L2 nonwords was examined by teaching L2 learners novel words and by later examining their word recognition using an eye-tracking paradigm. During word learning, two groups of highly proficient Dutch learners of English learned 20 English nonwords, of which 10 contained the English contrast /e/-/æ/ (a confusable contrast for native Dutch speakers). One group of subjects learned the words by matching their auditory forms to pictured meanings, while a second group additionally saw the spelled forms of the words. We found that the group who received only auditory forms confused words containing /æ/ and /e/ symmetrically, i.e., both /æ/ and /e/ auditory tokens triggered looks to pictures containing both /æ/ and /e/. In contrast, the group who also had access to spelled forms showed the same asymmetric word recognition pattern found by previous studies, i.e., they only looked at pictures of words containing /e/ when presented with /e/ target tokens, but looked at pictures of words containing both /æ/ and /e/ when presented with /æ/ target tokens. The results demonstrate that L2 learners can form lexical contrasts for auditorily confusable novel L2 words. However, and most importantly, this study suggests that explicit information about the contrastive nature of two new sounds may be needed to build separate lexical representations for similar-sounding L2 words.
  • Evans, N., Levinson, S. C., Enfield, N. J., Gaby, A., & Majid, A. (2004). Reciprocal constructions and situation type. In A. Majid (Ed.), Field Manual Volume 9 (pp. 25-30). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.506955.
  • Evans, S., McGettigan, C., Agnew, Z., Rosen, S., Cesar, L., Boebinger, D., Ostarek, M., Chen, S. H., Richards, A., Meekins, S., & Scott, S. K. (2014). The neural basis of informational and energetic masking effects in the perception and production of speech [abstract]. The Journal of the Acoustical Society of America, 136(4), 2243. doi:10.1121/1.4900096.

    Abstract

    When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked by many different kinds of sounds, for example, machinery noise and the speech of others, and these different sounds place differing demands on cognitive resources. In this talk, I will present data from a series of functional magnetic resonance imaging (fMRI) studies in which the informational properties of background sounds have been manipulated to make them more or less similar to speech. I will demonstrate the neural effects associated with speaking over and listening to these sounds, and demonstrate how in perception these effects are modulated by the age of the listener. The results will be interpreted within a framework of auditory processing developed from primate neurophysiology and human functional imaging work (Rauschecker and Scott 2009).
  • Falcaro, M., Pickles, A., Newbury, D. F., Addis, L., Banfield, E., Fisher, S. E., Monaco, A. P., Simkin, Z., Conti-Ramsden, G., & Consortium (2008). Genetic and phenotypic effects of phonological short-term memory and grammatical morphology in specific language impairment. Genes, Brain and Behavior, 7, 393-402. doi:10.1111/j.1601-183X.2007.00364.x.

    Abstract

    Deficits in phonological short-term memory and aspects of verb grammar morphology have been proposed as phenotypic markers of specific language impairment (SLI) with the suggestion that these traits are likely to be under different genetic influences. This investigation in 300 first-degree relatives of 93 probands with SLI examined familial aggregation and genetic linkage of two measures thought to index these two traits, non-word repetition and tense marking. In particular, the involvement of chromosomes 16q and 19q was examined as previous studies found these two regions to be related to SLI. Results showed a strong association between relatives' and probands' scores on non-word repetition. In contrast, no association was found for tense marking when examined as a continuous measure. However, significant familial aggregation was found when tense marking was treated as a binary measure with a cut-off point of -1.5 SD, suggestive of the possibility that qualitative distinctions in the trait may be familial while quantitative variability may be more a consequence of non-familial factors. Linkage analyses supported previous findings of the SLI Consortium of linkage to chromosome 16q for phonological short-term memory and to chromosome 19q for expressive language. In addition, we report new findings that relate to the past tense phenotype. For the continuous measure, linkage was found on both chromosomes, but evidence was stronger on chromosome 19. For the binary measure, linkage was observed on chromosome 19 but not on chromosome 16.
  • Filippi, P. (2014). Linguistic animals: understanding language through a comparative approach. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 74-81). doi:10.1142/9789814603638_0082.

    Abstract

    With the aim of clarifying the definition of humans as “linguistic animals”, in the present paper I functionally distinguish three types of language competences: i) language as a general biological tool for communication, ii) “perceptual syntax”, iii) propositional language. Following this terminological distinction, I review pivotal findings on animals' communication systems, which constitute useful evidence for the investigation of the nature of three core components of humans' faculty of language: semantics, syntax, and theory of mind. In fact, although the capacity to process and share utterances with an open-ended structure is uniquely human, some isolated components of our linguistic competence are shared with nonhuman animals. Therefore, as I argue in the present paper, the investigation of animals' communicative competence provides crucial insights into the range of cognitive constraints underlying humans' capacity for language, enabling at the same time the analysis of its phylogenetic path as well as of the selective pressures that have led to its emergence.
  • Filippi, P., Gingras, B., & Fitch, W. T. (2014). The effect of pitch enhancement on spoken language acquisition. In E. A. Cartmill, S. Roberts, H. Lyn, & H. Cornish (Eds.), The Evolution of Language: Proceedings of the 10th International Conference (pp. 437-438). doi:10.1142/9789814603638_0082.

    Abstract

    The aim of this study is to investigate the word-learning phenomenon utilizing a new model that integrates three processes: a) extracting a word out of a continuous sounds sequence, b) inducing referential meanings, c) mapping a word onto its intended referent, with the possibility to extend the acquired word over potentially infinite sets of objects of the same semantic category, and over not-previously-heard utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. In order to examine the multilayered word-learning task, we integrate these two strands of investigation into a single approach. We have conducted the study on adults and included six different experimental conditions, each including specific perceptual manipulations of the signal. In condition 1, the only cue to word-meaning mapping was the co-occurrence between words and referents (“statistical cue”). This cue was present in all the conditions. In condition 2, we added infant-directed-speech (IDS) typical pitch enhancement as a marker of the target word and of the statistical cue. In condition 3 we placed IDS typical pitch enhancement on random words of the utterances, i.e. inconsistently matching the statistical cue. In conditions 4, 5 and 6 we manipulated respectively duration, a non-prosodic acoustic cue and a visual cue as markers of the target word and of the statistical cue. Systematic comparisons between learning performance in condition 1 with the other conditions revealed that the word-learning process is facilitated only when pitch prominence consistently marks the target word and the statistical cue…
  • Filippi, P., Gingras, B., & Fitch, W. T. (2014). Pitch enhancement facilitates word learning across visual contexts. Frontiers in Psychology, 5: 1468. doi:10.3389/fpsyg.2014.01468.

    Abstract

    This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • Fitz, H. (2014). Computermodelle für Spracherwerb und Sprachproduktion. Forschungsbericht 2014 - Max-Planck-Institut für Psycholinguistik. In Max-Planck-Gesellschaft Jahrbuch 2014. München: Max Planck Society for the Advancement of Science. Retrieved from http://www.mpg.de/7850678/Psycholinguistik_JB_2014?c=8236817.

    Abstract

    Relative clauses are a syntactic device for creating complex sentences, and they make language structurally productive. Despite a considerable number of experimental studies, it is still largely unclear how children learn relative clauses and how these are processed in the language system. Researchers at the MPI for Psycholinguistics used a computational learning model to gain novel insights into these issues. The model explains the differential development of relative clauses in English as well as cross-linguistic differences.
  • Fitz, H., & Chang, F. (2008). The role of the input in a connectionist model of the accessibility hierarchy in development. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings from the 32nd Annual Boston University Conference on Language Development [BUCLD 32] (pp. 120-131). Somerville, Mass.: Cascadilla Press.
  • FitzPatrick, I., & Weber, K. (2008). “Il piccolo principe est allé”: Processing of language switches in auditory sentence comprehension. Journal of Neuroscience, 28(18), 4581-4582. doi:10.1523/JNEUROSCI.0905-08.2008.
  • FitzPatrick, I., & Indefrey, P. (2014). Head start for target language in bilingual listening. Brain Research, 1542, 111-130. doi:10.1016/j.brainres.2013.10.014.

    Abstract

    In this study we investigated the availability of non-target language semantic features in bilingual speech processing. We recorded EEG from Dutch-English bilinguals who listened to spoken sentences in their L2 (English) or L1 (Dutch). In Experiments 1 and 3 the sentences contained an interlingual homophone. The sentence context was either biased towards the target language meaning of the homophone (target biased), the non-target language meaning (non-target biased), or neither meaning of the homophone (fully incongruent). These conditions were each compared to a semantically congruent control condition. In L2 sentences we observed an N400 in the non-target biased condition that had an earlier offset than the N400 to fully incongruent homophones. In the target biased condition, a negativity emerged that was later than the N400 to fully incongruent homophones. In L1 contexts, neither target biased nor non-target biased homophones yielded significant N400 effects (compared to the control condition). In Experiments 2 and 4 the sentences contained a language switch to a non-target language word that could be semantically congruent or incongruent. Semantically incongruent words (switched, and non-switched) elicited an N400 effect. The N400 to semantically congruent language-switched words had an earlier offset than the N400 to incongruent words. Both congruent and incongruent language switches elicited a Late Positive Component (LPC). These findings show that bilinguals activate both meanings of interlingual homophones irrespective of their contextual fit. In L2 contexts, the target-language meaning of the homophone has a head start over the non-target language meaning. The target-language head start is also evident for language switches from both L2-to-L1 and L1-to-L2.
  • Flecken, M., von Stutterheim, C., & Carroll, M. (2014). Grammatical aspect influences motion event perception: Evidence from a cross-linguistic non-verbal recognition task. Language and Cognition, 6(1), 45-78. doi:10.1017/langcog.2013.2.

    Abstract

    Using eye-tracking as a window on cognitive processing, this study investigates language effects on attention to motion events in a non-verbal task. We compare gaze allocation patterns by native speakers of German and Modern Standard Arabic (MSA), two languages that differ with regard to the grammaticalization of temporal concepts. Findings of the non-verbal task, in which speakers watch dynamic event scenes while performing an auditory distracter task, are compared to gaze allocation patterns which were obtained in an event description task, using the same stimuli. We investigate whether differences in the grammatical aspectual systems of German and MSA affect the extent to which endpoints of motion events are linguistically encoded and visually processed in the two tasks. In the linguistic task, we find clear language differences in endpoint encoding and in the eye-tracking data (attention to event endpoints) as well: German speakers attend to and linguistically encode endpoints more frequently than speakers of MSA. The fixation data in the non-verbal task show similar language effects, providing relevant insights with regard to the language-and-thought debate. The present study is one of the few studies that focus explicitly on language effects related to grammatical concepts, as opposed to lexical concepts.
  • Floyd, S. (2014). 'We' as social categorization in Cha’palaa: A language of Ecuador. In T.-S. Pavlidou (Ed.), Constructing collectivity: 'We' across languages and contexts (pp. 135-158). Amsterdam: Benjamins.

    Abstract

    This chapter connects the grammar of the first person collective pronoun in the Cha’palaa language of Ecuador with its use in interaction for collective reference and social category membership attribution, addressing the problem posed by the fact that non-singular pronouns do not have distributional semantics (“speakers”) but are rather associational (“speaker and relevant associates”). It advocates a cross-disciplinary approach that jointly considers elements of linguistic form, situated usages of those forms in instances of interaction, and the broader ethnographic context of those instances. Focusing on large-scale and relatively stable categories such as racial and ethnic groups, it argues that looking at how speakers categorize themselves and others in the speech situation by using pronouns provides empirical data on the status of macro-social categories for members of a society.

  • Floyd, S. (2014). [Review of the book Flexible word classes: Typological studies of underspecified parts of speech ed. by Jan Rijkhoff and Eva van Lier]. Linguistics, 52, 1499-1502. doi:10.1515/ling-2014-0027.
  • Floyd, S. (2014). Four types of reduplication in the Cha'palaa language of Ecuador. In H. van der Voort, & G. Goodwin Gómez (Eds.), Reduplication in Indigenous Languages of South America (pp. 77-114). Leiden: Brill.
  • Floyd, S. (2004). Purismo lingüístico y realidad local: ¿Quichua puro o puro quichuañol? In Proceedings of the Conference on Indigenous Languages of Latin America (CILLA)-I.
  • Floyd, S. (2008). The Pirate media economy and the emergence of Quichua language media spaces in Ecuador. Anthropology of Work Review, 29(2), 34-41. doi:10.1111/j.1548-1417.2008.00012.x.

    Abstract

    This paper gives an account of the pirate media economy of Ecuador and its role in the emergence of indigenous Quichua-language media spaces, identifying the different parties involved in this economy, discussing their relationship to the parallel ‘‘legitimate’’ media economy, and considering the implications of this informal media market for Quichua linguistic and cultural reproduction. As digital recording and playback technology has become increasingly more affordable and widespread over recent years, black markets have grown up worldwide, based on cheap ‘‘illegal’’ reproduction of commercial media, today sold by informal entrepreneurs in rural markets, shops and street corners around Ecuador. Piggybacking on this pirate infrastructure, Quichua-speaking media producers and consumers have begun to circulate indigenous-language video at an unprecedented rate, helped by small-scale merchants who themselves profit by supplying market demands for positive images of indigenous people. In a context of a national media that has tended to silence indigenous voices rather than amplify them, informal media producers, consumers and vendors are developing relationships that open meaningful media spaces within the particular social, economic and linguistic contexts of Ecuador.
  • Folia, V., Uddén, J., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2008). Implicit learning and dyslexia. Annals of the New York Academy of Sciences, 1145, 132-150. doi:10.1196/annals.1416.012.

    Abstract

    Several studies have reported an association between dyslexia and implicit learning deficits. It has been suggested that the weakness in implicit learning observed in dyslexic individuals may be related to sequential processing and implicit sequence learning. In the present article, we review the current literature on implicit learning and dyslexia. We describe a novel, forced-choice structural "mere exposure" artificial grammar learning paradigm and characterize this paradigm in normal readers in relation to the standard grammaticality classification paradigm. We argue that preference classification is a more optimal measure of the outcome of implicit acquisition since in the preference version participants are kept completely unaware of the underlying generative mechanism, while in the grammaticality version, the subjects have, at least in principle, been informed about the existence of an underlying complex set of rules at the point of classification (but not during acquisition). On the basis of the "mere exposure effect," we tested the prediction that the development of preference will correlate with the grammaticality status of the classification items. In addition, we examined the effects of grammaticality (grammatical/nongrammatical) and associative chunk strength (ACS; high/low) on the classification tasks (preference/grammaticality). Using a balanced ACS design in which the factors of grammaticality (grammatical/nongrammatical) and ACS (high/low) were independently controlled in a 2 × 2 factorial design, we confirmed our predictions. We discuss the suitability of this task for further investigation of the implicit learning characteristics in dyslexia.
  • Folia, V., & Petersson, K. M. (2014). Implicit structured sequence learning: An fMRI study of the structural mere-exposure effect. Frontiers in Psychology, 5: 41. doi:10.3389/fpsyg.2014.00041.

    Abstract

    In this event-related FMRI study we investigated the effect of five days of implicit acquisition on preference classification by means of an artificial grammar learning (AGL) paradigm based on the structural mere-exposure effect and preference classification using a simple right-linear unification grammar. This allowed us to investigate implicit AGL in a proper learning design by including baseline measurements prior to grammar exposure. After 5 days of implicit acquisition, the FMRI results showed activations in a network of brain regions including the inferior frontal (centered on BA 44/45) and the medial prefrontal regions (centered on BA 8/32). Importantly, and central to this study, the inclusion of a naive preference FMRI baseline measurement allowed us to conclude that these FMRI findings were the intrinsic outcomes of the learning process itself and not a reflection of a preexisting functionality recruited during classification, independent of acquisition. Support for the implicit nature of the knowledge utilized during preference classification on day 5 comes from the fact that the basal ganglia, associated with implicit procedural learning, were activated during classification, while the medial temporal lobe system, associated with explicit declarative memory, was consistently deactivated. Thus, preference classification in combination with structural mere-exposure can be used to investigate structural sequence processing (syntax) in unsupervised AGL paradigms with proper learning designs.
  • Forkel, S. J., Thiebaut de Schotten, M., Dell’Acqua, F., Kalra, L., Murphy, D. G. M., Williams, S. C. R., & Catani, M. (2014). Anatomical predictors of aphasia recovery: a tractography study of bilateral perisylvian language networks. Brain, 137, 2027-2039. doi:10.1093/brain/awu113.

    Abstract

    Stroke-induced aphasia is associated with adverse effects on quality of life and the ability to return to work. For patients and clinicians the possibility of relying on valid predictors of recovery is an important asset in the clinical management of stroke-related impairment. Age, level of education, type and severity of initial symptoms are established predictors of recovery. However, anatomical predictors are still poorly understood. In this prospective longitudinal study, we intended to assess anatomical predictors of recovery derived from diffusion tractography of the perisylvian language networks. Our study focused on the arcuate fasciculus, a language pathway composed of three segments connecting Wernicke’s to Broca’s region (i.e. long segment), Wernicke’s to Geschwind’s region (i.e. posterior segment) and Broca’s to Geschwind’s region (i.e. anterior segment). In our study we were particularly interested in understanding how lateralization of the arcuate fasciculus impacts on severity of symptoms and their recovery. Sixteen patients (10 males; mean age 60 ± 17 years, range 28–87 years) underwent post stroke language assessment with the Revised Western Aphasia Battery and neuroimaging scanning within a fortnight from symptoms onset. Language assessment was repeated at 6 months. Backward elimination analysis identified a subset of predictor variables (age, sex, lesion size) to be introduced to further regression analyses. A hierarchical regression was conducted with the longitudinal aphasia severity as the dependent variable. The first model included the subset of variables as previously defined. The second model additionally introduced the left and right arcuate fasciculus (separate analysis for each segment). Lesion size was identified as the only independent predictor of longitudinal aphasia severity in the left hemisphere [beta = −0.630, t(−3.129), P = 0.011]. 
For the right hemisphere, age [beta = −0.678, t(–3.087), P = 0.010] and volume of the long segment of the arcuate fasciculus [beta = 0.730, t(2.732), P = 0.020] were predictors of longitudinal aphasia severity. Adding the volume of the right long segment to the first-level model increased the overall predictive power of the model from 28% to 57% [F(1,11) = 7.46, P = 0.02]. These findings suggest that different predictors of recovery are at play in the left and right hemisphere. The right hemisphere language network seems to be important in aphasia recovery after left hemispheric stroke.

  • Forkel, S. J. (2014). Identification of anatomical predictors of language recovery after stroke with diffusion tensor imaging. PhD Thesis, King's College London, London.

    Abstract

    Background Stroke-induced aphasia is associated with adverse effects on quality of life and the ability to return to work. However, the predictors of recovery are still poorly understood. Anatomical variability of the arcuate fasciculus, connecting Broca’s and Wernicke’s areas, has been reported in the healthy population using diffusion tensor imaging tractography. In about 40% of the population the arcuate fasciculus is bilateral and this pattern is advantageous for certain language related functions, such as auditory verbal learning (Catani et al. 2007). Methods In this prospective longitudinal study, anatomical predictors of post-stroke aphasia recovery were investigated using diffusion tractography and arterial spin labelling. Patients An 18-subject strong aphasia cohort with first-ever unilateral left hemispheric middle cerebral artery infarcts underwent post stroke language (mean 5±5 days) and neuroimaging (mean 10±6 days) assessments and neuropsychological follow-up at six months. Ten of these patients were available for reassessment one year after symptom onset. Aphasia was assessed with the Western Aphasia Battery, which provides a global measure of severity (Aphasia Quotient, AQ). Results Better recovery from aphasia was observed in patients with a right arcuate fasciculus [beta=.730, t(2.732), p=.020] (tractography) and increased fractional anisotropy in the right hemisphere (p<0.05) (Tract-based spatial statistics). Further, an increase in left hemisphere perfusion was observed after one year (p<0.01) (perfusion). Lesion analysis identified maximal overlay in the periinsular white matter (WM). Lesion-symptom mapping identified damage to periinsular structure as predictive of overall aphasia severity and damage to frontal lobe white matter as predictive of repetition deficits. Conclusion These findings suggest an important role for the right hemisphere language network in recovery from aphasia after left hemispheric stroke.

  • Forkel, S. J., Thiebaut de Schotten, M., Kawadler, J. M., Dell'Acqua, F., Danek, A., & Catani, M. (2014). The anatomy of fronto-occipital connections from early blunt dissections to contemporary tractography. Cortex, 56, 73-84. doi:10.1016/j.cortex.2012.09.005.

    Abstract

    The occipital and frontal lobes are anatomically distant yet functionally highly integrated to generate some of the most complex behaviour. A series of long associative fibres, such as the fronto-occipital networks, mediate this integration via rapid feed-forward propagation of visual input to anterior frontal regions and direct top–down modulation of early visual processing.

    Despite the vast number of anatomical investigations a general consensus on the anatomy of fronto-occipital connections is not forthcoming. For example, in the monkey the existence of a human equivalent of the ‘inferior fronto-occipital fasciculus’ (iFOF) has not been demonstrated. Conversely, a ‘superior fronto-occipital fasciculus’ (sFOF), also referred to as ‘subcallosal bundle’ by some authors, is reported in monkey axonal tracing studies but not in human dissections.

    In this study our aim is twofold. First, we use diffusion tractography to delineate the in vivo anatomy of the sFOF and the iFOF in 30 healthy subjects and three acallosal brains. Second, we provide a comprehensive review of the post-mortem and neuroimaging studies of the fronto-occipital connections published over the last two centuries, together with the first integral translation of Onufrowicz's original description of a human fronto-occipital fasciculus (1887) and Muratoff's report of the ‘subcallosal bundle’ in animals (1893).

    Our tractography dissections suggest that in the human brain (i) the iFOF is a bilateral association pathway connecting ventro-medial occipital cortex to orbital and polar frontal cortex, (ii) the sFOF overlaps with branches of the superior longitudinal fasciculus (SLF) and probably represents an ‘occipital extension’ of the SLF, (iii) the subcallosal bundle of Muratoff is probably a complex tract encompassing ascending thalamo-frontal and descending fronto-caudate connections and is therefore a projection rather than an associative tract.

    In conclusion, our experimental findings and review of the literature suggest that a ventral pathway in humans, namely the iFOF, mediates a direct communication between occipital and frontal lobes. Whether the iFOF represents a unique human pathway awaits further ad hoc investigations in animals.
  • Forkstam, C., Elwér, A., Ingvar, M., & Petersson, K. M. (2008). Instruction effects in implicit artificial grammar learning: A preference for grammaticality. Brain Research, 1221, 80-92. doi:10.1016/j.brainres.2008.05.005.

    Abstract

    Human implicit learning can be investigated with implicit artificial grammar learning, a paradigm that has been proposed as a simple model for aspects of natural language acquisition. In the present study we compared the typical yes–no grammaticality classification, with yes–no preference classification. In the case of preference instruction no reference to the underlying generative mechanism (i.e., grammar) is needed and the subjects are therefore completely uninformed about an underlying structure in the acquisition material. In experiment 1, subjects engaged in a short-term memory task using only grammatical strings without performance feedback for 5 days. As a result of the 5 acquisition days, classification performance was independent of instruction type and both the preference and the grammaticality group acquired relevant knowledge of the underlying generative mechanism to a similar degree. Changing the grammatical strings to random strings in the acquisition material (experiment 2) resulted in classification being driven by local substring familiarity. Contrasting repeated vs. non-repeated preference classification (experiment 3) showed that the effect of local substring familiarity decreases with repeated classification. This was not the case for repeated grammaticality classifications. We conclude that classification performance is largely independent of instruction type and that forced-choice preference classification is equivalent to the typical grammaticality classification.
  • Fradera, A., & Sauter, D. (2004). Make yourself happy. In T. Stafford, & M. Webb (Eds.), Mind hacks: tips & tools for using your brain (pp. 325-327). Sebastopol, CA: O'Reilly.

    Abstract

    Turn on your affective system by tweaking your face muscles - or getting an eyeful of someone else doing the same.
  • Fradera, A., & Sauter, D. (2004). Reminisce hot and cold. In T. Stafford, & M. Webb (Eds.), Mind hacks: tips & tools for using your brain (pp. 327-331). Sebastopol, CA: O'Reilly.

    Abstract

    Find the fire that's cooking your memory systems.
  • Fradera, A., & Sauter, D. (2004). Signal emotion. In T. Stafford, & M. Webb (Eds.), Mind hacks: tips & tools for using your brain (pp. 320-324). Sebastopol, CA: O'Reilly.

    Abstract

    Emotions are powerful on the inside but often displayed in subtle ways on the outside. Are these displays culturally dependent or universal?
  • Francisco, A. A., Jesse, A., Groen, M. A., & McQueen, J. M. (2014). Audiovisual temporal sensitivity in typical and dyslexic adult readers. In Proceedings of the 15th Annual Conference of the International Speech Communication Association (INTERSPEECH 2014) (pp. 2575-2579).

    Abstract

    Reading is an audiovisual process that requires the learning of systematic links between graphemes and phonemes. It is thus possible that reading impairments reflect an audiovisual processing deficit. In this study, we compared audiovisual processing in adults with developmental dyslexia and adults without reading difficulties. We focused on differences in cross-modal temporal sensitivity both for speech and for non-speech events. When compared to adults without reading difficulties, adults with developmental dyslexia presented a wider temporal window in which unsynchronized speech events were perceived as synchronized. No differences were found between groups for the non-speech events. These results suggest a deficit in dyslexia in the perception of cross-modal temporal synchrony for speech events.
  • Francks, C., Paracchini, S., Smith, S. D., Richardson, A. J., Scerri, T. S., Cardon, L. R., Marlow, A. J., MacPhie, I. L., Walter, J., Pennington, B. F., Fisher, S. E., Olson, R. K., DeFries, J. C., Stein, J. F., & Monaco, A. P. (2004). A 77-kilobase region of chromosome 6p22.2 is associated with dyslexia in families from the United Kingdom and from the United States. American Journal of Human Genetics, 75(6), 1046-1058. doi:10.1086/426404.

    Abstract

    Several quantitative trait loci (QTLs) that influence developmental dyslexia (reading disability [RD]) have been mapped to chromosome regions by linkage analysis. The most consistently replicated area of linkage is on chromosome 6p23-21.3. We used association analysis in 223 siblings from the United Kingdom to identify an underlying QTL on 6p22.2. Our association study implicates a 77-kb region spanning the gene TTRAP and the first four exons of the neighboring uncharacterized gene KIAA0319. The region of association is also directly upstream of a third gene, THEM2. We found evidence of these associations in a second sample of siblings from the United Kingdom, as well as in an independent sample of twin-based sibships from Colorado. One main RD risk haplotype that has a frequency of ∼12% was found in both the U.K. and U.S. samples. The haplotype is not distinguished by any protein-coding polymorphisms, and, therefore, the functional variation may relate to gene expression. The QTL influences a broad range of reading-related cognitive abilities but has no significant impact on general cognitive performance in these samples. In addition, the QTL effect may be largely limited to the severe range of reading disability.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2008). World knowledge in computational models of discourse comprehension. Discourse Processes, 45(6), 429-463. doi:10.1080/01638530802069926.

    Abstract

    Because higher level cognitive processes generally involve the use of world knowledge, computational models of these processes require the implementation of a knowledge base. This article identifies and discusses 4 strategies for dealing with world knowledge in computational models: disregarding world knowledge, ad hoc selection, extraction from text corpora, and implementation of all knowledge about a simplified microworld. Each of these strategies is illustrated by a detailed discussion of a model of discourse comprehension. It is argued that seemingly successful modeling results are uninformative if knowledge is implemented ad hoc or not at all, that knowledge extracted from large text corpora is not appropriate for discourse comprehension, and that a suitable implementation can be obtained by applying the microworld strategy.
  • Frank, S. L. (2004). Computational modeling of discourse comprehension. PhD Thesis, Tilburg University, Tilburg.
  • Franke, B., Hoogman, M., Vasquez, A. A., Heister, J., Savelkoul, P., Naber, M., Scheffer, H., Kiemeney, L., Kan, C., Kooij, J., & Buitelaar, J. (2008). Association of the dopamine transporter (SLC6A3/DAT1) gene 9-6 haplotype with adult ADHD. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147, 1576-1579. doi:10.1002/ajmg.b.30861.

    Abstract

    ADHD is a neuropsychiatric disorder characterized by chronic hyperactivity, inattention and impulsivity, which affects about 5% of school-age children. ADHD persists into adulthood in at least 15% of cases. It is highly heritable and familial influences seem strongest for ADHD persisting into adulthood. However, most of the genetic research in ADHD has been carried out in children with the disorder. The gene that has received most attention in ADHD genetics is SLC6A3/DAT1 encoding the dopamine transporter. In the current study we attempted to replicate in adults with ADHD the reported association of a 10–6 SLC6A3-haplotype, formed by the 10-repeat allele of the variable number of tandem repeat (VNTR) polymorphism in the 3′ untranslated region of the gene and the 6-repeat allele of the VNTR in intron 8 of the gene, with childhood ADHD. In addition, we wished to explore the role of a recently described VNTR in intron 3 of the gene. Two hundred sixteen patients and 528 controls were included in the study. We found a 9–6 SLC6A3-haplotype, rather than the 10–6 haplotype, to be associated with ADHD in adults. The intron 3 VNTR showed no association with adult ADHD. Our findings converge with earlier reports and suggest that age is an important factor to be taken into account when assessing the association of SLC6A3 with ADHD. If confirmed in other studies, the differential association of the gene with ADHD in children and in adults might imply that SLC6A3 plays a role in modulating the ADHD phenotype, rather than causing it.
  • French, C. A., & Fisher, S. E. (2014). What can mice tell us about Foxp2 function? Current Opinion in Neurobiology, 28, 72-79. doi:10.1016/j.conb.2014.07.003.

    Abstract

    Disruptions of the FOXP2 gene cause a rare speech and language disorder, a discovery that has opened up novel avenues for investigating the relevant neural pathways. FOXP2 shows remarkably high conservation of sequence and neural expression in diverse vertebrates, suggesting that studies in other species are useful in elucidating its functions. Here we describe how investigations of mice that carry disruptions of Foxp2 provide insights at multiple levels: molecules, cells, circuits and behaviour. Work thus far has implicated the gene in key processes including neurite outgrowth, synaptic plasticity, sensorimotor integration and motor-skill learning.
  • Friederici, A., & Levelt, W. J. M. (1988). Sprache. In K. Immelmann, K. Scherer, C. Vogel, & P. Schmook (Eds.), Psychobiologie: Grundlagen des Verhaltens (pp. 648-671). Stuttgart: Fischer.
  • Friederici, A. D., & Levelt, W. J. M. (1986). Cognitive processes of spatial coordinate assignment: On weighting perceptual cues. Naturwissenschaften, 73, 455-458.
  • Frost, R. (2014). Learning grammatical structures with and without sleep. PhD Thesis, Lancaster University, Lancaster.
  • Fuhrmann, D., Ravignani, A., Marshall-Pescini, S., & Whiten, A. (2014). Synchrony and motor mimicking in chimpanzee observational learning. Scientific Reports, 4: 5283. doi:10.1038/srep05283.

    Abstract

    Cumulative tool-based culture underwrote our species' evolutionary success and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.

  • Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.

    Abstract

    Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life.
  • Gaby, A. R. (2004). Extended functions of Thaayorre body part terms. Papers in Linguistics and Applied Linguistics, 4(2), 24-34.
  • Ganushchak, L. Y., & Schiller, N. O. (2008). Brain error-monitoring activity is affected by semantic relatedness: An event-related brain potentials study. Journal of Cognitive Neuroscience, 20(5), 927-940. doi:10.1162/jocn.2008.20514.

    Abstract

    Speakers continuously monitor what they say. Sometimes, self-monitoring malfunctions and errors pass undetected and uncorrected. In the field of action monitoring, an event-related brain potential, the error-related negativity (ERN), is associated with error processing. The present study relates the ERN to verbal self-monitoring and investigates how the ERN is affected by auditory distractors during verbal monitoring. We found that the ERN was largest following errors that occurred after semantically related distractors had been presented, as compared to semantically unrelated ones. This result demonstrates that the ERN is sensitive not only to response conflict resulting from the incompatibility of motor responses but also to more abstract lexical retrieval conflict resulting from activation of multiple lexical entries. This, in turn, suggests that the functioning of the verbal self-monitoring system during speaking is comparable to other performance monitoring, such as action monitoring.
  • Ganushchak, L. Y., & Schiller, N. O. (2008). Motivation and semantic context affect brain error-monitoring activity: An event-related brain potentials study. NeuroImage, 39, 395-405. doi:10.1016/j.neuroimage.2007.09.001.

    Abstract

    During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay more attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants’ performance using a picture naming task in a semantic blocking paradigm. Semantic context of to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses.
  • Ganushchak, L., Konopka, A. E., & Chen, Y. (2014). What the eyes say about planning of focused referents during sentence formulation: a cross-linguistic investigation. Frontiers in Psychology, 5: 1124. doi:10.3389/fpsyg.2014.01124.

    Abstract

    This study investigated how sentence formulation is influenced by a preceding discourse context. In two eye-tracking experiments, participants described pictures of two-character transitive events in Dutch (Experiment 1) and Chinese (Experiment 2). Focus was manipulated by presenting questions before each picture. In the Neutral condition, participants first heard ‘What is happening here?’ In the Object or Subject Focus conditions, the questions asked about the Object or Subject character (‘What is the policeman stopping?’ ‘Who is stopping the truck?’). The target response was the same in all conditions (‘The policeman is stopping the truck’). In both experiments, sentence formulation in the Neutral condition showed the expected pattern of speakers fixating the subject character (policeman) before the object character (truck). In contrast, in the focus conditions speakers rapidly directed their gaze preferentially only to the character they needed to encode to answer the question (the new, or focused, character). The timing of gaze shifts to the new character varied by language group (Dutch vs. Chinese): shifts to the new character occurred earlier when information in the question could be repeated in the response with the same syntactic structure (in Chinese but not in Dutch). The results show that discourse affects the timecourse of linguistic formulation in simple sentences and that these effects can be modulated by language-specific linguistic structures such as parallels in the syntax of questions and declarative sentences.
  • Ganushchak, L. Y., & Acheson, D. J. (Eds.). (2014). What's to be learned from speaking aloud? - Advances in the neurophysiological measurement of overt language production. [Research topic] [Special Issue]. Frontiers in Language Sciences. Retrieved from http://www.frontiersin.org/Language_Sciences/researchtopics/What_s_to_be_Learned_from_Spea/1671.

    Abstract

    Researchers have long avoided neurophysiological experiments of overt speech production due to the suspicion that artifacts caused by muscle activity may lead to a bad signal-to-noise ratio in the measurements. However, the need to actually produce speech may influence earlier processing and qualitatively change speech production processes and what we can infer from neurophysiological measures thereof. Recently, however, overt speech has been successfully investigated using EEG, MEG, and fMRI. The aim of this Research Topic is to draw together recent research on the neurophysiological basis of language production, with the aim of developing and extending theoretical accounts of the language production process. In this Research Topic of Frontiers in Language Sciences, we invite both experimental and review papers, as well as those about the latest methods in acquisition and analysis of overt language production data. All aspects of language production are welcome: i.e., from conceptualization to articulation during native as well as multilingual language production. Focus should be placed on using the neurophysiological data to inform questions about the processing stages of language production. In addition, emphasis should be placed on the extent to which the identified components of the electrophysiological signal (e.g., ERP/ERF, neuronal oscillations, etc.), brain areas or networks are related to language comprehension and other cognitive domains. By bringing together electrophysiological and neuroimaging evidence on language production mechanisms, a more complete picture of the locus of language production processes and their temporal and neurophysiological signatures will emerge.
  • García Lecumberri, M. L., Cooke, M., Cutugno, F., Giurgiu, M., Meyer, B. T., Scharenborg, O., Van Dommelen, W., & Volin, J. (2008). The non-native consonant challenge for European languages. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1781-1784). ISCA Archive.

    Abstract

    This paper reports on a multilingual investigation into the effects of different masker types on native and non-native perception in a VCV consonant recognition task. Native listeners outperformed 7 other language groups, but all groups showed a similar ranking of maskers. Strong first language (L1) interference was observed, both from the sound system and from the L1 orthography. Universal acoustic-perceptual tendencies are also at work in both native and non-native sound identifications in noise. The effect of linguistic distance, however, was less clear: in large multilingual studies, listener variables may overpower other factors.
  • Gaskell, M. G., Warker, J., Lindsay, S., Frost, R. L. A., Guest, J., Snowdon, R., & Stackhouse, A. (2014). Sleep Underpins the Plasticity of Language Production. Psychological Science, 25(7), 1457-1465. doi:10.1177/0956797614535937.

    Abstract

    The constraints that govern acceptable phoneme combinations in speech perception and production have considerable plasticity. We addressed whether sleep influences the acquisition of new constraints and their integration into the speech-production system. Participants repeated sequences of syllables in which two phonemes were artificially restricted to syllable onset or syllable coda, depending on the vowel in that sequence. After 48 sequences, participants either had a 90-min nap or remained awake. Participants then repeated 96 sequences so implicit constraint learning could be examined, and then were tested for constraint generalization in a forced-choice task. The sleep group, but not the wake group, produced speech errors at test that were consistent with restrictions on the placement of phonemes in training. Furthermore, only the sleep group generalized their learning to new materials. Polysomnography data showed that implicit constraint learning was associated with slow-wave sleep. These results show that sleep facilitates the integration of new linguistic knowledge with existing production constraints. These data have relevance for systems-consolidation models of sleep.

    Additional information

    https://osf.io/zqg9y/
  • Gast, V., & Levshina, N. (2014). Motivating w(h)-Clefts in English and German: A hypothesis-driven parallel corpus study. In A.-M. De Cesare (Ed.), Frequency, Forms and Functions of Cleft Constructions in Romance and Germanic: Contrastive, Corpus-Based Studies (pp. 377-414). Berlin: De Gruyter.
  • Gebre, B. G., Wittenburg, P., Heskes, T., & Drude, S. (2014). Motion history images for online speaker/signer diarization. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (pp. 1537-1541). Piscataway, NJ: IEEE.

    Abstract

    We present a solution to the problem of online speaker/signer diarization - the task of determining "who spoke/signed when?". Our solution is based on the idea that gestural activity (hands and body movement) is highly correlated with uttering activity. This correlation is necessarily true for sign languages and mostly true for spoken languages. The novel part of our solution is the use of motion history images (MHI) as a likelihood measure for probabilistically detecting uttering activities. MHI is an efficient representation of where and how motion occurred for a fixed period of time. We conducted experiments on 4.9 hours of a publicly available dataset (the AMI meeting data) and 1.4 hours of sign language dataset (Kata Kolok data). The best performance obtained is 15.70% for sign language and 31.90% for spoken language (measurements are in DER). These results show that our solution is applicable in real-world applications like video conferences.
  • Gebre, B. G., Wittenburg, P., Drude, S., Huijbregts, M., & Heskes, T. (2014). Speaker diarization using gesture and speech. In H. Li, & P. Ching (Eds.), Proceedings of Interspeech 2014: 15th Annual Conference of the International Speech Communication Association (pp. 582-586).

    Abstract

    We demonstrate how the problem of speaker diarization can be solved using both gesture and speaker parametric models. The novelty of our solution is that we approach the speaker diarization problem as a speaker recognition problem after learning speaker models from speech samples corresponding to gestures (the occurrence of gestures indicates the presence of speech and the location of gestures indicates the identity of the speaker). This new approach offers many advantages: comparable state-of-the-art performance, faster computation and more adaptability. In our implementation, parametric models are used to model speakers' voice and their gestures: more specifically, Gaussian mixture models are used to model the voice characteristics of each person and all persons, and gamma distributions are used to model gestural activity based on features extracted from Motion History Images. Tests on 4.24 hours of the AMI meeting data show that our solution makes DER score improvements of 19% on speech-only segments and 4% on all segments including silence (the comparison is with the AMI system).
  • Gebre, B. G., Crasborn, O., Wittenburg, P., Drude, S., & Heskes, T. (2014). Unsupervised feature learning for visual sign language identification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: Vol 2 (pp. 370-376). Redhook, NY: Curran Proceedings.

    Abstract

    Prior research on language identification focused primarily on text and speech. In this paper, we focus on the visual modality and present a method for identifying sign languages solely from short video samples. The method is trained on unlabelled video data (unsupervised feature learning) and using these features, it is trained to discriminate between six sign languages (supervised learning). We ran experiments on video samples involving 30 signers (running for a total of 6 hours). Using leave-one-signer-out cross-validation, our evaluation on short video samples shows an average best accuracy of 84%. Given that sign languages are under-resourced, unsupervised feature learning techniques are the right tools and our results indicate that this is realistic for sign language identification.
  • Gentzsch, W., Lecarpentier, D., & Wittenburg, P. (2014). Big data in science and the EUDAT project. In Proceedings of the 2014 Annual SRII Global Conference.
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies, using positron emission tomography, have during different experimental tasks revealed decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task-performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Gialluisi, A., Newbury, D. F., Wilcutt, E. G., Olson, R. K., DeFries, J. C., Brandler, W. M., Pennington, B. F., Smith, S. D., Scerri, T. S., Simpson, N. H., The SLI Consortium, Luciano, M., Evans, D. M., Bates, T. C., Stein, J. F., Talcott, J. B., Monaco, A. P., Paracchini, S., Francks, C., & Fisher, S. E. (2014). Genome-wide screening for DNA variants associated with reading and language traits. Genes, Brain and Behavior, 13, 686-701. doi:10.1111/gbb.12158.

    Abstract

    Reading and language abilities are heritable traits that are likely to share some genetic influences with each other. To identify pleiotropic genetic variants affecting these traits, we first performed a Genome-wide Association Scan (GWAS) meta-analysis using three richly characterised datasets comprising individuals with histories of reading or language problems, and their siblings. GWAS was performed in a total of 1862 participants using the first principal component computed from several quantitative measures of reading- and language-related abilities, both before and after adjustment for performance IQ. We identified novel suggestive associations at the SNPs rs59197085 and rs5995177 (uncorrected p ≈ 10⁻⁷ for each SNP), located respectively at the CCDC136/FLNC and RBFOX2 genes. Each of these SNPs then showed evidence for effects across multiple reading and language traits in univariate association testing against the individual traits. FLNC encodes a structural protein involved in cytoskeleton remodelling, while RBFOX2 is an important regulator of alternative splicing in neurons. The CCDC136/FLNC locus showed association with a comparable reading/language measure in an independent sample of 6434 participants from the general population, although involving distinct alleles of the associated SNP. Our datasets will form an important part of on-going international efforts to identify genes contributing to reading and language skills.
  • Gialluisi, A., Pippucci, T., & Romeo, G. (2014). Reply to ten Kate et al. European Journal of Human Genetics, 2, 157-158. doi:10.1038/ejhg.2013.153.
  • Gisselgard, J., Petersson, K. M., & Ingvar, M. (2004). The irrelevant speech effect and working memory load. NeuroImage, 22, 1107-1116. doi:10.1016/j.neuroimage.2004.02.031.

    Abstract

    Irrelevant speech impairs the immediate serial recall of visually presented material. Previously, we have shown that the irrelevant speech effect (ISE) was associated with a relative decrease of regional blood flow in cortical regions subserving the verbal working memory, in particular the superior temporal cortex. In this extension of the previous study, the working memory load was increased and an increased activity as a response to irrelevant speech was noted in the dorsolateral prefrontal cortex. We suggest that the two studies together provide some basic insights as to the nature of the irrelevant speech effect. Firstly, no area in the brain can be ascribed as the single locus of the irrelevant speech effect. Instead, the functional neuroanatomical substrate to the effect can be characterized in terms of changes in networks of functionally interrelated areas. Secondly, the areas that are sensitive to the irrelevant speech effect are also generically activated by the verbal working memory task itself. Finally, the impact of irrelevant speech and related brain activity depends on working memory load as indicated by the differences between the present and the previous study. From a brain perspective, the irrelevant speech effect may represent a complex phenomenon that is a composite of several underlying mechanisms, which depending on the working memory load, include top-down inhibition as well as recruitment of compensatory support and control processes. We suggest that, in the low-load condition, a selection process by an inhibitory top-down modulation is sufficient, whereas in the high-load condition, at or above working memory span, auxiliary adaptive cognitive resources are recruited as compensation.
  • Goldin-Meadow, S., Chee So, W., Ozyurek, A., & Mylander, C. (2008). The natural order of events: how speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences of the USA, 105(27), 9163-9168. doi:10.1073/pnas.0710060105.

    Abstract

    To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event by using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order on both nonverbal tasks. This order, actor–patient–act, is analogous to the subject–object–verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and exploit when constructing language anew.

    Additional information

    GoldinMeadow_2008_naturalSuppl.pdf
  • Gonzalez da Silva, C., Petersson, K. M., Faísca, L., Ingvar, M., & Reis, A. (2004). The effects of literacy and education on the quantitative and qualitative aspects of semantic verbal fluency. Journal of Clinical and Experimental Neuropsychology, 26(2), 266-277. doi:10.1076/jcen.26.2.266.28089.

    Abstract

    Semantic verbal fluency tasks are commonly used in neuropsychological assessment. Investigations of the influence of level of literacy have not yielded consistent results in the literature. This prompted us to investigate the ecological relevance of task specifics, in particular, the choice of semantic criteria used. Two groups of literate and illiterate subjects were compared on two verbal fluency tasks using different semantic criteria. The performance on a food criterion (supermarket fluency task), considered more ecologically relevant for the two literacy groups, and an animal criterion (animal fluency task) were compared. The data were analysed using both quantitative and qualitative measures. The quantitative analysis indicated that the two literacy groups performed equally well on the supermarket fluency task. In contrast, results differed significantly during the animal fluency task. The qualitative analyses indicated differences between groups related to the strategies used, especially with respect to the animal fluency task. The overall results suggest that there is not a substantial difference between literate and illiterate subjects related to the fundamental workings of semantic memory. However, there is indication that the content of semantic memory reflects differences in shared cultural background (in other words, formal education), as indicated by the significant interaction between level of literacy and semantic criterion.
  • Gonzalez Gomez, N., Hayashi, A., Tsuji, S., Mazuka, R., & Nazzi, T. (2014). The role of the input on the development of the LC bias: A crosslinguistic comparison. Cognition, 132(3), 301-311. doi:10.1016/j.cognition.2014.04.004.

    Abstract

    Previous studies have described the existence of a phonotactic bias called the Labial–Coronal (LC) bias, corresponding to a tendency to produce more words beginning with a labial consonant followed by a coronal consonant (i.e. “bat”) than the opposite CL pattern (i.e. “tap”). This bias has initially been interpreted in terms of articulatory constraints of the human speech production system. However, more recently, it has been suggested that this presumably language-general LC bias in production might be accompanied by LC and CL biases in perception, acquired in infancy on the basis of the properties of the linguistic input. The present study investigates the origins of these perceptual biases, testing infants learning Japanese, a language that has been claimed to possess more CL than LC sequences, and comparing them with infants learning French, a language showing a clear LC bias in its lexicon. First, a corpus analysis of Japanese IDS and ADS revealed the existence of an overall LC bias, except for plosive sequences in ADS, which show a CL bias across counts. Second, speech preference experiments showed a perceptual preference for CL over LC plosive sequences (all recorded by a Japanese speaker) in 13- but not in 7- and 10-month-old Japanese-learning infants (Experiment 1), while revealing the emergence of an LC preference between 7 and 10 months in French-learning infants, using the exact same stimuli. These crosslinguistic behavioral differences, obtained with the same stimuli, thus reflect differences in processing in two populations of infants, which can be linked to differences in the properties of the lexicons of their respective native languages. These findings establish that the emergence of a CL/LC bias is related to exposure to a linguistic input.
  • Goodhew, S. C., McGaw, B., & Kidd, E. (2014). Why is the sunny side always up? Explaining the spatial mapping of concepts by language use. Psychonomic Bulletin & Review, 21(5), 1287-1293. doi:10.3758/s13423-014-0593-6.

    Abstract

    Humans appear to rely on spatial mappings to represent and describe concepts. The conceptual cuing effect describes the tendency for participants to orient attention to a spatial location following the presentation of an unrelated cue word (e.g., orienting attention upward after reading the word sky). To date, such effects have predominately been explained within the embodied cognition framework, according to which people’s attention is oriented on the basis of prior experience (e.g., sky → up via perceptual simulation). However, this does not provide a compelling explanation for how abstract words have the same ability to orient attention. Why, for example, does dream also orient attention upward? We report on an experiment that investigated the role of language use (specifically, collocation between concept words and spatial words for up and down dimensions) and found that it predicted the cuing effect. The results suggest that language usage patterns may be instrumental in explaining conceptual cuing.
  • Gori, M., Vercillo, T., Sandini, G., & Burr, D. (2014). Tactile feedback improves auditory spatial localization. Frontiers in Psychology, 5: 1121. doi:10.3389/fpsyg.2014.01121.

    Abstract

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
  • Goudbeek, M., Cutler, A., & Smits, R. (2008). Supervised and unsupervised learning of multidimensionally varying nonnative speech categories. Speech Communication, 50(2), 109-125. doi:10.1016/j.specom.2007.07.003.

    Abstract

    The acquisition of novel phonetic categories is hypothesized to be affected by the distributional properties of the input, the relation of the new categories to the native phonology, and the availability of supervision (feedback). These factors were examined in four experiments in which listeners were presented with novel categories based on vowels of Dutch. Distribution was varied such that the categorization depended on the single dimension duration, the single dimension frequency, or both dimensions at once. Listeners were clearly sensitive to the distributional information, but unidimensional contrasts proved easier to learn than multidimensional. The native phonology was varied by comparing Spanish versus American English listeners. Spanish listeners found categorization by frequency easier than categorization by duration, but this was not true of American listeners, whose native vowel system makes more use of duration-based distinctions. Finally, feedback was either available or not; this comparison showed supervised learning to be significantly superior to unsupervised learning.
  • Grabe, E. (1998). Comparative intonational phonology: English and German. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057683.
  • De Grauwe, S., Willems, R. M., Rüschemeyer, S.-A., Lemhöfer, K., & Schriefers, H. (2014). Embodied language in first- and second-language speakers: Neural correlates of processing motor verbs. Neuropsychologia, 56, 334-349. doi:10.1016/j.neuropsychologia.2014.02.003.

    Abstract

    The involvement of neural motor and sensory systems in the processing of language has so far mainly been studied in native (L1) speakers. In an fMRI experiment, we investigated whether non-native (L2) semantic representations are rich enough to allow for activation in motor and somatosensory brain areas. German learners of Dutch and a control group of Dutch native speakers made lexical decisions about visually presented Dutch motor and non-motor verbs. Region-of-interest (ROI) and whole-brain analyses indicated that L2 speakers, like L1 speakers, showed significantly increased activation for simple motor compared to non-motor verbs in motor and somatosensory regions. This effect was not restricted to Dutch-German cognate verbs, but was also present for non-cognate verbs. These results indicate that L2 semantic representations are rich enough for motor-related activations to develop in motor and somatosensory areas.
  • De Grauwe, S., Lemhöfer, K., Willems, R. M., & Schriefers, H. (2014). L2 speakers decompose morphologically complex verbs: fMRI evidence from priming of transparent derived verbs. Frontiers in Human Neuroscience, 8: 802. doi:10.3389/fnhum.2014.00802.

    Abstract

    In this functional magnetic resonance imaging (fMRI) long-lag priming study, we investigated the processing of Dutch semantically transparent, derived prefix verbs. In such words, the meaning of the word as a whole can be deduced from the meanings of its parts, e.g., wegleggen “put aside.” Many behavioral and some fMRI studies suggest that native (L1) speakers decompose transparent derived words. The brain region usually implicated in morphological decomposition is the left inferior frontal gyrus (LIFG). In non-native (L2) speakers, the processing of transparent derived words has hardly been investigated, especially in fMRI studies, and results are contradictory: some studies find more reliance on holistic (i.e., non-decompositional) processing by L2 speakers; some find no difference between L1 and L2 speakers. In this study, we wanted to find out whether Dutch transparent derived prefix verbs are decomposed or processed holistically by German L2 speakers of Dutch. Half of the derived verbs (e.g., omvallen “fall down”) were preceded by their stem (e.g., vallen “fall”) with a lag of 4–6 words (“primed”); the other half (e.g., inslapen “fall asleep”) were not (“unprimed”). L1 and L2 speakers of Dutch made lexical decisions on these visually presented verbs. Both region of interest analyses and whole-brain analyses showed that there was a significant repetition suppression effect for primed compared to unprimed derived verbs in the LIFG. This was true both for the analyses over L2 speakers only and for the analyses over the two language groups together. The latter did not reveal any interaction with language group (L1 vs. L2) in the LIFG. Thus, L2 speakers show a clear priming effect in the LIFG, an area that has been associated with morphological decomposition. Our findings are consistent with the idea that L2 speakers engage in decomposition of transparent derived verbs rather than processing them holistically.

  • Gretsch, P. (2004). What does finiteness mean to children? A cross-linguistic perspective on root infinitives. Linguistics, 42(2), 419-468. doi:10.1515/ling.2004.014.

    Abstract

    The discussion on root infinitives has mainly centered around their supposed modal usage. This article aims at modelling the form-function relation of the root infinitive phenomenon by taking into account the full range of interpretational facets encountered cross-linguistically and interindividually. Following the idea of a subsequent “cell partitioning” in the emergence of form-function correlations, I claim that it is the major fission between [±finite] which is central to express temporal reference different from the default here-and-now in tense-oriented languages. In aspect-oriented languages, a similar opposition is mastered with the marking of early aspectual forms. It is observed that in tense-oriented languages like Dutch and German, the progression of functions associated with the infinitival form proceeds from nonmodal to modal, whereas the reverse progression holds for the Russian infinitive. Based on this crucial observation, a model of acquisition is proposed which allows for a flexible and systematic relationship between morphological forms and their respective interpretational biases dependent on their developmental context. As for early child language, I argue that children entertain only two temporal parameters: one parameter is fixed to the here-and-now point in time, and a second parameter relates to the time talked about, the topic time; this latter time overlaps the situation time as long as no empirical evidence exists to support the emergence of a proper distinction between tense and aspect.

  • Groszer, M., Keays, D. A., Deacon, R. M. J., De Bono, J. P., Prasad-Mulcare, S., Gaub, S., Baum, M. G., French, C. A., Nicod, J., Coventry, J. A., Enard, W., Fray, M., Brown, S. D. M., Nolan, P. M., Pääbo, S., Channon, K. M., Costa, R. M., Eilers, J., Ehret, G., Rawlins, J. N. P., & Fisher, S. E. (2008). Impaired synaptic plasticity and motor learning in mice with a point mutation implicated in human speech deficits. Current Biology, 18(5), 354-362. doi:10.1016/j.cub.2008.01.060.

    Abstract

    The most well-described example of an inherited speech and language disorder is that observed in the multigenerational KE family, caused by a heterozygous missense mutation in the FOXP2 gene. Affected individuals are characterized by deficits in the learning and production of complex orofacial motor sequences underlying fluent speech and display impaired linguistic processing for both spoken and written language. The FOXP2 transcription factor is highly similar in many vertebrate species, with conserved expression in neural circuits related to sensorimotor integration and motor learning. In this study, we generated mice carrying an identical point mutation to that of the KE family, yielding the equivalent arginine-to-histidine substitution in the Foxp2 DNA-binding domain. Homozygous R552H mice show severe reductions in cerebellar growth and postnatal weight gain but are able to produce complex innate ultrasonic vocalizations. Heterozygous R552H mice are overtly normal in brain structure and development. Crucially, although their baseline motor abilities appear to be identical to wild-type littermates, R552H heterozygotes display significant deficits in species-typical motor-skill learning, accompanied by abnormal synaptic plasticity in striatal and cerebellar neural circuits.

  • Guadalupe, T., Willems, R. M., Zwiers, M., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Franke, B., Fisher, S. E., & Francks, C. (2014). Differences in cerebral cortical anatomy of left- and right-handers. Frontiers in Psychology, 5: 261. doi:10.3389/fpsyg.2014.00261.

    Abstract

    The left and right sides of the human brain are specialized for different kinds of information processing, and much of our cognition is lateralized to an extent towards one side or the other. Handedness is a reflection of nervous system lateralization. Roughly ten percent of people are mixed- or left-handed, and they show an elevated rate of reductions or reversals of some cerebral functional asymmetries compared to right-handers. Brain anatomical correlates of left-handedness have also been suggested. However, the relationships of left-handedness to brain structure and function remain far from clear. We carried out a comprehensive analysis of cortical surface area differences between 106 left-handed subjects and 1960 right-handed subjects, measured using an automated method of regional parcellation (FreeSurfer, Destrieux atlas). This is the largest study sample that has so far been used in relation to this issue. No individual cortical region showed an association with left-handedness that survived statistical correction for multiple testing, although there was a nominally significant association with the surface area of a previously implicated region: the left precentral sulcus. Identifying brain structural correlates of handedness may prove useful for genetic studies of cerebral asymmetries, as well as providing new avenues for the study of relations between handedness, cerebral lateralization and cognition.
  • Guadalupe, T., Zwiers, M. P., Teumer, A., Wittfeld, K., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2014). Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets. Human Brain Mapping, 35(7), 3277-3289. doi:10.1002/hbm.22401.

    Abstract

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10⁻⁸). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries.
  • Le Guen, O., Senft, G., & Sicoli, M. A. (2008). Language of perception: Views from anthropology. In A. Majid (Ed.), Field Manual Volume 11 (pp. 29-36). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446079.

    Abstract

    To understand the underlying principles of categorisation and classification of sensory input, semantic analyses must be based on both language and culture. The senses are not only physiological phenomena, but they are also linguistic, cultural, and social. The goal of this task is to explore and describe sociocultural patterns relating language of perception, ideologies of perception, and perceptual practice in our speech communities.
  • Le Guen, O. (2008). Ubèel pixan: El camino de las almas: ancestros familiares y colectivos entre los mayas yucatecos. Península, 3(1), 83-120. Retrieved from http://www.revistas.unam.mx/index.php/peninsula/article/viewFile/44354/40086.

    Abstract

    The aim of this article is to analyze the funerary customs and rituals for the souls among the contemporary Yucatec Maya in order to better understand their relation to pre-Hispanic burial patterns. It is suggested that the souls of the dead are considered ancestors, and that family and collective ancestors can be distinguished on the basis of several criteria: the place of burial, the place of ritual performance, and the ritual treatment. On this view, funerary practices, as well as the ritual categories of ancestors (family or collective), are considered reminiscences of ancient practices whose traces can be found throughout the historical sources. Through an analysis of current funerary practices and their variations, this article aims to demonstrate that over time, and despite socioeconomic changes, ancient funerary practices (specifically those of the post-Classic period) have retained some homogeneity, preserving essential characteristics that can still be observed today.
