Publications

  • Enfield, N. J. (2008). Language as shaped by social interaction [Commentary on Christiansen and Chater]. Behavioral and Brain Sciences, 31(5), 519-520. doi:10.1017/S0140525X08005104.

    Abstract

    Language is shaped by its environment, which includes not only the brain, but also the public context in which speech acts are effected. To fully account for why language has the shape it has, we need to examine the constraints imposed by language use as a sequentially organized joint activity, and as the very conduit for linguistic diffusion and change.
  • Enfield, N. J. (2008). Lao linguistics in the 20th century and since. In Y. Goudineau, & M. Lorrillard (Eds.), Recherches nouvelles sur le Laos (pp. 435-452). Paris: EFEO.
  • Enfield, N. J., & Levinson, S. C. (2008). Metalanguage for speech acts. In A. Majid (Ed.), Field manual volume 11 (pp. 77-79). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492937.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J., & Levinson, S. C. (2009). Metalanguage for speech acts. In A. Majid (Ed.), Field manual volume 12 (pp. 51-53). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883559.

    Abstract

    People of all cultures have some degree of concern with categorizing types of communicative social action. All languages have words with meanings like speak, say, talk, complain, curse, promise, accuse, nod, wink, point and chant. But the exact distinctions they make will differ in both quantity and quality. How is communicative social action categorised across languages and cultures? The goal of this task is to establish a basis for cross-linguistic comparison of native metalanguages for social action.
  • Enfield, N. J. (2009). Language and culture. In L. Wei, & V. Cook (Eds.), Contemporary Applied Linguistics Volume 2 (pp. 83-97). London: Continuum.
  • Enfield, N. J. (2009). Language: Social motives for syntax [Review of the book Origins of human communication by Michael Tomasello]. Science, 324(5923), 39. doi:10.1126/science.1172660.
  • Enfield, N. J. (2009). Everyday ritual in the residential world. In G. Senft, & E. B. Basso (Eds.), Ritual communication (pp. 51-80). Oxford: Berg.
  • Enfield, N. J., & Diffloth, G. (2009). Phonology and sketch grammar of Kri, a Vietic language of Laos. Cahiers de Linguistique - Asie Orientale (CLAO), 38(1), 3-69.
  • Enfield, N. J. (2009). Relationship thinking and human pragmatics. Journal of Pragmatics, 41, 60-78. doi:10.1016/j.pragma.2008.09.007.

    Abstract

    The approach to pragmatics explored in this article focuses on elements of social interaction which are of universal relevance, and which may provide bases for a comparative approach. The discussion is anchored by reference to a fragment of conversation from a video-recording of Lao speakers during a home visit in rural Laos. The following points are discussed. First, an understanding of the full richness of context is indispensable for a proper understanding of any interaction. Second, human relationships are a primary locus of social organization, and as such constitute a key focus for pragmatics. Third, human social intelligence forms a universal cognitive under-carriage for interaction, and requires careful cross-cultural study. Fourth, a neo-Peircean framework for a general understanding of semiotic processes gives us a way of stepping away from language as our basic analytical frame. It is argued that in order to get a grip on pragmatics across human groups, we need to take a comparative approach in the biological sense—i.e. with reference to other species as well. From this perspective, human pragmatics is about using semiotic resources to try to meet goals in the realm of social relationships.
  • Enfield, N. J. (2009). The anatomy of meaning: Speech, gesture, and composite utterances. Cambridge: Cambridge University Press.
  • Enfield, N. J., Levinson, S. C., & Stivers, T. (2008). Social action formulation: A "10-minutes" task. In A. Majid (Ed.), Field manual volume 11 (pp. 80-81). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.492939.

    Abstract

    This Field Manual entry has been superseded by the 2009 version: https://doi.org/10.17617/2.883564

  • Enfield, N. J., Levinson, S. C., & Stivers, T. (2009). Social action formulation: A "10-minutes" task. In A. Majid (Ed.), Field manual volume 12 (pp. 54-55). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.883564.

    Abstract

    Human actions in the social world – like greeting, requesting, complaining, accusing, asking, confirming, etc. – are recognised through the interpretation of signs. Language is where much of the action is, but gesture, facial expression and other bodily actions matter as well. The goal of this task is to establish a maximally rich description of a representative, good quality piece of conversational interaction, which will serve as a reference point for comparative exploration of the status of social actions and their formulation across languages.
  • Erard, M. (2009). How Many Languages? Linguists Discover New Tongues in China. Science, 324(5925), 332-333. doi:10.1126/science.324.5925.332a.
  • Ernestus, M., Baayen, R. H., & Schreuder, R. (2002). The recognition of reduced word forms. Brain and Language, 81(1-3), 162-173. doi:10.1006/brln.2001.2514.

    Abstract

    This article addresses the recognition of reduced word forms, which are frequent in casual speech. We describe two experiments on Dutch showing that listeners only recognize highly reduced forms well when these forms are presented in their full context and that the probability that a listener recognizes a word form in limited context is strongly correlated with the degree of reduction of the form. Moreover, we show that the effect of degree of reduction can only partly be interpreted as the effect of the intelligibility of the acoustic signal, which is negatively correlated with degree of reduction. We discuss the consequences of our findings for models of spoken word recognition and especially for the role that storage plays in these models.
  • Ernestus, M., & Neijt, A. (2008). Word length and the location of primary word stress in Dutch, German, and English. Linguistics, 46(3), 507-540. doi:10.1515/LING.2008.017.

    Abstract

    This study addresses the extent to which the location of primary stress in Dutch, German, and English monomorphemic words is affected by the syllables preceding the three final syllables. We present analyses of the monomorphemic words in the CELEX lexical database, which showed that penultimate primary stress is less frequent in Dutch and English trisyllabic than quadrisyllabic words. In addition, we discuss paper-and-pencil experiments in which native speakers assigned primary stress to pseudowords. These experiments provided evidence that in all three languages penultimate stress is more likely in quadrisyllabic than in trisyllabic words. We explain this length effect with the preferences in these languages for word-initial stress and for alternating patterns of stressed and unstressed syllables. The experimental data also showed important intra- and interspeaker variation, and they thus form a challenging test case for theories of language variation.
  • Ernestus, M. (2009). The roles of reconstruction and lexical storage in the comprehension of regular pronunciation variants. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 1875-1878). Causal Productions Pty Ltd.

    Abstract

    This paper investigates how listeners process regular pronunciation variants, resulting from simple general reduction processes. Study 1 shows that when listeners are presented with new words, they store the pronunciation variants presented to them, whether these are unreduced or reduced. Listeners thus store information on word-specific pronunciation variation. Study 2 suggests that if participants are presented with regularly reduced pronunciations, they also reconstruct and store the corresponding unreduced pronunciations. These unreduced pronunciations apparently have special status. Together the results support hybrid models of speech processing, assuming roles for both exemplars and abstract representations.
  • Escudero, P., Hayes-Harb, R., & Mitterer, H. (2008). Novel second-language words and asymmetric lexical access. Journal of Phonetics, 36(2), 345-360. doi:10.1016/j.wocn.2007.11.002.

    Abstract

    The lexical and phonetic mapping of auditorily confusable L2 nonwords was examined by teaching L2 learners novel words and by later examining their word recognition using an eye-tracking paradigm. During word learning, two groups of highly proficient Dutch learners of English learned 20 English nonwords, of which 10 contained the English contrast /e/-/æ/ (a confusable contrast for native Dutch speakers). One group of subjects learned the words by matching their auditory forms to pictured meanings, while a second group additionally saw the spelled forms of the words. We found that the group who received only auditory forms confused words containing /æ/ and /e/ symmetrically, i.e., both /æ/ and /e/ auditory tokens triggered looks to pictures containing both /æ/ and /e/. In contrast, the group who also had access to spelled forms showed the same asymmetric word recognition pattern found by previous studies, i.e., they only looked at pictures of words containing /e/ when presented with /e/ target tokens, but looked at pictures of words containing both /æ/ and /e/ when presented with /æ/ target tokens. The results demonstrate that L2 learners can form lexical contrasts for auditorily confusable novel L2 words. However, and most importantly, this study suggests that explicit information about the contrastive nature of two new sounds may be needed to build separate lexical representations for similar-sounding L2 words.
  • Evans, N., & Levinson, S. C. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(5), 429-492. doi:10.1017/S0140525X0999094X.

    Abstract

    Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective. This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the universal characteristics of language are, once we honestly confront the diversity offered to us by the world's 6,000 to 8,000 languages. After surveying the various uses of “universal,” we illustrate the ways languages vary radically in sound, meaning, and syntactic organization, and then we examine in more detail the core grammatical machinery of recursion, constituency, and grammatical relations. Although there are significant recurrent patterns in organization, these are better explained as stable engineering solutions satisfying multiple design constraints, reflecting both cultural-historical factors and the constraints of human cognition.
  • Evans, N., & Levinson, S. C. (2009). With diversity in mind: Freeing the language sciences from universal grammar [Author's response]. Behavioral and Brain Sciences, 32(5), 472-484. doi:10.1017/S0140525X09990525.

    Abstract

    Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing crosslinguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language diversity makes learning fundamental in the cognition of language: increasingly powerful models of general learning, paired with channelled caregiver input, seem set to manage language acquisition without recourse to any innate “universal grammar.” Understanding why human language has no clear parallels in the animal world requires a cross-species perspective: crucial ingredients are vocal learning (for which there are clear non-primate parallels) and an intention-attributing cognitive infrastructure that provides a universal base for language evolution. We conclude by situating linguistic diversity within a broader trend towards understanding human cognition through the study of variation in, for example, human genetics, neurocognition, and psycholinguistic processing.
  • Everett, D., & Majid, A. (2009). Adventures in the jungle of language [Interview by Asifa Majid and Jon Sutton]. The Psychologist, 22(4), 312-313. Retrieved from http://www.thepsychologist.org.uk/archive/archive_home.cfm?volumeID=22&editionID=174&ArticleID=1494.

    Abstract

    Daniel Everett has spent his career in the Amazon, challenging some fundamental ideas about language and thought. Asifa Majid and Jon Sutton pose the questions.
  • Falcaro, M., Pickles, A., Newbury, D. F., Addis, L., Banfield, E., Fisher, S. E., Monaco, A. P., Simkin, Z., Conti-Ramsden, G., & The SLI Consortium (2008). Genetic and phenotypic effects of phonological short-term memory and grammatical morphology in specific language impairment. Genes, Brain and Behavior, 7, 393-402. doi:10.1111/j.1601-183X.2007.00364.x.

    Abstract

    Deficits in phonological short-term memory and aspects of verb grammar morphology have been proposed as phenotypic markers of specific language impairment (SLI) with the suggestion that these traits are likely to be under different genetic influences. This investigation in 300 first-degree relatives of 93 probands with SLI examined familial aggregation and genetic linkage of two measures thought to index these two traits, non-word repetition and tense marking. In particular, the involvement of chromosomes 16q and 19q was examined as previous studies found these two regions to be related to SLI. Results showed a strong association between relatives' and probands' scores on non-word repetition. In contrast, no association was found for tense marking when examined as a continuous measure. However, significant familial aggregation was found when tense marking was treated as a binary measure with a cut-off point of -1.5 SD, suggestive of the possibility that qualitative distinctions in the trait may be familial while quantitative variability may be more a consequence of non-familial factors. Linkage analyses supported previous findings of the SLI Consortium of linkage to chromosome 16q for phonological short-term memory and to chromosome 19q for expressive language. In addition, we report new findings that relate to the past tense phenotype. For the continuous measure, linkage was found on both chromosomes, but evidence was stronger on chromosome 19. For the binary measure, linkage was observed on chromosome 19 but not on chromosome 16.
  • Faller, M. (2002). Remarks on evidential hierarchies. In D. I. Beaver, L. D. C. Martinez, B. Z. Clark., & S. Kaufmann (Eds.), The construction of meaning (pp. 89-111). Stanford: CSLI Publications.
  • Faller, M. (2002). The evidential and validational licensing conditions for the Cusco Quechua enclitic-mi. Belgian Journal of Linguistics, 16, 7-21. doi:10.1075/bjl.16.02fa.
  • Fedor, A., Pléh, C., Brauer, J., Caplan, D., Friederici, A. D., Gulyás, B., Hagoort, P., Nazir, T., & Singer, W. (2009). What are the brain mechanisms underlying syntactic operations? In D. Bickerton, & E. Szathmáry (Eds.), Biological foundations and origin of syntax (pp. 299-324). Cambridge, MA: MIT Press.

    Abstract

    This chapter summarizes the extensive discussions that took place during the Forum as well as the subsequent months thereafter. It assesses current understanding of the neuronal mechanisms that underlie syntactic structure and processing.... It is posited that to understand the neurobiology of syntax, it might be worthwhile to shift the balance from comprehension to syntactic encoding in language production.
  • Fedorenko, E., Patel, A., Casasanto, D., Winawer, J., & Gibson, E. (2009). Structural integration in language and music: Evidence for a shared system. Memory & Cognition, 37, 1-9. doi:10.3758/MC.37.1.1.

    Abstract

    In this study, we investigate whether language and music share cognitive resources for structural processing. We report an experiment that used sung materials and manipulated linguistic complexity (subject-extracted relative clauses, object-extracted relative clauses) and musical complexity (in-key critical note, out-of-key critical note, auditory anomaly on the critical note involving a loudness increase). The auditory-anomaly manipulation was included in order to test whether the difference between in-key and out-of-key conditions might be due to any salient, unexpected acoustic event. The critical dependent measure involved comprehension accuracies to questions about the propositional content of the sentences asked at the end of each trial. The results revealed an interaction between linguistic and musical complexity such that the difference between the subject- and object-extracted relative clause conditions was larger in the out-of-key condition than in the in-key and auditory-anomaly conditions. These results provide evidence for an overlap in structural processing between language and music.
  • Fisher, S. E., Francks, C., McCracken, J. T., McGough, J. J., Marlow, A. J., MacPhie, I. L., Newbury, D. F., Crawford, L. R., Palmer, C. G. S., Woodward, J. A., Del’Homme, M., Cantwell, D. P., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2002). A genomewide scan for loci involved in Attention-Deficit/Hyperactivity Disorder. American Journal of Human Genetics, 70(5), 1183-1196. doi:10.1086/340112.

    Abstract

    Attention deficit/hyperactivity disorder (ADHD) is a common heritable disorder with a childhood onset. Molecular genetic studies of ADHD have previously focused on examining the roles of specific candidate genes, primarily those involved in dopaminergic pathways. We have performed the first systematic genomewide linkage scan for loci influencing ADHD in 126 affected sib pairs, using a ∼10-cM grid of microsatellite markers. Allele-sharing linkage methods enabled us to exclude any loci with a λs of ⩾3 from 96% of the genome and those with a λs of ⩾2.5 from 91%, indicating that there is unlikely to be a major gene involved in ADHD susceptibility in our sample. Under a strict diagnostic scheme we could exclude all screened regions of the X chromosome for a locus-specific λs of ⩾2 in brother-brother pairs, demonstrating that the excess of affected males with ADHD is probably not attributable to a major X-linked effect. Qualitative trait maximum LOD score analyses pointed to a number of chromosomal sites that may contain genetic risk factors of moderate effect. None exceeded genomewide significance thresholds, but LOD scores were >1.5 for regions on 5p12, 10q26, 12q23, and 16p13. Quantitative-trait analysis of ADHD symptom counts implicated a region on 12p13 (maximum LOD 2.6) that also yielded a LOD >1 when qualitative methods were used. A survey of regions containing 36 genes that have been proposed as candidates for ADHD indicated that 29 of these genes, including DRD4 and DAT1, could be excluded for a λs of 2. Only three of the candidates—DRD5, 5HTT, and CALCYON—coincided with sites of positive linkage identified by our screen. Two of the regions highlighted in the present study, 2q24 and 16p13, coincided with the top linkage peaks reported by a recent genome-scan study of autistic sib pairs.
  • Fisher, S. E., & DeFries, J. C. (2002). Developmental dyslexia: Genetic dissection of a complex cognitive trait. Nature Reviews Neuroscience, 3, 767-780. doi:10.1038/nrn936.

    Abstract

    Developmental dyslexia, a specific impairment of reading ability despite adequate intelligence and educational opportunity, is one of the most frequent childhood disorders. Since the first documented cases at the beginning of the last century, it has become increasingly apparent that the reading problems of people with dyslexia form part of a heritable neurobiological syndrome. As for most cognitive and behavioural traits, phenotypic definition is fraught with difficulties and the genetic basis is complex, making the isolation of genetic risk factors a formidable challenge. Against such a background, it is notable that several recent studies have reported the localization of genes that influence dyslexia and other language-related traits. These investigations exploit novel research approaches that are relevant to many areas of human neurogenetics.
  • Fisher, S. E., & Scharff, C. (2009). FOXP2 as a molecular window into speech and language [Review article]. Trends in Genetics, 25, 166-177. doi:10.1016/j.tig.2009.03.002.

    Abstract

    Rare mutations of the FOXP2 transcription factor gene cause a monogenic syndrome characterized by impaired speech development and linguistic deficits. Recent genomic investigations indicate that its downstream neural targets make broader impacts on common language impairments, bridging clinically distinct disorders. Moreover, the striking conservation of both FoxP2 sequence and neural expression in different vertebrates facilitates the use of animal models to study ancestral pathways that have been recruited towards human speech and language. Intriguingly, reduced FoxP2 dosage yields abnormal synaptic plasticity and impaired motor-skill learning in mice, and disrupts vocal learning in songbirds. Converging data indicate that Foxp2 is important for modulating the plasticity of relevant neural circuits. This body of research represents the first functional genetic forays into neural mechanisms contributing to human spoken language.
  • Fisher, S. E., Francks, C., Marlow, A. J., MacPhie, I. L., Newbury, D. F., Cardon, L. R., Ishikawa-Brush, Y., Richardson, A. J., Talcott, J. B., Gayán, J., Olson, R. K., Pennington, B. F., Smith, S. D., DeFries, J. C., Stein, J. F., & Monaco, A. P. (2002). Independent genome-wide scans identify a chromosome 18 quantitative-trait locus influencing dyslexia. Nature Genetics, 30(1), 86-91. doi:10.1038/ng792.

    Abstract

    Developmental dyslexia is defined as a specific and significant impairment in reading ability that cannot be explained by deficits in intelligence, learning opportunity, motivation or sensory acuity. It is one of the most frequently diagnosed disorders in childhood, representing a major educational and social problem. It is well established that dyslexia is a significantly heritable trait with a neurobiological basis. The etiological mechanisms remain elusive, however, despite being the focus of intensive multidisciplinary research. All attempts to map quantitative-trait loci (QTLs) influencing dyslexia susceptibility have targeted specific chromosomal regions, so that inferences regarding genetic etiology have been made on the basis of very limited information. Here we present the first two complete QTL-based genome-wide scans for this trait, in large samples of families from the United Kingdom and United States. Using single-point analysis, linkage to marker D18S53 was independently identified as being one of the most significant results of the genome in each scan (P ≤ 0.0004 for single word-reading ability in each family sample). Multipoint analysis gave increased evidence of 18p11.2 linkage for single-word reading, yielding top empirical P values of 0.00001 (UK) and 0.0004 (US). Measures related to phonological and orthographic processing also showed linkage at this locus. We replicated linkage to 18p11.2 in a third independent sample of families (from the UK), in which the strongest evidence came from a phoneme-awareness measure (most significant P value=0.00004). A combined analysis of all UK families confirmed that this newly discovered 18p QTL is probably a general risk factor for dyslexia, influencing several reading-related processes. This is the first report of QTL-based genome-wide scanning for a human cognitive trait.
  • Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P., & Pembrey, M. E. (1998). Localisation of a gene implicated in a severe speech and language disorder. Nature Genetics, 18, 168-170. doi:10.1038/ng0298-168.

    Abstract

    Between 2 and 5% of children who are otherwise unimpaired have significant difficulties in acquiring expressive and/or receptive language, despite adequate intelligence and opportunity. While twin studies indicate a significant role for genetic factors in developmental disorders of speech and language, the majority of families segregating such disorders show complex patterns of inheritance, and are thus not amenable for conventional linkage analysis. A rare exception is the KE family, a large three-generation pedigree in which approximately half of the members are affected with a severe speech and language disorder which appears to be transmitted as an autosomal dominant monogenic trait. This family has been widely publicised as suffering primarily from a defect in the use of grammatical suffixation rules, thus supposedly supporting the existence of genes specific to grammar. The phenotype, however, is broader in nature, with virtually every aspect of grammar and of language affected. In addition, affected members have a severe orofacial dyspraxia, and their speech is largely incomprehensible to the naive listener. We initiated a genome-wide search for linkage in the KE family and have identified a region on chromosome 7 which co-segregates with the speech and language disorder (maximum lod score = 6.62 at theta = 0.0), confirming autosomal dominant inheritance with full penetrance. Further analysis of microsatellites from within the region enabled us to fine map the locus responsible (designated SPCH1) to a 5.6-cM interval in 7q31, thus providing an important step towards its identification. Isolation of SPCH1 may offer the first insight into the molecular genetics of the developmental process that culminates in speech and language.
  • Fisher, S. E. (2002). Isolation of the genetic factors underlying speech and language disorders. In R. Plomin, J. C. DeFries, I. W. Craig, & P. McGuffin (Eds.), Behavioral genetics in the postgenomic era (pp. 205-226). Washington, DC: American Psychological Association.

    Abstract

    This chapter highlights the research in isolating genetic factors underlying specific language impairment (SLI), or developmental dysphasia, which exploits newly developed genotyping technology, novel statistical methodology, and DNA sequence data generated by the Human Genome Project. The author begins with an overview of results from family, twin, and adoption studies supporting genetic involvement and then goes on to outline progress in a number of genetic mapping efforts that have been recently completed or are currently under way. It has been possible for genetic researchers to pinpoint the specific mutation responsible for some speech and language disorders, providing an example of how the availability of human genomic sequence data can greatly accelerate the pace of disease gene discovery. Finally, the author discusses future prospects on how molecular genetics may offer new insight into the etiology underlying speech and language disorders, leading to improvements in diagnosis and treatment.
  • Fitz, H. (2009). Neural syntax. PhD Thesis, Universiteit van Amsterdam, Institute for Logic, Language, and Computation.

    Abstract

    Children learn their mother tongue spontaneously and effortlessly through communicative interaction with their environment; they do not have to be taught explicitly or learn how to learn first. The ambient language to which children are exposed, however, is highly variable and arguably deficient with regard to the learning target. Nonetheless, most normally developing children learn their native language rapidly and with ease. To explain this accomplishment, many theories of acquisition posit innate constraints on learning, or even a biological endowment for language which is specific to language. Usage-based theories, on the other hand, place more emphasis on the role of experience and domain-general learning mechanisms than on innate language-specific knowledge. But languages are lexically open and combinatorial in structure, so no amount of experience covers their expressivity. Usage-based theories therefore have to explain how children can generalize the properties of their linguistic input to an adult-like grammar.

    In this thesis I provide an explicit computational mechanism with which usage-based theories of language can be tested and evaluated. The focus of my work lies on complex syntax and the human ability to form sentences which express more than one proposition by means of relativization. This `capacity for recursion' is a hallmark of an adult grammar and, as some have argued, the human language faculty itself.

    The manuscript is organized as follows. In the second chapter, I give an overview of results that characterize the properties of neural networks as mathematical objects and review previous attempts at modelling the acquisition of complex syntax with such networks. The chapter introduces the conceptual landscape in which the current work is located. In the third chapter, I argue that the construction and use of meaning is essential in child language acquisition and adult processing. Neural network models need to incorporate this dimension of human linguistic behavior. I introduce the Dual-path model of sentence production and syntactic development which is able to represent semantics and learns from exposure to sentences paired with their meaning (cf. Chang et al. 2006). I explain the architecture of this model, motivate critical assumptions behind its design, and discuss existing research using this model. The fourth chapter describes and compares several extensions of the basic architecture to accommodate the processing of multi-clause utterances. These extensions are evaluated against computational desiderata, such as good learning and generalization performance and the parsimony of input representations. A single-best solution for encoding the meaning of complex sentences with restrictive relative clauses is identified, which forms the basis for all subsequent simulations. Chapter five analyzes the learning dynamics in more detail. I first examine the model's behavior for different relative clause types. Syntactic alternations prove to be particularly difficult to learn because they complicate the meaning-to-form mapping the model has to acquire. In the second part, I probe the internal representations the model has developed during learning. It is argued that the model acquires the argument structure of the construction types in its input language and represents the hierarchical organization of distinct multi-clause utterances.

    The juice of this thesis is contained in chapters six to eight. In chapter six, I test the Dual-path model's generalization capacities in a variety of tasks. I show that its syntactic representations are sufficiently transparent to allow structural generalization to novel complex utterances. Semantic similarities between novel and familiar sentence types play a critical role in this task. The Dual-path model also has a capacity for generalizing familiar words to novel slots in novel constructions (strong semantic systematicity). Moreover, I identify learning conditions under which the model displays recursive productivity. It is argued that the model's behavior is consistent with human behavior in that production accuracy degrades with depth of embedding, and right-branching is learned faster than center-embedding recursion. In chapter seven, I address the issue of learning complex polar interrogatives in the absence of positive exemplars in the input. I show that the Dual-path model can acquire the syntax of these questions from simpler and similar structures which are warranted in a child's linguistic environment. The model's errors closely match children's errors, and it is suggested that children might not require an innate learning bias to acquire auxiliary fronting. Since the model does not implement a traditional kind of language-specific universal grammar, these results are relevant to the poverty of the stimulus debate. English relative clause constructions give rise to similar performance orderings in adult processing and child language acquisition. This pattern matches the typological universal called the noun phrase accessibility hierarchy. I propose an input-based explanation of this data in chapter eight. The Dual-path model displays this ordering in syntactic development when exposed to plausible input distributions. But it is possible to manipulate and completely remove the ordering by varying properties of the input from which the model learns. This indicates, I argue, that patterns of interference and facilitation among input structures can explain the hierarchy when all structures are simultaneously learned and represented over a single set of connection weights.

    Finally, I draw conclusions from this work, address some unanswered questions, and give a brief outlook on how this research might be continued.

    Additional information

    http://dare.uva.nl/record/328271
  • Fitz, H., & Chang, F. (2009). Syntactic generalization in a connectionist model of sentence production. In J. Mayor, N. Ruh, & K. Plunkett (Eds.), Connectionist models of behaviour and cognition II: Proceedings of the 11th Neural Computation and Psychology Workshop (pp. 289-300). River Edge, NJ: World Scientific Publishing.

    Abstract

    We present a neural-symbolic learning model of sentence production which displays strong semantic systematicity and recursive productivity. Using this model, we provide evidence for the data-driven learnability of complex yes/no-questions.
  • Fitz, H., & Chang, F. (2008). The role of the input in a connectionist model of the accessibility hierarchy in development. In H. Chan, H. Jacob, & E. Kapia (Eds.), Proceedings from the 32nd Annual Boston University Conference on Language Development [BUCLD 32] (pp. 120-131). Somerville, Mass.: Cascadilla Press.
  • FitzPatrick, I., & Weber, K. (2008). “Il piccolo principe est allé”: Processing of language switches in auditory sentence comprehension. Journal of Neuroscience, 28(18), 4581-4582. doi:10.1523/JNEUROSCI.0905-08.2008.
  • Flores d'Arcais, G., & Lahiri, A. (1987). Max-Planck-Institute for Psycholinguistics: Annual Report Nr.8 1987. Nijmegen: MPI for Psycholinguistics.
  • Floyd, S. (2009). Nexos históricos, gramaticales y culturales de los números en cha'palaa [Historical, grammatical and cultural connections of Cha'palaa numerals]. In Proceedings of the Conference on Indigenous Languages of Latin America (CILLA) IV.

    Abstract

    The South American languages have diverse types of numeral systems, from systems of just two or three terms in some Amazonian languages to systems extending into the thousands. A look at the system of the Cha'palaa language of Ecuador demonstrates base-2, base-5, base-10 and base-20 features, linked to different stages of change, development and language contact. Learning about these stages permits us to propose some correlations between them and what we know about the history of cultural contact in the region.
  • Floyd, S. (2008). The Pirate media economy and the emergence of Quichua language media spaces in Ecuador. Anthropology of Work Review, 29(2), 34-41. doi:10.1111/j.1548-1417.2008.00012.x.

    Abstract

    This paper gives an account of the pirate media economy of Ecuador and its role in the emergence of indigenous Quichua-language media spaces, identifying the different parties involved in this economy, discussing their relationship to the parallel “legitimate” media economy, and considering the implications of this informal media market for Quichua linguistic and cultural reproduction. As digital recording and playback technology has become increasingly more affordable and widespread over recent years, black markets have grown up worldwide, based on cheap “illegal” reproduction of commercial media, today sold by informal entrepreneurs in rural markets, shops and street corners around Ecuador. Piggybacking on this pirate infrastructure, Quichua-speaking media producers and consumers have begun to circulate indigenous-language video at an unprecedented rate, helped by small-scale merchants who themselves profit by supplying market demands for positive images of indigenous people. In a context of a national media that has tended to silence indigenous voices rather than amplify them, informal media producers, consumers and vendors are developing relationships that open meaningful media spaces within the particular social, economic and linguistic contexts of Ecuador.
  • Foley, W., & Van Valin Jr., R. D. (2009). Functional syntax and universal grammar (Repr.). Cambridge University Press.

    Abstract

    The key argument of this book, originally published in 1984, is that when human beings communicate with each other by means of a natural language they typically do not do so in simple sentences but rather in connected discourse - complex expressions made up of a number of clauses linked together in various ways. A necessary precondition for intelligible discourse is the speaker’s ability to signal the temporal relations between the events that are being discussed and to refer to the participants in those events in such a way that it is clear who is being talked about. A great deal of the grammatical machinery in a language is devoted to this task, and Functional Syntax and Universal Grammar explores how different grammatical systems accomplish it. This book is an important attempt to integrate the study of linguistic form with the study of language use and meaning. It will be of particular interest to field linguists and those concerned with typology and language universals, and also to anthropologists involved in the study of language function.
  • Folia, V., Uddén, J., Forkstam, C., Ingvar, M., Hagoort, P., & Petersson, K. M. (2008). Implicit learning and dyslexia. Annals of the New York Academy of Sciences, 1145, 132-150. doi:10.1196/annals.1416.012.

    Abstract

    Several studies have reported an association between dyslexia and implicit learning deficits. It has been suggested that the weakness in implicit learning observed in dyslexic individuals may be related to sequential processing and implicit sequence learning. In the present article, we review the current literature on implicit learning and dyslexia. We describe a novel, forced-choice structural "mere exposure" artificial grammar learning paradigm and characterize this paradigm in normal readers in relation to the standard grammaticality classification paradigm. We argue that preference classification is a more optimal measure of the outcome of implicit acquisition since in the preference version participants are kept completely unaware of the underlying generative mechanism, while in the grammaticality version, the subjects have, at least in principle, been informed about the existence of an underlying complex set of rules at the point of classification (but not during acquisition). On the basis of the "mere exposure effect," we tested the prediction that the development of preference will correlate with the grammaticality status of the classification items. In addition, we examined the effects of grammaticality (grammatical/nongrammatical) and associative chunk strength (ACS; high/low) on the classification tasks (preference/grammaticality). Using a balanced ACS design in which the factors of grammaticality (grammatical/nongrammatical) and ACS (high/low) were independently controlled in a 2 × 2 factorial design, we confirmed our predictions. We discuss the suitability of this task for further investigation of the implicit learning characteristics in dyslexia.
  • Folia, V., Forkstam, C., Hagoort, P., & Petersson, K. M. (2009). Language comprehension: The interplay between form and content. In N. Taatgen, & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 1686-1691). Austin, TX: Cognitive Science Society.

    Abstract

    In a 2x2 event-related FMRI study we find support for the idea that the inferior frontal cortex, centered on Broca’s region and its homologue, is involved in constructive unification operations during the structure-building process in parsing for comprehension. Tentatively, we provide evidence for a role of the dorsolateral prefrontal cortex centered on BA 9/46 in the control component of the language system. Finally, the left temporo-parietal cortex, in the vicinity of Wernicke’s region, supports the interaction between the syntax of gender agreement and sentence-level semantics.
  • Forkstam, C., Elwér, A., Ingvar, M., & Petersson, K. M. (2008). Instruction effects in implicit artificial grammar learning: A preference for grammaticality. Brain Research, 1221, 80-92. doi:10.1016/j.brainres.2008.05.005.

    Abstract

    Human implicit learning can be investigated with implicit artificial grammar learning, a paradigm that has been proposed as a simple model for aspects of natural language acquisition. In the present study we compared the typical yes–no grammaticality classification, with yes–no preference classification. In the case of preference instruction no reference to the underlying generative mechanism (i.e., grammar) is needed and the subjects are therefore completely uninformed about an underlying structure in the acquisition material. In experiment 1, subjects engaged in a short-term memory task using only grammatical strings without performance feedback for 5 days. As a result of the 5 acquisition days, classification performance was independent of instruction type and both the preference and the grammaticality group acquired relevant knowledge of the underlying generative mechanism to a similar degree. Changing the grammatical strings to random strings in the acquisition material (experiment 2) resulted in classification being driven by local substring familiarity. Contrasting repeated vs. non-repeated preference classification (experiment 3) showed that the effect of local substring familiarity decreases with repeated classification. This was not the case for repeated grammaticality classifications. We conclude that classification performance is largely independent of instruction type and that forced-choice preference classification is equivalent to the typical grammaticality classification.
  • Forkstam, C., Jansson, A., Ingvar, M., & Petersson, K. M. (2009). Modality transfer of acquired structural regularities: A preference for an acoustic route. In N. Taatgen, & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.

    Abstract

    Human implicit learning can be investigated with implicit artificial grammar learning, a simple model for aspects of natural language acquisition. In this paper we investigate the remaining effect of modality transfer in syntactic classification of an acquired grammatical sequence structure after implicit grammar acquisition. Participants practiced either on acoustically presented syllable sequences or visually presented consonant letter sequences. During classification we independently manipulated the statistical frequency-based and rule-based characteristics of the classification stimuli. Participants performed reliably above chance on the within modality classification task although more so for those working on syllable sequence acquisition. These subjects were also the only group that kept a significant performance level in transfer classification. We speculate that this finding is of particular relevance in consideration of an ecological validity in the input signal in the use of artificial grammar learning and in language learning paradigms at large.
  • Francks, C., Fisher, S. E., MacPhie, I. L., Richardson, A. J., Marlow, A. J., Stein, J. F., & Monaco, A. P. (2002). A genomewide linkage screen for relative hand skill in sibling pairs. American Journal of Human Genetics, 70(3), 800-805. doi:10.1086/339249.

    Abstract

    Genomewide quantitative-trait locus (QTL) linkage analysis was performed using a continuous measure of relative hand skill (PegQ) in a sample of 195 reading-disabled sibling pairs from the United Kingdom. This was the first genomewide screen for any measure related to handedness. The mean PegQ in the sample was equivalent to that of normative data, and PegQ was not correlated with tests of reading ability (correlations between −0.13 and 0.05). Relative hand skill could therefore be considered normal within the sample. A QTL on chromosome 2p11.2-12 yielded strong evidence for linkage to PegQ (empirical P=.00007), and another suggestive QTL on 17p11-q23 was also identified (empirical P=.002). The 2p11.2-12 locus was further analyzed in an independent sample of 143 reading-disabled sibling pairs, and this analysis yielded an empirical P=.13. Relative hand skill therefore is probably a complex multifactorial phenotype with a heterogeneous background, but nevertheless is amenable to QTL-based gene-mapping approaches.
  • Francks, C. (2009). 13 - LRRTM1: A maternally suppressed genetic effect on handedness and schizophrenia. In I. E. C. Sommer, & R. S. Kahn (Eds.), Cerebral lateralization and psychosis (pp. 181-196). Cambridge: Cambridge University Press.

    Abstract

    The molecular, developmental, and evolutionary bases of human brain asymmetry are almost completely unknown. Genetic linkage and association mapping have pin-pointed a gene called LRRTM1 (leucine-rich repeat transmembrane neuronal 1) that may contribute to variability in human handedness. Here I describe how LRRTM1's involvement in handedness was discovered, and also the latest knowledge of its functions in brain development and disease. The association of LRRTM1 with handedness was derived entirely from the paternally inherited gene, and follow-up analysis of gene expression confirmed that LRRTM1 is one of a small number of genes that are imprinted in the human genome, for which the maternally inherited copy is suppressed. The same variation at LRRTM1 that was associated paternally with mixed-/left-handedness was also over-transmitted paternally to schizophrenic patients in a large family study.
    LRRTM1 is expressed in specific regions of the developing and adult forebrain by post-mitotic neurons, and the protein may be involved in axonal trafficking. Thus LRRTM1 has a probable role in neurodevelopment, and its association with handedness suggests that one of its functions may be in establishing or consolidating human brain asymmetry.
    LRRTM1 is the first gene for which allelic variation has been associated with human handedness. The genetic data also suggest indirectly that the epigenetic regulation of this gene may yet prove more important than DNA sequence variation for influencing brain development and disease.
    Intriguingly, the parent-of-origin activity of LRRTM1 suggests that men and women have had conflicting interests in relation to the outcome of lateralized brain development in their offspring.
  • Francks, C., Fisher, S. E., Olson, R. K., Pennington, B. F., Smith, S. D., DeFries, J. C., & Monaco, A. P. (2002). Fine mapping of the chromosome 2p12-16 dyslexia susceptibility locus: Quantitative association analysis and positional candidate genes SEMA4F and OTX1. Psychiatric Genetics, 12(1), 35-41.

    Abstract

    A locus on chromosome 2p12-16 has been implicated in dyslexia susceptibility by two independent linkage studies, including our own study of 119 nuclear twin-based families, each with at least one reading-disabled child. Nonetheless, no variant of any gene has been reported to show association with dyslexia, and no consistent clinical evidence exists to identify candidate genes with any strong a priori logic. We used 21 microsatellite markers spanning 2p12-16 to refine our 1-LOD unit linkage support interval to 12cM between D2S337 and D2S286. Then, in quantitative association analysis, two microsatellites yielded P values<0.05 across a range of reading-related measures (D2S2378 and D2S2114). The exon/intron borders of two positional candidate genes within the region were characterized, and the exons were screened for polymorphisms. The genes were Semaphorin4F (SEMA4F), which encodes a protein involved in axonal growth cone guidance, and OTX1, encoding a homeodomain transcription factor involved in forebrain development. Two non-synonymous single nucleotide polymorphisms were found in SEMA4F, each with a heterozygosity of 0.03. One intronic single nucleotide polymorphism between exons 12 and 13 of SEMA4F was tested for quantitative association, but no significant association was found. Only one single nucleotide polymorphism was found in OTX1, which was exonic but silent. Our data therefore suggest that linkage with reading disability at 2p12-16 is not caused by coding variants of SEMA4F or OTX1. Our study outlines the approach necessary for the identification of genetic variants causing dyslexia susceptibility in an epidemiological population of dyslexics.
  • Francks, C., MacPhie, I. L., & Monaco, A. P. (2002). The genetic basis of dyslexia. The Lancet Neurology, 1(8), 483-490. doi:10.1016/S1474-4422(02)00221-1.

    Abstract

    Dyslexia, a disorder of reading and spelling, is a heterogeneous neurological syndrome with a complex genetic and environmental aetiology. People with dyslexia differ in their individual profiles across a range of cognitive, physiological, and behavioural measures related to reading disability. Some or all of the subtypes of dyslexia might have partly or wholly distinct genetic causes. An understanding of the role of genetics in dyslexia could help to diagnose and treat susceptible children more effectively and rapidly than is currently possible and in ways that account for their individual disabilities. This knowledge will also give new insights into the neurobiology of reading and language cognition. Genetic linkage analysis has identified regions of the genome that might harbour inherited variants that cause reading disability. In particular, loci on chromosomes 6 and 18 have shown strong and replicable effects on reading abilities. These genomic regions contain tens or hundreds of candidate genes, and studies aimed at the identification of the specific causal genetic variants are underway.
  • Francks, C. (2009). Understanding the genetics of behavioural and psychiatric traits will only be achieved through a realistic assessment of their complexity. Laterality: Asymmetries of Body, Brain and Cognition, 14(1), 11-16. doi:10.1080/13576500802536439.

    Abstract

    Francks et al. (2007) performed a recent study in which the first putative genetic effect on human handedness was identified (the imprinted locus LRRTM1 on human chromosome 2). In this issue of Laterality, Tim Crow and colleagues present a critique of that study. The present paper presents a personal response to that critique which argues that Francks et al. (2007) published a substantial body of evidence implicating LRRTM1 in handedness and schizophrenia. Progress will now be achieved by others trying to validate, refute, or extend those findings, rather than by further armchair discussion.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2008). World knowledge in computational models of discourse comprehension. Discourse Processes, 45(6), 429-463. doi:10.1080/01638530802069926.

    Abstract

    Because higher level cognitive processes generally involve the use of world knowledge, computational models of these processes require the implementation of a knowledge base. This article identifies and discusses 4 strategies for dealing with world knowledge in computational models: disregarding world knowledge, ad hoc selection, extraction from text corpora, and implementation of all knowledge about a simplified microworld. Each of these strategies is illustrated by a detailed discussion of a model of discourse comprehension. It is argued that seemingly successful modeling results are uninformative if knowledge is implemented ad hoc or not at all, that knowledge extracted from large text corpora is not appropriate for discourse comprehension, and that a suitable implementation can be obtained by applying the microworld strategy.
  • Franke, B., Hoogman, M., Vasquez, A. A., Heister, J., Savelkoul, P., Naber, M., Scheffer, H., Kiemeney, L., Kan, C., Kooij, J., & Buitelaar, J. (2008). Association of the dopamine transporter (SLC6A3/DAT1) gene 9-6 haplotype with adult ADHD. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 147, 1576-1579. doi:10.1002/ajmg.b.30861.

    Abstract

    ADHD is a neuropsychiatric disorder characterized by chronic hyperactivity, inattention and impulsivity, which affects about 5% of school-age children. ADHD persists into adulthood in at least 15% of cases. It is highly heritable and familial influences seem strongest for ADHD persisting into adulthood. However, most of the genetic research in ADHD has been carried out in children with the disorder. The gene that has received most attention in ADHD genetics is SLC6A3/DAT1 encoding the dopamine transporter. In the current study we attempted to replicate in adults with ADHD the reported association of a 10–6 SLC6A3-haplotype, formed by the 10-repeat allele of the variable number of tandem repeat (VNTR) polymorphism in the 3′ untranslated region of the gene and the 6-repeat allele of the VNTR in intron 8 of the gene, with childhood ADHD. In addition, we wished to explore the role of a recently described VNTR in intron 3 of the gene. Two hundred sixteen patients and 528 controls were included in the study. We found a 9–6 SLC6A3-haplotype, rather than the 10–6 haplotype, to be associated with ADHD in adults. The intron 3 VNTR showed no association with adult ADHD. Our findings converge with earlier reports and suggest that age is an important factor to be taken into account when assessing the association of SLC6A3 with ADHD. If confirmed in other studies, the differential association of the gene with ADHD in children and in adults might imply that SLC6A3 plays a role in modulating the ADHD phenotype, rather than causing it.
  • Fransson, P., Merboldt, K.-D., Petersson, K. M., Ingvar, M., & Frahm, J. (2002). On the effects of spatial filtering — A comparative fMRI study of episodic memory encoding at high and low resolution. NeuroImage, 16(4), 977-984. doi:10.1006/nimg.2002.1079.

    Abstract

    The effects of spatial filtering in functional magnetic resonance imaging were investigated by reevaluating the data of a previous study of episodic memory encoding at 2 × 2 × 4-mm³ resolution with use of a SPM99 analysis involving a Gaussian kernel of 8-mm full width at half maximum. In addition, a multisubject analysis of activated regions was performed by normalizing the functional images to an approximate Talairach brain atlas. In individual subjects, spatial filtering merged activations in anatomically separated brain regions. Moreover, small foci of activated pixels which originated from veins became blurred and hence indistinguishable from parenchymal responses. The multisubject analysis resulted in activation of the hippocampus proper, a finding which could not be confirmed by the activation maps obtained at high resolution. It is concluded that the validity of multisubject fMRI analyses can be considerably improved by first analyzing individual data sets at optimum resolution to assess the effects of spatial filtering and minimize the risk of signal contamination by macroscopically visible vessels.
  • Friederici, A., & Levelt, W. J. M. (1987). Resolving perceptual conflicts: The cognitive mechanism of spatial orientation. Aviation, Space, and Environmental Medicine, 58(9), A164-A169.
  • Friederici, A., & Levelt, W. J. M. (1987). Spatial description in microgravity: Aspects of cognitive adaptation. In P. R. Sahm, R. Jansen, & M. Keller (Eds.), Proceedings of the Norderney Symposium on Scientific Results of the German Spacelab Mission D1 (pp. 518-524). Köln, Germany: Wissenschaftliche Projektführung DI c/o DFVLR.
  • Friederici, A., & Levelt, W. J. M. (1987). Sprache. In K. Immelmann, K. Scherer, & C. Vogel (Eds.), Funkkolleg Psychobiologie (pp. 58-87). Weinheim: Beltz.
  • Friedlaender, J., Hunley, K., Dunn, M., Terrill, A., Lindström, E., Reesink, G., & Friedlaender, F. (2009). Linguistics more robust than genetics [Letter to the editor]. Science, 324, 464-465. doi:10.1126/science.324_464c.
  • Ganushchak, L. Y., & Schiller, N. O. (2008). Brain error-monitoring activity is affected by semantic relatedness: An event-related brain potentials study. Journal of Cognitive Neuroscience, 20(5), 927-940. doi:10.1162/jocn.2008.20514.

    Abstract

    Speakers continuously monitor what they say. Sometimes, self-monitoring malfunctions and errors pass undetected and uncorrected. In the field of action monitoring, an event-related brain potential, the error-related negativity (ERN), is associated with error processing. The present study relates the ERN to verbal self-monitoring and investigates how the ERN is affected by auditory distractors during verbal monitoring. We found that the ERN was largest following errors that occurred after semantically related distractors had been presented, as compared to semantically unrelated ones. This result demonstrates that the ERN is sensitive not only to response conflict resulting from the incompatibility of motor responses but also to more abstract lexical retrieval conflict resulting from activation of multiple lexical entries. This, in turn, suggests that the functioning of the verbal self-monitoring system during speaking is comparable to other performance monitoring, such as action monitoring.
  • Ganushchak, L. Y., & Schiller, N. O. (2008). Motivation and semantic context affect brain error-monitoring activity: An event-related brain potentials study. NeuroImage, 39, 395-405. doi:10.1016/j.neuroimage.2007.09.001.

    Abstract

    During speech production, we continuously monitor what we say. In situations in which speech errors potentially have more severe consequences, e.g. during a public presentation, our verbal self-monitoring system may pay closer attention to preventing errors than in situations in which speech errors are more acceptable, such as a casual conversation. In an event-related potential study, we investigated whether or not motivation affected participants’ performance using a picture naming task in a semantic blocking paradigm. Semantic context of to-be-named pictures was manipulated; blocks were semantically related (e.g., cat, dog, horse, etc.) or semantically unrelated (e.g., cat, table, flute, etc.). Motivation was manipulated independently by monetary reward. The motivation manipulation did not affect error rate during picture naming. However, the high-motivation condition yielded increased amplitude and latency values of the error-related negativity (ERN) compared to the low-motivation condition, presumably indicating higher monitoring activity. Furthermore, participants showed semantic interference effects in reaction times and error rates. The ERN amplitude was also larger during semantically related than unrelated blocks, presumably indicating that semantic relatedness induces more conflict between possible verbal responses.
  • Ganushchak, L. Y., & Schiller, N. O. (2009). Speaking in one’s second language under time pressure: An ERP study on verbal self-monitoring in German-Dutch bilinguals. Psychophysiology, 46, 410-419. doi:10.1111/j.1469-8986.2008.00774.x.

    Abstract

    This study addresses how verbal self-monitoring and the Error-Related Negativity (ERN) are affected by time pressure when a task is performed in a second language as opposed to performance in the native language. German–Dutch bilinguals were required to perform a phoneme-monitoring task in Dutch with and without a time pressure manipulation. We obtained an ERN following verbal errors that showed an atypical increase in amplitude under time pressure. This finding is taken to suggest that under time pressure participants had more interference from their native language, which in turn led to a greater response conflict and thus enhancement of the amplitude of the ERN. This result demonstrates once more that the ERN is sensitive to psycholinguistic manipulations and suggests that the functioning of the verbal self-monitoring system during speaking is comparable to other performance monitoring, such as action monitoring.
  • Garcia, N., Lenkiewicz, P., Freire, M., & Monteiro, P. (2009). A new architecture for optical burst switching networks based on cooperative control. In Proceedings of the 8th IEEE International Symposium on Network Computing and Applications (IEEE NCA09) (pp. 310-313).

    Abstract

    This paper presents a new architecture for optical burst switched networks where the control plane of the network functions in a cooperative manner. Each node interprets the data conveyed by the control packet and forwards it to the next nodes, making the control plane of the network distribute the relevant information to all the nodes in the network. A cooperation transmission tree is used, thus allowing all the nodes to store the information related to the traffic management in the network, and enabling better network resource planning at each node. A model of this network architecture is proposed, and its performance is evaluated.
  • García Lecumberri, M. L., Cooke, M., Cutugno, F., Giurgiu, M., Meyer, B. T., Scharenborg, O., Van Dommelen, W., & Volin, J. (2008). The non-native consonant challenge for European languages. In INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association (pp. 1781-1784). ISCA Archive.

    Abstract

    This paper reports on a multilingual investigation into the effects of different masker types on native and non-native perception in a VCV consonant recognition task. Native listeners outperformed 7 other language groups, but all groups showed a similar ranking of maskers. Strong first language (L1) interference was observed, both from the sound system and from the L1 orthography. Universal acoustic-perceptual tendencies are also at work in both native and non-native sound identifications in noise. The effect of linguistic distance, however, was less clear: in large multilingual studies, listener variables may overpower other factors.
  • Garrido, L., Eisner, F., McGettigan, C., Stewart, L., Sauter, D., Hanley, J. R., Schweinberger, S. R., Warren, J. D., & Duchaine, B. (2009). Developmental phonagnosia: A selective deficit of vocal identity recognition. Neuropsychologia, 47(1), 123-131. doi:10.1016/j.neuropsychologia.2008.08.003.

    Abstract

    Phonagnosia, the inability to recognize familiar voices, has been studied in brain-damaged patients but no cases due to developmental problems have been reported. Here we describe the case of KH, a 60-year-old active professional woman who reports that she has always experienced severe voice recognition difficulties. Her hearing abilities are normal, and an MRI scan showed no evidence of brain damage in regions associated with voice or auditory perception. To better understand her condition and to assess models of voice and high-level auditory processing, we tested KH on behavioural tasks measuring voice recognition, recognition of vocal emotions, face recognition, speech perception, and processing of environmental sounds and music. KH was impaired on tasks requiring the recognition of famous voices and the learning and recognition of new voices. In contrast, she performed well on nearly all other tasks. Her case is the first report of developmental phonagnosia, and the results suggest that the recognition of a speaker’s vocal identity depends on separable mechanisms from those used to recognize other information from the voice or non-vocal auditory stimuli.
  • Gazendam, L., Wartena, C., Malaise, V., Schreiber, G., De Jong, A., & Brugman, H. (2009). Automatic annotation suggestions for audiovisual archives: Evaluation aspects. Interdisciplinary Science Reviews, 34(2/3), 172-188. doi:10.1179/174327909X441090.

    Abstract

    In the context of large and ever growing archives, generating annotation suggestions automatically from textual resources related to the documents to be archived is an interesting option in theory. It could save a lot of work in the time consuming and expensive task of manual annotation and it could help cataloguers attain a higher inter-annotator agreement. However, some questions arise in practice: what is the quality of the automatically produced annotations? How do they compare with manual annotations and with the requirements for annotation that were defined in the archive? If different from the manual annotations, are the automatic annotations wrong? In the CHOICE project, partially hosted at the Netherlands Institute for Sound and Vision, the Dutch public archive for audiovisual broadcasts, we automatically generate annotation suggestions for cataloguers. In this paper, we define three types of evaluation of these annotation suggestions: (1) a classic and strict evaluation measure expressing the overlap between automatically generated keywords and the manual annotations, (2) a loosened evaluation measure for which semantically very similar annotations are also considered as relevant matches, and (3) an in-use evaluation of the usefulness of manual versus automatic annotations in the context of serendipitous browsing. During serendipitous browsing, the annotations (manual or automatic) are used to retrieve and visualize semantically related documents.
  • Gentner, D., & Bowerman, M. (2009). Why some spatial semantic categories are harder to learn than others: The typological prevalence hypothesis. In J. Guo, E. Lieven, N. Budwig, S. Ervin-Tripp, K. Nakamura, & S. Ozcaliskan (Eds.), Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin (pp. 465-480). New York: Psychology Press.
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies, using positron emission tomography, have during different experimental tasks revealed decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task-performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Giering, E., Tinbergen, M., & Verbunt, A. (2009). Research Report 2007 | 2008. Nijmegen: Max Planck Institute for Psycholinguistics.
  • Glaser, B., & Holmans, P. (2009). Comparison of methods for combining case-control and family-based association studies. Human Heredity, 68(2), 106-116. doi:10.1159/000212503.

    Abstract

    OBJECTIVES: Combining the analysis of family-based samples with unrelated individuals can enhance the power of genetic association studies. Various combined analysis techniques have been recently developed; as yet, there have been no comparisons of their power, or robustness to confounding factors. We investigated empirically the power of up to six combined methods using simulated samples of trios and unrelated cases/controls (TDTCC), trios and unrelated controls (TDTC), and affected sibpairs with parents and unrelated cases/controls (ASPFCC). METHODS: We simulated multiplicative, dominant and recessive models with varying risk parameters in single samples. Additionally, we studied false-positive rates and investigated, if possible, the coverage of the true genetic effect (TDTCC). RESULTS/CONCLUSIONS: Under the TDTCC design, we identified four approaches with equivalent power and false-positive rates. Combined statistics were more powerful than single-sample statistics or a pooled chi(2)-statistic when risk parameters were similar in single samples. Adding parental information to the CC part of the joint likelihood increased the power of generalised logistic regression under the TDTC but not the TDTCC scenario. Formal testing of differences between risk parameters in subsamples was the most sensitive approach to avoid confounding in combined analysis. Non-parametric analysis based on Monte-Carlo testing showed the highest power for ASPFCC samples.
  • De Goede, D., Shapiro, L. P., Wester, F., Swinney, D. A., & Bastiaanse, Y. R. M. (2009). The time course of verb processing in Dutch sentences. Journal of Psycholinguistic Research, 38(3), 181-199. doi:10.1007/s10936-009-9117-3.

    Abstract

    The verb has traditionally been characterized as the central element in a sentence. Nevertheless, the exact role of the verb during the actual ongoing comprehension of a sentence as it unfolds in time remains largely unknown. This paper reports the results of two Cross-Modal Lexical Priming (CMLP) experiments detailing the pattern of verb priming during on-line processing of Dutch sentences. Results are contrasted with data from a third CMLP experiment on priming of nouns in similar sentences. It is demonstrated that the meaning of a matrix verb remains active throughout the entire matrix clause, while this is not the case for the meaning of a subject head noun. Activation of the meaning of the verb only dissipates upon encountering a clear signal as to the start of a new clause.
  • Goldin-Meadow, S., Chee So, W., Ozyurek, A., & Mylander, C. (2008). The natural order of events: how speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences of the USA, 105(27), 9163-9168. doi:10.1073/pnas.0710060105.

    Abstract

    To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event by using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order, and did so on both nonverbal tasks. This order, actor–patient–act, is analogous to the subject–object–verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and exploit when constructing language anew.

    Additional information

    GoldinMeadow_2008_naturalSuppl.pdf
  • Goldin-Meadow, S., Ozyurek, A., Sancar, B., & Mylander, C. (2009). Making language around the globe: A cross-linguistic study of homesign in the United States, China, and Turkey. In J. Guo, E. Lieven, N. Budwig, S. Ervin-Tripp, K. Nakamura, & S. Ozcaliskan (Eds.), Crosslinguistic approaches to the psychology of language: Research in the tradition of Dan Isaac Slobin (pp. 27-39). New York: Psychology Press.
  • Goldin-Meadow, S., Gentner, D., Ozyurek, A., & Gurcanli, O. (2009). Spatial language supports spatial cognition: Evidence from deaf homesigners [abstract]. Cognitive Processing, 10(Suppl. 2), S133-S134.
  • Goudbeek, M., Cutler, A., & Smits, R. (2008). Supervised and unsupervised learning of multidimensionally varying nonnative speech categories. Speech Communication, 50(2), 109-125. doi:10.1016/j.specom.2007.07.003.

    Abstract

    The acquisition of novel phonetic categories is hypothesized to be affected by the distributional properties of the input, the relation of the new categories to the native phonology, and the availability of supervision (feedback). These factors were examined in four experiments in which listeners were presented with novel categories based on vowels of Dutch. Distribution was varied such that the categorization depended on the single dimension duration, the single dimension frequency, or both dimensions at once. Listeners were clearly sensitive to the distributional information, but unidimensional contrasts proved easier to learn than multidimensional. The native phonology was varied by comparing Spanish versus American English listeners. Spanish listeners found categorization by frequency easier than categorization by duration, but this was not true of American listeners, whose native vowel system makes more use of duration-based distinctions. Finally, feedback was either available or not; this comparison showed supervised learning to be significantly superior to unsupervised learning.
  • Goudbeek, M., Swingley, D., & Smits, R. (2009). Supervised and unsupervised learning of multidimensional acoustic categories. Journal of Experimental Psychology: Human Perception and Performance, 35, 1913-1933. doi:10.1037/a0015781.

    Abstract

    Learning to recognize the contrasts of a language-specific phonemic repertoire can be viewed as forming categories in a multidimensional psychophysical space. Research on the learning of distributionally defined visual categories has shown that categories defined over 1 dimension are easy to learn and that learning multidimensional categories is more difficult but tractable under specific task conditions. In 2 experiments, adult participants learned either a unidimensional or a multidimensional category distinction with or without supervision (feedback) during learning. The unidimensional distinctions were readily learned and supervision proved beneficial, especially in maintaining category learning beyond the learning phase. Learning the multidimensional category distinction proved to be much more difficult and supervision was not nearly as beneficial as with unidimensionally defined categories. Maintaining a learned multidimensional category distinction was only possible when the distributional information that identified the categories remained present throughout the testing phase. We conclude that listeners are sensitive to both trial-by-trial feedback and the distributional information in the stimuli. Even given limited exposure, listeners learned to use 2 relevant dimensions, albeit with considerable difficulty.
  • Grabe, E. (1998). Comparative intonational phonology: English and German. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057683.
  • Graham, S. A., Jégouzo, S. A. F., Yan, S., Powlesland, A. S., Brady, J. P., Taylor, M. E., & Drickamer, K. (2009). Prolectin, a glycan-binding receptor on dividing B cells in germinal centers. The Journal of Biological Chemistry, 284, 18537-18544. doi:10.1074/jbc.M109.012807.

    Abstract

    Prolectin, a previously undescribed glycan-binding receptor, has been identified by re-screening of the human genome for genes encoding proteins containing potential C-type carbohydrate-recognition domains. Glycan array analysis revealed that the carbohydrate-recognition domain in the extracellular domain of the receptor binds glycans with terminal α-linked mannose or fucose residues. Prolectin expressed in fibroblasts is found at the cell surface, but unlike many glycan-binding receptors it does not mediate endocytosis of a neoglycoprotein ligand. However, compared with other known glycan-binding receptors, the receptor contains an unusually large intracellular domain that consists of multiple sequence motifs, including phosphorylated tyrosine residues, that allow it to interact with signaling molecules such as Grb2. Immunohistochemistry has been used to demonstrate that prolectin is expressed on a specialized population of proliferating B cells in germinal centers. Thus, this novel receptor has the potential to function in carbohydrate-mediated communication between cells in the germinal center.
  • Groszer, M., Keays, D. A., Deacon, R. M. J., De Bono, J. P., Prasad-Mulcare, S., Gaub, S., Baum, M. G., French, C. A., Nicod, J., Coventry, J. A., Enard, W., Fray, M., Brown, S. D. M., Nolan, P. M., Pääbo, S., Channon, K. M., Costa, R. M., Eilers, J., Ehret, G., Rawlins, J. N. P., & Fisher, S. E. (2008). Impaired synaptic plasticity and motor learning in mice with a point mutation implicated in human speech deficits. Current Biology, 18(5), 354-362. doi:10.1016/j.cub.2008.01.060.

    Abstract

    The most well-described example of an inherited speech and language disorder is that observed in the multigenerational KE family, caused by a heterozygous missense mutation in the FOXP2 gene. Affected individuals are characterized by deficits in the learning and production of complex orofacial motor sequences underlying fluent speech and display impaired linguistic processing for both spoken and written language. The FOXP2 transcription factor is highly similar in many vertebrate species, with conserved expression in neural circuits related to sensorimotor integration and motor learning. In this study, we generated mice carrying an identical point mutation to that of the KE family, yielding the equivalent arginine-to-histidine substitution in the Foxp2 DNA-binding domain. Homozygous R552H mice show severe reductions in cerebellar growth and postnatal weight gain but are able to produce complex innate ultrasonic vocalizations. Heterozygous R552H mice are overtly normal in brain structure and development. Crucially, although their baseline motor abilities appear to be identical to wild-type littermates, R552H heterozygotes display significant deficits in species-typical motor-skill learning, accompanied by abnormal synaptic plasticity in striatal and cerebellar neural circuits.

    Additional information

    mmc1.pdf
  • Gubian, M., Torreira, F., Strik, H., & Boves, L. (2009). Functional data analysis as a tool for analyzing speech dynamics: A case study on the French word c'était. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 2199-2202).

    Abstract

    In this paper we introduce Functional Data Analysis (FDA) as a tool for analyzing dynamic transitions in speech signals. FDA makes it possible to perform statistical analyses of sets of mathematical functions in the same way as classical multivariate analysis treats scalar measurement data. We illustrate the use of FDA with a reduction phenomenon affecting the French word c'était /setε/ 'it was', which can be reduced to [stε] in conversational speech. FDA reveals that the dynamics of the transition from [s] to [t] in fully reduced cases may still be different from the dynamics of [s] - [t] transitions in underlying /st/ clusters such as in the word stage.
  • Le Guen, O. (2009). Geocentric gestural deixis among Yucatecan Maya (Quintana Roo, México). In 18th IACCP Book of Selected Congress Papers (pp. 123-136). Athens, Greece: Pedio Books Publishing.
  • Le Guen, O., Senft, G., & Sicoli, M. A. (2008). Language of perception: Views from anthropology. In A. Majid (Ed.), Field Manual Volume 11 (pp. 29-36). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446079.

    Abstract

    To understand the underlying principles of categorisation and classification of sensory input, semantic analyses must be based on both language and culture. The senses are not only physiological phenomena, but they are also linguistic, cultural, and social. The goal of this task is to explore and describe sociocultural patterns relating language of perception, ideologies of perception, and perceptual practice in our speech communities.
  • Le Guen, O. (2009). The ethnography of emotions: A field worker's guide. In A. Majid (Ed.), Field manual volume 12 (pp. 31-34). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446076.

    Abstract

    The goal of this task is to investigate cross-cultural emotion categories in language and thought. This entry is designed to provide researchers with some guidelines to describe the emotional repertoire of a community from an emic perspective. The first objective is to offer ethnographic tools and a questionnaire in order to understand the semantics of emotional terms and the local conception of emotions. The second objective is to identify the local display rules of emotions in communicative interactions.
  • Le Guen, O. (2008). Ubèel pixan: El camino de las almas. Ancestros familiares y colectivos entre los Mayas Yucatecos. Península, 3(1), 83-120. Retrieved from http://www.revistas.unam.mx/index.php/peninsula/article/viewFile/44354/40086.

    Abstract

    The aim of this article is to analyze the funerary customs and rituals for the souls among contemporary Yucatec Maya in order to better understand their relation to pre-Hispanic burial patterns. It is suggested that the souls of the dead are considered ancestors, and that family and collective ancestors can be distinguished according to several criteria: the place of burial, the place of ritual performance, and the ritual treatment. On this view, funerary practices, as well as the ritual categories of ancestors (family or collective), are considered reminiscences of ancient practices whose traces can be found throughout historical sources. Through an analysis of current funerary practices and their variations, this article aims to demonstrate that over time, and despite socio-economic changes, ancient funerary practices (specifically those of the post-classic period) have retained some homogeneity, preserving essential characteristics that can still be observed today.
  • Guirardello-Damian, R., & Skiba, R. (2002). Trumai Corpus: An example of presenting multi-media data in the IMDI-browser. In P. Austin, H. Dry, & P. Wittenburg (Eds.), Proceedings of the international LREC workshop on resources and tools in field linguistics (pp. 16-1-16-8). Paris: European Language Resources Association.

    Abstract

    Trumai, a genetically isolated language spoken in Brazil (Xingu reserve), is an example of an endangered language. Although the Trumai population consists of more than 100 individuals, only 51 people speak the language. The oral traditions are progressively dying. Given the current scenario, the documentation of this language and its cultural aspects is of great importance. In the framework of the DoBeS program (Documentation of Endangered Languages), the project "Documentation of Trumai" has selected and organized a collection of Trumai texts, with a multi-media representation of the corpus. Several kinds of information and data types are being included in the archive of the language: texts with audio and video recordings; written texts from educational materials; drawings; photos; songs; annotations in different formats; lexicon; field notes; results from scientific studies of the language (sound system, sketch grammar, comparative studies with other Xinguan languages), etc. All materials are integrated into the IMDI-Browser, a specialized tool for presenting and searching for linguistic data. This paper explores the processing phases and the results of the Trumai project taking into consideration the issue of how to combine the needs and wishes of field linguistics (content and research aspects) and the needs of archiving (structure and workflow aspects) in a well-organized corpus.
  • Gullberg, M., & Holmqvist, K. (2002). Visual attention towards gestures in face-to-face interaction vs. on screen. In I. Wachsmuth, & T. Sowa (Eds.), Gesture and sign languages in human-computer interaction (pp. 206-214). Berlin: Springer.
  • Gullberg, M., & Kita, S. (2009). Attention to speech-accompanying gestures: Eye movements and information uptake. Journal of Nonverbal Behavior, 33(4), 251-277. doi:10.1007/s10919-009-0073-2.

    Abstract

    There is growing evidence that addressees in interaction integrate the semantic information conveyed by speakers’ gestures. Little is known, however, about whether and how addressees’ attention to gestures and the integration of gestural information can be modulated. This study examines the influence of a social factor (speakers’ gaze to their own gestures), and two physical factors (the gesture’s location in gesture space and gestural holds) on addressees’ overt visual attention to gestures (direct fixations of gestures) and their uptake of gestural information. It also examines the relationship between gaze and uptake. The results indicate that addressees’ overt visual attention to gestures is affected both by speakers’ gaze and holds but for different reasons, whereas location in space plays no role. Addressees’ uptake of gesture information is only influenced by speakers’ gaze. There is little evidence of a direct relationship between addressees’ direct fixations of gestures and their uptake.
  • Gullberg, M. (2008). A helping hand? Gestures, L2 learners, and grammar. In S. G. McCafferty, & G. Stam (Eds.), Gesture: Second language acquisition and classroom research (pp. 185-210). New York: Routledge.

    Abstract

    This chapter explores what L2 learners' gestures reveal about L2 grammar. The focus is on learners’ difficulties with maintaining reference in discourse caused by their incomplete mastery of pronouns. The study highlights the systematic parallels between properties of L2 speech and gesture, and the parallel effects of grammatical development in both modalities. The validity of a communicative account of interlanguage grammar in this domain is tested by taking the cohesive properties of the gesture-speech ensemble into account. Specifically, I investigate whether learners use gestures to compensate for and to license over-explicit reference in speech. The results rule out a communicative account for the spoken variety of maintained reference. In contrast, cohesive gestures are found to be multi-functional. While the presence of cohesive gestures is not communicatively motivated, their spatial realisation is. It is suggested that gestures are exploited as a grammatical communication strategy to disambiguate speech wherever possible, but that they may also be doing speaker-internal work. The methodological importance of considering L2 gestures when studying grammar is also discussed.
  • Gullberg, M., & Indefrey, P. (2008). Cognitive and neural prerequisites for time in language: Any answers? Language Learning, 58(suppl. 1), 207-216. doi:10.1111/j.1467-9922.2008.00472.x.
  • Gullberg, M., & Indefrey, P. (2008). Cognitive and neural prerequisites for time in language: Any answers? In P. Indefrey, & M. Gullberg (Eds.), Time to speak: Cognitive and neural prerequisites for time in language (pp. 207-216). Oxford: Blackwell.
  • Gullberg, M. (2008). Gestures and second language acquisition. In P. Robinson, & N. C. Ellis (Eds.), Handbook of cognitive linguistics and second language acquisition (pp. 276-305). New York: Routledge.

    Abstract

    Gestures, the symbolic movements speakers perform while they speak, are systematically related to speech and language at multiple levels, and reflect cognitive and linguistic activities in non-trivial ways. This chapter presents an overview of what gestures can tell us about the processes of second language acquisition. It focuses on two key aspects, (a) gestures and the developing language system and (b) gestures and learning, and discusses some implications of an expanded view of language acquisition that takes gestures into account.
  • Gullberg, M., De Bot, K., & Volterra, V. (2008). Gestures and some key issues in the study of language development. Gesture, 8(2), 149-179. doi:10.1075/gest.8.2.03gul.

    Abstract

    The purpose of the current paper is to outline how gestures can contribute to the study of some key issues in language development. Specifically, we (1) briefly summarise what is already known about gesture in the domains of first and second language development, and development or changes over the life span more generally; (2) highlight theoretical and empirical issues in these domains where gestures can contribute in important ways to further our understanding; and (3) summarise some common themes in all strands of research on language development that could be the target of concentrated research efforts.
  • Gullberg, M., & De Bot, K. (Eds.). (2008). Gestures in language development [Special Issue]. Gesture, 8(2).
  • Gullberg, M. (2002). Gestures, languages, and language acquisition. In S. Strömqvist (Ed.), The diversity of languages and language learning (pp. 45-56). Lund: Lund University.
  • Gullberg, M., & McCafferty, S. G. (2008). Introduction to gesture and SLA: Toward an integrated approach. Studies in Second Language Acquisition, 30(2), 133-146. doi:10.1017/S0272263108080285.

    Abstract

    The title of this special issue, Gesture and SLA: Toward an Integrated Approach, stems in large part from the idea known as integrationism, principally set forth by Harris (2003, 2005), which posits that it is time to “demythologize” linguistics, moving away from the “orthodox exponents” that have idealized the notion of language. The integrationist approach intends a view that focuses on communication—that is, language in use, language as a “fact of life” (Harris, 2003, p. 50). Although not all gesture studies embrace an integrationist view—indeed, the field applies numerous theories across various disciplines—it is nonetheless true that to study gesture is to study what has traditionally been called paralinguistic modes of interaction, with the paralinguistic label given on the assumption that gesture is not part of the core meaning of what is rendered linguistically. However, arguably, most researchers within gesture studies would maintain just the opposite: The studies presented in this special issue reflect a view whereby gesture is regarded as a central aspect of language in use, integral to how we communicate (make meaning) both with each other and with ourselves.
  • Gullberg, M., Hendriks, H., & Hickmann, M. (2008). Learning to talk and gesture about motion in French. First Language, 28(2), 200-236. doi:10.1177/0142723707088074.

    Abstract

    This study explores how French adults and children aged four and six years talk and gesture about voluntary motion, examining (1) how they encode path and manner in speech, (2) how they encode this information in accompanying gestures; and (3) whether gestures are co-expressive with speech or express other information. When path and manner are equally relevant, children’s and adults’ speech and gestures both focus on path, rather than on manner. Moreover, gestures are predominantly co-expressive with speech at all ages. However, when they are non-redundant, adults tend to gesture about path while talking about manner, whereas children gesture about both path and manner while talking about path. The discussion highlights implications for our understanding of speakers’ representations and their development.
  • Gullberg, M. (1998). Gesture as a communication strategy in second language discourse: A study of learners of French and Swedish. Lund: Lund University Press.

    Abstract

    Gestures are often regarded as the most typical compensatory device used by language learners in communicative trouble. Yet gestural solutions to communicative problems have rarely been studied within any theory of second language use. The work presented in this volume aims to account for second language learners’ strategic use of speech-associated gestures by combining a process-oriented framework for communication strategies with a cognitive theory of gesture. Two empirical studies are presented. The production study investigates Swedish learners of French and French learners of Swedish and their use of strategic gestures. The results, which are based on analyses of both individual and group behaviour, contradict popular opinion as well as theoretical assumptions from both fields. Gestures are not primarily used to replace speech, nor are they chiefly mimetic. Instead, learners use gestures with speech, and although they do exploit mimetic gestures to solve lexical problems, they also use more abstract gestures to handle discourse-related difficulties and metalinguistic commentary. The influence of factors such as proficiency, task, culture, and strategic competence on gesture use is discussed, and the oral and gestural strategic modes are compared. In the evaluation study, native speakers’ assessments of learners’ gestures, and the potential effect of gestures on evaluations of proficiency are analysed and discussed in terms of individual communicative style. Compensatory gestures function at multiple communicative levels. This has implications for theories of communication strategies, and an expansion of the existing frameworks is discussed taking both cognitive and interactive aspects into account.
  • Gullberg, M. (2009). Gestures and the development of semantic representations in first and second language acquisition. Acquisition et Interaction en Langue Etrangère / Languages, Interaction, and Acquisition (former AILE), 1, 117-139.

    Abstract

    This paper argues that speech-associated gestures can usefully inform studies exploring development of meaning in first and second language acquisition. The example domain is caused motion or placement meaning (putting a cup on a table) where acquisition problems have been observed and where adult native gesture use reflects crosslinguistically different placement verb semantics. Against this background, the paper summarises three studies examining the development of semantic representations in Dutch children acquiring Dutch, and adult learners acquiring Dutch and French placement verbs. Overall, gestures change systematically with semantic development both in children and adults and (1) reveal what semantic elements are included in current semantic representations, whether target-like or not, and (2) highlight developmental shifts in those representations. There is little evidence that gestures chiefly act as a support channel. Instead, the data support the theoretical notion that speech and gesture form an integrated system, opening new possibilities for studying the processes of acquisition.
  • Gullberg, M. (2009). Reconstructing verb meaning in a second language: How English speakers of L2 Dutch talk and gesture about placement. Annual Review of Cognitive Linguistics, 7, 221-245. doi:10.1075/arcl.7.09gul.

    Abstract

    This study examines to what extent English speakers of L2 Dutch reconstruct the meanings of placement verbs when moving from a general L1 verb of caused motion (put) to two specific caused posture verbs (zetten/leggen ‘set/lay’) in the L2 and whether the existence of low-frequency cognate forms in the L1 (set/lay) alleviates the reconstruction problem. Evidence from speech and gesture indicates that English speakers have difficulties with the specific verbs in L2 Dutch, initially looking for means to express general caused motion in L1-like fashion through over-generalisation. The gesture data further show that target-like forms are often used to convey L1-like meaning. However, the differentiated use of zetten for vertical placement and dummy verbs (gaan ‘go’ and doen ‘do’) and intransitive posture verbs (zitten/staan/liggen ‘sit, stand, lie’) for horizontal placement, and a positive correlation between appropriate verb use and target-like gesturing suggest a beginning sensitivity to the semantic parameters of the L2 verbs and possible reconstruction.
  • Gullberg, M., Indefrey, P., & Muysken, P. (2009). Research techniques for the study of code-switching. In B. E. Bullock, & J. A. Toribio (Eds.), The Cambridge handbook on linguistic code-switching (pp. 21-39). Cambridge: Cambridge University Press.

    Abstract

    The aim of this chapter is to provide researchers with a tool kit of semi-experimental and experimental techniques for studying code-switching. It presents an overview of the current off-line and on-line research techniques, ranging from analyses of published bilingual texts of spontaneous conversations, to tightly controlled experiments. A multi-task approach used for studying code-switched sentence production in Papiamento-Dutch bilinguals is also exemplified.
  • Gullberg, M. (2009). Why gestures are relevant to the bilingual mental lexicon. In A. Pavlenko (Ed.), The bilingual mental lexicon: Interdisciplinary approaches (pp. 161-184). Clevedon: Multilingual Matters.

    Abstract

    Gestures, the symbolic movements speakers perform while they speak, are systematically related to speech and language in non-trivial ways. This chapter presents an overview of what gestures can and cannot tell us about the monolingual and the bilingual mental lexicon. Gesture analysis opens for a broader view of the mental lexicon, targeting the interface between conceptual, semantic and syntactic aspects of event construal, and offers new possibilities for examining how languages co-exist and interact in bilinguals beyond the level of surface forms. The first section of this chapter gives a brief introduction to gesture studies and outlines the current views on the relationship between gesture, speech, and language. The second section targets the key questions for the study of the monolingual and bilingual lexicon, and illustrates the methods employed for addressing these questions. It further exemplifies systematic cross-linguistic patterns in gestural behaviour in monolingual and bilingual contexts. The final section discusses some implications of an expanded view of the multilingual lexicon that includes gesture, and outlines directions for future inquiry.

  • Gulrajani, G., & Harrison, D. (2002). SHAWEL: Sharable and interactive web-lexicons. In P. Austin, H. Dry, & P. Wittenburg (Eds.), Proceedings of the international LREC workshop on resources and tools in field linguistics (pp. 9-1-9-4). Paris: European Language Resources Association.

    Abstract

    A prototypical lexicon tool was implemented which was intended to allow researchers to collaboratively create lexicons of endangered languages. Increasingly often researchers documenting or analyzing a language work at different locations. Lexicons that evolve through continuous interaction between the collaborators can only be efficiently produced when it can be accessed and manipulated via the Internet. The SHAWEL tool was developed to address these needs; it makes use of a thin Java client and a central database solution.
  • Hagoort, P. (2008). Should psychology ignore the language of the brain? Current Directions in Psychological Science, 17(2), 96-101. doi:10.1111/j.1467-8721.2008.00556.x.

    Abstract

    Claims that neuroscientific data do not contribute to our understanding of psychological functions have been made recently. Here I argue that these criticisms are solely based on an analysis of functional magnetic resonance imaging (fMRI) studies. However, fMRI is only one of the methods in the toolkit of cognitive neuroscience. I provide examples from research on event-related brain potentials (ERPs) that have contributed to our understanding of the cognitive architecture of human language functions. In addition, I provide evidence of (possible) contributions from fMRI measurements to our understanding of the functional architecture of language processing. Finally, I argue that a neurobiology of human language that integrates information about the necessary genetic and neural infrastructures will allow us to answer certain questions that are not answerable if all we have is evidence from behavior.
  • Hagoort, P. (2008). The fractionation of spoken language understanding by measuring electrical and magnetic brain signals. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 363, 1055-1069. doi:10.1098/rstb.2007.2159.

    Abstract

    This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding. Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no ‘magic moment’ when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone. Moreover, for one particular event-related brain potential (ERP) component (the N400), equivalent impact of sentence- and discourse-semantic contexts is observed. This indicates that in comprehension, a spoken word is immediately evaluated relative to the widest interpretive domain available. In addition, this happens very quickly. Findings are discussed that show that often an unfolding word can be mapped onto discourse-level representations well before the end of the word. Overall, the time course of the ERP effects is compatible with the view that the different information types (lexical, syntactic, phonological, pragmatic) are processed in parallel and influence the interpretation process incrementally, that is as soon as the relevant pieces of information are available. This is referred to as the immediacy principle.
