Publications

  • Grabe, E. (1998). Comparative intonational phonology: English and German. PhD Thesis, Radboud University Nijmegen, Nijmegen. doi:10.17617/2.2057683.
  • Graham, S. A., Jégouzo, S. A. F., Yan, S., Powlesland, A. S., Brady, J. P., Taylor, M. E., & Drickamer, K. (2009). Prolectin, a glycan-binding receptor on dividing B cells in germinal centers. The Journal of Biological Chemistry, 284, 18537-18544. doi:10.1074/jbc.M109.012807.

    Abstract

    Prolectin, a previously undescribed glycan-binding receptor, has been identified by re-screening of the human genome for genes encoding proteins containing potential C-type carbohydrate-recognition domains. Glycan array analysis revealed that the carbohydrate-recognition domain in the extracellular domain of the receptor binds glycans with terminal α-linked mannose or fucose residues. Prolectin expressed in fibroblasts is found at the cell surface, but unlike many glycan-binding receptors it does not mediate endocytosis of a neoglycoprotein ligand. However, compared with other known glycan-binding receptors, the receptor contains an unusually large intracellular domain that consists of multiple sequence motifs, including phosphorylated tyrosine residues, that allow it to interact with signaling molecules such as Grb2. Immunohistochemistry has been used to demonstrate that prolectin is expressed on a specialized population of proliferating B cells in germinal centers. Thus, this novel receptor has the potential to function in carbohydrate-mediated communication between cells in the germinal center.
  • Gretsch, P. (2004). What does finiteness mean to children? A cross-linguistic perspective on root infinitives. Linguistics, 42(2), 419-468. doi:10.1515/ling.2004.014.

    Abstract

    The discussion on root infinitives has mainly centered around their supposed modal usage. This article aims at modelling the form-function relation of the root infinitive phenomenon by taking into account the full range of interpretational facets encountered cross-linguistically and interindividually. Following the idea of a subsequent "cell partitioning" in the emergence of form-function correlations, I claim that it is the major fission between [±finite] which is central to express temporal reference different from the default here&now in tense-oriented languages. In aspectual-oriented languages, a similar opposition is mastered with the marking of early aspectual forms. It is observed that in tense-oriented languages like Dutch and German, the progression of functions associated with the infinitival form proceeds from nonmodal to modal, whereas the reverse progression holds for the Russian infinitive. Based on this crucial observation, a model of acquisition is proposed which allows for a flexible and systematic relationship between morphological forms and their respective interpretational biases dependent on their developmental context. As for early child language, I argue that children entertain only two temporal parameters: one parameter is fixed to the here&now point in time, and a second parameter relates to the time talked about, the topic time; this latter time overlaps the situation time as long as no empirical evidence exists to support the emergence of a proper distinction between tense and aspect.

  • Gubian, M., Torreira, F., Strik, H., & Boves, L. (2009). Functional data analysis as a tool for analyzing speech dynamics: A case study on the French word c'était. In Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009) (pp. 2199-2202).

    Abstract

    In this paper we introduce Functional Data Analysis (FDA) as a tool for analyzing dynamic transitions in speech signals. FDA makes it possible to perform statistical analyses of sets of mathematical functions in the same way as classical multivariate analysis treats scalar measurement data. We illustrate the use of FDA with a reduction phenomenon affecting the French word c'était /setε/ 'it was', which can be reduced to [stε] in conversational speech. FDA reveals that the dynamics of the transition from [s] to [t] in fully reduced cases may still be different from the dynamics of [s] - [t] transitions in underlying /st/ clusters such as in the word stage.
  • Le Guen, O. (2009). Geocentric gestural deixis among Yucatecan Maya (Quintana Roo, México). In 18th IACCP Book of Selected Congress Papers (pp. 123-136). Athens, Greece: Pedio Books Publishing.
  • Le Guen, O. (2005). Geografía de lo sagrado entre los Mayas Yucatecos de Quintana Roo: configuración del espacio y su aprendizaje entre los niños. Ketzalcalli, 2005(1), 54-68.
  • Le Guen, O. (2009). The ethnography of emotions: A field worker's guide. In A. Majid (Ed.), Field manual volume 12 (pp. 31-34). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.446076.

    Abstract

    The goal of this task is to investigate cross-cultural emotion categories in language and thought. This entry is designed to provide researchers with some guidelines to describe the emotional repertoire of a community from an emic perspective. The first objective is to offer ethnographic tools and a questionnaire in order to understand the semantics of emotional terms and the local conception of emotions. The second objective is to identify the local display rules of emotions in communicative interactions.
  • Guerrero, L., & Van Valin Jr., R. D. (2004). Yaqui and the analysis of primary object languages. International Journal of American Linguistics, 70(3), 290-319. doi:10.1086/425603.

    Abstract

    The central topic of this study is to investigate three- and four-place predicates in Yaqui, which are characterized by having multiple object arguments. As with other Southern Uto-Aztecan languages, it has been said that Yaqui follows the Primary/Secondary Object pattern (Dryer 1986). Actually, Yaqui presents three patterns: verbs like nenka ‘sell’ follow the direct–indirect object pattern, verbs like miika ‘give’ follow the primary object pattern, and verbs like chijakta ‘sprinkle’ follow the locative alternation pattern; the primary object pattern is the exclusive one found with derived verbs. This paper shows that the contrast between direct object and primary object languages is not absolute but rather one of degree, and hence two “object” selection principles are needed to explain this mixed system. The two principles are not limited to Yaqui but are found in other languages as well, including English.
  • Gullberg, M., & Kita, S. (2009). Attention to speech-accompanying gestures: Eye movements and information uptake. Journal of Nonverbal Behavior, 33(4), 251-277. doi:10.1007/s10919-009-0073-2.

    Abstract

    There is growing evidence that addressees in interaction integrate the semantic information conveyed by speakers’ gestures. Little is known, however, about whether and how addressees’ attention to gestures and the integration of gestural information can be modulated. This study examines the influence of a social factor (speakers’ gaze to their own gestures), and two physical factors (the gesture’s location in gesture space and gestural holds) on addressees’ overt visual attention to gestures (direct fixations of gestures) and their uptake of gestural information. It also examines the relationship between gaze and uptake. The results indicate that addressees’ overt visual attention to gestures is affected both by speakers’ gaze and holds but for different reasons, whereas location in space plays no role. Addressees’ uptake of gesture information is only influenced by speakers’ gaze. There is little evidence of a direct relationship between addressees’ direct fixations of gestures and their uptake.
  • Gullberg, M. (2004). [Review of the book Pointing: Where language, culture and cognition meet ed. by Sotaro Kita]. Gesture, 4(2), 235-248. doi:10.1075/gest.4.2.08gul.
  • Gullberg, M. (2005). L'expression orale et gestuelle de la cohésion dans le discours de locuteurs langue 2 débutants. AILE, 23, 153-172.
  • Gullberg, M. (1998). Gesture as a communication strategy in second language discourse: A study of learners of French and Swedish. Lund: Lund University Press.

    Abstract

    Gestures are often regarded as the most typical compensatory device used by language learners in communicative trouble. Yet gestural solutions to communicative problems have rarely been studied within any theory of second language use. The work presented in this volume aims to account for second language learners’ strategic use of speech-associated gestures by combining a process-oriented framework for communication strategies with a cognitive theory of gesture. Two empirical studies are presented. The production study investigates Swedish learners of French and French learners of Swedish and their use of strategic gestures. The results, which are based on analyses of both individual and group behaviour, contradict popular opinion as well as theoretical assumptions from both fields. Gestures are not primarily used to replace speech, nor are they chiefly mimetic. Instead, learners use gestures with speech, and although they do exploit mimetic gestures to solve lexical problems, they also use more abstract gestures to handle discourse-related difficulties and metalinguistic commentary. The influence of factors such as proficiency, task, culture, and strategic competence on gesture use is discussed, and the oral and gestural strategic modes are compared. In the evaluation study, native speakers’ assessments of learners’ gestures, and the potential effect of gestures on evaluations of proficiency are analysed and discussed in terms of individual communicative style. Compensatory gestures function at multiple communicative levels. This has implications for theories of communication strategies, and an expansion of the existing frameworks is discussed taking both cognitive and interactive aspects into account.
  • Gullberg, M. (2009). Gestures and the development of semantic representations in first and second language acquisition. Acquisition et Interaction en Langue Etrangère... Languages, Interaction, and Acquisition (former AILE), 1, 117-139.

    Abstract

    This paper argues that speech-associated gestures can usefully inform studies exploring development of meaning in first and second language acquisition. The example domain is caused motion or placement meaning (putting a cup on a table) where acquisition problems have been observed and where adult native gesture use reflects crosslinguistically different placement verb semantics. Against this background, the paper summarises three studies examining the development of semantic representations in Dutch children acquiring Dutch, and adult learners acquiring Dutch and French placement verbs. Overall, gestures change systematically with semantic development both in children and adults and (1) reveal what semantic elements are included in current semantic representations, whether target-like or not, and (2) highlight developmental shifts in those representations. There is little evidence that gestures chiefly act as a support channel. Instead, the data support the theoretical notion that speech and gesture form an integrated system, opening new possibilities for studying the processes of acquisition.
  • Gullberg, M. (2009). Reconstructing verb meaning in a second language: How English speakers of L2 Dutch talk and gesture about placement. Annual Review of Cognitive Linguistics, 7, 221-245. doi:10.1075/arcl.7.09gul.

    Abstract

    This study examines to what extent English speakers of L2 Dutch reconstruct the meanings of placement verbs when moving from a general L1 verb of caused motion (put) to two specific caused posture verbs (zetten/leggen ‘set/lay’) in the L2 and whether the existence of low-frequency cognate forms in the L1 (set/lay) alleviates the reconstruction problem. Evidence from speech and gesture indicates that English speakers have difficulties with the specific verbs in L2 Dutch, initially looking for means to express general caused motion in L1-like fashion through over-generalisation. The gesture data further show that targetlike forms are often used to convey L1-like meaning. However, the differentiated use of zetten for vertical placement and dummy verbs (gaan ‘go’ and doen ‘do’) and intransitive posture verbs (zitten/staan/liggen ‘sit, stand, lie’) for horizontal placement, and a positive correlation between appropriate verb use and target-like gesturing suggest a beginning sensitivity to the semantic parameters of the L2 verbs and possible reconstruction.
  • Gullberg, M., Indefrey, P., & Muysken, P. (2009). Research techniques for the study of code-switching. In B. E. Bullock, & J. A. Toribio (Eds.), The Cambridge handbook of linguistic code-switching (pp. 21-39). Cambridge: Cambridge University Press.

    Abstract

    The aim of this chapter is to provide researchers with a tool kit of semi-experimental and experimental techniques for studying code-switching. It presents an overview of the current off-line and on-line research techniques, ranging from analyses of published bilingual texts of spontaneous conversations, to tightly controlled experiments. A multi-task approach used for studying code-switched sentence production in Papiamento-Dutch bilinguals is also exemplified.
  • Gullberg, M. (2009). Why gestures are relevant to the bilingual mental lexicon. In A. Pavlenko (Ed.), The bilingual mental lexicon: Interdisciplinary approaches (pp. 161-184). Clevedon: Multilingual Matters.

    Abstract

    Gestures, the symbolic movements speakers perform while they speak, are systematically related to speech and language in non-trivial ways. This chapter presents an overview of what gestures can and cannot tell us about the monolingual and the bilingual mental lexicon. Gesture analysis opens up a broader view of the mental lexicon, targeting the interface between conceptual, semantic and syntactic aspects of event construal, and offers new possibilities for examining how languages co-exist and interact in bilinguals beyond the level of surface forms. The first section of this chapter gives a brief introduction to gesture studies and outlines the current views on the relationship between gesture, speech, and language. The second section targets the key questions for the study of the monolingual and bilingual lexicon, and illustrates the methods employed for addressing these questions. It further exemplifies systematic cross-linguistic patterns in gestural behaviour in monolingual and bilingual contexts. The final section discusses some implications of an expanded view of the multilingual lexicon that includes gesture, and outlines directions for future inquiry.

  • De Haan, E., & Hagoort, P. (2004). Het brein in beeld. In B. Deelman, P. Eling, E. De Haan, & E. Van Zomeren (Eds.), Klinische neuropsychologie (pp. 82-98). Amsterdam: Boom.
  • Hagoort, P. (2005). On Broca, brain, and binding: A new framework. Trends in Cognitive Sciences, 9(9), 416-423. doi:10.1016/j.tics.2005.07.004.

    Abstract

    In speaking and comprehending language, word information is retrieved from memory and combined into larger units (unification). Unification operations take place in parallel at the semantic, syntactic and phonological levels of processing. This article proposes a new framework that connects psycholinguistic models to a neurobiological account of language. According to this proposal the left inferior frontal gyrus (LIFG) plays an important role in unification. Research in other domains of cognition indicates that left prefrontal cortex has the necessary neurobiological characteristics for its involvement in the unification for language. I offer here a psycholinguistic perspective on the nature of language unification and the role of LIFG.
  • Hagoort, P. (2009). The fractionation of spoken language understanding by measuring electrical and magnetic brain signals. In B. C. J. Moore, L. K. Tyler, & W. Marslen-Wilson (Eds.), The perception of speech: From sound to meaning (pp. 223-248). New York: Oxford University Press.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter. Psychologie, 17, 48-49.
  • Hagoort, P. (2005). De talige aap. Linguaan, 26-35.
  • Hagoort, P. (2004). Er is geen behoefte aan trompetten als gordijnen. In H. Procee, H. Meijer, P. Timmerman, & R. Tuinsma (Eds.), Bij die wereld wil ik horen! Zesendertig columns en drie essays over de vorming tot academicus (pp. 78-80). Amsterdam: Boom.
  • Hagoort, P. (2005). Breintaal. In S. Knols, & D. Redeker (Eds.), NWO-Spinozapremies 2005 (pp. 21-34). Den Haag: NWO.
  • Hagoort, P. (2005). Broca's complex as the unification space for language. In A. Cutler (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 157-173). Mahwah, NJ: Erlbaum.
  • Hagoort, P., Hald, L. A., Bastiaansen, M. C. M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304(5669), 438-441. doi:10.1126/science.1095455.

    Abstract

    Although the sentences that we hear or read have meaning, this does not necessarily mean that they are also true. Relatively little is known about the critical brain structures for, and the relative time course of, establishing the meaning and truth of linguistic expressions. We present electroencephalogram data that show the rapid parallel integration of both semantic and world knowledge during the interpretation of a sentence. Data from functional magnetic resonance imaging revealed that the left inferior prefrontal cortex is involved in the integration of both meaning and world knowledge. Finally, oscillatory brain responses indicate that the brain keeps a record of what makes a sentence hard to interpret.
  • Hagoort, P. (2004). Het zwarte gat tussen brein en bewustzijn. In N. Korteweg (Ed.), De oorsprong: Over het ontstaan van het leven en alles eromheen (pp. 107-124). Amsterdam: Boom.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk. Neuropraxis, 6, 204-205.
  • Hagoort, P. (2009). Reflections on the neurobiology of syntax. In D. Bickerton, & E. Szathmáry (Eds.), Biological foundations and origin of syntax (pp. 279-296). Cambridge, MA: MIT Press.

    Abstract

    This contribution focuses on the neural infrastructure for parsing and syntactic encoding. From an anatomical point of view, it is argued that Broca's area is an ill-conceived notion. Functionally, Broca's area and adjacent cortex (together Broca's complex) are relevant for language, but not exclusively for this domain of cognition. Its role can be characterized as providing the necessary infrastructure for unification (syntactic and semantic). A general proposal, but with the required level of computational detail, is discussed to account for the distribution of labor between different components of the language network in the brain. Arguments are provided for the immediacy principle, which denies a privileged status for syntax in sentence processing. The temporal profile of event-related brain potential (ERP) is suggested to require predictive processing. Finally, since, next to speed, diversity is a hallmark of human languages, the language readiness of the brain might not depend on a universal, dedicated neural machinery for syntax, but rather on a shaping of the neural infrastructure of more general cognitive systems (e.g., memory, unification) in a direction that made it optimally suited for the purpose of communication through language.
  • Hagoort, P., Baggio, G., & Willems, R. M. (2009). Semantic unification. In M. S. Gazzaniga (Ed.), The cognitive neurosciences, 4th ed. (pp. 819-836). Cambridge, MA: MIT Press.

    Abstract

    Language and communication are about the exchange of meaning. A key feature of understanding and producing language is the construction of complex meaning from more elementary semantic building blocks. The functional characteristics of this semantic unification process are revealed by studies using event related brain potentials. These studies have found that word meaning is assembled into compound meaning in not more than 500 ms. World knowledge, information about the speaker, co-occurring visual input and discourse all have an immediate impact on semantic unification, and trigger similar electrophysiological responses as sentence-internal semantic information. Neuroimaging studies show that a network of brain areas, including the left inferior frontal gyrus, the left superior/middle temporal cortex, the left inferior parietal cortex and, to a lesser extent their right hemisphere homologues are recruited to perform semantic unification.
  • Hagoort, P. (2009). Taalontwikkeling: Meer dan woorden alleen. In M. Evenblij (Ed.), Brein in beeld: Beeldvorming bij hersenonderzoek (pp. 53-57). Den Haag: Stichting Bio-Wetenschappen en Maatschappij.
  • Hagoort, P. (1998). The shadows of lexical meaning in patients with semantic impairments. In B. Stemmer, & H. Whitaker (Eds.), Handbook of neurolinguistics (pp. 235-248). New York: Academic Press.
  • Hagoort, P., & Levelt, W. J. M. (2009). The speaking brain. Science, 326(5951), 372-373. doi:10.1126/science.1181675.

    Abstract

    How does intention to speak become the action of speaking? It involves the generation of a preverbal message that is tailored to the requirements of a particular language, and through a series of steps, the message is transformed into a linear sequence of speech sounds (1, 2). These steps include retrieving different kinds of information from memory (semantic, syntactic, and phonological), and combining them into larger structures, a process called unification. Despite general agreement about the steps that connect intention to articulation, there is no consensus about their temporal profile or the role of feedback from later steps (3, 4). In addition, since the discovery by the French physician Pierre Paul Broca (in 1865) of the role of the left inferior frontal cortex in speaking, relatively little progress has been made in understanding the neural infrastructure that supports speech production (5). One reason is that the characteristics of natural language are uniquely human, and thus the neurobiology of language lacks an adequate animal model. But on page 445 of this issue, Sahin et al. (6) demonstrate, by recording neuronal activity in the human brain, that different kinds of linguistic information are indeed sequentially processed within Broca's area.
  • Hammond, J. (2009). The grammar of nouns and verbs in Whitesands, an oceanic language of Southern Vanuatu. Master Thesis, University of Sydney, Sydney.

    Abstract

    Whitesands is an under-described language of southern Vanuatu, and this thesis presents Whitesands-specific data based on primary in-situ field research. The thesis addresses the distinction of noun and verb word classes in the language. It claims that current linguistic syntax theory cannot account for the argument structure of canonical object-denoting roots. It is shown that there are distinct lexical noun and verb classes in Whitesands but this is only a weak dichotomy. Stronger is the NP and VP distinction, and this is achieved by employing a new theoretical approach that proposes functional categories and their selection of complements as crucial tests of distinction. This approach contrasts with previous analyses of parts of speech in Oceanic languages and cross-linguistically. It ultimately explains many of the syntactic phenomena seen in the language family, including the above argument assignment dilemma, the alienable possession of nouns with classifiers and also the nominalisation processes.
  • Hanulikova, A., & Davidson, D. (2009). Inflectional entropy in Slovak. In J. Levicka, & R. Garabik (Eds.), Slovko 2009, NLP, Corpus Linguistics, Corpus Based Grammar Research (pp. 145-151). Bratislava, Slovakia: Slovak Academy of Sciences.
  • Hanulikova, A. (2009). Lexical segmentation in Slovak and German. Berlin: Akademie Verlag.

    Abstract

    All humans are equipped with perceptual and articulatory mechanisms which (in healthy humans) allow them to learn to perceive and produce speech. One basic question in psycholinguistics is whether humans share similar underlying processing mechanisms for all languages, or whether these are fundamentally different due to the diversity of languages and speakers. This book provides a cross-linguistic examination of speech comprehension by investigating word recognition in users of different languages. The focus is on how listeners segment the quasi-continuous stream of sounds that they hear into a sequence of discrete words, and how a universal segmentation principle, the Possible Word Constraint, applies in the recognition of Slovak and German.
  • Hanulikova, A., & Weber, A. (2009). Experience with foreign accent influences non-native (L2) word recognition: The case of th-substitutions [Abstract]. Journal of the Acoustical Society of America, 125(4), 2762-2762.
  • Hanulikova, A. (2009). The role of syllabification in the lexical segmentation of German and Slovak. In S. Fuchs, H. Loevenbruck, D. Pape, & P. Perrier (Eds.), Some aspects of speech and the brain (pp. 331-361). Frankfurt am Main: Peter Lang.

    Abstract

    Two experiments were carried out to examine the syllable affiliation of intervocalic consonant clusters and their effects on speech segmentation in two different languages. In a syllable reversal task, Slovak and German speakers divided bisyllabic non-words that were presented aurally into two parts, starting with the second syllable. Following the maximal onset principle, intervocalic consonants should be maximally assigned to the onset of the following syllable in conformity with language-specific restrictions, e.g., /du.gru/, /zu.kro:/ (dot indicates a syllable boundary). According to German phonology, syllables require branching rhymes (hence, /zuk.ro:/). In Slovak, both /du.gru/ and /dug.ru/ are possible syllabifications. Experiment 1 showed that German speakers more often closed the first syllable (/zuk.ro:/), following the requirement for a branching rhyme. In Experiment 2, Slovak speakers showed no clear preference; the first syllable was either closed (/dug.ru/) or open (/du.gru/). Correlation analyses on previously conducted word-spotting studies (Hanulíková, in press, 2008) suggest that speech segmentation is unaffected by these syllabification preferences.
  • Harbusch, K., & Kempen, G. (2009). Clausal coordinate ellipsis and its varieties in spoken German: A study with the TüBa-D/S Treebank of the VERBMOBIL corpus. In M. Passarotti, A. Przepiórkowski, S. Raynaud, & F. Van Eynde (Eds.), Proceedings of the Eighth International Workshop on Treebanks and Linguistic Theories (pp. 83-94). Milano: EDUCatt.
  • Harbusch, K., & Kempen, G. (2009). Generating clausal coordinate ellipsis multilingually: A uniform approach based on postediting. In 12th European Workshop on Natural Language Generation: Proceedings of the Workshop (pp. 138-145). The Association for Computational Linguistics.

    Abstract

    Present-day sentence generators are often incapable of producing a wide variety of well-formed elliptical versions of coordinated clauses, in particular, of combined elliptical phenomena (Gapping, Forward and Backward Conjunction Reduction, etc.). The applicability of the various types of clausal coordinate ellipsis (CCE) presupposes detailed comparisons of the syntactic properties of the coordinated clauses. These nonlocal comparisons argue against approaches based on local rules that treat CCE structures as special cases of clausal coordination. We advocate an alternative approach where CCE rules take the form of postediting rules applicable to nonelliptical structures. The advantage is not only a higher level of modularity but also applicability to languages belonging to different language families. We describe a language-neutral module (called Elleipo; implemented in JAVA) that generates as output all major CCE versions of coordinated clauses. Elleipo takes as input linearly ordered nonelliptical coordinated clauses annotated with lexical identity and coreferentiality relationships between words and word groups in the conjuncts. We demonstrate the feasibility of a single set of postediting rules that attains multilingual coverage.
  • Haun, D. B. M., Allen, G. L., & Wedell, D. H. (2005). Bias in spatial memory: A categorical endorsement. Acta Psychologica, 118(1-2), 149-170. doi:10.1016/j.actpsy.2004.10.011.
  • Haun, D. B. M., & Call, J. (2009). Great apes’ capacities to recognize relational similarity. Cognition, 110, 147-159. doi:10.1016/j.cognition.2008.10.012.

    Abstract

    Recognizing relational similarity relies on the ability to understand that defining object properties might not lie in the objects individually, but in the relations of the properties of various objects to each other. This aptitude is highly relevant for many important human skills such as language, reasoning, categorization and understanding analogy and metaphor. In the current study, we investigated the ability to recognize relational similarities by testing five species of great apes, including human children, in a spatial task. We found that all species performed better if related elements were connected by logico-causal as opposed to non-causal relations. Further, we find that only children above 4 years of age, bonobos, and chimpanzees, but not younger children, gorillas, or orangutans, display some mastery of reasoning by non-causal relational similarity. We conclude that recognizing relational similarity is not in its entirety unique to the human species. The lack of a capability for language does not prohibit recognition of simple relational similarities. The data are discussed in the light of the phylogenetic tree of relatedness of the great apes.
  • Haun, D. B. M., & Rapold, C. J. (2009). Variation in memory for body movements across cultures. Current Biology, 19(23), R1068-R1069. doi:10.1016/j.cub.2009.10.041.

    Abstract

    There has been considerable controversy over the existence of cognitive differences across human cultures: some claim that human cognition is essentially universal [1,2], others that it reflects cultural specificities [3,4]. One domain of interest has been spatial cognition [5,6]. Despite the global universality of physical space, cultures vary as to how space is coded in their language. Some, for example, do not use egocentric ‘left, right, front, back’ constructions to code spatial relations, instead using allocentric notions like ‘north, south, east, west’ [4,6]: “The spoon is north of the bowl!” Whether or not spatial cognition also varies across cultures remains a contested question [7,8]. Here we investigate whether memory for movements of one's own body differs between cultures with contrastive strategies for coding spatial relations. Our results show that the ways in which we memorize movements of our own body differ in line with culture-specific preferences for how to conceive of spatial relations.
  • Havik, E., Roberts, L., Van Hout, R., Schreuder, R., & Haverkort, M. (2009). Processing subject-object ambiguities in L2 Dutch: A self-paced reading study with German L2 learners of Dutch. Language Learning, 59(1), 73-112. doi:10.1111/j.1467-9922.2009.00501.x.

    Abstract

    The results of two self-paced reading experiments are reported, which investigated the on-line processing of subject-object ambiguities in Dutch relative clause constructions like Dat is de vrouw die de meisjes heeft/hebben gezien by German advanced second language (L2) learners of Dutch. Native speakers of both Dutch and German have been shown to have a preference for a subject versus an object reading of such temporarily ambiguous sentences, and so we provided an ideal opportunity for the transfer of first language (L1) processing preferences to take place. We also investigated whether the participants' working memory span would affect their processing of the experimental items. The results suggest that processing decisions may be affected by working memory when task demands are high and in this case, the high working memory span learners patterned like the native speakers of lower working memory. However, when reading for comprehension alone, and when only structural information was available to guide parsing decisions, working memory span had no effect on the L2 learners' on-line processing, and this differed from the native speakers' even though the L1 and the L2 are highly comparable.
  • Hay, J. B., & Baayen, R. H. (2005). Shifting paradigms: Gradient structure in morphology. Trends in Cognitive Sciences, 9(7), 342-348. doi:10.1016/j.tics.2005.04.002.

    Abstract

    Morphology is the study of the internal structure of words. A vigorous ongoing debate surrounds the question of how such internal structure is best accounted for: by means of lexical entries and deterministic symbolic rules, or by means of probabilistic subsymbolic networks implicitly encoding structural similarities in connection weights. In this review, we separate the question of subsymbolic versus symbolic implementation from the question of deterministic versus probabilistic structure. We outline a growing body of evidence, mostly external to the above debate, indicating that morphological structure is indeed intrinsically graded. By allowing probability into the grammar, progress can be made towards solving some long-standing puzzles in morphological theory.
  • Hayano, K. (2004). Kaiwa ni okeru ninshikiteki ken’i no koushou: Shuujoshi yo, ne, odoroki hyouji no bunpu to kinou [Negotiation of Epistemic Authority in Conversation: on the use of final particles yo, ne and surprise markers]. Studies in Pragmatics, 6, 17-28.
  • Hendriks, L., Witteman, M. J., Frietman, L. C. G., Westerhof, G., Van Baaren, R. B., Engels, R. C. M. E., & Dijksterhuis, A. J. (2009). Imitation can reduce malnutrition in residents in assisted living facilities [Letter to the editor]. Journal of the American Geriatrics Society, 57(1), 187-188. doi:10.1111/j.1532-5415.2009.02074.x.
  • Holler, J., Shovelton, H., & Beattie, G. (2009). Do iconic gestures really contribute to the semantic information communicated in face-to-face interaction? Journal of Nonverbal Behavior, 33, 73-88.
  • Holler, J., & Wilkin, K. (2009). Communicating common ground: how mutually shared knowledge influences the representation of semantic information in speech and gesture in a narrative task. Language and Cognitive Processes, 24, 267-289.
  • Holler, J. (2004). Semantic and pragmatic aspects of representational gestures: Towards a unified model of communication in talk. PhD Thesis, University of Manchester, Manchester.
  • Holler, J., & Beattie, G. (2004). The interaction of iconic gesture and speech. In A. Camurri, & G. Volpe (Eds.), Lecture Notes in Computer Science, 5th International Gesture Workshop, Genova, Italy, 2003; Selected Revised Papers (pp. 63-69). Heidelberg: Springer Verlag.
  • De Hoop, H., & Narasimhan, B. (2005). Differential case-marking in Hindi. In M. Amberber, & H. de Hoop (Eds.), Competition and variation in natural languages: The case for case (pp. 321-345). Amsterdam: Elsevier.
  • Horemans, I., & Schiller, N. O. (2004). Form-priming effects in nonword naming. Brain and Language, 90(1-3), 465-469. doi:10.1016/S0093-934X(03)00457-7.

    Abstract

    Form-priming effects from sublexical (syllabic or segmental) primes in masked priming can be accounted for in two ways. One is the sublexical pre-activation view according to which segments are pre-activated by the prime, and at the time the form-related target is to be produced, retrieval/assembly of those pre-activated segments is faster compared to an unrelated situation. However, it has also been argued that form-priming effects from sublexical primes might be due to lexical pre-activation. When the sublexical prime is presented, it activates all form-related words (i.e., cohorts) in the lexicon, necessarily including the form-related target, which—as a consequence—is produced faster than in the unrelated case. Note, however, that this lexical pre-activation account makes previous pre-lexical activation of segments necessary. This study reports a nonword naming experiment to investigate whether or not sublexical pre-activation is involved in masked form priming with sublexical primes. The results demonstrated a priming effect suggesting a nonlexical effect. However, this does not exclude an additional lexical component in form priming.
  • Hoymann, G. (2004). [Review of the book Botswana: The future of the minority languages ed. by Herman M. Batibo and Birgit Smieja]. Journal of African Languages and Linguistics, 25(2), 171-173. doi:10.1515/jall.2004.25.2.171.
  • Huettig, F., & Altmann, G. T. M. (2004). The online processing of ambiguous and unambiguous words in context: Evidence from head-mounted eye-tracking. In M. Carreiras, & C. Clifton (Eds.), The on-line study of sentence comprehension: Eyetracking, ERP and beyond (pp. 187-207). New York: Psychology Press.
  • Huettig, F., & Altmann, G. T. M. (2005). Word meaning and the control of eye fixation: Semantic competitor effects and the visual world paradigm. Cognition, 96(1), B23-B32. doi:10.1016/j.cognition.2004.10.003.

    Abstract

    When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84–107; Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632–1634]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word ‘piano’ when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c), both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word ‘piano’ unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.
  • Hulten, A., Vihla, M., Laine, M., & Salmelin, R. (2009). Accessing newly learned names and meanings in the native language. Human Brain Mapping, 30, 979-989. doi:10.1002/hbm.20561.

    Abstract

    Ten healthy adults encountered pictures of unfamiliar archaic tools and successfully learned either their name, verbal definition of their usage, or both. Neural representation of the newly acquired information was probed with magnetoencephalography in an overt picture-naming task before and after learning, and in two categorization tasks after learning. Within 400 ms, activation proceeded from occipital through parietal to left temporal cortex, inferior frontal cortex (naming) and right temporal cortex (categorization). Comparison of naming of newly learned versus familiar pictures indicated that acquisition and maintenance of word forms are supported by the same neural network. Explicit access to newly learned phonology when such information was known strongly enhanced left temporal activation. By contrast, access to newly learned semantics had no comparable, direct neural effects. Both the behavioral learning pattern and neurophysiological results point to fundamentally different implementation of and access to phonological versus semantic features in processing pictured objects.
  • Hurford, J. R., & Dediu, D. (2009). Diversity in language, genes and the language faculty. In R. Botha, & C. Knight (Eds.), The cradle of language (pp. 167-188). Oxford: Oxford University Press.
  • Indefrey, P., & Cutler, A. (2004). Prelexical and lexical processing in listening. In M. Gazzaniga (Ed.), The cognitive neurosciences III (pp. 759-774). Cambridge, MA: MIT Press.

    Abstract

    This paper presents a meta-analysis of hemodynamic studies on passive auditory language processing. We assess the overlap of hemodynamic activation areas and activation maxima reported in experiments involving the presentation of sentences, words, pseudowords, or sublexical or non-linguistic auditory stimuli. Areas that have been reliably replicated are identified. The results of the meta-analysis are compared to electrophysiological, magnetencephalic (MEG), and clinical findings. It is concluded that auditory language input is processed in a left posterior frontal and bilateral temporal cortical network. Within this network, no processing level is related to a single cortical area. The temporal lobes seem to differ with respect to their involvement in post-lexical processing, in that the left temporal lobe has greater involvement than the right, and also in the degree of anatomical specialization for phonological, lexical, and sentence-level processing, with greater overlap on the right contrasting with a higher degree of differentiation on the left.
  • Indefrey, P., & Levelt, W. J. M. (2004). The spatial and temporal signatures of word production components. Cognition, 92(1-2), 101-144. doi:10.1016/j.cognition.2002.06.001.

    Abstract

    This paper presents the results of a comprehensive meta-analysis of the relevant imaging literature on word production (82 experiments). In addition to the spatial overlap of activated regions, we also analyzed the available data on the time course of activations. The analysis specified regions and time windows of activation for the core processes of word production: lexical selection, phonological code retrieval, syllabification, and phonetic/articulatory preparation. A comparison of the word production results with studies on auditory word/non-word perception and reading showed that the time course of activations in word production is, on the whole, compatible with the temporal constraints that perception processes impose on the production processes they affect in picture/word interference paradigms.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken. Neuropraxis, 2(6), 230-237.
  • Indefrey, P. (2004). Hirnaktivierungen bei syntaktischer Sprachverarbeitung: Eine Meta-Analyse. In H. Müller, & G. Rickheit (Eds.), Neurokognition der Sprache (pp. 31-50). Tübingen: Stauffenburg.
  • Indefrey, P., Hellwig, F. M., Herzog, H., Seitz, R. J., & Hagoort, P. (2004). Neural responses to the production and comprehension of syntax in identical utterances. Brain and Language, 89(2), 312-319. doi:10.1016/S0093-934X(03)00352-3.

    Abstract

    Following up on an earlier positron emission tomography (PET) experiment (Indefrey et al., 2001), we used a scene description paradigm to investigate whether a posterior inferior frontal region subserving syntactic encoding for speaking is also involved in syntactic parsing during listening. In the language production part of the experiment, subjects described visually presented scenes using either sentences, sequences of noun phrases, or sequences of syntactically unrelated words. In the language comprehension part of the experiment, subjects were auditorily presented with the same kinds of utterances and judged whether they matched the visual scenes. We were able to replicate the previous finding of a region in caudal Broca's area that is sensitive to the complexity of syntactic encoding in language production. In language comprehension, no hemodynamic activation differences due to syntactic complexity were found. Given that correct performance in the judgment task did not require syntactic processing of the auditory stimuli, the results suggest that the degree to which listeners recruit syntactic processing resources in language comprehension may be a function of the syntactic demands of the task or the stimulus material.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P., & Davidson, D. J. (2009). Second language acquisition. In L. R. Squire (Ed.), Encyclopedia of neuroscience (pp. 517-523). London: Academic Press.

    Abstract

    This article reviews neurocognitive evidence on second language (L2) processing at speech sound, word, and sentence levels. Hemodynamic (functional magnetic resonance imaging and positron emission tomography) data suggest that L2s are implemented in the same brain structures as the native language but with quantitative differences in the strength of activation that are modulated by age of L2 acquisition and L2 proficiency. Electrophysiological data show a more complex pattern of first and L2 similarities and differences, providing some, although not conclusive, evidence for qualitative differences between L1 and L2 syntactic processing.
  • Isaac, A., Wang, S., Van der Meij, L., Schlobach, S., Zinn, C., & Matthezing, H. (2009). Evaluating thesaurus alignments for semantic interoperability in the library domain. IEEE Intelligent Systems, 24(2), 76-86.

    Abstract

    Thesaurus alignments play an important role in realising efficient access to heterogeneous Cultural Heritage data. Current technology, however, provides only limited value for such access as it fails to bridge the gap between theoretical study and user needs that stem from practical application requirements. In this paper, we explore common real-world problems of a library, and identify solutions that would greatly benefit from a more application embedded study, development, and evaluation of matching technology.
  • Ischebeck, A., Indefrey, P., Usui, N., Nose, I., Hellwig, F. M., & Taira, M. (2004). Reading in a regular orthography: An fMRI study investigating the role of visual familiarity. Journal of Cognitive Neuroscience, 16(5), 727-741. doi:10.1162/089892904970708.

    Abstract

    In order to separate the cognitive processes associated with phonological encoding and the use of a visual word form lexicon in reading, it is desirable to compare the processing of words presented in a visually familiar form with words in a visually unfamiliar form. Japanese Kana orthography offers this possibility. Two phonologically equivalent but visually dissimilar syllabaries allow the writing of, for example, foreign loanwords in two ways, only one of which is visually familiar. Familiarly written words, unfamiliarly written words, and pseudowords were presented in both Kana syllabaries (yielding six conditions in total) to participants during an fMRI measurement with a silent articulation task (Experiment 1) and a phonological lexical decision task (Experiment 2) using an event-related design. Consistent over two experimental tasks, the three different stimulus types (familiar, unfamiliar, and pseudoword) were found to activate selectively different brain regions previously associated with phonological encoding and word retrieval or meaning. Compatible with the predictions of the dual-route model for reading, pseudowords and visually unfamiliar words, which have to be read using phonological assembly, caused an increase in brain activity in left inferior frontal regions (BA 44/47), as compared to visually familiar words. Visually familiar and unfamiliar words were found to activate a range of areas associated with lexico-semantic processing more strongly than pseudowords, such as the left and right temporo-parietal region (BA 39/40), a region in the left middle/inferior temporal gyrus (BA 20/21), and the posterior cingulate (BA 31).
  • Jaeger, T. F., & Norcliffe, E. (2009). The cross-linguistic study of sentence production. Language and Linguistics Compass, 3, 866-887. doi:10.1111/j.1749-818x.2009.00147.x.

    Abstract

    The mechanisms underlying language production are often assumed to be universal, and hence not contingent on a speaker’s language. This assumption is problematic for at least two reasons. Given the typological diversity of the world’s languages, only a small subset of languages has actually been studied psycholinguistically. And, in some cases, these investigations have returned results that at least superficially raise doubt about the assumption of universal production mechanisms. The goal of this paper is to illustrate the need for more psycholinguistic work on a typologically more diverse set of languages. We summarize cross-linguistic work on sentence production (specifically: grammatical encoding), focusing on examples where such work has improved our theoretical understanding beyond what studies on English alone could have achieved. But cross-linguistic research has much to offer beyond the testing of existing hypotheses: it can guide the development of theories by revealing the full extent of the human ability to produce language structures. We discuss the potential for interdisciplinary collaborations, and close with a remark on the impact of language endangerment on psycholinguistic research on understudied languages.
  • Janse, E., & Klitsch, J. (2004). Auditieve perceptie bij gezonde sprekers en bij sprekers met verworven taalstoornissen. Afasiologie, 26(1), 2-6.
  • Janse, E. (2009). Hearing and cognitive measures predict elderly listeners' difficulty ignoring competing speech. In M. Boone (Ed.), Proceedings of the International Conference on Acoustics (pp. 1532-1535).
  • Janse, E. (2005). Lexical inhibition effects in time-compressed speech. In Proceedings of the 9th European Conference on Speech Communication and Technology [Interspeech 2005] (pp. 1757-1760).
  • Janse, E. (2005). Neighbourhood density effects in auditory nonword processing in aphasia. Brain and Language, 95, 24-25. doi:10.1016/j.bandl.2005.07.027.
  • Janse, E. (2009). Neighbourhood density effects in auditory nonword processing in aphasic listeners. Clinical Linguistics and Phonetics, 23(3), 196-207. doi:10.1080/02699200802394989.

    Abstract

    This study investigates neighbourhood density effects on lexical decision performance (both accuracy and response times) of aphasic patients. Given earlier results on lexical activation and deactivation in Broca's and Wernicke's aphasia, the prediction was that smaller neighbourhood density effects would be found for Broca's aphasic patients, compared to age-matched non-brain-damaged control participants, whereas enlarged density effects were expected for Wernicke's aphasic patients. The results showed density effects for all three groups of listeners, and overall differences in performance between groups, but no significant interaction between neighbourhood density and listener group. Several factors are discussed to account for the present results.
  • Janse, E. (2009). Processing of fast speech by elderly listeners. Journal of the Acoustical Society of America, 125(4), 2361-2373. doi:10.1121/1.3082117.

    Abstract

    This study investigates the relative contributions of auditory and cognitive factors to the common finding that an increase in speech rate affects elderly listeners more than young listeners. Since a direct relation between non-auditory factors, such as age-related cognitive slowing, and fast speech performance has been difficult to demonstrate, the present study took an on-line, rather than off-line, approach and focused on processing time. Elderly and young listeners were presented with speech at two rates of time compression and were asked to detect pre-assigned target words as quickly as possible. A number of auditory and cognitive measures were entered in a statistical model as predictors of elderly participants’ fast speech performance: hearing acuity, an information processing rate measure, and two measures of reading speed. The results showed that hearing loss played a primary role in explaining elderly listeners’ increased difficulty with fast speech. However, non-auditory factors such as reading speed and the extent to which participants were affected by increased rate of presentation in a visual analog of the listening experiment also predicted fast speech performance differences among the elderly participants. These on-line results confirm that slowed information processing is indeed part of elderly listeners’ problem keeping up with fast language.
  • Janse, E., & Ernestus, M. (2009). Recognition of reduced speech and use of phonetic context in listeners with age-related hearing impairment [Abstract]. Journal of the Acoustical Society of America, 125(4), 2535.
  • Janse, E. (2004). Word perception in fast speech: Artificially time-compressed vs. naturally produced fast speech. Speech Communication, 42, 155-173. doi:10.1016/j.specom.2003.07.001.

    Abstract

    Natural fast speech differs from normal-rate speech with respect to its temporal pattern. Previous results showed that word intelligibility of heavily artificially time-compressed speech could not be improved by making its temporal pattern more similar to that of natural fast speech. This might have been due to the extrapolation of timing rules for natural fast speech to rates that are much faster than can be attained by human speakers. The present study investigates whether, at a speech rate that human speakers can attain, artificially time-compressed speech is easier to process if its timing pattern is similar to that of naturally produced fast speech. Our first experiment suggests, however, that word processing speed was slowed down, relative to linear compression. In a second experiment, word processing of artificially time-compressed speech was compared with processing of naturally produced fast speech. Even when naturally produced fast speech is perfectly intelligible, its less careful articulation, combined with the changed timing pattern, slows down processing, relative to linearly time-compressed speech. Furthermore, listeners preferred artificially time-compressed speech over naturally produced fast speech. These results suggest that linearly time-compressed speech has both a temporal and a segmental advantage over natural fast speech.
  • Jansma, B. M., & Schiller, N. O. (2004). Monitoring syllable boundaries during speech production. Brain and Language, 90(1-3), 311-317. doi:10.1016/S0093-934X(03)00443-7.

    Abstract

    This study investigated the encoding of syllable boundary information during speech production in Dutch. Based on Levelt's model of phonological encoding, we hypothesized segments and syllable boundaries to be encoded in an incremental way. In a self-monitoring experiment, decisions about the syllable affiliation (first or second syllable) of a pre-specified consonant, which was the third phoneme in a word, were required (e.g., ka.No 'canoe' vs. kaN.sel 'pulpit'; capital letters indicate pivotal consonants, dots mark syllable boundaries). First syllable responses were faster than second syllable responses, indicating the incremental nature of segmental encoding and syllabification during speech production planning. The results of the experiment are discussed in the context of Levelt's model of phonological encoding.
  • Janssen, D. P., Roelofs, A., & Levelt, W. J. M. (2004). Stem complexity and inflectional encoding in language production. Journal of Psycholinguistic Research, 33(5), 365-381. doi:10.1023/B:JOPR.0000039546.60121.a8.

    Abstract

    Three experiments are reported that examined whether stem complexity plays a role in inflecting polymorphemic words in language production. Experiment 1 showed that preparation effects for words with polymorphemic stems are larger when they are produced among words with constant inflectional structures compared to words with variable inflectional structures and simple stems. This replicates earlier findings for words with monomorphemic stems (Janssen et al., 2002). Experiments 2 and 3 showed that when inflectional structure is held constant, the preparation effects are equally large with simple and compound stems, and with compound and complex adjectival stems. These results indicate that inflectional encoding is blind to the complexity of the stem, which suggests that specific inflectional rather than generic morphological frames guide the generation of inflected forms in speaking words.
  • Janzen, G., & Van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. doi:10.1038/nn1257.

    Abstract

    As people find their way through their environment, objects at navigationally relevant locations can serve as crucial landmarks. The parahippocampal gyrus has previously been shown to be involved in object and scene recognition. In the present study, we investigated the neural representation of navigationally relevant locations. Healthy human adults viewed a route through a virtual museum with objects placed at intersections (decision points) or at simple turns (non-decision points). Event-related functional magnetic resonance imaging (fMRI) data were acquired during subsequent recognition of the objects in isolation. Neural activity in the parahippocampal gyrus reflected the navigational relevance of an object's location in the museum. Parahippocampal responses were selectively increased for objects that occurred at decision points, independent of attentional demands. This increase occurred for forgotten as well as remembered objects, showing implicit retrieval of navigational information. The automatic storage of relevant object location in the parahippocampal gyrus provides a part of the neural mechanism underlying successful navigation.
  • Janzen, G., & Hawlik, M. (2005). Orientierung im Raum: Befunde zu Entscheidungspunkten. Zeitschrift für Psychologie, 213, 179-186.
  • Janzen, G. (2005). Wie das menschliche Gehirn Orientierung ermöglicht. In G. Plehn (Ed.), Jahrbuch der Max-Planck-Gesellschaft (pp. 599-601). Göttingen: Vandenhoeck & Ruprecht.
  • Janzen, G., & Weststeijn, C. (2004). Neural representation of object location and route direction: An fMRI study. NeuroImage, 22(Supplement 1), e634-e635.
  • Janzen, G., & Van Turennout, M. (2004). Neuronale Markierung navigationsrelevanter Objekte im räumlichen Gedächtnis: Ein fMRT Experiment. In D. Kerzel (Ed.), Beiträge zur 46. Tagung experimentell arbeitender Psychologen (pp. 125-125). Lengerich: Pabst Science Publishers.
  • Järvikivi, J., Pyykkönen, P., & Niemi, J. (2009). Exploiting degrees of inflectional ambiguity: Stem form and the time course of morphological processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(1), 221-237. doi:10.1037/a0014355.

    Abstract

    The authors compared sublexical and supralexical approaches to morphological processing with unambiguous and ambiguous inflected words and words with ambiguous stems in 3 masked and unmasked priming experiments in Finnish. Experiment 1 showed equal facilitation for all prime types with a short 60-ms stimulus onset asynchrony (SOA) but significant facilitation for unambiguous words only with a long 300-ms SOA. Experiment 2 showed that all potential readings of ambiguous inflections were activated under a short SOA. Whereas the prime-target form overlap did not affect the results under a short SOA, it significantly modulated the results with a long SOA. Experiment 3 confirmed that the results from masked priming were modulated by the morphological structure of the words but not by the prime-target form overlap alone. The results support approaches in which early prelexical morphological processing is driven by morph-based segmentation and form is used to cue selection between 2 candidates only during later processing.

  • Jesse, A., & Janse, E. (2009). Seeing a speaker's face helps stream segregation for younger and elderly adults [Abstract]. Journal of the Acoustical Society of America, 125(4), 2361.
  • Jesse, A., & Massaro, D. W. (2005). Towards a lexical fuzzy logical model of perception: The time-course of audiovisual speech processing in word identification. In E. Vatikiotis-Bateson, D. Burnham, & S. Fels (Eds.), Proceedings of the Auditory-Visual Speech Processing International Conference 2005 (pp. 35-36). Adelaide, Australia: Causal Productions.

    Abstract

    This study investigates the time-course of information processing in both the visual and the auditory speech signal as used for word identification in face-to-face communication. It extends the limited previous research on this topic and provides a valuable database for future research in audiovisual speech perception. An evaluation of models of speech perception by ear and eye in their ability to account for the audiovisual gating data shows a superior role of the fuzzy logical model of perception (FLMP) [1] over additive models of perception. A new dynamic version of the FLMP seems to be a promising model to account for the complex interplay of perceptual and cognitive information in audiovisual spoken word recognition.
  • Jesse, A. (2005). Towards a lexical fuzzy logical model of perception: The time-course of information in lexical identification of face-to-face speech. PhD Thesis, University of California, Santa Cruz.

    Abstract

    In face-to-face communication, information from the face as well as from the voice contributes to the identification of spoken words. This dissertation investigates the time-course of the evaluation and integration of visual and auditory speech in audiovisual word identification. A large-scale audiovisual gating study extends previous research on this topic by (1) using a set of words that includes all possible initial consonants in English in three vowel contexts, (2) tracking the information processing for individual words not only across modalities, but also over time, and (3) testing quantitative models of the time-course of multimodal word recognition. There was an advantage in accuracy for audiovisual speech over auditory-only and visual-only speech. Auditory performance was, however, close to ceiling while performance on visual-only trials was near the floor of the scale, but well above chance. Visual information was used at all gates to identify the presented words. Information theoretic feature analyses of the confusion matrices revealed that the auditory signal is highly informative about voicing, manner, frication, duration, and place of articulation. Visual speech is mostly informative about place of articulation, but also about frication and duration. The auditory signal provides more information about the place of articulation for back consonants, whereas the visual signal provides more information for the labial consonants. The data were sufficient to discriminate between models of audiovisual word recognition. The Fuzzy Logical Model of Perception (FLMP; Massaro, 1998) gave a better account of the confusion matrix data than additive models of perception. A dynamic version of the FLMP was expanded to account for the evaluation and integration of information over time. This dynamic FLMP provided a better description of the data than dynamic additive competitor models. The present study builds a good foundation to investigate the role of the complex interplay between stimulus information and the structure of the lexicon. It provides an important step in building a formal representation of a lexical dynamic FLMP that can account not only for the time-course of speech information and its perceptual processing, but also for lexical influences.
  • Jesse, A., & Janse, E. (2009). Visual speech information aids elderly adults in stream segregation. In B.-J. Theobald, & R. Harvey (Eds.), Proceedings of the International Conference on Auditory-Visual Speech Processing 2009 (pp. 22-27). Norwich, UK: School of Computing Sciences, University of East Anglia.

    Abstract

    Listening to a speaker while hearing another speaker talk is a challenging task for elderly listeners. We show that elderly listeners over the age of 65 with various degrees of age-related hearing loss benefit in this situation from also seeing the speaker they intend to listen to. In a phoneme monitoring task, listeners monitored the speech of a target speaker for either the phoneme /p/ or /k/ while simultaneously hearing a competing speaker. Critically, on some trials, the target speaker was also visible. Elderly listeners benefited in their response times and accuracy levels from seeing the target speaker when monitoring for the less visible /k/, but more so when monitoring for the highly visible /p/. Visual speech therefore aids elderly listeners not only by providing segmental information about the target phoneme, but also by providing more global information that allows for better performance in this adverse listening situation.
  • Johns, T. G., Vitali, A. A., Perera, R. M., Vernes, S. C., & Scott, A. M. (2005). Ligand-independent activation of the EGFRvIII: A naturally occurring mutation of the EGFR commonly expressed in glioma [Abstract]. Neuro-Oncology, 7, 299.

    Abstract

    Mutations of the epidermal growth factor receptor (EGFR) gene are found at a relatively high frequency in glioma, with the most common being the de2-7 EGFR (or EGFRvIII). This mutation arises from an in-frame deletion of exons 2–7, which removes 267 amino acids from the extracellular domain of the receptor. Despite being unable to bind ligand, the de2-7 EGFR is constitutively active at a low level. Transfection of human glioma cells with the de2-7 EGFR has little effect in vitro, but when grown as tumor xenografts this mutated receptor imparts a dramatic growth advantage. We have now mapped the phosphorylation pattern of de2-7 EGFR, both in vivo and in vitro, using a panel of antibodies unique to the different phosphorylated tyrosine residues. Phosphorylation of de2-7 EGFR was detected constitutively at all tyrosine sites surveyed both in vitro and in vivo, including tyrosine 845, a known target in the wild-type EGFR for src kinase. There was a substantial upregulation of phosphorylation at every tyrosine residue of the de2-7 EGFR when cells were grown in vivo compared to the receptor isolated from cells cultured in vitro. Upregulation of phosphorylation could be mimicked in vitro by the addition of specific components of the ECM such as collagen via an integrin-dependent mechanism. Since this increase in in vivo phosphorylation enhances de2-7 EGFR signaling, this observation explains why the growth enhancement mediated by de2-7 EGFR is largely restricted to the in vivo environment. In a second set of experiments we analyzed the interaction between EGFRvIII and ErbB2. Co-expression of these proteins in NR6 cells, a mouse fibroblast line devoid of ErbB family members, dramatically enhanced in vivo tumorigenicity of these cells compared to cells expressing either protein alone. Detailed analysis of these xenografts demonstrated that EGFRvIII could heterodimerize and transphosphorylate the ErbB2. Since both EGFRvIII and ErbB2 are commonly expressed in gliomas, these data suggest that the co-expression of these two proteins may enhance glioma tumorigenicity.
  • Johns, T. G., Perera, R. M., Vitali, A. A., Vernes, S. C., & Scott, A. (2004). Phosphorylation of a glioma-specific mutation of the EGFR [Abstract]. Neuro-Oncology, 6, 317.

    Abstract

    Mutations of the epidermal growth factor receptor (EGFR) gene are found at a relatively high frequency in glioma, with the most common being the de2-7 EGFR (or EGFRvIII). This mutation arises from an in-frame deletion of exons 2-7, which removes 267 amino acids from the extracellular domain of the receptor. Despite being unable to bind ligand, the de2-7 EGFR is constitutively active at a low level. Transfection of human glioma cells with the de2-7 EGFR has little effect in vitro, but when grown as tumor xenografts this mutated receptor imparts a dramatic growth advantage. We mapped the phosphorylation pattern of de2-7 EGFR, both in vivo and in vitro, using a panel of antibodies specific for different phosphorylated tyrosine residues. Phosphorylation of de2-7 EGFR was detected constitutively at all tyrosine sites surveyed in vitro and in vivo, including tyrosine 845, a known target in the wild-type EGFR for src kinase. There was a substantial upregulation of phosphorylation at every tyrosine residue of the de2-7 EGFR when cells were grown in vivo compared to the receptor isolated from cells cultured in vitro. Upregulation of phosphorylation at tyrosine 845 could be stimulated in vitro by the addition of specific components of the ECM via an integrin-dependent mechanism. These observations may partially explain why the growth enhancement mediated by de2-7 EGFR is largely restricted to the in vivo environment.
  • Johnson, E. K., & Seidl, A. (2009). At 11 months, prosody still outranks statistics. Developmental Science, 12, 131-141. doi:10.1111/j.1467-7687.2008.00740.x.

    Abstract

    English-learning 7.5-month-olds are heavily biased to perceive stressed syllables as word onsets. By 11 months, however, infants begin segmenting non-initially stressed words from speech. Using the same artificial language methodology as Johnson and Jusczyk (2001), we explored the possibility that the emergence of this ability is linked to a decreased reliance on prosodic cues to word boundaries accompanied by an increased reliance on syllable distribution cues. In a baseline study, where only statistical cues to word boundaries were present, infants exhibited a familiarity preference for statistical words. When conflicting stress cues were added to the speech stream, infants exhibited a familiarity preference for stress as opposed to statistical words. This was interpreted as evidence that 11-month-olds weight stress cues to word boundaries more heavily than statistical cues. Experiment 2 further investigated these results with a language containing convergent cues to word boundaries. The results of Experiment 2 were not conclusive. A third experiment using new stimuli and a different experimental design supported the conclusion that 11-month-olds rely more heavily on prosodic than statistical cues to word boundaries. We conclude that the emergence of the ability to segment non-initially stressed words from speech is not likely to be tied to an increased reliance on syllable distribution cues relative to stress cues, but instead may emerge due to an increased reliance on and integration of a broad array of segmentation cues.
  • Johnson, E. K. (2005). English-learning infants' representations of word-forms with iambic stress. Infancy, 7(1), 95-105. doi:10.1207/s15327078in0701_8.

    Abstract

    Retaining detailed representations of unstressed syllables is a logical prerequisite for infants' use of probabilistic phonotactics to segment iambic words from fluent speech. The head-turn preference study was used to investigate the nature of English-learners' representations of iambic word onsets. Fifty-four 10.5-month-olds were familiarized to passages containing the nonsense iambic word forms ginome and tupong. Following familiarization, infants were either tested on familiar (ginome and tupong) or near-familiar (pinome and bupong) versus unfamiliar (kidar and mafoos) words. Infants in the familiar test group (familiar vs. unfamiliar) oriented significantly longer to familiar than unfamiliar test items, whereas infants in the near-familiar test group (near-familiar vs. unfamiliar) oriented equally long to near-familiar and unfamiliar test items. Our results provide evidence that infants retain fairly detailed representations of unstressed syllables and therefore support the hypothesis that infants use phonotactic cues to find words in fluent speech.
  • Johnson, E. K. (2005). Grammatical gender and early word recognition in Dutch. In A. Brugos, M. R. Clark-Cotton, & S. Ha (Eds.), Proceedings of the 29th Boston University Conference on Language Development (pp. 320-330). Somerville, MA: Cascadilla Press.
  • Johnson, E. K., Westrek, E., & Nazzi, T. (2005). Language familiarity affects voice discrimination by seven-month-olds. In Proceedings of the ISCA Workshop on Plasticity in Speech Perception (PSP2005) (pp. 227-230).
  • Johnsrude, I., Davis, M., & Hervais-Adelman, A. (2005). From sound to meaning: Hierarchical processing in speech comprehension. In D. Pressnitzer, S. McAdams, A. DeCheveigne, & L. Collet (Eds.), Auditory Signal Processing: Physiology, Psychoacoustics, and Models (pp. 299-306). New York: Springer.
  • Jolink, A. (2005). Finite linking in normally developing Dutch children and children with specific language impairment. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 61-81.
  • Jolink, A. (2009). Finiteness in children with SLI: A functional approach. In C. Dimroth, & P. Jordens (Eds.), Functional categories in learner language (pp. 235-260). Berlin: Mouton de Gruyter.
  • Jordan, F., Gray, R., Greenhill, S., & Mace, R. (2009). Matrilocal residence is ancestral in Austronesian societies. Proceedings of the Royal Society of London Series B-Biological Sciences, 276(1664), 1957-1964. doi:10.1098/rspb.2009.0088.

    Abstract

    The nature of social life in human prehistory is elusive, yet knowing how kinship systems evolve is critical for understanding population history and cultural diversity. Post-marital residence rules specify sex-specific dispersal and kin association, influencing the pattern of genetic markers across populations. Cultural phylogenetics allows us to practise 'virtual archaeology' on these aspects of social life that leave no trace in the archaeological record. Here we show that early Austronesian societies practised matrilocal post-marital residence. Using a Markov-chain Monte Carlo comparative method implemented in a Bayesian phylogenetic framework, we estimated the type of residence at each ancestral node in a sample of Austronesian language trees spanning 135 Pacific societies. Matrilocal residence has been hypothesized for proto-Oceanic society (ca 3500 BP), but we find strong evidence that matrilocality was predominant in earlier Austronesian societies ca 5000-4500 BP, at the root of the language family and its early branches. Our results illuminate the divergent patterns of mtDNA and Y-chromosome markers seen in the Pacific. The analysis of present-day cross-cultural data in this way allows us to directly address cultural evolutionary and life-history processes in prehistory.
  • Jordan, F., & Mace, R. (2005). The evolution of human sex-ratio at birth: A bio-cultural analysis. In R. Mace, C. J. Holden, & S. Shennan (Eds.), The evolution of cultural diversity: A phylogenetic approach (pp. 207-216). London: UCL Press.
  • Jordens, P. (2004). Systematiek en dynamiek bij de verwerving van Finietheid. Toegepaste Taalwetenschap in Artikelen, 71, 9-22.

    Abstract

    In early Dutch learner varieties, there is no evidence of finiteness being a functional category. There is no V2nd: no correlation between inflectional morphology and movement. Initially, learners express the illocutive function of finiteness through the use of illocutive markers, with the non-use of an illocutive marker expressing the default illocutive function of assertion. Illocutive markers function as adjuncts with scope over the predicate. Illocutive markers become re-analysed as functional elements. The driving force is the acquisition of the auxiliary verbs that occur with past participles. It leads to a reanalysis of illocutive markers as two separate elements: an auxiliary verb and a scope adverb. The (modal) auxiliary carries illocutive function. Lexical verb-argument structure (including the external argument) occurs within the domain of the auxiliary verb. The predicate as the focus constituent occurs within the domain of a scope adverb. This reanalysis establishes a position for the external argument within the domain of AUX. The acquisition of AUX causes the acquisition of a (hierarchical) structure with a complement as a constituent which represents an underlying verb-argument structure, a predicate as the domain of elements that are in focus, and an external (specifier) position as a landing site for elements with topic function.
