Publications

  • Gullberg, M., & Indefrey, P. (2008). Cognitive and neural prerequisites for time in language: Any answers? Language Learning, 58(suppl. 1), 207-216. doi:10.1111/j.1467-9922.2008.00472.x.
  • Gullberg, M., De Bot, K., & Volterra, V. (2008). Gestures and some key issues in the study of language development. Gesture, 8(2), 149-179. doi:10.1075/gest.8.2.03gul.

    Abstract

    The purpose of the current paper is to outline how gestures can contribute to the study of some key issues in language development. Specifically, we (1) briefly summarise what is already known about gesture in the domains of first and second language development, and development or changes over the life span more generally; (2) highlight theoretical and empirical issues in these domains where gestures can contribute in important ways to further our understanding; and (3) summarise some common themes in all strands of research on language development that could be the target of concentrated research efforts.
  • Gullberg, M., & De Bot, K. (Eds.). (2008). Gestures in language development [Special Issue]. Gesture, 8(2).
  • Gullberg, M., & McCafferty, S. G. (2008). Introduction to gesture and SLA: Toward an integrated approach. Studies in Second Language Acquisition, 30(2), 133-146. doi:10.1017/S0272263108080285.

    Abstract

    The title of this special issue, Gesture and SLA: Toward an Integrated Approach, stems in large part from the idea known as integrationism, principally set forth by Harris (2003, 2005), which posits that it is time to “demythologize” linguistics, moving away from the “orthodox exponents” that have idealized the notion of language. The integrationist approach intends a view that focuses on communication—that is, language in use, language as a “fact of life” (Harris, 2003, p. 50). Although not all gesture studies embrace an integrationist view—indeed, the field applies numerous theories across various disciplines—it is nonetheless true that to study gesture is to study what has traditionally been called paralinguistic modes of interaction, with the paralinguistic label given on the assumption that gesture is not part of the core meaning of what is rendered linguistically. However, arguably, most researchers within gesture studies would maintain just the opposite: The studies presented in this special issue reflect a view whereby gesture is regarded as a central aspect of language in use, integral to how we communicate (make meaning) both with each other and with ourselves.
  • Gullberg, M., Hendriks, H., & Hickmann, M. (2008). Learning to talk and gesture about motion in French. First Language, 28(2), 200-236. doi:10.1177/0142723707088074.

    Abstract

    This study explores how French adults and children aged four and six years talk and gesture about voluntary motion, examining (1) how they encode path and manner in speech, (2) how they encode this information in accompanying gestures; and (3) whether gestures are co-expressive with speech or express other information. When path and manner are equally relevant, children’s and adults’ speech and gestures both focus on path, rather than on manner. Moreover, gestures are predominantly co-expressive with speech at all ages. However, when they are non-redundant, adults tend to gesture about path while talking about manner, whereas children gesture about both path and manner while talking about path. The discussion highlights implications for our understanding of speakers’ representations and their development.
  • Gullberg, M. (2009). Gestures and the development of semantic representations in first and second language acquisition. Languages, Interaction, and Acquisition (formerly Acquisition et Interaction en Langue Etrangère, AILE), 1, 117-139.

    Abstract

    This paper argues that speech-associated gestures can usefully inform studies exploring development of meaning in first and second language acquisition. The example domain is caused motion or placement meaning (putting a cup on a table) where acquisition problems have been observed and where adult native gesture use reflects crosslinguistically different placement verb semantics. Against this background, the paper summarises three studies examining the development of semantic representations in Dutch children acquiring Dutch, and adult learners acquiring Dutch and French placement verbs. Overall, gestures change systematically with semantic development both in children and adults and (1) reveal what semantic elements are included in current semantic representations, whether target-like or not, and (2) highlight developmental shifts in those representations. There is little evidence that gestures chiefly act as a support channel. Instead, the data support the theoretical notion that speech and gesture form an integrated system, opening new possibilities for studying the processes of acquisition.
  • Gullberg, M. (2010). Methodological reflections on gesture analysis in second language acquisition and bilingualism research. Second Language Research, 26(1), 75-102. doi:10.1177/0267658309337639.

    Abstract

    Gestures, the symbolic movements speakers perform while they speak, form a closely inter-connected system with speech where gestures serve both addressee-directed (‘communicative’) and speaker-directed (‘internal’) functions. This paper aims (1) to show that a combined analysis of gesture and speech offers new ways to address theoretical issues in SLA and bilingualism studies, probing SLA and bilingualism as product and process; and (2) to outline some methodological concerns and desiderata to facilitate the inclusion of gesture in SLA and bilingualism research.
  • Gullberg, M. (2009). Reconstructing verb meaning in a second language: How English speakers of L2 Dutch talk and gesture about placement. Annual Review of Cognitive Linguistics, 7, 221-245. doi:10.1075/arcl.7.09gul.

    Abstract

    This study examines to what extent English speakers of L2 Dutch reconstruct the meanings of placement verbs when moving from a general L1 verb of caused motion (put) to two specific caused posture verbs (zetten/leggen ‘set/lay’) in the L2 and whether the existence of low-frequency cognate forms in the L1 (set/lay) alleviates the reconstruction problem. Evidence from speech and gesture indicates that English speakers have difficulties with the specific verbs in L2 Dutch, initially looking for means to express general caused motion in L1-like fashion through over-generalisation. The gesture data further show that targetlike forms are often used to convey L1-like meaning. However, the differentiated use of zetten for vertical placement and dummy verbs (gaan ‘go’ and doen ‘do’) and intransitive posture verbs (zitten/staan/liggen ‘sit, stand, lie’) for horizontal placement, and a positive correlation between appropriate verb use and target-like gesturing suggest a beginning sensitivity to the semantic parameters of the L2 verbs and possible reconstruction.
  • Gullberg, M., & Indefrey, P. (Eds.). (2010). The earliest stages of language learning [Special Issue]. Language Learning, 60(Supplement s2).
  • Gullberg, M., & Narasimhan, B. (2010). What gestures reveal about the development of semantic distinctions in Dutch children's placement verbs. Cognitive Linguistics, 21(2), 239-262. doi:10.1515/COGL.2010.009.

    Abstract

    Placement verbs describe everyday events like putting a toy in a box. Dutch uses two semi-obligatory caused posture verbs (leggen ‘lay’ and zetten ‘set/stand’) to distinguish between events based on whether the located object is placed horizontally or vertically. Although prevalent in the input, these verbs cause Dutch children difficulties even at age five (Narasimhan & Gullberg, submitted). Children overextend leggen to all placement events and underextend use of zetten. This study examines what gestures can reveal about Dutch three- and five-year-olds’ semantic representations of such verbs. The results show that children gesture differently from adults in this domain. Three-year-olds express only the path of the caused motion, whereas five-year-olds, like adults, also incorporate the located object. Crucially, gesture patterns are tied to verb use: those children who over-use leggen 'lay' for all placement events only gesture about path. Conversely, children who use the two verbs differentially for horizontal and vertical placement also incorporate objects in gestures like adults. We argue that children's gestures reflect their current knowledge of verb semantics, and indicate a developmental transition from a system with a single semantic component – (caused) movement – to an (adult-like) focus on two semantic components – (caused) movement-and-object.
  • Guo, Y., Martin, R. C., Hamilton, C., Van Dyke, J., & Tan, Y. (2010). Neural basis of semantic and syntactic interference resolution in sentence comprehension. Procedia - Social and Behavioral Sciences, 6, 88-89. doi:10.1016/j.sbspro.2010.08.045.
  • Hagoort, P. (2008). Should psychology ignore the language of the brain? Current Directions in Psychological Science, 17(2), 96-101. doi:10.1111/j.1467-8721.2008.00556.x.

    Abstract

    Claims that neuroscientific data do not contribute to our understanding of psychological functions have been made recently. Here I argue that these criticisms are solely based on an analysis of functional magnetic resonance imaging (fMRI) studies. However, fMRI is only one of the methods in the toolkit of cognitive neuroscience. I provide examples from research on event-related brain potentials (ERPs) that have contributed to our understanding of the cognitive architecture of human language functions. In addition, I provide evidence of (possible) contributions from fMRI measurements to our understanding of the functional architecture of language processing. Finally, I argue that a neurobiology of human language that integrates information about the necessary genetic and neural infrastructures will allow us to answer certain questions that are not answerable if all we have is evidence from behavior.
  • Hagoort, P. (2008). The fractionation of spoken language understanding by measuring electrical and magnetic brain signals. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 363, 1055-1069. doi:10.1098/rstb.2007.2159.

    Abstract

    This paper focuses on what electrical and magnetic recordings of human brain activity reveal about spoken language understanding. Based on the high temporal resolution of these recordings, a fine-grained temporal profile of different aspects of spoken language comprehension can be obtained. Crucial aspects of speech comprehension are lexical access, selection and semantic integration. Results show that for words spoken in context, there is no ‘magic moment’ when lexical selection ends and semantic integration begins. Irrespective of whether words have early or late recognition points, semantic integration processing is initiated before words can be identified on the basis of the acoustic information alone. Moreover, for one particular event-related brain potential (ERP) component (the N400), equivalent impact of sentence- and discourse-semantic contexts is observed. This indicates that in comprehension, a spoken word is immediately evaluated relative to the widest interpretive domain available. In addition, this happens very quickly. Findings are discussed that show that often an unfolding word can be mapped onto discourse-level representations well before the end of the word. Overall, the time course of the ERP effects is compatible with the view that the different information types (lexical, syntactic, phonological, pragmatic) are processed in parallel and influence the interpretation process incrementally, that is as soon as the relevant pieces of information are available. This is referred to as the immediacy principle.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter. Psychologie, 17, 48-49.
  • Li, X., Hagoort, P., & Yang, Y. (2008). Event-related potential evidence on the influence of accentuation in spoken discourse comprehension in Chinese. Journal of Cognitive Neuroscience, 20(5), 906-915. doi:10.1162/jocn.2008.20512.

    Abstract

    In an event-related potential experiment with Chinese discourses as material, we investigated how and when accentuation influences spoken discourse comprehension in relation to the different information states of the critical words. These words could either provide new or old information. It was shown that variation of accentuation influenced the amplitude of the N400, with a larger amplitude for accented than deaccented words. In addition, there was an interaction between accentuation and information state. The N400 amplitude difference between accented and deaccented new information was smaller than that between accented and deaccented old information. The results demonstrate that, during spoken discourse comprehension, listeners rapidly extract the semantic consequences of accentuation in relation to the previous discourse context. Moreover, our results show that the N400 amplitude can be larger for correct (new, accented words) than incorrect (new, deaccented words) information. This, we argue, proves that the N400 does not react to semantic anomaly per se, but rather to semantic integration load, which is higher for new information.
  • Hagoort, P. (2008). Mijn omweg naar de filosofie. Algemeen Nederlands Tijdschrift voor Wijsbegeerte, 100(4), 303-310.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk. Neuropraxis, 6, 204-205.
  • Hagoort, P. (2018). Prerequisites for an evolutionary stance on the neurobiology of language. Current Opinion in Behavioral Sciences, 21, 191-194. doi:10.1016/j.cobeha.2018.05.012.
  • Hagoort, P., & Levelt, W. J. M. (2009). The speaking brain. Science, 326(5951), 372-373. doi:10.1126/science.1181675.

    Abstract

    How does intention to speak become the action of speaking? It involves the generation of a preverbal message that is tailored to the requirements of a particular language, and through a series of steps, the message is transformed into a linear sequence of speech sounds (1, 2). These steps include retrieving different kinds of information from memory (semantic, syntactic, and phonological), and combining them into larger structures, a process called unification. Despite general agreement about the steps that connect intention to articulation, there is no consensus about their temporal profile or the role of feedback from later steps (3, 4). In addition, since the discovery by the French physician Pierre Paul Broca (in 1865) of the role of the left inferior frontal cortex in speaking, relatively little progress has been made in understanding the neural infrastructure that supports speech production (5). One reason is that the characteristics of natural language are uniquely human, and thus the neurobiology of language lacks an adequate animal model. But on page 445 of this issue, Sahin et al. (6) demonstrate, by recording neuronal activity in the human brain, that different kinds of linguistic information are indeed sequentially processed within Broca's area.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2018). Infants' sensitivity to rhyme in songs. Infant Behavior and Development, 52, 130-139. doi:10.1016/j.infbeh.2018.07.002.

    Abstract

    Children’s songs often contain rhyming words at phrase endings. In this study, we investigated whether infants can already recognize this phonological pattern in songs. Earlier studies using lists of spoken words were equivocal on infants’ spontaneous processing of rhymes (Hayes, Slater, & Brown, 2000; Jusczyk, Goodman, & Baumann, 1999). Songs, however, constitute an ecologically valid rhyming stimulus, which could allow for spontaneous processing of this phonological pattern in infants. Novel children’s songs with rhyming and non-rhyming lyrics using pseudo-words were presented to 35 9-month-old Dutch infants using the Headturn Preference Procedure. Infants on average listened longer to the non-rhyming songs, although around half of the infants exhibited a preference for the rhyming songs. These results highlight that infants have the processing abilities to benefit from their natural rhyming input for the development of their phonological abilities.
  • Hammarström, H. (2010). A full-scale test of the language farming dispersal hypothesis. Diachronica, 27(2), 197-213. doi:10.1075/dia.27.2.02ham.

    Abstract

    One attempt at explaining why some language families are large (while others are small) is the hypothesis that the families that are now large became large because their ancestral speakers had a technological advantage, most often agriculture. Variants of this idea are referred to as the Language Farming Dispersal Hypothesis. Previously, detailed language family studies have uncovered various supporting examples and counterexamples to this idea. In the present paper I weigh the evidence from all attested language families. For each family, I use the number of member languages as a measure of cardinal size, member language coordinates to measure geospatial size and ethnographic evidence to assess subsistence status. This data shows that, although agricultural families tend to be larger in cardinal size, their size is hardly due to the simple presence of farming. If farming were responsible for language family expansions, we would expect a greater east-west geospatial spread of large families than is actually observed. The data, however, is compatible with weaker versions of the farming dispersal hypothesis as well as with models where large families acquire farming because of their size, rather than the other way around.
  • Hammarström, H. (2010). The status of the least documented language families in the world. Language Documentation and Conservation, 4, 177-212. Retrieved from http://hdl.handle.net/10125/4478.

    Abstract

    This paper aims to list all known language families that are not yet extinct and all of whose member languages are very poorly documented, i.e., less than a sketch grammar’s worth of data has been collected. It explains what constitutes a valid family, what amount and kinds of documentary data are sufficient, when a language is considered extinct, and more. It is hoped that the survey will be useful in setting priorities for documentation fieldwork, in particular for those documentation efforts whose underlying goal is to understand linguistic diversity.
  • Hanulikova, A., & Hamann, S. (2010). Illustrations of Slovak IPA. Journal of the International Phonetic Association, 40(3), 373-378. doi:10.1017/S0025100310000162.

    Abstract

    Slovak (sometimes also called Slovakian) is an Indo-European language belonging to the West-Slavic branch, and is most closely related to Czech. Slovak is spoken as a native language by 4.6 million speakers in Slovakia (that is by roughly 85% of the population), and by over two million Slovaks living abroad, most of them in the USA, the Czech Republic, Hungary, Canada and Great Britain (Office for Slovaks Living Abroad 2009).
  • Hanulikova, A., McQueen, J. M., & Mitterer, H. (2010). Possible words and fixed stress in the segmentation of Slovak speech. Quarterly Journal of Experimental Psychology, 63, 555-579. doi:10.1080/17470210903038958.

    Abstract

    The possible-word constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997) has been proposed as a language-universal segmentation principle: Lexical candidates are disfavoured if the resulting segmentation of continuous speech leads to vowelless residues in the input—for example, single consonants. Three word-spotting experiments investigated segmentation in Slovak, a language with single-consonant words and fixed stress. In Experiment 1, Slovak listeners detected real words such as ruka “hand” embedded in prepositional-consonant contexts (e.g., /gruka/) faster than those in nonprepositional-consonant contexts (e.g., /truka/) and slowest in syllable contexts (e.g., /dugruka/). The second experiment controlled for effects of stress. Responses were still fastest in prepositional-consonant contexts, but were now slowest in nonprepositional-consonant contexts. In Experiment 3, the lexical and syllabic status of the contexts was manipulated. Responses were again slowest in nonprepositional-consonant contexts but equally fast in prepositional-consonant, prepositional-vowel, and nonprepositional-vowel contexts. These results suggest that Slovak listeners use fixed stress and the PWC to segment speech, but that single consonants that can be words have a special status in Slovak segmentation. Knowledge about what constitutes a phonologically acceptable word in a given language therefore determines whether vowelless stretches of speech are or are not treated as acceptable parts of the lexical parse.
  • Hasson, U., Egidi, G., Marelli, M., & Willems, R. M. (2018). Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension. Cognition, 180(1), 135-157. doi:10.1016/j.cognition.2018.06.018.

    Abstract

    Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
  • Haun, D. B. M., & Call, J. (2008). Imitation recognition in great apes. Current Biology, 18(7), 288-290. doi:10.1016/j.cub.2008.02.031.

    Abstract

    Human infants imitate not only to acquire skill, but also as a fundamental part of social interaction [1], [2] and [3]. They recognise when they are being imitated by showing increased visual attention to imitators (implicit recognition) and by engaging in so-called testing behaviours (explicit recognition). Implicit recognition affords the ability to recognize structural and temporal contingencies between actions across agents, whereas explicit recognition additionally affords the ability to understand the directional impact of one's own actions on others' actions [1], [2] and [3]. Imitation recognition is thought to foster understanding of social causality, intentionality in others and the formation of a concept of self as different from other [3], [4] and [5]. Pigtailed macaques (Macaca nemestrina) implicitly recognize being imitated [6], but unlike chimpanzees [7], they show no sign of explicit imitation recognition. We investigated imitation recognition in 11 individuals from the four species of non-human great apes. We replicated results previously found with a chimpanzee [7] and, critically, have extended them to the other great ape species. Our results show a general prevalence of imitation recognition in all great apes and thereby demonstrate important differences between great apes and monkeys in their understanding of contingent social interactions.
  • Haun, D. B. M., & Call, J. (2009). Great apes’ capacities to recognize relational similarity. Cognition, 110, 147-159. doi:10.1016/j.cognition.2008.10.012.

    Abstract

    Recognizing relational similarity relies on the ability to understand that defining object properties might not lie in the objects individually, but in the relations of the properties of various objects to each other. This aptitude is highly relevant for many important human skills such as language, reasoning, categorization and understanding analogy and metaphor. In the current study, we investigated the ability to recognize relational similarities by testing five species of great apes, including human children, in a spatial task. We found that all species performed better if related elements are connected by logico-causal as opposed to non-causal relations. Further, we find that only children above 4 years of age, bonobos and chimpanzees, unlike younger children, gorillas and orangutans, display some mastery of reasoning by non-causal relational similarity. We conclude that recognizing relational similarity is not in its entirety unique to the human species. The lack of a capability for language does not prohibit recognition of simple relational similarities. The data are discussed in the light of the phylogenetic tree of relatedness of the great apes.
  • Haun, D. B. M., Jordan, F., Vallortigara, G., & Clayton, N. S. (2010). Origins of spatial, temporal and numerical cognition: Insights from comparative psychology [Review article]. Trends in Cognitive Sciences, 14, 552-560. doi:10.1016/j.tics.2010.09.006.

    Abstract

    Contemporary comparative cognition has a large repertoire of animal models and methods, with concurrent theoretical advances that are providing initial answers to crucial questions about human cognition. What cognitive traits are uniquely human? What are the species-typical inherited predispositions of the human mind? What is the human mind capable of without certain types of specific experiences with the surrounding environment? Here, we review recent findings from the domains of space, time and number cognition. These findings are produced using different comparative methodologies relying on different animal species, namely birds and non-human great apes. The study of these species not only reveals the range of cognitive abilities across vertebrates, but also increases our understanding of human cognition in crucial ways.
  • Haun, D. B. M., & Rapold, C. J. (2009). Variation in memory for body movements across cultures. Current Biology, 19(23), R1068-R1069. doi:10.1016/j.cub.2009.10.041.

    Abstract

    There has been considerable controversy over the existence of cognitive differences across human cultures: some claim that human cognition is essentially universal [1,2], others that it reflects cultural specificities [3,4]. One domain of interest has been spatial cognition [5,6]. Despite the global universality of physical space, cultures vary as to how space is coded in their language. Some, for example, do not use egocentric ‘left, right, front, back’ constructions to code spatial relations, instead using allocentric notions like ‘north, south, east, west’ [4,6]: “The spoon is north of the bowl!” Whether or not spatial cognition also varies across cultures remains a contested question [7,8]. Here we investigate whether memory for movements of one's own body differs between cultures with contrastive strategies for coding spatial relations. Our results show that the ways in which we memorize movements of our own body differ in line with culture-specific preferences for how to conceive of spatial relations.
  • Havik, E., Roberts, L., Van Hout, R., Schreuder, R., & Haverkort, M. (2009). Processing subject-object ambiguities in L2 Dutch: A self-paced reading study with German L2 learners of Dutch. Language Learning, 59(1), 73-112. doi:10.1111/j.1467-9922.2009.00501.x.

    Abstract

    The results of two self-paced reading experiments are reported, which investigated the on-line processing of subject-object ambiguities in Dutch relative clause constructions like Dat is de vrouw die de meisjes heeft/hebben gezien by German advanced second language (L2) learners of Dutch. Native speakers of both Dutch and German have been shown to have a preference for a subject versus an object reading of such temporarily ambiguous sentences, and so we provided an ideal opportunity for the transfer of first language (L1) processing preferences to take place. We also investigated whether the participants' working memory span would affect their processing of the experimental items. The results suggest that processing decisions may be affected by working memory when task demands are high and in this case, the high working memory span learners patterned like the native speakers of lower working memory. However, when reading for comprehension alone, and when only structural information was available to guide parsing decisions, working memory span had no effect on the L2 learners' on-line processing, and this differed from the native speakers' even though the L1 and the L2 are highly comparable.
  • Havron, N., Raviv, L., & Arnon, I. (2018). Literate and preliterate children show different learning patterns in an artificial language learning task. Journal of Cultural Cognitive Science, 2, 21-33. doi:10.1007/s41809-018-0015-9.

    Abstract

    Literacy affects many aspects of cognitive and linguistic processing. Among them, it increases the salience of words as units of linguistic processing. Here, we explored the impact of literacy acquisition on children’s learning of an artificial language. Recent accounts of L1–L2 differences relate adults’ greater difficulty with language learning to their smaller reliance on multiword units. In particular, multiword units are claimed to be beneficial for learning opaque grammatical relations like grammatical gender. Since literacy impacts the reliance on words as units of processing, we ask if and how acquiring literacy may change children’s language-learning results. We looked at children’s success in learning novel noun labels relative to their success in learning article-noun gender agreement, before and after learning to read. We found that preliterate first graders were better at learning agreement (larger units) than at learning nouns (smaller units), and that the difference between the two trial types significantly decreased after these children acquired literacy. In contrast, literate third graders were as good in both trial types. These findings suggest that literacy affects not only language processing, but also leads to important differences in language learning. They support the idea that some of children’s advantage in language learning comes from their previous knowledge and experience with language—and specifically, their lack of experience with written texts.
  • Hayano, K. (2008). Talk and body: Negotiating action framework and social relationship in conversation. Studies in English and American Literature, 43, 187-198.
  • Hebebrand, J., Peters, T., Schijven, D., Hebebrand, M., Grasemann, C., Winkler, T. W., Heid, I. M., Antel, J., Föcker, M., Tegeler, L., Brauner, L., Adan, R. A., Luykx, J. J., Correll, C. U., König, I. R., Hinney, A., & Libuda, L. (2018). The role of genetic variation of human metabolism for BMI, mental traits and mental disorders. Molecular Metabolism, 12, 1-11. doi:10.1016/j.molmet.2018.03.015.

    Abstract

    Objective
    The aim was to assess whether loci associated with metabolic traits also have a significant role in BMI and mental traits/disorders.
    Methods
    We first assessed the number of single nucleotide polymorphisms (SNPs) with genome-wide significance for human metabolism (NHGRI-EBI Catalog). These 516 SNPs (216 independent loci) were looked up in genome-wide association studies for association with body mass index (BMI) and the mental traits/disorders: educational attainment, neuroticism, schizophrenia, well-being, anxiety, depressive symptoms, major depressive disorder, autism-spectrum disorder, attention-deficit/hyperactivity disorder, Alzheimer's disease, bipolar disorder, aggressive behavior, and internalizing problems. A strict significance threshold of p < 6.92 × 10−6 was based on the correction for 516 SNPs and all 14 phenotypes, a second less conservative threshold (p < 9.69 × 10−5) on the correction for the 516 SNPs only.
    Results
    19 SNPs located in nine independent loci revealed p-values < 6.92 × 10−6; the less strict criterion was met by 41 SNPs in 24 independent loci. BMI and schizophrenia showed the most pronounced genetic overlap with human metabolism with three loci each meeting the strict significance threshold. Overall, genetic variation associated with estimated glomerular filtration rate showed up frequently; single metabolite SNPs were associated with more than one phenotype. Replications in independent samples were obtained for BMI and educational attainment.
    Conclusions
    Approximately 5–10% of the regions involved in the regulation of blood/urine metabolite levels seem to also play a role in BMI and mental traits/disorders and related phenotypes. If validated in metabolomic studies of the respective phenotypes, the associated blood/urine metabolites may enable novel preventive and therapeutic strategies.
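    For reference, both thresholds correspond to a simple Bonferroni correction of a nominal α = 0.05 (the nominal level is assumed here; the abstract does not state it): 0.05 / (516 × 14) ≈ 6.92 × 10−6 for the strict threshold over all SNPs and phenotypes, and 0.05 / 516 ≈ 9.69 × 10−5 for the correction over the 516 SNPs only.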
  • Heeschen, C., Ryalls, J., & Hagoort, P. (1988). Psychological stress in Broca's versus Wernicke's aphasia. Clinical Linguistics & Phonetics, 2, 309-316. doi:10.3109/02699208808985262.

    Abstract

    We advance the hypothesis here that the higher-than-average vocal pitch (F0) found for speech of Broca's aphasics in experimental settings is due, in part, to increased psychological stress. Two experiments were conducted which manipulated conversational constraints and the sentence forms to be produced by aphasic patients. Our study revealed significant differences between changes in vocal pitch of agrammatic Broca's aphasics versus those of Wernicke's aphasics and normal controls. It is suggested that the greater psychological stress experienced by the Broca's aphasics, but not by the Wernicke's aphasics, accounts for these observed differences.
  • Heid, I. M., Henneman, P., Hicks, A., Coassin, S., Winkler, T., Aulchenko, Y. S., Fuchsberger, C., Song, K., Hivert, M.-F., Waterworth, D. M., Timpson, N. J., Richards, J. B., Perry, J. R. B., Tanaka, T., Amin, N., Kollerits, B., Pichler, I., Oostra, B. A., Thorand, B., Frants, R. R., Illig, T., Dupuis, J., Glaser, B., Spector, T., Guralnik, J., Egan, J. M., Florez, J. C., Evans, D. M., Soranzo, N., Bandinelli, S., Carlson, O. D., Frayling, T. M., Burling, K., Smith, G. D., Mooser, V., Ferrucci, L., Meigs, J. B., Vollenweider, P., Dijk, K. W. v., Pramstaller, P., Kronenberg, F., & van Duijn, C. M. (2010). Clear detection of ADIPOQ locus as the major gene for plasma adiponectin: Results of genome-wide association analyses including 4659 European individuals. Atherosclerosis, 208(2), 412-420. doi:10.1016/j.atherosclerosis.2009.11.035.

    Abstract

    OBJECTIVE: Plasma adiponectin is strongly associated with various components of metabolic syndrome, type 2 diabetes and cardiovascular outcomes. Concentrations are highly heritable and differ between men and women. We therefore aimed to investigate the genetics of plasma adiponectin in men and women. METHODS: We combined genome-wide association scans of three population-based studies including 4659 persons. For the replication stage in 13795 subjects, we selected the 20 top signals of the combined analysis, as well as the 10 top signals with p-values less than 1.0 × 10−4 for each of the men- and women-specific analyses. We further selected 73 SNPs that were consistently associated with metabolic syndrome parameters in previous genome-wide association studies to check for their association with plasma adiponectin. RESULTS: The ADIPOQ locus showed genome-wide significant p-values in the combined (p = 4.3 × 10−24) as well as in both women- and men-specific analyses (p = 8.7 × 10−17 and p = 2.5 × 10−11, respectively). None of the other 39 top signal SNPs showed evidence for association in the replication analysis. None of the 73 SNPs from metabolic syndrome loci exhibited association with plasma adiponectin (p > 0.01). CONCLUSIONS: We demonstrated the ADIPOQ gene as the only major gene for plasma adiponectin, which explains 6.7% of the phenotypic variance. We further found that neither this gene nor any of the metabolic syndrome loci explained the sex differences observed for plasma adiponectin. Larger studies are needed to identify more moderate genetic determinants of plasma adiponectin.
  • Heinemann, T. (2010). The question–response system of Danish. Journal of Pragmatics, 42, 2703-2725. doi:10.1016/j.pragma.2010.04.007.

    Abstract

    This paper provides an overview of the question–response system of Danish, based on a collection of 350 questions (and responses) collected from video recordings of naturally occurring face-to-face interactions between native speakers of Danish. The paper identifies the lexico-grammatical options for formulating questions, the range of social actions that can be implemented through questions and the relationship between questions and responses. It further describes features where Danish questions differ from a range of other languages in terms of, for instance, distribution and the relationship between question format and social action. For instance, Danish has a high frequency of interrogatively formatted questions and questions that are negatively formulated, when compared to languages that have the same grammatical options. In terms of action, Danish shows a higher number of questions that are used for making suggestions, offers and requests and does not use repetition as a way of answering a question as often as other languages.
  • Henderson, L., Coltheart, M., Cutler, A., & Vincent, N. (1988). Preface. Linguistics, 26(4), 519-520. doi:10.1515/ling.1988.26.4.519.
  • Hendriks, L., Witteman, M. J., Frietman, L. C. G., Westerhof, G., Van Baaren, R. B., Engels, R. C. M. E., & Dijksterhuis, A. J. (2009). Imitation can reduce malnutrition in residents in assisted living facilities [Letter to the editor]. Journal of the American Geriatrics Society, 57(1), 187-188. doi:10.1111/j.1532-5415.2009.02074.x.
  • Heritage, J., Elliott, M. N., Stivers, T., Richardson, A., & Mangione-Smith, R. (2010). Reducing inappropriate antibiotics prescribing: The role of online commentary on physical examination findings. Patient Education and Counseling, 81, 119-125. doi:10.1016/j.pec.2009.12.005.

    Abstract

    Objective: This study investigates the relationship of ‘online commentary’ (contemporaneous physician comments about physical examination [PE] findings) with (i) parent questioning of the treatment recommendation and (ii) inappropriate antibiotic prescribing. Methods: A nested cross-sectional study of 522 encounters motivated by upper respiratory symptoms in 27 California pediatric practices (38 pediatricians). Physicians completed a post-visit survey regarding physical examination findings, diagnosis, treatment, and whether they perceived the parent as expecting an antibiotic. Taped encounters were coded for ‘problem’ online commentary (PE findings discussed as significant or clearly abnormal) and ‘no problem’ online commentary (PE findings discussed reassuringly as normal or insignificant). Results: Online commentary during the PE occurred in 73% of visits with viral diagnoses (n = 261). Compared to similar cases with ‘no problem’ online commentary, ‘problem’ comments were associated with a 13% greater probability of parents questioning a non-antibiotic treatment plan (95% CI 0-26%, p = .05) and a 27% (95% CI: 2-52%, p < .05) greater probability of an inappropriate antibiotic prescription. Conclusion: With viral illnesses, problematic online comments are associated with more pediatrician-parent conflict over non-antibiotic treatment recommendations. This may increase inappropriate antibiotic prescribing. Practice implications: In viral cases, physicians should consider avoiding the use of problematic online commentary.
  • Hersh, T. A., Dimond, A. L., Ruth, B. A., Lupica, N. V., Bruce, J. C., Kelley, J. M., King, B. L., & Lutton, B. V. (2018). A role for the CXCR4-CXCL12 axis in the little skate, Leucoraja erinacea. American Journal of Physiology-Regulatory, Integrative and Comparative Physiology, 315, R218-R229. doi:10.1152/ajpregu.00322.2017.

    Abstract

    The interaction between C-X-C chemokine receptor type 4 (CXCR4) and its cognate ligand C-X-C motif chemokine ligand 12 (CXCL12) plays a critical role in regulating hematopoietic stem cell activation and subsequent cellular mobilization. Extensive studies of these genes have been conducted in mammals, but much less is known about the expression and function of CXCR4 and CXCL12 in non-mammalian vertebrates. In the present study, we identify simultaneous expression of CXCR4 and CXCL12 orthologs in the epigonal organ (the primary hematopoietic tissue) of the little skate, Leucoraja erinacea. Genetic and phylogenetic analyses were functionally supported by significant mobilization of leukocytes following administration of Plerixafor, a CXCR4 antagonist and clinically important drug. Our results provide evidence that, as in humans, Plerixafor disrupts CXCR4/CXCL12 binding in the little skate, facilitating release of leukocytes into the bloodstream. Our study illustrates the value of the little skate as a model organism, particularly in studies of hematopoiesis and potentially for preclinical research on hematological and vascular disorders.
  • Hervais-Adelman, A., Egorova, N., & Golestani, N. (2018). Beyond bilingualism: Multilingual experience correlates with caudate volume. Brain Structure and Function, 223(7), 3495-3502. doi:10.1007/s00429-018-1695-0.

    Abstract

    The multilingual brain implements mechanisms that serve to select the appropriate language as a function of the communicative environment. Engaging these mechanisms on a regular basis appears to have consequences for brain structure and function. Studies have implicated the caudate nuclei as important nodes in polyglot language control processes, and have also shown structural differences in the caudate nuclei in bilingual compared to monolingual populations. However, the majority of published work has focused on the categorical differences between monolingual and bilingual individuals, and little is known about whether these findings extend to multilingual individuals, who have even greater language control demands. In the present paper, we present an analysis of the volume and morphology of the caudate nuclei, putamen, pallidum and thalami in 75 multilingual individuals who speak three or more languages. Volumetric analyses revealed a significant relationship between multilingual experience and right caudate volume, as well as a marginally significant relationship with left caudate volume. Vertex-wise analyses revealed a significant enlargement of dorsal and anterior portions of the left caudate nucleus, known to have connectivity with executive brain regions, as a function of multilingual expertise. These results suggest that multilingual expertise might exercise a continuous impact on brain structure, and that as additional languages beyond a second are acquired, the additional demands for linguistic and cognitive control result in modifications to brain structures associated with language management processes.
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2018). Commentary: Broca pars triangularis constitutes a “hub” of the language-control network during simultaneous language translation. Frontiers in Human Neuroscience, 12: 22. doi:10.3389/fnhum.2018.00022.

    Abstract

    A commentary on
    Broca Pars Triangularis Constitutes a “Hub” of the Language-Control Network during Simultaneous Language Translation

    by Elmer, S. (2016). Front. Hum. Neurosci. 10:491. doi: 10.3389/fnhum.2016.00491

    Elmer (2016) conducted an fMRI investigation of “simultaneous language translation” in five participants. The article presents group and individual analyses of German-to-Italian and Italian-to-German translation, confined to a small set of anatomical regions previously reported to be involved in multilingual control. Here we take the opportunity to discuss concerns regarding certain aspects of the study.
  • Hervais-Adelman, A., Davis, M. H., Johnsrude, I. S., & Carlyon, R. P. (2008). Perceptual learning of noise vocoded words: Effects of feedback and lexicality. Journal of Experimental Psychology: Human Perception and Performance, 34(2), 460-474. doi:10.1037/0096-1523.34.2.460.

    Abstract

    Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word report showed a significantly greater improvement over trials for listeners that heard clear speech presentations before rather than after hearing distorted speech (clear-then-distorted compared with distorted-then-clear feedback, in Experiment 1). This perceptual learning generalized to untrained words suggesting a sublexical locus for learning and was equivalent for word and nonword training stimuli (Experiment 2). These findings point to the crucial involvement of phonological short-term memory and top-down processes in the perceptual learning of noise-vocoded speech. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
  • Heyne, H. O., Singh, T., Stamberger, H., Jamra, R. A., Caglayan, H., Craiu, D., Guerrini, R., Helbig, K. L., Koeleman, B. P. C., Kosmicki, J. A., Linnankivi, T., May, P., Muhle, H., Møller, R. S., Neubauer, B. A., Palotie, A., Pendziwiat, M., Striano, P., Tang, S., Wu, S., EuroEPINOMICS RES Consortium, De Kovel, C. G. F., Poduri, A., Weber, Y. G., Weckhuysen, S., Sisodiya, S. M., Daly, M. J., Helbig, I., Lal, D., & Lemke, J. R. (2018). De novo variants in neurodevelopmental disorders with epilepsy. Nature Genetics, 50, 1048-1053. doi:10.1038/s41588-018-0143-7.

    Abstract

    Epilepsy is a frequent feature of neurodevelopmental disorders (NDDs), but little is known about genetic differences between NDDs with and without epilepsy. We analyzed de novo variants (DNVs) in 6,753 parent–offspring trios ascertained to have different NDDs. In the subset of 1,942 individuals with NDDs with epilepsy, we identified 33 genes with a significant excess of DNVs, of which SNAP25 and GABRB2 had previously only limited evidence of disease association. Joint analysis of all individuals with NDDs also implicated CACNA1E as a novel disease-associated gene. Comparing NDDs with and without epilepsy, we found missense DNVs, DNVs in specific genes, age of recruitment, and severity of intellectual disability to be associated with epilepsy. We further demonstrate the extent to which our results affect current genetic testing as well as treatment, emphasizing the benefit of accurate genetic diagnosis in NDDs with epilepsy.
  • Heyselaar, E., Mazaheri, A., Hagoort, P., & Segaert, K. (2018). Changes in alpha activity reveal that social opinion modulates attention allocation during face processing. NeuroImage, 174, 432-440. doi:10.1016/j.neuroimage.2018.03.034.

    Abstract

    Participants’ performance differs when conducting a task in the presence of a secondary individual; moreover, the opinion the participant has of this individual also plays a role. Using EEG, we investigated how previous interactions with, and evaluations of, an avatar in virtual reality subsequently influenced attentional allocation to the face of that avatar. We focused on changes in the alpha activity as an index of attentional allocation. We found that the onset of the face of an avatar the participant had developed a rapport with induced greater alpha suppression. This suggests greater attentional resources are allocated to the interacted-with avatars. The evaluative ratings of the avatar induced a U-shaped change in alpha suppression, such that participants paid most attention when the avatar was rated as average. These results suggest that attentional allocation is an important element of how behaviour is altered in the presence of a secondary individual and is modulated by our opinion of that individual.
  • Hill, C. (2010). [Review of the book Discourse and Grammar in Australian Languages ed. by Ilana Mushin and Brett Baker]. Studies in Language, 34(1), 215-225. doi:10.1075/sl.34.1.12hil.
  • Hilverman, C., Clough, S., Duff, M. C., & Cook, S. W. (2018). Patients with hippocampal amnesia successfully integrate gesture and speech. Neuropsychologia, 117, 332-338. doi:10.1016/j.neuropsychologia.2018.06.012.

    Abstract

    During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus, known for its role in relational memory and information integration, is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures that contained supplementary information. Participants were asked to retell the narratives and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and produced fewer retellings that matched the speech from the narrative. Yet their retellings included features that contained information that had been present uniquely in gesture in amounts that were not reliably different from comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms.
  • Hoey, E. (2018). How speakers continue with talk after a lapse in conversation. Research on Language and Social Interaction, 51(3), 329-346. doi:10.1080/08351813.2018.1485234.

    Abstract

    How do conversational participants continue with turn-by-turn talk after a momentary lapse? If all participants forgo the option to speak at possible sequence completion, an extended silence may emerge that can indicate a lack of anything to talk about next. For the interaction to proceed recognizably as a conversation, the postlapse turn needs to implicate more talk. Using conversation analysis, I examine three practical alternatives regarding sequentially implicative postlapse turns: Participants may move to end the interaction, continue with some prior matter, or start something new. Participants are shown using resources grounded in the interaction’s overall structural organization, the materials from the interaction-so-far, the mentionables they bring to interaction, and the situated environment itself. Comparing these alternatives, there is suggestive quantitative evidence for a preference for continuation. The analysis of lapse resolution shows lapses as places for the management of multiple possible courses of action. Data are in U.S. and UK English.
  • Holler, J., Shovelton, H., & Beattie, G. (2009). Do iconic gestures really contribute to the semantic information communicated in face-to-face interaction? Journal of Nonverbal Behavior, 33, 73-88.
  • Holler, J., & Wilkin, K. (2009). Communicating common ground: how mutually shared knowledge influences the representation of semantic information in speech and gesture in a narrative task. Language and Cognitive Processes, 24, 267-289.
  • Holler, J., Kendrick, K. H., & Levinson, S. C. (2018). Processing language in face-to-face conversation: Questions with gestures get faster responses. Psychonomic Bulletin & Review, 25(5), 1900-1908. doi:10.3758/s13423-017-1363-z.

    Abstract

    The home of human language use is face-to-face interaction, a context in which communicative exchanges are characterised not only by bodily signals accompanying what is being said but also by a pattern of alternating turns at talk. This transition between turns is astonishingly fast—typically a mere 200 ms elapses between a current and a next speaker’s contribution—meaning that comprehending, producing, and coordinating conversational contributions in time is a significant challenge. This begs the question of whether the additional information carried by bodily signals facilitates or hinders language processing in this time-pressured environment. We present analyses of multimodal conversations revealing that bodily signals appear to profoundly influence language processing in interaction: Questions accompanied by gestures lead to shorter turn transition times—that is, to faster responses—than questions without gestures, and responses come earlier when gestures end before compared to after the question turn has ended. These findings hold even after taking into account prosodic patterns and other visual signals, such as gaze. The empirical findings presented here provide a first glimpse of the role of the body in the psycholinguistic processes underpinning human communication.
  • Hömke, P., Holler, J., & Levinson, S. C. (2018). Eye blinks are perceived as communicative signals in human face-to-face interaction. PLoS One, 13(12): e0208030. doi:10.1371/journal.pone.0208030.

    Abstract

    In face-to-face communication, recurring intervals of mutual gaze allow listeners to provide speakers with visual feedback (e.g. nodding). Here, we investigate the potential feedback function of one of the subtlest of human movements—eye blinking. While blinking tends to be subliminal, the significance of mutual gaze in human interaction raises the question whether the interruption of mutual gaze through blinking may also be communicative. To answer this question, we developed a novel, virtual reality-based experimental paradigm, which enabled us to selectively manipulate blinking in a virtual listener, creating small differences in blink duration resulting in ‘short’ (208 ms) and ‘long’ (607 ms) blinks. We found that speakers unconsciously took into account the subtle differences in listeners’ blink duration, producing substantially shorter answers in response to long listener blinks. Our findings suggest that, in addition to physiological, perceptual and cognitive functions, listener blinks are also perceived as communicative signals, directly influencing speakers’ communicative behavior in face-to-face communication. More generally, these findings may be interpreted as shedding new light on the evolutionary origins of mental-state signaling, which is a crucial ingredient for achieving mutual understanding in everyday social interaction.

  • Howarth, H., Sommer, V., & Jordan, F. (2010). Visual depictions of female genitalia differ depending on source. Medical Humanities, 36, 75-79. doi:10.1136/jmh.2009.003707.

    Abstract

    Very little research has attempted to describe normal human variation in female genitalia, and no studies have compared the visual images that women might use in constructing their ideas of average and acceptable genital morphology to see if there are any systematic differences. Our objective was to determine if visual depictions of the vulva differed according to their source so as to alert medical professionals and their patients to how these depictions might capture variation and thus influence perceptions of "normality". We conducted a comparative analysis by measuring (a) published visual materials from human anatomy textbooks in a university library, (b) feminist publications (both print and online) depicting vulval morphology, and (c) online pornography, focusing on the most visited and freely accessible sites in the UK. Post-hoc tests showed that labial protuberance was significantly less (p < .001, equivalent to approximately 7 mm) in images from online pornography compared to feminist publications. All five measures taken of vulval features were significantly correlated (p < .001) in the online pornography sample, indicating a less varied range of differences in organ proportions than the other sources where not all measures were correlated. Women and health professionals should be aware that specific sources of imagery may depict different types of genital morphology and may not accurately reflect true variation in the population, and consultations for genital surgeries should include discussion about the actual and perceived range of variation in female genital morphology.
  • Howe, L. J., Lee, M. K., Sharp, G. C., Smith, G. D. W., St Pourcain, B., Shaffer, J. R., Ludwig, K. U., Mangold, E., Marazita, M. L., Feingold, E., Zhurov, A., Stergiakouli, E., Sandy, J., Richmond, S., Weinberg, S. M., Hemani, G., & Lewis, S. J. (2018). Investigating the shared genetics of non-syndromic cleft lip/palate and facial morphology. PLoS Genetics, 14(8): e1007501. doi:10.1371/journal.pgen.1007501.

    Abstract

    There is increasing evidence that genetic risk variants for non-syndromic cleft lip/palate (nsCL/P) are also associated with normal-range variation in facial morphology. However, previous analyses are mostly limited to candidate SNPs and findings have not been consistently replicated. Here, we used polygenic risk scores (PRS) to test for genetic overlap between nsCL/P and seven biologically relevant facial phenotypes. Where evidence was found of genetic overlap, we used bidirectional Mendelian randomization (MR) to test the hypothesis that genetic liability to nsCL/P is causally related to implicated facial phenotypes. Across 5,804 individuals of European ancestry from two studies, we found strong evidence, using PRS, of genetic overlap between nsCL/P and philtrum width; a 1 S.D. increase in nsCL/P PRS was associated with a 0.10 mm decrease in philtrum width (95% C.I. 0.054, 0.146; P = 2 x 10^-5). Follow-up MR analyses supported a causal relationship; genetic variants for nsCL/P homogeneously cause decreased philtrum width. In addition to the primary analysis, we also identified two novel risk loci for philtrum width at 5q22.2 and 7p15.2 in our Genome-wide Association Study (GWAS) of 6,136 individuals. Our results support a liability threshold model of inheritance for nsCL/P, related to abnormalities in development of the philtrum.
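    As background to the polygenic risk scores mentioned in the abstract above, the Python sketch below shows the standard weighted-sum construction of a PRS from per-variant effect sizes and allele dosages; all numbers are simulated placeholders and nothing here is taken from the study's own pipeline.

      import numpy as np

      rng = np.random.default_rng(42)
      n_individuals, n_snps = 1000, 50
      # Allele dosages (0, 1, or 2 copies of the risk allele) and per-SNP effect
      # sizes; both are simulated for illustration only.
      dosages = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)
      betas = rng.normal(0.0, 0.05, size=n_snps)

      prs = dosages @ betas                       # weighted sum per individual
      prs_z = (prs - prs.mean()) / prs.std()      # standardise: 1 unit = 1 S.D.
      print(prs_z[:5])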
  • Hoymann, G. (2010). Questions and responses in ǂĀkhoe Hai||om. Journal of Pragmatics, 42(10), 2726-2740. doi:10.1016/j.pragma.2010.04.008.

    Abstract

    This paper examines ǂĀkhoe Hai||om, a Khoe language of the Khoisan family spoken in Northern Namibia. I document the way questions are posed in natural conversation, the actions the questions are used for and the manner in which they are responded to. I show that in this language speakers rely most heavily on content questions. I also find that speakers of ǂĀkhoe Hai||om address fewer questions to a specific individual than would be expected from prior research on Indo-European languages. Finally, I discuss some possible explanations for these findings.
  • Huettig, F., Kolinsky, R., & Lachmann, T. (2018). The culturally co-opted brain: How literacy affects the human mind. Language, Cognition and Neuroscience, 33(3), 275-277. doi:10.1080/23273798.2018.1425803.

    Abstract

    Introduction to the special issue 'The Effects of Literacy on Cognition and Brain Functioning'
  • Huettig, F., Kolinsky, R., & Lachmann, T. (Eds.). (2018). The effects of literacy on cognition and brain functioning [Special Issue]. Language, Cognition and Neuroscience, 33(3).
  • Huettig, F., & Hartsuiker, R. J. (2008). When you name the pizza you look at the coin and the bread: Eye movements reveal semantic activation during word production. Memory & Cognition, 36(2), 341-360. doi:10.3758/MC.36.2.341.

    Abstract

    Two eyetracking experiments tested for activation of category coordinate and perceptually related concepts when speakers prepare the name of an object. Speakers saw four visual objects in a 2 × 2 array and identified and named a target picture on the basis of either category (e.g., "What is the name of the musical instrument?") or visual-form (e.g., "What is the name of the circular object?") instructions. There were more fixations on visual-form competitors and category coordinate competitors than on unrelated objects during name preparation, but the increased overt attention did not affect naming latencies. The data demonstrate that eye movements are a sensitive measure of the overlap between the conceptual (including visual-form) information that is accessed in preparation for word production and the conceptual knowledge associated with visual objects. Furthermore, these results suggest that semantic activation of competitor concepts does not necessarily affect lexical selection, contrary to the predictions of lexical-selection-by-competition accounts (e.g., Levelt, Roelofs, & Meyer, 1999).
  • Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2010). Do language-specific categories shape conceptual processing? Mandarin classifier distinctions influence eye gaze behavior, but only during linguistic processing. Journal of Cognition and Culture, 10(1/2), 39-58. doi:10.1163/156853710X497167.

    Abstract

    In two eye-tracking studies we investigated the influence of Mandarin numeral classifiers - a grammatical category in the language - on online overt attention. Mandarin speakers were presented with simple sentences through headphones while their eye-movements to objects presented on a computer screen were monitored. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence Mandarin speakers' general conceptual processing, then on hearing the target noun they should look at objects that are members of the same classifier category - even when the classifier is not explicitly present (cf. Huettig & Altmann, 2005). The data show that when participants heard a classifier (e.g., ba3, Experiment 1) they shifted overt attention significantly more to classifier-match objects (e.g., chair) than to distractor objects. But when the classifier was not explicitly presented in speech, overt attention to classifier-match objects and distractor objects did not differ (Experiment 2). This suggests that although classifier distinctions do influence eye-gaze behavior, they do so only during linguistic processing of that distinction and not in moment-to-moment general conceptual processing.
  • Huettig, F., Lachmann, T., Reis, A., & Petersson, K. M. (2018). Distinguishing cause from effect - Many deficits associated with developmental dyslexia may be a consequence of reduced and suboptimal reading experience. Language, Cognition and Neuroscience, 33(3), 333-350. doi:10.1080/23273798.2017.1348528.

    Abstract

    The cause of developmental dyslexia is still unknown despite decades of intense research. Many causal explanations have been proposed, based on the range of impairments displayed by affected individuals. Here we draw attention to the fact that many of these impairments are also shown by illiterate individuals who have not received any or very little reading instruction. We suggest that this fact may not be coincidental and that the performance differences of both illiterates and individuals with dyslexia compared to literate controls are, to a substantial extent, secondary consequences of either reduced or suboptimal reading experience or a combination of both. The search for the primary causes of reading impairments will make progress if the consequences of quantitative and qualitative differences in reading experience are better taken into account and not mistaken for the causes of reading disorders. We close by providing four recommendations for future research.
  • Huettig, F., & Hartsuiker, R. J. (2010). Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Language and Cognitive Processes, 3, 347-374. doi:10.1080/01690960903046926.

    Abstract

    Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one's own inner speech has similar behavioral consequences as listening to someone else's speech. Our experiment therefore registered eye-movements while speakers named objects accompanied by phonologically related or unrelated written words. The data showed that listening to one's own speech drives eye-movements to phonologically related words, just as listening to someone else's speech does in perception experiments. The time-course of these eye-movements was very similar to that in other-perception (starting 300 ms post-articulation), which demonstrates that these eye-movements were driven by the perception of overt speech, not inner speech. We conclude that external, but not internal monitoring, is based on speech perception.
  • Huisman, J. L. A., & Majid, A. (2018). Psycholinguistic variables matter in odor naming. Memory & Cognition, 46, 577-588. doi:10.3758/s13421-017-0785-1.

    Abstract

    People from Western societies generally find it difficult to name odors. In trying to explain this, the olfactory literature has proposed several theories that focus heavily on properties of the odor itself but rarely discuss properties of the label used to describe it. However, recent studies show speakers of languages with dedicated smell lexicons can name odors with relative ease. Has the role of the lexicon been overlooked in the olfactory literature? Word production studies show properties of the label, such as word frequency and semantic context, influence naming; but this field of research focuses heavily on the visual domain. The current study combines methods from both fields to investigate word production for olfaction in two experiments. In the first experiment, participants named odors whose veridical labels were either high-frequency or low-frequency words in Dutch, and we found that odors with high-frequency labels were named correctly more often. In the second experiment, edibility was used for manipulating semantic context in search of a semantic interference effect, presenting the odors in blocks of edible and inedible odor source objects to half of the participants. While no evidence was found for a semantic interference effect, an effect of word frequency was again present. Our results demonstrate psycholinguistic variables—such as word frequency—are relevant for olfactory naming, and may, in part, explain why it is difficult to name odors in certain languages. Olfactory researchers cannot afford to ignore properties of an odor’s label.
  • Hulten, A., Vihla, M., Laine, M., & Salmelin, R. (2009). Accessing newly learned names and meanings in the native language. Human Brain Mapping, 30, 979-989. doi:10.1002/hbm.20561.

    Abstract

    Ten healthy adults encountered pictures of unfamiliar archaic tools and successfully learned either their name, verbal definition of their usage, or both. Neural representation of the newly acquired information was probed with magnetoencephalography in an overt picture-naming task before and after learning, and in two categorization tasks after learning. Within 400 ms, activation proceeded from occipital through parietal to left temporal cortex, inferior frontal cortex (naming) and right temporal cortex (categorization). Comparison of naming of newly learned versus familiar pictures indicated that acquisition and maintenance of word forms are supported by the same neural network. Explicit access to newly learned phonology when such information was known strongly enhanced left temporal activation. By contrast, access to newly learned semantics had no comparable, direct neural effects. Both the behavioral learning pattern and neurophysiological results point to fundamentally different implementation of and access to phonological versus semantic features in processing pictured objects.
  • Hulten, A., Laaksonen, H., Vihla, M., Laine, M., & Salmelin, R. (2010). Modulation of brain activity after learning predicts long-term memory for words. Journal of Neuroscience, 30(45), 15160-15164. doi:10.1523/JNEUROSCI.1278-10.2010.

    Abstract

    The acquisition and maintenance of new language information, such as picking up new words, is a critical human ability that is needed throughout the life span. Most likely you learned the word “blog” quite recently as an adult, whereas the word “kipe,” which in the 1970s denoted stealing, now seems unfamiliar. Brain mechanisms underlying the long-term maintenance of new words have remained unknown, albeit they could provide important clues to the considerable individual differences in the ability to remember words. After successful training of a set of novel object names we tracked, over a period of 10 months, the maintenance of this new vocabulary in 10 human participants by repeated behavioral tests and magnetoencephalography measurements of overt picture naming. When naming-related activation in the left frontal and temporal cortex was enhanced 1 week after training, compared with the level at the end of training, the individual retained a good command of the new vocabulary at 10 months; vice versa, individuals with reduced activation at 1 week posttraining were less successful in recalling the names at 10 months. This finding suggests an individual neural marker for memory, in the context of language. Learning is not over when the acquisition phase has been successfully completed: neural events during the access to recently established word representations appear to be important for the long-term outcome of learning.
  • Hunley, K., Dunn, M., Lindström, E., Reesink, G., Terrill, A., Healy, M. E., Koki, G., Friedlaender, F. R., & Friedlaender, J. S. (2008). Genetic and linguistic coevolution in Northern Island Melanesia. PLoS Genetics, 4(10): e1000239. doi:10.1371/journal.pgen.1000239.

    Abstract

    Recent studies have detailed a remarkable degree of genetic and linguistic diversity in Northern Island Melanesia. Here we utilize that diversity to examine two models of genetic and linguistic coevolution. The first model predicts that genetic and linguistic correspondences formed following population splits and isolation at the time of early range expansions into the region. The second is analogous to the genetic model of isolation by distance, and it predicts that genetic and linguistic correspondences formed through continuing genetic and linguistic exchange between neighboring populations. We tested the predictions of the two models by comparing observed and simulated patterns of genetic variation, genetic and linguistic trees, and matrices of genetic, linguistic, and geographic distances. The data consist of 751 autosomal microsatellites and 108 structural linguistic features collected from 33 Northern Island Melanesian populations. The results of the tests indicate that linguistic and genetic exchange have erased any evidence of a splitting and isolation process that might have occurred early in the settlement history of the region. The correlation patterns are also inconsistent with the predictions of the isolation by distance coevolutionary process in the larger Northern Island Melanesian region, but there is strong evidence for the process in the rugged interior of the largest island in the region (New Britain). There we found some of the strongest recorded correlations between genetic, linguistic, and geographic distances. We also found that, throughout the region, linguistic features have generally been less likely to diffuse across population boundaries than genes. The results from our study, based on exceptionally fine-grained data, show that local genetic and linguistic exchange are likely to obscure evidence of the early history of a region, and that language barriers do not particularly hinder genetic exchange. In contrast, global patterns may emphasize more ancient demographic events, including population splits associated with the early colonization of major world regions.
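    One standard way to relate two distance matrices of the kind compared in the abstract above is a Mantel-style permutation test. The Python sketch below uses random toy matrices, not the study's data, purely to show the mechanics of such a matrix correlation.

      import numpy as np

      def mantel(d1, d2, n_perm=999, seed=0):
          # Correlate the upper triangles of two square distance matrices and
          # assess the correlation by permuting rows/columns of the second.
          rng = np.random.default_rng(seed)
          iu = np.triu_indices_from(d1, k=1)
          r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
          exceed = 0
          for _ in range(n_perm):
              p = rng.permutation(d1.shape[0])
              exceed += np.corrcoef(d1[iu], d2[p][:, p][iu])[0, 1] >= r_obs
          return r_obs, (exceed + 1) / (n_perm + 1)

      # Toy symmetric "distance" matrices standing in for, say, linguistic and
      # genetic distances between 10 populations.
      rng = np.random.default_rng(1)
      a = rng.random((10, 10)); a = (a + a.T) / 2; np.fill_diagonal(a, 0)
      b = a + rng.normal(0, 0.1, (10, 10)); b = (b + b.T) / 2; np.fill_diagonal(b, 0)
      print(mantel(a, b))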
  • Inacio, F., Faisca, L., Forkstam, C., Araujo, S., Bramao, I., Reis, A., & Petersson, K. M. (2018). Implicit sequence learning is preserved in dyslexic children. Annals of Dyslexia, 68(1), 1-14. doi:10.1007/s11881-018-0158-x.

    Abstract

    This study investigates the implicit sequence learning abilities of dyslexic children using an artificial grammar learning task with an extended exposure period. Twenty children with developmental dyslexia participated in the study and were matched with two control groups—one matched for age and the other for reading skills. During 3 days, all participants performed an acquisition task, where they were exposed to sequences of colored geometrical forms with an underlying grammatical structure. On the last day, after the acquisition task, participants were tested in a grammaticality classification task. Implicit sequence learning was present in dyslexic children, as well as in both control groups, and no differences between groups were observed. These results suggest that implicit learning deficits per se cannot explain the characteristic reading difficulties of the dyslexics.
  • Indefrey, P., & Gullberg, M. (Eds.). (2008). Time to speak: Cognitive and neural prerequisites for time in language [Special Issue]. Language Learning, 58(suppl. 1).

    Abstract

    Time is a fundamental aspect of human cognition and action. All languages have developed rich means to express various facets of time, such as bare time spans, their position on the time line, or their duration. The articles in this volume give an overview of what we know about the neural and cognitive representations of time that speakers can draw on in language. Starting with an overview of the main devices used to encode time in natural language, such as lexical elements, tense and aspect, the research presented in this volume addresses the relationship between temporal language, culture, and thought, the relationship between verb aspect and mental simulations of events, the development of temporal concepts, time perception, the storage and retrieval of temporal information in autobiographical memory, and neural correlates of tense processing and sequence planning. The psychological and neurobiological findings presented here will provide important insights to inform and extend current studies of time in language and in language acquisition.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken [The neural architecture of language: Which brain areas are involved in speaking]. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., & Gullberg, M. (2010). Foreword. Language Learning, 60(S2), v. doi:10.1111/j.1467-9922.2010.00596.x.

    Abstract

    The articles in this volume are the result of an invited conference entitled "The Earliest Stages of Language Learning" held at the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, in October 2009.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. Language Learning, 60(S2), 1-4. doi:10.1111/j.1467-9922.2010.00597.x.
  • Ingason, A., Giegling, I., Cichon, S., Hansen, T., Rasmussen, H. B., Nielsen, J., Jurgens, G., Muglia, P., Hartmann, A. M., Strengman, E., Vasilescu, C., Muhleisen, T. W., Djurovic, S., Melle, I., Lerer, B., Möller, H.-J., Francks, C., Pietilainen, O. P. H., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Walshe, M., Vassos, E., Di Forti, M., Murray, R., Bonetto, C., Tosato, S., Cantor, R. M., Rietschel, M., Craddock, N., Owen, M. J., Andreassen, O. A., Nothen, M. M., Peltonen, L., St. Clair, D., Ophoff, R. A., O’Donovan, M. C., Collier, D. A., Werge, T., & Rujescu, D. (2010). A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia. Human Molecular Genetics, 19(7), 1379-1386. doi:10.1093/hmg/ddq009.

    Abstract

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample. Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both the replication study of new samples and the meta-analysis show evidence for significant overrepresentation of all tested alleles in patients compared with controls (meta-analysis; P = 8.2 x 10^-5 to 1.7 x 10^-3, common OR = 1.09-1.11). The region contains two genes, AHI1 and C6orf217, and both genes, as well as the neighbouring phosphodiesterase 7B (PDE7B), may be considered candidates for involvement in the genetic aetiology of schizophrenia.
  • Isaac, A., Schlobach, S., Matthezing, H., & Zinn, C. (2008). Integrated access to cultural heritage resources through representation and alignment of controlled vocabularies. Library Review, 57(3), 187-199.
  • Isaac, A., Wang, S., Van der Meij, L., Schlobach, S., Zinn, C., & Matthezing, H. (2009). Evaluating thesaurus alignments for semantic interoperability in the library domain. IEEE Intelligent Systems, 24(2), 76-86.

    Abstract

    Thesaurus alignments play an important role in realising efficient access to heterogeneous Cultural Heritage data. Current technology, however, provides only limited value for such access as it fails to bridge the gap between theoretical study and user needs that stem from practical application requirements. In this paper, we explore common real-world problems of a library, and identify solutions that would greatly benefit from a more application embedded study, development, and evaluation of matching technology.
  • Jackson, C., & Roberts, L. (2010). Animacy affects the processing of subject–object ambiguities in the second language: Evidence from self-paced reading with German second language learners of Dutch. Applied Psycholinguistics, 31(4), 671-691. doi:10.1017/S0142716410000196.

    Abstract

    The results of a self-paced reading study with German second language (L2) learners of Dutch showed that noun animacy affected the learners' on-line commitments when comprehending relative clauses in their L2. Earlier research has found that German L2 learners of Dutch do not show an on-line preference for subject–object word order in temporarily ambiguous relative clauses when no disambiguating material is available prior to the auxiliary verb. We investigated whether manipulating the animacy of the ambiguous noun phrases would push the learners to make an on-line commitment to either a subject- or object-first analysis. Results showed they performed like Dutch native speakers in that their reading times reflected an interaction between topichood and animacy in the on-line assignment of grammatical roles.
  • Jackson, C. N., Mormer, E., & Brehm, L. (2018). The production of subject-verb agreement among Swedish and Chinese second language speakers of English. Studies in Second Language Acquisition, 40(4), 907-921. doi:10.1017/S0272263118000025.

    Abstract

    This study uses a sentence completion task with Swedish and Chinese L2 English speakers to investigate how L1 morphosyntax and L2 proficiency influence L2 English subject-verb agreement production. Chinese has limited nominal and verbal number morphology, while Swedish has robust noun phrase (NP) morphology but does not number-mark verbs. Results showed that like L1 English speakers, both L2 groups used grammatical and conceptual number to produce subject-verb agreement. However, only L1 Chinese speakers—and less-proficient speakers in both L2 groups—were similarly influenced by grammatical and conceptual number when producing the subject NP. These findings demonstrate how L2 proficiency, perhaps combined with cross-linguistic differences, influence L2 production and underscore that encoding of noun and verb number are not independent.
  • Jacobs, A. M., & Willems, R. M. (2018). The fictive brain: Neurocognitive correlates of engagement in literature. Review of General Psychology, 22(2), 147-160. doi:10.1037/gpr0000106.

    Abstract

    Fiction is vital to our being. Many people enjoy engaging with fiction every day. Here we focus on literary reading as 1 instance of fiction consumption from a cognitive neuroscience perspective. The brain processes which play a role in the mental construction of fiction worlds and the related engagement with fictional characters remain largely unknown. The authors discuss the neurocognitive poetics model (Jacobs, 2015a) of literary reading specifying the likely neuronal correlates of several key processes in literary reading, namely inference and situation model building, immersion, mental simulation and imagery, figurative language and style, and the issue of distinguishing fact from fiction. An overview of recent work on these key processes is followed by a discussion of methodological challenges in studying the brain bases of fiction processing.
  • Jadoul, Y., Thompson, B., & De Boer, B. (2018). Introducing Parselmouth: A Python interface to Praat. Journal of Phonetics, 71, 1-15. doi:10.1016/j.wocn.2018.07.001.

    Abstract

    This paper introduces Parselmouth, an open-source Python library that facilitates access to core functionality of Praat in Python, in an efficient and programmer-friendly way. We introduce and motivate the package, and present simple usage examples. Specifically, we focus on applications in data visualisation, file manipulation, audio manipulation, statistical analysis, and integration of Parselmouth into a Python-based experimental design for automated, in-the-loop manipulation of acoustic data. Parselmouth is available at https://github.com/YannickJadoul/Parselmouth.
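    As a minimal illustration of the kind of access Parselmouth provides (this is not code from the paper; the file name and the reliance on Praat's default pitch settings are assumptions), the following Python sketch loads a sound file and extracts a pitch contour:

      import parselmouth

      # Load a local recording; "recording.wav" is a placeholder file name.
      snd = parselmouth.Sound("recording.wav")
      # Run Praat's default pitch analysis and pull out the contour.
      pitch = snd.to_pitch()
      times = pitch.xs()                          # analysis frame times in seconds
      f0 = pitch.selected_array['frequency']      # F0 per frame; 0 where unvoiced
      for t, f in zip(times, f0):
          if f > 0:
              print(f"{t:.3f} s\t{f:.1f} Hz")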
  • Jaeger, T. F., & Norcliffe, E. (2009). The cross-linguistic study of sentence production. Language and Linguistics Compass, 3, 866-887. doi:10.1111/j.1749-818x.2009.00147.x.

    Abstract

    The mechanisms underlying language production are often assumed to be universal, and hence not contingent on a speaker’s language. This assumption is problematic for at least two reasons. Given the typological diversity of the world’s languages, only a small subset of languages has actually been studied psycholinguistically. And, in some cases, these investigations have returned results that at least superficially raise doubt about the assumption of universal production mechanisms. The goal of this paper is to illustrate the need for more psycholinguistic work on a typologically more diverse set of languages. We summarize cross-linguistic work on sentence production (specifically: grammatical encoding), focusing on examples where such work has improved our theoretical understanding beyond what studies on English alone could have achieved. But cross-linguistic research has much to offer beyond the testing of existing hypotheses: it can guide the development of theories by revealing the full extent of the human ability to produce language structures. We discuss the potential for interdisciplinary collaborations, and close with a remark on the impact of language endangerment on psycholinguistic research on understudied languages.
  • Janse, E., De Bree, E., & Brouwer, S. (2010). Decreased sensitivity to phonemic mismatch in spoken word processing in adult developmental dyslexia. Journal of Psycholinguistic Research, 39(6), 523-539. doi:10.1007/s10936-010-9150-2.

    Abstract

    Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as procodile for crocodile) for the atypical population of dyslexic adults to see to what extent mismatching phonemic information affects lexical activation in the face of overwhelming support for one specific lexical candidate. Results of an auditory lexical decision task showed that sensitivity to phonemic mismatch was less in the dyslexic population, compared to the respective control group. However, the dyslexic participants were outperformed by their controls only for word-initial mismatches. It is argued that a subtle speech decoding deficit affects lexical activation levels and makes spoken word processing less robust against distortion.
  • Janse, E. (2009). Neighbourhood density effects in auditory nonword processing in aphasic listeners. Clinical Linguistics and Phonetics, 23(3), 196-207. doi:10.1080/02699200802394989.

    Abstract

    This study investigates neighbourhood density effects on lexical decision performance (both accuracy and response times) of aphasic patients. Given earlier results on lexical activation and deactivation in Broca's and Wernicke's aphasia, the prediction was that smaller neighbourhood density effects would be found for Broca's aphasic patients, compared to age-matched non-brain-damaged control participants, whereas enlarged density effects were expected for Wernicke's aphasic patients. The results showed density effects for all three groups of listeners, and overall differences in performance between groups, but no significant interaction between neighbourhood density and listener group. Several factors are discussed to account for the present results.
  • Janse, E. (2009). Processing of fast speech by elderly listeners. Journal of the Acoustical Society of America, 125(4), 2361-2373. doi:10.1121/1.3082117.

    Abstract

    This study investigates the relative contributions of auditory and cognitive factors to the common finding that an increase in speech rate affects elderly listeners more than young listeners. Since a direct relation between non-auditory factors, such as age-related cognitive slowing, and fast speech performance has been difficult to demonstrate, the present study took an on-line, rather than off-line, approach and focused on processing time. Elderly and young listeners were presented with speech at two rates of time compression and were asked to detect pre-assigned target words as quickly as possible. A number of auditory and cognitive measures were entered in a statistical model as predictors of elderly participants’ fast speech performance: hearing acuity, an information processing rate measure, and two measures of reading speed. The results showed that hearing loss played a primary role in explaining elderly listeners’ increased difficulty with fast speech. However, non-auditory factors such as reading speed and the extent to which participants were affected by increased rate of presentation in a visual analog of the listening experiment also predicted fast speech performance differences among the elderly participants. These on-line results confirm that slowed information processing is indeed part of elderly listeners’ problem keeping up with fast language.
  • Janse, E., & Ernestus, M. (2009). Recognition of reduced speech and use of phonetic context in listeners with age-related hearing impairment [Abstract]. Journal of the Acoustical Society of America, 125(4), 2535.
  • Janse, E. (2010). Spoken word processing and the effect of phonemic mismatch in aphasia. Aphasiology, 24(1), 3-27. doi:10.1080/02687030802339997.

    Abstract

    Background: There is evidence that, unlike in typical populations, initial lexical activation upon hearing spoken words in aphasic patients is not a direct reflection of the goodness of fit between the presented stimulus and the intended target. Earlier studies have mainly used short monosyllabic target words. Short words are relatively difficult to recognise because they are not highly redundant: changing one phoneme will often result in a (similar-sounding) different word. Aims: The present study aimed to investigate sensitivity of the lexical recognition system in aphasia. The focus was on longer words that contain more redundancy, to investigate whether aphasic adults might be impaired in deactivation of strongly activated lexical candidates. This was done by studying lexical activation upon presentation of spoken polysyllabic pseudowords (such as procodile) to see to what extent mismatching phonemic information leads to deactivation in the face of overwhelming support for one specific lexical candidate. Methods & Procedures: Speeded auditory lexical decision was used to investigate response time and accuracy to pseudowords with a word-initial or word-final phonemic mismatch in 21 aphasic patients and in an age-matched control group. Outcomes & Results: Results of an auditory lexical decision task showed that aphasic participants were less sensitive to phonemic mismatch if there was strong evidence for one particular lexical candidate, compared to the control group. Classifications of patients as Broca's vs Wernicke's or as fluent vs non-fluent did not reveal differences in sensitivity to mismatch between aphasia types. There was no reliable relationship between measures of auditory verbal short-term memory and lexical decision performance. Conclusions: It is argued that the aphasic results can best be viewed as lexical “overactivation” and that a verbal short-term memory account is less appropriate.
  • Janse, E. (2008). Spoken-word processing in aphasia: Effects of item overlap and item repetition. Brain and Language, 105, 185-198. doi:10.1016/j.bandl.2007.10.002.

    Abstract

    Two studies were carried out to investigate the effects of presentation of primes showing partial (word-initial) or full overlap on processing of spoken target words. The first study investigated whether time compression would interfere with lexical processing so as to elicit aphasic-like performance in non-brain-damaged subjects. The second study was designed to compare effects of item overlap and item repetition in aphasic patients of different diagnostic types. Time compression did not interfere with lexical deactivation for the non-brain-damaged subjects. Furthermore, all aphasic patients showed immediate inhibition of co-activated candidates. These combined results show that deactivation is a fast process. Repetition effects, however, seem to arise only at the longer term in aphasic patients. Importantly, poor performance on diagnostic verbal STM tasks was shown to be related to lexical decision performance in both overlap and repetition conditions, which suggests a common underlying deficit.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2018). Modelling human hard palate shape with Bézier curves. PLoS One, 13(2): e0191557. doi:10.1371/journal.pone.0191557.

    Abstract

    People vary at most levels, from the molecular to the cognitive, and the shape of the hard palate (the bony roof of the mouth) is no exception. The patterns of variation in the hard palate are important for the forensic sciences and (palaeo)anthropology, and might also play a role in speech production, both in pathological cases and normal variation. Here we describe a method based on Bézier curves, whose main aim is to generate possible shapes of the hard palate in humans for use in computer simulations of speech production and language evolution. Moreover, our method can also capture existing patterns of variation using few and easy-to-interpret parameters, and fits actual data obtained from MRI traces very well with as little as two or three free parameters. When compared to the widely-used Principal Component Analysis (PCA), our method fits actual data slightly worse for the same number of degrees of freedom. However, it is much better at generating new shapes without requiring a calibration sample, its parameters have clearer interpretations, and their ranges are grounded in geometrical considerations.
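    The Python sketch below shows, with invented control-point values, how a cubic Bézier curve can be sampled from four 2D control points; it illustrates the general technique only and does not reproduce the authors' actual parameterisation of the palate outline.

      import numpy as np

      def cubic_bezier(p0, p1, p2, p3, n=100):
          # Sample n points on the cubic Bezier curve defined by p0..p3.
          t = np.linspace(0.0, 1.0, n)[:, None]
          return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                  + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

      # Hypothetical mid-sagittal outline: two anchor points at the front and back
      # of the palate plus two interior control points that pull the curve into a dome.
      p0, p1, p2, p3 = (np.array(p) for p in ([0.0, 0.0], [0.2, 1.0], [0.8, 1.0], [1.0, 0.0]))
      outline = cubic_bezier(p0, p1, p2, p3)
      print(outline.shape)  # (100, 2)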
  • Janzen, G., Jansen, C., & Van Turennout, M. (2008). Memory consolidation of landmarks in good navigators. Hippocampus, 18, 40-47.

    Abstract

    Landmarks play an important role in successful navigation. To successfully find your way around an environment, navigationally relevant information needs to be stored and become available at later moments in time. Evidence from functional magnetic resonance imaging (fMRI) studies shows that the human parahippocampal gyrus encodes the navigational relevance of landmarks. In the present event-related fMRI experiment, we investigated memory consolidation of navigationally relevant landmarks in the medial temporal lobe after route learning. Sixteen right-handed volunteers viewed two film sequences through a virtual museum with objects placed at locations relevant (decision points) or irrelevant (nondecision points) for navigation. To investigate consolidation effects, one film sequence was seen in the evening before scanning, the other one was seen the following morning, directly before scanning. Event-related fMRI data were acquired during an object recognition task. Participants decided whether they had seen the objects in the previously shown films. After scanning, participants answered standardized questions about their navigational skills, and were divided into groups of good and bad navigators, based on their scores. An effect of memory consolidation was obtained in the hippocampus: Objects that were seen the evening before scanning (remote objects) elicited more activity than objects seen directly before scanning (recent objects). This increase in activity in bilateral hippocampus for remote objects was observed in good navigators only. In addition, a spatial-specific effect of memory consolidation for navigationally relevant objects was observed in the parahippocampal gyrus. Remote decision point objects induced increased activity as compared with recent decision point objects, again in good navigators only. The results provide initial evidence for a connection between memory consolidation and navigational ability that can provide a basis for successful navigation.
  • Järvikivi, J., Pyykkönen, P., & Niemi, J. (2009). Exploiting degrees of inflectional ambiguity: Stem form and the time course of morphological processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35(1), 221-237. doi:10.1037/a0014355.

    Abstract

    The authors compared sublexical and supralexical approaches to morphological processing with unambiguous and ambiguous inflected words and words with ambiguous stems in 3 masked and unmasked priming experiments in Finnish. Experiment 1 showed equal facilitation for all prime types with a short 60-ms stimulus onset asynchrony (SOA) but significant facilitation for unambiguous words only with a long 300-ms SOA. Experiment 2 showed that all potential readings of ambiguous inflections were activated under a short SOA. Whereas the prime-target form overlap did not affect the results under a short SOA, it significantly modulated the results with a long SOA. Experiment 3 confirmed that the results from masked priming were modulated by the morphological structure of the words but not by the prime-target form overlap alone. The results support approaches in which early prelexical morphological processing is driven by morph-based segmentation and form is used to cue selection between 2 candidates only during later processing.

  • Järvikivi, J., Vainio, M., & Aalto, D. (2010). Real-time correlates of phonological quantity reveal unity of tonal and non-tonal languages. PLoS One, 5(9), e12603. doi:10.1371/journal.pone.0012603.

    Abstract

    Discrete phonological phenomena form our conscious experience of language: continuous changes in pitch appear as distinct tones to the speakers of tone languages, whereas the speakers of quantity languages experience duration categorically. The categorical nature of our linguistic experience is directly reflected in the traditionally clear-cut linguistic classification of languages into tonal or non-tonal. However, some evidence suggests that duration and pitch are fundamentally interconnected and co-vary in signaling word meaning in non-tonal languages as well. We show that pitch information affects real-time language processing in a (non-tonal) quantity language. The results suggest that there is no unidirectional causal link from a genetically-based perceptual sensitivity towards pitch information to the appearance of a tone language. They further suggest that the contrastive categories tone and quantity may be based on simultaneously co-varying properties of the speech signal and the processing system, even though the conscious experience of the speakers may highlight only one discrete variable at a time.
  • Jesse, A., & Massaro, D. W. (2010). Seeing a singer helps comprehension of the song's lyrics. Psychonomic Bulletin & Review, 17, 323-328.

    Abstract

    When listening to speech, we often benefit when also seeing the speaker talk. If this benefit is not domain-specific for speech, then the recognition of sung lyrics should likewise benefit from seeing the singer. Nevertheless, previous research failed to obtain a substantial improvement in that domain. Our study shows that this failure was not due to inherent differences between singing and speaking but rather to less informative visual presentations. By presenting a professional singer, we found a substantial audiovisual benefit of about 35% improvement for lyrics recognition. This benefit was, furthermore, robust across participants, phrases, and repetition of the test materials. Our results provide the first evidence that lyrics recognition, just like speech and music perception, is a multimodal process.
  • Jesse, A., & Janse, E. (2009). Seeing a speaker's face helps stream segregation for younger and elderly adults [Abstract]. Journal of the Acoustical Society of America, 125(4), 2361.
  • Jesse, A., & Massaro, D. W. (2010). The temporal distribution of information in audiovisual spoken-word identification. Attention, Perception & Psychophysics, 72(1), 209-225. doi:10.3758/APP.72.1.209.

    Abstract

    In the present study, we examined the distribution and processing of information over time in auditory and visual speech as it is used in unimodal and bimodal word recognition. English consonant-vowel-consonant words representing all possible initial consonants were presented as auditory, visual, or audiovisual speech in a gating task. The distribution of information over time varied across and within features. Visual speech information was generally fully available early during the phoneme, whereas auditory information was still accumulated. An audiovisual benefit was therefore already found early during the phoneme. The nature of the audiovisual recognition benefit changed, however, as more of the phoneme was presented. More features benefited at short gates rather than at longer ones. Visual speech information plays, therefore, a more important role early during the phoneme rather than later. The results of the study showed the complex interplay of information across modalities and time, since this is essential in determining the time course of audiovisual spoken-word recognition.
  • Johnson, E. K., & Seidl, A. (2009). At 11 months, prosody still outranks statistics. Developmental Science, 12, 131-141. doi:10.1111/j.1467-7687.2008.00740.x.

    Abstract

    English-learning 7.5-month-olds are heavily biased to perceive stressed syllables as word onsets. By 11 months, however, infants begin segmenting non-initially stressed words from speech. Using the same artificial language methodology as Johnson and Jusczyk (2001), we explored the possibility that the emergence of this ability is linked to a decreased reliance on prosodic cues to word boundaries accompanied by an increased reliance on syllable distribution cues. In a baseline study, where only statistical cues to word boundaries were present, infants exhibited a familiarity preference for statistical words. When conflicting stress cues were added to the speech stream, infants exhibited a familiarity preference for stress as opposed to statistical words. This was interpreted as evidence that 11-month-olds weight stress cues to word boundaries more heavily than statistical cues. Experiment 2 further investigated these results with a language containing convergent cues to word boundaries. The results of Experiment 2 were not conclusive. A third experiment using new stimuli and a different experimental design supported the conclusion that 11-month-olds rely more heavily on prosodic than statistical cues to word boundaries. We conclude that the emergence of the ability to segment non-initially stressed words from speech is not likely to be tied to an increased reliance on syllable distribution cues relative to stress cues, but instead may emerge due to an increased reliance on and integration of a broad array of segmentation cues.
  • Johnson, E. K., Bruggeman, L., & Cutler, A. (2018). Abstraction and the (misnamed) language familiarity effect. Cognitive Science, 42, 633-645. doi:10.1111/cogs.12520.

    Abstract

    Talkers are recognized more accurately if they are speaking the listeners’ native language rather than an unfamiliar language. This “language familiarity effect” has been shown not to depend upon comprehension and must instead involve language sound patterns. We further examine the level of sound-pattern processing involved, by comparing talker recognition in foreign languages versus two varieties of English, by (a) English speakers of one variety, (b) English speakers of the other variety, and (c) non-native listeners (more familiar with one of the varieties). All listener groups performed better with native than foreign speech, but no effect of language variety appeared: Native listeners discriminated talkers equally well in each, with the native variety never outdoing the other variety, and non-native listeners discriminated talkers equally poorly in each, irrespective of the variety's familiarity. The results suggest that this talker recognition effect rests not on simple familiarity, but on an abstract level of phonological processing.
  • Johnson, E. K., & Seidl, A. (2008). Clause segmentation by 6-month-olds: A crosslinguistic perspective. Infancy, 13, 440-455. doi:10.1080/15250000802329321.

    Abstract

    Each clause and phrase boundary necessarily aligns with a word boundary. Thus, infants’ attention to the edges of clauses and phrases may help them learn some of the language-specific cues defining word boundaries. Attention to prosodically well-formed clauses and phrases may also help infants begin to extract information important for learning the grammatical structure of their language. Despite the potentially important role that the perception of large prosodic units may play in early language acquisition, there has been little work investigating the extraction of these units from fluent speech by infants learning languages other than English. We report 2 experiments investigating Dutch learners’ clause segmentation abilities. In these studies, Dutch-learning 6-month-olds readily extract clauses from speech. However, Dutch learners differ from English learners in that they seem to be more reliant on pauses to detect clause boundaries. Two closely related explanations for this finding are considered, both of which stem from the acoustic differences in clause boundary realizations in Dutch versus English.
  • Johnson, E. K., & Tyler, M. (2010). Testing the limits of statistical learning for word segmentation. Developmental Science, 13, 339-345. doi:10.1111/j.1467-7687.2009.00886.x.

    Abstract

    Past research has demonstrated that infants can rapidly extract syllable distribution information from an artificial language and use this knowledge to infer likely word boundaries in speech. However, artificial languages are extremely simplified with respect to natural language. In this study, we ask whether infants’ ability to track transitional probabilities between syllables in an artificial language can scale up to the challenge of natural language. We do so by testing both 5.5- and 8-month-olds’ ability to segment an artificial language containing four words of uniform length (all CVCV) or four words of varying length (two CVCV, two CVCVCV). The transitional probability cues to word boundaries were held equal across the two languages. Both age groups segmented the language containing words of uniform length, demonstrating that even 5.5-month-olds are extremely sensitive to the conditional probabilities in their environment. However, neither age group succeeded in segmenting the language containing words of varying length, despite the fact that the transitional probability cues defining word boundaries were equally strong in the two languages. We conclude that infants’ statistical learning abilities may not be as robust as earlier studies have suggested.
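    For illustration only (the syllable stream below is invented and far simpler than the stimuli described in the abstract above), the transitional-probability segmentation referred to in this line of work can be sketched in Python as follows:

      from collections import Counter

      # Forward transitional probability: TP(B|A) = count(AB) / count(A).
      syllables = ["ti", "bu", "do", "go", "la", "bu", "pa", "do", "ti",
                   "ti", "bu", "do", "pa", "do", "ti"]          # toy syllable stream
      pair_counts = Counter(zip(syllables, syllables[1:]))
      syl_counts = Counter(syllables[:-1])
      tps = [pair_counts[(a, b)] / syl_counts[a]
             for a, b in zip(syllables, syllables[1:])]

      # Posit a word boundary where the TP is a local minimum relative to its
      # neighbouring transitions (a common heuristic in this literature).
      for i in range(1, len(tps) - 1):
          if tps[i] < tps[i - 1] and tps[i] < tps[i + 1]:
              print(f"boundary: {syllables[i]} | {syllables[i + 1]}")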
  • Jordan, F., Gray, R., Greenhill, S., & Mace, R. (2009). Matrilocal residence is ancestral in Austronesian societies. Proceedings of the Royal Society of London Series B-Biological Sciences, 276(1664), 1957-1964. doi:10.1098/rspb.2009.0088.

    Abstract

    The nature of social life in human prehistory is elusive, yet knowing how kinship systems evolve is critical for understanding population history and cultural diversity. Post-marital residence rules specify sex-specific dispersal and kin association, influencing the pattern of genetic markers across populations. Cultural phylogenetics allows us to practise 'virtual archaeology' on these aspects of social life that leave no trace in the archaeological record. Here we show that early Austronesian societies practised matrilocal post-marital residence. Using a Markov-chain Monte Carlo comparative method implemented in a Bayesian phylogenetic framework, we estimated the type of residence at each ancestral node in a sample of Austronesian language trees spanning 135 Pacific societies. Matrilocal residence has been hypothesized for proto-Oceanic society (ca 3500 BP), but we find strong evidence that matrilocality was predominant in earlier Austronesian societies ca 5000-4500 BP, at the root of the language family and its early branches. Our results illuminate the divergent patterns of mtDNA and Y-chromosome markers seen in the Pacific. The analysis of present-day cross-cultural data in this way allows us to directly address cultural evolutionary and life-history processes in prehistory.
  • Jordan, F., & Dunn, M. (2010). Kin term diversity is the result of multilevel, historical processes [Comment on Doug Jones]. Behavioral and Brain Sciences, 33, 388. doi:10.1017/S0140525X10001962.

    Abstract

    Explanations in the domain of kinship can be sought on several different levels: Jones addresses online processing, as well as issues of origins and innateness. We argue that his framework can more usefully be applied at the levels of developmental and historical change, the latter especially. A phylogenetic approach to the diversity of kinship terminologies is most urgently required.
  • Kalashnikova, M., Escudero, P., & Kidd, E. (2018). The development of fast-mapping and novel word retention strategies in monolingual and bilingual infants. Developmental Science, 21(6): e12674. doi:10.1111/desc.12674.

    Abstract

    The mutual exclusivity (ME) assumption is proposed to facilitate early word learning by guiding infants to map novel words to novel referents. This study assessed the emergence and use of ME to both disambiguate and retain the meanings of novel words across development in 18‐month‐old monolingual and bilingual children (Experiment 1; N = 58), and in a sub‐group of these children again at 24 months of age (Experiment 2: N = 32). Both monolinguals and bilinguals employed ME to select the referent of a novel label to a similar extent at 18 and 24 months. At 18 months, there were also no differences in novel word retention between the two language‐background groups. However, at 24 months, only monolinguals showed the ability to retain these label–object mappings. These findings indicate that the development of the ME assumption as a reliable word‐learning strategy is shaped by children's individual language exposure and experience with language use.
