Publications

  • De Grauwe, S., Willems, R. M., Rüschemeyer, S.-A., Lemhöfer, K., & Schriefers, H. (2014). Embodied language in first- and second-language speakers: Neural correlates of processing motor verbs. Neuropsychologia, 56, 334-349. doi:10.1016/j.neuropsychologia.2014.02.003.

    Abstract

    The involvement of neural motor and sensory systems in the processing of language has so far mainly been studied in native (L1) speakers. In an fMRI experiment, we investigated whether non-native (L2) semantic representations are rich enough to allow for activation in motor and somatosensory brain areas. German learners of Dutch and a control group of Dutch native speakers made lexical decisions about visually presented Dutch motor and non-motor verbs. Region-of-interest (ROI) and whole-brain analyses indicated that L2 speakers, like L1 speakers, showed significantly increased activation for simple motor compared to non-motor verbs in motor and somatosensory regions. This effect was not restricted to Dutch-German cognate verbs, but was also present for non-cognate verbs. These results indicate that L2 semantic representations are rich enough for motor-related activations to develop in motor and somatosensory areas.
  • De Grauwe, S., Lemhöfer, K., Willems, R. M., & Schriefers, H. (2014). L2 speakers decompose morphologically complex verbs: fMRI evidence from priming of transparent derived verbs. Frontiers in Human Neuroscience, 8: 802. doi:10.3389/fnhum.2014.00802.

    Abstract

    In this functional magnetic resonance imaging (fMRI) long-lag priming study, we investigated the processing of Dutch semantically transparent, derived prefix verbs. In such words, the meaning of the word as a whole can be deduced from the meanings of its parts, e.g., wegleggen “put aside.” Many behavioral and some fMRI studies suggest that native (L1) speakers decompose transparent derived words. The brain region usually implicated in morphological decomposition is the left inferior frontal gyrus (LIFG). In non-native (L2) speakers, the processing of transparent derived words has hardly been investigated, especially in fMRI studies, and results are contradictory: some studies find more reliance on holistic (i.e., non-decompositional) processing by L2 speakers; some find no difference between L1 and L2 speakers. In this study, we wanted to find out whether Dutch transparent derived prefix verbs are decomposed or processed holistically by German L2 speakers of Dutch. Half of the derived verbs (e.g., omvallen “fall down”) were preceded by their stem (e.g., vallen “fall”) with a lag of 4–6 words (“primed”); the other half (e.g., inslapen “fall asleep”) were not (“unprimed”). L1 and L2 speakers of Dutch made lexical decisions on these visually presented verbs. Both region of interest analyses and whole-brain analyses showed that there was a significant repetition suppression effect for primed compared to unprimed derived verbs in the LIFG. This was true both for the analyses over L2 speakers only and for the analyses over the two language groups together. The latter did not reveal any interaction with language group (L1 vs. L2) in the LIFG. Thus, L2 speakers show a clear priming effect in the LIFG, an area that has been associated with morphological decomposition. Our findings are consistent with the idea that L2 speakers engage in decomposition of transparent derived verbs rather than processing them holistically.

  • Gray, R., & Jordan, F. (2000). Language trees support the express-train sequence of Austronesian expansion. Nature, 405, 1052-1055. doi:10.1038/35016575.

    Abstract

    Languages, like molecules, document evolutionary history. Darwin (1) observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions (2-4), few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses - the "express-train" (5) and the "entangled-bank" (6,7) models - for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
  • Griffin, Z. M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274-279. doi:10.1111/1467-9280.00255.

    Abstract

    To study the time course of sentence formulation, we monitored the eye movements of speakers as they described simple events. The similarity between speakers' initial eye movements and those of observers performing a nonverbal event-comprehension task suggested that response-relevant information was rapidly extracted from scenes, allowing speakers to select grammatical subjects based on comprehended events rather than salience. When speaking extemporaneously, speakers began fixating pictured elements less than a second before naming them within their descriptions, a finding consistent with incremental lexical encoding. Eye movements anticipated the order of mention despite changes in picture orientation, in who-did-what-to-whom, and in sentence structure. The results support Wundt's theory of sentence production.

  • Groen, W. B., Tesink, C. M. J. Y., Petersson, K. M., Van Berkum, J. J. A., Van der Gaag, R. J., Hagoort, P., & Buitelaar, J. K. (2010). Semantic, factual, and social language comprehension in adolescents with autism: An fMRI study. Cerebral Cortex, 20(8), 1937-1945. doi:10.1093/cercor/bhp264.

    Abstract

    Language in high-functioning autism is characterized by pragmatic and semantic deficits, and people with autism have a reduced tendency to integrate information. Because the left and right inferior frontal (LIF and RIF) regions are implicated in the integration of speaker information, world knowledge, and semantic knowledge, we hypothesized that abnormal functioning of the LIF and RIF regions might contribute to pragmatic and semantic language deficits in autism. Brain activation of sixteen 12- to 18-year-old, high-functioning autistic participants was measured with functional magnetic resonance imaging during sentence comprehension and compared with that of twenty-six matched controls. The content of the pragmatic sentence was congruent or incongruent with respect to the speaker characteristics (male/female, child/adult, and upper class/lower class). The semantic- and world-knowledge sentences were congruent or incongruent with respect to semantic expectancies and factual expectancies about the world, respectively. In the semantic-knowledge and world-knowledge conditions, activation of the LIF region did not differ between groups. In sentences that required integration of speaker information, the autism group showed abnormally reduced activation of the LIF region. The results suggest that people with autism may recruit the LIF region in a different manner in tasks that demand integration of social information.
  • Guadalupe, T., Willems, R. M., Zwiers, M., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Franke, B., Fisher, S. E., & Francks, C. (2014). Differences in cerebral cortical anatomy of left- and right-handers. Frontiers in Psychology, 5: 261. doi:10.3389/fpsyg.2014.00261.

    Abstract

    The left and right sides of the human brain are specialized for different kinds of information processing, and much of our cognition is lateralized to an extent towards one side or the other. Handedness is a reflection of nervous system lateralization. Roughly ten percent of people are mixed- or left-handed, and they show an elevated rate of reductions or reversals of some cerebral functional asymmetries compared to right-handers. Brain anatomical correlates of left-handedness have also been suggested. However, the relationships of left-handedness to brain structure and function remain far from clear. We carried out a comprehensive analysis of cortical surface area differences between 106 left-handed subjects and 1960 right-handed subjects, measured using an automated method of regional parcellation (FreeSurfer, Destrieux atlas). This is the largest study sample that has so far been used in relation to this issue. No individual cortical region showed an association with left-handedness that survived statistical correction for multiple testing, although there was a nominally significant association with the surface area of a previously implicated region: the left precentral sulcus. Identifying brain structural correlates of handedness may prove useful for genetic studies of cerebral asymmetries, as well as providing new avenues for the study of relations between handedness, cerebral lateralization and cognition.
  • Guadalupe, T., Zwiers, M. P., Teumer, A., Wittfeld, K., Arias Vasquez, A., Hoogman, M., Hagoort, P., Fernández, G., Buitelaar, J., Hegenscheid, K., Völzke, H., Franke, B., Fisher, S. E., Grabe, H. J., & Francks, C. (2014). Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets. Human Brain Mapping, 35(7), 3277-3289. doi:10.1002/hbm.22401.

    Abstract

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10-8). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries.
  • Gubian, M., Bergmann, C., & Boves, L. (2010). Investigating word learning processes in an artificial agent. In Proceedings of the IXth IEEE International Conference on Development and Learning (ICDL). Ann Arbor, MI, 18-21 Aug. 2010 (pp. 178 -184). IEEE.

    Abstract

    Researchers in human language processing and acquisition are making increasing use of computational models. Computer simulations provide a valuable platform to reproduce hypothesised learning mechanisms that are otherwise very difficult, if not impossible, to verify on human subjects. However, computational models come with problems and risks. It is difficult to (automatically) extract essential information about the developing internal representations from a set of simulation runs, and often researchers limit themselves to analysing learning curves based on empirical recognition accuracy through time. The associated risk is to erroneously deem a specific learning behaviour as generalisable to human learners, while it could also be a mere consequence (artifact) of the implementation of the artificial learner or of the input coding scheme. In this paper a set of simulation runs taken from the ACORNS project is investigated. First a look 'inside the box' of the learner is provided by employing novel quantitative methods for analysing changing structures in large data sets. Then, the obtained findings are discussed in the perspective of their ecological validity in the field of child language acquisition.
  • Guerra, E., & Knoeferle, P. (2014). Spatial distance effects on incremental semantic interpretation of abstract sentences: Evidence from eye tracking. Cognition, 133(3), 535-552. doi:10.1016/j.cognition.2014.07.007.

    Abstract

    A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiments 2 and 3) on participants’ reading times for sentences that convey similarity or difference between two abstract nouns (e.g., ‘Peace and war are certainly different...’). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., ‘peace’, ‘war’). In Experiments 2 and 3, they turned but remained blank. Participants’ reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences.

  • Guerra, E., Huettig, F., & Knoeferle, P. (2014). Assessing the time course of the influence of featural, distributional and spatial representations during reading. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2309-2314). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2014/papers/402/.

    Abstract

    What does semantic similarity between two concepts mean? How could we measure it? The way in which semantic similarity is calculated might differ depending on the theoretical notion of semantic representation. In an eye-tracking reading experiment, we investigated whether two widely used semantic similarity measures (based on featural or distributional representations) have distinctive effects on sentence reading times. In other words, we explored whether these measures of semantic similarity differ qualitatively. In addition, we examined whether visually perceived spatial distance interacts with either or both of these measures. Our results showed that the effect of featural and distributional representations on reading times can differ both in direction and in its time course. Moreover, both featural and distributional information interacted with spatial distance, yet in different sentence regions and reading measures. We conclude that featural and distributional representations are distinct components of semantic representation.
  • Guerra, E., & Knoeferle, P. (2014). Spatial distance modulates reading times for sentences about social relations: evidence from eye tracking. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2315-2320). Austin, TX: Cognitive Science Society. Retrieved from https://mindmodeling.org/cogsci2014/papers/403/.

    Abstract

    Recent evidence from eye tracking during reading showed that non-referential spatial distance presented in a visual context can modulate semantic interpretation of similarity relations rapidly and incrementally. In two eye-tracking reading experiments we extended these findings in two important ways; first, we examined whether other semantic domains (social relations) could also be rapidly influenced by spatial distance during sentence comprehension. Second, we aimed to further specify how abstract language is co-indexed with spatial information by varying the syntactic structure of sentences between experiments. Spatial distance rapidly modulated reading times as a function of the social relation expressed by a sentence. Moreover, our findings suggest that abstract language can be co-indexed as soon as critical information becomes available for the reader.
  • Guggenheim, J. A., Williams, C., Northstone, K., Howe, L. D., Tilling, K., St Pourcain, B., McMahon, G., & Lawlor, D. A. (2014). Does Vitamin D Mediate the Protective Effects of Time Outdoors On Myopia? Findings From a Prospective Birth Cohort. Investigative Ophthalmology & Visual Science, 55(12), 8550-8558. doi:10.1167/iovs.14-15839.
  • Gullberg, M., Roberts, L., Dimroth, C., Veroude, K., & Indefrey, P. (2010). Adult language learning after minimal exposure to an unknown natural language. In M. Gullberg, & P. Indefrey (Eds.), The earliest stages of language learning (pp. 5-24). Malden, MA: Wiley-Blackwell.
  • Gullberg, M., Roberts, L., Dimroth, C., Veroude, K., & Indefrey, P. (2010). Adult language learning after minimal exposure to an unknown natural language. Language Learning, 60(S2), 5-24. doi:10.1111/j.1467-9922.2010.00598.x.

    Abstract

    Despite the literature on the role of input in adult second-language (L2) acquisition and on artificial and statistical language learning, surprisingly little is known about how adults break into a new language in the wild. This article reports on a series of behavioral and neuroimaging studies that examine what linguistic information adults can extract from naturalistic but controlled audiovisual input in an unknown and typologically distant L2 after minimal exposure (7–14 min) without instruction or training. We tested the stepwise development of segmental, phonotactic, and lexical knowledge in Dutch adults after minimal exposure to Mandarin Chinese and the role of item frequency, speech-associated gestures, and word length at the earliest stages of learning. In an exploratory neural connectivity study we further examined the neural correlates of word recognition in a new language, identifying brain regions whose connectivity was related to performance both before and after learning. While emphasizing the complexity of the learning task, the results suggest that the adult learning mechanism is more powerful than is normally assumed when faced with small amounts of complex, continuous audiovisual language input.
  • Gullberg, M., De Bot, K., & Volterra, V. (2010). Gestures and some key issues in the study of language development. In M. Gullberg, & K. De Bot (Eds.), Gestures in language development (pp. 3-33). Amsterdam: Benjamins.
  • Gullberg, M., & De Bot, K. (Eds.). (2010). Gestures in language development. Amsterdam: Benjamins.

    Abstract

    Gestures are prevalent in communication and tightly linked to language and speech. As such they can shed important light on issues of language development across the lifespan. This volume, originally published as a Special Issue of Gesture Volume 8:2 (2008), brings together studies from different disciplines that examine language development in children and adults from varying perspectives. It provides a review of common theoretical and empirical themes, and the contributions address topics such as gesture use in prelinguistic infants, the relationship between gestures and lexical development in typically and atypically developing children and in second language learners, what gestures reveal about discourse, and how all languages that adult second language speakers know can influence each other. The papers exemplify a vibrant new field of study with relevance for multiple disciplines.
  • Gullberg, M. (2010). Methodological reflections on gesture analysis in second language acquisition and bilingualism research. Second Language Research, 26(1), 75-102. doi:10.1177/0267658309337639.

    Abstract

    Gestures, the symbolic movements speakers perform while they speak, form a closely inter-connected system with speech where gestures serve both addressee-directed (‘communicative’) and speaker-directed (’internal’) functions. This paper aims (1) to show that a combined analysis of gesture and speech offers new ways to address theoretical issues in SLA and bilingualism studies, probing SLA and bilingualism as product and process; and (2) to outline some methodological concerns and desiderata to facilitate the inclusion of gesture in SLA and bilingualism research.
  • Gullberg, M., & Indefrey, P. (Eds.). (2010). The earliest stages of language learning. Malden, MA: Wiley-Blackwell.

    Abstract

    To understand the nature of language learning, the factors that influence it, and the mechanisms that govern it, it is crucial to study the very earliest stages of language learning. This volume provides a state-of-the art overview of what we know about the cognitive and neurobiological aspects of the adult capacity for language learning. It brings together studies from several fields that examine learning from multiple perspectives using various methods. The papers examine learning after anything from a few minutes to months of language exposure; they target the learning of both artificial and natural languages, involve both explicit and implicit learning, and cover linguistic domains ranging from phonology and semantics to morphosyntax. The findings will inform and extend further studies of language learning in multiple disciplines.
  • Gullberg, M., & Indefrey, P. (Eds.). (2010). The earliest stages of language learning [Special Issue]. Language Learning, 60(Supplement s2).
  • Gullberg, M., & Narasimhan, B. (2010). What gestures reveal about the development of semantic distinctions in Dutch children's placement verbs. Cognitive Linguistics, 21(2), 239-262. doi:10.1515/COGL.2010.009.

    Abstract

    Placement verbs describe everyday events like putting a toy in a box. Dutch uses two semi-obligatory caused posture verbs (leggen ‘lay’ and zetten ‘set/stand’) to distinguish between events based on whether the located object is placed horizontally or vertically. Although prevalent in the input, these verbs cause Dutch children difficulties even at age five (Narasimhan & Gullberg, submitted). Children overextend leggen to all placement events and underextend use of zetten. This study examines what gestures can reveal about Dutch three- and five-year-olds’ semantic representations of such verbs. The results show that children gesture differently from adults in this domain. Three-year-olds express only the path of the caused motion, whereas five-year-olds, like adults, also incorporate the located object. Crucially, gesture patterns are tied to verb use: those children who over-use leggen 'lay' for all placement events only gesture about path. Conversely, children who use the two verbs differentially for horizontal and vertical placement also incorporate objects in gestures like adults. We argue that children's gestures reflect their current knowledge of verb semantics, and indicate a developmental transition from a system with a single semantic component – (caused) movement – to an (adult-like) focus on two semantic components – (caused) movement-and-object.
  • Guo, Y., Martin, R. C., Hamilton, C., Van Dyke, J., & Tan, Y. (2010). Neural basis of semantic and syntactic interference resolution in sentence comprehension. Procedia - Social and Behavioral Sciences, 6, 88-89. doi:10.1016/j.sbspro.2010.08.045.
  • Gussenhoven, C., Chen, Y., & Dediu, D. (Eds.). (2014). 4th International Symposium on Tonal Aspects of Language, Nijmegen, The Netherlands, May 13-16, 2014. ISCA Archive.
  • Gussenhoven, C., & Chen, A. (2000). Universal and language-specific effects in the perception of question intonation. In B. Yuan, T. Huang, & X. Tang (Eds.), Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP) (pp. 91-94). Beijing: China Military Friendship Publish.

    Abstract

    Three groups of monolingual listeners, with Standard Chinese, Dutch and Hungarian as their native language, judged pairs of trisyllabic stimuli which differed only in their pitch pattern. The segmental structure of the stimuli was made up by the experimenters and presented to subjects as being taken from a little-known language spoken on a South Pacific island. Pitch patterns consisted of a single rise-fall located on or near the second syllable. By and large, listeners selected the stimulus with the higher peak, the later peak, and the higher end rise as the one that signalled a question, regardless of language group. The result is argued to reflect innate, non-linguistic knowledge of the meaning of pitch variation, notably Ohala’s Frequency Code. A significant difference between groups is explained as due to the influence of the mother tongue.
  • Hagoort, P. (2000). De toekomstige eeuw der cognitieve neurowetenschap [inaugural lecture]. Katholieke Universiteit Nijmegen.

    Abstract

    Address delivered on 12 May 2000 upon acceptance of the chair of neuropsychology at the Faculty of Social Sciences, Katholieke Universiteit Nijmegen.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition, two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P. (2014). Introduction to section on language and abstract thought. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 615-618). Cambridge, Mass: MIT Press.
  • Hagoort, P., & Levinson, S. C. (2014). Neuropragmatics. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 667-674). Cambridge, Mass: MIT Press.
  • Hagoort, P. (2014). Nodes and networks in the neural architecture for language: Broca's region and beyond. Current Opinion in Neurobiology, 28, 136-141. doi:10.1016/j.conb.2014.07.013.

    Abstract

    Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke–Lichtheim–Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.
  • Hagoort, P., & Indefrey, P. (2014). The neurobiology of language beyond single words. Annual Review of Neuroscience, 37, 347-362. doi:10.1146/annurev-neuro-071013-013847.

    Abstract

    A hallmark of human language is that we combine lexical building blocks retrieved from memory in endless new ways. This combinatorial aspect of language is referred to as unification. Here we focus on the neurobiological infrastructure for syntactic and semantic unification. Unification is characterized by a high-speed temporal profile including both prediction and integration of retrieved lexical elements. A meta-analysis of numerous neuroimaging studies reveals a clear dorsal/ventral gradient in both left inferior frontal cortex and left posterior temporal cortex, with dorsal foci for syntactic processing and ventral foci for semantic processing. In addition to core areas for unification, further networks need to be recruited to realize language-driven communication to its full extent. One example is the theory of mind network, which allows listeners and readers to infer the intended message (speaker meaning) from the coded meaning of the linguistic utterance. This indicates that sensorimotor simulation cannot handle all of language processing.
  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Hamans, C., & Seuren, P. A. M. (2010). Chomsky in search of a pedigree. In D. A. Kibbee (Ed.), Chomskyan (R)evolutions (pp. 377-394). Amsterdam/Philadelphia: Benjamins.

    Abstract

    This paper follows the changing fortunes of Chomsky’s search for a pedigree in the history of Western thought during the late 1960s. Having achieved a unique position of supremacy in the theory of syntax and having exploited that position far beyond the narrow circles of professional syntacticians, he felt the need to shore up his theory with the authority of history. It is shown that this attempt, resulting mainly in his Cartesian Linguistics of 1966, was widely, and rightly, judged to be a radical failure, even though it led to a sudden revival of interest in the history of linguistics. Ironically, the very upswing in historical studies caused by Cartesian Linguistics ended up showing that the real pedigree belongs to Generative Semantics, developed by the same ‘angry young men’ Chomsky was so bent on destroying.
  • Hammarström, H., & Güldemann, T. (2014). Quantifying geographical determinants of large-scale distributions of linguistic features. Language Dynamics and Change, 4, 87-115. doi:10.1163/22105832-00401002.

    Abstract

    In the recent past the work on large-scale linguistic distributions across the globe has intensified considerably. Work on macro-areal relationships in Africa (Güldemann, 2010) suggests that the shape of convergence areas may be determined by climatic factors and geophysical features such as mountains, water bodies, coastlines, etc. Worldwide data is now available for geophysical features as well as linguistic features, including numeral systems and basic constituent order. We explore the possibility that the shape of areal aggregations of individual features in these two linguistic domains correlates with Köppen-Geiger climate zones. Furthermore, we test the hypothesis that the shape of such areal feature aggregations is determined by the contour of adjacent geophysical features like mountain ranges or coastlines. In these first basic tests, we do not find clear evidence that either Köppen-Geiger climate zones or the contours of geophysical features are major predictors for the linguistic data at hand.

  • Hammarström, H., & Donohue, M. (2014). Some principles on the use of macro-areas in typological comparison. Language Dynamics and Change, 4, 167-187. doi:10.1163/22105832-00401001.

    Abstract

    While the notion of the ‘area’ or ‘Sprachbund’ has a long history in linguistics, with geographically-defined regions frequently cited as a useful means to explain typological distributions, the problem of delimiting areas has not been well addressed. Lists of general-purpose, largely independent ‘macro-areas’ (typically continent size) have been proposed as a step to rule out contact as an explanation for various large-scale linguistic phenomena. This squib points out some problems in some of the currently widely-used predetermined areas, those found in the World Atlas of Language Structures (Haspelmath et al., 2005). Instead, we propose a principled division of the world’s landmasses into six macro-areas that arguably have better geographical independence properties.
  • Hammarström, H. (2014). Basic vocabulary comparison in South American languages. In P. Muysken, & L. O'Connor (Eds.), Language contact in South America (pp. 56-72). Cambridge: Cambridge University Press.
  • Hammarström, H. (2010). A full-scale test of the language farming dispersal hypothesis. Diachronica, 27(2), 197-213. doi:10.1075/dia.27.2.02ham.

    Abstract

    One attempt at explaining why some language families are large (while others are small) is the hypothesis that the families that are now large became large because their ancestral speakers had a technological advantage, most often agriculture. Variants of this idea are referred to as the Language Farming Dispersal Hypothesis. Previously, detailed language family studies have uncovered various supporting examples and counterexamples to this idea. In the present paper I weigh the evidence from ALL attested language families. For each family, I use the number of member languages as a measure of cardinal size, member language coordinates to measure geospatial size and ethnographic evidence to assess subsistence status. This data shows that, although agricultural families tend to be larger in cardinal size, their size is hardly due to the simple presence of farming. If farming were responsible for language family expansions, we would expect a greater east-west geospatial spread of large families than is actually observed. The data, however, is compatible with weaker versions of the farming dispersal hypothesis as well as with models where large families acquire farming because of their size, rather than the other way around.
  • Hammarström, H. (2014). [Review of the book A grammar of the great Andamanese language: An ethnolinguistic study by Anvita Abbi]. Journal of South Asian Languages and Linguistics, 1, 111-116. doi:10.1515/jsall-2014-0007.
  • Hammarström, H. (2014). Papuan languages. In M. Aronoff (Ed.), Oxford bibliographies in linguistics. New York: Oxford University Press. doi:10.1093/OBO/9780199772810-0165.
  • Hammarström, H. (2010). Rarities in numeral systems. In J. Wohlgemuth, & M. Cysouw (Eds.), Rethinking universals. How rarities affect linguistic theory (pp. 11-60). Berlin: De Gruyter.
  • Hammarström, H. (2010). The status of the least documented language families in the world. Language Documentation and Conservation, 4, 177-212. Retrieved from http://hdl.handle.net/10125/4478.

    Abstract

    This paper aims to list all known language families that are not yet extinct and all of whose member languages are very poorly documented, i.e., less than a sketch grammar’s worth of data has been collected. It explains what constitutes a valid family, what amount and kinds of documentary data are sufficient, when a language is considered extinct, and more. It is hoped that the survey will be useful in setting priorities for documentation fieldwork, in particular for those documentation efforts whose underlying goal is to understand linguistic diversity.
  • Hammond, J. (2014). Switch-reference antecedence and subordination in Whitesands (Oceanic). In R. van Gijn, J. Hammond, D. Matić, S. van Putten, & A. V. Galucio (Eds.), Information structure and reference tracking in complex sentences (pp. 263-290). Amsterdam: Benjamins.

    Abstract

    Whitesands is an Oceanic language of the southern Vanuatu subgroup. Like the related languages of southern Vanuatu, Whitesands has developed a clause-linkage system which monitors referent continuity on new clauses – typically contrasting with the previous clause. In this chapter I address how the construction interacts with topic continuity in discourse. I outline the morphosyntactic form of this anaphoric co-reference device. From a functionalist perspective, I show how the system is used in natural discourse and discuss its restrictions with respect to relative and complement clauses. I conclude with a discussion on its interactions with theoretical notions of information structure – in particular the nature of presupposed versus asserted clauses, information back- and foregrounding and how these affect the use of the switch-reference system.
  • Hanique, I., Schuppler, B., & Ernestus, M. (2010). Morphological and predictability effects on schwa reduction: The case of Dutch word-initial syllables. In Proceedings of the 11th Annual Conference of the International Speech Communication Association (Interspeech 2010), Makuhari, Japan (pp. 933-936).

    Abstract

    This corpus-based study shows that the presence and duration of schwa in Dutch word-initial syllables are affected by a word’s predictability and its morphological structure. Schwa is less reduced in words that are more predictable given the following word. In addition, schwa may be longer if the syllable forms a prefix, and in prefixes the duration of schwa is positively correlated with the frequency of the word relative to its stem. Our results suggest that the conditions which favor reduced realizations are more complex than one would expect on the basis of the current literature.
  • Hanulikova, A., & Hamann, S. (2010). Illustrations of Slovak IPA. Journal of the International Phonetic Association, 40(3), 373-378. doi:10.1017/S0025100310000162.

    Abstract

    Slovak (sometimes also called Slovakian) is an Indo-European language belonging to the West-Slavic branch, and is most closely related to Czech. Slovak is spoken as a native language by 4.6 million speakers in Slovakia (that is by roughly 85% of the population), and by over two million Slovaks living abroad, most of them in the USA, the Czech Republic, Hungary, Canada and Great Britain (Office for Slovaks Living Abroad 2009).
  • Hanulikova, A., & Weber, A. (2010). Production of English interdental fricatives by Dutch, German, and English speakers. In K. Dziubalska-Kołaczyk, M. Wrembel, & M. Kul (Eds.), Proceedings of the 6th International Symposium on the Acquisition of Second Language Speech, New Sounds 2010, Poznań, Poland, 1-3 May 2010 (pp. 173-178). Poznan: Adam Mickiewicz University.

    Abstract

    Non-native (L2) speakers of English often experience difficulties in producing English interdental fricatives (e.g. the voiceless [θ]), and this leads to frequent substitutions of these fricatives (e.g. with [t], [s], and [f]). Differences in the choice of [θ]-substitutions across L2 speakers with different native (L1) language backgrounds have been extensively explored. However, even within one foreign accent, more than one substitution choice occurs, but this has been less systematically studied. Furthermore, little is known about whether the substitutions of voiceless [θ] are phonetically clear instances of [t], [s], and [f], as they are often labelled. In this study, we attempted a phonetic approach to examine language-specific preferences for [θ]-substitutions by carrying out acoustic measurements of L1 and L2 realizations of these sounds. To this end, we collected a corpus of spoken English with L1 speakers (UK-English), and Dutch and German L2 speakers. We show a) that the distribution of differential substitutions using identical materials differs between Dutch and German L2 speakers, b) that [t,s,f]-substitutes differ acoustically from intended [t,s,f], and c) that L2 productions of [θ] are acoustically comparable to L1 productions.
  • Hanulikova, A., McQueen, J. M., & Mitterer, H. (2010). Possible words and fixed stress in the segmentation of Slovak speech. Quarterly Journal of Experimental Psychology, 63, 555-579. doi:10.1080/17470210903038958.

    Abstract

    The possible-word constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997) has been proposed as a language-universal segmentation principle: Lexical candidates are disfavoured if the resulting segmentation of continuous speech leads to vowelless residues in the input—for example, single consonants. Three word-spotting experiments investigated segmentation in Slovak, a language with single-consonant words and fixed stress. In Experiment 1, Slovak listeners detected real words such as ruka “hand” embedded in prepositional-consonant contexts (e.g., /gruka/) faster than those in nonprepositional-consonant contexts (e.g., /truka/) and slowest in syllable contexts (e.g., /dugruka/). The second experiment controlled for effects of stress. Responses were still fastest in prepositional-consonant contexts, but were now slowest in nonprepositional-consonant contexts. In Experiment 3, the lexical and syllabic status of the contexts was manipulated. Responses were again slowest in nonprepositional-consonant contexts but equally fast in prepositional-consonant, prepositional-vowel, and nonprepositional-vowel contexts. These results suggest that Slovak listeners use fixed stress and the PWC to segment speech, but that single consonants that can be words have a special status in Slovak segmentation. Knowledge about what constitutes a phonologically acceptable word in a given language therefore determines whether vowelless stretches of speech are or are not treated as acceptable parts of the lexical parse.
  • Harbusch, K., & Kempen, G. (2000). Complexity of linear order computation in Performance Grammar, TAG and HPSG. In Proceedings of Fifth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+5) (pp. 101-106).

    Abstract

    This paper investigates the time and space complexity of word order computation in the psycholinguistically motivated grammar formalism of Performance Grammar (PG). In PG, the first stage of syntax assembly yields an unordered tree ('mobile') consisting of a hierarchy of lexical frames (lexically anchored elementary trees). Associated with each lexical frame is a linearizer—a Finite-State Automaton that locally computes the left-to-right order of the branches of the frame. Linearization takes place after the promotion component may have raised certain constituents (e.g. Wh- or focused phrases) into the domain of lexical frames higher up in the syntactic mobile. We show that the worst-case time and space complexity of analyzing input strings of length n is O(n⁵) and O(n⁴), respectively. This result compares favorably with the time complexity of word-order computations in Tree Adjoining Grammar (TAG). A comparison with Head-Driven Phrase Structure Grammar (HPSG) reveals that PG yields a more declarative linearization method, provided that the FSA is rewritten as an equivalent regular expression.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2014). Children conform to the behavior of peers; other great apes stick with what they know. Psychological Science, 25, 2160-2167. doi:10.1177/0956797614553235.

    Abstract

    All primates learn things from conspecifics socially, but it is not clear whether they conform to the behavior of these conspecifics—if conformity is defined as overriding individually acquired behavioral tendencies in order to copy peers’ behavior. In the current study, chimpanzees, orangutans, and 2-year-old human children individually acquired a problem-solving strategy. They then watched several conspecific peers demonstrate an alternative strategy. The children switched to this new, socially demonstrated strategy in roughly half of all instances, whereas the other two great-ape species almost never adjusted their behavior to the majority’s. In a follow-up study, children switched much more when the peer demonstrators were still present than when they were absent, which suggests that their conformity arose at least in part from social motivations. These results demonstrate an important difference between the social learning of humans and great apes, a difference that might help to account for differences in human and nonhuman cultures.

  • Haun, D. B. M., Jordan, F., Vallortigara, G., & Clayton, N. S. (2010). Origins of spatial, temporal and numerical cognition: Insights from comparative psychology [Review article]. Trends in Cognitive Sciences, 14, 552-560. doi:10.1016/j.tics.2010.09.006.

    Abstract

    Contemporary comparative cognition has a large repertoire of animal models and methods, with concurrent theoretical advances that are providing initial answers to crucial questions about human cognition. What cognitive traits are uniquely human? What are the species-typical inherited predispositions of the human mind? What is the human mind capable of without certain types of specific experiences with the surrounding environment? Here, we review recent findings from the domains of space, time and number cognition. These findings are produced using different comparative methodologies relying on different animal species, namely birds and non-human great apes. The study of these species not only reveals the range of cognitive abilities across vertebrates, but also increases our understanding of human cognition in crucial ways.
  • Heid, I. M., Henneman, P., Hicks, A., Coassin, S., Winkler, T., Aulchenko, Y. S., Fuchsberger, C., Song, K., Hivert, M.-F., Waterworth, D. M., Timpson, N. J., Richards, J. B., Perry, J. R. B., Tanaka, T., Amin, N., Kollerits, B., Pichler, I., Oostra, B. A., Thorand, B., Frants, R. R., Illig, T., Dupuis, J., Glaser, B., Spector, T., Guralnik, J., Egan, J. M., Florez, J. C., Evans, D. M., Soranzo, N., Bandinelli, S., Carlson, O. D., Frayling, T. M., Burling, K., Smith, G. D., Mooser, V., Ferrucci, L., Meigs, J. B., Vollenweider, P., Dijk, K. W. v., Pramstaller, P., Kronenberg, F., & van Duijn, C. M. (2010). Clear detection of ADIPOQ locus as the major gene for plasma adiponectin: Results of genome-wide association analyses including 4659 European individuals. Atherosclerosis, 208(2), 412-420. doi:10.1016/j.atherosclerosis.2009.11.035.

    Abstract

    OBJECTIVE: Plasma adiponectin is strongly associated with various components of metabolic syndrome, type 2 diabetes and cardiovascular outcomes. Concentrations are highly heritable and differ between men and women. We therefore aimed to investigate the genetics of plasma adiponectin in men and women. METHODS: We combined genome-wide association scans of three population-based studies including 4659 persons. For the replication stage in 13795 subjects, we selected the 20 top signals of the combined analysis, as well as the 10 top signals with p-values less than 1.0 × 10⁻⁴ for each the men- and the women-specific analyses. We further selected 73 SNPs that were consistently associated with metabolic syndrome parameters in previous genome-wide association studies to check for their association with plasma adiponectin. RESULTS: The ADIPOQ locus showed genome-wide significant p-values in the combined (p = 4.3 × 10⁻²⁴) as well as in both women- and men-specific analyses (p = 8.7 × 10⁻¹⁷ and p = 2.5 × 10⁻¹¹, respectively). None of the other 39 top signal SNPs showed evidence for association in the replication analysis. None of 73 SNPs from metabolic syndrome loci exhibited association with plasma adiponectin (p > 0.01). CONCLUSIONS: We demonstrated the ADIPOQ gene as the only major gene for plasma adiponectin, which explains 6.7% of the phenotypic variance. We further found that neither this gene nor any of the metabolic syndrome loci explained the sex differences observed for plasma adiponectin. Larger studies are needed to identify more moderate genetic determinants of plasma adiponectin.
  • Heinemann, T. (2010). The question–response system of Danish. Journal of Pragmatics, 42, 2703-2725. doi:10.1016/j.pragma.2010.04.007.

    Abstract

    This paper provides an overview of the question–response system of Danish, based on a collection of 350 questions (and responses) collected from video recordings of naturally occurring face-to-face interactions between native speakers of Danish. The paper identifies the lexico-grammatical options for formulating questions, the range of social actions that can be implemented through questions and the relationship between questions and responses. It further describes features where Danish questions differ from a range of other languages in terms of, for instance, distribution and the relationship between question format and social action. For instance, Danish has a high frequency of interrogatively formatted questions and questions that are negatively formulated, when compared to languages that have the same grammatical options. In terms of action, Danish shows a higher number of questions that are used for making suggestions, offers and requests and does not use repetition as a way of answering a question as often as other languages.
  • Heritage, J., Elliott, M. N., Stivers, T., Richardson, A., & Mangione-Smith, R. (2010). Reducing inappropriate antibiotics prescribing: The role of online commentary on physical examination findings. Patient Education and Counseling, 81, 119-125. doi:10.1016/j.pec.2009.12.005.

    Abstract

    Objective: This study investigates the relationship of ‘online commentary’ (contemporaneous physician comments about physical examination [PE] findings) with (i) parent questioning of the treatment recommendation and (ii) inappropriate antibiotic prescribing. Methods: A nested cross-sectional study of 522 encounters motivated by upper respiratory symptoms in 27 California pediatric practices (38 pediatricians). Physicians completed a post-visit survey regarding physical examination findings, diagnosis, treatment, and whether they perceived the parent as expecting an antibiotic. Taped encounters were coded for ‘problem’ online commentary (PE findings discussed as significant or clearly abnormal) and ‘no problem’ online commentary (PE findings discussed reassuringly as normal or insignificant). Results: Online commentary during the PE occurred in 73% of visits with viral diagnoses (n = 261). Compared to similar cases with ‘no problem’ online commentary, ‘problem’ comments were associated with a 13% greater probability of parents questioning a non-antibiotic treatment plan (95% CI: 0-26%, p = .05) and a 27% (95% CI: 2-52%, p < .05) greater probability of an inappropriate antibiotic prescription. Conclusion: With viral illnesses, problematic online comments are associated with more pediatrician-parent conflict over non-antibiotic treatment recommendations. This may increase inappropriate antibiotic prescribing. Practice implications: In viral cases, physicians should consider avoiding the use of problematic online commentary.
  • Hersh, T., King, B., & Lutton, B. V. (2014). Novel bioinformatics tools for analysis of gene expression in the skate, Leucoraja erinacea. The Bulletin, MDI Biological Laboratory, 53, 16-18.
  • Hervais-Adelman, A., Pefkou, M., & Golestani, N. (2014). Bilingual speech-in-noise: Neural bases of semantic context use in the native language. Brain and Language, 132, 1-6. doi:10.1016/j.bandl.2014.01.009.

    Abstract

    Bilingual listeners comprehend speech-in-noise better in their native than non-native language. This native-language benefit is thought to arise from greater use of top-down linguistic information to assist degraded speech comprehension. Using functional magnetic resonance imaging, we recently showed that left angular gyrus activation is modulated when semantic context is used to assist native language speech-in-noise comprehension (Golestani, Hervais-Adelman, Obleser, & Scott, 2013). Here, we extend the previous work, by reanalyzing the previous data alongside the results obtained in the non-native language of the same late bilingual participants. We found a behavioral benefit of semantic context in processing speech-in-noise in the native language only, and the imaging results also revealed a native language context effect in the left angular gyrus. We also find a complementary role of lower-level auditory regions during stimulus-driven processing. Our findings help to elucidate the neural basis of the established native language behavioral benefit of speech-in-noise processing.
  • Hessels, R. S., Hooge, I., Snijders, T. M., & Kemner, C. (2014). Is there a limit to the superiority of individuals with ASD in visual search? Journal of Autism and Developmental Disorders, 44, 443-451. doi:10.1007/s10803-013-1886-8.

    Abstract

    Superiority in visual search for individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. These results offer an explanation, formulated in terms of load theory. We suggest that there is a limit to the superiority in visual search for individuals with ASD, related to the perceptual load of the stimuli. When perceptual load becomes so high that no additional task-(ir)relevant information can be processed, performance will be based on single stimulus identification, in which no differences between individuals with ASD and controls have been demonstrated.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2014). In dialogue with an avatar, syntax production is identical compared to dialogue with a human partner. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 2351-2356). Austin, Tx: Cognitive Science Society.

    Abstract

    The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioural research due to its seemingly limitless possibilities. This new method has not been used frequently in the field of psycholinguistics, however, possibly due to the assumption that human-computer interaction does not accurately reflect human-human interaction. In the current study we compare participants’ language behaviour in a syntactic priming task with human versus avatar partners. Our study shows comparable priming effects between human and avatar partners (Human: 12.3%; Avatar: 12.6% for passive sentences), suggesting that VR is a valid platform for conducting language research and studying dialogue interactions.
  • Hill, C. (2010). Emergency language documentation teams: The Cape York Peninsula experience. In J. Hobson, K. Lowe, S. Poetsch, & M. Walsh (Eds.), Re-awakening languages: Theory and practice in the revitalisation of Australia’s Indigenous languages (pp. 418-432). Sydney: Sydney University Press.
  • Hill, C. (2010). [Review of the book Discourse and Grammar in Australian Languages ed. by Ilana Mushin and Brett Baker]. Studies in Language, 34(1), 215-225. doi:10.1075/sl.34.1.12hil.
  • Hintz, F. (2010). Speech and speaker recognition in dyslexic individuals. Bachelor Thesis, Max Planck Institute for Human Cognitive and Brain Sciences (Leipzig)/University of Leipzig.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). Embodied language comprehension: Encoding-based and goal-driven processes. Journal of Experimental Psychology: General, 143(2), 914-929. doi:10.1037/a0032348.

    Abstract

    Theories of embodied language comprehension have proposed that language is understood through perceptual simulation of the sensorimotor characteristics of its meaning. Strong support for this claim requires demonstration of encoding-based activation of sensorimotor representations that is distinct from task-related or goal-driven processes. Participants in 3 eye-tracking experiments were presented with triplets of either numbers or object and animal names. In Experiment 1, participants indicated whether the size of the referent of the middle object or animal name was in between the size of the 2 outer items. In Experiment 2, the object and animal names were encoded for an immediate recognition memory task. In Experiment 3, participants completed the same comparison task of Experiment 1 for both words and numbers. During the comparison tasks, word and number decision times showed a symbolic distance effect, such that response time was inversely related to the size difference between the items. A symbolic distance effect was also observed for animal and object encoding times in cases where encoding time likely reflected some goal-driven processes as well. When semantic size was irrelevant to the task (Experiment 2), it had no effect on word encoding times. Number encoding times showed a numerical distance priming effect: Encoding time increased with numerical difference between items. Together these results suggest that while activation of numerical magnitude representations is encoding-based as well as goal-driven, activation of size information associated with words is goal-driven and does not occur automatically during encoding. This conclusion challenges strong theories of embodied cognition which claim that language comprehension consists of activation of analog sensorimotor representations irrespective of higher level processes related to context or task-specific goals.
  • Hoedemaker, R. S., & Gordon, P. C. (2014). It takes time to prime: Semantic priming in the ocular lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 40(6), 2179-2197. doi:10.1037/a0037677.

    Abstract

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT.
  • Hoey, E. (2014). Sighing in interaction: Somatic, semiotic, and social. Research on Language and Social Interaction, 47(2), 175-200. doi:10.1080/08351813.2014.900229.

    Abstract

    Participants in interaction routinely orient to gaze, bodily comportment, and nonlexical vocalizations as salient for developing an analysis of the unfolding course of action. In this article, I address the respiratory phenomenon of sighing, the aim being to describe sighing as a situated practice that contributes to the achievement of particular actions in interaction. I report on the various actions sighs implement or construct and how their positioning and delivery informs participants’ understandings of their significance for interaction. Data are in American English.
  • Hoffmann, C. W. G., Sadakata, M., Chen, A., Desain, P., & McQueen, J. M. (2014). Within-category variance and lexical tone discrimination in native and non-native speakers. In C. Gussenhoven, Y. Chen, & D. Dediu (Eds.), Proceedings of the 4th International Symposium on Tonal Aspects of Language (pp. 45-49). Nijmegen: Radboud University Nijmegen.

    Abstract

    In this paper, we show how acoustic variance within lexical tones in disyllabic Mandarin Chinese pseudowords affects discrimination abilities in both native and non-native speakers of Mandarin Chinese. Within-category acoustic variance did not hinder native speakers in discriminating between lexical tones, whereas it precludes Dutch native speakers from reaching native level performance. Furthermore, the influence of acoustic variance was not uniform but asymmetric, dependent on the presentation order of the lexical tones to be discriminated. An exploratory analysis using an active adaptive oddball paradigm was used to quantify the extent of the perceptual asymmetry. We discuss two possible mechanisms underlying this asymmetry and propose possible paradigms to investigate these mechanisms.
  • Hogan-Brown, A. L., Hoedemaker, R. S., Gordon, P. C., & Losh, M. (2014). Eye-voice span during rapid automatized naming: Evidence of reduced automaticity in individuals with autism spectrum disorder and their siblings. Journal of Neurodevelopmental Disorders, 6(1): 33. doi:10.1186/1866-1955-6-33.

    Abstract

    Background: Individuals with autism spectrum disorder (ASD) and their parents demonstrate impaired performance in rapid automatized naming (RAN), a task that recruits a variety of linguistic and executive processes. Though the basic processes that contribute to RAN differences remain unclear, eye-voice relationships, as measured through eye tracking, can provide insight into cognitive and perceptual processes contributing to RAN performance. For example, in RAN, eye-voice span (EVS), the distance ahead the eyes are when articulation of a target item's label begins, is an indirect measure of automaticity of the processes underlying RAN. The primary objective of this study was to investigate automaticity in naming processes, as indexed by EVS during RAN. The secondary objective was to characterize RAN difficulties in individuals with ASD and their siblings. Methods: Participants (aged 15-33 years) included 21 individuals with ASD, 23 siblings of individuals with ASD, and 24 control subjects, group-matched on chronological age. Naming time, frequency of errors, and EVS were measured during a RAN task and compared across groups. Results: A stepwise pattern of RAN performance was observed, with individuals with ASD demonstrating the slowest naming across all RAN conditions, controls demonstrating the fastest naming, and siblings demonstrating intermediate performance. Individuals with ASD exhibited smaller EVSs than controls on all RAN conditions, and siblings exhibited smaller EVSs during number naming (the most highly automatized type of naming). EVSs were correlated with naming times in controls only, and only in the more automatized conditions. Conclusions: These results suggest that reduced automaticity in the component processes of RAN may underpin differences in individuals with ASD and their siblings. These findings also provide further support that RAN abilities are impacted by genetic liability to ASD. This study has important implications for understanding the underlying skills contributing to language-related deficits in ASD.
  • Holler, J. (2014). Experimental methods in co-speech gesture research. In C. Mueller, A. Cienki, D. McNeill, & E. Fricke (Eds.), Body -language – communication: An international handbook on multimodality in human interaction. Volume 1 (pp. 837-856). Berlin: De Gruyter.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.

    Abstract

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension.
  • Holler, J. (2010). Speakers’ use of interactive gestures to mark common ground. In S. Kopp, & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction. 8th International Gesture Workshop, Bielefeld, Germany, 2009; Selected Revised Papers (pp. 11-22). Heidelberg: Springer Verlag.
  • Hoogman, M., Guadalupe, T., Zwiers, M. P., Klarenbeek, P., Francks, C., & Fisher, S. E. (2014). Assessing the effects of common variation in the FOXP2 gene on human brain structure. Frontiers in Human Neuroscience, 8: 473. doi:10.3389/fnhum.2014.00473.

    Abstract

    The FOXP2 transcription factor is one of the most well-known genes to have been implicated in developmental speech and language disorders. Rare mutations disrupting the function of this gene have been described in different families and cases. In a large three-generation family carrying a missense mutation, neuroimaging studies revealed significant effects on brain structure and function, most notably in the inferior frontal gyrus, caudate nucleus and cerebellum. After the identification of rare disruptive FOXP2 variants impacting on brain structure, several reports proposed that common variants at this locus may also have detectable effects on the brain, extending beyond disorder into normal phenotypic variation. These neuroimaging genetics studies used groups of between 14 and 96 participants. The current study assessed effects of common FOXP2 variants on neuroanatomy using voxel-based morphometry and volumetric techniques in a sample of >1300 people from the general population. In a first targeted stage we analyzed single nucleotide polymorphisms (SNPs) claimed to have effects in prior smaller studies (rs2253478, rs12533005, rs2396753, rs6980093, rs7784315, rs17137124, rs10230558, rs7782412, rs1456031), beginning with regions proposed in the relevant papers, then assessing impact across the entire brain. In the second gene-wide stage, we tested all common FOXP2 variation, focusing on volumetry of those regions most strongly implicated from analyses of rare disruptive mutations. Despite using a sample that is more than ten times that used for prior studies of common FOXP2 variation, we found no evidence for effects of SNPs on variability in neuroanatomy in the general population. Thus, the impact of this gene on brain structure may be largely limited to extreme cases of rare disruptive alleles. Alternatively, effects of common variants at this gene exist but are too subtle to be detected with standard volumetric techniques.
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Howarth, H., Sommer, V., & Jordan, F. (2010). Visual depictions of female genitalia differ depending on source. Medical Humanities, 36, 75-79. doi:10.1136/jmh.2009.003707.

    Abstract

    Very little research has attempted to describe normal human variation in female genitalia, and no studies have compared the visual images that women might use in constructing their ideas of average and acceptable genital morphology to see if there are any systematic differences. Our objective was to determine if visual depictions of the vulva differed according to their source so as to alert medical professionals and their patients to how these depictions might capture variation and thus influence perceptions of "normality". We conducted a comparative analysis by measuring (a) published visual materials from human anatomy textbooks in a university library, (b) feminist publications (both print and online) depicting vulval morphology, and (c) online pornography, focusing on the most visited and freely accessible sites in the UK. Post-hoc tests showed that labial protuberance was significantly less (p < .001, equivalent to approximately 7 mm) in images from online pornography compared to feminist publications. All five measures taken of vulval features were significantly correlated (p < .001) in the online pornography sample, indicating a less varied range of differences in organ proportions than the other sources where not all measures were correlated. Women and health professionals should be aware that specific sources of imagery may depict different types of genital morphology and may not accurately reflect true variation in the population, and consultations for genital surgeries should include discussion about the actual and perceived range of variation in female genital morphology.
  • Hoymann, G. (2014). [Review of the book Bridging the language gap: Approaches to Herero verbal interaction as development practice in Namibia by Rose Marie Beck]. Journal of African Languages and Linguistics, 35(1), 130-133. doi:10.1515/jall-2014-0004.
  • Hoymann, G. (2010). Questions and responses in ǂĀkhoe Hai||om. Journal of Pragmatics, 42(10), 2726-2740. doi:10.1016/j.pragma.2010.04.008.

    Abstract

    This paper examines ǂĀkhoe Hai||om, a Khoe language of the Khoisan family spoken in Northern Namibia. I document the way questions are posed in natural conversation, the actions the questions are used for and the manner in which they are responded to. I show that in this language speakers rely most heavily on content questions. I also find that speakers of ǂĀkhoe Hai||om address fewer questions to a specific individual than would be expected from prior research on Indo-European languages. Finally, I discuss some possible explanations for these findings.
  • Huettig, F. (2014). Role of prediction in language learning. In P. J. Brooks, & V. Kempe (Eds.), Encyclopedia of language development (pp. 479-481). London: Sage Publications.
  • Huettig, F., Chen, J., Bowerman, M., & Majid, A. (2010). Do language-specific categories shape conceptual processing? Mandarin classifier distinctions influence eye gaze behavior, but only during linguistic processing. Journal of Cognition and Culture, 10(1/2), 39-58. doi:10.1163/156853710X497167.

    Abstract

    In two eye-tracking studies we investigated the influence of Mandarin numeral classifiers - a grammatical category in the language - on online overt attention. Mandarin speakers were presented with simple sentences through headphones while their eye-movements to objects presented on a computer screen were monitored. The crucial question is what participants look at while listening to a pre-specified target noun. If classifier categories influence Mandarin speakers' general conceptual processing, then on hearing the target noun they should look at objects that are members of the same classifier category - even when the classifier is not explicitly present (cf. Huettig & Altmann, 2005). The data show that when participants heard a classifier (e.g., ba3, Experiment 1) they shifted overt attention significantly more to classifier-match objects (e.g., chair) than to distractor objects. But when the classifier was not explicitly presented in speech, overt attention to classifier-match objects and distractor objects did not differ (Experiment 2). This suggests that although classifier distinctions do influence eye-gaze behavior, they do so only during linguistic processing of that distinction and not in moment-to-moment general conceptual processing.
  • Huettig, F., & Mishra, R. K. (2014). How literacy acquisition affects the illiterate mind - A critical examination of theories and evidence. Language and Linguistics Compass, 8(10), 401-427. doi:10.1111/lnc3.12092.

    Abstract

    At present, more than one-fifth of humanity is unable to read and write. We critically examine experimental evidence and theories of how (il)literacy affects the human mind. In our discussion we show that literacy has significant cognitive consequences that go beyond the processing of written words and sentences. Thus, cultural inventions such as reading shape general cognitive processing in non-trivial ways. We suggest that this has important implications for educational policy and guidance as well as research into cognitive processing and brain functioning.
  • Huettig, F., & Hartsuiker, R. J. (2010). Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Language and Cognitive Processes, 3, 347-374. doi:10.1080/01690960903046926.

    Abstract

    Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one's own inner speech has similar behavioral consequences as listening to someone else's speech. Our experiment therefore registered eye-movements while speakers named objects accompanied by phonologically related or unrelated written words. The data showed that listening to one's own speech drives eye-movements to phonologically related words, just as listening to someone else's speech does in perception experiments. The time-course of these eye-movements was very similar to that in other-perception (starting 300 ms post-articulation), which demonstrates that these eye-movements were driven by the perception of overt speech, not inner speech. We conclude that external, but not internal monitoring, is based on speech perception.
  • Hulten, A., Laaksonen, H., Vihla, M., Laine, M., & Salmelin, R. (2010). Modulation of brain activity after learning predicts long-term memory for words. Journal of Neuroscience, 30(45), 15160-15164. doi:10.1523/JNEUROSCI.1278-10.2010.

    Abstract

    The acquisition and maintenance of new language information, such as picking up new words, is a critical human ability that is needed throughout the life span. Most likely you learned the word “blog” quite recently as an adult, whereas the word “kipe,” which in the 1970s denoted stealing, now seems unfamiliar. Brain mechanisms underlying the long-term maintenance of new words have remained unknown, albeit they could provide important clues to the considerable individual differences in the ability to remember words. After successful training of a set of novel object names we tracked, over a period of 10 months, the maintenance of this new vocabulary in 10 human participants by repeated behavioral tests and magnetoencephalography measurements of overt picture naming. When naming-related activation in the left frontal and temporal cortex was enhanced 1 week after training, compared with the level at the end of training, the individual retained a good command of the new vocabulary at 10 months; vice versa, individuals with reduced activation at 1 week posttraining were less successful in recalling the names at 10 months. This finding suggests an individual neural marker for memory, in the context of language. Learning is not over when the acquisition phase has been successfully completed: neural events during the access to recently established word representations appear to be important for the long-term outcome of learning.
  • Hulten, A., Karvonen, L., Laine, M., & Salmelin, R. (2014). Producing speech with a newly learned morphosyntax and vocabulary: An MEG study. Journal of Cognitive Neuroscience, 26(8), 1721-1735. doi:10.1162/jocn_a_00558.
  • Hulten, A. (2010). Sanan tuottaminen [Word production]. In Kieli ja aivot [Language and the Brain - Textbook series] (pp. 106-116).
  • Indefrey, P., & Gullberg, M. (2010). Foreword. Language Learning, 60(S2), v. doi:10.1111/j.1467-9922.2010.00596.x.

    Abstract

    The articles in this volume are the result of an invited conference entitled "The Earliest Stages of Language Learning" held at the Max Planck Institute for Psycholinguistics in Nijmegen, The Netherlands, in October 2009.
  • Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. Language Learning, 60(S2), 1-4. doi:10.1111/j.1467-9922.2010.00597.x.
  • Indefrey, P., & Gullberg, M. (2010). The earliest stages of language learning: Introduction. In M. Gullberg, & P. Indefrey (Eds.), The earliest stages of language learning (pp. 1-4). Malden, MA: Wiley-Blackwell.
  • Indefrey, P., & Levelt, W. J. M. (2000). The neural correlates of language production. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences; 2nd ed. (pp. 845-865). Cambridge, MA: MIT Press.

    Abstract

    This chapter reviews the findings of 58 word production experiments using different tasks and neuroimaging techniques. The reported cerebral activation sites are coded in a common anatomic reference system. Based on a functional model of language production, the different word production tasks are analyzed in terms of their processing components. This approach allows a distinction between the core process of word production and preceding task-specific processes (lead-in processes) such as visual or auditory stimulus recognition. The core process of word production is subserved by a left-lateralized perisylvian/thalamic language production network. Within this network there seems to be functional specialization for the processing stages of word production. In addition, this chapter includes a discussion of the available evidence on syntactic production, self-monitoring, and the time course of word production.
  • Indefrey, P. (2014). Time course of word production does not support a parallel input architecture. Language, Cognition and Neuroscience, 29(1), 33-34. doi:10.1080/01690965.2013.847191.

    Abstract

    Hickok's enterprise to unify psycholinguistic and motor control models is highly stimulating. Nonetheless, the model faces problems with respect to the time course of neural activation in word production, its flexibility for continuous speech, and the need for non-motor feedback.
  • Ingason, A., Giegling, I., Cichon, S., Hansen, T., Rasmussen, H. B., Nielsen, J., Jurgens, G., Muglia, P., Hartmann, A. M., Strengman, E., Vasilescu, C., Muhleisen, T. W., Djurovic, S., Melle, I., Lerer, B., Möller, H.-J., Francks, C., Pietilainen, O. P. H., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Walshe, M., Vassos, E., Di Forti, M., Murray, R., Bonetto, C., Tosato, S., Cantor, R. M., Rietschel, M., Craddock, N., Owen, M. J., Andreassen, O. A., Nothen, M. M., Peltonen, L., St. Clair, D., Ophoff, R. A., O’Donovan, M. C., Collier, D. A., Werge, T., & Rujescu, D. (2010). A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia. Human Molecular Genetics, 19(7), 1379-1386. doi:10.1093/hmg/ddq009.

    Abstract

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample. Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both the replication study of new samples and the meta-analysis show evidence for significant overrepresentation of all tested alleles in patients compared with controls (meta-analysis; P = 8.2 x 10(-5)-1.7 x 10(-3), common OR = 1.09-1.11). The region contains two genes, AHI1 and C6orf217, and both genes, as well as the neighbouring phosphodiesterase 7B (PDE7B), may be considered candidates for involvement in the genetic aetiology of schizophrenia.
  • Ingvar, M., & Petersson, K. M. (2000). Functional maps and brain networks. In A. W. Toga (Ed.), Brain mapping: The systems (pp. 111-140). San Diego: Academic Press.
  • Jackson, C., & Roberts, L. (2010). Animacy affects the processing of subject–object ambiguities in the second language: Evidence from self-paced reading with German second language learners of Dutch. Applied Psycholinguistics, 31(4), 671-691. doi:10.1017/S0142716410000196.

    Abstract

    The results of a self-paced reading study with German second language (L2) learners of Dutch showed that noun animacy affected the learners' on-line commitments when comprehending relative clauses in their L2. Earlier research has found that German L2 learners of Dutch do not show an on-line preference for subject–object word order in temporarily ambiguous relative clauses when no disambiguating material is available prior to the auxiliary verb. We investigated whether manipulating the animacy of the ambiguous noun phrases would push the learners to make an on-line commitment to either a subject- or object-first analysis. Results showed they performed like Dutch native speakers in that their reading times reflected an interaction between topichood and animacy in the on-line assignment of grammatical roles.
  • Janse, E., De Bree, E., & Brouwer, S. (2010). Decreased sensitivity to phonemic mismatch in spoken word processing in adult developmental dyslexia. Journal of Psycholinguistic Research, 39(6), 523-539. doi:10.1007/s10936-010-9150-2.

    Abstract

    Initial lexical activation in typical populations is a direct reflection of the goodness of fit between the presented stimulus and the intended target. In this study, lexical activation was investigated upon presentation of polysyllabic pseudowords (such as procodile for crocodile) for the atypical population of dyslexic adults to see to what extent mismatching phonemic information affects lexical activation in the face of overwhelming support for one specific lexical candidate. Results of an auditory lexical decision task showed that sensitivity to phonemic mismatch was less in the dyslexic population, compared to the respective control group. However, the dyslexic participants were outperformed by their controls only for word-initial mismatches. It is argued that a subtle speech decoding deficit affects lexical activation levels and makes spoken word processing less robust against distortion.
  • Janse, E., Sennema, A., & Slis, A. (2000). Fast speech timing in Dutch: The durational correlates of lexical stress and pitch accent. In Proceedings of the VIth International Conference on Spoken Language Processing, Vol. III (pp. 251-254).

    Abstract

    In this study we investigated the durational correlates of lexical stress and pitch accent at normal and fast speech rate in Dutch. Previous literature on English shows that durations of lexically unstressed vowels are reduced more than stressed vowels when speakers increase their speech rate. We found that the same holds for Dutch, irrespective of whether the unstressed vowel is schwa or a "full" vowel. Along the same lines, we expected that vowels in words without a pitch accent would be shortened relatively more than vowels in words with a pitch accent. This was not the case: if anything, the accented vowels were shortened relatively more than the unaccented vowels. We conclude that duration is an important cue for lexical stress, but not for pitch accent.
  • Janse, E. (2000). Intelligibility of time-compressed speech: Three ways of time-compression. In Proceedings of the VIth International Conference on Spoken Language Processing, vol. III (pp. 786-789).

    Abstract

    Studies on fast speech have shown that word-level timing of fast speech differs from that of normal rate speech in that unstressed syllables are shortened more than stressed syllables as speech rate increases. An earlier experiment showed that the intelligibility of time-compressed speech could not be improved by making its temporal organisation closer to natural fast speech. To test the hypothesis that segmental intelligibility is more important than prosodic timing in listening to time-compressed speech, the intelligibility of bisyllabic words was tested in three time-compression conditions: either stressed and unstressed syllable were compressed to the same degree, or the stressed syllable was compressed more than the unstressed syllable, or the reverse. As was found before, imitating word-level timing of fast speech did not improve intelligibility over linear compression. However, the results did not confirm the hypothesis either: there was no difference in intelligibility between the three compression conditions. We conclude that segmental intelligibility plays an important role, but further research is necessary to decide between the contributions of prosody and segmental intelligibility to the word-level intelligibility of time-compressed speech.
  • Janse, E. (2010). Spoken word processing and the effect of phonemic mismatch in aphasia. Aphasiology, 24(1), 3-27. doi:10.1080/02687030802339997.

    Abstract

    Background: There is evidence that, unlike in typical populations, initial lexical activation upon hearing spoken words in aphasic patients is not a direct reflection of the goodness of fit between the presented stimulus and the intended target. Earlier studies have mainly used short monosyllabic target words. Short words are relatively difficult to recognise because they are not highly redundant: changing one phoneme will often result in a (similar-sounding) different word. Aims: The present study aimed to investigate sensitivity of the lexical recognition system in aphasia. The focus was on longer words that contain more redundancy, to investigate whether aphasic adults might be impaired in deactivation of strongly activated lexical candidates. This was done by studying lexical activation upon presentation of spoken polysyllabic pseudowords (such as procodile) to see to what extent mismatching phonemic information leads to deactivation in the face of overwhelming support for one specific lexical candidate. Methods & Procedures: Speeded auditory lexical decision was used to investigate response time and accuracy to pseudowords with a word-initial or word-final phonemic mismatch in 21 aphasic patients and in an age-matched control group. Outcomes & Results: Results of an auditory lexical decision task showed that aphasic participants were less sensitive to phonemic mismatch if there was strong evidence for one particular lexical candidate, compared to the control group. Classifications of patients as Broca's vs Wernicke's or as fluent vs non-fluent did not reveal differences in sensitivity to mismatch between aphasia types. There was no reliable relationship between measures of auditory verbal short-term memory and lexical decision performance. Conclusions: It is argued that the aphasic results can best be viewed as lexical “overactivation” and that a verbal short-term memory account is less appropriate.
  • Janse, E., & Jesse, A. (2014). Working memory affects older adults’ use of context in spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 1842-1862. doi:10.1080/17470218.2013.879391.

    Abstract

    Many older listeners report difficulties in understanding speech in noisy situations. Working memory and other cognitive skills may modulate, however, older listeners’ ability to use context information to alleviate the effects of noise on spoken-word recognition. In the present study, we investigated whether working memory predicts older adults’ ability to immediately use context information in the recognition of words embedded in sentences, presented in different listening conditions. In a phoneme-monitoring task, older adults were asked to detect as fast and as accurately as possible target phonemes in sentences spoken by a target speaker. Target speech was presented without noise, with fluctuating speech-shaped noise, or with competing speech from a single distractor speaker. The gradient measure of contextual probability (derived from a separate offline rating study) mainly affected the speed of recognition, with only a marginal effect on detection accuracy. Contextual facilitation was modulated by older listeners’ working memory and age across listening conditions. Working memory and age, as well as hearing loss, were also the most consistent predictors of overall listening performance. Older listeners’ immediate benefit from context in spoken-word recognition thus relates to their ability to keep and update a semantic representation of the sentence content in working memory.
  • Janzen, G., Herrmann, T., Katz, S., & Schweizer, K. (2000). Oblique Angled Intersections and Barriers: Navigating through a Virtual Maze. In Spatial Cognition II (pp. 277-294). Berlin: Springer.

    Abstract

    The configuration of a spatial layout has a substantial effect on the acquisition and the representation of the environment. In four experiments, we investigated navigation difficulties arising at oblique angled intersections. In the first three studies we investigated specific arrow-fork configurations. Decision latencies and error rates differ depending on the branch subjects use to enter the intersection. If subjects see the intersection as a fork, it is more difficult to find the correct way than if it is seen as an arrow. In a fourth study we investigated different heuristics people use while making a detour around a barrier. Detour behaviour varies with perspective: if subjects learn and navigate through the maze in a field perspective, they use a heuristic of preferring right-angled paths; if they have a view from above and acquire their knowledge in an observer perspective, they use oblique angled paths more often.
  • Järvikivi, J., & Pyykkönen, P. (2010). Lauseiden ymmärtäminen [Engl. Sentence comprehension]. In P. Korpilahti, O. Aaltonen, & M. Laine (Eds.), Kieli ja aivot: Kommunikaation perusteet, häiriöt ja kuntoutus (pp. 117-125). Turku: Turku yliopisto.

    Abstract

    When we listen to speech or read text, we immediately begin to construct a coherent interpretation. Unlike in reading, in speech perception the listener can rarely control the rate at which they are spoken to. Despite the very rapid input, about 4-7 syllables per second, people are able to interpret speech quite effortlessly. Research on sentence comprehension therefore investigates how such a rapid and mostly effortless interpretation process takes place, which cognitive processes participate in real-time interpretation, and what information people exploit at each stage of processing to form a consistent interpretation. This chapter is an overview of the processes of sentence comprehension and their study. We briefly discuss processing models, the relationship between adult and child language, the interpretation of referential relations within and between sentences, and the role of the sensory environment and motor action in the sentence interpretation process.
  • Järvikivi, J., Vainio, M., & Aalto, D. (2010). Real-time correlates of phonological quantity reveal unity of tonal and non-tonal languages. Plos One, 5(9), e12603. doi:10.1371/journal.pone.0012603.

    Abstract

    Discrete phonological phenomena form our conscious experience of language: continuous changes in pitch appear as distinct tones to the speakers of tone languages, whereas the speakers of quantity languages experience duration categorically. The categorical nature of our linguistic experience is directly reflected in the traditionally clear-cut linguistic classification of languages into tonal or non-tonal. However, some evidence suggests that duration and pitch are fundamentally interconnected and co-vary in signaling word meaning in non-tonal languages as well. We show that pitch information affects real-time language processing in a (non-tonal) quantity language. The results suggest that there is no unidirectional causal link from a genetically-based perceptual sensitivity towards pitch information to the appearance of a tone language. They further suggest that the contrastive categories tone and quantity may be based on simultaneously co-varying properties of the speech signal and the processing system, even though the conscious experience of the speakers may highlight only one discrete variable at a time.
  • Jasmin, K., & Casasanto, D. (2010). Stereotyping: How the QWERTY keyboard shapes the mental lexicon [Abstract]. In Proceedings of the 16th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2010] (pp. 159). York: University of York.
  • Jesse, A., Reinisch, E., & Nygaard, L. C. (2010). Learning of adjectival word meaning through tone of voice [Abstract]. Journal of the Acoustical Society of America, 128, 2475.

    Abstract

    Speakers express word meaning through systematic but non-canonical acoustic variation of tone of voice (ToV), i.e., variation of speaking rate, pitch, vocal effort, or loudness. Words are, for example, pronounced at a higher pitch when referring to small than to big referents. In the present study, we examined whether listeners can use ToV to learn the meaning of novel adjectives (e.g., “blicket”). During training, participants heard sentences such as “Can you find the blicket one?” spoken with ToV representing hot-cold, strong-weak, and big-small. Participants’ eye movements to two simultaneously shown objects with properties representing the relevant two endpoints (e.g., an elephant and an ant for big-small) were monitored. Assignment of novel adjectives to endpoints was counterbalanced across participants. During test, participants heard the sentences spoken with a neutral ToV, while seeing old or novel picture pairs varying along the same dimensions (e.g., a truck and a car for big-small). Participants had to click on the adjective’s referent. As evident from eye movements, participants did not infer the intended meaning during first exposure, but learned the meaning with the help of ToV during training. At test listeners applied this knowledge to old and novel items even in the absence of informative ToV.
  • Jesse, A., & McQueen, J. M. (2014). Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition. Quarterly Journal of Experimental Psychology, 67, 793-808. doi:10.1080/17470218.2013.834371.

    Abstract

    Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., 'ca-vi from cavia "guinea pig" vs. 'ka-vi from kaviaar "caviar"). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-'jec from projector "projector" vs. 'pro-jec from projectiel "projectile"), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.
  • Jesse, A., & Massaro, D. W. (2010). Seeing a singer helps comprehension of the song's lyrics. Psychonomic Bulletin & Review, 17, 323-328.

    Abstract

    When listening to speech, we often benefit when also seeing the speaker talk. If this benefit is not domain-specific for speech, then the recognition of sung lyrics should likewise benefit from seeing the singer. Nevertheless, previous research failed to obtain a substantial improvement in that domain. Our study shows that this failure was not due to inherent differences between singing and speaking but rather to less informative visual presentations. By presenting a professional singer, we found a substantial audiovisual benefit of about 35% improvement for lyrics recognition. This benefit was further robust across participants, phrases, and repetition of the test materials. Our results provide the first evidence that lyrics recognition just like speech and music perception is a multimodal process.
  • Jesse, A., Vrignaud, N., Cohen, M. M., & Massaro, D. W. (2000). The processing of information from multiple sources in simultaneous interpreting. Interpreting, 5(2), 95-115. doi:10.1075/intp.5.2.04jes.

    Abstract

    Language processing is influenced by multiple sources of information. We examined whether performance in simultaneous interpreting would improve when two sources of information were provided, the auditory speech as well as corresponding lip movements, in comparison to presenting the auditory speech alone. Although there was an improvement in sentence recognition when presented with visible speech, there was no difference in performance between these two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. The reason why visual speech did not contribute to performance could be the presentation of the auditory signal without noise (Massaro, 1998). This hypothesis should be tested in the future. Furthermore, it should be investigated whether an effect of visible speech can be found in other contexts, where visual information could provide cues to emotions, prosody, or syntax.