Publications

  • Francks, C., Fisher, S. E., Marlow, A. J., MacPhie, I. L., Taylor, K. E., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2003). Familial and genetic effects on motor coordination, laterality, and reading-related cognition. American Journal of Psychiatry, 160(11), 1970-1977. doi:10.1176/appi.ajp.160.11.1970.

    Abstract

    OBJECTIVE: Recent research has provided evidence for a genetically mediated association between language or reading-related cognitive deficits and impaired motor coordination. Other studies have identified relationships between lateralization of hand skill and cognitive abilities. With a large sample, the authors aimed to investigate genetic relationships between measures of reading-related cognition, hand motor skill, and hand skill lateralization.

    METHOD: The authors applied univariate and bivariate correlation and familiality analyses to a range of measures. They also performed genomewide linkage analysis of hand motor skill in a subgroup of 195 sibling pairs.

    RESULTS: Hand motor skill was significantly familial (maximum heritability=41%), as were reading-related measures. Hand motor skill was weakly but significantly correlated with reading-related measures, such as nonword reading and irregular word reading. However, these correlations were not significantly familial in nature, and the authors did not observe linkage of hand motor skill to any chromosomal regions implicated in susceptibility to dyslexia. Lateralization of hand skill was not correlated with reading or cognitive ability.

    CONCLUSIONS: The authors confirmed a relationship between lower motor ability and poor reading performance. However, the genetic effects on motor skill and reading ability appeared to be largely or wholly distinct, suggesting that the correlation between these traits may have arisen from environmental influences. Finally, the authors found no evidence that reading disability and/or low general cognitive ability were associated with ambidexterity.
  • Francks, C. (2011). Leucine-rich repeat genes and the fine-tuning of synapses. Biological Psychiatry, 69, 820-821. doi:10.1016/j.biopsych.2010.12.018.
  • Francks, C., DeLisi, L. E., Shaw, S. H., Fisher, S. E., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2003). Parent-of-origin effects on handedness and schizophrenia susceptibility on chromosome 2p12-q11. Human Molecular Genetics, 12(24), 3225-3230. doi:10.1093/hmg/ddg362.

    Abstract

    Schizophrenia and non-right-handedness are moderately associated, and both traits are often accompanied by abnormalities of asymmetrical brain morphology or function. We have found linkage previously of chromosome 2p12-q11 to a quantitative measure of handedness, and we have also found linkage of schizophrenia/schizoaffective disorder to this same chromosomal region in a separate study. Now, we have found that in one of our samples (191 reading-disabled sibling pairs), the relative hand skill of siblings was correlated more strongly with paternal than maternal relative hand skill. This led us to re-analyse 2p12-q11 under parent-of-origin linkage models. We found linkage of relative hand skill in the RD siblings to 2p12-q11 with P=0.0000037 for paternal identity-by-descent sharing, whereas the maternally inherited locus was not linked to the trait (P>0.2). Similarly, in affected-sib-pair analysis of our schizophrenia dataset (241 sibling pairs), we found linkage to schizophrenia for paternal sharing with LOD=4.72, P=0.0000016, within 3 cM of the peak linkage to relative hand skill. Maternal linkage across the region was weak or non-significant. These similar paternal-specific linkages suggest that the causative genetic effects on 2p12-q11 are related. The linkages may be due to a single maternally imprinted influence on lateralized brain development that contains common functional polymorphisms.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2003). A model for knowledge-based pronoun resolution. In F. Detje, D. Dörner, & H. Schaub (Eds.), The logic of cognitive systems (pp. 245-246). Bamberg: Otto-Friedrich Universität.

    Abstract

    Several sources of information are used in choosing the intended referent of an ambiguous pronoun. The two sources considered in this paper are foregrounding and context. The first refers to the accessibility of discourse entities. An entity that is foregrounded is more likely to become the pronoun’s referent than an entity that is not. Context information affects pronoun resolution when world knowledge is needed to find the referent. The model presented here simulates how world knowledge invoked by context, together with foregrounding, influences pronoun resolution. It was developed as an extension to the Distributed Situation Space (DSS) model of knowledge-based inferencing in story comprehension (Frank, Koppen, Noordman, & Vonk, 2003), which shall be introduced first.
  • Frank, S. L., Koppen, M., Noordman, L. G. M., & Vonk, W. (2003). Modeling knowledge-based inferences in story comprehension. Cognitive Science, 27(6), 875-910. doi:10.1016/j.cogsci.2003.07.002.

    Abstract

    A computational model of inference during story comprehension is presented, in which story situations are represented distributively as points in a high-dimensional “situation-state space.” This state space organizes itself on the basis of a constructed microworld description. From the same description, causal/temporal world knowledge is extracted. The distributed representation of story situations is more flexible than Golden and Rumelhart’s [Discourse Proc 16 (1993) 203] localist representation. A story taking place in the microworld corresponds to a trajectory through situation-state space. During the inference process, world knowledge is applied to the story trajectory. This results in an adjusted trajectory, reflecting the inference of propositions that are likely to be the case. Although inferences do not result from a search for coherence, they do cause story coherence to increase. The results of simulations correspond to empirical data concerning inference, reading time, and depth of processing. An extension of the model for simulating story retention shows how coherence is preserved during retention without controlling the retention process. Simulation results correspond to empirical data concerning story recall and intrusion.
  • Gaby, A., & Faller, M. (2003). Reciprocity questionnaire. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 77-80). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877641.

    Abstract

    This project is part of a collaborative project with the research group “Reciprocals across languages” led by Nick Evans. One goal of this project is to develop a typology of reciprocals. This questionnaire is designed to help field workers get an overview over the type of markers used in the expression of reciprocity in the language studied.
  • Ganushchak, L. Y., Verdonschot, R. G., & Schiller, N. O. (2011). When leaf becomes neuter: Event related potential evidence for grammatical gender transfer in bilingualism. Neuroreport, 22(3), 106-110. doi:10.1097/WNR.0b013e3283427359.

    Abstract

    This study addressed the question as to whether grammatical properties of a first language are transferred to a second language. Dutch-English bilinguals classified Dutch words in white print according to their grammatical gender and colored words (i.e. Dutch common and neuter words, and their English translations) according to their color. Both classifications were made with either the same hand (congruent trials) or different hands (incongruent trials). Performance was more erroneous and the error-related negativity was enhanced on incongruent compared with congruent trials. This effect was independent of the language in which words were presented. These results provide evidence that bilinguals may transfer grammatical characteristics of their first language to a second language, even when such characteristics are absent in the grammar of the latter.

  • Ganushchak, L. Y., Christoffels, I., & Schiller, N. (2011). The use of electroencephalography (EEG) in language production research: A review. Frontiers in Psychology, 2, 208. doi:10.3389/fpsyg.2011.00208.

    Abstract

    Speech production research long avoided electrophysiological experiments due to the suspicion that artifacts caused by the muscle activity of overt speech may lead to a poor signal-to-noise ratio in the measurements. Researchers have therefore sought to assess speech production using indirect speech production tasks, such as tacit or implicit naming, delayed naming, or metalinguistic tasks, such as phoneme monitoring. Covert speech may, however, involve different processes than overt speech production. Recently, overt speech has been investigated using EEG. The steadily rising number of published papers clearly indicates the increasing interest in and demand for overt speech research within the cognitive neuroscience of language. Our main goal here is to review all currently available results of overt speech production involving EEG measurements, such as picture naming, Stroop naming, and reading aloud. We conclude that overt speech production can be successfully studied using electrophysiological measures, for instance, event-related brain potentials (ERPs). We discuss possible relevant components in the ERP waveform of speech production and aim to address the issue of how to interpret the results of ERP research using overt speech, and whether the ERP components in language production are comparable to results from other fields.
  • Gertz, J., Varley, K. E., Reddy, T. E., Bowling, K. M., Pauli, F., Parker, S. L., Kucera, K. S., Willard, H. F., & Myers, R. M. (2011). Analysis of DNA Methylation in a three-generation family reveals widespread genetic influence on epigenetic regulation. PLoS Genetics, 7, e1002228. doi:10.1371/journal.pgen.1002228.

    Abstract

    The methylation of cytosines in CpG dinucleotides is essential for cellular differentiation and the progression of many cancers, and it plays an important role in gametic imprinting. To assess variation and inheritance of genome-wide patterns of DNA methylation simultaneously in humans, we applied reduced representation bisulfite sequencing (RRBS) to somatic DNA from six members of a three-generation family. We observed that 8.1% of heterozygous SNPs are associated with differential methylation in cis, which provides a robust signature for Mendelian transmission and relatedness. The vast majority of differential methylation between homologous chromosomes (>92%) occurs on a particular haplotype as opposed to being associated with the gender of the parent of origin, indicating that genotype affects DNA methylation of far more loci than does gametic imprinting. We found that 75% of genotype-dependent differential methylation events in the family are also seen in unrelated individuals and that overall genotype can explain 80% of the variation in DNA methylation. These events are under-represented in CpG islands, enriched in intergenic regions, and located in regions of low evolutionary conservation. Even though they are generally not in functionally constrained regions, 22% (twice as many as expected by chance) of genes harboring genotype-dependent DNA methylation exhibited allele-specific gene expression as measured by RNA-seq of a lymphoblastoid cell line, indicating that some of these events are associated with gene expression differences. Overall, our results demonstrate that the influence of genotype on patterns of DNA methylation is widespread in the genome and greatly exceeds the influence of imprinting on genome-wide methylation patterns.
  • Ghatan, P. H., Hsieh, J. C., Petersson, K. M., Stone-Elander, S., & Ingvar, M. (1998). Coexistence of attention-based facilitation and inhibition in the human cortex. NeuroImage, 7, 23-29.

    Abstract

    A key function of attention is to select an appropriate subset of available information by facilitation of attended processes and/or inhibition of irrelevant processing. Functional imaging studies, using positron emission tomography, have revealed, during different experimental tasks, decreased neuronal activity in areas that process input from unattended sensory modalities. It has been hypothesized that these decreases reflect a selective inhibitory modulation of nonrelevant cortical processing. In this study we addressed this question using a continuous arithmetical task with and without concomitant disturbing auditory input (task-irrelevant speech). During the arithmetical task, irrelevant speech did not affect task performance but yielded decreased activity in the auditory and midcingulate cortices and increased activity in the left posterior parietal cortex. This pattern of modulation is consistent with a top-down inhibitory modulation of a nonattended input to the auditory cortex and a coexisting, attention-based facilitation of task-relevant processing in higher-order cortices. These findings suggest that task-related decreases in cortical activity may be of functional importance in the understanding of both attentional mechanisms and task-related information processing.
  • Gillespie, K., & San Roque, L. (2011). Music and language in Duna pikono. In A. Rumsey, & D. Niles (Eds.), Sung tales from the Papua New Guinea Highlands: Studies in form, meaning and sociocultural context (pp. 49-63). Canberra: ANU E Press.
  • Gisselgard, J., Petersson, K. M., Baddeley, A., & Ingvar, M. (2003). The irrelevant speech effect: A PET study. Neuropsychologia, 41, 1899-1911. doi:10.1016/S0028-3932(03)00122-2.

    Abstract

    Positron emission tomography (PET) was performed in normal volunteers during a serial recall task under the influence of irrelevant speech comprising both single-item repetition and multi-item sequences. An interaction approach was used to identify brain areas specifically related to the irrelevant speech effect. We interpreted activations as compensatory recruitment of complementary working memory processing, and decreased activity in terms of suppression of task-relevant areas invoked by the irrelevant speech. The interaction between the distractors and working memory revealed a significant effect in the left, and to a lesser extent in the right, superior temporal region, indicating that initial phonological processing was relatively suppressed. Additional areas of decreased activity were observed in an a priori defined cortical network related to verbal working memory, incorporating the bilateral superior temporal and inferior/middle frontal cortices, extending into Broca’s area on the left. We also observed a weak activation in the left inferior parietal cortex, a region suggested to reflect the phonological store, the subcomponent where the interference is assumed to take place. The results suggest that the irrelevant speech effect is correlated with, and thus tentatively may be explained in terms of, a suppression of components of the verbal working memory network as outlined. The results can be interpreted in terms of inhibitory top-down attentional mechanisms attenuating the influence of the irrelevant speech, although additional studies are clearly necessary to more fully characterize the nature of this phenomenon and its theoretical implications for existing short-term memory models.
  • Glaser, B., Gunnell, D., Timpson, N. J., Joinson, C., Zammit, S., Smith, G. D., & Lewis, G. (2011). Age- and puberty-dependent association between IQ score in early childhood and depressive symptoms in adolescence. Psychological Medicine, 41(2), 333-343. doi:10.1017/S0033291710000814.

    Abstract

    BACKGROUND: Lower cognitive functioning in early childhood has been proposed as a risk factor for depression in later life, but its association with depressive symptoms during adolescence has rarely been investigated. Our study examines the relationship between total intelligence quotient (IQ) score at age 8 years and depressive symptoms at 11, 13, 14 and 17 years. METHOD: Study participants were 5250 children and adolescents from the Avon Longitudinal Study of Parents and their Children (ALSPAC), UK, for whom longitudinal data on depressive symptoms were available. IQ was assessed with the Wechsler Intelligence Scale for Children III, and self-reported depressive symptoms were measured with the Short Mood and Feelings Questionnaire (SMFQ). RESULTS: Multi-level analysis on continuous SMFQ scores showed that IQ at age 8 years was inversely associated with depressive symptoms at age 11 years, but the association changed direction by age 13 and 14 years (age-IQ interaction, p<0.0001; age squared-IQ interaction, p<0.0001), when a higher IQ score was associated with a higher risk of depressive symptoms. This change in IQ effect was also found in relation to pubertal stage (pubertal stage-IQ interaction, p=0.00049).

  • Graham, S. A., Antonopoulos, A., Hitchen, P. G., Haslam, S. M., Dell, A., Drickamer, K., & Taylor, M. E. (2011). Identification of neutrophil granule glycoproteins as Lewisx-containing ligands cleared by the scavenger receptor C-type lectin. Journal of Biological Chemistry, 286, 24336-24349. doi:10.1074/jbc.M111.244772.

    Abstract

    The scavenger receptor C-type lectin (SRCL) is a glycan-binding receptor that has the capacity to mediate endocytosis of glycoproteins carrying terminal Lewis(x) groups (Galβ1-4(Fucα1-3)GlcNAc). A screen for glycoprotein ligands for SRCL using affinity chromatography on immobilized SRCL followed by mass spectrometry-based proteomic analysis revealed that soluble glycoproteins from secondary granules of neutrophils, including lactoferrin and matrix metalloproteinases 8 and 9, are major ligands. Binding competition and surface plasmon resonance analysis showed affinities in the low micromolar range. Comparison of SRCL binding to neutrophil and milk lactoferrin indicates that the binding is dependent on cell-specific glycosylation in the neutrophils, as the milk form of the glycoprotein is a much poorer ligand. Binding to neutrophil glycoproteins is fucose-dependent, and mass spectrometry-based glycomic analysis of neutrophil and milk lactoferrin was used to establish a correlation between high-affinity binding to SRCL and the presence of multiple, clustered terminal Lewis(x) groups on a heterogeneous mixture of branched glycans, some with poly-N-acetyllactosamine extensions. The ability of SRCL to mediate uptake of neutrophil lactoferrin was confirmed using fibroblasts transfected with SRCL. The common presence of Lewis(x) groups in granule protein glycans can thus target granule proteins for clearance by SRCL. PCR and immunohistochemical analysis confirms that SRCL is widely expressed on endothelial cells and thus represents a distributed system which could scavenge released neutrophil glycoproteins both locally at sites of inflammation and systemically when they are released into the circulation.

  • Gretsch, P. (2003). Omission impossible?: Topic and Focus in Focal Ellipsis. In K. Schwabe, & S. Winkler (Eds.), The Interfaces: Deriving and interpreting omitted structures (pp. 341-365). Amsterdam: John Benjamins.
  • Le Guen, O. (2011). Materiality vs. expressivity: The use of sensory vocabulary in Yucatec Maya. The Senses & Society, 6(1), 117-126. doi:10.2752/174589311X12893982233993.

    Abstract

    In this article, sensory vocabulary relating to color, texture, and other sensory experiences in Yucatec Maya (a language spoken in Mexico) is examined, and its possible relation to material culture practices explored. In Yucatec Maya, some perceptual experience can be expressed in a fine-grained way through a compact one-word adjective. Complex notions can be succinctly expressed by combining roots with a general meaning and applying templates or compounds to those sensory roots. For instance, the root tak’, which means ‘adhere/adherence,’ can be derived to express the notion of ‘dirty red’ chak-tak’-e’en or ‘sticky with an unbounded pattern’ tak’aknak, or the root ts’ap ‘piled-up’ can express ‘several tones of green (e.g. in the forest)’ ya’axts’ape’en or ‘piled-up, known through a tactile experience’ ts’aplemak. The productive nature of this linguistic system seems at first glance to be very well fitted to orient practices relating to the production of local material culture. In examining several hours of video-recorded natural data contrasting work and non-work directed interactions, it emerges that sensory vocabulary is not used for calibrating knowledge but is instead recruited by speakers to achieve vividness in an effort to verbally reproduce the way speakers experience percepts.
  • Le Guen, O. (2011). Modes of pointing to existing spaces and the use of frames of reference. Gesture, 11, 271-307. doi:10.1075/gest.11.3.02leg.

    Abstract

    This paper aims at providing a systematic framework for investigating differences in how people point to existing spaces. Pointing is considered according to two conditions: (1) A non-transposed condition where the body of the speaker always constitutes the origo and where the various types of pointing are differentiated by the status of the target and (2) a transposed condition where both the distant figure and the distant ground are identified and their relation specified according to two frames of reference (FoRs): the egocentric FoR (where spatial relationships are coded with respect to the speaker's point of view) and the geocentric FoR (where spatial relationships are coded in relation to external cues in the environment). The preference for one or the other frame of reference not only has consequences for pointing to real spaces but has some resonance in other domains, constraining the production of gesture in these related domains.
  • Le Guen, O. (2003). Quand les morts reviennent, réflexion sur l'ancestralité chez les Mayas des Basses Terres. Journal de la Société des Américanistes, 89(2), 171-205.

    Abstract

    When the dead come home… Remarks on ancestor worship among the Lowland Mayas. In Amerindian ethnographic literature, ancestor worship is often mentioned, but evidence of its existence is lacking. This article attempts to demonstrate that some Lowland Maya do worship ancestors, using precise criteria drawn from ethnological studies of societies where ancestor worship is common and comparing them with Maya beliefs and practices. All Souls’ Day, or hanal pixan, seems to be the most significant manifestation of this cult. Our approach is comparative through time, using colonial sources and twentieth-century ethnographic data, and through space, considering the practices and beliefs of two Maya groups, the Yucatec and the Lacandon Maya.
  • Le Guen, O. (2011). Speech and gesture in spatial language and cognition among the Yucatec Mayas. Cognitive Science, 35, 905-938. doi:10.1111/j.1551-6709.2011.01183.x.

    Abstract

    In previous analyses of the influence of language on cognition, speech has been the main channel examined. In studies conducted among Yucatec Mayas, efforts to determine the preferred frame of reference in use in this community have failed to reach an agreement (Bohnemeyer & Stolz, 2006; Levinson, 2003 vs. Le Guen, 2006, 2009). This paper argues for a multimodal analysis of language that encompasses gesture as well as speech, and shows that the preferred frame of reference in Yucatec Maya is only detectable through the analysis of co-speech gesture and not through speech alone. A series of experiments compares knowledge of the semantics of spatial terms, performance on nonlinguistic tasks and gestures produced by men and women. The results show a striking gender difference in the knowledge of the semantics of spatial terms, but an equal preference for a geocentric frame of reference in nonverbal tasks. In a localization task, participants used a variety of strategies in their speech, but they all exhibited a systematic preference for a geocentric frame of reference in their gestures.
  • Gullberg, M. (2003). Eye movements and gestures in human face-to-face interaction. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eyes: Cognitive and applied aspects of eye movements (pp. 685-703). Oxford: Elsevier.

    Abstract

    Gestures are visuospatial events, meaning carriers, and social interactional phenomena. As such they constitute a particularly favourable area for investigating visual attention in a complex everyday situation under conditions of competitive processing. This chapter discusses visual attention to spontaneous gestures in human face-to-face interaction as explored with eye-tracking. Some basic fixation patterns are described, live and video-based settings are compared, and preliminary results on the relationship between fixations and information processing are outlined.
  • Gullberg, M., & Kita, S. (2003). Das Beachten von Gesten: Eine Studie zu Blickverhalten und Integration gestisch ausgedrückter Informationen. In Max-Planck-Gesellschaft (Ed.), Jahrbuch der Max Planck Gesellschaft 2003 (pp. 949-953). Göttingen: Vandenhoeck & Ruprecht.
  • Gullberg, M. (2003). Gestures, referents, and anaphoric linkage in learner varieties. In C. Dimroth, & M. Starren (Eds.), Information structure, linguistic structure and the dynamics of language acquisition (pp. 311-328). Amsterdam: Benjamins.

    Abstract

    This paper discusses how the gestural modality can contribute to our understanding of anaphoric linkage in learner varieties, focusing on gestural anaphoric linkage marking the introduction, maintenance, and shift of reference in story retellings by learners of French and Swedish. The comparison of gestural anaphoric linkage in native and non-native varieties reveals what appears to be a particular learner variety of gestural cohesion, which closely reflects the characteristics of anaphoric linkage in learners' speech. Specifically, particular forms co-occur with anaphoric gestures depending on the information organisation in discourse. The typical nominal over-marking of maintained referents or topic elements in speech is mirrored by gestural (over-)marking of the same items. The paper discusses two ways in which this finding may further the understanding of anaphoric over-explicitness of learner varieties. An addressee-based communicative perspective on anaphoric linkage highlights how over-marking in gesture and speech may be related to issues of hyper-clarity and ambiguity. An alternative speaker-based perspective is also explored in which anaphoric over-marking is seen as related to L2 speech planning.
  • Gullberg, M. (2011). Multilingual multimodality: Communicative difficulties and their solutions in second-language use. In J. Streeck, C. Goodwin, & C. LeBaron (Eds.), Embodied interaction: Language and body in the material world (pp. 137-151). Cambridge: Cambridge University Press.

    Abstract

    Using a poorly mastered second language (L2) in interaction with a native speaker is a challenging task. This paper explores how L2 speakers and their native interlocutors together deploy gestures and speech to sustain problematic interaction. Drawing on native and non-native interactions in Swedish, French, and Dutch, I examine lexical, grammatical and interaction-related problems in turn. The analyses reveal that (a) different problems yield behaviours with different formal and interactive properties that are common across the language pairs and the participant roles; (b) native and non-native behaviour differs in degree, not in kind; and (c) that individual communicative style determines behaviour more than the gravity of the linguistic problem. I discuss the implications for theories opposing 'efficient' L2 communication to learning. Also, contra the traditional view of compensatory gestures, I will argue for a multi-functional 'hydraulic' view grounded in gesture theory where speech and gesture are equal partners, but where the weight carried by the modalities shifts depending on expressive pressures.
  • Gullberg, M. (2011). Language-specific encoding of placement events in gestures. In J. Bohnemeyer, & E. Pederson (Eds.), Event representation in language and cognition (pp. 166-188). New York: Cambridge University Press.

    Abstract

    This study focuses on the effect of the semantics of placement verbs on placement event representations. Specifically, it explores to what extent the semantic properties of habitually used verbs guide attention to certain types of spatial information. French, which typically uses a general placement verb (mettre, 'put'), is contrasted with Dutch, which uses a set of fine-grained (semi-)obligatory posture verbs (zetten, leggen, 'set/stand', 'lay'). Analysis of the concomitant gesture production in the two languages reveals a patterning toward two distinct, language-specific event representations. The object being placed is an essential part of the Dutch representation, while French speakers instead focus only on the (path of the) placement movement. These perspectives permeate the entire placement domain regardless of the actual verb used.
  • Gullberg, M. (2011). Thinking, speaking, and gesturing about motion in more than one language. In A. Pavlenko (Ed.), Thinking and speaking in two languages (pp. 143-169). Bristol: Multilingual Matters.

    Abstract

    A key problem in studies of bilingual linguistic cognition is how to probe the details of underlying representations in order to gauge whether bilinguals' conceptualizations differ from those of monolinguals, and if so how. This chapter provides an overview of a line of studies that rely on speech-associated gestures to explore these issues. The gestures of adult monolingual native speakers differ systematically across languages, reflecting consistent differences in what information is selected for expression and how it is mapped onto morphosyntactic devices. Given such differences, gestures can provide more detailed information on how multilingual speakers conceptualize events treated differently in their respective languages, and therefore, ultimately, on the nature of their representations. This chapter reviews a series of studies in the domain of (voluntary and caused) motion event construal. I first discuss speech and gesture evidence for different construals in monolingual native speakers, then review studies on second language speakers showing gestural evidence of persistent L1 construals, shifts to L2 construals, and of bidirectional influences. I consider the implications for theories of ultimate attainment in SLA, transfer and convergence. I will also discuss the methodological implications, namely what gesture data do and do not reveal about linguistic conceptualisation and linguistic relativity proper.
  • Habets, B., Kita, S., Shao, Z., Ozyurek, A., & Hagoort, P. (2011). The role of synchrony and ambiguity in speech–gesture integration during comprehension. Journal of Cognitive Neuroscience, 23, 1845-1854. doi:10.1162/jocn.2010.21462.

    Abstract

    During face-to-face communication, one does not only hear speech but also see a speaker's communicative hand movements. It has been shown that such hand gestures play an important role in communication, where the two modalities influence each other's interpretation. A gesture typically overlaps temporally with coexpressive speech, but the gesture is often initiated before (but not after) the coexpressive speech. The present ERP study investigated what degree of asynchrony in the speech and gesture onsets is optimal for semantic integration of the concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, gesture and speech were presented with three different degrees of asynchrony. In the SOA 0 condition, the gesture onset and the speech onset were simultaneous. In the SOA 160 and 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time-locked to speech onset showed a significant difference between semantically congruent and incongruent gesture–speech combinations on the N400 for the SOA 0 and 160 conditions. No significant difference was found for the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the differences in onsets do not exceed a certain time span, because iconic gestures need speech to be disambiguated in a way relevant to the speech context.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Syntax-related ERP-effects in Dutch. Cognitive Brain Research, 16(1), 38-50. doi:10.1016/S0926-6410(02)00208-2.

    Abstract

    In two studies subjects were required to read Dutch sentences that in some cases contained a syntactic violation, in other cases a semantic violation. All syntactic violations were word category violations. The design excluded differential contributions of expectancy to influence the syntactic violation effects. The syntactic violations elicited an Anterior Negativity between 300 and 500 ms. This negativity was bilateral and had a frontal distribution. Over posterior sites the same violations elicited a P600/SPS starting at about 600 ms. The semantic violations elicited an N400 effect. The topographic distribution of the AN was more frontal than the distribution of the classical N400 effect, indicating that the underlying generators of the AN and the N400 are, at least to a certain extent, non-overlapping. Experiment 2 partly replicated the design of Experiment 1, but with differences in rate of presentation and in the distribution of items over subjects, and without semantic violations. The word category violations resulted in the same effects as were observed in Experiment 1, showing that they were independent of some of the specific parameters of Experiment 1. The discussion presents a tentative account of the functional differences in the triggering conditions of the AN and the P600/SPS.
  • Hagoort, P., Wassenaar, M., & Brown, C. M. (2003). Real-time semantic compensation in patients with agrammatic comprehension: Electrophysiological evidence for multiple-route plasticity. Proceedings of the National Academy of Sciences of the United States of America, 100(7), 4340-4345. doi:10.1073/pnas.0230613100.

    Abstract

    To understand spoken language requires that the brain provides rapid access to different kinds of knowledge, including the sounds and meanings of words, and syntax. Syntax specifies constraints on combining words in a grammatically well-formed manner. Agrammatic patients are deficient in their ability to use these constraints, due to a lesion in the perisylvian area of the language-dominant hemisphere. We report a study on real-time auditory sentence processing in agrammatic comprehenders, examining their ability to accommodate damage to the language system. We recorded event-related brain potentials (ERPs) in agrammatic comprehenders, nonagrammatic aphasics, and age-matched controls. When listening to sentences with grammatical violations, the agrammatic aphasics did not show the same syntax-related ERP effect as the two other subject groups. Instead, the waveforms of the agrammatic aphasics were dominated by a meaning-related ERP effect, presumably reflecting their attempts to achieve understanding by the use of semantic constraints. These data demonstrate that although agrammatic aphasics are impaired in their ability to exploit syntactic information in real time, they can reduce the consequences of a syntactic deficit by exploiting a semantic route. They thus provide evidence for the compensation of a syntactic deficit by a stronger reliance on another route in mapping sound onto meaning. This is a form of plasticity that we refer to as multiple-route plasticity.
  • Hagoort, P. (2011). The binding problem for language, and its consequences for the neurocognition of comprehension. In E. A. Gibson, & N. J. Pearlmutter (Eds.), The processing and acquisition of reference (pp. 403-436). Cambridge, MA: MIT Press.
  • Hagoort, P. (2011). The neuronal infrastructure for unification at multiple levels. In G. Gaskell, & P. Zwitserlood (Eds.), Lexical representation: A multidisciplinary approach (pp. 231-242). Berlin: De Gruyter Mouton.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter. Psychologie, 17, 48-49.
  • Hagoort, P. (2003). De verloving tussen neurowetenschap en psychologie. In K. Hilberdink (Ed.), Interdisciplinariteit in de geesteswetenschappen (pp. 73-81). Amsterdam: KNAW.
  • Hagoort, P. (2003). Die einzigartige, grösstenteils aber unbewusste Fähigkeit der Menschen zu sprachlicher Kommunikation. In G. Kaiser (Ed.), Jahrbuch 2002-2003 / Wissenschaftszentrum Nordrhein-Westfalen (pp. 33-46). Düsseldorf: Wissenschaftszentrum Nordrhein-Westfalen.
  • Hagoort, P. (2003). Functional brain imaging. In W. J. Frawley (Ed.), International encyclopedia of linguistics (pp. 142-145). New York: Oxford University Press.
  • Hagoort, P. (2003). How the brain solves the binding problem for language: A neurocomputational model of syntactic processing. NeuroImage, 20(suppl. 1), S18-S29. doi:10.1016/j.neuroimage.2003.09.013.

    Abstract

    Syntax is one of the components in the architecture of language processing that allows the listener/reader to bind single-word information into a unified interpretation of multiword utterances. This paper discusses ERP effects that have been observed in relation to syntactic processing. The fact that these effects differ from the semantic N400 indicates that the brain honors the distinction between semantic and syntactic binding operations. Two models of syntactic processing attempt to account for syntax-related ERP effects. One type of model is serial, with a first phase that is purely syntactic in nature (syntax-first model). The other type of model is parallel and assumes that information immediately guides the interpretation process once it becomes available. This is referred to as the immediacy model. ERP evidence is presented in support of the latter model. Next, an explicit computational model is proposed to explain the ERP data. This Unification Model assumes that syntactic frames are stored in memory and retrieved on the basis of the spoken or written word form input. The syntactic frames associated with the individual lexical items are unified by a dynamic binding process into a structural representation that spans the whole utterance. On the basis of a meta-analysis of imaging studies on syntax, it is argued that the left posterior inferior frontal cortex is involved in binding syntactic frames together, whereas the left superior temporal cortex is involved in retrieval of the syntactic frames stored in memory. Lesion data that support the involvement of this left frontotemporal network in syntactic processing are discussed.
  • Hagoort, P. (2003). Interplay between syntax and semantics during sentence comprehension: ERP effects of combining syntactic and semantic violations. Journal of Cognitive Neuroscience, 15(6), 883-899. doi:10.1162/089892903322370807.

    Abstract

    This study investigated the effects of combined semantic and syntactic violations in relation to the effects of single semantic and single syntactic violations on language-related event-related brain potential (ERP) effects (N400 and P600/ SPS). Syntactic violations consisted of a mismatch in grammatical gender or number features of the definite article and the noun in sentence-internal or sentence-final noun phrases (NPs). Semantic violations consisted of semantically implausible adjective–noun combinations in the same NPs. Combined syntactic and semantic violations were a summation of these two respective violation types. ERPs were recorded while subjects read the sentences with the different types of violations and the correct control sentences. ERP effects were computed relative to ERPs elicited by the sentence-internal or sentence-final nouns. The size of the N400 effect to the semantic violation was increased by an additional syntactic violation (the syntactic boost). In contrast, the size of the P600/ SPS to the syntactic violation was not affected by an additional semantic violation. This suggests that in the absence of syntactic ambiguity, the assignment of syntactic structure is independent of semantic context. However, semantic integration is influenced by syntactic processing. In the sentence-final position, additional global processing consequences were obtained as a result of earlier violations in the sentence. The resulting increase in the N400 amplitude to sentence-final words was independent of the nature of the violation. A speeded anomaly detection task revealed that it takes substantially longer to detect semantic than syntactic anomalies. These results are discussed in relation to the latency and processing characteristics of the N400 and P600/SPS effects. Overall, the results reveal an asymmetry in the interplay between syntax and semantics during on-line sentence comprehension.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk. Neuropraxis, 6, 204-205.
  • Hagoort, P. (1998). The shadows of lexical meaning in patients with semantic impairments. In B. Stemmer, & H. Whitaker (Eds.), Handbook of neurolinguistics (pp. 235-248). New York: Academic Press.
  • Hammarström, H. (2011). A note on the Maco (Piaroan) language of the lower Ventuari, Venezuela. Cadernos de Etnolingüística, 3(1), 1-11. Retrieved from http://www.etnolinguistica.org/issue:vol3n1.

    Abstract

    The present paper seeks to clarify the position of the Maco [wpc] language of the lower Ventuari, Venezuela, since there has been some uncertainty in the literature on this matter. Maco-Ventuari, not to be confused with other languages with a similar name, is so far poorly documented, but the present paper shows that it is nevertheless possible to establish that it is a dialect of Piaroa or a language closely related to Piaroa.
  • Hammarström, H., & Nordhoff, S. (2011). LangDoc: Bibliographic infrastructure for linguistic typology. Oslo Studies in Language, 3(2), 31-43. Retrieved from https://www.journals.uio.no/index.php/osla/article/view/75.

    Abstract

    The present paper describes the ongoing project LangDoc to make a bibliography website for linguistic typology, with a near-complete database of references to documents that contain descriptive data on the languages of the world. This is intended to provide typologists with a more precise and comprehensive way to search for information on languages, and for the specific kind of information that they are interested in. The annotation scheme devised is a trade-off between annotation effort and search desiderata. The end goal is a website with browse, search, update, new items subscription and download facilities, which can hopefully be enriched by spontaneous collaborative efforts.
  • Hammarström, H., & Borin, L. (2011). Unsupervised learning of morphology. Computational Linguistics, 37(2), 309-350. doi:10.1162/COLI_a_00050.

    Abstract

    This article surveys work on Unsupervised Learning of Morphology. We define Unsupervised Learning of Morphology as the problem of inducing a description (of some kind, even if only morpheme segmentation) of how orthographic words are built up given only raw text data of a language. We briefly go through the history and motivation of this problem. Next, over 200 items of work are listed with a brief characterization, and the most important ideas in the field are critically discussed. We summarize the achievements so far and give pointers for future developments.
  • Hammond, J. (2011). JVC GY-HM100U HD video camera and FFmpeg libraries [Technology review]. Language Documentation and Conservation, 5, 69-80.
  • Hanulikova, A., Mitterer, H., & McQueen, J. M. (2011). Effects of first and second language on segmentation of non-native speech. Bilingualism: Language and Cognition, 14, 506-521. doi:10.1017/S1366728910000428.

    Abstract

    We examined whether Slovak-German bilinguals apply native Slovak phonological and lexical knowledge when segmenting German speech. When Slovaks listen to their native language (Hanulíková, McQueen, & Mitterer, 2010), segmentation is impaired when fixed-stress cues are absent, and, following the Possible-Word Constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997), lexical candidates are disfavored if segmentation leads to vowelless residues, unless those residues are existing Slovak words. In the present study, fixed-stress cues on German target words were again absent. Nevertheless, in support of the PWC, both German and Slovak listeners recognized German words (e.g., Rose "rose") faster in syllable contexts (suckrose) than in single-consonant contexts (krose, trose). But only the Slovak listeners recognized Rose, for example, faster in krose than in trose (k is a Slovak word, t is not). It appears that non-native listeners can suppress native stress segmentation procedures, but that they suffer from prevailing interference from native lexical knowledge.
  • Hanulová, J., Davidson, D. J., & Indefrey, P. (2011). Where does the delay in L2 picture naming come from? Psycholinguistic and neurocognitive evidence on second language word production. Language and Cognitive Processes, 26, 902-934. doi:10.1080/01690965.2010.509946.

    Abstract

    Bilinguals are slower when naming a picture in their second language than when naming it in their first language. Although the phenomenon has been frequently replicated, it is not known what causes the delay in the second language. In this article we discuss at what processing stages a delay might arise according to current models of bilingual processing and how the available behavioural and neurocognitive evidence relates to these proposals. Suggested plausible mechanisms, such as frequency or interference effects, are compatible with a naming delay arising at different processing stages. Haemodynamic and electrophysiological data seem to point to a postlexical stage but are still too scarce to support a definite conclusion.
  • Harbusch, K., & Kempen, G. (2011). Automatic online writing support for L2 learners of German through output monitoring by a natural-language paraphrase generator. In M. Levy, F. Blin, C. Bradin Siskin, & O. Takeuchi (Eds.), WorldCALL: International perspectives on computer-assisted language learning (pp. 128-143). New York: Routledge.

    Abstract

    Students who are learning to write in a foreign language often want feedback on the grammatical quality of the sentences they produce. The usual NLP approach to this problem is based on parsing student-generated text. Here, we propose a generation-based approach aiming at preventing errors ("scaffolding"). In our ICALL system, the student constructs sentences by composing syntactic trees out of lexically anchored "treelets" via a graphical drag & drop user interface. A natural-language generator computes all possible grammatically well-formed sentences entailed by the student-composed tree. It provides positive feedback if the student-composed tree belongs to the well-formed set, and negative feedback otherwise. If so requested by the student, it can substantiate the positive or negative feedback based on a comparison between the student-composed tree and its own trees (informative feedback on demand). In case of negative feedback, the system refuses to build the structure attempted by the student. Frequently occurring errors are handled in terms of "malrules." The system we describe is a prototype (implemented in JAVA and C++) which can be parameterized with respect to L1 and L2, the size of the lexicon, and the level of detail of the visually presented grammatical structures.
  • Hartsuiker, R. J., Huettig, F., & Olivers, C. N. (2011). Visual search and visual world: Interactions among visual attention, language, and working memory (introduction to the special issue). Acta Psychologica, 137(2), 135-137. doi:10.1016/j.actpsy.2011.01.005.
  • Haun, D. B. M. (2003). What's so special about spatial cognition. De Psychonoom, 18, 3-4.
  • Haun, D. B. M., & Waller, D. (2003). Alignment task. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 39-48). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Haun, D. B. M., & Tomasello, M. (2011). Conformity to peer pressure in preschool children. Child Development, 82, 1759-1767. doi:10.1111/j.1467-8624.2011.01666.x.

    Abstract

    Both adults and adolescents often conform their behavior and opinions to peer groups, even when they themselves know better. The current study investigated this phenomenon in 24 groups of 4 children between 4;2 and 4;9 years of age. Children often made their judgments conform to those of 3 peers, who had made obviously erroneous but unanimous public judgments right before them. A follow-up study with 18 groups of 4 children between 4;0 and 4;6 years of age revealed that children did not change their “real” judgment of the situation, but only their public expression of it. Preschool children are subject to peer pressure, indicating sensitivity to peers as a primary social reference group already during the preschool years.
  • Haun, D. B. M. (2011). How odd I am! In M. Brockman (Ed.), Future science: Essays from the cutting edge (pp. 228-235). New York: Random House.

    Abstract

    Cross-culturally, the human mind varies more than we generally assume.
  • Haun, D. B. M. (2011). Memory for body movements in Namibian hunter-gatherer children. Journal of Cognitive Education and Psychology, 10, 56-62.

    Abstract

    Despite the global universality of physical space, different cultural groups vary substantially as to how they memorize it. Although European participants mostly prefer egocentric strategies (“left, right, front, back”) to memorize spatial relations, others use mostly allocentric strategies (“north, south, east, west”). Prior research has shown that some cultures show a general preference to memorize object locations and even also body movements in relation to the larger environment rather than in relation to their own body. Here, we investigate whether this cultural bias also applies to movements specifically directed at the participants' own body, emphasizing the role of ego. We show that even participants with generally allocentric biases preferentially memorize self-directed movements using egocentric spatial strategies. These results demonstrate an intricate system of interacting cultural biases and momentary situational characteristics.
  • Haun, D. B. M., Nawroth, C., & Call, J. (2011). Great apes’ risk-taking strategies in a decision making task. PLoS One, 6(12), e28801. doi:10.1371/journal.pone.0028801.

    Abstract

    We investigate decision-making behaviour in all four non-human great ape species. Apes chose between a safe and a risky option across trials of varying expected values. All species chose the safe option more often with decreasing probability of success. While all species were risk-seeking, orangutans and chimpanzees chose the risky option more often than gorillas and bonobos. Hence all four species' preferences were ordered in a manner consistent with normative dictates of expected value, but varied predictably in their willingness to take risks.
  • Haun, D. B. M., Jordan, F., Vallortigara, G., & Clayton, N. S. (2011). Origins of spatial, temporal and numerical cognition: Insights from comparative psychology [Reprint]. In S. Dehaene, & E. Brannon (Eds.), Space, time and number in the brain. Searching for the foundations of mathematical thought (pp. 191-206). London: Academic Press.

    Abstract

    Contemporary comparative cognition has a large repertoire of animal models and methods, with concurrent theoretical advances that are providing initial answers to crucial questions about human cognition. What cognitive traits are uniquely human? What are the species-typical inherited predispositions of the human mind? What is the human mind capable of without certain types of specific experiences with the surrounding environment? Here, we review recent findings from the domains of space, time and number cognition. These findings are produced using different comparative methodologies relying on different animal species, namely birds and non-human great apes. The study of these species not only reveals the range of cognitive abilities across vertebrates, but also increases our understanding of human cognition in crucial ways.
  • Haun, D. B. M. (2003). Path integration. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 33-38). Nijmegen: Max Planck Institute for Psycholinguistics. doi:10.17617/2.877644.
  • Haun, D. B. M. (2003). Spatial updating. In N. J. Enfield (Ed.), Field research manual 2003, part I: Multimodal interaction, space, event representation (pp. 49-56). Nijmegen: Max Planck Institute for Psycholinguistics.
  • Haun, D. B. M., Rapold, C. J., Janzen, G., & Levinson, S. C. (2011). Plasticity of human spatial memory: Spatial language and cognition covary across cultures. Cognition, 119, 70-80. doi:10.1016/j.cognition.2010.12.009.

    Abstract

    The present paper explores cross-cultural variation in spatial cognition by comparing spatial reconstruction tasks by Dutch and Namibian elementary school children. These two communities differ in the way they predominantly express spatial relations in language. Four experiments investigate cognitive strategy preferences across different levels of task-complexity and instruction. Data show a correlation between dominant linguistic spatial frames of reference and performance patterns in non-linguistic spatial memory tasks. This correlation is shown to be stable across an increase of complexity in the spatial array. When instructed to use their respective non-habitual cognitive strategy, participants were not easily able to switch between strategies and their attempts to do so impaired their performance. These results indicate a difference not only in preference but also in competence and suggest that spatial language and non-linguistic preferences and competences in spatial cognition are systematically aligned across human populations.

  • Hayano, K. (2011). Claiming epistemic primacy: Yo-marked assessments in Japanese. In T. Stivers, L. Mondada, & J. Steensig (Eds.), The morality of knowledge in conversation (pp. 58-81). Cambridge: Cambridge University Press.
  • Hayano, K. (2003). Self-presentation as a face-threatening act: A comparative study of self-oriented topic introduction in English and Japanese. Veritas, 24, 45-58.
  • Hervais-Adelman, A., Davis, M. H., Johnsrude, I. S., Taylor, K. J., & Carlyon, R. P. (2011). Generalization of Perceptual Learning of Vocoded Speech. Journal of Experimental Psychology: Human Perception and Performance, 37(1), 283-295. doi:10.1037/a0020772.

    Abstract

    Recent work demonstrates that learning to understand noise-vocoded (NV) speech alters sublexical perceptual processes but is enhanced by the simultaneous provision of higher-level, phonological, but not lexical content (Hervais-Adelman, Davis, Johnsrude, & Carlyon, 2008), consistent with top-down learning (Davis, Johnsrude, Hervais-Adelman, Taylor, & McGettigan, 2005; Hervais-Adelman et al., 2008). Here, we investigate whether training listeners with specific types of NV speech improves intelligibility of vocoded speech with different acoustic characteristics. Transfer of perceptual learning would provide evidence for abstraction from variable properties of the speech input. In Experiment 1, we demonstrate that learning of NV speech in one frequency region generalizes to an untrained frequency region. In Experiment 2, we assessed generalization among three carrier signals used to create NV speech: noise bands, pulse trains, and sine waves. Stimuli created using these three carriers possess the same slow, time-varying amplitude information and are equated for naive intelligibility but differ in their temporal fine structure. Perceptual learning generalized partially, but not completely, among different carrier signals. These results delimit the functional and neural locus of perceptual learning of vocoded speech. Generalization across frequency regions suggests that learning occurs at a stage of processing at which some abstraction from the physical signal has occurred, while incomplete transfer across carriers indicates that learning occurs at a stage of processing that is sensitive to acoustic features critical for speech perception (e.g., noise, periodicity).
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2011). Executive control of language in the bilingual brain: Integrating the evidence from neuroimaging to neuropsychology. Frontiers in Psychology, 2: 234. doi:10.3389/fpsyg.2011.00234.

    Abstract

    In this review we will focus on delineating the neural substrates of the executive control of language in the bilingual brain, based on the existing neuroimaging, intracranial, transcranial magnetic stimulation, and neuropsychological evidence. We will also offer insights from ongoing brain-imaging studies into the development of expertise in multilingual language control. We will concentrate specifically on evidence regarding how the brain selects and controls languages for comprehension and production. This question has been addressed in a number of ways and using various tasks, including language switching during production or perception, translation, and interpretation. We will attempt to synthesize existing evidence in order to bring to light the neural substrates that are crucial to executive control of language.
  • Hill, C. (2011). Collaborative narration and cross-speaker repetition in Umpila and Kuuku Ya'u. In B. Baker, R. Gardner, M. Harvey, & I. Mushin (Eds.), Indigenous language and social identity: Papers in honour of Michael Walsh (pp. 237-260). Canberra: Pacific Linguistics.
  • Hill, C. (2011). Named and unnamed spaces: Color, kin and the environment in Umpila. The Senses & Society, 6(1), 57-67. doi:10.2752/174589311X12893982233759.

    Abstract

    Imagine describing the particular characteristics of the hue of a flower, or the quality of its scent, or the texture of its petal. Introspection suggests the expression of such sensory experiences in words is something quite different than the task of naming artifacts. The particular challenges in the linguistic encoding of sensorial experiences pose questions regarding how languages manage semantic gaps and “ineffability.” That is, what strategies do speakers have available to manage phenomena or domains of experience that are inexpressible or difficult to express in their language? This article considers this issue with regard to color in Umpila, an Aboriginal Australian language of the Paman family. The investigation of color naming and ineffability in Umpila reveals rich associations and mappings between color and visual perceptual qualities more generally, categorization of the human social world, and the environment. “Gaps” in the color system are filled or supported by associations with two of the most linguistically and culturally salient domains for Umpila: kinship and the environment.
  • Holler, J., & Wilkin, K. (2011). Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior, 35, 133-153. doi:10.1007/s10919-011-0105-6.

    Abstract

    Mimicry has been observed regarding a range of nonverbal behaviors, but only recently have researchers started to investigate mimicry in co-speech gestures. These gestures are considered to be crucially different from other aspects of nonverbal behavior due to their tight link with speech. This study provides evidence of mimicry in co-speech gestures in face-to-face dialogue, the most common forum of everyday talk. In addition, it offers an analysis of the functions that mimicked co-speech gestures fulfill in the collaborative process of creating a mutually shared understanding of referring expressions. The implications bear on theories of gesture production, research on grounding, and the mechanisms underlying behavioral mimicry.
  • Holler, J., & Wilkin, K. (2011). An experimental investigation of how addressee feedback affects co-speech gestures accompanying speakers’ responses. Journal of Pragmatics, 43, 3522-3536. doi:10.1016/j.pragma.2011.08.002.

    Abstract

    There is evidence that co-speech gestures communicate information to addressees and that they are often communicatively intended. However, we still know comparatively little about the role of gestures in the actual process of communication. The present study offers a systematic investigation of speakers’ gesture use before and after addressee feedback. The findings show that when speakers responded to addressees’ feedback gesture rate remained constant when this feedback encouraged clarification, elaboration or correction. However, speakers gestured proportionally less often after feedback when providing confirmatory responses. That is, speakers may not be drawing on gesture in response to addressee feedback per se, but particularly with responses that enhance addressees’ understanding. Further, the large majority of speakers’ gestures changed in their form. They tended to be more precise, larger, or more visually prominent after feedback. Some changes in gesture viewpoint were also observed. In addition, we found that speakers used deixis in speech and gaze to increase the salience of gestures occurring in response to feedback. Speakers appear to conceive of gesture as a useful modality in redesigning utterances to make them more accessible to addressees. The findings further our understanding of recipient design and co-speech gestures in face-to-face dialogue.
    Highlights

    ► Gesture rate remains constant in response to addressee feedback when the response aims to correct or clarify understanding.
    ► But gesture rate decreases when speakers provide confirmatory responses to feedback signalling correct understanding.
    ► Gestures are more communicative in response to addressee feedback, particularly in terms of precision, size and visual prominence.
    ► Speakers make gestures in response to addressee feedback more salient by using deictic markers in speech and gaze.
  • Holler, J., & Beattie, G. (2003). How iconic gestures and speech interact in the representation of meaning: are both aspects really integral to the process? Semiotica, 146, 81-116.
  • Holler, J., & Beattie, G. (2003). Pragmatic aspects of representational gestures: Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3, 127-154.
  • Holler, J. (2011). Verhaltenskoordination, Mimikry und sprachbegleitende Gestik in der Interaktion. Psychotherapie - Wissenschaft: Special issue: "Sieh mal, wer da spricht" - der Koerper in der Psychotherapie Teil IV, 1(1), 56-64. Retrieved from http://www.psychotherapie-wissenschaft.info/index.php/psy-wis/article/view/13/65.
  • Holman, E. W., Brown, C. H., Wichmann, S., Müller, A., Velupillai, V., Hammarström, H., Sauppe, S., Jung, H., Bakker, D., Brown, P., Belyaev, O., Urban, M., Mailhammer, R., List, J.-M., & Egorov, D. (2011). Automated dating of the world’s language families based on lexical similarity. Current Anthropology, 52(6), 841-875. doi:10.1086/662127.

    Abstract

    This paper describes a computerized alternative to glottochronology for estimating elapsed time since parent languages diverged into daughter languages. The method, developed by the Automated Similarity Judgment Program (ASJP) consortium, is different from glottochronology in four major respects: (1) it is automated and thus is more objective, (2) it applies a uniform analytical approach to a single database of worldwide languages, (3) it is based on lexical similarity as determined from Levenshtein (edit) distances rather than on cognate percentages, and (4) it provides a formula for date calculation that mathematically recognizes the lexical heterogeneity of individual languages, including parent languages just before their breakup into daughter languages. Automated judgments of lexical similarity for groups of related languages are calibrated with historical, epigraphic, and archaeological divergence dates for 52 language groups. The discrepancies between estimated and calibration dates are found to be on average 29% as large as the estimated dates themselves, a figure that does not differ significantly among language families. As a resource for further research that may require dates of known level of accuracy, we offer a list of ASJP time depths for nearly all the world’s recognized language families and for many subfamilies.

  • Hoogman, M., Aarts, E., Zwiers, M., Slaats-Willemse, D., Naber, M., Onnink, M., Cools, R., Kan, C., Buitelaar, J., & Franke, B. (2011). Nitric Oxide Synthase genotype modulation of impulsivity and ventral striatal activity in adult ADHD patients and healthy comparison subjects. American Journal of Psychiatry, 168, 1099-1106. doi:10.1176/appi.ajp.2011.10101446.

    Abstract

    Objective: Attention deficit hyperactivity disorder (ADHD) is a highly heritable disorder. The NOS1 gene encoding nitric oxide synthase is a candidate gene for ADHD and has been previously linked with impulsivity. In the present study, the authors investigated the effect of a functional variable number of tandem repeats (VNTR) polymorphism in NOS1 (NOS1 exon 1f-VNTR) on the processing of rewards, one of the cognitive deficits in ADHD. Method: A sample of 136 participants, consisting of 87 adult ADHD patients and 49 healthy comparison subjects, completed a reward-related impulsivity task. A total of 104 participants also underwent functional magnetic resonance imaging during a reward anticipation task. The effect of the NOS1 exon 1f-VNTR genotype on reward-related impulsivity and reward-related ventral striatal activity was examined. Results: ADHD patients had higher impulsivity scores and lower ventral striatal activity than healthy comparison subjects. The association between the short allele and increased impulsivity was confirmed. However, independent of disease status, homozygous carriers of the short allele of NOS1, the ADHD risk genotype, demonstrated higher ventral striatal activity than carriers of the other NOS1 VNTR genotypes. Conclusions: The authors suggest that the NOS1 genotype influences impulsivity and its relation with ADHD is mediated through effects on this behavioral trait. Increased ventral striatal activity related to NOS1 may be compensatory for effects in other brain regions.
  • Hribar, A., Haun, D. B. M., & Call, J. (2011). Great apes’ strategies to map spatial relations. Animal Cognition, 14, 511-523. doi:10.1007/s10071-011-0385-6.

    Abstract

    We investigated reasoning about spatial relational similarity in three great ape species: chimpanzees, bonobos, and orangutans. Apes were presented with three spatial mapping tasks in which they were required to find a reward in an array of three cups, after observing a reward being hidden in a different array of three cups. To obtain a food reward, apes needed to choose the cup that was in the same relative position (i.e., on the left) as the baited cup in the other array. The three tasks differed in the constellation of the two arrays. In Experiment 1, the arrays were placed next to each other, forming a line. In Experiment 2, the positioning of the two arrays varied each trial, being placed either one behind the other in two rows, or next to each other, forming a line. Finally, in Experiment 3, the two arrays were always positioned one behind the other in two rows, but misaligned. Results suggested that apes compared the two arrays and recognized that they were similar in some way. However, we believe that instead of mapping the left–left, middle–middle, and right–right cups from each array, they mapped the cups that shared the most similar relations to nearby landmarks (table’s visual boundaries).
  • Huettig, F., & McQueen, J. M. (2011). The nature of the visual environment induces implicit biases during language-mediated visual search. Memory & Cognition, 39, 1068-1084. doi:10.3758/s13421-011-0086-z.

    Abstract

    Four eye-tracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed-word displays and used during language-mediated visual search. Participants listened to sentences containing target words which were similar semantically or in shape to concepts invoked by concurrently-displayed printed words. In Experiment 1 the displays contained semantic and shape competitors of the targets, and two unrelated words. There were significant shifts in eye gaze as targets were heard towards semantic but not shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eye-tracking task. In all cases there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed-word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases towards particular modes of processing during language-mediated visual search.
  • Huettig, F. (2011). The role of color during language-vision interactions. In R. K. Mishra, & N. Srinivasan (Eds.), Language-Cognition interface: State of the art (pp. 93-113). München: Lincom.
  • Huettig, F., Rommers, J., & Meyer, A. S. (2011). Using the visual world paradigm to study language processing: A review and critical evaluation. Acta Psychologica, 137, 151-171. doi:10.1016/j.actpsy.2010.11.003.

    Abstract

    We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.
  • Huettig, F., & Altmann, G. (2011). Looking at anything that is green when hearing ‘frog’: How object surface colour and stored object colour knowledge influence language-mediated overt attention. Quarterly Journal of Experimental Psychology, 64(1), 122-145. doi:10.1080/17470218.2010.481474.

    Abstract

    Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.
  • Huettig, F., Olivers, C. N. L., & Hartsuiker, R. J. (2011). Looking, language, and memory: Bridging research from the visual world and visual search paradigms. Acta Psychologica, 137, 138-150. doi:10.1016/j.actpsy.2010.07.013.

    Abstract

    In the visual world paradigm as used in psycholinguistics, eye gaze (i.e. visual orienting) is measured in order to draw conclusions about linguistic processing. However, current theories are underspecified with respect to how visual attention is guided on the basis of linguistic representations. In the visual search paradigm as used within the area of visual attention research, investigators have become more and more interested in how visual orienting is affected by higher order representations, such as those involved in memory and language. Within this area more specific models of orienting on the basis of visual information exist, but they need to be extended with mechanisms that allow for language-mediated orienting. In the present paper we review the evidence from these two different – but highly related – research areas. We arrive at a model in which working memory serves as the nexus in which long-term visual as well as linguistic representations (i.e. types) are bound to specific locations (i.e. tokens or indices). The model predicts that the interaction between language and visual attention is subject to a number of conditions, such as the presence of the guiding representation in working memory, capacity limitations, and cognitive control mechanisms.
  • Huettig, F., Singh, N., & Mishra, R. K. (2011). Language-mediated visual orienting behavior in low and high literates. Frontiers in Psychology, 2: e285. doi:10.3389/fpsyg.2011.00285.

    Abstract

    The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task (cf. Huettig & Altmann, 2005) which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., 'magar', crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., 'matar', peas; a semantic competitor, e.g., 'kachuwa', turtle, and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze towards phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were impossible (Experiment 2) but in contrast to high literates these phonologically-mediated shifts in eye gaze were not closely time-locked to the speech input. We conclude that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts.
  • Hutton, J., & Kidd, E. (2011). Structural priming in comprehension of relative clause sentences: In search of a frequency x regularity interaction. In E. Kidd (Ed.), The acquisition of relative clauses: Processing, typology and function (pp. 227-242). Amsterdam: Benjamins.

    Abstract

    The current chapter discusses a structural priming experiment that investigated the on-line processing of English subject- and object- relative clauses. Sixty-one monolingual English-speaking adults participated in a self-paced reading experiment where they read prime-target pairs that fully crossed the relativised element within the relative clause (subject- versus object) across prime and target sentences. Following probabilistic theories of sentence processing, which predict that low frequency structures like object relatives are subject to greater priming effects due to their marked status, it was hypothesised that the normally-observed subject RC processing advantage would be eliminated following priming. The hypothesis was supported, identifying an important role for structural frequency in the processing of relative clause structures.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken. Neuropraxis, 2(6), 230-237.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P. (2011). Neurobiology of syntax. In P. C. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 835-838). New York: Cambridge University Press.
  • Indefrey, P. (2011). The spatial and temporal signatures of word production components: a critical update. Frontiers in Psychology, 2: 255. doi:10.3389/fpsyg.2011.00255.

    Abstract

    In the first decade of neurocognitive word production research the predominant approach was brain mapping, i.e., investigating the regional cerebral brain activation patterns correlated with word production tasks, such as picture naming and word generation. Indefrey and Levelt (2004) conducted a comprehensive meta-analysis of word production studies that used this approach and combined the resulting spatial information on neural correlates of component processes of word production with information on the time course of word production provided by behavioral and electromagnetic studies. In recent years, neurocognitive word production research has seen a major change toward a hypothesis-testing approach. This approach is characterized by the design of experimental variables modulating single component processes of word production and testing for predicted effects on spatial or temporal neurocognitive signatures of these components. This change was accompanied by the development of a broader spectrum of measurement and analysis techniques. The article reviews the findings of recent studies using the new approach. The time course assumptions of Indefrey and Levelt (2004) have largely been confirmed, requiring only minor adaptations. Adaptations of the brain structure/function relationships proposed by Indefrey and Levelt (2004) include the precise role of subregions of the left inferior frontal gyrus as well as a probable, yet to date unclear role of the inferior parietal cortex in word production.
  • Ingason, A., Rujescu, D., Cichon, S., Sigurdsson, E., Sigmundsson, T., Pietilainen, O. P. H., Buizer-Voskamp, J. E., Strengman, E., Francks, C., Muglia, P., Gylfason, A., Gustafsson, O., Olason, P. I., Steinberg, S., Hansen, T., Jakobsen, K. D., Rasmussen, H. B., Giegling, I., Möller, H.-J., Hartmann, A., Crombie, C., Fraser, G., Walker, N., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Bramon, E., Kiemeney, L. A., Franke, B., Murray, R., Vassos, E., Toulopoulou, T., Mühleisen, T. W., Tosato, S., Ruggeri, M., Djurovic, S., Andreassen, O. A., Zhang, Z., Werge, T., Ophoff, R. A., Rietschel, M., Nöthen, M. M., Petursson, H., Stefansson, H., Peltonen, L., Collier, D., Stefansson, K., & St Clair, D. M. (2011). Copy number variations of chromosome 16p13.1 region associated with schizophrenia. Molecular Psychiatry, 16, 17-25. doi:10.1038/mp.2009.101.

    Abstract

    Deletions and reciprocal duplications of the chromosome 16p13.1 region have recently been reported in several cases of autism and mental retardation (MR). As genomic copy number variants found in these two disorders may also associate with schizophrenia, we examined 4345 schizophrenia patients and 35 079 controls from 8 European populations for duplications and deletions at the 16p13.1 locus, using microarray data. We found a threefold excess of duplications and deletions in schizophrenia cases compared with controls, with duplications present in 0.30% of cases versus 0.09% of controls (P=0.007) and deletions in 0.12% of cases and 0.04% of controls (P>0.05). The region can be divided into three intervals defined by flanking low copy repeats. Duplications spanning intervals I and II showed the most significant (P=0.00010) association with schizophrenia. The age of onset in duplication and deletion carriers among cases ranged from 12 to 35 years, and the majority were males with a family history of psychiatric disorders. In a single Icelandic family, a duplication spanning intervals I and II was present in two cases of schizophrenia, and individual cases of alcoholism, attention deficit hyperactivity disorder and dyslexia. Candidate genes in the region include NTAN1 and NDE1. We conclude that duplications and perhaps also deletions of chromosome 16p13.1, previously reported to be associated with autism and MR, also confer risk of schizophrenia.
  • Janse, E., & Ernestus, M. (2011). The roles of bottom-up and top-down information in the recognition of reduced speech: Evidence from listeners with normal and impaired hearing. Journal of Phonetics, 39(3), 330-343. doi:10.1016/j.wocn.2011.03.005.
  • Janse, E., Nooteboom, S. G., & Quené, H. (2003). Word-level intelligibility of time-compressed speech: Prosodic and segmental factors. Speech Communication, 41, 287-301. doi:10.1016/S0167-6393(02)00130-9.

    Abstract

    In this study we investigate whether speakers, in line with the predictions of the Hyper- and Hypospeech theory, speed up most during the least informative parts and less during the more informative parts, when they are asked to speak faster. We expected listeners to benefit from these changes in timing, and our main goal was to find out whether making the temporal organisation of artificially time-compressed speech more like that of natural fast speech would improve intelligibility over linear time compression. Our production study showed that speakers reduce unstressed syllables more than stressed syllables, thereby making the prosodic pattern more pronounced. We extrapolated fast speech timing to even faster rates because we expected that the more salient prosodic pattern could be exploited in difficult listening situations. However, at very fast speech rates, applying fast speech timing worsens intelligibility. We argue that the non-uniform way of speeding up may not be due to an underlying communicative principle, but may result from speakers’ inability to speed up otherwise. As both prosodic and segmental information contribute to word recognition, we conclude that extrapolating fast speech timing to extremely fast rates distorts this balance between prosodic and segmental information.
  • Jescheniak, J. D., Levelt, W. J. M., & Meyer, A. S. (2003). Specific word frequency is not all that counts in speech production: Comments on Caramazza, Costa, et al. (2001) and new experimental data. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29(3), 432-438. doi:10.1037/0278-7393.29.3.432.

    Abstract

    A. Caramazza, A. Costa, M. Miozzo, and Y. Bi (2001) reported a series of experiments demonstrating that the ease of producing a word depends only on the frequency of that specific word but not on the frequency of a homophone twin. A. Caramazza, A. Costa, et al. concluded that homophones have separate word form representations and that the absence of frequency-inheritance effects for homophones undermines an important argument in support of 2-stage models of lexical access, which assume that syntactic (lemma) representations mediate between conceptual and phonological representations. The authors of this article evaluate the empirical basis of this conclusion, report 2 experiments demonstrating a frequency-inheritance effect, and discuss other recent evidence. It is concluded that homophones share a common word form and that the distinction between lemmas and word forms should be upheld.
  • Jesse, A., & McQueen, J. M. (2011). Positional effects in the lexical retuning of speech perception. Psychonomic Bulletin & Review, 18, 943-950. doi:10.3758/s13423-011-0129-2.

    Abstract

    Listeners use lexical knowledge to adjust to speakers’ idiosyncratic pronunciations. Dutch listeners learn to interpret an ambiguous sound between /s/ and /f/ as /f/ if they hear it word-finally in Dutch words normally ending in /f/, but as /s/ if they hear it in normally /s/-final words. Here, we examined two positional effects in lexically guided retuning. In Experiment 1, ambiguous sounds during exposure always appeared in word-initial position (replacing the first sounds of /f/- or /s/-initial words). No retuning was found. In Experiment 2, the same ambiguous sounds always appeared word-finally during exposure. Here, retuning was found. Lexically guided perceptual learning thus appears to emerge reliably only when lexical knowledge is available as the to-be-tuned segment is initially being processed. Under these conditions, however, lexically guided retuning was position independent: It generalized across syllabic positions. Lexical retuning can thus benefit future recognition of particular sounds wherever they appear in words.
  • Johnson, E., McQueen, J. M., & Huettig, F. (2011). Toddlers’ language-mediated visual search: They need not have the words for it. The Quarterly Journal of Experimental Psychology, 64, 1672-1682. doi:10.1080/17470218.2011.594165.

    Abstract

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual–conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
  • Johnson, E. K., & Huettig, F. (2011). Eye movements during language-mediated visual search reveal a strong link between overt visual attention and lexical processing in 36-month-olds. Psychological Research, 75, 35-42. doi:10.1007/s00426-010-0285-4.

    Abstract

    The nature of children’s early lexical processing was investigated by asking what information 36-month-olds access and use when instructed to find a known but absent referent. Children readily retrieved stored knowledge about characteristic color, i.e. when asked to find an object with a typical color (e.g. strawberry), children tended to fixate more upon an object that had the same (e.g. red plane) as opposed to a different (e.g. yellow plane) color. They did so regardless of the fact that they have had plenty of time to recognize the pictures for what they are, i.e. planes not strawberries. These data represent the first demonstration that language-mediated shifts of overt attention in young children can be driven by individual stored visual attributes of known words that mismatch on most other dimensions. The finding suggests that lexical processing and overt attention are strongly linked from an early age.
  • Johnson, E. K., Jusczyk, P. W., Cutler, A., & Norris, D. (2003). Lexical viability constraints on speech segmentation by infants. Cognitive Psychology, 46(1), 65-97. doi:10.1016/S0010-0285(02)00507-8.

    Abstract

    The Possible Word Constraint limits the number of lexical candidates considered in speech recognition by stipulating that input should be parsed into a string of lexically viable chunks. For instance, an isolated single consonant is not a feasible word candidate. Any segmentation containing such a chunk is disfavored. Five experiments using the head-turn preference procedure investigated whether, like adults, 12-month-olds observe this constraint in word recognition. In Experiments 1 and 2, infants were familiarized with target words (e.g., rush), then tested on lists of nonsense items containing these words in “possible” (e.g., “niprush” [nip + rush]) or “impossible” positions (e.g., “prush” [p + rush]). The infants listened significantly longer to targets in “possible” versus “impossible” contexts when targets occurred at the end of nonsense items (rush in “prush”), but not when they occurred at the beginning (tan in “tance”). In Experiments 3 and 4, 12-month-olds were similarly familiarized with target words, but test items were real words in sentential contexts (win in “wind” versus “window”). The infants listened significantly longer to words in the “possible” condition regardless of target location. Experiment 5 with targets at the beginning of isolated real words (e.g., win in “wind”) replicated Experiment 2 in showing no evidence of viability effects in beginning position. Taken together, the findings suggest that, in situations in which 12-month-olds are required to rely on their word segmentation abilities, they give evidence of observing lexical viability constraints in the way that they parse fluent speech.
  • Johnson, J. S., Sutterer, D. W., Acheson, D. J., Lewis-Peacock, J. A., & Postle, B. R. (2011). Increased alpha-band power during the retention of shapes and shape-location associations in visual short-term memory. Frontiers in Psychology, 2(128), 1-9. doi:10.3389/fpsyg.2011.00128.

    Abstract

    Studies exploring the role of neural oscillations in cognition have revealed sustained increases in alpha-band (∼8–14 Hz) power during the delay period of delayed-recognition short-term memory tasks. These increases have been proposed to reflect the inhibition, for example, of cortical areas representing task-irrelevant information, or of potentially interfering representations from previous trials. Another possibility, however, is that elevated delay-period alpha-band power (DPABP) reflects the selection and maintenance of information, rather than, or in addition to, the inhibition of task-irrelevant information. In the present study, we explored these possibilities using a delayed-recognition paradigm in which the presence and task relevance of shape information was systematically manipulated across trial blocks and electroencephalography was used to measure alpha-band power. In the first trial block, participants remembered locations marked by identical black circles. The second block featured the same instructions, but locations were marked by unique shapes. The third block featured the same stimulus presentation as the second, but with pretrial instructions indicating, on a trial-by-trial basis, whether memory for shape or location was required, the other dimension being irrelevant. In the final block, participants remembered the unique pairing of shape and location for each stimulus. Results revealed minimal DPABP in each of the location-memory conditions, whether locations were marked with identical circles or with unique task-irrelevant shapes. In contrast, alpha-band power increases were observed in both the shape-memory condition, in which location was task irrelevant, and in the critical final condition, in which both shape and location were task relevant. These results provide support for the proposal that alpha-band oscillations reflect the retention of shape information and/or shape–location associations in short-term memory.
  • Johnson, E. K., Westrek, E., Nazzi, T., & Cutler, A. (2011). Infant ability to tell voices apart rests on language experience. Developmental Science, 14(5), 1002-1011. doi:10.1111/j.1467-7687.2011.01052.x.

    Abstract

    A visual fixation study tested whether seven-month-olds can discriminate between different talkers. The infants were first habituated to talkers producing sentences in either a familiar or unfamiliar language, then heard test sentences from previously unheard speakers, either in the language used for habituation, or in another language. When the language at test mismatched that in habituation, infants always noticed the change. When language remained constant and only talker altered, however, infants detected the change only if the language was the native tongue. Adult listeners with a different native tongue than the infants did not reproduce the discriminability patterns shown by the infants, and infants detected neither voice nor language changes in reversed speech; both these results argue against explanation of the native-language voice discrimination in terms of acoustic properties of the stimuli. The ability to identify talkers is, like many other perceptual abilities, strongly influenced by early life experience.
  • Jones, C. R., Pickles, A., Falcaro, M., Marsden, A. J., Happé, F., Scott, S. K., Sauter, D., Tregay, J., Phillips, R. J., Baird, G., Simonoff, E., & Charman, T. (2011). A multimodal approach to emotion recognition ability in autism spectrum disorders. Journal of Child Psychology and Psychiatry, 52(3), 275-285. doi:10.1111/j.1469-7610.2010.02328.x.

    Abstract

    Background: Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal; hampered by small sample sizes, narrow IQ range and over-focus on the visual modality. Methods: We tested 99 adolescents (mean age 15;6 years, mean IQ 85) with an ASD and 57 adolescents without an ASD (mean age 15;6 years, mean IQ 88) on a facial emotion recognition task and two vocal emotion recognition tasks (one verbal; one non-verbal). Recognition of happiness, sadness, fear, anger, surprise and disgust were tested. Using structural equation modelling, we conceptualised emotion recognition ability as a multimodal construct, measured by the three tasks. We examined how the mean levels of recognition of the six emotions differed by group (ASD vs. non-ASD) and IQ (>= 80 vs. < 80). Results: There was no significant difference between groups for the majority of emotions and analysis of error patterns suggested that the ASD group were vulnerable to the same pattern of confusions between emotions as the non-ASD group. However, recognition ability was significantly impaired in the ASD group for surprise. IQ had a strong and significant effect on performance for the recognition of all six emotions, with higher IQ adolescents outperforming lower IQ adolescents. Conclusions: The findings do not suggest a fundamental difficulty with the recognition of basic emotions in adolescents with ASD.
  • De Jong, N. H., Schreuder, R., & Baayen, R. H. (2003). Morphological resonance in the mental lexicon. In R. Baayen, & R. Schreuder (Eds.), Morphological structure in language processing (pp. 65-88). Berlin: Mouton de Gruyter.
  • Jordan, F. (2011). A phylogenetic analysis of the evolution of Austronesian sibling terminologies. Human Biology, 83, 297-321. doi:10.3378/027.083.0209.

    Abstract

    Social structure in human societies is underpinned by the variable expression of ideas about relatedness between different types of kin. We express these ideas through language in our kin terminology: to delineate who is kin and who is not, and to attach meanings to the types of kin labels associated with different individuals. Cross-culturally, there is a regular and restricted range of patterned variation in kin terminologies, and to date, our understanding of this diversity has been hampered by inadequate techniques for dealing with the hierarchical relatedness of languages (Galton’s Problem). Here I use maximum-likelihood and Bayesian phylogenetic comparative methods to begin to tease apart the processes underlying the evolution of kin terminologies in the Austronesian language family, focusing on terms for siblings. I infer (1) the probable ancestral states and (2) evolutionary models of change for the semantic distinctions of relative age (older/younger sibling) and relative sex (same-sex/opposite-sex). Analyses show that early Austronesian languages contained the relative-age, but not the relative-sex distinction; the latter was reconstructed firmly only for the ancestor of Eastern Malayo-Polynesian languages. Both distinctions were best characterized by evolutionary models in which the gains and losses of the semantic distinctions were equally likely. A multi-state model of change examined how the relative-sex distinction could be elaborated and found that some transitions in kin terms were not possible: jumps from absence to heavily elaborated were very unlikely, as was piecewise dismantling of elaborate distinctions. Cultural ideas about what types of kin distinctions are important can be embedded in the semantics of language; using a phylogenetic evolutionary framework we can understand how those distinctions in meaning change through time.
  • Jordens, P. (1998). Defaultformen des Präteritums. Zum Erwerb der Vergangenheitsmorphologie im Niederländischen. [Default forms of the preterite: On the acquisition of past-tense morphology in Dutch]. In H. Wegener (Ed.), Eine zweite Sprache lernen (pp. 61-88). Tübingen, Germany: Verlag Gunter Narr.
  • Jordens, P. (2003). Constraints on the shape of second language learner varieties. In G. Rickheit, T. Herrmann, & W. Deutsch (Eds.), Psycholinguistik/Psycholinguistics: Ein internationales Handbuch. [An International Handbook] (pp. 819-833). Berlin: Mouton de Gruyter.
  • Keating, P., Cho, T., Fougeron, C., & Hsu, C.-S. (2003). Domain-initial strengthening in four languages. In J. Local, R. Ogden, & R. Temple (Eds.), Laboratory phonology VI: Phonetic interpretation (pp. 145-163). Cambridge: Cambridge University Press.
  • Kelly, S., Byrne, K., & Holler, J. (2011). Raising the stakes of communication: Evidence for increased gesture production as predicted by the GSA framework. Information, 2(4), 579-593. doi:10.3390/info2040579.

    Abstract

    Theorists of language have argued that co-speech hand gestures are an intentional part of social communication. The present study provides evidence for these claims by showing that speakers adjust their gesture use according to its perceived relevance to the audience. Participants were asked to read about items that were and were not useful in a wilderness survival scenario, under the pretense that they would then explain (on camera) what they learned to one of two different audiences. For one audience (a group of college students in a dormitory orientation activity), the stakes of successful communication were low; for the other audience (a group of students preparing for a rugged camping trip in the mountains), the stakes were high. In their explanations to the camera, participants in the high stakes condition produced three times as many representational gestures, and spent three times as much time gesturing, as participants in the low stakes condition. This study extends previous research by showing that the anticipated consequences of one’s communication—namely, the degree to which information may be useful to an intended recipient—influence speakers’ use of gesture.
  • Kempen, G. (1998). Comparing and explaining the trajectories of first and second language acquisition: In search of the right mix of psychological and linguistic factors [Commentary]. Bilingualism: Language and Cognition, 1, 29-30. doi:10.1017/S1366728998000066.

    Abstract

    When you compare the behavior of two different age groups which are trying to master the same sensori-motor or cognitive skill, you are likely to discover varying learning routes: different stages, different intervals between stages, or even different orderings of stages. Such heterogeneous learning trajectories may be caused by at least six different types of factors: (1) Initial state: the kinds and levels of skills the learners have available at the onset of the learning episode. (2) Learning mechanisms: rule-based, inductive, connectionist, parameter setting, and so on. (3) Input and feedback characteristics: learning stimuli, information about success and failure. (4) Information processing mechanisms: capacity limitations, attentional biases, response preferences. (5) Energetic variables: motivation, emotional reactions. (6) Final state: the fine-structure of kinds and levels of subskills at the end of the learning episode. This applies to language acquisition as well. First and second language learners probably differ on all six factors. Nevertheless, the debate between advocates and opponents of the Fundamental Difference Hypothesis concerning L1 and L2 acquisition has looked almost exclusively at the first two factors. Those who believe that L1 learners have access to Universal Grammar whereas L2 learners rely on language processing strategies postulate different learning mechanisms (UG parameter setting in L1, more general inductive strategies in L2 learning). Pienemann opposes this view and, based on his Processability Theory, argues that L1 and L2 learners start out from different initial states: they come to the grammar learning task with different structural hypotheses (SOV versus SVO as basic word order of German).
