Publications

  • Gullberg, M. (2011). Multilingual multimodality: Communicative difficulties and their solutions in second-language use. In J. Streeck, C. Goodwin, & C. LeBaron (Eds.), Embodied interaction: Language and body in the material world (pp. 137-151). Cambridge: Cambridge University Press.

    Abstract

    Using a poorly mastered second language (L2) in interaction with a native speaker is a challenging task. This paper explores how L2 speakers and their native interlocutors together deploy gestures and speech to sustain problematic interaction. Drawing on native and non-native interactions in Swedish, French, and Dutch, I examine lexical, grammatical and interaction-related problems in turn. The analyses reveal that (a) different problems yield behaviours with different formal and interactive properties that are common across the language pairs and the participant roles; (b) native and non-native behaviour differs in degree, not in kind; and (c) that individual communicative style determines behaviour more than the gravity of the linguistic problem. I discuss the implications for theories opposing 'efficient' L2 communication to learning. Also, contra the traditional view of compensatory gestures, I will argue for a multi-functional 'hydraulic' view grounded in gesture theory where speech and gesture are equal partners, but where the weight carried by the modalities shifts depending on expressive pressures.
  • Gullberg, M. (2011). Language-specific encoding of placement events in gestures. In J. Bohnemeyer, & E. Pederson (Eds.), Event representation in language and cognition (pp. 166-188). New York: Cambridge University Press.

    Abstract

    This study focuses on the effect of the semantics of placement verbs on placement event representations. Specifically, it explores to what extent the semantic properties of habitually used verbs guide attention to certain types of spatial information. French, which typically uses a general placement verb (mettre, 'put'), is contrasted with Dutch, which uses a set of fine-grained (semi-)obligatory posture verbs (zetten, leggen, 'set/stand', 'lay'). Analysis of the concomitant gesture production in the two languages reveals a patterning toward two distinct, language-specific event representations. The object being placed is an essential part of the Dutch representation, while French speakers instead focus only on the (path of the) placement movement. These perspectives permeate the entire placement domain regardless of the actual verb used.
  • Gullberg, M., & Burenhult, N. (2012). Probing the linguistic encoding of placement and removal events in Swedish. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 167-182). Amsterdam: Benjamins.

    Abstract

    This paper explores the linguistic encoding of placement and removal events in Swedish. Drawing on elicited spoken data, it provides a unified approach to caused motion descriptions. The results show uniform syntactic behaviour of placement and removal descriptions and a consistent asymmetry between placement and removal in the semantic specificity of verbs. The results also reveal three further semantic patterns, pertaining to the nature of the relationship between Figure and Ground, that appear to account for how these event types are characterised, viz. whether the Ground is represented by a body part of the Agent; whether the Figure is contained within the Ground; or whether it is supported by the Ground.
  • Gullberg, M., Roberts, L., & Dimroth, C. (2012). What word-level knowledge can adult learners acquire after minimal exposure to a new language? International Review of Applied Linguistics, 50, 239-276.

    Abstract

    Discussions about the adult L2 learning capacity often take as their starting point stages where considerable L2 knowledge has already been accumulated. This paper probes the absolute earliest stages of learning and investigates what lexical knowledge adult learners can extract from complex, continuous speech in an unknown language after minimal exposure and without any help. Dutch participants were exposed to naturalistic but controlled audiovisual input in Mandarin Chinese, in which item frequency and gestural highlighting were manipulated. The results from a word recognition task showed that adults are able to draw on frequency to recognize disyllabic words appearing only eight times in continuous speech. The findings from a sound-to-picture matching task revealed that the mapping of meaning to word form requires a combination of cues: disyllabic words accompanied by a gesture were correctly assigned meaning after eight encounters. Overall, the study suggests that the adult learning mechanism, drawing on frequency, gestural cues, and syllable structure, is considerably more powerful than typically assumed in the SLA literature. Even in the absence of pre-existing knowledge about cognates and the sound system to bootstrap and boost learning, it deals efficiently with very little, very complex input.
  • Gullberg, M. (2011). Thinking, speaking, and gesturing about motion in more than one language. In A. Pavlenko (Ed.), Thinking and speaking in two languages (pp. 143-169). Bristol: Multilingual Matters.

    Abstract

    A key problem in studies of bilingual linguistic cognition is how to probe the details of underlying representations in order to gauge whether bilinguals' conceptualizations differ from those of monolinguals, and if so how. This chapter provides an overview of a line of studies that rely on speech-associated gestures to explore these issues. The gestures of adult monolingual native speakers differ systematically across languages, reflecting consistent differences in what information is selected for expression and how it is mapped onto morphosyntactic devices. Given such differences, gestures can provide more detailed information on how multilingual speakers conceptualize events treated differently in their respective languages, and therefore, ultimately, on the nature of their representations. This chapter reviews a series of studies in the domain of (voluntary and caused) motion event construal. I first discuss speech and gesture evidence for different construals in monolingual native speakers, then review studies on second language speakers showing gestural evidence of persistent L1 construals, shifts to L2 construals, and of bidirectional influences. I consider the implications for theories of ultimate attainment in SLA, transfer and convergence. I will also discuss the methodological implications, namely what gesture data do and do not reveal about linguistic conceptualisation and linguistic relativity proper.
  • Habets, B., Kita, S., Shao, Z., Ozyurek, A., & Hagoort, P. (2011). The role of synchrony and ambiguity in speech–gesture integration during comprehension. Journal of Cognitive Neuroscience, 23, 1845-1854. doi:10.1162/jocn.2010.21462.

    Abstract

    During face-to-face communication, one not only hears speech but also sees a speaker's communicative hand movements. It has been shown that such hand gestures play an important role in communication, where the two modalities influence each other's interpretation. A gesture typically overlaps in time with coexpressive speech, but the gesture is often initiated before (though not after) the coexpressive speech. The present ERP study investigated what degree of asynchrony between speech and gesture onsets is optimal for semantic integration of the concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, they were presented with three different degrees of asynchrony. In the SOA 0 condition, the gesture onset and the speech onset were simultaneous. In the SOA 160 and 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time-locked to speech onset showed a significant difference between semantically congruent and incongruent gesture–speech combinations on the N400 for the SOA 0 and 160 conditions. No significant difference was found for the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the difference in onsets does not exceed a certain time span, because iconic gestures need speech to be disambiguated in a way relevant to the speech context.
  • Habscheid, S., & Klein, W. (2012). Einleitung: Dinge und Maschinen in der Kommunikation. Zeitschrift für Literaturwissenschaft und Linguistik, 42(168), 8-12. Retrieved from http://www.uni-siegen.de/lili/ausgaben/2012/lili168.html?lang=de#einleitung.

    Abstract

    “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” (Weiser 1991, p. 94). – This claim comes from a much-cited text by Mark Weiser, former Chief Technology Officer at the famous Xerox Palo Alto Research Center (PARC), where not only several major innovations in computing originated, but where fundamental anthropological insights into how people deal with technical artefacts were also gained.1 In a popular-science article entitled “The Computer for the 21st Century”, Weiser in 1991 sketched the vision of a future in which we no longer interact with a single PC at our desk – rather, in every room we are surrounded by hundreds of electronic devices that are inseparably embedded in everyday objects and have thus, as it were, “disappeared” into our material environment. Weiser was not concerned solely with the ubiquitous phenomenon known in media theory as the “transparency of media”2, or in more general theories of everyday experience as the self-evident interwovenness of human beings with the things that are familiar to us in their meaning and practically “ready-to-hand”.3 Beyond that, Weiser's vision aimed at extending our already existing environment with computer-readable data and at integrating everyday practices all but seamlessly into the operations of such a ubiquitous network: in the world Weiser envisions, doors open for whoever wears a particular electronic badge, rooms greet people who enter them by name, computer terminals adapt to the preferences of individual users, and so on (Weiser 1991, p. 99).
  • Hagoort, P. (2011). The binding problem for language, and its consequences for the neurocognition of comprehension. In E. A. Gibson, & N. J. Pearlmutter (Eds.), The processing and acquisition of reference (pp. 403-436). Cambridge, MA: MIT Press.
  • Hagoort, P. (2011). The neuronal infrastructure for unification at multiple levels. In G. Gaskell, & P. Zwitserlood (Eds.), Lexical representation: A multidisciplinary approach (pp. 231-242). Berlin: De Gruyter Mouton.
  • Hagoort, P. (1998). De electrofysiologie van taal: Wat hersenpotentialen vertellen over het menselijk taalvermogen. Neuropraxis, 2, 223-229.
  • Hagoort, P. (1998). De spreker als sprinter. Psychologie, 17, 48-49.
  • Hagoort, P. (2012). From ants to music and language [Preface]. In A. D. Patel, Music, language, and the brain [Chinese translation] (pp. 9-10). Shanghai: East China Normal University Press Ltd.
  • Hagoort, P. (1998). Hersenen en taal in onderzoek en praktijk. Neuropraxis, 6, 204-205.
  • Hagoort, P. (2012). Het muzikale brein. Speling: Tijdschrift voor bezinning. Muziek als bron van bezieling, 64(1), 44-48.
  • Hagoort, P. (2012). Het sprekende brein. MemoRad, 17(1), 27-30.

    Abstract

    No species other than Homo sapiens has, over the course of its evolutionary history, developed a communication system in which a finite number of symbols, together with a set of rules for combining them, makes an infinite number of expressions possible. This natural language system enables members of our species to give thoughts an outward form and to exchange them with the social group and, through the invention of writing systems, with society as a whole. Speech and language are effective means of maintaining social cohesion in societies whose group size and complex social organisation are such that this can no longer be achieved through ‘grooming’, the way in which our genetic neighbours, the Old World primates, promote social cohesion [1,2].
  • Hagoort, P. (1998). The shadows of lexical meaning in patients with semantic impairments. In B. Stemmer, & H. Whitaker (Eds.), Handbook of neurolinguistics (pp. 235-248). New York: Academic Press.
  • Hagoort, P. (1992). Vertraagde lexicale integratie bij afatisch taalverstaan. Stem, Spraak- en Taalpathologie, 1, 5-23.
  • Hallé, P., & Cristia, A. (2012). Global and detailed speech representations in early language acquisition. In S. Fuchs, M. Weirich, D. Pape, & P. Perrier (Eds.), Speech planning and dynamics (pp. 11-38). Frankfurt am Main: Peter Lang.

    Abstract

    We review data and hypotheses dealing with the mental representations for perceived and produced speech that infants build and use over the course of learning a language. In the early stages of speech perception and vocal production, before the emergence of a receptive or a productive lexicon, the dominant picture emerging from the literature suggests rather non-analytic representations based on units of the size of the syllable: Young children seem to parse speech into syllable-sized units in spite of their ability to detect sound equivalence based on shared phonetic features. Once a productive lexicon has emerged, word form representations are initially rather underspecified phonetically but gradually become more specified with lexical growth, up to the phoneme level. The situation is different for the receptive lexicon, in which phonetic specification for consonants and vowels seems to follow different developmental paths. Consonants in stressed syllables are fairly well specified already at the first signs of a receptive lexicon, and become even better specified with lexical growth. Vowels seem to follow a different developmental path, with increasing flexibility throughout lexical development. Thus, children come to exhibit a consonant–vowel asymmetry in lexical representations, which is clear in adult representations.
  • Hammarström, H. (2012). A full-scale test of the language farming dispersal hypothesis. In S. Wichmann, & A. P. Grant (Eds.), Quantitative approaches to linguistic diversity: Commemorating the centenary of the birth of Morris Swadesh (pp. 7-22). Amsterdam: Benjamins.

    Abstract

    Originally published in Diachronica 27:2 (2010). One attempt at explaining why some language families are large (while others are small) is the hypothesis that the families that are now large became large because their ancestral speakers had a technological advantage, most often agriculture. Variants of this idea are referred to as the Language Farming Dispersal Hypothesis. Previously, detailed language family studies have uncovered various supporting examples and counterexamples to this idea. In the present paper I weigh the evidence from ALL attested language families. For each family, I use the number of member languages as a measure of cardinal size, member language coordinates to measure geospatial size, and ethnographic evidence to assess subsistence status. This data shows that, although agricultural families tend to be larger in cardinal size, their size is hardly due to the simple presence of farming. If farming were responsible for language family expansions, we would expect a greater east-west geospatial spread of large families than is actually observed. The data, however, is compatible with weaker versions of the farming dispersal hypothesis, as well as with models where large families acquire farming because of their size, rather than the other way around.
  • Hammarström, H. (2011). A note on the Maco (Piaroan) language of the lower Ventuari, Venezuela. Cadernos de Etnolingüística, 3(1), 1-11. Retrieved from http://www.etnolinguistica.org/issue:vol3n1.

    Abstract

    The present paper seeks to clarify the position of the Maco [wpc] language of the lower Ventuari, Venezuela, since there has been some uncertainty in the literature on this matter. Maco-Ventuari, not to be confused with other languages with a similar name, is so far poorly documented, but the present paper shows that it is nevertheless possible to establish that it is a dialect of Piaroa or a language closely related to Piaroa.
  • Hammarström, H. (2012). [Review of Ferdinand von Mengden, Cardinal numerals: Old English from a cross-linguistic perspective]. Linguistic Typology, 16, 321-324. doi:10.1515/lity-2012-0010.
  • Hammarström, H., & Nordhoff, S. (2011). LangDoc: Bibliographic infrastructure for linguistic typology. Oslo Studies in Language, 3(2), 31-43. Retrieved from https://www.journals.uio.no/index.php/osla/article/view/75.

    Abstract

    The present paper describes the ongoing project LangDoc to make a bibliography website for linguistic typology, with a near-complete database of references to documents that contain descriptive data on the languages of the world. This is intended to provide typologists with a more precise and comprehensive way to search for information on languages, and for the specific kind of information that they are interested in. The annotation scheme devised is a trade-off between annotation effort and search desiderata. The end goal is a website with browse, search, update, new-item subscription and download facilities, which can hopefully be enriched by spontaneous collaborative efforts.
  • Hammarström, H., & van den Heuvel, W. (2012). Introduction to the LLM Special Issue 2012 on the History, contact and classification of Papuan languages. Language & Linguistics in Melanesia, 2012(Special Issue, Part 1), i-v.
  • Hammarström, H. (2012). Pronouns and the (preliminary) classification of Papuan languages. Language & Linguistics in Melanesia, 2012(Special Issue, Part 2), 428-539. Retrieved from http://www.langlxmelanesia.com/hammarstrom428-539.pdf.

    Abstract

    A series of articles by Ross (1995, 2001, 2005) uses pronoun similarities to gauge relatedness between various Papuan microgroups, arguing that the similarities could not be the result of chance or borrowing. I argue that a more appropriate manner of calculating chance gives a significantly different result: when cross-comparing a pool of languages the prospects for chance matches of first and second person pronouns are very good. Using pronoun form data from over 3000 languages and over 300 language families inside and outside New Guinea, I show that there is, nevertheless, a tendency for Papuan pronouns to use certain consonants more often in 1P and 2P SG forms than in the rest of the world. This could reflect an underlying family. An alternative explanation is the established Papuan areal feature of having a small consonant inventory, which results in a higher functional load on the remaining consonants, which is, in turn, reflected in the enhanced popularity of certain consonants in pronouns of those languages. A test of surface forms (i.e., non-reconstructed forms) favours the latter explanation.
  • Hammarström, H., & Nordhoff, S. (2012). The languages of Melanesia: Quantifying the level of coverage. In N. Evans, & M. Klamer (Eds.), Melanesian languages on the edge of Asia: Challenges for the 21st Century (pp. 13-33). Honolulu: University of Hawai'i Press. Retrieved from http://hdl.handle.net/10125/4559.
  • Hammarström, H., & Borin, L. (2011). Unsupervised learning of morphology. Computational Linguistics, 37(2), 309-350. doi:10.1162/COLI_a_00050.

    Abstract

    This article surveys work on Unsupervised Learning of Morphology. We define Unsupervised Learning of Morphology as the problem of inducing a description (of some kind, even if only morpheme segmentation) of how orthographic words are built up given only raw text data of a language. We briefly go through the history and motivation of this problem. Next, over 200 items of work are listed with a brief characterization, and the most important ideas in the field are critically discussed. We summarize the achievements so far and give pointers for future developments.
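    As a toy illustration of the problem the survey addresses (not an algorithm from the article itself), the classic Harris-style "successor variety" heuristic induces morpheme boundaries from raw word lists alone: a boundary is posited wherever many different letters can follow a prefix. The corpus, threshold, and function names below are invented for this sketch.

```python
from collections import defaultdict

def successor_variety(words):
    """For each prefix seen in the corpus, count how many distinct letters follow it."""
    followers = defaultdict(set)
    for w in words:
        for i in range(len(w)):
            followers[w[:i]].add(w[i])
    return {prefix: len(s) for prefix, s in followers.items()}

def segment(word, sv, threshold=2):
    """Cut the word wherever successor variety reaches the threshold."""
    cuts = [i for i in range(1, len(word)) if sv.get(word[:i], 0) >= threshold]
    pieces, prev = [], 0
    for c in cuts + [len(word)]:
        pieces.append(word[prev:c])
        prev = c
    return pieces

# Tiny invented corpus: shared stems plus inflectional suffixes.
corpus = ["walks", "walked", "walking", "talks", "talked", "talking"]
sv = successor_variety(corpus)
print(segment("walking", sv))  # boundary after the stem: ['walk', 'ing']
```

    The systems the survey actually covers go well beyond this heuristic, typically combining segmentation with statistical model selection over the whole corpus.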
  • Hammond, J. (2011). JVC GY-HM100U HD video camera and FFmpeg libraries [Technology review]. Language Documentation and Conservation, 5, 69-80.
  • Hanique, I., & Ernestus, M. (2012). The role of morphology in acoustic reduction. Lingue e linguaggio, 2012(2), 147-164. doi:10.1418/38783.

    Abstract

    This paper examines the role of morphological structure in the reduced pronunciation of morphologically complex words by discussing and re-analyzing data from the literature. Acoustic reduction refers to the phenomenon that, in spontaneous speech, phonemes may be shorter or absent. We review studies investigating effects of the repetition of a morpheme, of whether a segment plays a crucial role in the identification of its morpheme, and of a word's morphological decomposability. We conclude that these studies report either no effects of morphological structure or effects that are open to alternative interpretations. Our analysis also reveals the need for a uniform definition of morphological decomposability. Furthermore, we examine whether the reduction of segments in morphologically complex words correlates with these segments' contribution to the identification of the whole word, and discuss previous studies and new analyses supporting this hypothesis. We conclude that the data show no convincing evidence that morphological structure conditions reduction, which contrasts with the expectations of several models of speech production and of morphological processing (e.g., WEAVER++ and dual-route models). The data collected so far support psycholinguistic models which assume that all morphologically complex words are processed as complete units.
  • Hanulikova, A., Mitterer, H., & McQueen, J. M. (2011). Effects of first and second language on segmentation of non-native speech. Bilingualism: Language and Cognition, 14, 506-521. doi:10.1017/S1366728910000428.

    Abstract

    We examined whether Slovak-German bilinguals apply native Slovak phonological and lexical knowledge when segmenting German speech. When Slovaks listen to their native language (Hanulíková, McQueen, & Mitterer, 2010), segmentation is impaired when fixed-stress cues are absent, and, following the Possible-Word Constraint (PWC; Norris, McQueen, Cutler, & Butterfield, 1997), lexical candidates are disfavored if segmentation leads to vowelless residues, unless those residues are existing Slovak words. In the present study, fixed-stress cues on German target words were again absent. Nevertheless, in support of the PWC, both German and Slovak listeners recognized German words (e.g., Rose "rose") faster in syllable contexts (suckrose) than in single-consonant contexts (krose, trose). But only the Slovak listeners recognized Rose, for example, faster in krose than in trose (k is a Slovak word, t is not). It appears that non-native listeners can suppress native stress segmentation procedures, but that they suffer from prevailing interference from native lexical knowledge.
  • Hanulikova, A., Dediu, D., Fang, Z., Basnakova, J., & Huettig, F. (2012). Individual differences in the acquisition of a complex L2 phonology: A training study. Language Learning, 62(Supplement S2), 79-109. doi:10.1111/j.1467-9922.2012.00707.x.

    Abstract

    Many learners of a foreign language (L2) struggle to correctly pronounce newly learned speech sounds, yet many others achieve this with apparent ease. Here we explored how a training study on learning complex consonant clusters at the very onset of L2 acquisition can inform us about L2 learning in general and individual differences in particular. To this end, adult Dutch native speakers were trained on Slovak words with complex consonant clusters (e.g., pstruh /pstrux/ ‘trout’, štvrť /ʃtvrc/ ‘quarter’) using auditory and orthographic input. In the same session following training, participants were tested on a battery of L2 perception and production tasks. The battery of L2 tests was repeated twice more, with one week between each session. In the first session, an additional battery of control tests was used to test participants’ native language (L1) skills. Overall, in line with some previous research, participants showed only weak learning effects across the L2 perception tasks. However, there were considerable individual differences across all L2 tasks, which remained stable across sessions. Only two participants showed overall high L2 production performance that fell within 2 standard deviations of the mean ratings obtained for an L1 speaker. The mispronunciation detection task was the only perception task which significantly predicted production performance in the final session. We conclude by discussing several recommendations for future L2 learning studies.
  • Hanulikova, A., & Weber, A. (2012). Sink positive: Linguistic experience with th substitutions influences nonnative word recognition. Attention, Perception & Psychophysics, 74(3), 613-629. doi:10.3758/s13414-011-0259-7.

    Abstract

    We used eyetracking, perceptual discrimination, and production tasks to examine the influences of perceptual similarity and linguistic experience on word recognition in nonnative (L2) speech. Eye movements to printed words were tracked while German and Dutch learners of English heard words containing one of three pronunciation variants (/t/, /s/, or /f/) of the interdental fricative /θ/. Irrespective of whether the speaker was Dutch or German, looking preferences for target words with /θ/ matched the preferences for producing /s/ variants in German speakers and /t/ variants in Dutch speakers (as determined via the production task), while a control group of English participants showed no such preferences. The perceptually most similar and most confusable /f/ variant (as determined via the discrimination task) was never preferred as a match for /θ/. These results suggest that linguistic experience with L2 pronunciations facilitates recognition of variants in an L2, with effects of frequency outweighing effects of perceptual similarity.
  • Hanulikova, A., Van Alphen, P. M., Van Goch, M. M., & Weber, A. (2012). When one person’s mistake is another’s standard usage: The effect of foreign accent on syntactic processing. Journal of Cognitive Neuroscience, 24(4), 878-887. doi:10.1162/jocn_a_00103.

    Abstract

    How do native listeners process grammatical errors that are frequent in non-native speech? We investigated whether the neural correlates of syntactic processing are modulated by speaker identity. ERPs to gender agreement errors in sentences spoken by a native speaker were compared with the same errors spoken by a non-native speaker. In line with previous research, gender violations in native speech resulted in a P600 effect (larger P600 for violations in comparison with correct sentences), but when the same violations were produced by the non-native speaker with a foreign accent, no P600 effect was observed. Control sentences with semantic violations elicited comparable N400 effects for both the native and the non-native speaker, confirming no general integration problem in foreign-accented speech. The results demonstrate that the P600 is modulated by speaker identity, extending our knowledge about the role of a speaker's characteristics in the neural correlates of speech processing.
  • Hanulová, J., Davidson, D. J., & Indefrey, P. (2011). Where does the delay in L2 picture naming come from? Psycholinguistic and neurocognitive evidence on second language word production. Language and Cognitive Processes, 26, 902-934. doi:10.1080/01690965.2010.509946.

    Abstract

    Bilinguals are slower when naming a picture in their second language than when naming it in their first language. Although the phenomenon has been frequently replicated, it is not known what causes the delay in the second language. In this article we discuss at what processing stages a delay might arise according to current models of bilingual processing and how the available behavioural and neurocognitive evidence relates to these proposals. Suggested plausible mechanisms, such as frequency or interference effects, are compatible with a naming delay arising at different processing stages. Haemodynamic and electrophysiological data seem to point to a postlexical stage but are still too scarce to support a definite conclusion.
  • Harbusch, K., & Kempen, G. (2011). Automatic online writing support for L2 learners of German through output monitoring by a natural-language paraphrase generator. In M. Levy, F. Blin, C. Bradin Siskin, & O. Takeuchi (Eds.), WorldCALL: International perspectives on computer-assisted language learning (pp. 128-143). New York: Routledge.

    Abstract

    Students who are learning to write in a foreign language often want feedback on the grammatical quality of the sentences they produce. The usual NLP approach to this problem is based on parsing student-generated text. Here, we propose a generation-based approach aiming at preventing errors ("scaffolding"). In our ICALL system, the student constructs sentences by composing syntactic trees out of lexically anchored "treelets" via a graphical drag & drop user interface. A natural-language generator computes all possible grammatically well-formed sentences entailed by the student-composed tree. It provides positive feedback if the student-composed tree belongs to the well-formed set, and negative feedback otherwise. If so requested by the student, it can substantiate the positive or negative feedback based on a comparison between the student-composed tree and its own trees (informative feedback on demand). In case of negative feedback, the system refuses to build the structure attempted by the student. Frequently occurring errors are handled in terms of "malrules". The system we describe is a prototype (implemented in Java and C++) which can be parameterized with respect to L1 and L2, the size of the lexicon, and the level of detail of the visually presented grammatical structures.
  • Hartsuiker, R. J., Huettig, F., & Olivers, C. N. (2011). Visual search and visual world: Interactions among visual attention, language, and working memory (introduction to the special issue). Acta Psychologica, 137(2), 135-137. doi:10.1016/j.actpsy.2011.01.005.
  • Hartz, S. M., Short, S. E., Saccone, N. L., Culverhouse, R., Chen, L., Schwantes-An, T.-H., Coon, H., Han, Y., Stephens, S. H., Sun, J., Chen, X., Ducci, F., Dueker, N., Franceschini, N., Frank, J., Geller, F., Gubjartsson, D., Hansel, N. N., Jiang, C., Keskitalo-Vuokko, K., Liu, Z., Lyytikainen, L.-P., Michel, M., Rawal, R., Rosenberger, A., Scheet, P., Shaffer, J. R., Teumer, A., Thompson, J. R., Vink, J. M., Vogelzangs, N., Wenzlaff, A. S., Wheeler, W., Xiao, X., Yang, B.-Z., Aggen, S. H., Balmforth, A. J., Baumeister, S. E., Beaty, T., Bennett, S., Bergen, A. W., Boyd, H. A., Broms, U., Campbell, H., Chatterjee, N., Chen, J., Cheng, Y.-C., Cichon, S., Couper, D., Cucca, F., Dick, D. M., Foroud, T., Furberg, H., Giegling, I., Gu, F., Hall, A. S., Hallfors, J., Han, S., Hartmann, A. M., Hayward, C., Heikkila, K., Hewitt, J. K., Hottenga, J. J., Jensen, M. K., Jousilahti, P., Kaakinen, M., Kittner, S. J., Konte, B., Korhonen, T., Landi, M.-T., Laatikainen, T., Leppert, M., Levy, S. M., Mathias, R. A., McNeil, D. W., Medland, S. E., Montgomery, G. W., Muley, T., Murray, T., Nauck, M., North, K., Pergadia, M., Polasek, O., Ramos, E. M., Ripatti, S., Risch, A., Ruczinski, I., Rudan, I., Salomaa, V., Schlessinger, D., Styrkarsdottir, U., Terracciano, A., Uda, M., Willemsen, G., Wu, X., Abecasis, G., Barnes, K., Bickeboller, H., Boerwinkle, E., Boomsma, D. I., Caporaso, N., Duan, J., Edenberg, H. J., Francks, C., Gejman, P. V., Gelernter, J., Grabe, H. J., Hops, H., Jarvelin, M.-R., Viikari, J., Kahonen, M., Kendler, K. S., Lehtimaki, T., Levinson, D. F., Marazita, M. L., Marchini, J., Melbye, M., Mitchell, B., Murray, J. C., Nothen, M. M., Penninx, B. W., Raitakari, O., Rietschel, M., Rujescu, D., Samani, N. J., Sanders, A. R., Schwartz, A. G., Shete, S., Shi, J., Spitz, M., Stefansson, K., Swan, G. E., Thorgeirsson, T., Volzke, H., Wei, Q., Wichmann, H.-E., Amos, C. I., Breslau, N., Cannon, D. S., Ehringer, M., Grucza, R., Hatsukami, D., Heath, A., Johnson, E. O., Kaprio, J., Madden, P., Martin, N. G., Stevens, V. L., Stitzel, J. A., Weiss, R. B., Kraft, P., & Bierut, L. J. (2012). Increased genetic vulnerability to smoking at CHRNA5 in early-onset smokers. Archives of General Psychiatry, 69, 854-860. doi:10.1001/archgenpsychiatry.2012.124.

    Abstract

    CONTEXT: Recent studies have shown an association between cigarettes per day (CPD) and a nonsynonymous single-nucleotide polymorphism in CHRNA5, rs16969968.
    OBJECTIVE: To determine whether the association between rs16969968 and smoking is modified by age at onset of regular smoking.
    DATA SOURCES: Primary data.
    STUDY SELECTION: Available genetic studies containing measures of CPD and the genotype of rs16969968 or its proxy.
    DATA EXTRACTION: Uniform statistical analysis scripts were run locally. Starting with 94 050 ever-smokers from 43 studies, we extracted the heavy smokers (CPD >20) and light smokers (CPD ≤10) with age-at-onset information, reducing the sample size to 33 348. Each study was stratified into early-onset smokers (age at onset ≤16 years) and late-onset smokers (age at onset >16 years), and a logistic regression of heavy vs light smoking with the rs16969968 genotype was computed for each stratum. Meta-analysis was performed within each age-at-onset stratum.
    DATA SYNTHESIS: Individuals with 1 risk allele at rs16969968 who were early-onset smokers were significantly more likely to be heavy smokers in adulthood (odds ratio [OR] = 1.45; 95% CI, 1.36-1.55; n = 13 843) than were carriers of the risk allele who were late-onset smokers (OR = 1.27; 95% CI, 1.21-1.33, n = 19 505) (P = .01).
    CONCLUSION: These results highlight an increased genetic vulnerability to smoking in early-onset smokers.

  • Haun, D. B. M., & Tomasello, M. (2011). Conformity to peer pressure in preschool children. Child Development, 82, 1759-1767. doi:10.1111/j.1467-8624.2011.01666.x.

    Abstract

    Both adults and adolescents often conform their behavior and opinions to peer groups, even when they themselves know better. The current study investigated this phenomenon in 24 groups of 4 children between 4;2 and 4;9 years of age. Children often made their judgments conform to those of 3 peers, who had made obviously erroneous but unanimous public judgments right before them. A follow-up study with 18 groups of 4 children between 4;0 and 4;6 years of age revealed that children did not change their “real” judgment of the situation, but only their public expression of it. Preschool children are subject to peer pressure, indicating sensitivity to peers as a primary social reference group already during the preschool years.
  • Haun, D. B. M. (2011). How odd I am! In M. Brockman (Ed.), Future science: Essays from the cutting edge (pp. 228-235). New York: Random House.

    Abstract

    Cross-culturally, the human mind varies more than we generally assume.
  • Haun, D. B. M., Rekers, Y., & Tomasello, M. (2012). Majority-biased transmission in chimpanzees and human children, but not orangutans. Current Biology, 22, 727-731. doi:10.1016/j.cub.2012.03.006.

    Abstract

    Cultural transmission is a key component of human evolution. Two of humans' closest living relatives, chimpanzees and orangutans, have also been argued to transmit behavioral traditions across generations culturally [1-3], but how much the process might resemble the human process is still in large part unknown. One key phenomenon of human cultural transmission is majority-biased transmission: the increased likelihood for learners to end up not with the most frequent behavior but rather with the behavior demonstrated by most individuals. Here we show that chimpanzees and human children as young as 2 years of age, but not orangutans, are more likely to copy an action performed by three individuals, once each, than an action performed by one individual three times. The tendency to acquire the behaviors of the majority has been posited as key to the transmission of relatively safe, reliable, and productive behavioral strategies [4-7] but has not previously been demonstrated in primates.
  • Haun, D. B. M. (2011). Memory for body movements in Namibian hunter-gatherer children. Journal of Cognitive Education and Psychology, 10, 56-62.

    Abstract

    Despite the global universality of physical space, cultural groups vary substantially in how they memorize it. Although European participants mostly prefer egocentric strategies (“left, right, front, back”) to memorize spatial relations, others use mostly allocentric strategies (“north, south, east, west”). Prior research has shown that some cultures show a general preference to memorize object locations, and even body movements, in relation to the larger environment rather than in relation to their own body. Here, we investigate whether this cultural bias also applies to movements specifically directed at the participants' own body, emphasizing the role of ego. We show that even participants with generally allocentric biases preferentially memorize self-directed movements using egocentric spatial strategies. These results demonstrate an intricate system of interacting cultural biases and momentary situational characteristics.
  • Haun, D. B. M., Nawroth, C., & Call, J. (2011). Great apes’ risk-taking strategies in a decision making task. PLoS One, 6(12), e28801. doi:10.1371/journal.pone.0028801.

    Abstract

    We investigate decision-making behaviour in all four non-human great ape species. Apes chose between a safe and a risky option across trials of varying expected values. All species chose the safe option more often with decreasing probability of success. While all species were risk-seeking, orangutans and chimpanzees chose the risky option more often than gorillas and bonobos. Hence all four species' preferences were ordered in a manner consistent with normative dictates of expected value, but varied predictably in their willingness to take risks.
  • Haun, D. B. M., Jordan, F., Vallortigara, G., & Clayton, N. S. (2011). Origins of spatial, temporal and numerical cognition: Insights from comparative psychology [Reprint]. In S. Dehaene, & E. Brannon (Eds.), Space, time and number in the brain. Searching for the foundations of mathematical thought (pp. 191-206). London: Academic Press.

    Abstract

    Contemporary comparative cognition has a large repertoire of animal models and methods, with concurrent theoretical advances that are providing initial answers to crucial questions about human cognition. What cognitive traits are uniquely human? What are the species-typical inherited predispositions of the human mind? What is the human mind capable of without certain types of specific experiences with the surrounding environment? Here, we review recent findings from the domains of space, time and number cognition. These findings are produced using different comparative methodologies relying on different animal species, namely birds and non-human great apes. The study of these species not only reveals the range of cognitive abilities across vertebrates, but also increases our understanding of human cognition in crucial ways.
  • Haun, D. B. M., Rapold, C. J., Janzen, G., & Levinson, S. C. (2011). Plasticity of human spatial memory: Spatial language and cognition covary across cultures. Cognition, 119, 70-80. doi:10.1016/j.cognition.2010.12.009.

    Abstract

    The present paper explores cross-cultural variation in spatial cognition by comparing spatial reconstruction tasks by Dutch and Namibian elementary school children. These two communities differ in the way they predominantly express spatial relations in language. Four experiments investigate cognitive strategy preferences across different levels of task-complexity and instruction. Data show a correlation between dominant linguistic spatial frames of reference and performance patterns in non-linguistic spatial memory tasks. This correlation is shown to be stable across an increase of complexity in the spatial array. When instructed to use their respective non-habitual cognitive strategy, participants were not easily able to switch between strategies and their attempts to do so impaired their performance. These results indicate a difference not only in preference but also in competence and suggest that spatial language and non-linguistic preferences and competences in spatial cognition are systematically aligned across human populations.

  • Hayano, K. (2011). Claiming epistemic primacy: Yo-marked assessments in Japanese. In T. Stivers, L. Mondada, & J. Steensig (Eds.), The morality of knowledge in conversation (pp. 58-81). Cambridge: Cambridge University Press.
  • Hervais-Adelman, A., Carlyon, R. P., Johnsrude, I. S., & Davis, M. H. (2012). Brain regions recruited for the effortful comprehension of noise-vocoded words. Language and Cognitive Processes, 27(7-8), 1145-1166. doi:10.1080/01690965.2012.662280.

    Abstract

    We used functional magnetic resonance imaging (fMRI) to investigate the neural basis of comprehension and perceptual learning of artificially degraded [noise vocoded (NV)] speech. Fifteen participants were scanned while listening to 6-channel vocoded words, which are difficult for naive listeners to comprehend, but can be readily learned with appropriate feedback presentations. During three test blocks, we compared responses to potentially intelligible NV words, incomprehensible distorted words and clear speech. Training sessions were interleaved with the test sessions and included paired presentation of clear then noise-vocoded words: a type of feedback that enhances perceptual learning. Listeners' comprehension of NV words improved significantly as a consequence of training. Listening to NV compared to clear speech activated left insula, and prefrontal and motor cortices. These areas, which are implicated in speech production, may play an active role in supporting the comprehension of degraded speech. Elevated activation in the precentral gyrus during paired clear-then-distorted presentations that enhance learning further suggests a role for articulatory representations of speech in perceptual learning of degraded speech.
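The noise-vocoded (NV) speech used in these studies divides the signal into a small number of frequency channels and replaces each channel's fine structure with noise modulated by that channel's slow amplitude envelope. A minimal FFT-based sketch of the general technique (parameters and filtering method are illustrative assumptions, not the published studies' exact filter banks):

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=6, lo=100.0, hi=5000.0):
    """Sketch of an n-channel noise vocoder.

    Splits `signal` into log-spaced frequency bands, extracts each band's
    smoothed amplitude envelope, and uses it to modulate band-limited noise.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(signal)
    # Fixed-seed noise carrier, filtered into the same bands as the speech.
    noise_spec = np.fft.rfft(np.random.default_rng(0).standard_normal(n))
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced band edges
    win = max(1, int(0.01 * fs))                  # ~10 ms smoothing window
    kernel = np.ones(win) / win
    out = np.zeros(n)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        mask = (freqs >= f1) & (freqs < f2)
        band = np.fft.irfft(spec * mask, n)       # band-limited speech
        envelope = np.convolve(np.abs(band), kernel, mode="same")
        carrier = np.fft.irfft(noise_spec * mask, n)  # band-limited noise
        out += envelope * carrier                 # envelope modulates noise
    return out
```

With six channels the output preserves the slow amplitude cues that make NV speech learnable while discarding the temporal fine structure; swapping the noise carrier for pulse trains or sine waves would yield the other carrier types compared in the generalization study.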
  • Hervais-Adelman, A., Davis, M. H., Johnsrude, I. S., Taylor, K. J., & Carlyon, R. P. (2011). Generalization of Perceptual Learning of Vocoded Speech. Journal of Experimental Psychology: Human Perception and Performance, 37(1), 283-295. doi:10.1037/a0020772.

    Abstract

    Recent work demonstrates that learning to understand noise-vocoded (NV) speech alters sublexical perceptual processes but is enhanced by the simultaneous provision of higher-level, phonological, but not lexical content (Hervais-Adelman, Davis, Johnsrude, & Carlyon, 2008), consistent with top-down learning (Davis, Johnsrude, Hervais-Adelman, Taylor, & McGettigan, 2005; Hervais-Adelman et al., 2008). Here, we investigate whether training listeners with specific types of NV speech improves intelligibility of vocoded speech with different acoustic characteristics. Transfer of perceptual learning would provide evidence for abstraction from variable properties of the speech input. In Experiment 1, we demonstrate that learning of NV speech in one frequency region generalizes to an untrained frequency region. In Experiment 2, we assessed generalization among three carrier signals used to create NV speech: noise bands, pulse trains, and sine waves. Stimuli created using these three carriers possess the same slow, time-varying amplitude information and are equated for naive intelligibility but differ in their temporal fine structure. Perceptual learning generalized partially, but not completely, among different carrier signals. These results delimit the functional and neural locus of perceptual learning of vocoded speech. Generalization across frequency regions suggests that learning occurs at a stage of processing at which some abstraction from the physical signal has occurred, while incomplete transfer across carriers indicates that learning occurs at a stage of processing that is sensitive to acoustic features critical for speech perception (e.g., noise, periodicity).
  • Hervais-Adelman, A., Moser-Mercer, B., & Golestani, N. (2011). Executive control of language in the bilingual brain: Integrating the evidence from neuroimaging to neuropsychology. Frontiers in Psychology, 2: 234. doi:10.3389/fpsyg.2011.00234.

    Abstract

    In this review we will focus on delineating the neural substrates of the executive control of language in the bilingual brain, based on the existing neuroimaging, intracranial, transcranial magnetic stimulation, and neuropsychological evidence. We will also offer insights from ongoing brain-imaging studies into the development of expertise in multilingual language control. We will concentrate specifically on evidence regarding how the brain selects and controls languages for comprehension and production. This question has been addressed in a number of ways and using various tasks, including language switching during production or perception, translation, and interpretation. We will attempt to synthesize existing evidence in order to bring to light the neural substrates that are crucial to executive control of language.
  • Hill, C. (2011). Collaborative narration and cross-speaker repetition in Umpila and Kuuku Ya'u. In B. Baker, R. Gardner, M. Harvey, & I. Mushin (Eds.), Indigenous language and social identity: Papers in honour of Michael Walsh (pp. 237-260). Canberra: Pacific Linguistics.
  • Hill, C. (2011). Named and unnamed spaces: Color, kin and the environment in Umpila. The Senses & Society, 6(1), 57-67. doi:10.2752/174589311X12893982233759.

    Abstract

    Imagine describing the particular characteristics of the hue of a flower, or the quality of its scent, or the texture of its petal. Introspection suggests the expression of such sensory experiences in words is something quite different than the task of naming artifacts. The particular challenges in the linguistic encoding of sensorial experiences pose questions regarding how languages manage semantic gaps and “ineffability.” That is, what strategies do speakers have available to manage phenomena or domains of experience that are inexpressible or difficult to express in their language? This article considers this issue with regard to color in Umpila, an Aboriginal Australian language of the Paman family. The investigation of color naming and ineffability in Umpila reveals rich associations and mappings between color and visual perceptual qualities more generally, categorization of the human social world, and the environment. “Gaps” in the color system are filled or supported by associations with two of the most linguistically and culturally salient domains for Umpila - kinship and the environment.
  • Holler, J., & Wilkin, K. (2011). Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior, 35, 133-153. doi:10.1007/s10919-011-0105-6.

    Abstract

    Mimicry has been observed regarding a range of nonverbal behaviors, but only recently have researchers started to investigate mimicry in co-speech gestures. These gestures are considered to be crucially different from other aspects of nonverbal behavior due to their tight link with speech. This study provides evidence of mimicry in co-speech gestures in face-to-face dialogue, the most common forum of everyday talk. In addition, it offers an analysis of the functions that mimicked co-speech gestures fulfill in the collaborative process of creating a mutually shared understanding of referring expressions. The implications bear on theories of gesture production, research on grounding, and the mechanisms underlying behavioral mimicry.
  • Holler, J., & Wilkin, K. (2011). An experimental investigation of how addressee feedback affects co-speech gestures accompanying speakers’ responses. Journal of Pragmatics, 43, 3522-3536. doi:10.1016/j.pragma.2011.08.002.

    Abstract

    There is evidence that co-speech gestures communicate information to addressees and that they are often communicatively intended. However, we still know comparatively little about the role of gestures in the actual process of communication. The present study offers a systematic investigation of speakers’ gesture use before and after addressee feedback. The findings show that when speakers responded to addressees’ feedback gesture rate remained constant when this feedback encouraged clarification, elaboration or correction. However, speakers gestured proportionally less often after feedback when providing confirmatory responses. That is, speakers may not be drawing on gesture in response to addressee feedback per se, but particularly with responses that enhance addressees’ understanding. Further, the large majority of speakers’ gestures changed in their form. They tended to be more precise, larger, or more visually prominent after feedback. Some changes in gesture viewpoint were also observed. In addition, we found that speakers used deixis in speech and gaze to increase the salience of gestures occurring in response to feedback. Speakers appear to conceive of gesture as a useful modality in redesigning utterances to make them more accessible to addressees. The findings further our understanding of recipient design and co-speech gestures in face-to-face dialogue.
    Highlights

    ► Gesture rate remains constant in response to addressee feedback when the response aims to correct or clarify understanding.
    ► But gesture rate decreases when speakers provide confirmatory responses to feedback signalling correct understanding.
    ► Gestures are more communicative in response to addressee feedback, particularly in terms of precision, size and visual prominence.
    ► Speakers make gestures in response to addressee feedback more salient by using deictic markers in speech and gaze.
  • Holler, J. (2011). Verhaltenskoordination, Mimikry und sprachbegleitende Gestik in der Interaktion. Psychotherapie - Wissenschaft: Special issue: "Sieh mal, wer da spricht" - der Koerper in der Psychotherapie Teil IV, 1(1), 56-64. Retrieved from http://www.psychotherapie-wissenschaft.info/index.php/psy-wis/article/view/13/65.
  • Holman, E. W., Brown, C. H., Wichmann, S., Müller, A., Velupillai, V., Hammarström, H., Sauppe, S., Jung, H., Bakker, D., Brown, P., Belyaev, O., Urban, M., Mailhammer, R., List, J.-M., & Egorov, D. (2011). Automated dating of the world’s language families based on lexical similarity. Current Anthropology, 52(6), 841-875. doi:10.1086/662127.

    Abstract

    This paper describes a computerized alternative to glottochronology for estimating elapsed time since parent languages diverged into daughter languages. The method, developed by the Automated Similarity Judgment Program (ASJP) consortium, is different from glottochronology in four major respects: (1) it is automated and thus is more objective, (2) it applies a uniform analytical approach to a single database of worldwide languages, (3) it is based on lexical similarity as determined from Levenshtein (edit) distances rather than on cognate percentages, and (4) it provides a formula for date calculation that mathematically recognizes the lexical heterogeneity of individual languages, including parent languages just before their breakup into daughter languages. Automated judgments of lexical similarity for groups of related languages are calibrated with historical, epigraphic, and archaeological divergence dates for 52 language groups. The discrepancies between estimated and calibration dates are found to be on average 29% as large as the estimated dates themselves, a figure that does not differ significantly among language families. As a resource for further research that may require dates of known level of accuracy, we offer a list of ASJP time depths for nearly all the world’s recognized language families and for many subfamilies.

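The core ingredient of the ASJP method described above is the normalized Levenshtein (edit) distance between word forms, averaged over a wordlist, in place of expert cognate judgments. A minimal sketch of that similarity measure (the wordlists and the normalization-by-longest-word choice are illustrative assumptions; ASJP's actual database, transcription scheme, and date-calibration formula are not reproduced here):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def normalized_distance(a, b):
    # Divide by the longer word so distances are comparable across lengths.
    return levenshtein(a, b) / max(len(a), len(b))

def mean_wordlist_distance(list1, list2):
    # Average normalized distance over aligned meaning slots
    # (e.g. the same Swadesh-style concepts in two languages).
    pairs = [(w1, w2) for w1, w2 in zip(list1, list2) if w1 and w2]
    return sum(normalized_distance(w1, w2) for w1, w2 in pairs) / len(pairs)

# Hypothetical three-item wordlists for two related languages.
dutch = ["hand", "water", "zon"]
german = ["hand", "wasser", "sonne"]
print(mean_wordlist_distance(dutch, german))  # → 0.311... (mean of 0, 1/3, 3/5)
```

Lower mean distances indicate greater lexical similarity, and in the ASJP approach such similarities, calibrated against known divergence dates, feed the date-estimation formula.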
  • Hoogman, M., Rijpkema, M., Janss, L., Brunner, H., Fernandez, G., Buitelaar, J., Franke, B., & Arias-Vásquez, A. (2012). Current self-reported symptoms of attention deficit/hyperactivity disorder are associated with total brain volume in healthy adults. PLoS One, 7(2), e31273. doi:10.1371/journal.pone.0031273.

    Abstract

    Background: Reduced total brain volume is a consistent finding in children with Attention Deficit/Hyperactivity Disorder (ADHD). In order to get a better understanding of the neurobiology of ADHD, we take the first step in studying the dimensionality of current self-reported adult ADHD symptoms, by looking at its relation with total brain volume.
    Methodology/Principal Findings: In a sample of 652 highly educated adults, the association between total brain volume, assessed with magnetic resonance imaging, and current number of self-reported ADHD symptoms was studied. The results showed an association between these self-reported ADHD symptoms and total brain volume. Post-hoc analysis revealed that the symptom domain of inattention had the strongest association with total brain volume. In addition, the threshold for impairment coincides with the threshold for brain volume reduction.
    Conclusions/Significance: This finding improves our understanding of the biological substrates of self-reported ADHD symptoms, and suggests total brain volume as a target intermediate phenotype for future gene-finding in ADHD.
  • Hoogman, M., Aarts, E., Zwiers, M., Slaats-Willemse, D., Naber, M., Onnink, M., Cools, R., Kan, C., Buitelaar, J., & Franke, B. (2011). Nitric Oxide Synthase genotype modulation of impulsivity and ventral striatal activity in adult ADHD patients and healthy comparison subjects. American Journal of Psychiatry, 168, 1099-1106. doi:10.1176/appi.ajp.2011.10101446.

    Abstract

    Objective: Attention deficit hyperactivity disorder (ADHD) is a highly heritable disorder. The NOS1 gene encoding nitric oxide synthase is a candidate gene for ADHD and has been previously linked with impulsivity. In the present study, the authors investigated the effect of a functional variable number of tandem repeats (VNTR) polymorphism in NOS1 (NOS1 exon 1f-VNTR) on the processing of rewards, one of the cognitive deficits in ADHD. Method: A sample of 136 participants, consisting of 87 adult ADHD patients and 49 healthy comparison subjects, completed a reward-related impulsivity task. A total of 104 participants also underwent functional magnetic resonance imaging during a reward anticipation task. The effect of the NOS1 exon 1f-VNTR genotype on reward-related impulsivity and reward-related ventral striatal activity was examined. Results: ADHD patients had higher impulsivity scores and lower ventral striatal activity than healthy comparison subjects. The association between the short allele and increased impulsivity was confirmed. However, independent of disease status, homozygous carriers of the short allele of NOS1, the ADHD risk genotype, demonstrated higher ventral striatal activity than carriers of the other NOS1 VNTR genotypes. Conclusions: The authors suggest that the NOS1 genotype influences impulsivity and its relation with ADHD is mediated through effects on this behavioral trait. Increased ventral striatal activity related to NOS1 may be compensatory for effects in other brain regions.
  • Hribar, A., Haun, D. B. M., & Call, J. (2012). Children’s reasoning about spatial relational similarity: The effect of alignment and relational complexity. Journal of Experimental Child Psychology, 111, 490-500. doi:10.1016/j.jecp.2011.11.004.

    Abstract

    We investigated 4- and 5-year-old children’s mapping strategies in a spatial task. Children were required to find a picture in an array of three identical cups after observing another picture being hidden in another array of three cups. The arrays were either aligned one behind the other in two rows or placed side by side forming one line. Moreover, children were rewarded for two different mapping strategies. Half of the children needed to choose a cup that held the same relative position as the rewarded cup in the other array; they needed to map left–left, middle–middle, and right–right cups together (aligned mapping), which required encoding and mapping of two relations (e.g., the cup left of the middle cup and left of the right cup). The other half needed to map together the cups that held the same relation to the table’s spatial features—the cups at the edges, the middle cups, and the cups in the middle of the table (landmark mapping)—which required encoding and mapping of one relation (e.g., the cup at the table’s edge). Results showed that children’s success was constellation dependent; performance was higher when the arrays were aligned one behind the other in two rows than when they were placed side by side. Furthermore, children showed a preference for landmark mapping over aligned mapping.
  • Hribar, A., Haun, D. B. M., & Call, J. (2011). Great apes’ strategies to map spatial relations. Animal Cognition, 14, 511-523. doi:10.1007/s10071-011-0385-6.

    Abstract

    We investigated reasoning about spatial relational similarity in three great ape species: chimpanzees, bonobos, and orangutans. Apes were presented with three spatial mapping tasks in which they were required to find a reward in an array of three cups, after observing a reward being hidden in a different array of three cups. To obtain a food reward, apes needed to choose the cup that was in the same relative position (i.e., on the left) as the baited cup in the other array. The three tasks differed in the constellation of the two arrays. In Experiment 1, the arrays were placed next to each other, forming a line. In Experiment 2, the positioning of the two arrays varied each trial, being placed either one behind the other in two rows, or next to each other, forming a line. Finally, in Experiment 3, the two arrays were always positioned one behind the other in two rows, but misaligned. Results suggested that apes compared the two arrays and recognized that they were similar in some way. However, we believe that instead of mapping the left–left, middle–middle, and right–right cups from each array, they mapped the cups that shared the most similar relations to nearby landmarks (table’s visual boundaries).
  • Huettig, F., & McQueen, J. M. (2011). The nature of the visual environment induces implicit biases during language-mediated visual search. Memory & Cognition, 39, 1068-1084. doi:10.3758/s13421-011-0086-z.

    Abstract

    Four eye-tracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed-word displays and used during language-mediated visual search. Participants listened to sentences containing target words which were similar semantically or in shape to concepts invoked by concurrently-displayed printed words. In Experiment 1 the displays contained semantic and shape competitors of the targets, and two unrelated words. There were significant shifts in eye gaze as targets were heard towards semantic but not shape competitors. In Experiments 2-4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eye-tracking task. In all cases there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed-word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases towards particular modes of processing during language-mediated visual search.
  • Huettig, F. (2011). The role of color during language-vision interactions. In R. K. Mishra, & N. Srinivasan (Eds.), Language-Cognition interface: State of the art (pp. 93-113). München: Lincom.
  • Huettig, F., Rommers, J., & Meyer, A. S. (2011). Using the visual world paradigm to study language processing: A review and critical evaluation. Acta Psychologica, 137, 151-171. doi:10.1016/j.actpsy.2010.11.003.

    Abstract

    We describe the key features of the visual world paradigm and review the main research areas where it has been used. In our discussion we highlight that the paradigm provides information about the way language users integrate linguistic information with information derived from the visual environment. Therefore the paradigm is well suited to study one of the key issues of current cognitive psychology, namely the interplay between linguistic and visual information processing. However, conclusions about linguistic processing (e.g., about activation, competition, and timing of access of linguistic representations) in the absence of relevant visual information must be drawn with caution.
  • Huettig, F., Mishra, R. K., & Olivers, C. N. (2012). Mechanisms and representations of language-mediated visual attention. Frontiers in Psychology, 2, 394. doi:10.3389/fpsyg.2011.00394.

    Abstract

    The experimental investigation of language-mediated visual attention is a promising way to study the interaction of the cognitive systems involved in language, vision, attention, and memory. Here we highlight four challenges for a mechanistic account of this oculomotor behavior: the levels of representation at which language-derived and vision-derived representations are integrated; attentional mechanisms; types of memory; and the degree of individual and group differences. Central points in our discussion are (a) the possibility that local microcircuitries involving feedforward and feedback loops instantiate a common representational substrate of linguistic and non-linguistic information and attention; and (b) that an explicit working memory may be central to explaining interactions between language and visual attention. We conclude that a synthesis of further experimental evidence from a variety of fields of inquiry and the testing of distinct, non-student participant populations will prove to be critical.
  • Huettig, F., & Altmann, G. (2011). Looking at anything that is green when hearing ‘frog’: How object surface colour and stored object colour knowledge influence language-mediated overt attention. Quarterly Journal of Experimental Psychology, 64(1), 122-145. doi:10.1080/17470218.2010.481474.

    Abstract

    Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.
  • Huettig, F., Olivers, C. N. L., & Hartsuiker, R. J. (2011). Looking, language, and memory: Bridging research from the visual world and visual search paradigms. Acta Psychologica, 137, 138-150. doi:10.1016/j.actpsy.2010.07.013.

    Abstract

    In the visual world paradigm as used in psycholinguistics, eye gaze (i.e. visual orienting) is measured in order to draw conclusions about linguistic processing. However, current theories are underspecified with respect to how visual attention is guided on the basis of linguistic representations. In the visual search paradigm as used within the area of visual attention research, investigators have become more and more interested in how visual orienting is affected by higher order representations, such as those involved in memory and language. Within this area more specific models of orienting on the basis of visual information exist, but they need to be extended with mechanisms that allow for language-mediated orienting. In the present paper we review the evidence from these two different – but highly related – research areas. We arrive at a model in which working memory serves as the nexus in which long-term visual as well as linguistic representations (i.e. types) are bound to specific locations (i.e. tokens or indices). The model predicts that the interaction between language and visual attention is subject to a number of conditions, such as the presence of the guiding representation in working memory, capacity limitations, and cognitive control mechanisms.
  • Huettig, F., Singh, N., & Mishra, R. K. (2011). Language-mediated visual orienting behavior in low and high literates. Frontiers in Psychology, 2: 285. doi:10.3389/fpsyg.2011.00285.

    Abstract

    The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task (cf. Huettig & Altmann, 2005) which resembles everyday behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., 'magar', crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., 'matar', peas; a semantic competitor, e.g., 'kachuwa', turtle; and two unrelated distractors). In Experiment 2, the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze towards phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates, in contrast, only used phonological information when semantic matches between spoken word and visual referent were impossible (Experiment 2), but, in contrast to high literates, these phonologically-mediated shifts in eye gaze were not closely time-locked to the speech input. We conclude that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit similar cognitive behavior but, instead of participating in a tug-of-war among multiple types of cognitive representations, word-object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts.
  • Hutton, J., & Kidd, E. (2011). Structural priming in comprehension of relative clause sentences: In search of a frequency x regularity interaction. In E. Kidd (Ed.), The acquisition of relative clauses: Processing, typology and function (pp. 227-242). Amsterdam: Benjamins.

    Abstract

    The current chapter discusses a structural priming experiment that investigated the on-line processing of English subject- and object-relative clauses. Sixty-one monolingual English-speaking adults participated in a self-paced reading experiment where they read prime-target pairs that fully crossed the relativised element within the relative clause (subject versus object) across prime and target sentences. Following probabilistic theories of sentence processing, which predict that low-frequency structures like object relatives are subject to greater priming effects due to their marked status, it was hypothesised that the normally observed subject RC processing advantage would be eliminated following priming. The hypothesis was supported, identifying an important role for structural frequency in the processing of relative clause structures.
  • Ibarretxe-Antuñano, I. (2012). Placement and removal events in Basque and Spanish. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 123-144). Amsterdam: Benjamins.

    Abstract

    This paper examines how placement and removal events are lexicalised and conceptualised in Basque and Peninsular Spanish. After a brief description of the main linguistic devices employed for the coding of these types of events, the paper discusses how speakers of the two languages choose to talk about these events. Finally, the paper focuses on two aspects that seem to be crucial in the description of these events (1) the role of force dynamics: both languages distinguish between different degrees of force, causality, and intentionality, and (2) the influence of the verb-framed lexicalisation pattern. Data come from six Basque and ten Peninsular Spanish native speakers.
  • IJzerman, H., Gallucci, M., Pouw, W., Weißgerber, S. C., Van Doesum, N. J., & Williams, K. D. (2012). Cold-blooded loneliness: Social exclusion leads to lower skin temperatures. Acta Psychologica, 140(3), 283-288. doi:10.1016/j.actpsy.2012.05.002.

    Abstract

    Being ostracized or excluded, even briefly and by strangers, is painful and threatens fundamental needs. Recent work by Zhong and Leonardelli (2008) found that excluded individuals perceive the room as cooler and that they desire warmer drinks. Much work on embodiment relies on the theoretical idea that people use metaphorical associations to understand social exclusion (see Landau, Meier, & Keefer, 2010). We suggest instead that people feel colder because they are colder. The results strongly support the idea that more complex metaphorical understandings of social relations are scaffolded onto literal changes in bodily temperature: Being excluded in an online ball-tossing game leads to lower finger temperatures (Study 1), while the negative affect typically experienced after such social exclusion is alleviated after holding a cup of warm tea (Study 2). The authors discuss further implications for the interaction between body and social relations specifically, and for basic and cognitive systems in general.
  • Ikram, M. A., Fornage, M., Smith, A. V., Seshadri, S., Schmidt, R., Debette, S., Vrooman, H. A., Sigurdsson, S., Ropele, S., Taal, H. R., Mook-Kanamori, D. O., Coker, L. H., Longstreth, W. T., Niessen, W. J., DeStefano, A. L., Beiser, A., Zijdenbos, A. P., Struchalin, M., Jack, C. R., Rivadeneira, F., Uitterlinden, A. G., Knopman, D. S., Hartikainen, A.-L., Pennell, C. E., Thiering, E., Steegers, E. A. P., Hakonarson, H., Heinrich, J., Palmer, L. J., Jarvelin, M.-R., McCarthy, M. I., Grant, S. F. A., St Pourcain, B., Timpson, N. J., Smith, G. D., Sovio, U., Nalls, M. A., Au, R., Hofman, A., Gudnason, H., van der Lugt, A., Harris, T. B., Meeks, W. M., Vernooij, M. W., van Buchem, M. A., Catellier, D., Jaddoe, V. W. V., Gudnason, V., Windham, B. G., Wolf, P. A., van Duijn, C. M., Mosley, T. H., Schmidt, H., Launer, L. J., Breteler, M. M. B., DeCarli, C., the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) Consortium, & Early Growth Genetics (EGG) Consortium (2012). Common variants at 6q22 and 17q21 are associated with intracranial volume. Nature Genetics, 44(5), 539-544. doi:10.1038/ng.2245.

    Abstract

    During aging, intracranial volume remains unchanged and represents maximally attained brain size, while various interacting biological phenomena lead to brain volume loss. Consequently, intracranial volume and brain volume in late life reflect different genetic influences. Our genome-wide association study (GWAS) in 8,175 community-dwelling elderly persons did not reveal any associations at genome-wide significance (P < 5 × 10(-8)) for brain volume. In contrast, intracranial volume was significantly associated with two loci: rs4273712 (P = 3.4 × 10(-11)), a known height-associated locus on chromosome 6q22, and rs9915547 (P = 1.5 × 10(-12)), localized to the inversion on chromosome 17q21. We replicated the associations of these loci with intracranial volume in a separate sample of 1,752 elderly persons (P = 1.1 × 10(-3) for 6q22 and 1.2 × 10(-3) for 17q21). Furthermore, we also found suggestive associations of the 17q21 locus with head circumference in 10,768 children (mean age of 14.5 months). Our data identify two loci associated with head size, with the inversion at 17q21 also likely to be involved in attaining maximal brain size.
  • Indefrey, P. (1998). De neurale architectuur van taal: Welke hersengebieden zijn betrokken bij het spreken [The neural architecture of language: Which brain areas are involved in speaking]. Neuropraxis, 2(6), 230-237.
  • Indefrey, P. (2012). Hemodynamic studies of syntactic processing. In M. Faust (Ed.), Handbook of the neuropsychology of language. Volume 1: Language processing in the brain: Basic science (pp. 209-228). Malden, MA: Wiley-Blackwell.
  • Indefrey, P., Gruber, O., Brown, C. M., Hagoort, P., Posse, S., & Kleinschmidt, A. (1998). Lexicality and not syllable frequency determine lateralized premotor activation during the pronunciation of word-like stimuli: An fMRI study. NeuroImage, 7, S4.
  • Indefrey, P. (2011). Neurobiology of syntax. In P. C. Hogan (Ed.), The Cambridge encyclopedia of the language sciences (pp. 835-838). New York: Cambridge University Press.
  • Indefrey, P. (2011). The spatial and temporal signatures of word production components: a critical update. Frontiers in Psychology, 2: 255. doi:10.3389/fpsyg.2011.00255.

    Abstract

    In the first decade of neurocognitive word production research the predominant approach was brain mapping, i.e., investigating the regional cerebral brain activation patterns correlated with word production tasks, such as picture naming and word generation. Indefrey and Levelt (2004) conducted a comprehensive meta-analysis of word production studies that used this approach and combined the resulting spatial information on neural correlates of component processes of word production with information on the time course of word production provided by behavioral and electromagnetic studies. In recent years, neurocognitive word production research has seen a major change toward a hypothesis-testing approach. This approach is characterized by the design of experimental variables modulating single component processes of word production and testing for predicted effects on spatial or temporal neurocognitive signatures of these components. This change was accompanied by the development of a broader spectrum of measurement and analysis techniques. The article reviews the findings of recent studies using the new approach. The time course assumptions of Indefrey and Levelt (2004) have largely been confirmed, requiring only minor adaptations. Adaptations of the brain structure/function relationships proposed by Indefrey and Levelt (2004) include the precise role of subregions of the left inferior frontal gyrus as well as a probable, yet to date unclear, role of the inferior parietal cortex in word production.
  • Ingason, A., Rujescu, D., Cichon, S., Sigurdsson, E., Sigmundsson, T., Pietilainen, O. P. H., Buizer-Voskamp, J. E., Strengman, E., Francks, C., Muglia, P., Gylfason, A., Gustafsson, O., Olason, P. I., Steinberg, S., Hansen, T., Jakobsen, K. D., Rasmussen, H. B., Giegling, I., Möller, H.-J., Hartmann, A., Crombie, C., Fraser, G., Walker, N., Lonnqvist, J., Suvisaari, J., Tuulio-Henriksson, A., Bramon, E., Kiemeney, L. A., Franke, B., Murray, R., Vassos, E., Toulopoulou, T., Mühleisen, T. W., Tosato, S., Ruggeri, M., Djurovic, S., Andreassen, O. A., Zhang, Z., Werge, T., Ophoff, R. A., Rietschel, M., Nöthen, M. M., Petursson, H., Stefansson, H., Peltonen, L., Collier, D., Stefansson, K., & St Clair, D. M. (2011). Copy number variations of chromosome 16p13.1 region associated with schizophrenia. Molecular Psychiatry, 16, 17-25. doi:10.1038/mp.2009.101.

    Abstract

    Deletions and reciprocal duplications of the chromosome 16p13.1 region have recently been reported in several cases of autism and mental retardation (MR). As genomic copy number variants found in these two disorders may also associate with schizophrenia, we examined 4345 schizophrenia patients and 35 079 controls from 8 European populations for duplications and deletions at the 16p13.1 locus, using microarray data. We found a threefold excess of duplications and deletions in schizophrenia cases compared with controls, with duplications present in 0.30% of cases versus 0.09% of controls (P=0.007) and deletions in 0.12% of cases and 0.04% of controls (P>0.05). The region can be divided into three intervals defined by flanking low copy repeats. Duplications spanning intervals I and II showed the most significant (P=0.00010) association with schizophrenia. The age of onset in duplication and deletion carriers among cases ranged from 12 to 35 years, and the majority were males with a family history of psychiatric disorders. In a single Icelandic family, a duplication spanning intervals I and II was present in two cases of schizophrenia, and individual cases of alcoholism, attention deficit hyperactivity disorder and dyslexia. Candidate genes in the region include NTAN1 and NDE1. We conclude that duplications and perhaps also deletions of chromosome 16p13.1, previously reported to be associated with autism and MR, also confer risk of schizophrenia.
  • Ioana, M., Ferwerda, B., Farjadian, S., Ioana, L., Ghaderi, A., Oosting, M., Joosten, L. A., Van der Meer, J. W., Romeo, G., Luiselli, D., Dediu, D., & Netea, M. G. (2012). High variability of TLR4 gene in different ethnic groups of Iran. Innate Immunity, 18, 492-502. doi:10.1177/1753425911423043.

    Abstract

    Infectious diseases exert a constant evolutionary pressure on the innate immunity genes. TLR4, an important member of the Toll-like receptors family, specifically recognizes conserved structures of various infectious pathogens. Two functional TLR4 polymorphisms, Asp299Gly and Thr399Ile, modulate innate host defence against infections, and their prevalence between various populations has been proposed to be influenced by local infectious pressures. If this assumption is true, strong local infectious pressures would lead to a homogeneous pattern of these ancient TLR4 polymorphisms in geographically close populations, while a weak selection or genetic drift may result in a diverse pattern. We evaluated TLR4 polymorphisms in 15 ethnic groups of Iran, to assess whether infections exerted selective pressures on different haplotypes containing these variants. The Iranian subpopulations displayed a heterogeneous pattern of TLR4 polymorphisms, comprising various percentages of Asp299Gly and Thr399Ile alone or in combination. The Iranian sample as a whole showed an intermediate mixed pattern when compared with commonly found patterns in Africa, Europe, Eastern Asia, and the Americas. These findings suggest a weak or absent selection pressure on TLR4 polymorphisms in the Middle East, which does not support the assumption of an important role of these polymorphisms in the host defence against local pathogens.
  • Irizarri van Suchtelen, P. (2012). Dative constructions in the Spanish of heritage speakers in the Netherlands. In Z. Wąsik, & P. P. Chruszczewski (Eds.), Languages in contact 2011 (pp. 103-118). Wrocław: Philological School of Higher Education in Wrocław Publishing.

    Abstract

    Spanish can use dative as well as non-dative strategies to encode Possessors, Human Sources, Interestees (datives of interest) and Experiencers. In Dutch this optionality is virtually absent, restricting dative encoding mainly to the Recipient of a ditransitive. The present study examines whether this may lead to instability of the non-prototypical dative constructions in the Spanish of Dutch-Spanish bilinguals. Elicited data of 12 Chilean heritage informants from the Netherlands were analyzed. Whereas the evidence on the stability of dative Experiencers was not conclusive, the results indicate that the use of prototypical datives, dative External Possessors, dative Human Sources and datives of interest is fairly stable in bilinguals, except for those with limited childhood exposure to Spanish. It is argued that the consistent preference for non-dative strategies of this group was primarily attributable to instability of the dative clitic, which affected all constructions, even the encoding of prototypical indirect objects.
  • Ishibashi, M. (2012). The expression of ‘putting’ and ‘taking’ events in Japanese: The asymmetry of Source and Goal revisited. In A. Kopecka, & B. Narasimhan (Eds.), Events of putting and taking: A crosslinguistic perspective (pp. 253-272). Amsterdam: Benjamins.

    Abstract

    This study explores the expression of Source and Goal in describing placement and removal events in adult Japanese. Although placement and removal events a priori represent symmetry regarding the orientation of motion, their (c)overt expressions actually exhibit multiple asymmetries at various structural levels. The results show that the expression of the Source is less frequent than the expression of the Goal, but, if expressed, morphosyntactically more complex, suggesting that ‘taking’ events are more complex than ‘putting’ events in their construal. It is stressed that finer linguistic analysis is necessary before explaining linguistic asymmetries in terms of non-linguistic foundations of spatial language.
  • Jaeger, E., Leedham, S., Lewis, A., Segditsas, S., Becker, M., Rodenas-Cuadrado, P., Davis, H., Kaur, K., Heinimann, K., Howarth, K., East, J., Taylor, J., Thomas, H., & Tomlinson, I. (2012). Hereditary mixed polyposis syndrome is caused by a 40-kb upstream duplication that leads to increased and ectopic expression of the BMP antagonist GREM1. Nature Genetics, 44, 699-703. doi:10.1038/ng.2263.

    Abstract

    Hereditary mixed polyposis syndrome (HMPS) is characterized by apparent autosomal dominant inheritance of multiple types of colorectal polyp, with colorectal carcinoma occurring in a high proportion of affected individuals. Here, we use genetic mapping, copy-number analysis, exclusion of mutations by high-throughput sequencing, gene expression analysis and functional assays to show that HMPS is caused by a duplication spanning the 3' end of the SCG5 gene and a region upstream of the GREM1 locus. This unusual mutation is associated with increased allele-specific GREM1 expression. Whereas GREM1 is expressed in intestinal subepithelial myofibroblasts in controls, GREM1 is predominantly expressed in the epithelium of the large bowel in individuals with HMPS. The HMPS duplication contains predicted enhancer elements; some of these interact with the GREM1 promoter and can drive gene expression in vitro. Increased GREM1 expression is predicted to cause reduced bone morphogenetic protein (BMP) pathway activity, a mechanism that also underlies tumorigenesis in juvenile polyposis of the large bowel.
  • Janse, E. (2012). A non-auditory measure of interference predicts distraction by competing speech in older adults. Aging, Neuropsychology and Cognition, 19, 741-758. doi:10.1080/13825585.2011.652590.

    Abstract

    In this study, older adults monitored for pre-assigned target sounds in a target talker's speech in a quiet (no noise) condition and in a condition with competing-talker noise. The question was to which extent the impact of the competing-talker noise on performance could be predicted from individual hearing loss and from a cognitive measure of inhibitory abilities, i.e., a measure of Stroop interference. The results showed that the non-auditory measure of Stroop interference predicted the impact of distraction on performance, over and above the effect of hearing loss. This suggests that individual differences in inhibitory abilities among older adults relate to susceptibility to distracting speech.
  • Janse, I., Bok, J., Hamidjaja, R. A., Hodemaekers, H. M., & van Rotterdam, B. J. (2012). Development and comparison of two assay formats for parallel detection of four biothreat pathogens by using suspension microarrays. PLoS One, 7(2), e31958. doi:10.1371/journal.pone.0031958.

    Abstract

    Microarrays provide a powerful analytical tool for the simultaneous detection of multiple pathogens. We developed diagnostic suspension microarrays for sensitive and specific detection of the biothreat pathogens Bacillus anthracis, Yersinia pestis, Francisella tularensis and Coxiella burnetii. Two assay chemistries for amplification and labeling were developed, one method using direct hybridization and the other using target-specific primer extension, combined with hybridization to universal arrays. Asymmetric PCR products for both assay chemistries were produced by using a multiplex asymmetric PCR amplifying 16 DNA signatures (16-plex). The performances of both assay chemistries were compared and their advantages and disadvantages are discussed. The developed microarrays detected multiple signature sequences and an internal control which made it possible to confidently identify the targeted pathogens and assess their virulence potential. The microarrays were highly specific and detected various strains of the targeted pathogens. Detection limits for the different pathogen signatures were similar or slightly higher compared to real-time PCR. Probit analysis showed that even a few genomic copies could be detected with 95% confidence. The microarrays detected DNA from different pathogens mixed in different ratios and from spiked or naturally contaminated samples. The assays that were developed have a potential for application in surveillance and diagnostics.
  • Janse, E., & Adank, P. (2012). Predicting foreign-accent adaptation in older adults. Quarterly Journal of Experimental Psychology, 65, 1563-1585. doi:10.1080/17470218.2012.658822.

    Abstract

    We investigated comprehension of and adaptation to speech in an unfamiliar accent in older adults. Participants performed a speeded sentence verification task for accented sentences: one group upon auditory-only presentation, and the other group upon audiovisual presentation. Our questions were whether audiovisual presentation would facilitate adaptation to the novel accent, and which cognitive and linguistic measures would predict adaptation. Participants were therefore tested on a range of background tests: hearing acuity, auditory verbal short-term memory, working memory, attention-switching control, selective attention, and vocabulary knowledge. Both auditory-only and audiovisual groups showed improved accuracy and decreasing response times over the course of the experiment, effectively showing accent adaptation. Even though the total amount of improvement was similar for the auditory-only and audiovisual groups, initial rate of adaptation was faster in the audiovisual group. Hearing sensitivity and short-term and working memory measures were associated with efficient processing of the novel accent. Analysis of the relationship between accent comprehension and the background tests revealed furthermore that selective attention and vocabulary size predicted the amount of adaptation over the course of the experiment. These results suggest that vocabulary knowledge and attentional abilities facilitate the attention-shifting strategies proposed to be required for perceptual learning.
  • Janse, E., & Ernestus, M. (2011). The roles of bottom-up and top-down information in the recognition of reduced speech: Evidence from listeners with normal and impaired hearing. Journal of Phonetics, 39(3), 330-343. doi:10.1016/j.wocn.2011.03.005.
  • Janzen, G., Haun, D. B. M., & Levinson, S. C. (2012). Tracking down abstract linguistic meaning: Neural correlates of spatial frame of reference ambiguities in language. PLoS One, 7(2), e30657. doi:10.1371/journal.pone.0030657.

    Abstract

    This functional magnetic resonance imaging (fMRI) study investigates a crucial parameter in spatial description, namely variants in the frame of reference chosen. Two frames of reference are available in European languages for the description of small-scale assemblages, namely the intrinsic (or object-oriented) frame and the relative (or egocentric) frame. We showed participants a sentence such as “the ball is in front of the man”, ambiguous between the two frames, and then a picture of a scene with a ball and a man – participants had to respond by indicating whether the picture did or did not match the sentence. There were two blocks, in which we induced each frame of reference by feedback. Thus for the crucial test items, participants saw exactly the same sentence and the same picture but now from one perspective, now the other. Using this method, we were able to precisely pinpoint the pattern of neural activation associated with each linguistic interpretation of the ambiguity, while holding the perceptual stimuli constant. Increased brain activity in bilateral parahippocampal gyrus was associated with the intrinsic frame of reference whereas increased activity in the right superior frontal gyrus and in the parietal lobe was observed for the relative frame of reference. The study is among the few to show a distinctive pattern of neural activation for an abstract yet specific semantic parameter in language. It shows with special clarity the nature of the neural substrate supporting each frame of spatial reference.
  • Jasmin, K., & Casasanto, D. (2012). The QWERTY Effect: How typing shapes the meanings of words. Psychonomic Bulletin & Review, 19, 499-504. doi:10.3758/s13423-012-0229-7.

    Abstract

    The QWERTY keyboard mediates communication for millions of language users. Here, we investigated whether differences in the way words are typed correspond to differences in their meanings. Some words are spelled with more letters on the right side of the keyboard and others with more letters on the left. In three experiments, we tested whether asymmetries in the way people interact with keys on the right and left of the keyboard influence their evaluations of the emotional valence of the words. We found the predicted relationship between emotional valence and QWERTY key position across three languages (English, Spanish, and Dutch). Words with more right-side letters were rated as more positive in valence, on average, than words with more left-side letters: the QWERTY effect. This effect was strongest in new words coined after QWERTY was invented and was also found in pseudowords. Although these data are correlational, the discovery of a similar pattern across languages, which was strongest in neologisms, suggests that the QWERTY keyboard is shaping the meanings of words as people filter language through their fingers. Widespread typing introduces a new mechanism by which semantic changes in language can arise.
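    The right-versus-left asymmetry at the heart of this effect can be operationalised per word. The sketch below is illustrative only (it is not the authors' scoring script, and the exact hand assignment of the letter rows is an assumption based on the standard QWERTY touch-typing split); it computes a word's right-side advantage, i.e., the number of right-hand letters minus the number of left-hand letters.

    ```python
    # Assumed standard QWERTY touch-typing split (left vs right hand).
    LEFT_HAND = set("qwertasdfgzxcvb")
    RIGHT_HAND = set("yuiophjklnm")

    def right_side_advantage(word):
        """Right-hand letter count minus left-hand letter count for a word."""
        letters = [c for c in word.lower() if c.isalpha()]
        return (sum(c in RIGHT_HAND for c in letters)
                - sum(c in LEFT_HAND for c in letters))
    ```

    On this split, for example, "lip" scores +3 (all right-hand letters) and "sad" scores -3 (all left-hand letters); the reported effect is that higher-scoring words tend to receive more positive valence ratings on average.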
  • Jepma, M., Verdonschot, R. G., Van Steenbergen, H., Rombouts, S. A. R. B., & Nieuwenhuis, S. (2012). Neural mechanisms underlying the induction and relief of perceptual curiosity. Frontiers in Behavioral Neuroscience, 6: 5. doi:10.3389/fnbeh.2012.00005.

    Abstract

    Curiosity is one of the most basic biological drives in both animals and humans, and has been identified as a key motive for learning and discovery. Despite the importance of curiosity and related behaviors, the topic has been largely neglected in human neuroscience; hence little is known about the neurobiological mechanisms underlying curiosity. We used functional magnetic resonance imaging (fMRI) to investigate what happens in our brain during the induction and subsequent relief of perceptual curiosity. Our core findings were that (1) the induction of perceptual curiosity, through the presentation of ambiguous visual input, activated the anterior insula and anterior cingulate cortex (ACC), brain regions sensitive to conflict and arousal; (2) the relief of perceptual curiosity, through visual disambiguation, activated regions of the striatum that have been related to reward processing; and (3) the relief of perceptual curiosity was associated with hippocampal activation and enhanced incidental memory. These findings provide the first demonstration of the neural basis of human perceptual curiosity. Our results provide neurobiological support for a classic psychological theory of curiosity, which holds that curiosity is an aversive condition of increased arousal whose termination is rewarding and facilitates memory.
  • Jesse, A., & Janse, E. (2012). Audiovisual benefit for recognition of speech presented with single-talker noise in older listeners. Language and Cognitive Processes, 27(7/8), 1167-1191. doi:10.1080/01690965.2011.620335.

    Abstract

    Older listeners are more affected than younger listeners in their recognition of speech in adverse conditions, such as when they also hear a single competing speaker. In the present study, we investigated with a speeded response task whether older listeners with various degrees of hearing loss benefit under such conditions from also seeing the speaker they intend to listen to. We also tested, at the same time, whether older adults need postperceptual processing to obtain an audiovisual benefit. When tested in a phoneme-monitoring task with single-talker noise present, older (and younger) listeners detected target phonemes more reliably and more rapidly in meaningful sentences uttered by the target speaker when they also saw the target speaker. This suggests that older adults processed audiovisual speech rapidly and efficiently enough to benefit already during spoken sentence processing. Audiovisual benefits for older adults were similar in size to those observed for younger adults in terms of response latencies, but smaller for detection accuracy. Older adults with more hearing loss showed larger audiovisual benefits. Attentional abilities predicted the size of audiovisual response time benefits in both age groups. Audiovisual benefits were found in both age groups when monitoring for the visually highly distinct phoneme /p/ and when monitoring for the visually less distinct phoneme /k/. Visual speech thus provides segmental information about the target phoneme, but also provides more global contextual information that helps both older and younger adults in this adverse listening situation.
  • Jesse, A., & Johnson, E. K. (2012). Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution. Journal of Experimental Psychology: Human Perception and Performance, 38, 1567-1581. doi:10.1037/a0027921.

    Abstract

    Using a referent detection paradigm, we examined whether listeners can determine the object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. Trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.
  • Jesse, A., & McQueen, J. M. (2011). Positional effects in the lexical retuning of speech perception. Psychonomic Bulletin & Review, 18, 943-950. doi:10.3758/s13423-011-0129-2.

    Abstract

    Listeners use lexical knowledge to adjust to speakers’ idiosyncratic pronunciations. Dutch listeners learn to interpret an ambiguous sound between /s/ and /f/ as /f/ if they hear it word-finally in Dutch words normally ending in /f/, but as /s/ if they hear it in normally /s/-final words. Here, we examined two positional effects in lexically guided retuning. In Experiment 1, ambiguous sounds during exposure always appeared in word-initial position (replacing the first sounds of /f/- or /s/-initial words). No retuning was found. In Experiment 2, the same ambiguous sounds always appeared word-finally during exposure. Here, retuning was found. Lexically guided perceptual learning thus appears to emerge reliably only when lexical knowledge is available as the to-be-tuned segment is initially being processed. Under these conditions, however, lexically guided retuning was position independent: It generalized across syllabic positions. Lexical retuning can thus benefit future recognition of particular sounds wherever they appear in words.
  • Jiang, J., Dai, B., Peng, D., Zhu, C., Liu, L., & Lu, C. (2012). Neural synchronization during face-to-face communication. Journal of Neuroscience, 32(45), 16064-16069. doi:10.1523/JNEUROSCI.2926-12.2012.

    Abstract

    Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-face dialog between partners but none during a back-to-back dialog, a face-to-face monologue, or a back-to-back monologue. Moreover, the neural synchronization between partners during the face-to-face dialog resulted primarily from the direct interactions between the partners, including multimodal sensory information integration and turn-taking behavior. The communicating behavior during the face-to-face dialog could be predicted accurately based on the neural synchronization level. These results suggest that face-to-face communication, particularly dialog, has special neural features that other types of communication do not have and that the neural synchronization between partners may underlie successful face-to-face communication.
  • Johnson, E., McQueen, J. M., & Huettig, F. (2011). Toddlers’ language-mediated visual search: They need not have the words for it. The Quarterly Journal of Experimental Psychology, 64, 1672-1682. doi:10.1080/17470218.2011.594165.

    Abstract

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched distractor (e.g., a red plane) more than a colour-mismatched distractor (e.g., a yellow plane). We ask whether these shifts in visual attention are mediated by the retrieval of lexically stored colour labels. Do children who do not yet possess verbal labels for the colour attribute that spoken and viewed objects have in common exhibit language-mediated eye movements like those made by older children and adults? That is, do toddlers look at a red plane when hearing “strawberry”? We observed that 24-month-olds lacking colour term knowledge nonetheless recognized the perceptual–conceptual commonality between named and seen objects. This indicates that language-mediated visual search need not depend on stored labels for concepts.
  • Johnson, E. K., & Huettig, F. (2011). Eye movements during language-mediated visual search reveal a strong link between overt visual attention and lexical processing in 36-months-olds. Psychological Research, 75, 35-42. doi:10.1007/s00426-010-0285-4.

    Abstract

    The nature of children’s early lexical processing was investigated by asking what information 36-month-olds access and use when instructed to find a known but absent referent. Children readily retrieved stored knowledge about characteristic color, i.e. when asked to find an object with a typical color (e.g. strawberry), children tended to fixate more upon an object that had the same (e.g. red plane) as opposed to a different (e.g. yellow plane) color. They did so even though they had had plenty of time to recognize the pictures for what they are, i.e. planes not strawberries. These data represent the first demonstration that language-mediated shifts of overt attention in young children can be driven by individual stored visual attributes of known words that mismatch on most other dimensions. The finding suggests that lexical processing and overt attention are strongly linked from an early age.
  • Johnson, J. S., Sutterer, D. W., Acheson, D. J., Lewis-Peacock, J. A., & Postle, B. R. (2011). Increased alpha-band power during the retention of shapes and shape-location associations in visual short-term memory. Frontiers in Psychology, 2(128), 1-9. doi:10.3389/fpsyg.2011.00128.

    Abstract

    Studies exploring the role of neural oscillations in cognition have revealed sustained increases in alpha-band (∼8–14 Hz) power during the delay period of delayed-recognition short-term memory tasks. These increases have been proposed to reflect the inhibition, for example, of cortical areas representing task-irrelevant information, or of potentially interfering representations from previous trials. Another possibility, however, is that elevated delay-period alpha-band power (DPABP) reflects the selection and maintenance of information, rather than, or in addition to, the inhibition of task-irrelevant information. In the present study, we explored these possibilities using a delayed-recognition paradigm in which the presence and task relevance of shape information was systematically manipulated across trial blocks and electroencephalography was used to measure alpha-band power. In the first trial block, participants remembered locations marked by identical black circles. The second block featured the same instructions, but locations were marked by unique shapes. The third block featured the same stimulus presentation as the second, but with pretrial instructions indicating, on a trial-by-trial basis, whether memory for shape or location was required, the other dimension being irrelevant. In the final block, participants remembered the unique pairing of shape and location for each stimulus. Results revealed minimal DPABP in each of the location-memory conditions, whether locations were marked with identical circles or with unique task-irrelevant shapes. In contrast, alpha-band power increases were observed in both the shape-memory condition, in which location was task irrelevant, and in the critical final condition, in which both shape and location were task relevant. These results provide support for the proposal that alpha-band oscillations reflect the retention of shape information and/or shape–location associations in short-term memory.
  • Johnson, E. K., Westrek, E., Nazzi, T., & Cutler, A. (2011). Infant ability to tell voices apart rests on language experience. Developmental Science, 14(5), 1002-1011. doi:10.1111/j.1467-7687.2011.01052.x.

    Abstract

    A visual fixation study tested whether seven-month-olds can discriminate between different talkers. The infants were first habituated to talkers producing sentences in either a familiar or unfamiliar language, then heard test sentences from previously unheard speakers, either in the language used for habituation, or in another language. When the language at test mismatched that in habituation, infants always noticed the change. When language remained constant and only talker altered, however, infants detected the change only if the language was the native tongue. Adult listeners with a different native tongue than the infants did not reproduce the discriminability patterns shown by the infants, and infants detected neither voice nor language changes in reversed speech; both these results argue against explanation of the native-language voice discrimination in terms of acoustic properties of the stimuli. The ability to identify talkers is, like many other perceptual abilities, strongly influenced by early life experience.
  • Jones, C. R., Pickles, A., Falcaro, M., Marsden, A. J., Happé, F., Scott, S. K., Sauter, D., Tregay, J., Phillips, R. J., Baird, G., Simonoff, E., & Charman, T. (2011). A multimodal approach to emotion recognition ability in autism spectrum disorders. Journal of Child Psychology and Psychiatry, 52(3), 275-285. doi:10.1111/j.1469-7610.2010.02328.x.

    Abstract

    Background: Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal; hampered by small sample sizes, narrow IQ range and over-focus on the visual modality. Methods: We tested 99 adolescents (mean age 15;6 years, mean IQ 85) with an ASD and 57 adolescents without an ASD (mean age 15;6 years, mean IQ 88) on a facial emotion recognition task and two vocal emotion recognition tasks (one verbal; one non-verbal). Recognition of happiness, sadness, fear, anger, surprise and disgust were tested. Using structural equation modelling, we conceptualised emotion recognition ability as a multimodal construct, measured by the three tasks. We examined how the mean levels of recognition of the six emotions differed by group (ASD vs. non-ASD) and IQ (>= 80 vs. < 80). Results: There was no significant difference between groups for the majority of emotions and analysis of error patterns suggested that the ASD group were vulnerable to the same pattern of confusions between emotions as the non-ASD group. However, recognition ability was significantly impaired in the ASD group for surprise. IQ had a strong and significant effect on performance for the recognition of all six emotions, with higher IQ adolescents outperforming lower IQ adolescents. Conclusions: The findings do not suggest a fundamental difficulty with the recognition of basic emotions in adolescents with ASD.
  • Jordan, F. (2011). A phylogenetic analysis of the evolution of Austronesian sibling terminologies. Human Biology, 83, 297-321. doi:10.3378/027.083.0209.

    Abstract

    Social structure in human societies is underpinned by the variable expression of ideas about relatedness between different types of kin. We express these ideas through language in our kin terminology: to delineate who is kin and who is not, and to attach meanings to the types of kin labels associated with different individuals. Cross-culturally, there is a regular and restricted range of patterned variation in kin terminologies, and to date, our understanding of this diversity has been hampered by inadequate techniques for dealing with the hierarchical relatedness of languages (Galton’s Problem). Here I use maximum-likelihood and Bayesian phylogenetic comparative methods to begin to tease apart the processes underlying the evolution of kin terminologies in the Austronesian language family, focusing on terms for siblings. I infer (1) the probable ancestral states and (2) evolutionary models of change for the semantic distinctions of relative age (older/younger sibling) and relative sex (same-sex/opposite-sex). Analyses show that early Austronesian languages contained the relative-age, but not the relative-sex distinction; the latter was reconstructed firmly only for the ancestor of Eastern Malayo-Polynesian languages. Both distinctions were best characterized by evolutionary models where the gains and losses of the semantic distinctions were equally likely. A multi-state model of change examined how the relative-sex distinction could be elaborated and found that some transitions in kin terms were not possible: jumps from absence to heavily elaborated were very unlikely, as was piece-wise dismantling of elaborate distinctions. Cultural ideas about what types of kin distinctions are important can be embedded in the semantics of language; using a phylogenetic evolutionary framework we can understand how those distinctions in meaning change through time.
  • Jordens, P. (1998). Defaultformen des Präteritums. Zum Erwerb der Vergangenheitsmorphologie im Niederländischen. In H. Wegener (Ed.), Eine zweite Sprache lernen (pp. 61-88). Tübingen, Germany: Verlag Gunter Narr.
  • Junge, C., Cutler, A., & Hagoort, P. (2012). Electrophysiological evidence of early word learning. Neuropsychologia, 50, 3702-3712. doi:10.1016/j.neuropsychologia.2012.10.012.

    Abstract

    Around their first birthday infants begin to talk, yet they comprehend words long before. This study investigated the event-related potentials (ERP) responses of nine-month-olds on basic level picture-word pairings. After a familiarization phase of six picture-word pairings per semantic category, comprehension for novel exemplars was tested in a picture-word matching paradigm. ERPs time-locked to pictures elicited a modulation of the Negative Central (Nc) component, associated with visual attention and recognition. It was attenuated by category repetition as well as by the type-token ratio of picture context. ERPs time-locked to words in the training phase became more negative with repetition (N300-600), but there was no influence of picture type-token ratio, suggesting that infants have identified the concept of each picture before a word was presented. Results from the test phase provided clear support that infants integrated word meanings with (novel) picture context. Here, infants showed different ERP responses for words that did or did not align with the picture context: a phonological mismatch (N200) and a semantic mismatch (N400). Together, results were informative of visual categorization, word recognition and word-to-world-mappings, all three crucial processes for vocabulary construction.
  • Junge, C., Kooijman, V., Hagoort, P., & Cutler, A. (2012). Rapid recognition at 10 months as a predictor of language development. Developmental Science, 15, 463-473. doi:10.1111/j.1467-7687.2012.1144.x.

    Abstract

    Infants’ ability to recognize words in continuous speech is vital for building a vocabulary. We here examined the amount and type of exposure needed for 10-month-olds to recognize words. Infants first heard a word, either embedded within an utterance or in isolation, then recognition was assessed by comparing event-related potentials to this word versus a word that they had not heard directly before. Although all 10-month-olds showed recognition responses to words first heard in isolation, not all infants showed such responses to words they had first heard within an utterance. Those that did succeed in the latter, harder, task, however, understood more words and utterances when re-tested at 12 months, and understood more words and produced more words at 24 months, compared with those who had shown no such recognition response at 10 months. The ability to rapidly recognize the words in continuous utterances is clearly linked to future language development.
  • Kelly, S., Byrne, K., & Holler, J. (2011). Raising the stakes of communication: Evidence for increased gesture production as predicted by the GSA framework. Information, 2(4), 579-593. doi:10.3390/info2040579.

    Abstract

    Theorists of language have argued that co-speech hand gestures are an intentional part of social communication. The present study provides evidence for these claims by showing that speakers adjust their gesture use according to the perceived relevance of the information to their audience. Participants were asked to read about items that were and were not useful in a wilderness survival scenario, under the pretense that they would then explain (on camera) what they learned to one of two different audiences. For one audience (a group of college students in a dormitory orientation activity), the stakes of successful communication were low; for the other audience (a group of students preparing for a rugged camping trip in the mountains), the stakes were high. In their explanations to the camera, participants in the high stakes condition produced three times as many representational gestures, and spent three times as much time gesturing, as participants in the low stakes condition. This study extends previous research by showing that the anticipated consequences of one’s communication—namely, the degree to which information may be useful to an intended recipient—influence speakers’ use of gesture.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2012). The communicative influence of gesture and action during speech comprehension: Gestures have the upper hand [Abstract]. Abstracts of the Acoustics 2012 Hong Kong conference published in The Journal of the Acoustical Society of America, 131, 3311. doi:10.1121/1.4708385.

    Abstract

    Hand gestures combine with speech to form a single integrated system of meaning during language comprehension (Kelly et al., 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. Thirty-one participants watched videos presenting speech with gestures or manual actions on objects. The relationship between the speech and gesture/action was either complementary (e.g., “He found the answer,” while producing a calculating gesture vs. actually using a calculator) or incongruent (e.g., the same sentence paired with the incongruent gesture/action of stirring with a spoon). Participants watched the video (prime) and then responded to a written word (target) that was or was not spoken in the video prime (e.g., “found” or “cut”). ERPs were taken to the primes (time-locked to the spoken verb, e.g., “found”) and the written targets. For primes, there was a larger frontal N400 (semantic processing) to incongruent vs. congruent items for the gesture, but not action, condition. For targets, the P2 (phonemic processing) was smaller for target words following congruent vs. incongruent gesture, but not action, primes. These findings suggest that hand gestures are integrated with speech in a privileged fashion compared to manual actions on objects.