Publications

  • Alibali, M. W., Kita, S., & Young, A. J. (2000). Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes, 15(6), 593-613. doi:10.1080/016909600750040571.

    Abstract

    At what point in the process of speech production is gesture involved? According to the Lexical Retrieval Hypothesis, gesture is involved in generating the surface forms of utterances. Specifically, gesture facilitates access to items in the mental lexicon. According to the Information Packaging Hypothesis, gesture is involved in the conceptual planning of messages. Specifically, gesture helps speakers to "package" spatial information into verbalisable units. We tested these hypotheses in 5-year-old children, using two tasks that required comparable lexical access, but different information packaging. In the explanation task, children explained why two items did or did not have the same quantity (Piagetian conservation). In the description task, children described how two items looked different. Children provided comparable verbal responses across tasks; thus, lexical access was comparable. However, the demands for information packaging differed. Participants' gestures also differed across the tasks. In the explanation task, children produced more gestures that conveyed perceptual dimensions of the objects, and more gestures that conveyed information that differed from the accompanying speech. The results suggest that gesture is involved in the conceptual planning of speech.
  • Ambridge, B., Rowland, C. F., Theakston, A. L., & Tomasello, M. (2006). Comparing different accounts of inversion errors in children's non-subject wh-questions: ‘What experimental data can tell us?’. Journal of Child Language, 33(3), 519-557. doi:10.1017/S0305000906007513.

    Abstract

    This study investigated different accounts of children's acquisition of non-subject wh-questions. Questions using each of 4 wh-words (what, who, how and why), and 3 auxiliaries (BE, DO and CAN) in 3sg and 3pl form were elicited from 28 children aged 3;6–4;6. Rates of non-inversion error (Who she is hitting?) were found not to differ by wh-word, auxiliary or number alone, but by lexical auxiliary subtype and by wh-word+lexical auxiliary combination. This finding counts against simple rule-based accounts of question acquisition that include no role for the lexical subtype of the auxiliary, and suggests that children may initially acquire wh-word+lexical auxiliary combinations from the input. For DO questions, auxiliary-doubling errors (What does she does like?) were also observed, although previous research has found that such errors are virtually non-existent for positive questions. Possible reasons for this discrepancy are discussed.
  • Ameka, F. K. (1992). Interjections: The universal yet neglected part of speech. Journal of Pragmatics, 18(2/3), 101-118. doi:10.1016/0378-2166(92)90048-G.
  • Ameka, F. K. (1992). The meaning of phatic and conative interjections. Journal of Pragmatics, 18(2/3), 245-271. doi:10.1016/0378-2166(92)90054-F.

    Abstract

    The purpose of this paper is to investigate the meanings of the members of two subclasses of interjections in Ewe: the conative/volitive, which are directed at an auditor, and the phatic, which are used in the maintenance of social and communicative contact. It is demonstrated that interjections, like other linguistic signs, have meanings which can be rigorously stated. In addition, the paper explores the differences and similarities between the semantic structures of interjections on the one hand and formulaic words on the other. This is done through a comparison of the semantics and pragmatics of an interjection and a formulaic word which are used for welcoming people in Ewe. It is contended that formulaic words are speech acts qua speech acts, while interjections are not fully fledged speech acts because they lack an illocutionary dictum in their semantic structure.
  • Baayen, R. H., Feldman, L. B., & Schreuder, R. (2006). Morphological influences on the recognition of monosyllabic monomorphemic words. Journal of Memory and Language, 55(2), 290-313. doi:10.1016/j.jml.2006.03.008.

    Abstract

    Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. Journal of Experimental Psychology: General, 133, 283–316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of monosyllabic, morphologically simple words. The present study supplements their work by making use of more flexible regression techniques that are better suited for dealing with collinearity and non-linearity, and by documenting the contributions of several variables that they did not take into account. In particular, we included measures of morphological connectivity, as well as a new frequency count, the frequency of a word in speech rather than in writing. The morphological measures emerged as strong predictors in visual lexical decision, but not in naming, providing evidence for the importance of morphological connectivity even for the recognition of morphologically simple words. Spoken frequency was predictive not only for naming but also for visual lexical decision. In addition, it co-determined subjective frequency estimates and norms for age of acquisition. Finally, we show that frequency predominantly reflects conceptual familiarity rather than familiarity with a word’s form.
  • Baayen, H., & Lieber, R. (1991). Productivity and English derivation: A corpus-based study. Linguistics, 29(5), 801-843. doi:10.1515/ling.1991.29.5.801.

    Abstract

    The notion of productivity is one which is central to the study of morphology. It is a notion about which linguists frequently have intuitions. But it is a notion which still remains somewhat problematic in the literature on generative morphology some 15 years after Aronoff raised the issue in his (1976) monograph. In this paper we will review some of the definitions and measures of productivity discussed in the generative and pregenerative literature. We will adopt the definition of productivity suggested by Schultink (1961) and propose a number of statistical measures of productivity whose results, when applied to a fixed corpus, accord nicely with our intuitive estimates of productivity, and which shed light on the quantitative weight of linguistic restrictions on word formation rules. Part of our purpose here is also a very simple one: to make available a substantial set of empirical data concerning the productivity of some of the major derivational affixes of English.

  • Bastiaansen, M. C. M., & Knösche, T. R. (2000). MEG tangential derivative mapping applied to Event-Related Desynchronization (ERD) research. Clinical Neurophysiology, 111, 1300-1305.

    Abstract

    Objectives: A problem with the topographic mapping of MEG data recorded with axial gradiometers is that field extrema are measured at sensors located at either side of a neuronal generator instead of at sensors directly above the source. This is problematic for the computation of event-related desynchronization (ERD) on MEG data, since ERD relies on a correspondence between the signal maximum and the location of the neuronal generator. Methods: We present a new method based on computing spatial derivatives of the MEG data. The limitations of this method were investigated by means of forward simulations, and the method was applied to a 150-channel MEG dataset. Results: The simulations showed that the method has some limitations. (1) Fewer channels reduce accuracy and amplitude. (2) It is less suitable for deep or very extended sources. (3) Multiple sources can only be distinguished if they are not too close to each other. Applying the method in the calculation of ERD on experimental data led to a considerable improvement of the ERD maps. Conclusions: The proposed method offers a significant advantage over raw MEG signals, both for the topographic mapping of MEG and for the analysis of rhythmic MEG activity by means of ERD.
  • Beattie, G. W., Cutler, A., & Pearson, M. (1982). Why is Mrs Thatcher interrupted so often? [Letters to Nature]. Nature, 300, 744-747. doi:10.1038/300744a0.

    Abstract

    If a conversation is to proceed smoothly, the participants have to take turns to speak. Studies of conversation have shown that there are signals which speakers give to inform listeners that they are willing to hand over the conversational turn. Some of these signals are part of the text (for example, completion of syntactic segments), some are non-verbal (such as completion of a gesture), but most are carried by the pitch, timing and intensity pattern of the speech; for example, both pitch and loudness tend to drop particularly low at the end of a speaker's turn. When one speaker interrupts another, the two can be said to be disputing who has the turn. Interruptions can occur because one participant tries to dominate or disrupt the conversation. But it could also be the case that mistakes occur in the way these subtle turn-yielding signals are transmitted and received. We demonstrate here that many interruptions in an interview with Mrs Margaret Thatcher, the British Prime Minister, occur at points where independent judges agree that her turn appears to have finished. It is suggested that she is unconsciously displaying turn-yielding cues at certain inappropriate points. The turn-yielding cues responsible are identified.
  • De Bleser, R., Willmes, K., Graetz, P., & Hagoort, P. (1991). De Akense Afasie Test. Logopedie en Foniatrie, 63, 207-217.
  • Bock, K., Butterfield, S., Cutler, A., Cutting, J. C., Eberhard, K. M., & Humphreys, K. R. (2006). Number agreement in British and American English: Disagreeing to agree collectively. Language, 82(1), 64-113.

    Abstract

    British and American speakers exhibit different verb number agreement patterns when sentence subjects have collective head nouns. From linguistic and psycholinguistic accounts of how agreement is implemented, three alternative hypotheses can be derived to explain these differences. The hypotheses involve variations in the representation of notional number, disparities in how notional and grammatical number are used, and inequalities in the grammatical number specifications of collective nouns. We carried out a series of corpus analyses, production experiments, and norming studies to test these hypotheses. The results converge to suggest that British and American speakers are equally sensitive to variations in notional number and implement subject-verb agreement in much the same way, but are likely to differ in the lexical specifications of number for collectives. The findings support a psycholinguistic theory that explains verb and pronoun agreement within a parallel architecture of lexical and syntactic formulation.
  • Bod, R., Fitz, H., & Zuidema, W. (2006). On the structural ambiguity in natural language that the neural architecture cannot deal with [Commentary]. Behavioral and Brain Sciences, 29, 71-72. doi:10.1017/S0140525X06239025.

    Abstract

    We argue that van der Velde and de Kamps's model does not solve the binding problem but merely shifts the burden of constructing appropriate neural representations of sentence structure to unexplained preprocessing of the linguistic input. As a consequence, their model is not able to explain how various neural representations can be assigned to sentences that are structurally ambiguous.
  • Bohnemeyer, J. (2000). Event order in language and cognition. Linguistics in the Netherlands, 17(1), 1-16. doi:10.1075/avt.17.04boh.
  • Bowerman, M. (1971). [Review of A. Bar Adon & W.F. Leopold (Eds.), Child language: A book of readings (Prentice Hall, 1971)]. Contemporary Psychology: APA Review of Books, 16, 808-809.
  • Bowerman, M. (1983). How do children avoid constructing an overly general grammar in the absence of feedback about what is not a sentence? Papers and Reports on Child Language Development, 22, 23-35.

    Abstract

    The theory that language acquisition is guided and constrained by inborn linguistic knowledge is assessed. Specifically, the "no negative evidence" view, the belief that linguistic theory should be restricted in such a way that the grammars it allows can be learned by children on the basis of positive evidence only, is explored. Child language data are cited in order to investigate influential innatist approaches to language acquisition. Baker's view that children are innately constrained in significant ways with respect to language acquisition is evaluated. Evidence indicates that children persistently make overgeneralizations of the sort that violate the constrained view of language acquisition. Since children eventually do develop correct adult grammar, they must have other mechanisms for cutting back on these overgeneralizations. Thus, any hypothesized constraints cannot be justified on grounds that without them the child would end up with overly general grammar. It is necessary to explicate the mechanisms by which children eliminate their tendency toward overgeneralization.
  • Bowerman, M. (1982). Evaluating competing linguistic models with language acquisition data: Implications of developmental errors with causative verbs. Quaderni di semantica, 3, 5-66.
  • Braun, B. (2006). Phonetics and phonology of thematic contrast in German. Language and Speech, 49(4), 451-493.

    Abstract

    It is acknowledged that contrast plays an important role in understanding discourse and information structure. While it is commonly assumed that contrast can be marked by intonation only, our understanding of the intonational realization of contrast is limited. For German there is mainly introspective evidence that the rising theme accent (or topic accent) is realized differently when signaling contrast than when not. In this article, the acoustic basis for the reported impressionistic differences is investigated in terms of the scaling (height) and alignment (positioning) of tonal targets.

    Subjects read target sentences in a contrastive and a noncontrastive context (Experiment 1). Prosodic annotation revealed that thematic accents were not realized with different accent types in the two contexts but acoustic comparison showed that themes in contrastive context exhibited a higher and later peak. The alignment and scaling of accents can hence be controlled in a linguistically meaningful way, which has implications for intonational phonology. In Experiment 2, nonlinguists' perception of a subset of the production data was assessed. They had to choose whether, in a contrastive context, the presumed contrastive or noncontrastive realization of a sentence was more appropriate. For some sentence pairs only, subjects had a clear preference. For Experiment 3, a group of linguists annotated the thematic accents of the contrastive and noncontrastive versions of the same data as used in Experiment 2. There was considerable disagreement in labels, but different accent types were consistently used when the two versions differed strongly in F0 excursion. Although themes in contrastive contexts were clearly produced differently than themes in noncontrastive contexts, this difference is not easily perceived or annotated.
  • Braun, B., Kochanski, G., Grabe, E., & Rosner, B. S. (2006). Evidence for attractors in English intonation. Journal of the Acoustical Society of America, 119(6), 4006-4015. doi:10.1121/1.2195267.

    Abstract

    Although the pitch of the human voice is continuously variable, some linguists contend that intonation in speech is restricted to a small, limited set of patterns. This claim is tested by asking subjects to mimic a block of 100 randomly generated intonation contours and then to imitate themselves in several successive sessions. The produced f0 contours gradually converge towards a limited set of distinct, previously recognized basic English intonation patterns. These patterns are "attractors" in the space of possible English intonation contours. The convergence does not occur immediately. Seven of the ten participants show continued convergence toward their attractors after the first iteration. Subjects retain and use information beyond phonological contrasts, suggesting that intonational phonology is not a complete description of their mental representation of intonation.
  • Broeder, D., & Wittenburg, P. (2006). The IMDI metadata framework, its current application and future direction. International Journal of Metadata, Semantics and Ontologies, 1(2), 119-132. doi:10.1504/IJMSO.2006.011008.

    Abstract

    In addition to a suitable set of metadata descriptors for language resources, the IMDI Framework offers a set of tools and an infrastructure to use them. This paper gives an overview of all these aspects and, at the end, describes the intentions and hopes for ensuring the interoperability of the IMDI framework with more general frameworks now in development. An evaluation of the current state of the IMDI Framework is presented, with an analysis of its benefits and more problematic issues. Finally, we describe work on issues of long-term stability for IMDI by linking up to the work done within the ISO TC37/SC4 subcommittee.
  • Broeder, D., Auer, E., & Wittenburg, P. (2006). Unique resource identifiers. Language Archive Newsletter, no. 8, 8-9.
  • Broersma, M., & De Bot, K. (2006). Triggered codeswitching: A corpus-based evaluation of the original triggering hypothesis and a new alternative. Bilingualism: Language and Cognition, 9(1), 1-13. doi:10.1017/S1366728905002348.

    Abstract

    In this article the triggering hypothesis for codeswitching proposed by Michael Clyne is discussed and tested. According to this hypothesis, cognates can facilitate codeswitching of directly preceding or following words. It is argued that the triggering hypothesis in its original form is incompatible with language production models, as it assumes that language choice takes place at the surface structure of utterances, while in bilingual production models language choice takes place along with lemma selection. An adjusted version of the triggering hypothesis is proposed in which triggering takes place during lemma selection and the scope of triggering is extended to basic units in language production. Data from a Dutch–Moroccan Arabic corpus are used for a statistical test of the original and the adjusted triggering theory. The codeswitching patterns found in the data support part of the original triggering hypothesis, but they are best explained by the adjusted triggering theory.
  • Brown, C. M., Van Berkum, J. J. A., & Hagoort, P. (2000). Discourse before gender: An event-related brain potential study on the interplay of semantic and syntactic information during spoken language understanding. Journal of Psycholinguistic Research, 29(1), 53-68. doi:10.1023/A:1005172406969.

    Abstract

    A study is presented on the effects of discourse–semantic and lexical–syntactic information during spoken sentence processing. Event-related brain potentials (ERPs) were registered while subjects listened to discourses that ended in a sentence with a temporary syntactic ambiguity. The prior discourse–semantic information biased toward one analysis of the temporary ambiguity, whereas the lexical-syntactic information allowed only for the alternative analysis. The ERP results show that discourse–semantic information can momentarily take precedence over syntactic information, even if this violates grammatical gender agreement rules.
  • Brown, C. M., Hagoort, P., & Chwilla, D. J. (2000). An event-related brain potential analysis of visual word priming effects. Brain and Language, 72, 158-190. doi:10.1006/brln.1999.2284.

    Abstract

    Two experiments are reported that provide evidence on task-induced effects during visual lexical processing in a prime-target semantic priming paradigm. The research focuses on target expectancy effects by manipulating the proportion of semantically related and unrelated word pairs. In Experiment 1, a lexical decision task was used and reaction times (RTs) and event-related brain potentials (ERPs) were obtained. In Experiment 2, subjects silently read the stimuli, without any additional task demands, and ERPs were recorded. The RT and ERP results of Experiment 1 demonstrate that an expectancy mechanism contributed to the priming effect when a high proportion of related word pairs was presented. The ERP results of Experiment 2 show that in the absence of extraneous task requirements, an expectancy mechanism is not active. However, a standard ERP semantic priming effect was obtained in Experiment 2. The combined results show that priming effects due to relatedness proportion are induced by task demands and are not a standard aspect of online lexical processing.
  • Brown, P., & Levinson, S. C. (1992). 'Left' and 'right' in Tenejapa: Investigating a linguistic and conceptual gap. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 45(6), 590-611.

    Abstract

    From the perspective of a Kantian belief in the fundamental human tendency to cleave space along the three planes of the human body, Tenejapan Tzeltal exhibits a linguistic gap: there are no linguistic expressions that designate regions (as in English to my left) or describe the visual field (as in to the left of the tree) on the basis of a plane bisecting the body into a left and right side. Tenejapans have expressions for left and right hands (xin k'ab and wa'el k'ab), but these are basically body-part terms; they are not generalized to form a division of space. This paper describes the results of various elicited production tasks in which concepts of left and right would provide a simple solution, showing that Tenejapan consultants use other notions even when the relevant linguistic distinctions could be made in Tzeltal (e.g. describing the position of one's limbs, or describing rotation of one's body). Instead of using the left-hand/right-hand distinction to construct a division of space, Tenejapans utilize a number of other systems: (i) an absolute, 'cardinal direction' system, supplemented by reference to other geographic or landmark directions, (ii) a generative segmentation of objects and places into analogic body-parts or other kinds of parts, and (iii) a rich system of positional adjectives to describe the exact disposition of things. These systems work conjointly to specify locations with precision and elegance. The overall system is not primarily egocentric, and it makes no essential reference to planes through the human body.
  • Brown, P. (1983). [Review of the book Conversational routine: Explorations in standardized communication situations and prepatterned speech ed. by Florian Coulmas]. Language, 59, 215-219.
  • Brown, P. (1983). [Review of the books Mayan Texts I, II, and III ed. by Louanna Furbee-Losee]. International Journal of American Linguistics, 49, 337-341.
  • Brown, P. (2006). Language, culture and cognition: The view from space. Zeitschrift für Germanistische Linguistik, 34, 64-86.

    Abstract

    This paper addresses the vexed questions of how language relates to culture, and what kind of notion of culture is important for linguistic explanation. I first sketch five perspectives - five different construals - of culture apparent in linguistics and in cognitive science more generally. These are: (i) culture as ethno-linguistic group, (ii) culture as a mental module, (iii) culture as knowledge, (iv) culture as context, and (v) culture as a process emergent in interaction. I then present my own work on spatial language and cognition in a Mayan language and culture, to explain why I believe a concept of culture is important for linguistics. I argue for a core role for cultural explanation in two domains: in analysing the semantics of words embedded in cultural practices which color their meanings (in this case, spatial frames of reference), and in characterizing thematic and functional links across different domains in the social and semiotic life of a particular group of people.
  • Burenhult, N. (2006). Body part terms in Jahai. Language Sciences, 28(2-3), 162-180. doi:10.1016/j.langsci.2005.11.002.

    Abstract

    This article explores the lexicon of body part terms in Jahai, a Mon-Khmer language spoken by a group of hunter–gatherers in the Malay Peninsula. It provides an extensive inventory of body part terms and describes their structural and semantic properties. The Jahai body part lexicon pays attention to fine anatomical detail but lacks labels for major, ‘higher-level’ categories, like ‘trunk’, ‘limb’, ‘arm’ and ‘leg’. In this lexicon it is therefore sometimes difficult to discern a clear partonomic hierarchy, a presumed universal of body part terminology.
  • Carlsson, K., Andersson, J., Petrovic, P., Petersson, K. M., Öhman, A., & Ingvar, M. (2006). Predictability modulates the affective and sensory-discriminative neural processing of pain. NeuroImage, 32(4), 1804-1814. doi:10.1016/j.neuroimage.2006.05.027.

    Abstract

    Knowing what is going to happen next, that is, the capacity to predict upcoming events, modulates the extent to which aversive stimuli induce stress and anxiety. We explored this issue by manipulating the temporal predictability of aversive events by means of a visual cue, which was either correlated or uncorrelated with pain stimuli (electric shocks). Subjects reported lower levels of anxiety, negative valence and pain intensity when shocks were predictable. In addition to attenuating focus on danger, predictability allows for correct temporal estimation of, and selective attention to, the sensory input. With functional magnetic resonance imaging, we found that predictability was related to enhanced activity in relevant sensory-discriminative processing areas, such as the primary and secondary sensory cortex and posterior insula. In contrast, the unpredictable, more aversive context was correlated with brain activity in the anterior insula and the orbitofrontal cortex, areas associated with affective pain processing. This context also prompted increased activity in the posterior parietal cortex and lateral prefrontal cortex, which we attribute to enhanced alertness and sustained attention during unpredictability.
  • Carlsson, K., Petrovic, P., Skare, S., Petersson, K. M., & Ingvar, M. (2000). Tickling expectations: Neural processing in anticipation of a sensory stimulus. Journal of Cognitive Neuroscience, 12(4), 691-703. doi:10.1162/089892900562318.
  • Carota, F. (2006). Derivational morphology of Italian: Principles for formalization. Literary and Linguistic Computing, 21(SUPPL. 1), 41-53. doi:10.1093/llc/fql007.

    Abstract

    The present paper investigates the major derivational strategies underlying the formation of suffixed words in Italian, with the purpose of tackling the issue of their formalization. After specifying the theoretical cognitive premises that orient the work, the interacting component modules of the suffixation process, i.e. morphonology, morphotactics and affixal semantics, are explored empirically, drawing on ample naturally occurring data from a corpus of written Italian. Special attention is paid to the semantic mechanisms that are involved in suffixation. Some semantic nuclei are identified for the major suffixed word types of Italian, which are due to word formation rules active at the synchronic level, and a semantic configuration of productive suffixes is suggested. A general framework is then sketched, which combines classical finite-state methods with a feature unification-based word grammar. More specifically, the semantic information specified for the affixal material is internalised into the structures of Lexical Functional Grammar (LFG). The formal model allows us to integrate the various modules of suffixation. In particular, it treats, on the one hand, the interface between morphonology/morphotactics and semantics and, on the other hand, the interface between suffixation and inflection. Furthermore, since LFG exploits a hierarchically organised lexicon to structure the information regarding the affixal material, affixal co-selectional restrictions are advantageously constrained, avoiding potential multiple spurious analyses/generations.
  • Cho, T., & McQueen, J. M. (2006). Phonological versus phonetic cues in native and non-native listening: Korean and Dutch listeners' perception of Dutch and English consonants. Journal of the Acoustical Society of America, 119(5), 3085-3096. doi:10.1121/1.2188917.

    Abstract

    We investigated how listeners of two unrelated languages, Korean and Dutch, process phonologically viable and nonviable consonants spoken in Dutch and American English. To Korean listeners, released final stops are nonviable because word-final stops in Korean are never released in words spoken in isolation, but to Dutch listeners, unreleased word-final stops are nonviable because word-final stops in Dutch are generally released in words spoken in isolation. Two phoneme monitoring experiments showed a phonological effect on both Dutch and English stimuli: Korean listeners detected the unreleased stops more rapidly whereas Dutch listeners detected the released stops more rapidly and/or more accurately. The Koreans, however, detected released stops more accurately than unreleased stops, but only in the non-native language they were familiar with (English). The results suggest that, in non-native speech perception, phonological legitimacy in the native language can be more important than the richness of phonetic information, though familiarity with phonetic detail in the non-native language can also improve listening performance.
  • Choi, S., & Bowerman, M. (1991). Learning to express motion events in English and Korean: The influence of language-specific lexicalization patterns. Cognition, 41, 83-121. doi:10.1016/0010-0277(91)90033-Z.

    Abstract

    English and Korean differ in how they lexicalize the components of motion events. English characteristically conflates Motion with Manner, Cause, or Deixis, and expresses Path separately. Korean, in contrast, conflates Motion with Path and elements of Figure and Ground in transitive clauses for caused Motion, but conflates Motion with Deixis and spells out Path and Manner separately in intransitive clauses for spontaneous motion. Children learning English and Korean show sensitivity to language-specific patterns in the way they talk about motion from as early as 17–20 months. For example, learners of English quickly generalize their earliest spatial words — Path particles like up, down, and in — to both spontaneous and caused changes of location and, for up and down, to posture changes, while learners of Korean keep words for spontaneous and caused motion strictly separate and use different words for vertical changes of location and posture changes. These findings challenge the widespread view that children initially map spatial words directly to nonlinguistic spatial concepts, and suggest that they are influenced by the semantic organization of their language virtually from the beginning. We discuss how input and cognition may interact in the early phases of learning to talk about space.
  • Cholin, J., Levelt, W. J. M., & Schiller, N. O. (2006). Effects of syllable frequency in speech production. Cognition, 99, 205-235. doi:10.1016/j.cognition.2005.01.009.

    Abstract

    In the speech production model proposed by [Levelt, W. J. M., Roelofs, A., Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral and Brain Sciences, 22, pp. 1-75.], syllables play a crucial role at the interface of phonological and phonetic encoding. At this interface, abstract phonological syllables are translated into phonetic syllables. It is assumed that this translation process is mediated by a so-called Mental Syllabary. Rather than constructing the motor programs for each syllable on-line, the mental syllabary is hypothesized to provide pre-compiled gestural scores for the articulators. In order to find evidence for such a repository, we investigated syllable-frequency effects: If the mental syllabary consists of retrievable representations corresponding to syllables, then the retrieval process should be sensitive to frequency differences. In a series of experiments using a symbol-position association learning task, we tested whether high-frequency syllables are retrieved and produced faster than low-frequency syllables. We found significant syllable frequency effects with monosyllabic pseudo-words and disyllabic pseudo-words in which the first syllable bore the frequency manipulation; no effect was found when the frequency manipulation was on the second syllable. The implications of these results for the theory of word form encoding at the interface of phonological and phonetic encoding, especially with respect to the access mechanisms to the mental syllabary in the speech production model of Levelt et al., are discussed.
  • Cronin, K. A., Mitchell, M. A., Lonsdorf, E. V., & Thompson, S. D. (2006). One year later: Evaluation of PMC-Recommended births and transfers. Zoo Biology, 25, 267-277. doi:10.1002/zoo.20100.

    Abstract

    To meet their exhibition, conservation, education, and scientific goals, members of the American Zoo and Aquarium Association (AZA) collaborate to manage their living collections as single species populations. These cooperative population management programs, Species Survival Plans (SSP) and Population Management Plans (PMP), issue specimen-by-specimen recommendations aimed at perpetuating captive populations by maintaining genetic diversity and demographic stability. Species Survival Plans and PMPs differ in that SSP participants agree to complete recommendations, whereas PMP participants need only take recommendations under advisement. We evaluated the effect of program type and the number of participating institutions on the success of actions recommended by the Population Management Center (PMC): transfers of specimens between institutions, breeding, and target number of offspring. We analyzed AZA studbook databases for the occurrence of recommended or unrecommended transfers and births during the 1-year period after the distribution of standard AZA Breeding-and-Transfer Plans. We had three major findings: 1) on average, both SSPs and PMPs fell about 25% short of their target; however, as the number of participating institutions increased so too did the likelihood that programs met or exceeded their target; 2) SSPs exhibited significantly greater transfer success than PMPs, although transfer success for both program types was below 50%; and 3) SSPs exhibited significantly greater breeding success than PMPs, although breeding success for both program types was below 20%. Together, these results indicate that the science and sophistication behind genetic and demographic management of captive populations may be compromised by the challenges of implementation.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1983). A language-specific comprehension strategy [Letters to Nature]. Nature, 304, 159-160. doi:10.1038/304159a0.

    Abstract

    Infants acquire whatever language is spoken in the environment into which they are born. The mental capability of the newborn child is not biased in any way towards the acquisition of one human language rather than another. Because psychologists who attempt to model the process of language comprehension are interested in the structure of the human mind, rather than in the properties of individual languages, strategies which they incorporate in their models are presumed to be universal, not language-specific. In other words, strategies of comprehension are presumed to be characteristic of the human language processing system, rather than, say, the French, English, or Igbo language processing systems. We report here, however, on a comprehension strategy which appears to be used by native speakers of French but not by native speakers of English.
  • Cutler, A., Sebastian-Galles, N., Soler-Vilageliu, O., & Van Ooijen, B. (2000). Constraints of vowels and consonants on lexical selection: Cross-linguistic comparisons. Memory & Cognition, 28, 746-755.

    Abstract

    Languages differ in the constitution of their phonemic repertoire and in the relative distinctiveness of phonemes within the repertoire. In the present study, we asked whether such differences constrain spoken-word recognition, via two word reconstruction experiments, in which listeners turned non-words into real words by changing single sounds. The experiments were carried out in Dutch (which has a relatively balanced vowel-consonant ratio and many similar vowels) and in Spanish (which has many more consonants than vowels and high distinctiveness among the vowels). Both Dutch and Spanish listeners responded significantly faster and more accurately when required to change vowels as opposed to consonants; when allowed to change any phoneme, they more often altered vowels than consonants. Vowel information thus appears to constrain lexical selection less tightly (allow more potential candidates) than does consonant information, independent of language-specific phoneme repertoire and of relative distinctiveness of vowels.
  • Cutler, A. (1992). Cross-linguistic differences in speech segmentation. MRC News, 56, 8-9.
  • Cutler, A., & Van de Weijer, J. (2000). De ontdekking van de eerste woorden. Stem-, Spraak- en Taalpathologie, 9, 245-259.

    Abstract

    Speech is continuous; there are no reliable signals that tell the listener where one word ends and the next begins. For adult listeners, segmenting spoken language into separate words is therefore not unproblematic, but for a child who does not yet possess a vocabulary, the continuity of speech poses an even greater challenge. Nevertheless, most children produce their first recognizable words around the beginning of the second year of life. These early speech productions are preceded by a formidable perceptual achievement. During the first year of life - particularly during the second half - speech perception develops from a general phonetic discrimination capacity into a selective sensitivity to the phonological contrasts that occur in the native language. Recent research has further shown that, long before they can say even a single word, children are able to distinguish words that are characteristic of their native language from words that are not. Moreover, they can recognize words that were first presented in isolation within a continuous speech context. The daily language input to a child of this age does not exactly make this easy, for instance because most words do not occur in isolation. Yet the child is also offered some footholds, among other things because the vocabulary used is restricted.
  • Cutler, A., & Norris, D. (1992). Detection of vowels and consonants with minimal acoustic variation. Speech Communication, 11, 101-108. doi:10.1016/0167-6393(92)90004-Q.

    Abstract

    Previous research has shown that, in a phoneme detection task, vowels produce longer reaction times than consonants, suggesting that they are harder to perceive. One possible explanation for this difference is based upon their respective acoustic/articulatory characteristics. Another way of accounting for the findings would be to relate them to the differential functioning of vowels and consonants in the syllabic structure of words. In this experiment, we examined the second possibility. Targets were two pairs of phonemes, each containing a vowel and a consonant with similar phonetic characteristics. Subjects heard lists of English words and had to press a response key upon detecting the occurrence of a pre-specified target. This time, the phonemes which functioned as vowels in syllabic structure yielded shorter reaction times than those which functioned as consonants. This rules out an explanation for the response time difference between vowels and consonants in terms of function in syllable structure. Instead, we propose that consonantal and vocalic segments differ with respect to variability of tokens, both in the acoustic realisation of targets and in the representation of targets by listeners.
  • Cutler, A. (1971). [Review of the book Probleme der Aufgabenanalyse bei der Erstellung von Sprachprogrammen by K. Bung]. Babel, 7, 29-31.
  • Cutler, A., Weber, A., & Otake, T. (2006). Asymmetric mapping from phonetic to lexical representations in second-language listening. Journal of Phonetics, 34(2), 269-284. doi:10.1016/j.wocn.2005.06.002.

    Abstract

    The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
  • Cutler, A. (1982). Idioms: the older the colder. Linguistic Inquiry, 13(2), 317-320. Retrieved from http://www.jstor.org/stable/4178278?origin=JSTOR-pdf.
  • Cutler, A., & Fay, D. A. (1982). One mental lexicon, phonologically arranged: Comments on Hurford’s comments. Linguistic Inquiry, 13, 107-113. Retrieved from http://www.jstor.org/stable/4178262.
  • Cutler, A. (1991). Proceed with caution. New Scientist, (1799), 53-54.
  • Cutler, A. (1992). Proceedings with confidence. New Scientist, (1825), 54.
  • Cutler, A., & Butterfield, S. (1992). Rhythmic cues to speech segmentation: Evidence from juncture misperception. Journal of Memory and Language, 31, 218-236. doi:10.1016/0749-596X(92)90012-M.

    Abstract

    Segmentation of continuous speech into its component words is a nontrivial task for listeners. Previous work has suggested that listeners develop heuristic segmentation procedures based on experience with the structure of their language; for English, the heuristic is that strong syllables (containing full vowels) are most likely to be the initial syllables of lexical words, whereas weak syllables (containing central, or reduced, vowels) are nonword-initial, or, if word-initial, are grammatical words. This hypothesis is here tested against natural and laboratory-induced missegmentations of continuous speech. Precisely the expected pattern is found: listeners erroneously insert boundaries before strong syllables but delete them before weak syllables; boundaries inserted before strong syllables produce lexical words, while boundaries inserted before weak syllables produce grammatical words.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1992). The monolingual nature of speech segmentation by bilinguals. Cognitive Psychology, 24, 381-410.

    Abstract

    Monolingual French speakers employ a syllable-based procedure in speech segmentation; monolingual English speakers use a stress-based segmentation procedure and do not use the syllable-based procedure. In the present study French-English bilinguals participated in segmentation experiments with English and French materials. Their results as a group did not simply mimic the performance of English monolinguals with English language materials and of French monolinguals with French language materials. Instead, the bilinguals formed two groups, defined by forced choice of a dominant language. Only the French-dominant group showed syllabic segmentation and only with French language materials. The English-dominant group showed no syllabic segmentation in either language. However, the English-dominant group showed stress-based segmentation with English language materials; the French-dominant group did not. We argue that rhythmically based segmentation procedures are mutually exclusive, as a consequence of which speech segmentation by bilinguals is, in one respect at least, functionally monolingual.
  • Cutler, A., & Butterfield, S. (1991). Word boundary cues in clear speech: A supplementary report. Speech Communication, 10, 335-353. doi:10.1016/0167-6393(91)90002-B.

    Abstract

    One of a listener's major tasks in understanding continuous speech is segmenting the speech signal into separate words. When listening conditions are difficult, speakers can help listeners by deliberately speaking more clearly. In four experiments, we examined how word boundaries are produced in deliberately clear speech. In an earlier report we showed that speakers do indeed mark word boundaries in clear speech, by pausing at the boundary and lengthening pre-boundary syllables; moreover, these effects are applied particularly to boundaries preceding weak syllables. In English, listeners use segmentation procedures which make word boundaries before strong syllables easier to perceive; thus marking word boundaries before weak syllables in clear speech will make clear precisely those boundaries which are otherwise hard to perceive. The present report presents supplementary data, namely prosodic analyses of the syllable following a critical word boundary. More lengthening and greater increases in intensity were applied in clear speech to weak syllables than to strong. Mean F0 was also increased to a greater extent on weak syllables than on strong. Pitch movement, however, increased to a greater extent on strong syllables than on weak. The effects were, however, very small in comparison to the durational effects we observed earlier for syllables preceding the boundary and for pauses at the boundary.
  • Davidson, D. J. (2006). Strategies for longitudinal neurophysiology [commentary on Osterhout et al.]. Language Learning, 56(suppl. 1), 231-234. doi:10.1111/j.1467-9922.2006.00362.x.
  • Dell, G. S., Reed, K. D., Adams, D. R., & Meyer, A. S. (2000). Speech errors, phonotactic constraints, and implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1355-1367. doi:10.1037/0278-7393.26.6.1355.

    Abstract

    Speech errors follow the phonotactics of the language being spoken. For example, in English, if [ŋ] is mispronounced as [n], the [n] will always appear in a syllable coda. The authors created an analogue to this phenomenon by having participants recite lists of consonant-vowel-consonant syllables in 4 sessions on different days. In the first 2 experiments, some consonants were always onsets, some were always codas, and some could be both. In a third experiment, the set of possible onsets and codas depended on vowel identity. In all 3 studies, the production errors that occurred respected the "phonotactics" of the experiment. The results illustrate the implicit learning of the sequential constraints present in the stimuli and show that the language production system adapts to recent experience.
  • Desmet, T., De Baecke, C., Drieghe, D., Brysbaert, M., & Vonk, W. (2006). Relative clause attachment in Dutch: On-line comprehension corresponds to corpus frequencies when lexical variables are taken into account. Language and Cognitive Processes, 21(4), 453-485. doi:10.1080/01690960400023485.

    Abstract

    Desmet, Brysbaert, and De Baecke (2002a) showed that the production of relative clauses following two potential attachment hosts (e.g., ‘Someone shot the servant of the actress who was on the balcony’) was influenced by the animacy of the first host. These results were important because they refuted evidence from Dutch against experience-based accounts of syntactic ambiguity resolution, such as the tuning hypothesis. However, Desmet et al. did not provide direct evidence in favour of tuning, because their study focused on production and did not include reading experiments. In the present paper this line of research was extended. A corpus analysis and an eye-tracking experiment revealed that when taking into account lexical properties of the NP host sites (i.e., animacy and concreteness) the frequency pattern and the on-line comprehension of the relative clause attachment ambiguity do correspond. The implications for exposure-based accounts of sentence processing are discussed.
  • Dimroth, C., & Watorek, M. (2000). The scope of additive particles in basic learner languages. Studies in Second Language Acquisition, 22, 307-336. Retrieved from http://journals.cambridge.org/action/displayAbstract?aid=65981.

    Abstract

    Based on their longitudinal analysis of the acquisition of Dutch, English, French, and German, Klein and Perdue (1997) described a “basic learner variety” as valid cross-linguistically and comprising a limited number of shared syntactic patterns interacting with two types of constraints: (a) semantic—the NP whose referent has highest control comes first, and (b) pragmatic—the focus expression is in final position. These authors hypothesized that “the topic-focus structure also plays an important role in some other respects. . . . Thus, negation and (other) scope particles occur at the topic-focus boundary” (p. 318). This poses the problem of the interaction between the core organizational principles of the basic variety and optional items such as negative particles and scope particles, which semantically affect the whole or part of the utterance in which they occur. In this article, we test the validity of these authors' hypothesis for the acquisition of the additive scope particle also (and its translation equivalents). Our analysis is based on the European Science Foundation (ESF) data originally used to define the basic variety, but we also included some more advanced learner data from the same database. In doing so, we refer to the analyses of Dimroth and Klein (1996), which concern the interaction between scope particles and the part of the utterance they affect, and we make a distinction between maximal scope—that which is potentially affected by the particle—and the actual scope of a particle in relation to an utterance in a given discourse context.

  • Doherty, M., & Klein, W. (Eds.). (1991). Übersetzung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (84).
  • Drude, S. (2006). Documentação lingüística: O formato de anotação de textos. Cadernos de Estudos Lingüísticos, 35, 27-51.

    Abstract

    This paper presents the methods of language documentation as applied in the Awetí Language Documentation Project, one of the projects in the Documentation of Endangered Languages Programme (DOBES). It describes the steps by which a large digital corpus of annotated multi-media data is built. Special attention is devoted to the format of annotation of linguistic data. The Advanced Glossing format is presented and justified.
  • Dunn, M. (2006). [Review of the book Comparative Chukotko-Kamchatkan dictionary by Michael Fortescue]. Anthropological Linguistics, 48(3), 296-298.
  • Dunn, M. (2000). Planning for failure: The niche of standard Chukchi. Current Issues in Language Planning, 1, 389-399. doi:10.1080/14664200008668013.

    Abstract

    This paper examines the effects of language standardization and orthography design on the Chukchi linguistic ecology. The process of standardisation has not taken into consideration the gender-based sociolects of colloquial Chukchi and is based on a grammatical description which does not reflect actual Chukchi use; as a result, standard Chukchi has not gained a place in the Chukchi language ecology. The Cyrillic orthography developed for Chukchi is also problematic as it is based on features of Russian phonology, rather than on Chukchi itself: this has meant that a knowledge of written Chukchi is dependent on a knowledge of the principles of Russian orthography. These aspects of language planning have had a large impact on the pre-existing Chukchi language ecology, which has contributed to the obsolescence of the colloquial language.
  • Eibl-Eibesfeldt, I., & Senft, G. (1991). Trobriander (Papua-Neu-guinea, Trobriand -Inseln, Kaile'una) Tänze zur Einleitung des Erntefeier-Rituals. Film E 3129. Trobriander (Papua-Neuguinea, Trobriand-Inseln, Kiriwina); Ausschnitte aus einem Erntefesttanz. Film E3130. Publikationen zu wissenschaftlichen Filmen. Sektion Ethnologie, 17, 1-17.
  • Eisner, F., & McQueen, J. M. (2006). Perceptual learning in speech: Stability over time (L). Journal of the Acoustical Society of America, 119(4), 1950-1953. doi:10.1121/1.2178721.

    Abstract

    Perceptual representations of phonemes are flexible and adapt rapidly to accommodate idiosyncratic articulation in the speech of a particular talker. This letter addresses whether such adjustments remain stable over time and under exposure to other talkers. During exposure to a story, listeners learned to interpret an ambiguous sound as [f] or [s]. Perceptual adjustments measured after 12 h were as robust as those measured immediately after learning. Equivalent effects were found when listeners heard speech from other talkers in the 12 h interval, and when they had the opportunity to consolidate learning during sleep.
  • Enfield, N. J., Majid, A., & Van Staden, M. (2006). Cross-linguistic categorisation of the body: Introduction. Language Sciences, 28(2-3), 137-147. doi:10.1016/j.langsci.2005.11.001.

    Abstract

    The domain of the human body is an ideal focus for semantic typology, since the body is a physical universal and all languages have terms referring to its parts. Previous research on body part terms has depended on secondary sources (e.g. dictionaries), and has lacked sufficient detail or clarity for a thorough understanding of these terms’ semantics. The present special issue is the outcome of a collaborative project aimed at improving approaches to investigating the semantics of body part terms, by developing materials to elicit information that provides for cross-linguistic comparison. The articles in this volume are original fieldwork-based descriptions of terminology for parts of the body in ten languages. Also included are an elicitation guide and experimental protocol used in gathering data. The contributions provide inventories of body part terms in each language, with analysis of both intensional and extensional aspects of meaning, differences in morphological complexity, semantic relations among terms, and discussion of partonomic structure within the domain.
  • Enfield, N. J. (2006). Elicitation guide on parts of the body. Language Sciences, 28(2-3), 148-157. doi:10.1016/j.langsci.2005.11.003.

    Abstract

    This document is intended for use as an elicitation guide for the field linguist consulting with native speakers in collecting terms for parts of the body, and in the exploration of their semantics.
  • Enfield, N. J. (2006). [Review of the book A grammar of Semelai by Nicole Kruspe]. Linguistic Typology, 10(3), 452-455. doi:10.1515/LINGTY.2006.014.
  • Enfield, N. J. (2006). Languages as historical documents: The endangered archive in Laos. South East Asia Research, 14(3), 471-488.

    Abstract

    This paper reviews current discussion of the issue of just what is lost when a language dies. Special reference is made to the current situation in Laos, a country renowned for its considerable cultural and linguistic diversity. It focuses on the historical, anthropological and ecological knowledge that a language can encode, and the social and cultural consequences of the loss of such traditional knowledge when a language is no longer passed on. Finally, the article points out the paucity of studies and obstacles to field research on minority languages in Laos, which seriously hamper their documentation.
  • Enfield, N. J. (2006). Lao body part terms. Language Sciences, 28(2-3), 181-200. doi:10.1016/j.langsci.2005.11.011.

    Abstract

    This article presents a description of nominal expressions for parts of the human body conventionalised in Lao, a Southwestern Tai language spoken in Laos, Northeast Thailand, and Northeast Cambodia. An inventory of around 170 Lao expressions is listed, with commentary where some notability is determined, usually based on explicit comparison to the metalanguage, English. Notes on aspects of the grammatical and semantic structure of the set of body part terms are provided, including a discussion of semantic relations pertaining among members of the set of body part terms. I conclude that the semantic relations which pertain between terms for different parts of the body not only include part/whole relations, but also relations of location, connectedness, and general association. Calling the whole system a ‘partonomy’ attributes greater centrality to the part/whole relation than is warranted.
  • Enfield, N. J. (2000). The theory of cultural logic: How individuals combine social intelligence with semiotics to create and maintain cultural meaning. Cultural Dynamics, 12(1), 35-64. doi:10.1177/092137400001200102.

    Abstract

    The social world is an ecological complex in which cultural meanings and knowledges (linguistic and non-linguistic) personally embodied by individuals are intercalibrated via common attention to commonly accessible semiotic structures. This interpersonal ecology bridges realms which are the subject matter of both anthropology and linguistics, allowing the public maintenance of a system of assumptions and counter-assumptions among individuals as to what is mutually known (about), in general and/or in any particular context. The mutual assumption of particular cultural ideas provides human groups with common premises for predictably convergent inferential processes. This process of people collectively using effectively identical assumptions in interpreting each other's actions—i.e. hypothesizing as to each other's motivations and intentions—may be termed cultural logic. This logic relies on the establishment of stereotypes and other kinds of precedents, catalogued in individuals’ personal libraries, as models and scenarios which may serve as reference in inferring and attributing motivations behind people's actions, and behind other mysterious phenomena. This process of establishing conceptual convention depends directly on semiotics, since groups of individuals rely on external signs as material for common focus and, thereby, agreement. Social intelligence binds signs in the world (e.g. speech sounds impressing upon eardrums), with individually embodied representations (e.g. word meanings and contextual schemas). The innate tendency for people to model the intentions of others provides an ultimately biological account for the logic behind culture. Ethnographic examples are drawn from Laos and Australia.
  • Ernestus, M. (2006). Statistically gradient generalizations for contrastive phonological features. The Linguistic Review, 23(3), 217-233. doi:10.1515/TLR.2006.008.

    Abstract

    In mainstream phonology, contrastive properties, like stem-final voicing, are simply listed in the lexicon. This article reviews experimental evidence that such contrastive properties may be predictable to some degree and that the relevant statistically gradient generalizations form an inherent part of the grammar. The evidence comes from the underlying voice specification of stem-final obstruents in Dutch. Contrary to received wisdom, this voice specification is partly predictable from the obstruent’s manner and place of articulation and from the phonological properties of the preceding segments. The degree of predictability, which depends on the exact contents of the lexicon, directs speakers’ guesses of underlying voice specifications. Moreover, existing words that disobey the generalizations are disadvantaged by being recognized and produced more slowly and less accurately, also under natural conditions. We discuss how these observations can be accounted for in two different types of approaches to grammar, Stochastic Optimality Theory and exemplar-based modeling.
  • Ernestus, M., Lahey, M., Verhees, F., & Baayen, R. H. (2006). Lexical frequency and voice assimilation. Journal of the Acoustical Society of America, 120(2), 1040-1051. doi:10.1121/1.2211548.

    Abstract

    Acoustic duration and degree of vowel reduction are known to correlate with a word’s frequency of occurrence. The present study broadens the research on the role of frequency in speech production to voice assimilation. The test case was regressive voice assimilation in Dutch. Clusters from a corpus of read speech were more often perceived as unassimilated in lower-frequency words and as either completely voiced (regressive assimilation) or, unexpectedly, as completely voiceless (progressive assimilation) in higher-frequency words. Frequency did not predict the voice classifications over and above important acoustic cues to voicing, suggesting that the frequency effects on the classifications were carried exclusively by the acoustic signal. The duration of the cluster and the period of glottal vibration during the cluster decreased while the duration of the release noises increased with frequency. This indicates that speakers reduce articulatory effort for higher-frequency words, with some acoustic cues signaling more voicing and others less voicing. A higher frequency leads not only to acoustic reduction but also to more assimilation.
  • Eysenck, M. W., & Van Berkum, J. J. A. (1992). Trait anxiety, defensiveness, and the structure of worry. Personality and Individual Differences, 13(12), 1285-1290. Retrieved from http://www.sciencedirect.com/science//journal/01918869.

    Abstract

    A principal components analysis of the ten scales of the Worry Questionnaire revealed the existence of major worry factors or domains of social evaluation and physical threat, and these factors were confirmed in a subsequent item analysis. Those high in trait anxiety had much higher scores on the Worry Questionnaire than those low in trait anxiety, especially on those scales relating to social evaluation. Scores on the Marlowe-Crowne Social Desirability Scale were negatively related to worry frequency. However, groups of low-anxious and repressed individuals formed on the basis of their trait anxiety and social desirability scores did not differ in worry. It was concluded that worry, especially in the social evaluation domain, is of fundamental importance to trait anxiety.
  • Fisher, S. E., & Francks, C. (2006). Genes, cognition and dyslexia: Learning to read the genome. Trends in Cognitive Sciences, 10, 250-257. doi:10.1016/j.tics.2006.04.003.

    Abstract

    Studies of dyslexia provide vital insights into the cognitive architecture underpinning both disordered and normal reading. It is well established that inherited factors contribute to dyslexia susceptibility, but only very recently has evidence emerged to implicate specific candidate genes. In this article, we provide an accessible overview of four prominent examples--DYX1C1, KIAA0319, DCDC2 and ROBO1--and discuss their relevance for cognition. In each case correlations have been found between genetic variation and reading impairments, but precise risk variants remain elusive. Although none of these genes is specific to reading-related neuronal circuits, or even to the human brain, they have intriguing roles in neuronal migration or connectivity. Dissection of cognitive mechanisms that subserve reading will ultimately depend on an integrated approach, uniting data from genetic investigations, behavioural studies and neuroimaging.
  • Fisher, S. E. (2006). Tangled webs: Tracing the connections between genes and cognition. Cognition, 101, 270-297. doi:10.1016/j.cognition.2006.04.004.

    Abstract

    The rise of molecular genetics is having a pervasive influence in a wide variety of fields, including research into neurodevelopmental disorders like dyslexia, speech and language impairments, and autism. There are many studies underway which are attempting to determine the roles of genetic factors in the aetiology of these disorders. Beyond the obvious implications for diagnosis, treatment and understanding, success in these efforts promises to shed light on the links between genes and aspects of cognition and behaviour. However, the deceptive simplicity of finding correlations between genetic and phenotypic variation has led to a common misconception that there exist straightforward linear relationships between specific genes and particular behavioural and/or cognitive outputs. The problem is exacerbated by the adoption of an abstract view of the nature of the gene, without consideration of molecular, developmental or ontogenetic frameworks. To illustrate the limitations of this perspective, I select two cases from recent research into the genetic underpinnings of neurodevelopmental disorders. First, I discuss the proposal that dyslexia can be dissected into distinct components specified by different genes. Second, I review the story of the FOXP2 gene and its role in human speech and language. In both cases, adoption of an abstract concept of the gene can lead to erroneous conclusions, which are incompatible with current knowledge of molecular and developmental systems. Genes do not specify behaviours or cognitive processes; they make regulatory factors, signalling molecules, receptors, enzymes, and so on, that interact in highly complex networks, modulated by environmental influences, in order to build and maintain the brain. I propose that it is necessary for us to fully embrace the complexity of biological systems, if we are ever to untangle the webs that link genes to cognition.
  • Fisher, S. E., & Marcus, G. (2006). The eloquent ape: Genes, brains and the evolution of language. Nature Reviews Genetics, 7, 9-20. doi:10.1038/nrg1747.

    Abstract

    The human capacity to acquire complex language seems to be without parallel in the natural world. The origins of this remarkable trait have long resisted adequate explanation, but advances in fields that range from molecular genetics to cognitive neuroscience offer new promise. Here we synthesize recent developments in linguistics, psychology and neuroimaging with progress in comparative genomics, gene-expression profiling and studies of developmental disorders. We argue that language should be viewed not as a wholesale innovation, but as a complex reconfiguration of ancestral systems that have been adapted in evolutionarily novel ways.
  • Forkstam, C., Hagoort, P., Fernandez, G., Ingvar, M., & Petersson, K. M. (2006). Neural correlates of artificial syntactic structure classification. NeuroImage, 32(2), 956-967. doi:10.1016/j.neuroimage.2006.03.057.

    Abstract

    The human brain supports acquisition mechanisms that extract structural regularities implicitly from experience without the induction of an explicit model. It has been argued that the capacity to generalize to new input is based on the acquisition of abstract representations, which reflect underlying structural regularities in the input ensemble. In this study, we explored the outcome of this acquisition mechanism, and to this end, we investigated the neural correlates of artificial syntactic classification using event-related functional magnetic resonance imaging. The participants engaged once a day during an 8-day period in a short-term memory acquisition task in which consonant-strings generated from an artificial grammar were presented in a sequential fashion without performance feedback. They performed reliably above chance on the grammaticality classification tasks on days 1 and 8 which correlated with a corticostriatal processing network, including frontal, cingulate, inferior parietal, and middle occipital/occipitotemporal regions as well as the caudate nucleus. Part of the left inferior frontal region (BA 45) was specifically related to syntactic violations and showed no sensitivity to local substring familiarity. In addition, the head of the caudate nucleus correlated positively with syntactic correctness on day 8 but not day 1, suggesting that this region contributes to an increase in cognitive processing fluency.
  • Francks, C., Fisher, S. E., Marlow, A. J., Richardson, A. J., Stein, J. F., & Monaco, A. (2000). A sibling-pair based approach for mapping genetic loci that influence quantitative measures of reading disability. Prostaglandins, Leukotrienes and Essential Fatty Acids, 63(1-2), 27-31. doi:10.1054/plef.2000.0187.

    Abstract

    Family and twin studies consistently demonstrate a significant role for genetic factors in the aetiology of the reading disorder dyslexia. However, dyslexia is complex at both the genetic and phenotypic levels, and currently the nature of the core deficit or deficits remains uncertain. Traditional approaches for mapping disease genes, originally developed for single-gene disorders, have limited success when there is not a simple relationship between genotype and phenotype. Recent advances in high-throughput genotyping technology and quantitative statistical methods have made a new approach to identifying genes involved in complex disorders possible. The method involves assessing the genetic similarity of many sibling pairs along the lengths of all their chromosomes and attempting to correlate this similarity with that of their phenotypic scores. We are adopting this approach in an ongoing genome-wide search for genes involved in dyslexia susceptibility, and have already successfully applied the method by replicating results from previous studies suggesting that a quantitative trait locus at 6p21.3 influences reading disability.
  • Gaby, A. R. (2006). The Thaayorre 'true man': Lexicon of the human body in an Australian language. Language Sciences, 28(2-3), 201-220. doi:10.1016/j.langsci.2005.11.006.

    Abstract

    Segmentation (and, indeed, definition) of the human body in Kuuk Thaayorre (a Paman language of Cape York Peninsula, Australia) is in some respects typologically unusual, while at other times it conforms to cross-linguistic patterns. The process of deriving complex body part terms from monolexemic items is revealing of metaphorical associations between parts of the body. Associations between parts of the body and entities and phenomena in the broader environment are evidenced by the ubiquity of body part terms (in their extended uses) throughout Thaayorre speech. Understanding the categorisation of the body is therefore prerequisite to understanding the Thaayorre language and worldview.
  • Ganushchak, L. Y., & Schiller, N. (2006). Effects of time pressure on verbal self-monitoring: An ERP study. Brain Research, 1125, 104-115. doi:10.1016/j.brainres.2006.09.096.

    Abstract

    The Error-Related Negativity (ERN) is a component of the event-related brain potential (ERP) that is associated with action monitoring and error detection. The present study addressed the question whether or not an ERN occurs after verbal error detection, e.g., during phoneme monitoring. We obtained an ERN following verbal errors which showed a typical decrease in amplitude under severe time pressure. This result demonstrates that the functioning of the verbal self-monitoring system is comparable to other performance monitoring, such as action monitoring. Furthermore, we found that participants made more errors in phoneme monitoring under time pressure than in a control condition. This may suggest that time pressure decreases the amount of resources available to a capacity-limited self-monitor, thereby leading to more errors.
  • Gray, R., & Jordan, F. (2000). Language trees support the express-train sequence of Austronesian expansion. Nature, 405, 1052-1055. doi:10.1038/35016575.

    Abstract

    Languages, like molecules, document evolutionary history. Darwin (1) observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions (2-4), few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses - the "express-train" (5) and the "entangled-bank" (6,7) models - for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
  • Griffin, Z. M., & Bock, K. (2000). What the eyes say about speaking. Psychological Science, 11(4), 274-279. doi:10.1111/1467-9280.00255.

    Abstract

    To study the time course of sentence formulation, we monitored the eye movements of speakers as they described simple events. The similarity between speakers' initial eye movements and those of observers performing a nonverbal event-comprehension task suggested that response-relevant information was rapidly extracted from scenes, allowing speakers to select grammatical subjects based on comprehended events rather than salience. When speaking extemporaneously, speakers began fixating pictured elements less than a second before naming them within their descriptions, a finding consistent with incremental lexical encoding. Eye movements anticipated the order of mention despite changes in picture orientation, in who-did-what-to-whom, and in sentence structure. The results support Wundt's theory of sentence production.

  • Gullberg, M. (2006). Some reasons for studying gesture and second language acquisition (Hommage à Adam Kendon). International Review of Applied Linguistics, 44(2), 103-124. doi:10.1515/IRAL.2006.004.

    Abstract

    This paper outlines some reasons why gestures are relevant to the study of SLA. First, given cross-cultural and cross-linguistic gestural repertoires, gestures can be treated as part of what learners can acquire in a target language. Gestures can therefore be studied as a developing system in their own right both in L2 production and comprehension. Second, because of the close link between gestures, language, and speech, learners' gestures as deployed in L2 usage and interaction can offer valuable insights into the processes of acquisition, such as the handling of expressive difficulties, the influence of the first language, interlanguage phenomena, and possibly even into planning and processing difficulties. As a form of input to learners and to their interlocutors alike, finally, gestures also play a potential role for comprehension and learning.
  • Gullberg, M., & Ozyurek, A. (2006). Report on the Nijmegen Lectures 2004: Susan Goldin-Meadow 'The Many Faces of Gesture'. Gesture, 6(1), 151-164.
  • Gullberg, M., & Indefrey, P. (Eds.). (2006). The cognitive neuroscience of second language acquisition [Special Issue]. Language Learning, 56(suppl. 1).
  • Gullberg, M., & Holmqvist, K. (2006). What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video. Pragmatics & Cognition, 14(1), 53-82.

    Abstract

    This study investigates whether addressees visually attend to speakers’ gestures in interaction and whether attention is modulated by changes in social setting and display size. We compare a live face-to-face setting to two video conditions. In all conditions, the face dominates as a fixation target and only a minority of gestures draw fixations. The social and size parameters affect gaze mainly when combined, and in the opposite direction from that predicted, with fewer gestures fixated on video than live. Gestural holds and speakers’ gaze at their own gestures reliably attract addressees’ fixations in all conditions. The attraction force of holds is unaffected by changes in social and size parameters, suggesting a bottom-up response, whereas speaker-fixated gestures draw significantly less attention in both video conditions, suggesting a social effect for overt gaze-following and visual joint attention. The study provides and validates a video-based paradigm enabling further experimental but ecologically valid explorations of cross-modal information processing.
  • Gullberg, M. (Ed.). (2006). Gestures and second language acquisition [Special Issue]. International Review of Applied Linguistics, 44(2).
  • Gullberg, M. (2006). Handling discourse: Gestures, reference tracking, and communication strategies in early L2. Language Learning, 56(1), 155-196. doi:10.1111/j.0023-8333.2006.00344.x.

    Abstract

    The production of cohesive discourse, especially maintained reference, poses problems for early second language (L2) speakers. This paper considers a communicative account of overexplicit L2 discourse by focusing on the interdependence between spoken and gestural cohesion, the latter being expressed by anchoring of referents in gesture space. Specifically, this study investigates whether overexplicit maintained reference in speech (lexical noun phrases [NPs]) and gesture (anaphoric gestures) constitutes an interactional communication strategy. We examine L2 speech and gestures of 16 Dutch learners of French retelling stories to addressees under two visibility conditions. The results indicate that the overexplicit properties of L2 speech are not motivated by interactional strategic concerns. The results for anaphoric gestures are more complex. Although their presence is not interactionally
  • Gumperz, J. J., & Levinson, S. C. (1991). Rethinking linguistic relativity. Current Anthropology, 32(5), 613-623. Retrieved from http://www.jstor.org/stable/2743696.
  • Hagoort, P. (2006). What we cannot learn from neuroanatomy about language learning and language processing [Commentary on Uylings]. Language Learning, 56(suppl. 1), 91-97. doi:10.1111/j.1467-9922.2006.00356.x.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech compared to reading: the P600/SPS to syntactic violations in spoken sentences and rapid serial visual presentation. Neuropsychologia, 38, 1531-1549.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments sentences were presented that contained three different types of grammatical violations. In one experiment sentences were presented word by word at a rate of four words per second. The grammatical violations elicited a Syntactic Positive Shift (P600/SPS), 500 ms after the onset of the word that rendered the sentence ungrammatical. The P600/SPS consisted of two phases, an early phase with a relatively equal anterior-posterior distribution and a later phase with a strong posterior distribution. We interpret the first phase as an indication of structural integration complexity, and the second phase as an indication of failing parsing operations and/or an attempt at reanalysis. In the second experiment the same syntactic violations were presented in sentences spoken at a normal rate and with normal intonation. These violations elicited a P600/SPS with the same onset as was observed for the reading of these sentences. In addition, two of the three violations showed a preceding frontal negativity, most clearly over the left hemisphere.
  • Hagoort, P., & Brown, C. M. (2000). ERP effects of listening to speech: semantic ERP effects. Neuropsychologia, 38, 1518-1530.

    Abstract

    In this study, event-related brain potential effects of speech processing are obtained and compared to similar effects in sentence reading. In two experiments spoken sentences were presented with semantic violations in sentence-final or mid-sentence positions. For these violations N400 effects were obtained that were very similar to N400 effects obtained in reading. However, the N400 effects in speech were preceded by an earlier negativity (N250). This negativity is not commonly observed with written input. The early effect is explained as a manifestation of a mismatch between the word forms expected on the basis of the context, and the actual cohort of activated word candidates that is generated on the basis of the speech signal.
  • Hagoort, P. (2006). Event-related potentials from the user's perspective [Review of the book An introduction to the event-related potential technique by Steven J. Luck]. Nature Neuroscience, 9(4), 463-463. doi:10.1038/nn0406-463.
  • Hagoort, P. (2000). What we shall know only tomorrow. Brain and Language, 71, 89-92. doi:10.1006/brln.1999.2221.
  • Hagoort, P. (1992). Vertraagde lexicale integratie bij afatisch taalverstaan. Stem, Spraak- en Taalpathologie, 1, 5-23.
  • Hald, L. A., Bastiaansen, M. C. M., & Hagoort, P. (2006). EEG theta and gamma responses to semantic violations in online sentence processing. Brain and Language, 96(1), 90-105. doi:10.1016/j.bandl.2005.06.007.

    Abstract

    We explore the nature of the oscillatory dynamics in the EEG of subjects reading sentences that contain a semantic violation. More specifically, we examine whether increases in theta (≈3–7 Hz) and gamma (around 40 Hz) band power occur in response to sentences that were either semantically correct or contained a semantically incongruent word (semantic violation). ERP results indicated a classical N400 effect. A wavelet-based time-frequency analysis revealed a theta band power increase during an interval of 300–800 ms after critical word onset, at temporal electrodes bilaterally for both sentence conditions, and over midfrontal areas for the semantic violations only. In the gamma frequency band, a predominantly frontal power increase was observed during the processing of correct sentences. This effect was absent following semantic violations. These results provide a characterization of the oscillatory brain dynamics, and notably of both theta and gamma oscillations, that occur during language comprehension.
  • Haun, D. B. M., Call, J., Janzen, G., & Levinson, S. C. (2006). Evolutionary psychology of spatial representations in the hominidae. Current Biology, 16(17), 1736-1740. doi:10.1016/j.cub.2006.07.049.

    Abstract

    Comparatively little is known about the inherited primate background underlying human cognition, the human cognitive “wild-type.” Yet it is possible to trace the evolution of human cognitive abilities and tendencies by contrasting the skills of our nearest cousins, not just chimpanzees, but all the extant great apes, thus showing what we are likely to have inherited from the common ancestor [1]. By looking at human infants early in cognitive development, we can also obtain insights into native cognitive biases in our species [2]. Here, we focus on spatial memory, a central cognitive domain. We show, first, that all nonhuman great apes and 1-year-old human infants exhibit a preference for place over feature strategies for spatial memory. This suggests the common ancestor of all great apes had the same preference. We then examine 3-year-old human children and find that this preference reverses. Thus, the continuity between our species and the other great apes is masked early in human ontogeny. These findings, based on both phylogenetic and ontogenetic contrasts, open up the prospect of a systematic evolutionary psychology resting upon the cladistics of cognitive preferences.
  • Haun, D. B. M., Rapold, C. J., Call, J., Janzen, G., & Levinson, S. C. (2006). Cognitive cladistics and cultural override in Hominid spatial cognition. Proceedings of the National Academy of Sciences of the United States of America, 103(46), 17568-17573. doi:10.1073/pnas.0607999103.

    Abstract

    Current approaches to human cognition often take a strong nativist stance based on Western adult performance, backed up where possible by neonate and infant research and almost never by comparative research across the Hominidae. Recent research suggests considerable cross-cultural differences in cognitive strategies, including relational thinking, a domain where infant research is impossible because of lack of cognitive maturation. Here, we apply the same paradigm across children and adults of different cultures and across all nonhuman great ape genera. We find that both child and adult spatial cognition systematically varies with language and culture but that, nevertheless, there is a clear inherited bias for one spatial strategy in the great apes. It is reasonable to conclude, we argue, that language and culture mask the native tendencies in our species. This cladistic approach suggests that the correct perspective on human cognition is neither nativist uniformitarian nor "blank slate" but recognizes the powerful impact that language and culture can have on our shared primate cognitive biases.
  • Heinemann, T. (2006). Will you or can't you? Displaying entitlement in interrogative requests. Journal of Pragmatics, 38(7), 1081-1104. doi:10.1016/j.pragma.2005.09.013.

    Abstract

    Interrogative structures such as ‘Could you pass the salt?’ and ‘Couldn’t you pass the salt?’ can be used for making requests. A study of such pairs within a conversation analytic framework suggests that these are not used interchangeably, and that they have different impacts on the interaction. Focusing on Danish interactions between elderly care recipients and their home help assistants, I demonstrate how the care recipient displays different degrees of stance towards whether she is entitled to make a request or not, depending on whether she formats her request as a positive or a negative interrogative. With a positive interrogative request, the care recipient orients to her request as one she is not entitled to make. This is underscored by other features, such as the use of mitigating devices and the choice of verb. When accounting for this type of request, the care recipient ties the request to the specific situation she is in, at the moment in which the request is produced. In turn, the home help assistant orients to the lack of entitlement by resisting the request. With a negative interrogative request, the care recipient, in contrast, orients to her request as one she is entitled to make. This is strengthened by the choice of verb and the lack of mitigating devices. When such requests are accounted for, the requested task is treated as something that should be routinely performed, and hence as something the home help assistant has neglected to do. In turn, the home help assistant orients to the display of entitlement by treating the request as unproblematic, and by complying with it immediately.
  • Hoeks, J. C. J., Hendriks, P., Vonk, W., Brown, C. M., & Hagoort, P. (2006). Processing the noun phrase versus sentence coordination ambiguity: Thematic information does not completely eliminate processing difficulty. Quarterly Journal of Experimental Psychology, 59, 1581-1599. doi:10.1080/17470210500268982.

    Abstract

    When faced with the noun phrase (NP) versus sentence (S) coordination ambiguity as in, for example, The thief shot the jeweller and the cop …, readers prefer the reading with NP-coordination (e.g., "The thief shot the jeweller and the cop yesterday") over one with two conjoined sentences (e.g., "The thief shot the jeweller and the cop panicked"). A corpus study is presented showing that NP-coordinations are produced far more often than S-coordinations, which in frequency-based accounts of parsing might be taken to explain the NP-coordination preference. In addition, we describe an eye-tracking experiment investigating S-coordinated sentences such as Jasper sanded the board and the carpenter laughed, where the poor thematic fit between carpenter and sanded argues against NP-coordination. Our results indicate that information regarding poor thematic fit was used rapidly, but not without leaving some residual processing difficulty. This is compatible with claims that thematic information can reduce but not completely eliminate garden-path effects.
  • Houston, D. M., Jusczyk, P. W., Kuijpers, C., Coolen, R., & Cutler, A. (2000). Cross-language word segmentation by 9-month-olds. Psychonomic Bulletin & Review, 7, 504-509.

    Abstract

    Dutch-learning and English-learning 9-month-olds were tested, using the Headturn Preference Procedure, for their ability to segment Dutch words with strong/weak stress patterns from fluent Dutch speech. This prosodic pattern is highly typical for words of both languages. The infants were familiarized with pairs of words and then tested on four passages, two that included the familiarized words and two that did not. Both the Dutch- and the English-learning infants gave evidence of segmenting the targets from the passages, to an equivalent degree. Thus, English-learning infants are able to extract words from fluent speech in a language that is phonetically different from English. We discuss the possibility that this cross-language segmentation ability is aided by the similarity of the typical rhythmic structure of Dutch and English words.
  • Huettig, F., Quinlan, P. T., McDonald, S. A., & Altmann, G. T. M. (2006). Models of high-dimensional semantic space predict language-mediated eye movements in the visual world. Acta Psychologica, 121(1), 65-80. doi:10.1016/j.actpsy.2005.06.002.

    Abstract

    In the visual world paradigm, participants are more likely to fixate a visual referent that has some semantic relationship with a heard word, than they are to fixate an unrelated referent [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language. A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6, 813–839]. Here, this method is used to examine the psychological validity of models of high-dimensional semantic space. The data strongly suggest that these corpus-based measures of word semantics predict fixation behavior in the visual world and provide further evidence that language-mediated eye movements to objects in the concurrent visual environment are driven by semantic similarity rather than all-or-none categorical knowledge. The data suggest that the visual world paradigm can, together with other methodologies, converge on the evidence that may help adjudicate between different theoretical accounts of the psychological semantics.
  • Indefrey, P. (2006). A meta-analysis of hemodynamic studies on first and second language processing: Which suggested differences can we trust and what do they mean? Language Learning, 56(suppl. 1), 279-304. doi:10.1111/j.1467-9922.2006.00365.x.

    Abstract

    This article presents the results of a meta-analysis of 30 hemodynamic experiments comparing first language (L1) and second language (L2) processing in a range of tasks. The results suggest that reliably stronger activation during L2 processing is found (a) only for task-specific subgroups of L2 speakers and (b) within some, but not all regions that are also typically activated in native language processing. A tentative interpretation based on the functional roles of frontal and temporal regions is suggested.
  • Indefrey, P., & Gullberg, M. (2006). Introduction. Language Learning, 56(suppl. 1), 1-8. doi:10.1111/j.1467-9922.2006.00352.x.

    Abstract

    This volume is a harvest of articles from the first conference in a series on the cognitive neuroscience of language. The first conference focused on the cognitive neuroscience of second language acquisition (henceforth SLA). It brought together experts from fields as diverse as second language acquisition, bilingualism, cognitive neuroscience, and neuroanatomy. The articles and discussion articles presented here illustrate state-of-the-art findings and represent a wide range of theoretical approaches to classic as well as newer SLA issues. The theoretical themes cover age effects in SLA related to the so-called Critical Period Hypothesis and issues of ultimate attainment and focus both on age effects pertaining to childhood and to aging. Other familiar SLA topics are the effects of proficiency and learning as well as issues concerning the difference between the end product and the process that yields that product, here discussed in terms of convergence and degeneracy. A topic more related to actual usage of a second language once acquired concerns how multilingual speakers control and regulate their two languages.
  • Indefrey, P. (2006). It is time to work toward explicit processing models for native and second language speakers. Journal of Applied Psycholinguistics, 27(1), 66-69. doi:10.1017/S0142716406060103.
  • Janse, E. (2006). Auditieve woordherkenning bij afasie: Waarneming van mismatch items. Afasiologie, 28(4), 64-67.
