Publications

  • Levinson, S. C., & Brown, P. (2003). Emmanuel Kant chez les Tenejapans: L'Anthropologie comme philosophie empirique [Immanuel Kant among the Tenejapans: Anthropology as empirical philosophy; translated by Claude Vandeloise for 'Langues et Cognition']. Langues et Cognition, 239-278.

    Abstract

    This is a translation of Levinson and Brown (1994).
  • Levinson, S. C., & Meira, S. (2003). 'Natural concepts' in the spatial topological domain - adpositional meanings in crosslinguistic perspective: An exercise in semantic typology. Language, 79(3), 485-516.

    Abstract

    Most approaches to spatial language have assumed that the simplest spatial notions are (after Piaget) topological and universal (containment, contiguity, proximity, support, represented as semantic primitives such as IN, ON, UNDER, etc.). These concepts would be coded directly in language, above all in small closed classes such as adpositions—thus providing a striking example of semantic categories as language-specific projections of universal conceptual notions. This idea, if correct, should have as a consequence that the semantic categories instantiated in spatial adpositions should be essentially uniform crosslinguistically. This article attempts to verify this possibility by comparing the semantics of spatial adpositions in nine unrelated languages, with the help of a standard elicitation procedure, thus producing a preliminary semantic typology of spatial adpositional systems. The differences between the languages turn out to be so significant as to be incompatible with stronger versions of the UNIVERSAL CONCEPTUAL CATEGORIES hypothesis. Rather, the language-specific spatial adposition meanings seem to emerge as compact subsets of an underlying semantic space, with certain areas being statistical ATTRACTORS or FOCI. Moreover, a comparison of systems with different degrees of complexity suggests the possibility of positing implicational hierarchies for spatial adpositions. But such hierarchies need to be treated as successive divisions of semantic space, as in recent treatments of basic color terms. This type of analysis appears to be a promising approach for future work in semantic typology.
  • Levinson, S. C. (2005). [Comment on: Cultural constraints on grammar and cognition in Pirahã by Daniel L. Everett]. Current Anthropology, 46, 637-638.
  • Levinson, S. C. (2005). Living with Manny's dangerous idea. Discourse Studies, 7(4-5), 431-453. doi:10.1177/1461445605054401.

    Abstract

    Daniel Dennett, in Darwin's Dangerous Idea, argues that natural selection is a universal acid that eats through other theories, because it can explain just about everything, even the structure of the mind. Emanuel (Manny) Schegloff (1987) in ‘Between Micro and Macro: Context and Other Connections’ opposes the importation of ‘macro’ (sociological/sociolinguistic) factors into the ‘micro’ (interaction analysis), suggesting that one might reverse the strategy instead. Like Darwin, he is coy about whether he just wants his own turf, but the idea opens up the possibility of interactional reductionism. I will argue against interactional reductionism on methodological grounds: Don't bite off more than you can chew! Instead I'll support the good old Durkheimian strategy of looking for intermediate variables between systems of different orders. I try and make the case with data from Rossel Island, Papua New Guinea.
  • Levinson, S. C. (2005). Languages: Europe puts its money where its mouth is [Letter to the editor]. Nature, 438, 914-914. doi:10.1038/438914c.
  • Liszkowski, U. (2005). Human twelve-month-olds point cooperatively to share interest with and helpfully provide information for a communicative partner. Gesture, 5(1-2), 135-154. doi:10.1075/gest.5.1.11lis.

    Abstract

    This paper investigates infant pointing at 12 months. Three recent experimental studies from our lab are reported and contrasted with existing accounts of infant communicative and social-cognitive abilities. The new results show that infant pointing at 12 months is already a communicative act which involves the intentional transmission of information to share interest with, or provide information for, other persons. It is argued that infant pointing is an inherently social and cooperative act which is used to share psychological relations between interlocutors and environment, repairs misunderstandings in proto-conversational turn-taking, and helps others by providing information. Infant pointing builds on an understanding of others as persons with attentional states and attitudes. Findings do not support lean accounts of early infant pointing which posit that it is initially non-communicative, does not serve the function of indicating, or is purely self-centered. It is suggested that the emergence of reference and the motivation to jointly engage with others should be investigated even before pointing has emerged.
  • Lundstrom, B. N., Petersson, K. M., Andersson, J., Johansson, M., Fransson, P., & Ingvar, M. (2003). Isolating the retrieval of imagined pictures during episodic memory: Activation of the left precuneus and left prefrontal cortex. Neuroimage, 20, 1934-1943. doi:10.1016/j.neuroimage.2003.07.017.

    Abstract

    The posterior medial parietal cortex and the left prefrontal cortex have both been implicated in the recollection of past episodes. In order to clarify their functional significance, we performed this functional magnetic resonance imaging study, which employed event-related source memory and item recognition retrieval of words paired with corresponding imagined or viewed pictures. Our results suggest that episodic source memory is related to a functional network including the posterior precuneus and the left lateral prefrontal cortex. This network is activated during explicit retrieval of imagined pictures and results from the retrieval of item-context associations. This suggests that previously imagined pictures provide a context with which encoded words can be more strongly associated.
  • Lundstrom, B. N., Ingvar, M., & Petersson, K. M. (2005). The role of precuneus and left inferior frontal cortex during source memory episodic retrieval. Neuroimage, 27, 824-834. doi:10.1016/j.neuroimage.2005.05.008.

    Abstract

    The posterior medial parietal cortex and left prefrontal cortex (PFC) have both been implicated in the recollection of past episodes. In a previous study, we found the posterior precuneus and left lateral inferior frontal cortex to be activated during episodic source memory retrieval. This study further examines the role of posterior precuneal and left prefrontal activation during episodic source memory retrieval using a similar source memory paradigm but with longer latency between encoding and retrieval. Our results suggest that both the precuneus and the left inferior PFC are important for regeneration of rich episodic contextual associations and that the precuneus activates in tandem with the left inferior PFC during correct source retrieval. Further, results suggest that the left ventro-lateral frontal region/frontal operculum is involved in searching for task-relevant information (BA 47) and subsequent monitoring or scrutiny (BA 44/45) while regions in the dorsal inferior frontal cortex are important for information selection (BA 45/46).
  • MacDermot, K. D., Bonora, E., Sykes, N., Coupe, A.-M., Lai, C. S. L., Vernes, S. C., Vargha-Khadem, F., McKenzie, F., Smith, R. L., Monaco, A. P., & Fisher, S. E. (2005). Identification of FOXP2 truncation as a novel cause of developmental speech and language deficits. American Journal of Human Genetics, 76(6), 1074-1080. doi:10.1086/430841.

    Abstract

    FOXP2, the first gene to have been implicated in a developmental communication disorder, offers a unique entry point into neuromolecular mechanisms influencing human speech and language acquisition. In multiple members of the well-studied KE family, a heterozygous missense mutation in FOXP2 causes problems in sequencing muscle movements required for articulating speech (developmental verbal dyspraxia), accompanied by wider deficits in linguistic and grammatical processing. Chromosomal rearrangements involving this locus have also been identified. Analyses of FOXP2 coding sequence in typical forms of specific language impairment (SLI), autism, and dyslexia have not uncovered any etiological variants. However, no previous study has performed mutation screening of children with a primary diagnosis of verbal dyspraxia, the most overt feature of the disorder in affected members of the KE family. Here, we report investigations of the entire coding region of FOXP2, including alternatively spliced exons, in 49 probands affected with verbal dyspraxia. We detected variants that alter FOXP2 protein sequence in three probands. One such variant is a heterozygous nonsense mutation that yields a dramatically truncated protein product and cosegregates with speech and language difficulties in the proband, his affected sibling, and their mother. Our discovery of the first nonsense mutation in FOXP2 now opens the door for detailed investigations of neurodevelopment in people carrying different etiological variants of the gene. This endeavor will be crucial for gaining insight into the role of FOXP2 in human cognition.
  • Mace, R., Jordan, F., & Holden, C. (2003). Testing evolutionary hypotheses about human biological adaptation using cross-cultural comparison. Comparative Biochemistry and Physiology A-Molecular & Integrative Physiology, 136(1), 85-94. doi:10.1016/S1095-6433(03)00019-9.

    Abstract

    Physiological data from a range of human populations living in different environments can provide valuable information for testing evolutionary hypotheses about human adaptation. By taking into account the effects of population history, phylogenetic comparative methods can help us determine whether variation results from selection due to particular environmental variables. These selective forces could even be due to cultural traits, which means that gene-culture co-evolution may be occurring. In this paper, we outline two examples of the use of these approaches to test adaptive hypotheses that explain global variation in two physiological traits: the first is lactose digestion capacity in adults, and the second is population sex-ratio at birth. We show that lower than average sex ratio at birth is associated with high fertility, and argue that global variation in sex ratio at birth has evolved as a response to the high physiological costs of producing boys in high fertility populations.
  • Magnuson, J. S., Tanenhaus, M. K., Aslin, R. N., & Dahan, D. (2003). The time course of spoken word learning and recognition: Studies with artificial lexicons. Journal of Experimental Psychology: General, 132(2), 202-227. doi:10.1037/0096-3445.132.2.202.

    Abstract

    The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.
  • Magyari, L. (2003). Mit ne gondoljunk az állatokról? [What not to think about animals?] [Review of the book Wild Minds: What animals really think by M. Hauser]. Magyar Pszichológiai Szemle (Hungarian Psychological Review), 58(3), 417-424. doi:10.1556/MPSzle.58.2003.3.5.
  • Majid, A. (2003). Towards behavioural genomics. The Psychologist, 16(6), 298-298.
  • Majid, A. (2003). Into the deep. The Psychologist, 16(6), 300-300.
  • Mangione-Smith, R., Stivers, T., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Online commentary during the physical examination: A communication tool for avoiding inappropriate antibiotic prescribing? Social Science and Medicine, 56(2), 313-320.
  • Marcus, G. F., & Fisher, S. E. (2003). FOXP2 in focus: What can genes tell us about speech and language? Trends in Cognitive Sciences, 7, 257-262. doi:10.1016/S1364-6613(03)00104-9.

    Abstract

    The human capacity for acquiring speech and language must derive, at least in part, from the genome. In 2001, a study described the first case of a gene, FOXP2, which is thought to be implicated in our ability to acquire spoken language. In the present article, we discuss how this gene was discovered, what it might do, how it relates to other genes, and what it could tell us about the nature of speech and language development. We explain how FOXP2 could, without being specific to the brain or to our own species, still provide an invaluable entry-point into understanding the genetic cascades and neural pathways that contribute to our capacity for speech and language.
  • Marinis, T., Roberts, L., Felser, C., & Clahsen, H. (2005). Gaps in second language sentence processing. Studies in Second Language Acquisition, 27(1), 53-78. doi:10.1017/S0272263105050035.

    Abstract

    Four groups of second language (L2) learners of English from different language backgrounds (Chinese, Japanese, German, and Greek) and a group of native speaker controls participated in an online reading time experiment with sentences involving long-distance wh-dependencies. Although the native speakers showed evidence of making use of intermediate syntactic gaps during processing, the L2 learners appeared to associate the fronted wh-phrase directly with its lexical subcategorizer, regardless of whether the subjacency constraint was operative in their native language. This finding is argued to support the hypothesis that nonnative comprehenders underuse syntactic information in L2 processing.
  • Marlow, A. J., Fisher, S. E., Francks, C., MacPhie, I. L., Cherny, S. S., Richardson, A. J., Talcott, J. B., Stein, J. F., Monaco, A. P., & Cardon, L. R. (2003). Use of multivariate linkage analysis for dissection of a complex cognitive trait. American Journal of Human Genetics, 72(3), 561-570. doi:10.1086/368201.

    Abstract

    Replication of linkage results for complex traits has been exceedingly difficult, owing in part to the inability to measure the precise underlying phenotype, small sample sizes, genetic heterogeneity, and statistical methods employed in analysis. Often, in any particular study, multiple correlated traits have been collected, yet these have been analyzed independently or, at most, in bivariate analyses. Theoretical arguments suggest that full multivariate analysis of all available traits should offer more power to detect linkage; however, this has not yet been evaluated on a genomewide scale. Here, we conduct multivariate genomewide analyses of quantitative-trait loci that influence reading- and language-related measures in families affected with developmental dyslexia. The results of these analyses are substantially clearer than those of previous univariate analyses of the same data set, helping to resolve a number of key issues. These outcomes highlight the relevance of multivariate analysis for complex disorders for dissection of linkage results in correlated traits. The approach employed here may aid positional cloning of susceptibility genes in a wide spectrum of complex traits.
  • Matsuo, A. (2005). [Review of the book Children's discourse: Person, space and time across languages by Maya Hickmann]. Linguistics, 43(3), 653-657. doi:10.1515/ling.2005.43.3.653.
  • McQueen, J. M. (2003). The ghost of Christmas future: Didn't Scrooge learn to be good? Commentary on Magnuson, McMurray, Tanenhaus and Aslin (2003). Cognitive Science, 27(5), 795-799. doi:10.1207/s15516709cog2705_6.

    Abstract

    Magnuson, McMurray, Tanenhaus, and Aslin [Cogn. Sci. 27 (2003) 285] suggest that they have evidence of lexical feedback in speech perception, and that this evidence thus challenges the purely feedforward Merge model [Behav. Brain Sci. 23 (2000) 299]. This evidence is open to an alternative explanation, however, one which preserves the assumption in Merge that there is no lexical-prelexical feedback during on-line speech processing. This explanation invokes the distinction between perceptual processing that occurs in the short term, as an utterance is heard, and processing that occurs over the longer term, for perceptual learning.
  • McQueen, J. M., & Sereno, J. (2005). Cleaving automatic processes from strategic biases in phonological priming. Memory & Cognition, 33(7), 1185-1209.

    Abstract

    In a phonological priming experiment using spoken Dutch words, Dutch listeners were taught varying expectancies and relatedness relations about the phonological form of target words, given particular primes. They learned to expect that, after a particular prime, if the target was a word, it would be from a specific phonological category. The expectancy either involved phonological overlap (e.g., honk-vonk, “base-spark”; expected related) or did not (e.g., nest-galm, “nest-boom”; expected unrelated, where the learned expectation after hearing nest was a word rhyming in -alm). Targets were occasionally inconsistent with expectations. In these inconsistent expectancy trials, targets were either unrelated (e.g., honk-mest, “base-manure”; unexpected unrelated), where the listener was expecting a related target, or related (e.g., nest-pest, “nest-plague”; unexpected related), where the listener was expecting an unrelated target. Participant expectations and phonological relatedness were thus manipulated factorially for three types of phonological overlap (rhyme, one onset phoneme, and three onset phonemes) at three interstimulus intervals (ISIs; 50, 500, and 2,000 msec). Lexical decisions to targets revealed evidence of expectancy-based strategies for all three types of overlap (e.g., faster responses to expected than to unexpected targets, irrespective of phonological relatedness) and evidence of automatic phonological processes, but only for the rhyme and three-phoneme onset overlap conditions and, most strongly, at the shortest ISI (e.g., faster responses to related than to unrelated targets, irrespective of expectations). Although phonological priming thus has both automatic and strategic components, it is possible to cleave them apart.
  • McQueen, J. M., Cutler, A., & Norris, D. (2003). Flow of information in the spoken word recognition system. Speech Communication, 41(1), 257-270. doi:10.1016/S0167-6393(02)00108-5.

    Abstract

    Spoken word recognition consists of two major component processes. First, at the prelexical stage, an abstract description of the utterance is generated from the information in the speech signal. Second, at the lexical stage, this description is used to activate all the words stored in the mental lexicon which match the input. These multiple candidate words then compete with each other. We review evidence which suggests that positive (match) and negative (mismatch) information of both a segmental and a suprasegmental nature is used to constrain this activation and competition process. We then ask whether, in addition to the necessary influence of the prelexical stage on the lexical stage, there is also feedback from the lexicon to the prelexical level. In two phonetic categorization experiments, Dutch listeners were asked to label both syllable-initial and syllable-final ambiguous fricatives (e.g., sounds ranging from [f] to [s]) in the word–nonword series maf–mas, and the nonword–word series jaf–jas. They tended to label the sounds in a lexically consistent manner (i.e., consistent with the word endpoints of the series). These lexical effects became smaller in listeners’ slower responses, even when the listeners were put under pressure to respond as fast as possible. Our results challenge models of spoken word recognition in which feedback modulates the prelexical analysis of the component sounds of a word whenever that word is heard.
  • Meeuwissen, M., Roelofs, A., & Levelt, W. J. M. (2003). Planning levels in naming and reading complex numerals. Memory & Cognition, 31(8), 1238-1249.

    Abstract

    On the basis of evidence from studies of the naming and reading of numerals, Ferrand (1999) argued that the naming of objects is slower than reading their names, due to a greater response uncertainty in naming than in reading, rather than to an obligatory conceptual preparation for naming, but not for reading. We manipulated the need for conceptual preparation, while keeping response uncertainty constant in the naming and reading of complex numerals. In Experiment 1, participants named three-digit Arabic numerals either as house numbers or clock times. House number naming latencies were determined mostly by morphophonological factors, such as morpheme frequency and the number of phonemes, whereas clock time naming latencies revealed an additional conceptual involvement. In Experiment 2, the numerals were presented in alphabetic format and had to be read aloud. Reading latencies were determined mostly by morphophonological factors in both modes. These results suggest that conceptual preparation, rather than response uncertainty, is responsible for the difference between naming and reading latencies.
  • Meira, S., & Terrill, A. (2005). Contrasting contrastive demonstratives in Tiriyó and Lavukaleve. Linguistics, 43(6), 1131-1152. doi:10.1515/ling.2005.43.6.1131.

    Abstract

    This article explores the contrastive function of demonstratives in two languages, Tiriyó (Cariban, northern Brazil) and Lavukaleve (Papuan isolate, Solomon Islands). The contrastive function has to a large extent been neglected in the theoretical literature on demonstrative functions, although preliminary investigations suggest that there are significant differences in demonstrative use in contrastive versus noncontrastive contexts. Tiriyó and Lavukaleve have what seem at first glance to be rather similar three-term demonstrative systems for exophoric deixis, with a proximal term, a distal term, and a middle term. However, under contrastive usage, significant differences between the two systems become apparent. In presenting an analysis of the contrastive use of demonstratives in these two languages, this article aims to show that the contrastive function is an important parameter of variation in demonstrative systems.
  • Meyer, A. S., Roelofs, A., & Levelt, W. J. M. (2003). Word length effects in object naming: The role of a response criterion. Journal of Memory and Language, 48(1), 131-147. doi:10.1016/S0749-596X(02)00509-0.

    Abstract

    According to Levelt, Roelofs, and Meyer (1999) speakers generate the phonological and phonetic representations of successive syllables of a word in sequence and only begin to speak after having fully planned at least one complete phonological word. Therefore, speech onset latencies should be longer for long than for short words. We tested this prediction in four experiments in which Dutch participants named or categorized objects with monosyllabic or disyllabic names. Experiment 1 yielded a length effect on production latencies when objects with long and short names were tested in separate blocks, but not when they were mixed. Experiment 2 showed that the length effect was not due to a difference in the ease of object recognition. Experiment 3 replicated the results of Experiment 1 using a within-participants design. In Experiment 4, the long and short target words appeared in a phrasal context. In addition to the speech onset latencies, we obtained the viewing times for the target objects, which have been shown to depend on the time necessary to plan the form of the target names. We found word length effects for both dependent variables, but only when objects with short and long names were presented in separate blocks. We argue that in pure and mixed blocks speakers used different response deadlines, which they tried to meet by either generating the motor programs for one syllable or for all syllables of the word before speech onset. Computer simulations using WEAVER++ support this view.
  • Morgan, J., & Meyer, A. S. (2005). Processing of extrafoveal objects during multiple-object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 428-442. doi:10.1037/0278-7393.31.3.428.

    Abstract

    In 3 experiments, the authors investigated the extent to which objects that are about to be named are processed prior to fixation. Participants named pairs or triplets of objects. One of the objects, initially seen extrafoveally (the interloper), was replaced by a different object (the target) during the saccade toward it. The interloper-target pairs were identical or unrelated objects or visually and conceptually unrelated objects with homophonous names (e.g., animal-baseball bat). The mean latencies and gaze durations for the targets were shorter in the identity and homophone conditions than in the unrelated condition. This was true when participants viewed a fixation mark until the interloper appeared and when they fixated on another object and prepared to name it while viewing the interloper. These results imply that objects that are about to be named may undergo far-reaching processing, including access to their names, prior to fixation.
  • Moscoso del Prado Martín, F., Deutsch, A., Frost, R., Schreuder, R., De Jong, N. H., & Baayen, R. H. (2005). Changing places: A cross-language perspective on frequency and family size in Dutch and Hebrew. Journal of Memory and Language, 53(4), 496-512. doi:10.1016/j.jml.2005.07.003.

    Abstract

    This study uses the morphological family size effect as a tool for exploring the degree of isomorphism in the networks of morphologically related words in the Hebrew and Dutch mental lexicon. Hebrew and Dutch are genetically unrelated, and they structure their morphologically complex words in very different ways. Two visual lexical decision experiments document substantial cross-language predictivity for the family size measure after partialing out the effect of word frequency and word length. Our data show that the morphological family size effect is not restricted to Indo-European languages but extends to languages with non-concatenative morphology. In Hebrew, a new inhibitory component of the family size effect emerged that arises when a Hebrew root participates in different semantic fields.
  • Narasimhan, B. (2005). Splitting the notion of 'agent': Case-marking in early child Hindi. Journal of Child Language, 32(4), 787-803. doi:10.1017/S0305000905007117.

    Abstract

    Two construals of agency are evaluated as possible innate biases guiding case-marking in children. A BROAD construal treats agentive arguments of multi-participant and single-participant events as being similar. A NARROWER construal is restricted to agents of multi-participant events. In Hindi, ergative case-marking is associated with agentive participants of multi-participant, perfective actions. Children relying on a broad or narrow construal of agent are predicted to overextend ergative case-marking to agentive participants of transitive imperfective actions and/or intransitive actions. Longitudinal data from three children acquiring Hindi (1;7 to 3;9) reveal no overextension errors, suggesting early sensitivity to distributional patterns in the input.
  • Narasimhan, B., Budwig, N., & Murty, L. (2005). Argument realization in Hindi caregiver-child discourse. Journal of Pragmatics, 37(4), 461-495. doi:10.1016/j.pragma.2004.01.005.

    Abstract

    An influential claim in the child language literature posits that children use structural cues in the input language to acquire verb meaning (Gleitman, 1990). One such cue is the number of arguments co-occurring with the verb, which provides an indication as to the event type associated with the verb (Fisher, 1995). In some languages, however (e.g. Hindi), verb arguments are ellipted relatively freely, subject to certain discourse-pragmatic constraints. In this paper, we address three questions: Is the pervasive argument ellipsis characteristic of adult Hindi also found in Hindi-speaking caregivers’ input? If so, do children consequently make errors in verb transitivity? How early do children learning a split-ergative language, such as Hindi, exhibit sensitivity to discourse-pragmatic influences on argument realization? We show that there is massive argument ellipsis in caregivers’ input to 3–4 year-olds. However, children acquiring Hindi do not make transitivity errors in their own speech. Nor do they elide arguments randomly. Rather, even at this early age, children appear to be sensitive to discourse-pragmatics in their own spontaneous speech production. These findings in a split-ergative language parallel patterns of argument realization found in children acquiring both nominative-accusative languages (e.g. Korean) and ergative-absolutive languages (e.g. Tzeltal, Inuktitut).
  • Narasimhan, B. (2003). Motion events and the lexicon: The case of Hindi. Lingua, 113(2), 123-160. doi:10.1016/S0024-3841(02)00068-2.

    Abstract

    English, and a variety of Germanic languages, allow constructions such as the bottle floated into the cave, whereas languages such as Spanish, French, and Hindi are highly restricted in allowing manner of motion verbs to occur with path phrases. This typological observation has been accounted for in terms of the conflation of complex meaning in basic or derived verbs [Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Levin, B., Rappaport-Hovav, M., 1995. Unaccusativity: At the Syntax–Lexical Semantics Interface. MIT Press, Cambridge, MA], or the presence of path “satellites” with special grammatical properties in the lexicon of languages such as English, which allow such phrasal combinations [cf. Talmy, L., 1985. Lexicalization patterns: semantic structure in lexical forms. In: Shopen, T. (Ed.), Language Typology and Syntactic Description 3: Grammatical Categories and the Lexicon. Cambridge University Press, Cambridge, pp. 57–149; Talmy, L., 1991. Path to realisation: via aspect and result. In: Proceedings of the Seventeenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 480–520]. I use data from Hindi to show that there is little empirical support for the claim that the constraint on the phrasal combination is correlated with differences in verb meaning or the presence of satellites in the lexicon of a language. However, proposals which eschew lexicalization accounts for more general aspectual constraints on the manner verb + path phrase combination in Spanish-type languages (Aske, J., 1989. Path Predicates in English and Spanish: A Closer look. In: Proceedings of the Fifteenth Annual Meeting of the Berkeley Linguistics Society. Berkeley Linguistics Society, Berkeley, pp. 1–14) cannot account for the full range of data in Hindi either. On the basis of these facts, I argue that an empirically adequate account can be formulated in terms of a general mapping constraint, formulated in terms of whether the lexical requirements of the verb strictly or weakly constrain its syntactic privileges of occurrence. In Hindi, path phrases can combine with manner of motion verbs only to the degree that they are compatible with the semantic profile of the verb. Path phrases in English, on the other hand, can extend the verb's “semantic profile” subject to certain constraints. I suggest that path phrases are licensed in English by the semantic requirements of the “construction” in which they appear rather than by the selectional requirements of the verb (Fillmore, C., Kay, P., O'Connor, M.C., 1988, Regularity and idiomaticity in grammatical constructions. Language 64, 501–538; Jackendoff, 1990, Semantic Structures. MIT Press, Cambridge, MA; Goldberg, 1995, Constructions: A Construction Grammar Approach to Argument Structure. University of Chicago Press, Chicago and London).
  • Nieuwland, M. S., & Van Berkum, J. J. A. (2005). Testing the limits of the semantic illusion phenomenon: ERPs reveal temporary semantic change deafness in discourse comprehension. Cognitive Brain Research, 24(3), 691-701. doi:10.1016/j.cogbrainres.2005.04.003.

    Abstract

    In general, language comprehension is surprisingly reliable. Listeners very rapidly extract meaning from the unfolding speech signal, on a word-by-word basis, and usually successfully. Research on ‘semantic illusions’ however suggests that under certain conditions, people fail to notice that the linguistic input simply doesn't make sense. In the current event-related brain potentials (ERP) study, we examined whether listeners would, under such conditions, spontaneously detect an anomaly in which a human character central to the story at hand (e.g., “a tourist”) was suddenly replaced by an inanimate object (e.g., “a suitcase”). Because this replacement introduced a very powerful coherence break, we expected listeners to immediately notice the anomaly and generate the standard ERP effect associated with incoherent language, the N400 effect. However, instead of the standard N400 effect, anomalous words elicited a positive ERP effect from about 500–600 ms onwards. The absence of an N400 effect suggests that subjects did not immediately notice the anomaly, and that for a few hundred milliseconds the comprehension system has converged on an apparently coherent but factually incorrect interpretation. The presence of the later ERP effect indicates that subjects were processing for comprehension and did ultimately detect the anomaly. Therefore, we take the absence of a regular N400 effect as the online manifestation of a temporary semantic illusion. Our results also show that even attentive listeners sometimes fail to notice a radical change in the nature of a story character, and therefore suggest a case of short-lived ‘semantic change deafness’ in language comprehension.
  • Norris, D., McQueen, J. M., & Cutler, A. (2003). Perceptual learning in speech. Cognitive Psychology, 47(2), 204-238. doi:10.1016/S0010-0285(03)00006-9.

    Abstract

    This study demonstrates that listeners use lexical knowledge in perceptual learning of speech sounds. Dutch listeners first made lexical decisions on Dutch words and nonwords. The final fricative of 20 critical words had been replaced by an ambiguous sound, between [f] and [s]. One group of listeners heard ambiguous [f]-final words (e.g., [witlo?], from witlof, chicory) and unambiguous [s]-final words (e.g., naaldbos, pine forest). Another group heard the reverse (e.g., ambiguous [na:ldbo?], unambiguous witlof). Listeners who had heard [?] in [f]-final words were subsequently more likely to categorize ambiguous sounds on an [f]–[s] continuum as [f] than those who heard [?] in [s]-final words. Control conditions ruled out alternative explanations based on selective adaptation and contrast. Lexical information can thus be used to train categorization of speech. This use of lexical information differs from the on-line lexical feedback embodied in interactive models of speech perception. In contrast to on-line feedback, lexical feedback for learning is of benefit to spoken word recognition (e.g., in adapting to a newly encountered dialect).
  • Nyberg, L., Marklund, P., Persson, J., Cabeza, R., Forkstam, C., Petersson, K. M., & Ingvar, M. (2003). Common prefrontal activations during working memory, episodic memory, and semantic memory. Neuropsychologia, 41(3), 371-377. doi:10.1016/S0028-3932(02)00168-9.

    Abstract

    Regions of the prefrontal cortex (PFC) are typically activated in many different cognitive functions. In most studies, the focus has been on the role of specific PFC regions in specific cognitive domains, but more recently similarities in PFC activations across cognitive domains have been stressed. Such similarities may suggest that a region mediates a common function across a variety of cognitive tasks. In this study, we compared the activation patterns associated with tests of working memory, semantic memory and episodic memory. The results converged on a general involvement of four regions across memory tests. These were located in left frontopolar cortex, left mid-ventrolateral PFC, left mid-dorsolateral PFC and dorsal anterior cingulate cortex. These findings provide evidence that some PFC regions are engaged during many different memory tests. The findings are discussed in relation to theories about the functional contribution of the PFC regions and the architecture of memory.
  • Nyberg, L., Sandblom, J., Jones, S., Stigsdotter Neely, A., Petersson, K. M., Ingvar, M., & Bäckman, L. (2003). Neural correlates of training-related memory improvement in adulthood and aging. Proceedings of the National Academy of Sciences of the United States of America, 100(23), 13728-13733. doi:10.1073/pnas.1735487100.

    Abstract

    Cognitive studies show that both younger and older adults can increase their memory performance after training in using a visuospatial mnemonic, although age-related memory deficits tend to be magnified rather than reduced after training. Little is known about the changes in functional brain activity that accompany training-induced memory enhancement, and whether age-related activity changes are associated with the size of training-related gains. Here, we demonstrate that younger adults show increased activity during memory encoding in occipito-parietal and frontal brain regions after learning the mnemonic. Older adults did not show increased frontal activity, and only those elderly persons who benefited from the mnemonic showed increased occipitoparietal activity. These findings suggest that age-related differences in cognitive reserve capacity may reflect both a frontal processing deficiency and a posterior production deficiency.
  • Ogdie, M. N., MacPhie, I. L., Minassian, S. L., Yang, M., Fisher, S. E., Francks, C., Cantor, R. M., McCracken, J. T., McGough, J. J., Nelson, S. F., Monaco, A. P., & Smalley, S. L. (2003). A genomewide scan for Attention-Deficit/Hyperactivity Disorder in an extended sample: Suggestive linkage on 17p11. American Journal of Human Genetics, 72(5), 1268-1279. doi:10.1086/375139.

    Abstract

    Attention-deficit/hyperactivity disorder (ADHD [MIM 143465]) is a common, highly heritable neurobehavioral disorder of childhood onset, characterized by hyperactivity, impulsivity, and/or inattention. As part of an ongoing study of the genetic etiology of ADHD, we have performed a genomewide linkage scan in 204 nuclear families comprising 853 individuals and 270 affected sibling pairs (ASPs). Previously, we reported genomewide linkage analysis of a “first wave” of these families composed of 126 ASPs. A follow-up investigation of one region on 16p yielded significant linkage in an extended sample. The current study extends the original sample of 126 ASPs to 270 ASPs and provides linkage analyses of the entire sample, using polymorphic microsatellite markers that define an ∼10-cM map across the genome. Maximum LOD score (MLS) analysis identified suggestive linkage for 17p11 (MLS=2.98) and four nominal regions with MLS values >1.0, including 5p13, 6q14, 11q25, and 20q13. These data, taken together with the fine mapping on 16p13, suggest two regions as highly likely to harbor risk genes for ADHD: 16p13 and 17p11. Interestingly, both regions, as well as 5p13, have been highlighted in genomewide scans for autism.
  • O'Shannessy, C. (2005). Light Warlpiri: A new language. Australian Journal of Linguistics, 25(1), 31-57. doi:10.1080/07268600500110472.
  • Özyürek, A., Kita, S., Allen, S., Furman, R., & Brown, A. (2005). How does linguistic framing of events influence co-speech gestures? Insights from crosslinguistic variations and similarities. Gesture, 5(1-2), 219-240.

    Abstract

    What are the relations between linguistic encoding and gestural representations of events during online speaking? The few studies that have been conducted on this topic have yielded somewhat incompatible results with regard to whether and how gestural representations of events change with differences in the preferred semantic and syntactic encoding possibilities of languages. Here we provide large-scale semantic, syntactic and temporal analyses of speech-gesture pairs that depict 10 different motion events from 20 Turkish and 20 English speakers. We find that the gestural representations of the same events differ across languages when they are encoded by different syntactic frames (i.e., verb-framed or satellite-framed). However, where there are similarities across languages, such as omission of a certain element of the event in the linguistic encoding, gestural representations also look similar and omit the same content. The results are discussed in terms of what gestures reveal about the influence of language-specific encoding on on-line thinking patterns and the underlying interactions between speech and gesture during the speaking process.
  • Paterson, K. B., Liversedge, S. P., Rowland, C. F., & Filik, R. (2003). Children's comprehension of sentences with focus particles. Cognition, 89(3), 263-294. doi:10.1016/S0010-0277(03)00126-4.

    Abstract

    We report three studies investigating children's and adults' comprehension of sentences containing the focus particle only. In Experiments 1 and 2, four groups of participants (6–7 years, 8–10 years, 11–12 years and adult) compared sentences with only in different syntactic positions against pictures that matched or mismatched events described by the sentence. Contrary to previous findings (Crain, S., Ni, W., & Conway, L. (1994). Learning, parsing and modularity. In C. Clifton, L. Frazier, & K. Rayner (Eds.), Perspectives on sentence processing. Hillsdale, NJ: Lawrence Erlbaum; Philip, W., & Lynch, E. (1999). Felicity, relevance, and acquisition of the grammar of every and only. In S. C. Howell, S. A. Fish, & T. Keith-Lucas (Eds.), Proceedings of the 24th annual Boston University conference on language development. Somerville, MA: Cascadilla Press) we found that young children predominantly made errors by failing to process contrast information rather than errors in which they failed to use syntactic information to restrict the scope of the particle. Experiment 3 replicated these findings with pre-schoolers.
  • Penke, M., Janssen, U., Indefrey, P., & Seitz, R. (2005). No evidence for a rule/procedural deficit in German patients with Parkinson's disease. Brain and Language, 95(1), 139-140. doi:10.1016/j.bandl.2005.07.078.
  • Petersson, K. M., Sandblom, J., Elfgren, C., & Ingvar, M. (2003). Instruction-specific brain activations during episodic encoding: A generalized level of processing effect. Neuroimage, 20, 1795-1810. doi:10.1016/S1053-8119(03)00414-2.

    Abstract

    In a within-subject design we investigated the levels-of-processing (LOP) effect using visual material in a behavioral and a corresponding PET study. In the behavioral study we characterize a generalized LOP effect, using pleasantness and graphical quality judgments in the encoding situation, with two types of visual material, figurative and nonfigurative line drawings. In the PET study we investigate the related pattern of brain activations along these two dimensions. The behavioral results indicate that instruction and material contribute independently to the level of recognition performance. Therefore the LOP effect appears to stem both from the relative relevance of the stimuli (encoding opportunity) and an altered processing of stimuli brought about by the explicit instruction (encoding mode). In the PET study, encoding of visual material under the pleasantness (deep) instruction yielded left lateralized frontoparietal and anterior temporal activations while surface-based perceptually oriented processing (shallow instruction) yielded right lateralized frontoparietal, posterior temporal, and occipitotemporal activations. The result that deep encoding was related to the left prefrontal cortex while shallow encoding was related to the right prefrontal cortex, holding the material constant, is not consistent with the HERA model. In addition, we suggest that the anterior medial superior frontal region is related to aspects of self-referential semantic processing and that the inferior parts of the anterior cingulate as well as the medial orbitofrontal cortex are related to affective processing, in this case pleasantness evaluation of the stimuli regardless of explicit semantic content. Finally, the left medial temporal lobe appears more actively engaged by elaborate meaning-based processing, and the complex response pattern observed in different subregions of the MTL lends support to the suggestion that this region is functionally segregated.
  • Petersson, K. M. (2005). On the relevance of the neurobiological analogue of the finite-state architecture. Neurocomputing, 65-66, 825-832. doi:10.1016/j.neucom.2004.10.108.

    Abstract

    We present two simple arguments for the potential relevance of a neurobiological analogue of the finite-state architecture. The first assumes the classical cognitive framework, is well known, and is based on the assumption that the brain is finite with respect to its memory organization. The second is formulated within a general dynamical systems framework and is based on the assumption that the brain sustains some level of noise and/or does not utilize infinite precision processing. We briefly review the classical cognitive framework based on Church–Turing computability and non-classical approaches based on analog processing in dynamical systems. We conclude that the dynamical neurobiological analogue of the finite-state architecture appears to be relevant, at least at an implementational level, for cognitive brain systems.
  • Pine, J. M., Rowland, C. F., Lieven, E. V., & Theakston, A. L. (2005). Testing the Agreement/Tense Omission Model: Why the data on children's use of non-nominative 3psg subjects count against the ATOM. Journal of Child Language, 32(2), 269-289. doi:10.1017/S0305000905006860.

    Abstract

    One of the most influential recent accounts of pronoun case-marking errors in young children's speech is Schütze & Wexler's (1996) Agreement/Tense Omission Model (ATOM). The ATOM predicts that the rate of agreeing verbs with non-nominative subjects will be so low that such errors can be reasonably disregarded as noise in the data. The present study tests this prediction on data from 12 children between the ages of 1;8.22 and 3;0.10. This is done, first, by identifying children who produced a reasonably large number of non-nominative 3psg subjects; second, by estimating the expected rate of agreeing verbs with masculine and feminine non-nominative subjects in these children's speech; and, third, by examining the actual rate at which agreeing verb forms occurred with non-nominative subjects in those areas of the data in which the expected error rate was significantly greater than 10%. The results show, first, that only three of the children produced enough non-nominative subjects to allow a reasonable test of the ATOM to be made; second, that for all three of these children, the only area of the data in which the expected frequency of agreeing verbs with non-nominative subjects was significantly greater than 10% was their use of feminine case-marked subjects; and third, that for all three of these children, the rate of agreeing verbs with non-nominative feminine subjects was over 30%. These results raise serious doubts about the claim that children's use of non-nominative subjects can be explained in terms of AGR optionality, and suggest the need for a model of pronoun case-marking error that can explain why some children produce agreeing verb forms with non-nominative subjects as often as they do.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Articulatory planning is continuous and sensitive to informational redundancy. Phonetica, 62(2-4), 146-159. doi:10.1159/000090095.

    Abstract

    This study investigates the relationship between word repetition, predictability from neighbouring words, and articulatory reduction in Dutch. For the seven most frequent words ending in the adjectival suffix -lijk, 40 occurrences were randomly selected from a large database of face-to-face conversations. Analysis of the selected tokens showed that the degree of articulatory reduction (as measured by duration and number of realized segments) was affected by repetition, predictability from the previous word and predictability from the following word. Interestingly, not all of these effects were significant across morphemes and target words. Repetition effects were limited to suffixes, while effects of predictability from the previous word were restricted to the stems of two of the seven target words. Predictability from the following word affected the stems of all target words equally, but not all suffixes. The implications of these findings for models of speech production are discussed.
  • Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005). Lexical frequency and acoustic reduction in spoken Dutch. Journal of the Acoustical Society of America, 118(4), 2561-2569. doi:10.1121/1.2011150.

    Abstract

    This study investigates the effects of lexical frequency on the durational reduction of morphologically complex words in spoken Dutch. The hypothesis that high-frequency words are more reduced than low-frequency words was tested by comparing the durations of affixes occurring in different carrier words. Four Dutch affixes were investigated, each occurring in a large number of words with different frequencies. The materials came from a large database of face-to-face conversations. For each word containing a target affix, one token was randomly selected for acoustic analysis. Measurements were made of the duration of the affix as a whole and the durations of the individual segments in the affix. For three of the four affixes, a higher frequency of the carrier word led to shorter realizations of the affix as a whole, individual segments in the affix, or both. Other relevant factors were the sex and age of the speaker, segmental context, and speech rate. To accommodate for these findings, models of speech production should allow word frequency to affect the acoustic realizations of lower-level units, such as individual speech sounds occurring in affixes.
  • Poletiek, F. H., & Van den Bos, E. J. (2005). Het onbewuste is een dader met een motief [The unconscious is a perpetrator with a motive]. De Psycholoog, 40(1), 11-17.
  • Reis, A., Guerreiro, M., & Petersson, K. M. (2003). A sociodemographic and neuropsychological characterization of an illiterate population. Applied Neuropsychology, 10, 191-204. doi:10.1207/s15324826an1004_1.

    Abstract

    The objectives of this article are to characterize the performance and to discuss the performance differences between literate and illiterate participants in a well-defined study population. We describe the participant-selection procedure used to investigate this population. Three groups with similar sociocultural backgrounds living in a relatively homogeneous fishing community in southern Portugal were characterized in terms of socioeconomic and sociocultural background variables and compared on a simple neuropsychological test battery; specifically, a literate group with more than 4 years of education (n = 9), a literate group with 4 years of education (n = 26), and an illiterate group (n = 31) were included in this study. We compare and discuss our results with other similar studies on the effects of literacy and illiteracy. The results indicate that naming and identification of real objects, verbal fluency using ecologically relevant semantic criteria, verbal memory, and orientation are not affected by literacy or level of formal education. In contrast, verbal working memory assessed with digit span, verbal abstraction, long-term semantic memory, and calculation (i.e., multiplication) are significantly affected by the level of literacy. We indicate that it is possible, with proper participant-selection procedures, to exclude general cognitive impairment and to control important sociocultural factors that potentially could introduce bias when studying the specific effects of literacy and level of formal education on cognitive brain function.
  • Reis, A., & Petersson, K. M. (2003). Educational level, socioeconomic status and aphasia research: A comment on Connor et al. (2001) - Effect of socioeconomic status on aphasia severity and recovery. Brain and Language, 87, 449-452. doi:10.1016/S0093-934X(03)00140-8.

    Abstract

    Is there a relation between socioeconomic factors and aphasia severity and recovery? Connor, Obler, Tocco, Fitzpatrick, and Albert (2001) describe correlations of the educational level and socioeconomic status of aphasic subjects with aphasia severity and subsequent recovery. As stated in the introduction by Connor et al. (2001), studies of the influence of educational level and literacy (or illiteracy) on aphasia severity have yielded conflicting results, while no significant link between socioeconomic status and aphasia severity and recovery has been established. In this brief note, we will comment on their findings and conclusions, beginning first with a brief review of literacy and aphasia research, and complexities encountered in these fields of investigation. This serves as a general background to our specific comments on Connor et al. (2001), which will be focusing on methodological issues and the importance of taking normative values into consideration when subjects with different socio-cultural or socio-economic backgrounds are assessed.
  • Rey, A., & Schiller, N. O. (2005). Graphemic complexity and multiple print-to-sound associations in visual word recognition. Memory & Cognition, 33(1), 76-85.

    Abstract

    It has recently been reported that words containing a multiletter grapheme are processed slower than are words composed of single-letter graphemes (Rastle & Coltheart, 1998; Rey, Jacobs, Schmidt-Weigand, & Ziegler, 1998). In the present study, using a perceptual identification task, we found in Experiment 1 that this graphemic complexity effect can be observed while controlling for multiple print-to-sound associations, indexed by regularity or consistency. In Experiment 2, we obtained cumulative effects of graphemic complexity and regularity. These effects were replicated in Experiment 3 in a naming task. Overall, these results indicate that graphemic complexity and multiple print-to-sound associations effects are independent and should be accounted for in different ways by models of written word processing.
  • Roelofs, A. (2003). Shared phonological encoding processes and representations of languages in bilingual speakers. Language and Cognitive Processes, 18(2), 175-204. doi:10.1080/01690960143000515.

    Abstract

    Four form-preparation experiments investigated whether aspects of phonological encoding processes and representations are shared between languages in bilingual speakers. The participants were Dutch-English bilinguals. Experiment 1 showed that the basic rightward incrementality revealed in studies for the first language is also observed for second-language words. In Experiments 2 and 3, speakers were given words to produce that did or did not share onset segments, and that came or did not come from different languages. It was found that when onsets were shared among the response words, those onsets were prepared, even when the words came from different languages. Experiment 4 showed that preparation requires prior knowledge of the segments and that knowledge about their phonological features yields no effect. These results suggest that both first- and second-language words are phonologically planned through the same serial order mechanism and that the representations of segments common to the languages are shared.
  • Roelofs, A. (2005). The visual-auditory color-word Stroop asymmetry and its time course. Memory & Cognition, 33(8), 1325-1336.

    Abstract

    Four experiments examined crossmodal versions of the Stroop task in order (1) to look for Stroop asymmetries in color naming, spoken-word naming, and written-word naming and to evaluate the time course of these asymmetries, and (2) to compare these findings to current models of the Stroop effect. Participants named color patches while ignoring spoken color words presented with an onset varying from 300 msec before to 300 msec after the onset of the color (Experiment 1), or they named the spoken words and ignored the colors (Experiment 2). A secondary visual detection task assured that the participants looked at the colors in both tasks. Spoken color words yielded Stroop effects in color naming, but colors did not yield an effect in spoken-word naming at any stimulus onset asynchrony. This asymmetry in effects was obtained with equivalent color- and spoken-word-naming latencies. Written color words yielded a Stroop effect in naming spoken words (Experiment 3), and spoken color words yielded an effect in naming written words (Experiment 4). These results were interpreted as most consistent with an architectural account of the color-word Stroop asymmetry, in contrast with discriminability and pathway strength accounts.
  • Roelofs, A. (2003). Goal-referenced selection of verbal action: Modeling attentional control in the Stroop task. Psychological Review, 110(1), 88-125.

    Abstract

    This article presents a new account of the color-word Stroop phenomenon (J. R. Stroop, 1935) based on an implemented model of word production, WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b; A. Roelofs, 1992, 1997c). Stroop effects are claimed to arise from processing interactions within the language-production architecture and explicit goal-referenced control. WEAVER++ successfully simulates 16 classic data sets, mostly taken from the review by C. M. MacLeod (1991), including incongruency, congruency, reverse-Stroop, response-set, semantic-gradient, time-course, stimulus, spatial, multiple-task, manual, bilingual, training, age, and pathological effects. Three new experiments tested the account against alternative explanations. It is shown that WEAVER++ offers a more satisfactory account of the data than other models.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2003). Determinants of acquisition order in wh-questions: Re-evaluating the role of caregiver speech. Journal of Child Language, 30(3), 609-635. doi:10.1017/S0305000903005695.

    Abstract

    Accounts that specify semantic and/or syntactic complexity as the primary determinant of the order in which children acquire particular words or grammatical constructions have been highly influential in the literature on question acquisition. One explanation of wh-question acquisition in particular suggests that the order in which English-speaking children acquire wh-questions is determined by two interlocking linguistic factors: the syntactic function of the wh-word that heads the question and the semantic generality (or ‘lightness’) of the main verb (Bloom, Merkin & Wootten, 1982; Bloom, 1991). Another more recent view, however, is that acquisition is influenced by the relative frequency with which children hear particular wh-words and verbs in their input (e.g. Rowland & Pine, 2000). In the present study, over 300 hours of naturalistic data from twelve two- to three-year-old children and their mothers were analysed in order to assess the relative contribution of complexity and input frequency to wh-question acquisition. The analyses revealed, first, that the acquisition order of wh-questions could be predicted successfully from the frequency with which particular wh-words and verbs occurred in the children's input and, second, that syntactic and semantic complexity did not reliably predict acquisition once input frequency was taken into account. These results suggest that the relationship between acquisition and complexity may be a by-product of the high correlation between complexity and the frequency with which mothers use particular wh-words and verbs. We interpret the results in terms of a constructivist view of language acquisition.
  • Rowland, C. F., & Pine, J. M. (2003). The development of inversion in wh-questions: a reply to Van Valin. Journal of Child Language, 30(1), 197-212. doi:10.1017/S0305000902005445.

    Abstract

    Van Valin (Journal of Child Language 29, 2002, 161–75) presents a critique of Rowland & Pine (Journal of Child Language 27, 2000, 157–81) and argues that the wh-question data from Adam (in Brown, A first language, Cambridge, MA, 1973) cannot be explained in terms of input frequencies as we suggest. Instead, he suggests that the data can be more successfully accounted for in terms of Role and Reference Grammar. In this note we re-examine the pattern of inversion and uninversion in Adam's wh-questions and argue that the RRG explanation cannot account for some of the developmental facts it was designed to explain.
  • Rowland, C. F., Pine, J. M., Lieven, E. V., & Theakston, A. L. (2005). The incidence of error in young children's wh-questions. Journal of Speech, Language, and Hearing Research, 48, 384-404. doi:10.1044/1092-4388(2005/027).

    Abstract

    Many current generativist theorists suggest that young children possess the grammatical principles of inversion required for question formation but make errors because they find it difficult to learn language-specific rules about how inversion applies. The present study analyzed longitudinal spontaneous sampled data from twelve 2–3-year-old English-speaking children and the intensive diary data of 1 child (age 2;7 [years;months] to 2;11) in order to test some of these theories. The results indicated significantly different rates of error use across different auxiliaries. In particular, error rates differed across 2 forms of the same auxiliary subtype (e.g., auxiliary is vs. are), and auxiliary DO and modal auxiliaries attracted significantly higher rates of errors of inversion than other auxiliaries. The authors concluded that current generativist theories might have problems explaining the patterning of errors seen in children's questions, which might be more consistent with a constructivist account of development. However, constructivists need to devise more precise predictions in order to fully explain the acquisition of questions.
  • De Ruiter, J. P., Rossignol, S., Vuurpijl, L., Cunningham, D. W., & Levelt, W. J. M. (2003). SLOT: A research platform for investigating multimodal communication. Behavior Research Methods, Instruments, & Computers, 35(3), 408-419.

    Abstract

    In this article, we present the spatial logistics task (SLOT) platform for investigating multimodal communication between 2 human participants. Presented are the SLOT communication task and the software and hardware that have been developed to run SLOT experiments and record the participants’ multimodal behavior. SLOT offers a high level of flexibility in varying the context of the communication and is particularly useful in studies of the relationship between pen gestures and speech. We illustrate the use of the SLOT platform by discussing the results of some early experiments. The first is an experiment on negotiation with a one-way mirror between the participants, and the second is an exploratory study of automatic recognition of spontaneous pen gestures. The results of these studies demonstrate the usefulness of the SLOT platform for conducting multimodal communication research in both human–human and human–computer interactions.
  • Salverda, A. P., Dahan, D., & McQueen, J. M. (2003). The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition, 90(1), 51-89. doi:10.1016/S0010-0277(03)00139-2.

    Abstract

    Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
  • Scharenborg, O., ten Bosch, L., Boves, L., & Norris, D. (2003). Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition [Letter to the editor]. Journal of the Acoustical Society of America, 114, 3032-3035. doi:10.1121/1.1624065.

    Abstract

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189–234 (1994)]. Experiments based on “real-life” speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2003). ‘Early recognition’ of words in continuous speech. Automatic Speech Recognition and Understanding, 2003 IEEE Workshop, 61-66. doi:10.1109/ASRU.2003.1318404.

    Abstract

    In this paper, we present an automatic speech recognition (ASR) system based on the combination of an automatic phone recogniser and a computational model of human speech recognition – SpeM – that is capable of computing ‘word activations’ during the recognition process, in addition to doing normal speech recognition, a task in which conventional ASR architectures only provide output after the end of an utterance. We explain the notion of word activation and show that it can be used for ‘early recognition’, i.e. recognising a word before the end of the word is available. Our ASR system was tested on 992 continuous speech utterances, each containing at least one target word: a city name of at least two syllables. The results show that early recognition was obtained for 72.8% of the target words that were recognised correctly. Also, it is shown that word activation can be used as an effective confidence measure.
  • Scharenborg, O., Norris, D., Ten Bosch, L., & McQueen, J. M. (2005). How should a speech recognizer work? Cognitive Science, 29(6), 867-918. doi:10.1207/s15516709cog0000_37.

    Abstract

    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR that, in contrast to current existing models of HSR, recognizes words from real speech input.
  • Schiller, N. O., Münte, T. F., Horemans, I., & Jansma, B. M. (2003). The influence of semantic and phonological factors on syntactic decisions: An event-related brain potential study. Psychophysiology, 40(6), 869-877. doi:10.1111/1469-8986.00105.

    Abstract

    During language production and comprehension, information about a word's syntactic properties is sometimes needed. While the decision about the grammatical gender of a word requires access to syntactic knowledge, it has also been hypothesized that semantic (i.e., biological gender) or phonological information (i.e., sound regularities) may influence this decision. Event-related potentials (ERPs) were measured while native speakers of German processed written words that were or were not semantically and/or phonologically marked for gender. Behavioral and ERP results showed that participants were faster in making a gender decision when words were semantically and/or phonologically gender marked than when this was not the case, although the phonological effects were less clear. In conclusion, our data provide evidence that even though participants performed a grammatical gender decision, this task can be influenced by semantic and phonological factors.
  • Schiller, N. O., Bles, M., & Jansma, B. M. (2003). Tracking the time course of phonological encoding in speech production: An event-related brain potential study on internal monitoring. Cognitive Brain Research, 17(3), 819-831. doi:10.1016/S0926-6410(03)00204-0.

    Abstract

    This study investigated the time course of phonological encoding during speech production planning. Previous research has shown that conceptual/semantic information precedes syntactic information in the planning of speech production and that syntactic information is available earlier than phonological information. Here, we studied the relative time courses of the two different processes within phonological encoding, i.e. metrical encoding and syllabification. According to one prominent theory of language production, metrical encoding involves the retrieval of the stress pattern of a word, while syllabification is carried out to construct the syllabic structure of a word. However, the relative timing of these two processes is underspecified in the theory. We employed an implicit picture naming task and recorded event-related brain potentials to obtain fine-grained temporal information about metrical encoding and syllabification. Results revealed that both tasks generated effects that fall within the time window of phonological encoding. However, there was no timing difference between the two effects, suggesting that they occur approximately at the same time.
  • Schiller, N. O., & Caramazza, A. (2003). Grammatical feature selection in noun phrase production: Evidence from German and Dutch. Journal of Memory and Language, 48(1), 169-194. doi:10.1016/S0749-596X(02)00508-9.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in German and Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners or suffixes. Pictures of one or two objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a singular or plural noun phrase with the appropriate determiner and/or adjective in German or Dutch. Significant effects of gender congruency were only obtained in the singular condition where the selection of determiners is governed by the target’s gender, but not in the plural condition where the determiner is identical for all genders. When different suffixes were to be selected in the gender-incongruent condition, no gender congruency effect was obtained. The results suggest that the so-called gender congruency effect is really a determiner congruency effect. The overall pattern of results is interpreted as indicating that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from other grammatical features. This implies that lexical node and grammatical feature selection operate with distinct principles.
  • Schoffelen, J.-M., Oostenveld, R., & Fries, P. (2005). Neuronal coherence as a mechanism of effective corticospinal interaction. Science, 308, 111-113. doi:10.1126/science.1107027.

    Abstract

    Neuronal groups can interact with each other even if they are widely separated. One group might modulate its firing rate or its internal oscillatory synchronization to influence another group. We propose that coherence between two neuronal groups is a mechanism of efficient interaction, because it renders mutual input optimally timed and thereby maximally effective. Modulations of subjects' readiness to respond in a simple reaction-time task were closely correlated with the strength of gamma-band (40 to 70 hertz) coherence between motor cortex and spinal cord neurons. This coherence may contribute to an effective corticospinal interaction and shortened reaction times.
  • Seifart, F. (2003). Marqueurs de classe généraux et spécifiques en Miraña. Faits de Langues, 21, 121-132.
  • Senft, G. (2005). [Review of the book Malinowski: Odyssey of an anthropologist 1884-1920 by Michael Young]. Oceania, 75(3), 302-302.
  • Senft, G. (2003). [Review of the book Representing space in Oceania: Culture in language and mind ed. by Giovanni Bennardo]. Journal of the Polynesian Society, 112, 169-171.
  • Senft, G. (2005). [Review of the book The art of Kula by Shirley F. Campbell]. Anthropos, 100, 247-249.
  • Senghas, A., Ozyurek, A., & Kita, S. (2005). [Response to comment on Children creating core properties of language: Evidence from an emerging sign language in Nicaragua]. Science, 309(5731), 56c-56c. doi:10.1126/science.1110901.
  • Seuren, P. A. M. (2005). Eubulides as a 20th-century semanticist. Language Sciences, 27(1), 75-95. doi:10.1016/j.langsci.2003.12.001.

    Abstract

    It is the purpose of the present paper to highlight the figure of Eubulides, a relatively unknown Greek philosopher who lived ±405–330 BC and taught at Megara, not far from Athens. He is mainly known for his four paradoxes (the Liar, the Sorites, the Electra, and the Horns), and for the mutual animosity between him and his younger contemporary Aristotle. The Megarian school of philosophy was one of the main sources of the great Stoic tradition in ancient philosophy. What has never been made explicit in the literature is the importance of the four paradoxes for the study of meaning in natural language: they summarize the whole research programme of 20th century formal or formally oriented semantics, including the problems of vague predicates (Sorites), intensional contexts (Electra), and presuppositions (Horns). One might say that modern formal or formally oriented semantics is essentially an attempt at finding linguistically tenable answers to problems arising in the context of Aristotelian thought. It is a surprising and highly significant fact that a contemporary of Aristotle already spotted the main weaknesses of the Aristotelian paradigm.
  • Shapiro, K. A., Mottaghy, F. M., Schiller, N. O., Poeppel, T. D., Flüss, M. O., Müller, H. W., Caramazza, A., & Krause, B. J. (2005). Dissociating neural correlates for nouns and verbs. NeuroImage, 24(4), 1058-1067. doi:10.1016/j.neuroimage.2004.10.015.

    Abstract

    Dissociations in the ability to produce words of different grammatical categories are well established in neuropsychology but have not been corroborated fully with evidence from brain imaging. Here we report on a PET study designed to reveal the anatomical correlates of grammatical processes involving nouns and verbs. German-speaking subjects were asked to produce either plural and singular nouns, or first-person plural and singular verbs. Verbs, relative to nouns, activated a left frontal cortical network, while the opposite contrast (nouns–verbs) showed greater activation in temporal regions bilaterally. Similar patterns emerged when subjects performed the task with pseudowords used as nouns or as verbs. These results converge with findings from lesion studies and suggest that grammatical category is an important dimension of organization for knowledge of language in the brain.
  • Sharp, D. J., Scott, S. K., Cutler, A., & Wise, R. J. S. (2005). Lexical retrieval constrained by sound structure: The role of the left inferior frontal gyrus. Brain and Language, 92(3), 309-319. doi:10.1016/j.bandl.2004.07.002.

    Abstract

    Positron emission tomography was used to investigate two competing hypotheses about the role of the left inferior frontal gyrus (IFG) in word generation. One proposes a domain-specific organization, with neural activation dependent on the type of information being processed, i.e., surface sound structure or semantic. The other proposes a process-specific organization, with activation dependent on processing demands, such as the amount of selection needed to decide between competing lexical alternatives. In a novel word retrieval task, word reconstruction (WR), subjects generated real words from heard non-words by the substitution of either a vowel or consonant. Both types of lexical retrieval, informed by sound structure alone, produced activation within anterior and posterior left IFG regions. Within these regions there was greater activity for consonant WR, which is more difficult and imposes greater processing demands. These results support a process-specific organization of the anterior left IFG.
  • Smits, R., Warner, N., McQueen, J. M., & Cutler, A. (2003). Unfolding of phonetic information over time: A database of Dutch diphone perception. Journal of the Acoustical Society of America, 113(1), 563-574. doi:10.1121/1.1525287.

    Abstract

    We present the results of a large-scale study on speech perception, assessing the number and type of perceptual hypotheses which listeners entertain about possible phoneme sequences in their language. Dutch listeners were asked to identify gated fragments of all 1179 diphones of Dutch, providing a total of 488,520 phoneme categorizations. The results manifest orderly uptake of acoustic information in the signal. Differences across phonemes in the rate at which fully correct recognition was achieved arose as a result of whether or not potential confusions could occur with other phonemes of the language (long with short vowels, affricates with their initial components, etc.). These data can be used to improve models of how acoustic phonetic information is mapped onto the mental lexicon during speech comprehension.
  • Spinelli, E., McQueen, J. M., & Cutler, A. (2003). Processing resyllabified words in French. Journal of Memory and Language, 48(2), 233-254. doi:10.1016/S0749-596X(02)00513-2.
  • Srivastava, S., Budwig, N., & Narasimhan, B. (2005). A developmental-functionalist view of the development of transitive and intransitive constructions in a Hindi-speaking child: A case study. International Journal of Idiographic Science, 2.
  • Stivers, T. (2005). Parent resistance to physicians' treatment recommendations: One resource for initiating a negotiation of the treatment decision. Health Communication, 18(1), 41-74. doi:10.1207/s15327027hc1801_3.

    Abstract

    This article examines pediatrician-parent interaction in the context of acute pediatric encounters for children with upper respiratory infections. Parents and physicians orient to treatment recommendations as normatively requiring parent acceptance for physicians to close the activity. Through acceptance, withholding of acceptance, or active resistance, parents have resources with which to negotiate for a treatment outcome that is in line with their own wants. This article offers evidence that even in acute care, shared decision making not only occurs but, through normative constraints, is mandated for parents and physicians to reach accord in the treatment decision.
  • Stivers, T. (2005). Modified repeats: One method for asserting primary rights from second position. Research on Language and Social Interaction, 38(2), 131-158. doi:10.1207/s15327973rlsi3802_1.

    Abstract

    In this article I examine one practice speakers have for confirming when confirmation was not otherwise relevant. The practice involves a speaker repeating an assertion previously made by another speaker in modified form with stress on the copula/auxiliary. I argue that these modified repeats work to undermine the first speaker's default ownership and rights over the claim and instead assert the primacy of the second speaker's rights to make the statement. Two types of modified repeats are identified: partial and full. Although both involve competing for primacy of the claim, they occur in distinct sequential environments: The former are generally positioned after a first claim was epistemically downgraded, whereas the latter are positioned following initial claims that were offered straightforwardly, without downgrading.
  • Stivers, T., & Sidnell, J. (2005). Introduction: Multimodal interaction. Semiotica, 156(1/4), 1-20. doi:10.1515/semi.2005.2005.156.1.

    Abstract

    That human social interaction involves the intertwined cooperation of different modalities is uncontroversial. Researchers in several allied fields have, however, only recently begun to document the precise ways in which talk, gesture, gaze, and aspects of the material surround are brought together to form coherent courses of action. The papers in this volume are attempts to develop this line of inquiry. Although the authors draw on a range of analytic, theoretical, and methodological traditions (conversation analysis, ethnography, distributed cognition, and workplace studies), all are concerned to explore and illuminate the inherently multimodal character of social interaction. Recent studies, including those collected in this volume, suggest that different modalities work together not only to elaborate the semantic content of talk but also to constitute coherent courses of action. In this introduction we present evidence for this position. We begin by reviewing some select literature focusing primarily on communicative functions and interactive organizations of specific modalities before turning to consider the integration of distinct modalities in interaction.
  • Stivers, T. (2005). Non-antibiotic treatment recommendations: Delivery formats and implications for parent resistance. Social Science & Medicine, 60(5), 949-964. doi:10.1016/j.socscimed.2004.06.040.

    Abstract

    This study draws on a database of 570 community-based acute pediatric encounters in the USA and uses conversation analysis as a methodology to identify two formats physicians use to recommend non-antibiotic treatment in acute pediatric care (using a subset of 309 cases): recommendations for particular treatment (e.g., “I’m gonna give her some cough medicine.”) and recommendations against particular treatment (e.g., “She doesn’t need any antibiotics.”). The findings are that the presentation of a specific affirmative recommendation for treatment is less likely to engender parent resistance to a non-antibiotic treatment recommendation than a recommendation against particular treatment even if the physician later offers a recommendation for particular treatment. It is suggested that physicians who provide a specific positive treatment recommendation followed by a negative recommendation are most likely to attain parent alignment and acceptance when recommending a non-antibiotic treatment for a viral upper respiratory illness.
  • Stivers, T., Mangione-Smith, R., Elliott, M. N., McDonald, L., & Heritage, J. (2003). Why do physicians think parents expect antibiotics? What parents report vs what physicians believe. Journal of Family Practice, 52(2), 140-147.
  • Striano, T., & Liszkowski, U. (2005). Sensitivity to the context of facial expression in the still face at 3-, 6-, and 9-months of age. Infant Behavior and Development, 28(1), 10-19. doi:10.1016/j.infbeh.2004.06.004.

    Abstract

    Thirty-eight 3-, 6-, and 9-month-old infants interacted in a face to face situation with a female stranger who disrupted the on-going interaction with 30 s Happy and Neutral still face episodes. Three- and 6-month-olds manifested a robust still face response for gazing and smiling. For smiling, 9-month-olds manifested a floor effect such that no still face effect could be shown. For gazing, 9-month-olds' still face response was modulated by the context of interaction such that it was less pronounced if a happy still face was presented first. The findings point to a developmental transition by the end of the first year, whereby infants' still face response becomes increasingly influenced by the context of social interaction.
  • Swaab, T., Brown, C. M., & Hagoort, P. (2003). Understanding words in sentence contexts: The time course of ambiguity resolution. Brain and Language, 86(2), 326-343. doi:10.1016/S0093-934X(02)00547-3.

    Abstract

    Spoken language comprehension requires rapid integration of information from multiple linguistic sources. In the present study we addressed the temporal aspects of this integration process by focusing on the time course of the selection of the appropriate meaning of lexical ambiguities (“bank”) in sentence contexts. Successful selection of the contextually appropriate meaning of the ambiguous word is dependent upon the rapid binding of the contextual information in the sentence to the appropriate meaning of the ambiguity. We used the N400 to identify the time course of this binding process. The N400 was measured to target words that followed three types of context sentences. In the concordant context, the sentence biased the meaning of the sentence-final ambiguous word so that it was related to the target. In the discordant context, the sentence context biased the meaning so that it was not related to the target. In the unrelated control condition, the sentences ended in an unambiguous noun that was unrelated to the target. Half of the concordant sentences biased the dominant meaning, and the other half biased the subordinate meaning of the sentence-final ambiguous words. The ISI between onset of the target word and offset of the sentence-final word of the context sentence was 100 ms in one version of the experiment, and 1250 ms in the second version. We found that (i) the lexically dominant meaning is always partly activated, independent of context, (ii) initially both dominant and subordinate meaning are (partly) activated, which suggests that contextual and lexical factors both contribute to sentence interpretation without context completely overriding lexical information, and (iii) strong lexical influences remain present for a relatively long period of time.
  • Swingley, D. (2003). Phonetic detail in the developing lexicon. Language and Speech, 46(3), 265-294.

    Abstract

    Although infants show remarkable sensitivity to linguistically relevant phonetic variation in speech, young children sometimes appear not to make use of this sensitivity. Here, children's knowledge of the sound-forms of familiar words was assessed using a visual fixation task. Dutch 19-month-olds were shown pairs of pictures and heard correct pronunciations and mispronunciations of familiar words naming one of the pictures. Mispronunciations were word-initial in Experiment 1 and word-medial in Experiment 2, and in both experiments involved substituting one segment with [d] (a common sound in Dutch) or [g] (a rare sound). In both experiments, word recognition performance was better for correct pronunciations than for mispronunciations involving either substituted consonant. These effects did not depend upon children's knowledge of lexical or nonlexical phonological neighbors of the tested words. The results indicate the encoding of phonetic detail in words at 19 months.
  • Swingley, D. (2005). Statistical clustering and the contents of the infant vocabulary. Cognitive Psychology, 50(1), 86-132. doi:10.1016/j.cogpsych.2004.06.001.

    Abstract

    Infants parse speech into word-sized units according to biases that develop in the first year. One bias, present before the age of 7 months, is to cluster syllables that tend to co-occur. The present computational research demonstrates that this statistical clustering bias could lead to the extraction of speech sequences that are actual words, rather than missegmentations. In English and Dutch, these word-forms exhibit the strong–weak (trochaic) pattern that guides lexical segmentation after 8 months, suggesting that the trochaic parsing bias is learned as a generalization from statistically extracted bisyllables, and not via attention to short utterances or to high-frequency bisyllables. Extracted word-forms come from various syntactic classes, and exhibit distributional characteristics enabling rudimentary sorting of words into syntactic categories. The results highlight the importance of infants’ first year in language learning: though they may know the meanings of very few words, infants are well on their way to building a vocabulary.
  • Swingley, D. (2005). 11-month-olds' knowledge of how familiar words sound. Developmental Science, 8(5), 432-443. doi:10.1111/j.1467-7687.2005.00432.

    Abstract

    During the first year of life, infants' perception of speech becomes tuned to the phonology of the native language, as revealed in laboratory discrimination and categorization tasks using syllable stimuli. However, the implications of these results for the development of the early vocabulary remain controversial, with some results suggesting that infants retain only vague, sketchy phonological representations of words. Five experiments using a preferential listening procedure tested Dutch 11-month-olds' responses to word, nonword and mispronounced-word stimuli. Infants listened longer to words than nonwords, but did not exhibit this response when words were mispronounced at onset or at offset. In addition, infants preferred correct pronunciations to onset mispronunciations. The results suggest that infants' encoding of familiar words includes substantial phonological detail.
  • Terrill, A., & Dunn, M. (2003). Orthographic design in the Solomon Islands: The social, historical, and linguistic situation of Touo (Baniata). Written Language and Literacy, 6(2), 177-192. doi:10.1075/wll.6.2.03ter.

    Abstract

    This paper discusses the development of an orthography for the Touo language (Solomon Islands). Various orthographies have been proposed for this language in the past, and the paper discusses why they are perceived by the community to have failed. Current opinion about orthography development within the Touo-speaking community is divided along religious, political, and geographical grounds; and the development of a successful orthography must take into account a variety of opinions. The paper examines the social, historical, and linguistic obstacles that have hitherto prevented the development of an accepted Touo orthography, and presents a new proposal which has thus far gained acceptance with community leaders. The fundamental issue is that creating an orthography for a language takes place in a social, political, and historical context; and for an orthography to be acceptable for the speakers of a language, all these factors must be taken into account.
  • Terrill, A. (2003). Linguistic stratigraphy in the central Solomon Islands: Lexical evidence of early Papuan/Austronesian interaction. Journal of the Polynesian Society, 112(4), 369-401.

    Abstract

    The extent to which linguistic borrowing can be used to shed light on the existence and nature of early contact between Papuan and Oceanic speakers is examined. The question is addressed by taking one Papuan language, Lavukaleve, spoken in the Russell Islands, central Solomon Islands and examining lexical borrowings between it and nearby Oceanic languages, and with reconstructed forms of Proto Oceanic. Evidence from ethnography, culture history and archaeology, when added to the linguistic evidence provided in this study, indicates long-standing cultural links between other (non-Russell) islands. The composite picture is one of a high degree of cultural contact with little linguistic mixing, i.e., little or no changes affecting the structure of the languages and actually very little borrowed vocabulary.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2005). The acquisition of auxiliary syntax: BE and HAVE. Cognitive Linguistics, 16(1), 247-277. doi:10.1515/cogl.2005.16.1.247.

    Abstract

    This study examined patterns of auxiliary provision and omission for the auxiliaries BE and HAVE in a longitudinal data set from 11 children between the ages of two and three years. Four possible explanations for auxiliary omission—a lack of lexical knowledge, performance limitations in production, the Optional Infinitive hypothesis, and patterns of auxiliary use in the input—were examined. The data suggest that although none of these accounts provides a full explanation for the pattern of auxiliary use and nonuse observed in children's early speech, integrating input-based and lexical learning-based accounts of early language acquisition within a constructivist approach appears to provide a possible framework in which to understand the patterns of auxiliary use found in the children's speech. The implications of these findings for models of children's early language acquisition are discussed.
  • Van Turennout, M., Bielamowicz, L., & Martin, A. (2003). Modulation of neural activity during object naming: Effects of time and practice. Cerebral Cortex, 13(4), 381-391.

    Abstract

    Repeated exposure to objects improves our ability to identify and name them, even after a long delay. Previous brain imaging studies have demonstrated that this experience-related facilitation of object naming is associated with neural changes in distinct brain regions. We used event-related functional magnetic resonance imaging (fMRI) to examine the modulation of neural activity in the object naming system as a function of experience and time. Pictures of common objects were presented repeatedly for naming at different time intervals (1 h, 6 h and 3 days) before scanning, or at 30 s intervals during scanning. The results revealed that as objects became more familiar with experience, activity in occipitotemporal and left inferior frontal regions decreased while activity in the left insula and basal ganglia increased. In posterior regions, reductions in activity as a result of multiple repetitions did not interact with time, whereas in left inferior frontal cortex larger decreases were observed when repetitions were spaced out over time. This differential modulation of activity in distinct brain regions provides support for the idea that long-lasting object priming is mediated by two neural mechanisms. The first mechanism may involve changes in object-specific representations in occipitotemporal cortices, the second may be a form of procedural learning involving a reorganization in brain circuitry that leads to more efficient name retrieval.
  • Van Berkum, J. J. A., Zwitserlood, P., Hagoort, P., & Brown, C. M. (2003). When and how do listeners relate a sentence to the wider discourse? Evidence from the N400 effect. Cognitive Brain Research, 17(3), 701-718. doi:10.1016/S0926-6410(03)00196-4.

    Abstract

    In two ERP experiments, we assessed the impact of discourse-level information on the processing of an unfolding spoken sentence. Subjects listened to sentences like Jane told her brother that he was exceptionally quick/slow, designed such that the alternative critical words were equally acceptable within the local sentence context. In Experiment 1, these sentences were embedded in a discourse that rendered one of the critical words anomalous (e.g. because Jane’s brother had in fact done something very quickly). Relative to the coherent alternative, these discourse-anomalous words elicited a standard N400 effect that started at 150–200 ms after acoustic word onset. Furthermore, when the same sentences were heard in isolation in Experiment 2, the N400 effect disappeared. The results demonstrate that our listeners related the unfolding spoken words to the wider discourse extremely rapidly, after having heard the first two or three phonemes only, and in many cases well before the end of the word. In addition, the identical nature of discourse- and sentence-dependent N400 effects suggests that from the perspective of the word-elicited comprehension process indexed by the N400, the interpretive context delineated by a single unfolding sentence and a larger discourse is functionally identical.
  • Van Berkum, J. J. A., Brown, C. M., Hagoort, P., & Zwitserlood, P. (2003). Event-related brain potentials reflect discourse-referential ambiguity in spoken language comprehension. Psychophysiology, 40(2), 235-248. doi:10.1111/1469-8986.00025.

    Abstract

    In two experiments, we explored the use of event-related brain potentials to selectively track the processes that establish reference during spoken language comprehension. Subjects listened to stories in which a particular noun phrase like "the girl" either uniquely referred to a single referent mentioned in the earlier discourse, or ambiguously referred to two equally suitable referents. Referentially ambiguous nouns ("the girl" with two girls introduced in the discourse context) elicited a frontally dominant and sustained negative shift in brain potentials, emerging within 300–400 ms after acoustic noun onset. The early onset of this effect reveals that reference to a discourse entity can be established very rapidly. Its morphology and distribution suggest that at least some of the processing consequences of referential ambiguity may involve an increased demand on memory resources. Furthermore, because this referentially induced ERP effect is very different from that of well-known ERP effects associated with the semantic (N400) and syntactic (e.g., P600/SPS) aspects of language comprehension, it suggests that ERPs can be used to selectively keep track of three major processes involved in the comprehension of an unfolding piece of discourse.
  • Van Donselaar, W., Koster, M., & Cutler, A. (2005). Exploring the role of lexical stress in lexical recognition. Quarterly Journal of Experimental Psychology, 58A(2), 251-273. doi:10.1080/02724980343000927.

    Abstract

    Three cross-modal priming experiments examined the role of suprasegmental information in the processing of spoken words. All primes consisted of truncated spoken Dutch words. Recognition of visually presented word targets was facilitated by prior auditory presentation of the first two syllables of the same words as primes, but only if they were appropriately stressed (e.g., OKTOBER preceded by okTO-); inappropriate stress, compatible with another word (e.g., OKTOBER preceded by OCto-, the beginning of octopus), produced inhibition. Monosyllabic fragments (e.g., OC-) also produced facilitation when appropriately stressed; if inappropriately stressed, they produced neither facilitation nor inhibition. The bisyllabic fragments that were compatible with only one word produced facilitation to semantically associated words, but inappropriate stress caused no inhibition of associates. The results are explained within a model of spoken-word recognition involving competition between simultaneously activated phonological representations followed by activation of separate conceptual representations for strongly supported lexical candidates; at the level of the phonological representations, activation is modulated by both segmental and suprasegmental information.
  • Van Gompel, R. P., & Majid, A. (2003). Antecedent frequency effects during the processing of pronouns. Cognition, 90(3), 255-264. doi:10.1016/S0010-0277(03)00161-6.

    Abstract

    An eye-movement reading experiment investigated whether the ease with which pronouns are processed is affected by the lexical frequency of their antecedent. Reading times following pronouns with infrequent antecedents were faster than following pronouns with frequent antecedents. We argue that this is consistent with a saliency account, according to which infrequent antecedents are more salient than frequent antecedents. The results are not predicted by accounts which claim that readers access all or part of the lexical properties of the antecedent during the processing of pronouns.
  • Van Berkum, J. J. A., Brown, C. M., Zwitserlood, P., Kooijman, V., & Hagoort, P. (2005). Anticipating upcoming words in discourse: Evidence from ERPs and reading times. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(3), 443-467. doi:10.1037/0278-7393.31.3.443.

    Abstract

    The authors examined whether people can use their knowledge of the wider discourse rapidly enough to anticipate specific upcoming words as a sentence is unfolding. In an event-related brain potential (ERP) experiment, subjects heard Dutch stories that supported the prediction of a specific noun. To probe whether this noun was anticipated at a preceding indefinite article, stories were continued with a gender-marked adjective whose suffix mismatched the upcoming noun's syntactic gender. Prediction-inconsistent adjectives elicited a differential ERP effect, which disappeared in a no-discourse control experiment. Furthermore, in self-paced reading, prediction-inconsistent adjectives slowed readers down before the noun. These findings suggest that people can indeed predict upcoming words in fluent discourse and, moreover, that these predicted words can immediately begin to participate in incremental parsing operations.
  • Van Halteren, H., Baayen, R. H., Tweedie, F., Haverkort, M., & Neijt, A. (2005). New machine learning methods demonstrate the existence of a human stylome. Journal of Quantitative Linguistics, 12(1), 65-77. doi:10.1080/09296170500055350.

    Abstract

    Earlier research has shown that established authors can be distinguished by measuring specific properties of their writings, their stylome as it were. Here, we examine writings of less experienced authors. We succeed in distinguishing between these authors with a very high probability, which implies that a stylome exists even in the general population. However, the number of traits needed for so successful a distinction is an order of magnitude larger than assumed so far. Furthermore, traits referring to syntactic patterns prove less distinctive than traits referring to vocabulary, but much more distinctive than expected on the basis of current generativist theories of language learning.
  • Verhagen, J. (2005). The role of the nonmodal auxiliary 'hebben' in Dutch as a second language. Zeitschrift für Literaturwissenschaft und Linguistik, 140, 109-127.

    Abstract

    The acquisition of non-modal auxiliaries has been assumed to constitute an important step in the acquisition of finiteness in Germanic languages (cf. Jordens/Dimroth 2005, Jordens 2004, Becker 2005). This paper focuses on the role of the auxiliary hebben (‘to have’) in the acquisition of Dutch as a second language. More specifically, it investigates whether learners' production of hebben is related to their acquisition of two phenomena commonly associated with finiteness, i.e., topicalization and negation. Data are presented from 16 Turkish and 36 Moroccan learners of Dutch who participated in an experiment involving production and imitation tasks. The production data suggest that learners use topicalization and post-verbal negation only after they have learned to produce the auxiliary hebben. The results from the imitation task indicate that learners are more sensitive to topicalization and post-verbal negation in sentences with hebben than in sentences with lexical verbs. Interestingly, this also holds for learners who did not show productive command of hebben in the production tasks. Thus, in general, the results of the experiment provide support for the idea that non-modal auxiliaries are crucial in the acquisition of (certain properties of) finiteness.
  • Verhagen, J. (2005). The role of the nonmodal auxiliary 'hebben' in Dutch as a second language. Toegepaste Taalwetenschap in Artikelen, 73, 41-52.
  • Verhoeven, L., Schreuder, R., & Baayen, R. H. (2003). Units of analysis in reading Dutch bisyllabic pseudowords. Scientific Studies of Reading, 7(3), 255-271. doi:10.1207/S1532799XSSR0703_4.

    Abstract

    Two experiments were carried out to explore the units of analysis used by children to read Dutch bisyllabic pseudowords. Although Dutch orthography is highly regular, several deviations from a one-to-one correspondence occur. In polysyllabic words, the grapheme e may represent three different vowels: /ɛ/, /e/, or /ə/. In Experiment 1, Grade 6 elementary school children were presented lists of bisyllabic pseudowords containing the grapheme e in the initial syllable representing a content morpheme, a prefix, or a random string. On the basis of general word frequency data, we expected the interpretation of the initial syllable as a random string to elicit the pronunciation of a stressed /e/, the interpretation as a content morpheme to elicit the pronunciation of a stressed /ɛ/, and the interpretation as a prefix to elicit the pronunciation of an unstressed /ə/. We found both the pronunciation and the stress assignment for pseudowords to depend on word type, which shows morpheme boundaries and prefixes to be identified. However, the identification of prefixes could also be explained by the correspondence of the prefix boundaries in the pseudowords to syllable boundaries. To exclude this alternative explanation, a follow-up experiment with the same group of children was conducted using bisyllabic pseudowords containing prefixes that did not coincide with syllable boundaries versus similar pseudowords with no prefix. The results of the first experiment were replicated. That is, the children identified prefixes and shifted their assignment of word stress accordingly. The results are discussed with reference to a parallel dual-route model of word decoding.
  • Waller, D., & Haun, D. B. M. (2003). Scaling techniques for modeling directional knowledge. Behavior Research Methods, Instruments, & Computers, 35(2), 285-293.

    Abstract

    A common way for researchers to model or graphically portray spatial knowledge of a large environment is by applying multidimensional scaling (MDS) to a set of pairwise distance estimations. We introduce two MDS-like techniques that incorporate people’s knowledge of directions instead of (or in addition to) their knowledge of distances. Maps of a familiar environment derived from these procedures were more accurate and were rated by participants as being more accurate than those derived from nonmetric MDS. By incorporating people’s relatively accurate knowledge of directions, these methods offer spatial cognition researchers and behavioral geographers a sharper analytical tool than MDS for studying cognitive maps.
  • Warner, N., Smits, R., McQueen, J. M., & Cutler, A. (2005). Phonological and statistical effects on timing of speech perception: Insights from a database of Dutch diphone perception. Speech Communication, 46(1), 53-72. doi:10.1016/j.specom.2005.01.003.

    Abstract

    We report detailed analyses of a very large database on timing of speech perception collected by Smits et al. (Smits, R., Warner, N., McQueen, J.M., Cutler, A., 2003. Unfolding of phonetic information over time: A database of Dutch diphone perception. J. Acoust. Soc. Am. 113, 563–574). Eighteen listeners heard all possible diphones of Dutch, gated in portions of varying size and presented without background noise. The present report analyzes listeners’ responses across gates in terms of phonological features (voicing, place, and manner for consonants; height, backness, and length for vowels). The resulting patterns for feature perception differ from patterns reported when speech is presented in noise. The data are also analyzed for effects of stress and of phonological context (neighboring vowel vs. consonant); effects of these factors are observed to be surprisingly limited. Finally, statistical effects, such as overall phoneme frequency and transitional probabilities, along with response biases, are examined; these too exercise only limited effects on response patterns. The results suggest highly accurate speech perception on the basis of acoustic information alone.
  • Warner, N., Kim, J., Davis, C., & Cutler, A. (2005). Use of complex phonological patterns in speech processing: Evidence from Korean. Journal of Linguistics, 41(2), 353-387. doi:10.1017/S0022226705003294.

    Abstract

    Korean has a very complex phonology, with many interacting alternations. In a coronal-/i/ sequence, depending on the type of phonological boundary present, alternations such as palatalization, nasal insertion, nasal assimilation, coda neutralization, and intervocalic voicing can apply. This paper investigates how the phonological patterns of Korean affect processing of morphemes and words. Past research on languages such as English, German, Dutch, and Finnish has shown that listeners exploit syllable structure constraints in processing speech and segmenting it into words. The current study shows that in parsing speech, listeners also use much more complex patterns that relate the surface phonological string to various boundaries.
