Publications

  • Kempen, G. (1993). Mensentaal als computertaal. Onze Taal, 62, 275-277.
  • Kempen, G. (1979). La mise en paroles, aspects psychologiques de l'expression orale. Études de Linguistique Appliquée, 33, 19-28.

    Abstract

    Remarks on the factors involved in the process of formulating utterances.
  • Kempen, G. (1992). Grammar based text processing. Document Management: Nieuwsbrief voor Documentaire Informatiekunde, 1(2), 8-10.
  • Kempen, G. (1979). Psychologie van de zinsbouw: Een Wundtiaanse inleiding. Nederlands Tijdschrift voor de Psychologie, 34, 533-551.

    Abstract

    The psychology of language as developed by Wilhelm Wundt in his fundamental work Die Sprache (1900) has a strongly mentalistic character. The dominating positions held by behaviorism in psychology and structuralism in linguistics have overruled Wundt’s language theory to the effect that it has remained relatively unknown. This situation has changed recently under the influence of transformational linguistics and cognitive psychology. The paper discusses how Wundt applied the basic psychological concepts of apperception and association to language behavior, in particular to the construction and production of sentences during unprepared speech. The final part of the paper is devoted to the work, published in 1917, of the Dutch linguistic scholar Jacques van Ginneken, who elaborated Wundt’s ideas towards an explanation of some syntactic phenomena during the language acquisition of children.
  • Kempen, G. (1988). Preface. Acta Psychologica, 69(3), 205-206. doi:10.1016/0001-6918(88)90032-7.
  • Kempen, G. (1979). Woordwaarde. De Psycholoog, 14, 577.
  • Kempen, G. (1993). Zinsontleding kan een exact vak worden. Levende Talen, 483, 459-462.
  • Kennaway, J., Glauert, J., & Zwitserlood, I. (2007). Providing Signed Content on the Internet by Synthesized Animation. ACM Transactions on Computer-Human Interaction (TOCHI), 14(3), 15. doi:10.1145/1279700.1279705.

    Abstract

    Written information is often of limited accessibility to deaf people who use sign language. The eSign project was undertaken as a response to the need for technologies enabling efficient production and distribution over the Internet of sign language content. By using an avatar-independent scripting notation for signing gestures and a client-side web browser plug-in to translate this notation into motion data for an avatar, we achieve highly efficient delivery of signing, while avoiding the inflexibility of video or motion capture. Tests with members of the deaf community have indicated that the method can provide an acceptable quality of signing.
  • Kerkhofs, R., Vonk, W., Schriefers, H., & Chwilla, D. J. (2007). Discourse, syntax, and prosody: The brain reveals an immediate interaction. Journal of Cognitive Neuroscience, 19(9), 1421-1434. doi:10.1162/jocn.2007.19.9.1421.

    Abstract

    Speech is structured into parts by syntactic and prosodic breaks. In locally syntactic ambiguous sentences, the detection of a syntactic break necessarily follows detection of a corresponding prosodic break, making an investigation of the immediate interplay of syntactic and prosodic information impossible when studying sentences in isolation. This problem can be solved, however, by embedding sentences in a discourse context that induces the expectation of either the presence or the absence of a syntactic break right at a prosodic break. Event-related potentials (ERPs) were compared to acoustically identical sentences in these different contexts. We found in two experiments that the closure positive shift, an ERP component known to be elicited by prosodic breaks, was reduced in size when a prosodic break was aligned with a syntactic break. These results establish that the brain matches prosodic information against syntactic information immediately.
  • Kidd, E., & Bavin, E. L. (2007). Lexical and referential influences on on-line spoken language comprehension: A comparison of adults and primary-school-age children. First Language, 27(1), 29-52. doi:10.1177/0142723707067437.

    Abstract

    This paper reports on two studies investigating children's and adults' processing of sentences containing ambiguity of prepositional phrase (PP) attachment. Study 1 used corpus data to investigate whether cues argued to be used by adults to resolve PP-attachment ambiguities are available in child-directed speech. Study 2 was an on-line reaction time study investigating the role of lexical and referential biases in syntactic ambiguity resolution by children and adults. Forty children (mean age 8;4) and 37 adults listened to V-NP-PP sentences containing temporary ambiguity of PP-attachment. The sentences were manipulated for (i) verb semantics, (ii) the definiteness of the object NP, and (iii) PP-attachment site. The children and adults did not differ qualitatively from each other in their resolution of the ambiguity. A verb semantics by attachment interaction suggested that different attachment analyses were pursued depending on the semantics of the verb. There was no influence of the definiteness of the object NP in either children's or adults' parsing preferences. The findings from the on-line task matched up well with the corpus data, thus identifying a role for the input in the development of parsing strategies.
  • Kidd, E., Brandt, S., Lieven, E., & Tomasello, M. (2007). Object relatives made easy: A cross-linguistic comparison of the constraints influencing young children's processing of relative clauses. Language and Cognitive Processes, 22(6), 860-897. doi:10.1080/01690960601155284.

    Abstract

    We present the results from four studies, two corpora and two experimental, which suggest that English- and German-speaking children (3;1–4;9 years) use multiple constraints to process and produce object relative clauses. Our two corpora studies show that children produce object relatives that reflect the distributional and discourse regularities of the input. Specifically, the results show that when children produce object relatives they most often do so with (a) an inanimate head noun, and (b) a pronominal relative clause subject. Our experimental findings show that children use these constraints to process and produce this construction type. Moreover, when children were required to repeat the object relatives they most often use in naturalistic speech, the subject-object asymmetry in processing of relative clauses disappeared. We also report cross-linguistic differences in children's rate of acquisition which reflect properties of the input language. Overall, our results suggest that children are sensitive to the same constraints on relative clause processing as adults.
  • Kita, S., Ozyurek, A., Allen, S., Brown, A., Furman, R., & Ishizuka, T. (2007). Relations between syntactic encoding and co-speech gestures: Implications for a model of speech and gesture production. Language and Cognitive Processes, 22(8), 1212-1236. doi:10.1080/01690960701461426.

    Abstract

    Gestures that accompany speech are known to be tightly coupled with speech production. However, little is known about the cognitive processes that underlie this link. Previous cross-linguistic research has provided preliminary evidence for online interaction between the two systems based on the systematic co-variation found between how different languages syntactically package Manner and Path information of a motion event and how gestures represent Manner and Path. Here we elaborate on this finding by testing whether speakers within the same language gesturally express Manner and Path differently according to their online choice of syntactic packaging of Manner and Path, or whether gestural expression is pre-determined by a habitual conceptual schema congruent with the linguistic typology. Typologically congruent and incongruent syntactic structures for expressing Manner and Path (i.e., in a single clause or multiple clauses) were elicited from English speakers. We found that gestural expressions were determined by the online choice of syntactic packaging rather than by a habitual conceptual schema. It is therefore concluded that speech and gesture production processes interface online at the conceptual planning phase. Implications of the findings for models of speech and gesture production are discussed.
  • Klein, W., & Von Stutterheim, C. (Eds.). (2007). Sprachliche Perspektivierung [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, 145.
  • Klein, W. (2007). Zwei Leitgedanken zu "Sprache und Erkenntnis". Zeitschrift für Literaturwissenschaft und Linguistik, 145, 9-43.

    Abstract

    In a way, the entire history of linguistic thought from antiquity to the present day is a series of variations on two key themes: 1. In a certain sense, language and cognition are the same, and 2. In a certain sense, all languages are the same. What varies is the way in which “in a certain sense” is spelled out. Interpretations oscillate between radical positions, from the idea that thinking without speaking is impossible to the idea that it is just language which vexes our cognition and hence makes it rather impossible, and similarly from the idea that all differences between natural languages are nothing but irrelevant variations in the “vox”, the “external form”, to the idea that our thought is massively shaped by the particular structural features of the language we happen to speak. It is remarkable how little agreement has been reached on these issues after more than 2500 years of discussion. This, it is argued, has two main reasons: (a) The entire argument is largely confined to a few lexical and morphological properties of human languages, and (b) the discussion is rarely based on empirical research on “language at work” - how do we manage to solve those many little tasks for which human languages are designed in the first place?
  • Klein, W., & Rieck, B.-O. (1982). Der Erwerb der Personalpronomina im ungesteuerten Spracherwerb. Zeitschrift für Literaturwissenschaft und Linguistik, 45, 35-71.
  • Klein, W. (1982). Einige Bemerkungen zur Frageintonation. Deutsche Sprache, 4, 289-310.

    Abstract

    In the first, critical part of this study, a small sample of simple German sentences with their empirically determined pitch contours is used to demonstrate the incorrectness of numerous currently held views of German sentence intonation. In the second, more constructive part, several interrogative sentence types are analysed and an attempt is made to show that intonation, besides other functions, indicates the permanently changing 'thematic score' in on-going discourse as well as certain validity claims.
  • Klein, W. (1982). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 12, 7-8.
  • Klein, W. (1979). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 9(33), 7-8.
  • Klein, W. (1992). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 22(86), 7-8.
  • Klein, W. (1988). Einleitung. Zeitschrift für Literaturwissenschaft und Linguistik, 18(69), 7-8.
  • Klein, W., & Von Stutterheim, C. (2007). Einführung. Zeitschrift für Literaturwissenschaft und Linguistik, (145), 5-8.
  • Klein, W. (2007). Mechanismen des Erst- und Zweitspracherwerbs = Mechanisms of First and Second Language Acquisition. Sprache Stimme Gehör, 31, 138-143. doi:10.1055/s-2007-985818.

    Abstract

    Language acquisition is the transition from the language faculty, with which we are born as part of our genetic endowment, to the mastery of one or more linguistic systems. There is a plethora of findings about this process, but these findings still do not form a coherent picture of the principles which underlie it. There are at least six reasons for this situation. First, there is an enormous variability in the conditions under which this process occurs. Second, the learning capacity does not remain constant over time. Third, the process extends over many years and is therefore hard to study. Fourth, the investigation of the meaning side in particular is problematic. Fifth, many skills and types of knowledge must be learned in a more or less synchronised way. And sixth, our understanding of the functioning of linguistic systems is still very limited. Nevertheless, there are a few overarching results, three of which are discussed here: (1) There are salient differences between child and adult learners: While children normally end up with perfect mastery of the language to be learned, this is hardly ever the case for adults. On the other hand, it could be shown for each linguistic property examined so far that adults are in principle able to learn it up to perfection. So, adults can learn everything perfectly well; they just don't. (2) Within childhood, age of onset plays no essential role in ultimate attainment. (3) Children care much more about formal correctness than adults - they are just better at mimicking existing systems. It is argued that age does not affect the „construction capacity” - the capacity to build up linguistic systems - but the „copying faculty”, i.e., the faculty to imitate an existing system.
  • Klein, W. (1982). Pronoms personnels et formes d'acquisition. Encrages, 8/9, 42-46.
  • Klein, W. (1992). Tempus, Aspekt und Zeitadverbien. Kognitionswissenschaft, 2, 107-118.
  • Klein, W. (Ed.). (1992). Textlinguistik [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (86).
  • Klein, W., & Von Stutterheim, C. (1992). Textstruktur und referentielle Bewegung. Zeitschrift für Literaturwissenschaft und Linguistik, 86, 67-92.
  • Klein, W. (Ed.). (1988). Sprache Kranker [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (69).
  • Klein, W. (Ed.). (1979). Sprache und Kontext [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (33).
  • Klein, W. (1988). Sprache und Krankheit: Ein paar Anmerkungen. Zeitschrift für Literaturwissenschaft und Linguistik, 69, 9-20.
  • Klein, W. (1992). The present perfect puzzle. Language, 68, 525-552.

    Abstract

    In John has left London, it is clear that the event in question, John's leaving London, has occurred in the past, for example yesterday at ten. Why is it impossible, then, to make this event time more explicit by such an adverbial, as in Yesterday at ten, John has left London? Any solution of this puzzle crucially hinges on the meaning assigned to the perfect, and the present perfect in particular. Two such solutions, a scope solution and the 'current relevance' solution, are discussed and shown to be inadequate. A new, strictly compositional analysis of the English perfect is suggested, and it is argued that the incompatibility of the present perfect and most past tense adverbials has neither syntactic nor semantic reasons but follows from a simple pragmatic constraint, called here the 'position-definiteness constraint'. It is the very same constraint which also makes an utterance such as At ten, John had left at nine pragmatically odd, even if John indeed had left at nine, and hence the utterance is true.
  • Klein, W. (1979). Wegauskünfte. Zeitschrift für Literaturwissenschaft und Linguistik, 33, 9-57.
  • Klein, W. (1993). Wie ist der Stand der germanistischen Sprachwissenschaft, und was können wir tun, um ihn zu verbessern? Zeitschrift für Literaturwissenschaft und Linguistik, 90/91, 40-52.
  • Klein, W. (Ed.). (1982). Zweitspracherwerb [Special Issue]. Zeitschrift für Literaturwissenschaft und Linguistik, (45).
  • Korvorst, M., Roelofs, A., & Levelt, W. J. M. (2007). Telling time from analog and digital clocks: A multiple-route account. Experimental Psychology, 54(3), 187-191. doi:10.1027/1618-3169.54.3.187.

    Abstract

    Does the naming of clocks always require conceptual preparation? To examine this question, speakers were presented with analog and digital clocks that had to be named in Dutch using either a relative (e.g., “quarter to four”) or an absolute (e.g., “three forty-five”) clock time expression format. Naming latencies showed evidence of conceptual preparation when speakers produced relative time expressions to analog and digital clocks, but not when they used absolute time expressions. These findings indicate that conceptual mediation is not always mandatory for telling time, but instead depends on clock time expression format, supporting a multiple-route account of Dutch clock time naming.
  • Kristiansen, M., Deriziotis, P., Dimcheff, D. E., Jackson, G. S., Ovaa, H., Naumann, H., Clarke, A. R., van Leeuwen, F. W., Menéndez-Benito, V., Dantuma, N. P., Portis, J. L., Collinge, J., & Tabrizi, S. J. (2007). Disease-associated prion protein oligomers inhibit the 26S proteasome. Molecular Cell, 26, 175-188. doi:10.1016/j.molcel.2007.04.001.

    Abstract

    Kristiansen, M. and Deriziotis, P. contributed equally to this work. - The mechanism of cell death in prion disease is unknown but is associated with the production of a misfolded conformer of the prion protein. We report that disease-associated prion protein specifically inhibits the proteolytic β subunits of the 26S proteasome. Using reporter substrates, fluorogenic peptides, and an activity probe for the β subunits, this inhibitory effect was demonstrated in pure 26S proteasome and three different cell lines. By challenge with recombinant prion and other amyloidogenic proteins, we demonstrate that only the prion protein in a nonnative β sheet conformation inhibits the 26S proteasome at stoichiometric concentrations. Preincubation with an antibody specific for aggregation intermediates abrogates this inhibition, consistent with an oligomeric species mediating this effect. We also present evidence for a direct relationship between prion neuropathology and impairment of the ubiquitin-proteasome system (UPS) in prion-infected UPS-reporter mice. Together, these data suggest a mechanism for intracellular neurotoxicity mediated by oligomers of misfolded prion protein.

  • Kuiper, K., Van Egmond, M.-E., Kempen, G., & Sprenger, S. A. (2007). Slipping on superlemmas: Multiword lexical items in speech production. The Mental Lexicon, 2(3), 313-357.

    Abstract

    Only relatively recently have theories of speech production concerned themselves with the part idioms and other multi-word lexical items (MLIs) play in the processes of speech production. Two theories of speech production which attempt to account for the accessing of idioms in speech production are those of Cutting and Bock (1997) and superlemma theory (Sprenger, 2003; Sprenger, Levelt, & Kempen, 2006). Much of the data supporting theories of speech production comes either from time course experiments or from slips of the tongue (Bock & Levelt, 1994). The latter are of two kinds: experimentally induced (Baars, 1992) or naturally observed (Fromkin, 1980). Cutting and Bock use experimentally induced speech errors while Sprenger et al. use time course experiments. The missing data type that has a bearing on speech production involving MLIs is that of naturally occurring slips. In this study, data taken from naturally observed slips involving English and Dutch MLIs are brought to bear on these theories. The data are taken initially from a corpus of just over 1000 naturally observed English slips involving MLIs (the Tuggy corpus). Our argument proceeds as follows. First we show that slips occur independently of whether or not there are MLIs involved. In other words, speech production proceeds in certain of its aspects as though there were no MLI present. We illustrate these slips from the Tuggy data. Second we investigate the predictions of superlemma theory. Superlemma theory (Sprenger et al., 2006) accounts for the selection of MLIs and how their properties enter processes of speech production. It predicts certain activation patterns dependent on an MLI being selected. Each such pattern might give rise to slips of the tongue. This set of predictions is tested against the Tuggy data. Each of the predicted activation patterns yields a significant number of slips. These findings are therefore compatible with a view of MLIs as single units in so far as their activation by lexical concepts goes. However, the theory also predicts that some slips are likely not to occur. We confirm that such slips are not present in the data. These findings are further corroborated by reference to a second, smaller dataset of slips involving Dutch MLIs (the Kempen corpus). We then use slips involving irreversible binomials to distinguish between the predictions of superlemma theory, which are supported by slips involving irreversible binomials, and the Cutting and Bock model's predictions for slips involving these MLIs, which are not.
  • Kuperman, V., Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2007). Morphological predictability and acoustic duration of interfixes in Dutch compounds. Journal of the Acoustical Society of America, 121(4), 2261-2271. doi:10.1121/1.2537393.

    Abstract

    This study explores the effects of informational redundancy, as carried by a word's morphological paradigmatic structure, on acoustic duration in read aloud speech. The hypothesis that the more predictable a linguistic unit is, the less salient its realization, was tested on the basis of the acoustic duration of interfixes in Dutch compounds in two datasets: One for the interfix -s- (1155 tokens) and one for the interfix -e(n)- (742 tokens). Both datasets show that the more probable the interfix is, given the compound and its constituents, the longer it is realized. These findings run counter to the predictions of information-theoretical approaches and can be resolved by the Paradigmatic Signal Enhancement Hypothesis. This hypothesis argues that whenever selection of an element from alternatives is probabilistic, the element's duration is predicted by the amount of paradigmatic support for the element: The most likely alternative in the paradigm of selection is realized longer.
  • Kuzla, C., Cho, T., & Ernestus, M. (2007). Prosodic strengthening of German fricatives in duration and assimilatory devoicing. Journal of Phonetics, 35(3), 301-320. doi:10.1016/j.wocn.2006.11.001.

    Abstract

    This study addressed prosodic effects on the duration of and amount of glottal vibration in German word-initial fricatives /f, v, z/ in assimilatory and non-assimilatory devoicing contexts. Fricatives following /ə/ (non-assimilation context) were longer and were produced with less glottal vibration after higher prosodic boundaries, reflecting domain-initial prosodic strengthening. After /t/ (assimilation context), lenis fricatives (/v, z/) were produced with less glottal vibration than after /ə/, due to assimilatory devoicing. This devoicing was especially strong across lower prosodic boundaries, showing the influence of prosodic structure on sandhi processes. Reduction in glottal vibration made lenis fricatives more fortis-like (/f, s/). Importantly, fricative duration, another major cue to the fortis-lenis distinction, was affected by initial lengthening, but not by assimilation. Hence, at smaller boundaries, fricatives were more devoiced (more fortis-like), but also shorter (more lenis-like). As a consequence, the fortis and lenis fricatives remained acoustically distinct in all prosodic and segmental contexts. Overall, /z/ was devoiced to a greater extent than /v/. Since /z/ does not have a fortis counterpart in word-initial position, these findings suggest that phonotactic restrictions constrain phonetic processes. The present study illuminates a complex interaction of prosody, sandhi processes, and phonotactics, yielding systematic phonetic cues to prosodic structure and phonological distinctions.
  • Lehtonen, M., Cunillera, T., Rodríguez-Fornells, A., Hultén, A., Tuomainen, J., & Laine, M. (2007). Recognition of morphologically complex words in Finnish: Evidence from event-related potentials. Brain Research, 1148, 123-137. doi:10.1016/j.brainres.2007.02.026.

    Abstract

    The temporal dynamics of processing morphologically complex words was investigated by recording event-related brain potentials (ERPs) when native Finnish-speakers performed a visual lexical decision task. Behaviorally, there is evidence that recognition of inflected nouns elicits a processing cost (i.e., longer reaction times and higher error rates) in comparison to matched monomorphemic words. We aimed to reveal whether the processing cost stems from decomposition at the early visual word form level or from recomposition at the later semantic–syntactic level. The ERPs showed no early effects for morphology, but revealed an interaction with word frequency at a late N400-type component, as well as a late positive component that was larger for inflected words. These results suggest that the processing cost stems mainly from the semantic–syntactic level. We also studied the features of the morphological decomposition route by investigating the recognition of pseudowords carrying real morphemes. The results showed no differences between inflected vs. uninflected pseudowords with a false stem, but differences in relation to those with a real stem, suggesting that a recognizable stem is needed to initiate the decomposition route.
  • De León, L., & Levinson, S. C. (Eds.). (1992). Space in Mesoamerican languages [Special Issue]. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 45(6).
  • Levelt, W. J. M. (1992). Accessing words in speech production: Stages, processes and representations. Cognition, 42, 1-22. doi:10.1016/0010-0277(92)90038-J.

    Abstract

    This paper introduces a special issue of Cognition on lexical access in speech production. Over the last quarter century, the psycholinguistic study of speaking, and in particular of accessing words in speech, received a major new impetus from the analysis of speech errors, dysfluencies and hesitations, from aphasiology, and from new paradigms in reaction time research. The emerging theoretical picture partitions the accessing process into two subprocesses, the selection of an appropriate lexical item (a “lemma”) from the mental lexicon, and the phonological encoding of that item, that is, the computation of a phonetic program for the item in the context of utterance. These two theoretical domains are successively introduced by outlining some core issues that have been or still have to be addressed. The final section discusses the controversial question whether phonological encoding can affect lexical selection. This partitioning is also followed in this special issue as a whole. There are, first, four papers on lexical selection, then three papers on phonological encoding, and finally one on the interaction between selection and phonological encoding.
  • Levelt, W. J. M. (1992). Fairness in reviewing: A reply to O'Connell. Journal of Psycholinguistic Research, 21, 401-403.
  • Levelt, W. J. M. (1982). Het lineariseringsprobleem van de spreker. Tijdschrift voor Taal- en Tekstwetenschap (TTT), 2(1), 1-15.
  • Levelt, W. J. M. (1988). Onder sociale wetenschappen. Mededelingen van de Afdeling Letterkunde, 51(2), 41-55.
  • Levelt, W. J. M. (1992). Sprachliche Musterbildung und Mustererkennung. Nova Acta Leopoldina NF, 67(281), 357-370.
  • Levelt, W. J. M., & Kelter, S. (1982). Surface form and memory in question answering. Cognitive Psychology, 14, 78-106. doi:10.1016/0010-0285(82)90005-6.

    Abstract

    Speakers tend to repeat materials from previous talk. This tendency is experimentally established and manipulated in various question-answering situations. It is shown that a question's surface form can affect the format of the answer given, even if this form has little semantic or conversational consequence, as in the pair Q: "(At) what time do you close?" A: "(At) five o'clock." Answerers tend to match the utterance to the prepositional (nonprepositional) form of the question. This "correspondence effect" may diminish or disappear when, following the question, additional verbal material is presented to the answerer. The experiments show that neither the articulatory buffer nor long-term memory is normally involved in this retention of recent speech. Retaining recent speech in working memory may fulfill a variety of functions for speaker and listener, among them the correct production and interpretation of surface anaphora. Reusing recent materials may, moreover, be more economical than regenerating speech anew from a semantic base, and thus contribute to fluency. But the realization of this strategy requires a production system in which linguistic formulation can take place relatively independent of, and parallel to, conceptual planning.
  • Levelt, W. J. M. (1982). Science policy: Three recent idols, and a goddess. IPO Annual Progress Report, 17, 32-35.
  • Levelt, W. J. M. (1992). The perceptual loop theory not disconfirmed: A reply to MacKay. Consciousness and Cognition, 1, 226-230. doi:10.1016/1053-8100(92)90062-F.

    Abstract

    In his paper, MacKay reviews his Node Structure theory of error detection, but precedes it with a critical discussion of the Perceptual Loop theory of self-monitoring proposed in Levelt (1983, 1989). The present commentary is concerned with this latter critique and shows that there are more than casual problems with MacKay’s argumentation.
  • Levelt, W. J. M. (1993). Timing in speech production with special reference to word form encoding. Annals of the New York Academy of Sciences, 682, 283-295. doi:10.1111/j.1749-6632.1993.tb22976.x.
  • Levelt, W. J. M. (1982). Zelfcorrecties in het spreekproces. KNAW: Mededelingen van de afdeling letterkunde, nieuwe reeks, 45(8), 215-228.
  • Levelt, W. J. M. (1979). On learnability: A reply to Lasnik and Chomsky. Unpublished manuscript.
  • Levinson, S. C., & Brown, P. (1993). Background to "Immanuel Kant among the Tenejapans". Anthropology Newsletter, 34(3), 22-23. doi:10.1111/an.1993.34.3.22.
  • Levinson, S. C. (1979). Activity types and language. Linguistics, 17, 365-399.
  • Levinson, S. C. (2007). Cut and break verbs in Yélî Dnye, the Papuan language of Rossel Island. Cognitive Linguistics, 18(2), 207-218. doi:10.1515/COG.2007.009.

    Abstract

    The paper explores verbs of cutting and breaking (C&B, hereafter) in Yélî Dnye, the Papuan language of Rossel Island. The Yélî Dnye verbs covering the C&B domain do not divide it in the expected way, with verbs focusing on special instruments and manners of action on the one hand, and verbs focusing on the resultant state on the other. Instead, just three transitive verbs and their intransitive counterparts cover most of the domain, and they are all based on 'exotic' distinctions in mode of severance: coherent severance with the grain vs. against the grain, and incoherent severance (regardless of grain).
  • Levinson, S. C. (1992). Primer for the field investigation of spatial description and conception. Pragmatics, 2(1), 5-47.
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2007). Reference and attitude in infant pointing. Journal of Child Language, 34(1), 1-20. doi:10.1017/S0305000906007689.

    Abstract

    We investigated two main components of infant declarative pointing, reference and attitude, in two experiments with a total of 106 preverbal infants at 1;0. When an experimenter (E) responded to the declarative pointing of these infants by attending to an incorrect referent (with positive attitude), infants repeated pointing within trials to redirect E’s attention, showing an understanding of E’s reference and active message repair. In contrast, when E identified infants’ referent correctly but displayed a disinterested attitude, infants did not repeat pointing within trials and pointed overall in fewer trials, showing an understanding of E’s unenthusiastic attitude about the referent. When E attended to infants’ intended referent AND shared interest in it, infants were most satisfied, showing no message repair within trials and pointing overall in more trials. These results suggest that by twelve months of age infant declarative pointing is a full communicative act aimed at sharing with others both attention to a referent and a specific attitude about that referent.
  • Liszkowski, U., Carpenter, M., & Tomasello, M. (2007). Pointing out new news, old news, and absent referents at 12 months of age. Developmental Science, 10(2), F1-F7. doi:10.1111/j.1467-7687.2006.00552.x.

    Abstract

    There is currently controversy over the nature of 1-year-olds' social-cognitive understanding and motives. In this study we investigated whether 12-month-old infants point for others with an understanding of their knowledge states and with a prosocial motive for sharing experiences with them. Declarative pointing was elicited in four conditions created by crossing two factors: an adult partner (1) was already attending to the target event or not, and (2) emoted positively or neutrally. Pointing was also coded after the event had ceased. The findings suggest that 12-month-olds point to inform others of events they do not know about, that they point to share an attitude about mutually attended events others already know about, and that they can point (already prelinguistically) to absent referents. These findings provide strong support for a mentalistic and prosocial interpretation of infants' prelinguistic communication.
  • Majid, A., Bowerman, M., Van Staden, M., & Boster, J. S. (2007). The semantic categories of cutting and breaking events: A crosslinguistic perspective. Cognitive Linguistics, 18(2), 133-152. doi:10.1515/COG.2007.005.

    Abstract

    This special issue of Cognitive Linguistics explores the linguistic encoding of events of cutting and breaking. In this article we first introduce the project on which it is based by motivating the selection of this conceptual domain, presenting the methods of data collection used by all the investigators, and characterizing the language sample. We then present a new approach to examining crosslinguistic similarities and differences in semantic categorization. Applying statistical modeling to the descriptions of cutting and breaking events elicited from speakers of all the languages, we show that although there is crosslinguistic variation in the number of distinctions made and in the placement of category boundaries, these differences take place within a strongly constrained semantic space: across languages, there is a surprising degree of consensus on the partitioning of events in this domain. In closing, we compare our statistical approach with more conventional semantic analyses, and show how an extensional semantic typological approach like the one illustrated here can help illuminate the intensional distinctions made by languages.
  • Majid, A., Sanford, A. J., & Pickering, M. J. (2007). The linguistic description of minimal social scenarios affects the extent of causal inference making. Journal of Experimental Social Psychology, 43(6), 918-932. doi:10.1016/j.jesp.2006.10.016.

    Abstract

    There is little consensus regarding the circumstances in which people spontaneously generate causal inferences, and in particular whether they generate inferences about the causal antecedents or the causal consequences of events. We tested whether people systematically infer causal antecedents or causal consequences to minimal social scenarios by using a continuation methodology. People overwhelmingly produced causal antecedent continuations for descriptions of interpersonal events (John hugged Mary), but causal consequence continuations to descriptions of transfer events (John gave a book to Mary). This demonstrates that there is no global cognitive style, but rather inference generation is crucially tied to the input. Further studies examined the role of event unusualness, number of participators, and verb-type on the likelihood of producing a causal antecedent or causal consequence inference. We conclude that inferences are critically guided by the specific verb used.
  • Majid, A., & Bowerman, M. (Eds.). (2007). Cutting and breaking events: A crosslinguistic perspective [Special Issue]. Cognitive Linguistics, 18(2).

    Abstract

    This special issue of Cognitive Linguistics explores the linguistic encoding of events of cutting and breaking. In this article we first introduce the project on which it is based by motivating the selection of this conceptual domain, presenting the methods of data collection used by all the investigators, and characterizing the language sample. We then present a new approach to examining crosslinguistic similarities and differences in semantic categorization. Applying statistical modeling to the descriptions of cutting and breaking events elicited from speakers of all the languages, we show that although there is crosslinguistic variation in the number of distinctions made and in the placement of category boundaries, these differences take place within a strongly constrained semantic space: across languages, there is a surprising degree of consensus on the partitioning of events in this domain. In closing, we compare our statistical approach with more conventional semantic analyses, and show how an extensional semantic typological approach like the one illustrated here can help illuminate the intensional distinctions made by languages.
  • Majid, A., Gullberg, M., Van Staden, M., & Bowerman, M. (2007). How similar are semantic categories in closely related languages? A comparison of cutting and breaking in four Germanic languages. Cognitive Linguistics, 18(2), 179-194. doi:10.1515/COG.2007.007.

    Abstract

    Are the semantic categories of very closely related languages the same? We present a new methodology for addressing this question. Speakers of English, German, Dutch and Swedish described a set of video clips depicting cutting and breaking events. The verbs elicited were then subjected to cluster analysis, which groups scenes together based on similarity (determined by shared verbs). Using this technique, we find that there are surprising differences among the languages in the number of categories, their exact boundaries, and the relationship of the terms to one another, all of which is circumscribed by a common semantic space.
  • Marklund, P., Fransson, P., Cabeza, R., Petersson, K. M., Ingvar, M., & Nyberg, L. (2007). Sustained and transient neural modulations in prefrontal cortex related to declarative long-term memory, working memory, and attention. Cortex, 43(1), 22-37. doi:10.1016/S0010-9452(08)70443-X.

    Abstract

    Common activations in prefrontal cortex (PFC) during episodic and semantic long-term memory (LTM) tasks have been hypothesized to reflect functional overlap in terms of working memory (WM) and cognitive control. To evaluate a WM account of LTM-general activations, the present study took into consideration that cognitive task performance depends on the dynamic operation of multiple component processes, some of which are stimulus-synchronous and transient in nature; and some that are engaged throughout a task in a sustained fashion. PFC and WM may be implicated in both of these temporally independent components. To elucidate these possibilities we employed mixed blocked/event-related functional magnetic resonance imaging (fMRI) procedures to assess the extent to which sustained or transient activation patterns overlapped across tasks indexing episodic and semantic LTM, attention (ATT), and WM. Within PFC, ventrolateral and medial areas exhibited sustained activity across all tasks, whereas more anterior regions including right frontopolar cortex were commonly engaged in sustained processing during the three memory tasks. These findings do not support a WM account of sustained frontal responses during LTM tasks, but instead suggest that the pattern that was common to all tasks reflects general attentional set/vigilance, and that the shared WM-LTM pattern mediates control processes related to upholding task set. Transient responses during the three memory tasks were assessed relative to ATT to isolate item-specific mnemonic processes and were found to be largely distinct from sustained effects. Task-specific effects were observed for each memory task. In addition, a common item response for all memory tasks involved left dorsolateral PFC (DLPFC). The latter response might be seen as reflecting WM processes during LTM retrieval. Thus, our findings suggest that a WM account of shared PFC recruitment in LTM tasks holds for common transient item-related responses rather than sustained state-related responses that are better seen as reflecting more general attentional/control processes.
  • McQueen, J. M., & Viebahn, M. C. (2007). Tracking recognition of spoken words by tracking looks to printed words. Quarterly Journal of Experimental Psychology, 60(5), 661-671. doi:10.1080/17470210601183890.

    Abstract

    Eye movements of Dutch participants were tracked as they looked at arrays of four words on a computer screen and followed spoken instructions (e.g., "Klik op het woord buffel": Click on the word buffalo). The arrays included the target (e.g., buffel), a phonological competitor (e.g., buffer), and two unrelated distractors. Targets were monosyllabic or bisyllabic, and competitors mismatched targets only on either their onset or offset phoneme and only by one distinctive feature. Participants looked at competitors more than at distractors, but this effect was much stronger for offset-mismatch than onset-mismatch competitors. Fixations to competitors started to decrease as soon as phonetic evidence disfavouring those competitors could influence behaviour. These results confirm that listeners continuously update their interpretation of words as the evidence in the speech signal unfolds and hence establish the viability of the methodology of using eye movements to arrays of printed words to track spoken-word recognition.
  • Mehta, G., & Cutler, A. (1988). Detection of target phonemes in spontaneous and read speech. Language and Speech, 31, 135-156.

    Abstract

    Although spontaneous speech occurs more frequently in most listeners' experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalize to the recognition of spontaneous speech. In the present study, listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than earlier targets, and targets preceded by long words were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support claims from previous work that listeners pay great attention to prosodic information in the process of recognizing speech.
  • Menenti, L., & Burani, C. (2007). What causes the effect of age of acquisition in lexical processing? Quarterly Journal of Experimental Psychology, 60(5), 652-660. doi:10.1080/17470210601100126.

    Abstract

    Three hypotheses for effects of age of acquisition (AoA) in lexical processing are compared: the cumulative frequency hypothesis (frequency and AoA both influence the number of encounters with a word, which influences processing speed), the semantic hypothesis (early-acquired words are processed faster because they are more central in the semantic network), and the neural network model (early-acquired words are faster because they are acquired when a network has maximum plasticity). In a regression study of lexical decision (LD) and semantic categorization (SC) in Italian and Dutch, contrary to the cumulative frequency hypothesis, AoA coefficients were larger than frequency coefficients, and, contrary to the semantic hypothesis, the effect of AoA was not larger in SC than in LD. The neural network model was supported.
  • Meyer, A. S., & Damian, M. F. (2007). Activation of distractor names in the picture-picture interference paradigm. Memory & Cognition, 35, 494-503.

    Abstract

    In four experiments, participants named target pictures that were accompanied by distractor pictures with phonologically related or unrelated names. Across experiments, the type of phonological relationship between the targets and the related distractors was varied: They were homophones (e.g., bat [animal/baseball]), or they shared word-initial segments (e.g., dog-doll) or word-final segments (e.g., ball-wall). The participants either named the objects after an extensive familiarization and practice phase or without any familiarization or practice. In all of the experiments, the mean target-naming latency was shorter in the related than in the unrelated condition, demonstrating that the phonological form of the name of the distractor picture became activated. These results are best explained within a cascaded model of lexical access—that is, under the assumption that the recognition of an object leads to the activation of its name.
  • Meyer, A. S., Belke, E., Telling, A. L., & Humphreys, G. W. (2007). Early activation of object names in visual search. Psychonomic Bulletin & Review, 14, 710-716.

    Abstract

    In a visual search experiment, participants had to decide whether or not a target object was present in a four-object search array. One of these objects could be a semantically related competitor (e.g., shirt for the target trousers) or a conceptually unrelated object with the same name as the target, for example, bat (baseball) for the target bat (animal). In the control condition, the related competitor was replaced by an unrelated object. The participants' response latencies and eye movements demonstrated that the two types of related competitors had similar effects: Competitors attracted the participants' visual attention and thereby delayed positive and negative decisions. The results imply that semantic and name information associated with the objects becomes rapidly available and affects the allocation of visual attention.
  • Meyer, A. S. (1992). Investigation of phonological encoding through speech error analyses: Achievements, limitations, and alternatives. Cognition, 42, 181-211. doi:10.1016/0010-0277(92)90043-H.

    Abstract

    Phonological encoding in language production can be defined as a set of processes generating utterance forms on the basis of semantic and syntactic information. Most evidence about these processes stems from analyses of sound errors. In section 1 of this paper, certain important results of these analyses are reviewed. Two prominent models of phonological encoding, which are mainly based on speech error evidence, are discussed in section 2. In section 3, limitations of speech error analyses are discussed, and it is argued that detailed and comprehensive models of phonological encoding cannot be derived solely on the basis of error analyses. As is argued in section 4, a new research strategy is required. Instead of using the properties of errors to draw inferences about the generation of correct word forms, future research should directly investigate the normal process of phonological encoding.
  • Meyer, A. S., & Bock, K. (1992). The tip-of-the-tongue phenomenon: Blocking or partial activation? Memory & Cognition, 20, 715-726.

    Abstract

    Tip-of-the-tongue states may represent the momentary unavailability of an otherwise accessible word or the weak activation of an otherwise inaccessible word. In three experiments designed to address these alternative views, subjects attempted to retrieve rare target words from their definitions. The definitions were followed by cues that were related to the targets in sound, by cues that were related in meaning, and by cues that were not related to the targets. Experiment 1 found that compared with unrelated cues, related cue words that were presented immediately after target definitions helped rather than hindered lexical retrieval, and that sound cues were more effective retrieval aids than meaning cues. Experiment 2 replicated these results when cues were presented after an initial target-retrieval attempt. These findings reverse a previous result (Jones, 1989) that was reproduced in Experiment 3 and shown to stem from a small group of unusually difficult target definitions.
  • Meyer, A. S., Belke, E., Häcker, C., & Mortensen, L. (2007). Use of word length information in utterance planning. Journal of Memory and Language, 57, 210-231. doi:10.1016/j.jml.2006.10.005.

    Abstract

    Griffin [Griffin, Z. M. (2003). A reversed length effect in coordinating the preparation and articulation of words in speaking. Psychonomic Bulletin & Review, 10, 603-609.] found that speakers naming object pairs spent more time before utterance onset looking at the second object when the first object name was short than when it was long. She proposed that this reversed length effect arose because the speakers' decision when to initiate an utterance was based, in part, on their estimate of the spoken duration of the first object name and the time available during its articulation to plan the second object name. In Experiment 1 of the present study, participants named object pairs. They spent more time looking at the first object when its name was monosyllabic than when it was trisyllabic, and, as in Griffin's study, the average gaze-speech lag (the time between the end of the gaze to the first object and onset of its name, which corresponds closely to the pre-speech inspection time for the second object) showed a reversed length effect. Experiments 2 and 3 showed that this effect was not due to a trade-off between the time speakers spent looking at the first and second object before speech onset. Experiment 4 yielded a reversed length effect when the second object was replaced by a symbol (x or +), which the participants had to categorise. We propose a novel account of the reversed length effect, which links it to the incremental nature of phonological encoding and articulatory planning rather than the speaker's estimate of the length of the first object name.
  • Monaco, A., Fisher, S. E., & The SLI Consortium (SLIC) (2007). Multivariate linkage analysis of specific language impairment (SLI). Annals of Human Genetics, 71(5), 660-673. doi:10.1111/j.1469-1809.2007.00361.x.

    Abstract

    Specific language impairment (SLI) is defined as an inability to develop appropriate language skills without explanatory medical conditions, low intelligence or lack of opportunity. Previously, a genome scan of 98 families affected by SLI was completed by the SLI Consortium, resulting in the identification of two quantitative trait loci (QTL) on chromosomes 16q (SLI1) and 19q (SLI2). This was followed by a replication of both regions in an additional 86 families. Both these studies applied linkage methods to one phenotypic trait at a time. However, investigations have suggested that simultaneous analysis of several traits may offer more power. The current study therefore applied a multivariate variance-components approach to the SLI Consortium dataset using additional phenotypic data. A multivariate genome scan was completed and supported the importance of the SLI1 and SLI2 loci, whilst highlighting a possible novel QTL on chromosome 10. Further investigation implied that the effect of SLI1 on non-word repetition was equally as strong on reading and spelling phenotypes. In contrast, SLI2 appeared to have influences on a selection of expressive and receptive language phenotypes in addition to non-word repetition, but did not show linkage to literacy phenotypes.

  • Murty, L., Otake, T., & Cutler, A. (2007). Perceptual tests of rhythmic similarity: I. Mora Rhythm. Language and Speech, 50(1), 77-99. doi:10.1177/00238309070500010401.

    Abstract

    Listeners rely on native-language rhythm in segmenting speech; in different languages, stress-, syllable- or mora-based rhythm is exploited. The rhythmic similarity hypothesis holds that where two languages have similar rhythm, listeners of each language should segment their own and the other language similarly. Such similarity in listening was previously observed only for related languages (English-Dutch; French-Spanish). We now report three experiments in which speakers of Telugu, a Dravidian language unrelated to Japanese but similar to it in crucial aspects of rhythmic structure, heard speech in Japanese and in their own language, and Japanese listeners heard Telugu. For the Telugu listeners, detection of target sequences in Japanese speech was harder when target boundaries mismatched mora boundaries, exactly the pattern that Japanese listeners earlier exhibited with Japanese and other languages. The same results appeared when Japanese listeners heard Telugu speech containing only codas permissible in Japanese. Telugu listeners' results with Telugu speech were mixed, but the overall pattern revealed correspondences between the response patterns of the two listener groups, as predicted by the rhythmic similarity hypothesis. Telugu and Japanese listeners appear to command similar procedures for speech segmentation, further bolstering the proposal that aspects of language phonological structure affect listeners' speech segmentation.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (Eds.). (2007). The linguistic encoding of multiple-participant events [Special Issue]. Linguistics, 45(3).

    Abstract

    This issue investigates the linguistic encoding of events with three or more participants from the perspectives of language typology and acquisition. Such “multiple-participant events” include (but are not limited to) any scenario involving at least three participants, typically encoded using transactional verbs like 'give' and 'show', placement verbs like 'put', and benefactive and applicative constructions like 'do (something for someone)', among others. There is considerable crosslinguistic and within-language variation in how the participants (the Agent, Causer, Theme, Goal, Recipient, or Experiencer) and the subevents involved in multiple-participant situations are encoded, both at the lexical and the constructional levels.
  • Narasimhan, B. (2007). Cutting, breaking, and tearing verbs in Hindi and Tamil. Cognitive Linguistics, 18(2), 195-205. doi:10.1515/COG.2007.008.

    Abstract

    Tamil and Hindi verbs of cutting, breaking, and tearing are shown to have a high degree of overlap in their extensions. However, there are also differences in the lexicalization patterns of these verbs in the two languages with regard to their category boundaries, and the number of verb types that are available to make finer-grained distinctions. Moreover, differences in the extensional ranges of corresponding verbs in the two languages can be motivated in terms of the properties of the instrument and the theme object.
  • Narasimhan, B., Eisenbeiss, S., & Brown, P. (2007). "Two's company, more is a crowd": The linguistic encoding of multiple-participant events. Linguistics, 45(3), 383-392. doi:10.1515/LING.2007.013.

    Abstract

    This introduction to a special issue of the journal Linguistics sketches the challenges that multiple-participant events pose for linguistic and psycholinguistic theories, and summarizes the articles in the volume.
  • Nieuwland, M. S., Petersson, K. M., & Van Berkum, J. J. A. (2007). On sense and reference: Examining the functional neuroanatomy of referential processing. NeuroImage, 37(3), 993-1004. doi:10.1016/j.neuroimage.2007.05.048.

    Abstract

    In an event-related fMRI study, we examined the cortical networks involved in establishing reference during language comprehension. We compared BOLD responses to sentences containing referentially ambiguous pronouns (e.g., “Ronald told Frank that he…”), referentially failing pronouns (e.g., “Rose told Emily that he…”) or coherent pronouns. Referential ambiguity selectively recruited medial prefrontal regions, suggesting that readers engaged in problem-solving to select a unique referent from the discourse model. Referential failure elicited activation increases in brain regions associated with morpho-syntactic processing, and, for those readers who took failing pronouns to refer to unmentioned entities, additional regions associated with elaborative inferencing were observed. The networks activated by these two referential problems did not overlap with the network activated by a standard semantic anomaly. Instead, we observed a double dissociation, in that the systems activated by semantic anomaly are deactivated by referential ambiguity, and vice versa. This inverse coupling may reflect the dynamic recruitment of semantic and episodic processing to resolve semantically or referentially problematic situations. More generally, our findings suggest that neurocognitive accounts of language comprehension need to address not just how we parse a sentence and combine individual word meanings, but also how we determine who's who and what's what during language comprehension.
  • Nieuwland, M. S., Otten, M., & Van Berkum, J. J. A. (2007). Who are you talking about? Tracking discourse-level referential processing with event-related brain potentials. Journal of Cognitive Neuroscience, 19(2), 228-236. doi:10.1162/jocn.2007.19.2.228.

    Abstract

    In this event-related brain potentials (ERPs) study, we explored the possibility of selectively tracking referential ambiguity during spoken discourse comprehension. Earlier ERP research has shown that referentially ambiguous nouns (e.g., “the girl” in a two-girl context) elicit a frontal, sustained negative shift relative to unambiguous control words. In the current study, we examined whether this ERP effect reflects “deep” situation model ambiguity or “superficial” textbase ambiguity. We contrasted these different interpretations by investigating whether a discourse-level semantic manipulation that prevents referential ambiguity also averts the elicitation of a referentially induced ERP effect. We compared ERPs elicited by nouns that were referentially nonambiguous but were associated with two discourse entities (e.g., “the girl” with two girls introduced in the context, but one of whom has died or left the scene), with referentially ambiguous and nonambiguous control words. Although temporally referentially ambiguous nouns elicited a frontal negative shift compared to control words, the “double bound” but referentially nonambiguous nouns did not. These results suggest that it is possible to selectively track referential ambiguity with ERPs at the level that is most relevant to discourse comprehension, the situation model.
  • Nix, A. J., Mehta, G., Dye, J., & Cutler, A. (1993). Phoneme detection as a tool for comparing perception of natural and synthetic speech. Computer Speech and Language, 7, 211-228. doi:10.1006/csla.1993.1011.

    Abstract

    On simple intelligibility measures, high-quality synthesiser output now scores almost as well as natural speech. Nevertheless, it is widely agreed that perception of synthetic speech is a harder task for listeners than perception of natural speech; in particular, it has been hypothesized that listeners have difficulty identifying phonemes in synthetic speech. If so, a simple measure of the speed with which a phoneme can be identified should prove a useful tool for comparing perception of synthetic and natural speech. The phoneme detection task was here used in three experiments comparing perception of natural and synthetic speech. In the first, response times to synthetic and natural targets were not significantly different, but in the second and third experiments response times to synthetic targets were significantly slower than to natural targets. A speed-accuracy tradeoff in the third experiment suggests that an important factor in this task is the response criterion adopted by subjects. It is concluded that the phoneme detection task is a useful tool for investigating phonetic processing of synthetic speech input, but subjects must be encouraged to adopt a response criterion which emphasizes rapid responding. When this is the case, significantly longer response times for synthetic targets can indicate a processing disadvantage for synthetic speech at an early level of phonetic analysis.
  • Norris, D., & Cutler, A. (1988). Speech recognition in French and English. MRC News, 39, 30-31.
  • Norris, D., & Cutler, A. (1988). The relative accessibility of phonemes and syllables. Perception and Psychophysics, 43, 541-550. Retrieved from http://www.psychonomic.org/search/view.cgi?id=8530.

    Abstract

    Previous research comparing detection times for syllables and for phonemes has consistently found that syllables are responded to faster than phonemes. This finding poses theoretical problems for strictly hierarchical models of speech recognition, in which smaller units should be able to be identified faster than larger units. However, inspection of the characteristics of previous experiments’ stimuli reveals that subjects have been able to respond to syllables on the basis of only a partial analysis of the stimulus. In the present experiment, five groups of subjects listened to identical stimulus material. Phoneme and syllable monitoring under standard conditions was compared with monitoring under conditions in which near matches of target and stimulus occurred on no-response trials. In the latter case, when subjects were forced to analyze each stimulus fully, phonemes were detected faster than syllables.
  • Nüse, R. (2007). Der Gebrauch und die Bedeutungen von auf, an und unter. Zeitschrift für Germanistische Linguistik, 35, 27-51.

    Abstract

    Present approaches to the semantics of the German prepositions auf, an and unter draw on two propositions: First, that spatial prepositions in general specify a region in the surrounding of the relatum object. Second, that in the case of auf, an and unter, these regions are to be defined with concepts like the vertical and/or the topological surface (the whole surrounding exterior of an object). The present paper argues that the first proposition is right and that the second is wrong. That is, while it is true that prepositions specify regions, the regions specified by auf, an and unter should rather be defined in terms of everyday concepts like SURFACE, SIDE and UNDERSIDE. This idea is suggested by the fact that auf, an and unter refer to different regions in different kinds of relatum objects, and that these regions are the same as the regions called surfaces, sides and undersides. Furthermore, reading and usage preferences of auf, an and unter can be explained by a corresponding salience of the surfaces, sides and undersides of the relatum objects in question. All in all, therefore, a close look at the use of auf, an and unter with different classes of relatum objects reveals problems for a semantic approach that draws on concepts like the vertical, while it suggests meanings of these prepositions that refer to the surface, side and underside of an object.
  • O'Connor, L. (2007). 'Chop, shred, snap apart': Verbs of cutting and breaking in Lowland Chontal. Cognitive Linguistics, 18(2), 219-230. doi:10.1515/COG.2007.010.

    Abstract

    Typological descriptions of understudied languages reveal intriguing crosslinguistic variation in descriptions of events of object separation and destruction. In Lowland Chontal of Oaxaca, verbs of cutting and breaking lexicalize event perspectives that range from the common to the quite unusual, from the tearing of cloth to the snapping apart on the cross-grain of yarn. This paper describes the semantic and syntactic criteria that characterize three verb classes in this semantic domain, examines patterns of event construal, and takes a look at likely changes in these event descriptions from the perspective of endangered language recovery.
  • O'Connor, L. (2007). [Review of the book Pronouns by D.N.S. Bhat]. Journal of Pragmatics, 39(3), 612-616. doi:10.1016/j.pragma.2006.09.007.
  • Otake, T., Hatano, G., Cutler, A., & Mehler, J. (1993). Mora or syllable? Speech segmentation in Japanese. Journal of Memory and Language, 32, 258-278. doi:10.1006/jmla.1993.1014.

    Abstract

    Four experiments examined segmentation of spoken Japanese words by native and non-native listeners. Previous studies suggested that language rhythm determines the segmentation unit most natural to native listeners: French has syllabic rhythm, and French listeners use the syllable in segmentation, while English has stress rhythm, and segmentation by English listeners is based on stress. The rhythm of Japanese is based on a subsyllabic unit, the mora. In the present experiments Japanese listeners’ response patterns were consistent with moraic segmentation; acoustic artifacts could not have determined the results since nonnative (English and French) listeners showed different response patterns with the same materials. Predictions of a syllabic hypothesis were disconfirmed in the Japanese listeners’ results; in contrast, French listeners showed a pattern of responses consistent with the syllabic hypothesis. The results provide further evidence that listeners’ segmentation of spoken words relies on procedures determined by the characteristic phonology of their native language.
  • Otten, M., & Van Berkum, J. J. A. (2007). What makes a discourse constraining? Comparing the effects of discourse message and scenario fit on the discourse-dependent N400 effect. Brain Research, 1153, 166-177. doi:10.1016/j.brainres.2007.03.058.

    Abstract

    A discourse context provides a reader with a great deal of information that can provide constraints for further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more ‘loosely’, on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Recently several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs) we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender-inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusions: When listening to a constraining discourse people rapidly make very specific predictions about the remainder of the story, as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105(2), 457-465. doi:10.1016/j.cognition.2006.10.006.

    Abstract

    Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated with the previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Perdue, C., & Klein, W. (1992). Why does the production of some learners not grammaticalize? Studies in Second Language Acquisition, 14, 259-272. doi:10.1017/S0272263100011116.

    Abstract

    In this paper we follow two beginning learners of English, Andrea and Santo, over a period of 2 years as they develop means to structure the declarative utterances they produce in various production tasks, and then we look at the following problem: In the early stages of acquisition, both learners develop a common learner variety; during these stages, we see a picture of two learner varieties developing similar regularities determined by the minimal requirements of the tasks we examine. Andrea subsequently develops further morphosyntactic means to achieve greater cohesion in his discourse. But Santo does not. Although we can identify contexts where the grammaticalization of Andrea's production allows him to go beyond the initial constraints of his variety, it is much more difficult to ascertain why Santo, faced with the same constraints in the same contexts, does not follow this path. Some lines of investigation into this problem are then suggested.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference, IV, 937-940. doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group is consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory affects large-scale brain connectivity more than grey matter per se.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L.M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operation(s) that support the binding and storing of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of the internal representation of the first event in posterior midline structures, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word- or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent) we could examine to what extent subjects focused on form rather than meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition compared to the neutral and incongruent word conditions. These results suggest that performance on phonological word length comparisons is dependent on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi:10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. 46 5–7 year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide, for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
