Publications

  • Otake, T., Yoneyama, K., Cutler, A., & van der Lugt, A. (1996). The representation of Japanese moraic nasals. Journal of the Acoustical Society of America, 100, 3831-3842. doi:10.1121/1.417239.

    Abstract

    Nasal consonants in syllabic coda position in Japanese assimilate to the place of articulation of a following consonant. The resulting forms may be perceived as different realizations of a single underlying unit, and indeed the kana orthographies represent them with a single character. In the present study, Japanese listeners' response time to detect nasal consonants was measured. Nasals in coda position, i.e., moraic nasals, were detected faster and more accurately than nonmoraic nasals, as reported in previous studies. The place of articulation with which moraic nasals were realized affected neither response time nor accuracy. Non-native subjects who knew no Japanese, given the same materials with the same instructions, simply failed to respond to moraic nasals which were realized bilabially. When the nasals were cross-spliced across place of articulation contexts the Japanese listeners still showed no significant place of articulation effects, although responses were faster and more accurate to unspliced than to cross-spliced nasals. When asked to detect the phoneme following the (cross-spliced) moraic nasal, Japanese listeners showed effects of mismatch between nasal and context, but non-native listeners did not. Together, these results suggest that Japanese listeners are capable of very rapid abstraction from phonetic realization to a unitary representation of moraic nasals; but they can also use the phonetic realization of a moraic nasal effectively to obtain anticipatory information about following phonemes.
  • Otten, M., & Van Berkum, J. J. A. (2007). What makes a discourse constraining? Comparing the effects of discourse message and scenario fit on the discourse-dependent N400 effect. Brain Research, 1153, 166-177. doi:10.1016/j.brainres.2007.03.058.

    Abstract

    A discourse context provides a reader with a great deal of information that can provide constraints for further language processing, at several different levels. In this experiment we used event-related potentials (ERPs) to explore whether discourse-generated contextual constraints are based on the precise message of the discourse or, more 'loosely', on the scenario suggested by one or more content words in the text. Participants read constraining stories whose precise message rendered a particular word highly predictable ("The manager thought that the board of directors should assemble to discuss the issue. He planned a...[meeting]") as well as non-constraining control stories that were only biasing in virtue of the scenario suggested by some of the words ("The manager thought that the board of directors need not assemble to discuss the issue. He planned a..."). Coherent words that were inconsistent with the message-level expectation raised in a constraining discourse (e.g., "session" instead of "meeting") elicited a classic centroparietal N400 effect. However, when the same words were only inconsistent with the scenario loosely suggested by earlier words in the text, they elicited a different negativity around 400 ms, with a more anterior, left-lateralized maximum. The fact that the discourse-dependent N400 effect cannot be reduced to scenario-mediated priming reveals that it reflects the rapid use of precise message-level constraints in comprehension. At the same time, the left-lateralized negativity in non-constraining stories suggests that, at least in the absence of strong message-level constraints, scenario-mediated priming does also rapidly affect comprehension.
  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Recently several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs) we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender-inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusions: When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105(2), 457-465. doi:10.1016/j.cognition.2006.10.006.

    Abstract

    Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated to previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Ozyurek, A. (1996). How children talk about a conversation. Journal of Child Language, 23(3), 693-714. doi:10.1017/S0305000900009004.

    Abstract

    This study investigates how children of different ages talk about a conversation that they have witnessed. Forty-eight Turkish children, aged five, nine, and thirteen years, saw a televised dialogue between two Sesame Street characters (Bert and Ernie). Afterward, they narrated what they had seen and heard. Their reports were analysed for the development of linguistic devices used to orient their listeners to the relevant properties of a conversational exchange. Each utterance in the child's narrative was analysed as to its conversational role: (1) whether the child used direct or indirect quotation frames; (2) whether the child marked the boundaries of conversational turns using speakers' names and (3) whether the child used a marker for pairing of utterances made by different speakers (agreement-disagreement, request-refusal, questioning-answering). Within pairings, children's use of (a) the temporal and evaluative connectivity markers and (b) the kind of verb of saying were identified. The data indicate that there is a developmental change in children's ability to use appropriate linguistic means to orient their listeners to the different properties of a conversation. The development and use of these linguistic means enable the child to establish different social roles in a narrative interaction. The findings are interpreted in terms of the child's social-communicative development from being a 'character' to becoming a 'narrator' and 'author' of the reported conversation in the narrative situation.
  • Paracchini, S., Thomas, A., Castro, S., Lai, C., Paramasivam, M., Wang, Y., Keating, B. J., Taylor, J. M., Hacking, D. F., Scerri, T., Francks, C., Richardson, A. J., Wade-Martins, R., Stein, J. F., Knight, J. C., Copp, A. J., LoTurco, J., & Monaco, A. P. (2006). The chromosome 6p22 haplotype associated with dyslexia reduces the expression of KIAA0319, a novel gene involved in neuronal migration. Human Molecular Genetics, 15(10), 1659-1666. doi:10.1093/hmg/ddl089.

    Abstract

    Dyslexia is one of the most prevalent childhood cognitive disorders, affecting approximately 5% of school-age children. We have recently identified a risk haplotype associated with dyslexia on chromosome 6p22.2 which spans the TTRAP gene and portions of THEM2 and KIAA0319. Here we show that in the presence of the risk haplotype, the expression of the KIAA0319 gene is reduced but the expression of the other two genes remains unaffected. Using in situ hybridization, we detect a very distinct expression pattern of the KIAA0319 gene in the developing cerebral neocortex of mouse and human fetuses. Moreover, interference with rat Kiaa0319 expression in utero leads to impaired neuronal migration in the developing cerebral neocortex. These data suggest a direct link between a specific genetic background and a biological mechanism leading to the development of dyslexia: the risk haplotype on chromosome 6p22.2 down-regulates the KIAA0319 gene which is required for neuronal migration during the formation of the cerebral neocortex.
  • Parkes, L. M., Bastiaansen, M. C. M., & Norris, D. G. (2006). Combining EEG and fMRI to investigate the postmovement beta rebound. NeuroImage, 29(3), 685-696. doi:10.1016/j.neuroimage.2005.08.018.

    Abstract

    The relationship between synchronous neuronal activity as measured with EEG and the blood oxygenation level dependent (BOLD) signal as measured during fMRI is not clear. This work investigates the relationship by combining EEG and fMRI measures of the strong increase in beta frequency power following movement, the so-called post-movement beta rebound (PMBR). The time course of the PMBR, as measured by EEG, was included as a regressor in the fMRI analysis, allowing identification of a region of associated BOLD signal increase in the sensorimotor cortex, with the most significant region in the post-central sulcus. The increase in the BOLD signal suggests that the number of active neurons and/or their synaptic rate is increased during the PMBR. The duration of the BOLD response curve in the PMBR region is significantly longer than in the activated motor region, and is well fitted by a model including both motor and PMBR regressors. An intersubject correlation between the BOLD signal amplitude associated with the PMBR regressor and the PMBR strength as measured with EEG provides further evidence that this region is a source of the PMBR. There is a strong intra-subject correlation between the BOLD signal amplitude in the sensorimotor cortex during movement and the PMBR strength as measured by EEG, suggesting either that the motor activity itself, or somatosensory inputs associated with the motor activity, influence the PMBR. This work provides further evidence for a BOLD signal change associated with changes in neuronal synchrony, so opening up the possibility of studying other event-related oscillatory changes using fMRI.
  • Pederson, E., Danziger, E., Wilkins, D. G., Levinson, S. C., Kita, S., & Senft, G. (1998). Semantic typology and spatial conceptualization. Language, 74(3), 557-589. doi:10.2307/417793.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), IV, 937-940. doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Petersson, K. M. (1998). Comments on a Monte Carlo approach to the analysis of functional neuroimaging data. NeuroImage, 8, 108-112.
  • Petersson, K. M., Gisselgard, J., Gretzer, M., & Ingvar, M. (2006). Interaction between a verbal working memory network and the medial temporal lobe. NeuroImage, 33(4), 1207-1217. doi:10.1016/j.neuroimage.2006.07.042.

    Abstract

    The irrelevant speech effect illustrates that sounds that are irrelevant to a visually presented short-term memory task still interfere with neuronal function. In the present study we explore the functional and effective connectivity of such interference. The functional connectivity analysis suggested an interaction between the level of irrelevant speech and the correlation between in particular the left superior temporal region, associated with verbal working memory, and the left medial temporal lobe. Based on this psycho-physiological interaction, and to broaden the understanding of this result, we performed a network analysis, using a simple network model for verbal working memory, to analyze its interaction with the medial temporal lobe memory system. The results showed dissociations in terms of network interactions between frontal as well as parietal and temporal areas in relation to the medial temporal lobe. The results of the present study suggest that a transition from phonological loop processing towards an engagement of episodic processing might take place during the processing of interfering irrelevant sounds. We speculate that, in response to the irrelevant sounds, this reflects a dynamic shift in processing as suggested by a closer interaction between a verbal working memory system and the medial temporal lobe memory system.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group are consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory is affecting large-scale brain connectivity more than grey matter per se.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Piekema, C., Kessels, R. P. C., Mars, R. B., Petersson, K. M., & Fernández, G. (2006). The right hippocampus participates in short-term memory maintenance of object–location associations. NeuroImage, 33(1), 374-382. doi:10.1016/j.neuroimage.2006.06.035.

    Abstract

    Doubts have been cast on the strict dissociation between short- and long-term memory systems. Specifically, several neuroimaging studies have shown that the medial temporal lobe, a region almost invariably associated with long-term memory, is involved in active short-term memory maintenance. Furthermore, a recent study in hippocampally lesioned patients has shown that the hippocampus is critically involved in associating objects and their locations, even when the delay period lasts only 8 s. However, the critical feature that causes the medial temporal lobe, and in particular the hippocampus, to participate in active maintenance is still unknown. This study was designed in order to explore hippocampal involvement in active maintenance of spatial and non-spatial associations. Eighteen participants performed a delayed-match-to-sample task in which they had to maintain either object–location associations, color–number associations, single colors, or single locations. Whole-brain activity was measured using event-related functional magnetic resonance imaging and analyzed using a random effects model. Right lateralized hippocampal activity was evident when participants had to maintain object–location associations, but not when they had to maintain color–number associations or single items. The present results suggest a hippocampal involvement in active maintenance when feature combinations that include spatial information have to be maintained online.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1998). Comparing different models of the development of the English verb category. Linguistics, 36(4), 807-830. doi:10.1515/ling.1998.36.4.807.

    Abstract

    In this study, data from the first six months of 12 children's multiword speech were used to test the validity of Valian's (1991) syntactic performance-limitation account and Tomasello's (1992) verb-island account of early multiword speech, with particular reference to the development of the English verb category. The results provide evidence for appropriate use of verb morphology, auxiliary verb structures, pronoun case marking, and SVO word order from quite early in development. However, they also demonstrate a great deal of lexical specificity in the children's use of these systems, evidenced by a lack of overlap in the verbs to which different morphological markers were applied, a lack of overlap in the verbs with which different auxiliary verbs were used, a disproportionate use of the first person singular nominative pronoun I, and a lack of overlap in the lexical items that served as the subjects and direct objects of transitive verbs. These findings raise problems for both a syntactic performance-limitation account and a strong verb-island account of the data and suggest the need to develop a more general lexicalist account of early multiword speech that explains why some words come to function as "islands" of organization in the child's grammar and others do not.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1996). Observational and checklist measures of vocabulary composition: What do they mean? Journal of Child Language, 23(3), 573-590. doi:10.1017/S0305000900008953.

    Abstract

    Observational and checklist measures of vocabulary composition have both recently been used to look at the absolute proportion of nouns in children's early vocabularies. However, they have tended to generate rather different results. The present study is an attempt to investigate the relationship between such measures in a sample of 26 children between 1;1 and 2;1 at approximately 50 and 100 words. The results show that although observational and checklist measures are significantly correlated, there are also systematic quantitative differences between them which seem to reflect a combination of checklist, maternal-report and observational sampling biases. This suggests that, although both kinds of measure may represent good indices of differences in vocabulary size and composition across children and hence be useful as dependent variables in correlational research, neither may be ideal for estimating the absolute proportion of nouns in children's vocabularies. The implication is that questions which rely on information about the absolute proportion of particular kinds of words in children's vocabularies can only be properly addressed by detailed longitudinal studies in which an attempt is made to collect more comprehensive vocabulary records for individual children.
  • Poletiek, F. H. (2006). De dwingende macht van een Goed Verhaal [The compelling power of a Good Story; review of the book Vincent plast op de grond: Nachtmerries in het Nederlands recht by W. A. Wagenaar]. De Psycholoog, 41, 460-462.
  • Poletiek, F. H. (1998). De geest van de jury. Psychologie en Maatschappij, 4, 376-378.
  • Poletiek, F. H. (1996). Paradoxes of falsification. Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology, 49(2), 447-462. doi:10.1080/713755628.
  • Praamstra, P., Meyer, A. S., Cools, A. R., Horstink, M. W. I. M., & Stegeman, D. F. (1996). Movement preparation in Parkinson's disease: Time course and distribution of movement-related potentials in a movement precueing task. Brain, 119, 1689-1704. doi:10.1093/brain/119.5.1689.

    Abstract

    Investigations of the effects of advance information on movement preparation in Parkinson's disease using reaction time (RT) measures have yielded contradictory results. In order to obtain direct information regarding the time course of movement preparation, we combined RT measurements in a movement precueing task with multi-channel recordings of movement-related potentials in the present study. Movements of the index and middle fingers of the left and right hand were either precued or not by advance information regarding the side (left or right hand) of the required response. Reaction times were slower for patients than for control subjects. Both groups benefited equally from informative precues, indicating that patients utilized the advance information as effectively as control subjects. Lateralization of the movement-preceding cerebral activity [i.e. the lateralized readiness potential (LRP)] confirmed that patients used the available partial information to prepare their responses and started this process no later than controls. In conjunction with EMG onset times, the LRP onset measures allowed for a fractionation of the RTs, which provided clues to the stages where the slowness of Parkinson's disease patients might arise. No definite abnormalities of temporal parameters were found, but differences in the distribution of the lateralized movement-preceding activity between patients and controls suggested differences in the cortical organization of movement preparation. Differences in amplitude of the contingent negative variation (CNV) and differences in the way in which the CNV was modulated by the information given by the precue pointed in the same direction. A difference in amplitude of the P300 between patients and controls suggested that preprogramming a response required more effort from patients than from control subjects.
  • Praamstra, P., Stegeman, D. F., Cools, A. R., Meyer, A. S., & Horstink, M. W. I. M. (1998). Evidence for lateral premotor and parietal overactivity in Parkinson's disease during sequential and bimanual movements: A PET study. Brain, 121, 769-772. doi:10.1093/brain/121.4.769.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L. M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2006). Lexical and default stress assignment in reading Greek. Journal of Research in Reading, 29(4), 418-432. doi:10.1111/j.1467-9817.2006.00316.x.

    Abstract

    Greek is a language with lexical stress that marks stress orthographically with a special diacritic. Thus, the orthography and the lexicon constitute potential sources of stress assignment information in addition to any possible general default metrical pattern. Here, we report two experiments with secondary education children reading aloud pseudo-word stimuli, in which we manipulated the availability of lexical (using stimuli resembling particular words) and visual (existence and placement of the diacritic) information. The reliance on the diacritic was found to be imperfect. Strong lexical effects as well as a default metrical pattern stressing the penultimate syllable were revealed. Reading models must be extended to account for multisyllabic word reading including, in particular, stress assignment based on the interplay among multiple possible sources of information.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operations that support the binding and storage of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of the internal representation of the first event in posterior midline structures, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with the basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Radeau, M., & Van Berkum, J. J. A. (1996). Gender decision. Language and Cognitive Processes, 11(6), 605-610. doi:10.1080/016909696387006.

    Abstract

    In languages in which nouns have a grammatical gender, word recognition can be estimated by gender decision response times. Although gender decision has yet to be used extensively, it has proved sensitive to several factors that have been shown to affect lexical access. The task is not restricted to spoken language but can be used with linguistic information from other sensory modalities.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent), we could examine to what extent subjects focused on the form rather than the meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition than in the neutral and incongruent word conditions, suggesting that performance on phonological word-length comparisons depends on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Reis, A., Faísca, L., Ingvar, M., & Petersson, K. M. (2006). Color makes a difference: Two-dimensional object naming in literate and illiterate subjects. Brain and Cognition, 60, 49-54. doi:10.1016/j.bandc.2005.09.012.

    Abstract

    Previous work has shown that illiterate subjects are better at naming two-dimensional representations of real objects when these are presented as colored photos rather than as black-and-white drawings. This raises the question of whether color or textural detail selectively improves object recognition and naming in illiterate compared to literate subjects. In this study, we investigated whether the surface texture and/or color of objects is used to access stored object knowledge in illiterate subjects. A group of illiterate subjects and a matched literate control group were compared on an immediate object naming task with four conditions: color and black-and-white (i.e., grey-scaled) photos, as well as color and black-and-white (i.e., grey-scaled) drawings of common everyday objects. The results show that illiterate subjects perform significantly better when the stimuli are colored, and this effect is independent of photographic detail. In addition, there were significant differences between the literacy groups in the black-and-white condition for both drawings and photos. These results suggest that color object information contributes to object recognition. This effect was particularly prominent in the illiterate group.
  • Rey, A., & Schiller, N. O. (2006). A case of normal word reading but impaired letter naming. Journal of Neurolinguistics, 19(2), 87-95. doi:10.1016/j.jneuroling.2005.09.003.

    Abstract

    A case of a word/letter dissociation is described. The present patient has a quasi-normal word reading performance (both at the level of speed and accuracy) while he has major problems in nonword and letter reading. More specifically, he has strong difficulties in retrieving letter names but preserved abilities in letter identification. This study complements previous cases reporting a similar word/letter dissociation by focusing more specifically on word reading and letter naming latencies. The results provide new constraints for modeling the role of letter knowledge within reading processes and during reading acquisition or rehabilitation.
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi:10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. Forty-six 5- to 7-year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: high-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language, but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide, for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Robinson, S. (2006). The phoneme inventory of the Aita dialect of Rotokas. Oceanic Linguistics, 45(1), 206-209.

    Abstract

    Rotokas is famous for possessing one of the world’s smallest phoneme inventories. According to one source, the Central dialect of Rotokas possesses only 11 segmental phonemes (five vowels and six consonants) and lacks nasals while the Aita dialect possesses a similar-sized inventory in which nasals replace voiced stops. However, recent fieldwork reveals that the Aita dialect has, in fact, both voiced and nasal stops, making for an inventory of 14 segmental phonemes (five vowels and nine consonants). The correspondences between Central and Aita Rotokas suggest that the former is innovative with respect to its consonant inventory and the latter conservative, and that the small inventory of Central Rotokas arose by collapsing the distinction between voiced and nasal stops.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A. (2006). The influence of spelling on phonological encoding in word reading, object naming, and word generation. Psychonomic Bulletin & Review, 13(1), 33-37.

    Abstract

    Does the spelling of a word mandatorily constrain spoken word production, or does it do so only when spelling is relevant for the production task at hand? Damian and Bowers (2003) reported spelling effects in spoken word production in English using a prompt–response word generation task. Preparation of the response words was disrupted when the responses shared initial phonemes that differed in spelling, suggesting that spelling constrains speech production mandatorily. The present experiments, conducted in Dutch, tested for spelling effects using word production tasks in which spelling was clearly relevant (oral reading in Experiment 1) or irrelevant (object naming and word generation in Experiments 2 and 3, respectively). Response preparation was disrupted by spelling inconsistency only in word reading, suggesting that the spelling of a word constrains spoken word production in Dutch only when it is relevant for the word production task at hand.
  • Roelofs, A., Meyer, A. S., & Levelt, W. J. M. (1998). A case for the lemma/lexeme distinction in models of speaking: Comment on Caramazza and Miozzo (1997). Cognition, 69(2), 219-230. doi:10.1016/S0010-0277(98)00056-0.

    Abstract

    In a recent series of papers, Caramazza and Miozzo [Caramazza, A., 1997. How many levels of processing are there in lexical access? Cognitive Neuropsychology 14, 177-208; Caramazza, A., Miozzo, M., 1997. The relation between syntactic and phonological knowledge in lexical access: evidence from the 'tip-of-the-tongue' phenomenon. Cognition 64, 309-343; Miozzo, M., Caramazza, A., 1997. On knowing the auxiliary of a verb that cannot be named: evidence for the independence of grammatical and phonological aspects of lexical knowledge. Journal of Cognitive Neuroscience 9, 160-166] argued against the lemma/lexeme distinction made in many models of lexical access in speaking, including our network model [Roelofs, A., 1992. A spreading-activation theory of lemma retrieval in speaking. Cognition 42, 107-142; Levelt, W.J.M., Roelofs, A., Meyer, A.S., 1998. A theory of lexical access in speech production. Behavioral and Brain Sciences, (in press)]. Their case was based on the observations that grammatical class deficits of brain-damaged patients and semantic errors may be restricted to either spoken or written forms and that the grammatical gender of a word and information about its form can be independently available in tip-of-the-tongue states (TOTs). In this paper, we argue that though our model is about speaking, not taking position on writing, extensions to writing are possible that are compatible with the evidence from aphasia and speech errors. Furthermore, our model does not predict a dependency between gender and form retrieval in TOTs. Finally, we argue that Caramazza and Miozzo have not accounted for important parts of the evidence motivating the lemma/lexeme distinction, such as word frequency effects in homophone production, the strict ordering of gender and phoneme access in LRP data, and the chronometric and speech error evidence for the production of complex morphology.
  • Roelofs, A. (2006). Context effects of pictures and words in naming objects, reading words, and generating simple phrases. Quarterly Journal of Experimental Psychology, 59(10), 1764-1784. doi:10.1080/17470210500416052.

    Abstract

    In five language production experiments it was examined which aspects of words are activated in memory by context pictures and words. Context pictures yielded Stroop-like and semantic effects on response times when participants generated gender-marked noun phrases in response to written words (Experiment 1A). However, pictures yielded no such effects when participants simply read aloud the noun phrases (Experiment 2). Moreover, pictures yielded a gender congruency effect in generating gender-marked noun phrases in response to the written words (Experiments 3A and 3B). These findings suggest that context pictures activate lemmas (i.e., representations of syntactic properties), which leads to effects only when lemmas are needed to generate a response (i.e., in Experiments 1A, 3A, and 3B, but not in Experiment 2). Context words yielded Stroop-like and semantic effects in picture naming (Experiment 1B). Moreover, words yielded Stroop-like but no semantic effects in reading nouns (Experiment 4) and in generating noun phrases (Experiment 5). These findings suggest that context words activate the lemmas and forms of their names, which leads to semantic effects when lemmas are required for responding (Experiment 1B) but not when only the forms are required (Experiment 4). WEAVER++ simulations of the results are presented.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name-retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name-retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name-retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Van Turennout, M., & Coles, M. G. H. (2006). Anterior cingulate cortex activity can be independent of response conflict in stroop-like tasks. Proceedings of the National Academy of Sciences of the United States of America, 103(37), 13884-13889. doi:10.1073/pnas.0606265103.

    Abstract

    Cognitive control includes the ability to formulate goals and plans of action and to follow these while facing distraction. Previous neuroimaging studies have shown that the presence of conflicting response alternatives in Stroop-like tasks increases activity in dorsal anterior cingulate cortex (ACC), suggesting that the ACC is involved in cognitive control. However, the exact nature of ACC function is still under debate. The prevailing conflict detection hypothesis maintains that the ACC is involved in performance monitoring. According to this view, ACC activity reflects the detection of response conflict and acts as a signal that engages regulative processes subserved by lateral prefrontal brain regions. Here, we provide evidence from functional MRI that challenges this view and favors an alternative view, according to which the ACC has a role in regulation itself. Using an arrow–word Stroop task, subjects responded to incongruent, congruent, and neutral stimuli. A critical prediction made by the conflict detection hypothesis is that ACC activity should be increased only when conflicting response alternatives are present. Our data show that ACC responses are larger for neutral than for congruent stimuli, in the absence of response conflict. This result demonstrates the engagement of the ACC in regulation itself. A computational model of Stroop-like performance instantiating a version of the regulative hypothesis is shown to account for our findings.
  • Roelofs, A. (2006). Functional architecture of naming dice, digits, and number words. Language and Cognitive Processes, 21(1/2/3), 78-111. doi:10.1080/01690960400001846.

    Abstract

    Five chronometric experiments examined the functional architecture of naming dice, digits, and number words. Speakers named pictured dice, Arabic digits, or written number words, while simultaneously trying to ignore congruent or incongruent dice, digit, or number word distractors presented at various stimulus onset asynchronies (SOAs). Stroop-like interference and facilitation effects were obtained from digits and words on dice naming latencies, but not from dice on digit and word naming latencies. In contrast, words affected digit naming latencies and digits affected word naming latencies to the same extent. The peak of the interference was always around SOA = 0 ms, whereas facilitation was constant across distractor-first SOAs. These results suggest that digit naming is achieved like word naming rather than dice naming. WEAVER++ simulations of the results are reported.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Roelofs, A. (2006). Modeling the control of phonological encoding in bilingual speakers. Bilingualism: Language and Cognition, 9(2), 167-176. doi:10.1017/S1366728906002513.

    Abstract

    Phonological encoding is the process by which speakers retrieve phonemic segments for morphemes from memory and use the segments to assemble phonological representations of words to be spoken. When conversing in one language, bilingual speakers have to resist the temptation of encoding word forms using the phonological rules and representations of the other language. We argue that the activation of phonological representations is not restricted to the target language and that the phonological representations of languages are not separate. We advance a view of bilingual control in which condition-action rules determine what is done with the activated phonological information depending on the target language. This view is computationally implemented in the WEAVER++ model. We present WEAVER++ simulations of the cognate facilitation effect (Costa, Caramazza and Sebastián-Gallés, 2000) and the between-language phonological facilitation effect of spoken distractor words in object naming (Hermans, Bongaerts, de Bot and Schreuder, 1998).
  • Roelofs, A., & Meyer, A. S. (1998). Metrical structure in planning the production of spoken words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 922-939. doi:10.1037/0278-7393.24.4.922.

    Abstract

    According to most models of speech production, the planning of spoken words involves the independent retrieval of segments and metrical frames followed by segment-to-frame association. In some models, the metrical frame includes a specification of the number and ordering of consonants and vowels, but in the word-form encoding by activation and verification (WEAVER) model (A. Roelofs, 1997), the frame specifies only the stress pattern across syllables. In 6 implicit priming experiments, on each trial, participants produced 1 word out of a small set as quickly as possible. In homogeneous sets, the response words shared word-initial segments, whereas in heterogeneous sets, they did not. Priming effects from shared segments depended on all response words having the same number of syllables and stress pattern, but not on their having the same number of consonants and vowels. No priming occurred when the response words had only the same metrical frame but shared no segments. Computer simulations demonstrated that WEAVER accounts for the findings.
  • Roelofs, A., Meyer, A. S., & Levelt, W. J. M. (1996). Interaction between semantic and orthographic factors in conceptually driven naming: Comment on Starreveld and La Heij (1995). Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 246-251.

    Abstract

    P. A. Starreveld and W. La Heij (1995) tested the seriality view of lexical access in speech production, according to which lexical selection and the encoding of a word's form proceed in serial order without feedback. In 2 experiments, they looked at the combined effect of semantic and orthographic relatedness of written distracter words in tasks that required conceptually driven naming. They found an interaction between semantic relatedness and orthographic relatedness and argued that the observed interaction refutes the seriality view of lexical access. In this comment, the authors argue that Starreveld and La Heij's rejection of serial access was based on an oversimplified conception of the seriality view and that interaction, rather than additivity, is predicted by existing conceptions of serial access.
  • Roelofs, A. (1998). Rightward incrementality in encoding simple phrasal forms in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 904-921. doi:10.1037/0278-7393.24.4.904.

    Abstract

    This article reports 7 experiments investigating whether utterances are planned in a parallel or rightward incremental fashion during language production. The experiments examined the role of linear order, length, frequency, and repetition in producing Dutch verb–particle combinations. On each trial, participants produced 1 utterance out of a set of 3 as quickly as possible. The responses shared part of their form or not. For particle-initial infinitives, facilitation was obtained when the responses shared the particle but not when they shared the verb. For verb-initial imperatives, however, facilitation was obtained for the verbs but not for the particles. The facilitation increased with length, decreased with frequency, and was independent of repetition. A simple rightward incremental model accounts quantitatively for the results.
  • Rohlfing, K., Loehr, D., Duncan, S., Brown, A., Franklin, A., Kimbara, I., Milde, J.-T., Parrill, F., Rose, T., Schmidt, T., Sloetjes, H., Thies, A., & Wellinghof, S. (2006). Comparison of multimodal annotation tools - workshop report. Gesprächforschung - Online-Zeitschrift zur Verbalen Interaktion, 7, 99-123.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust. B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rowland, C. F., & Fletcher, S. L. (2006). The effect of sampling on estimates of lexical specificity and error rates. Journal of Child Language, 33(4), 859-877. doi:10.1017/S0305000906007537.

    Abstract

    Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were as likely to underestimate as to overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together; however, they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker's turn: A cognitive cornerstone of conversation. Language, 82(3), 515-535.

    Abstract

    A key mechanism in the organization of turns at talk in conversation is the ability to anticipate or PROJECT the moment of completion of a current speaker’s turn. Some authors suggest that this is achieved via lexicosyntactic cues, while others argue that projection is based on intonational contours. We tested these hypotheses in an on-line experiment, manipulating the presence of symbolic (lexicosyntactic) content and intonational contour of utterances recorded in natural conversations. When hearing the original recordings, subjects can anticipate turn endings with the same degree of accuracy attested in real conversation. With intonational contour entirely removed (leaving intact words and syntax, with a completely flat pitch), there is no change in subjects’ accuracy of end-of-turn projection. But in the opposite case (with original intonational contour intact, but with no recognizable words), subjects’ performance deteriorates significantly. These results establish that the symbolic (i.e. lexicosyntactic) content of an utterance is necessary (and possibly sufficient) for projecting the moment of its completion, and thus for regulating conversational turn-taking. By contrast, and perhaps surprisingly, intonational contour is neither necessary nor sufficient for end-of-turn projection.
  • De Ruiter, J. P. (2006). Can gesticulation help aphasic people speak, or rather, communicate? Advances in Speech-Language Pathology, 8(2), 124-127. doi:10.1080/14417040600667285.

    Abstract

    As Rose (2006) discusses in the lead article, two camps can be identified in the field of gesture research: those who believe that gesticulation enhances communication by providing extra information to the listener, and on the other hand those who believe that gesticulation is not communicative, but rather that it facilitates speaker-internal word finding processes. I review a number of key studies relevant to this controversy, and conclude that the available empirical evidence supports the notion that gesture is a communicative device which can compensate for problems in speech by providing information in gesture. Following that, I discuss the finding by Rose and Douglas (2001) that making gestures does facilitate word production in some patients with aphasia. I argue that the gestures produced in the experiment by Rose and Douglas are not guaranteed to be of the same kind as the gestures that are produced spontaneously under naturalistic, communicative conditions, which makes it difficult to generalise from that particular study to general gesture behaviour. As a final point, I encourage researchers in the area of aphasia to put more emphasis on communication in naturalistic contexts (e.g., conversation) in testing the capabilities of people with aphasia.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations were used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer, Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as 'rare city' names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs using a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances, each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM used to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways of improving the overall performance in the context of an application.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer, Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition, that is able to model the human capability of 'early recognition' by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM's performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences there is, however, lately a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated, while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that in order to arrive at a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
  • Schiller, N. O., Schuhmann, T., Neyndorff, A. C., & Jansma, B. M. (2006). The influence of semantic category membership on syntactic decisions: A study using event-related brain potentials. Brain Research, 1082(1), 153-164. doi:10.1016/j.brainres.2006.01.087.

    Abstract

    An event-related brain potentials (ERP) experiment was carried out to investigate the influence of semantic category membership on syntactic decision-making. Native speakers of German viewed a series of words that were semantically marked or unmarked for gender and made go/no-go decisions about the grammatical gender of those words. The electrophysiological results indicated that participants could make a gender decision earlier when words were semantically gender-marked than when they were semantically gender-unmarked. Our data provide evidence for the influence of semantic category membership on the decision about the syntactic gender of a visually presented German noun. More specifically, our results support models of language comprehension in which semantic information processing of words is initiated before syntactic information processing is finalized.
  • Schiller, N. O., Meyer, A. S., Baayen, R. H., & Levelt, W. J. M. (1996). A comparison of lexeme and speech syllables in Dutch. Journal of Quantitative Linguistics, 3(1), 8-28.

    Abstract

    The CELEX lexical database includes a list of Dutch syllables and their frequencies, based on syllabification of isolated word forms. In connected speech, however, sentence-level phonological rules can modify the syllables and their token frequencies. In order to estimate the changes syllables may undergo in connected speech, an empirical investigation was carried out. A large Dutch text corpus (TROUW) was transcribed, processed by word level rules, and syllabified. The resulting lexeme syllables were evaluated by comparing them to the CELEX lexical database for Dutch. Then additional phonological sentence-level rules were applied to the TROUW corpus, and the frequencies of the resulting connected speech syllables were compared with those of the lexeme syllables from TROUW. The overall correlation between lexeme and speech syllables was very high. However, speech syllables generally had more complex CV structures than lexeme syllables. Implications of the results for research involving syllables are discussed. With respect to the notion of a mental syllabary (a store for precompiled articulatory programs for syllables, see Levelt & Wheeldon, 1994) this study revealed an interesting statistical result. The calculation of the cumulative syllable frequencies showed that 85% of the syllable tokens in Dutch can be covered by the 500 most frequent syllable types, which makes the idea of a syllabary very attractive.
  • Schiller, N. O., & Costa, A. (2006). Different selection principles of freestanding and bound morphemes in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(5), 1201-1207. doi:10.1037/0278-7393.32.5.1201.

    Abstract

    Freestanding and bound morphemes differ in many (psycho)linguistic aspects. Some theorists have claimed that the representation and retrieval of freestanding and bound morphemes in the course of language production are governed by similar processing mechanisms. Alternatively, it has been proposed that the two types of morphemes may be selected for production in different ways. In this article, the authors first review the available experimental evidence related to this topic and then present new experimental data pointing to the notion that freestanding and bound morphemes are retrieved following distinct processing principles: freestanding morphemes are subject to competition, whereas bound morphemes are not.
  • Schiller, N. O. (2006). Lexical stress encoding in single word production estimated by event-related brain potentials. Brain Research, 1112(1), 201-212. doi:10.1016/j.brainres.2006.07.027.

    Abstract

    An event-related brain potentials (ERPs) experiment was carried out to investigate the time course of lexical stress encoding in language production. Native speakers of Dutch viewed a series of pictures corresponding to bisyllabic names which were either stressed on the first or on the second syllable and made go/no-go decisions on the lexical stress location of those picture names. Behavioral results replicated a pattern that was observed earlier, i.e. faster button-press latencies to initial as compared to final stress targets. The electrophysiological results indicated that participants could make a lexical stress decision significantly earlier when picture names had initial than when they had final stress. Moreover, the present data suggest the time course of lexical stress encoding during single word form formation in language production. When word length is corrected for, the temporal interval for lexical stress encoding specified by the current ERP results falls into the time window previously identified for phonological encoding in language production.
  • Schiller, N. O., Jansma, B. M., Peters, J., & Levelt, W. J. M. (2006). Monitoring metrical stress in polysyllabic words. Language and Cognitive Processes, 21(1/2/3), 112-140. doi:10.1080/01690960400001861.

    Abstract

    This study investigated the monitoring of metrical stress information in internally generated speech. In Experiment 1, Dutch participants were asked to judge whether bisyllabic picture names had initial or final stress. Results showed significantly faster decision times for initially stressed targets (e.g., KAno 'canoe') than for targets with final stress (e.g., kaNON 'cannon'; capital letters indicate stressed syllables). It was demonstrated that monitoring latencies are not a function of the picture naming or object recognition latencies to the same pictures. Experiments 2 and 3 replicated the outcome of the first experiment with trisyllabic picture names. These results are similar to the findings of Wheeldon and Levelt (1995) in a segment monitoring task. The outcome might be interpreted to demonstrate that phonological encoding in speech production is a rightward incremental process. Alternatively, the data might reflect the sequential nature of a perceptual mechanism used to monitor lexical stress.
  • Schiller, N. O., & Caramazza, A. (2006). Grammatical gender selection and the representation of morphemes: The production of Dutch diminutives. Language and Cognitive Processes, 21, 945-973. doi:10.1080/01690960600824344.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners. Pictures of simple objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a noun phrase with the appropriate gender-marked determiner. Auditory (Experiment 1) or visual cues (Experiment 2) indicated whether the noun was to be produced in its standard or diminutive form. Results revealed a cost in naming latencies when target and distractor take different determiner forms independent of whether or not they have the same gender. This replicates earlier results showing that congruency effects are due to competition during the selection of determiner forms rather than gender features. The overall pattern of results supports the view that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from incongruent grammatical features. Selection of the correct determiner form, however, is a competitive process, implying that lexical node and grammatical feature selection operate with distinct principles.
  • Schiller, N. O., & Köster, O. (1996). Evaluation of a foreign speaker in forensic phonetics: A report. Forensic Linguistics: The international Journal of Speech, Language and the Law, 3, 176-185.
  • Schiller, N. O. (1998). The effect of visually masked syllable primes on the naming latencies of words and pictures. Journal of Memory and Language, 39, 484-507. doi:10.1006/jmla.1998.2577.

    Abstract

    To investigate the role of the syllable in Dutch speech production, five experiments were carried out to examine the effect of visually masked syllable primes on the naming latencies for written words and pictures. Targets had clear syllable boundaries and began with a CV syllable (e.g., ka.no) or a CVC syllable (e.g., kak.tus), or had ambiguous syllable boundaries and began with a CV[C] syllable (e.g., ka[pp]er). In the syllable match condition, bisyllabic Dutch nouns or verbs were preceded by primes that were identical to the target’s first syllable. In the syllable mismatch condition, the prime was either shorter or longer than the target’s first syllable. A neutral condition was also included. None of the experiments showed a syllable priming effect. Instead, all related primes facilitated the naming of the targets. It is concluded that the syllable does not play a role in the process of phonological encoding in Dutch. Because the amount of facilitation increased with increasing overlap between prime and target, the priming effect is accounted for by a segmental overlap hypothesis.
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
  • Seidl, A., & Johnson, E. K. (2006). Infant word segmentation revisited: Edge alignment facilitates target extraction. Developmental Science, 9(6), 565-573.

    Abstract

    In a landmark study, Jusczyk and Aslin (1995) demonstrated that English-learning infants are able to segment words from continuous speech at 7.5 months of age. In the current study, we explored the possibility that infants segment words from the edges of utterances more readily than the middle of utterances. The same procedure was used as in Jusczyk and Aslin (1995); however, our stimuli were controlled for target word location and infants were given a shorter familiarization time to avoid ceiling effects. Infants were familiarized to one word that always occurred at the edge of an utterance (sentence-initial position for half of the infants and sentence-final position for the other half) and one word that always occurred in sentence-medial position. Our results demonstrate that infants segment words from the edges of an utterance more readily than from the middle of an utterance. In addition, infants segment words from utterance-final position just as readily as they segment words from utterance-initial position. Possible explanations for these results, as well as their implications for current models of the development of word segmentation, are discussed.
  • Sekine, K. (2006). Developmental changes in spatial frame of reference among preschoolers: Spontaneous gestures and speech in route descriptions. The Japanese journal of developmental psychology, 17(3), 263-271.

    Abstract

    This research investigated how spontaneous gestures during speech represent "Frames of Reference" (FoR) among preschool children, and how their FoRs change with age. Four-, five-, and six-year-olds (N=55) described the route from the nursery school to their own homes. Analysis of children's utterances and gestures showed that mean length of utterance, speech time, and use of landmarks or right/left terms to describe a route all increased with age. Most 4-year-olds made gestures in the direction of the actual route to their homes, and their hands tended to be raised above the shoulder. In contrast, 6-year-olds used gestures to give directions that did not match the actual route, as if they were creating a virtual space in front of the speaker. Some 5- and 6-year-olds produced gestures that represented survey mapping. These results indicated that development of FoR in childhood may change from an egocentric FoR to a fixed FoR. Verbal encoding skills and commuting experience were also discussed as factors underlying the development of FoR.
  • Senft, G. (2006). Völkerkunde und Linguistik: Ein Plädoyer für interdisziplinäre Kooperation. Zeitschrift für Germanistische Linguistik, 34, 87-104.

    Abstract

    Starting with Hockett’s famous statement on the relationship between linguistics and anthropology - "Linguistics without anthropology is sterile; anthropology without linguistics is blind" - this paper first discusses the historical perspective on the topic. This discussion starts with Herder, Humboldt and Schleiermacher and ends with the present debate on the interrelationship of anthropology and linguistics. Then some excellent examples of interdisciplinary projects within anthropological linguistics (or linguistic anthropology) are presented. Finally, it is illustrated why Hockett is still right.
  • Senft, G. (1998). Body and mind in the Trobriand Islands. Ethos, 26, 73-104. doi:10.1525/eth.1998.26.1.73.

    Abstract

    This article discusses how the Trobriand Islanders speak about body and mind. It addresses the following questions: do the linguistic data fit into theories about lexical universals of body-part terminology? Can we make inferences about the Trobrianders' conceptualization of psychological and physical states on the basis of these data? If a Trobriand Islander sees these idioms as external manifestations of inner states, can we interpret them as a kind of ethnopsychological theory about the body and its role in emotions, knowledge, thought, memory, and so on? Can these idioms be understood as a representation of Trobriand ethnopsychological theory?
  • Senft, G. (1985). Emic or etic or just another catch 22? A repartee to Hartmut Haberland. Journal of Pragmatics, 9, 845.
  • Senft, G. (1998). [Review of the book Anthropological linguistics: An introduction by William A. Foley]. Linguistics, 36, 995-1001.
  • Senft, G. (1996). [Review of the book Comparative Austronesian dictionary: An introduction to Austronesian studies ed. by Darrell T. Tryon]. Linguistics, 34, 1255-1270.
  • Senft, G. (1996). [Review of the book Language contact and change in the Austronesian world ed. by Tom Dutton and Darrell T. Tryon]. Linguistics, 34, 424-430.
  • Senft, G. (1996). [Review of the book Topics in the description of Kiriwina by Ralph Lawton; ed. by Malcolm Ross and Janet Ezard]. Language and Linguistics in Melanesia, 27, 189-196.
  • Senft, G. (1996). [Review of the journal Bulletin of the International String Figure Association, Vol. 1, 1994]. Journal of the Royal Anthropological Institute, 2, 363-364.
  • Senft, G. (2006). A biography in the strict sense of the term [Review of the book Malinowski: Odyssey of an anthropologist 1884-1920, vol. 1 by Michael Young]. Journal of Pragmatics, 38(4), 610-637. doi:10.1016/j.pragma.2005.06.012.
  • Senft, G. (2006). [Review of the book Bilder aus der Deutschen Südsee by Hermann Joseph Hiery]. Paideuma: Mitteilungen zur Kulturkunde, 52, 304-308.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2006). [Review of the book Narrative as social practice: Anglo-Western and Australian Aboriginal oral traditions by Danièle M. Klapproth]. Journal of Pragmatics, 38(8), 1326-1331. doi:10.1016/j.pragma.2005.11.001.
  • Senft, G. (2006). [Review of the book Pacific Pidgins and Creoles: Origins, growth and development by Darrell T. Tryon and Jean-Michel Charpentier]. Linguistics, 44(1), 195-200. doi:10.1515/LING.2006.006.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
  • Senft, G. (1985). How to tell - and understand - a 'dirty' joke in Kilivila. Journal of Pragmatics, 9, 815-834.
  • Senft, G. (1985). Kilivila: Die Sprache der Trobriander. Studium Linguistik, 17/18, 127-138.
  • Senft, G. (1985). Klassifikationspartikel im Kilivila: Glossen zu ihrer morphologischen Rolle, ihrem Inventar und ihrer Funktion in Satz und Diskurs. Linguistische Berichte, 99, 373-393.
  • Senft, G. (1996). Past is present - Present is past: Time and the harvest rituals on the Trobriand Islands. Anthropos, 91, 381-389.
  • Senft, G. (1985). Weyeis Wettermagie: Eine ethnolinguistische Untersuchung von fünf magischen Formeln eines Wettermagiers auf den Trobriand Inseln. Zeitschrift für Ethnologie, 110(2), 67-90.
  • Senft, G. (1985). Trauer auf Trobriand: Eine ethnologisch/-linguistische Fallstudie. Anthropos, 80, 471-492.
  • Seuren, P. A. M. (2006). The natural logic of language and cognition. Pragmatics, 16(1), 103-138.
  • Seuren, P. A. M. (2007). The theory that dare not speak its name: A rejoinder to Mufwene and Francis. Language Sciences, 29(4), 571-573. doi:10.1016/j.langsci.2007.02.001.
  • Seuren, P. A. M. (1996). Berbice Nederlands: Een zeldzame Nederlandse creolentaal. Nederlandse Taalkunde, 1(2), 155-164.
  • Seuren, P. A. M. (1973). [Review of the book A comprehensive etymological dictionary of the English language by Ernst Klein]. Neophilologus, 57(4), 423-426. doi:10.1007/BF01515518.
  • Seuren, P. A. M. (1998). [Review of the book Adverbial subordination; A typology and history of adverbial subordinators based on European languages by Bernd Kortmann]. Cognitive Linguistics, 9(3), 317-319. doi:10.1515/cogl.1998.9.3.315.
  • Seuren, P. A. M. (1979). [Review of the book Approaches to natural language ed. by K. Hintikka, J. Moravcsik and P. Suppes]. Leuvense Bijdragen, 68, 163-168.
  • Seuren, P. A. M. (1973). [Review of the book Philosophy of language by Robert J. Clack and Bertrand Russell]. Foundations of Language, 9(3), 440-441.
  • Seuren, P. A. M. (1973). [Review of the book Semantics. An interdisciplinary reader in philosophy, linguistics and psychology ed. by Danny D. Steinberg and Leon A. Jakobovits]. Neophilologus, 57(2), 198-213. doi:10.1007/BF01514332.