Publications

  • Otten, M., Nieuwland, M. S., & Van Berkum, J. J. A. (2007). Great expectations: Specific lexical anticipation influences the processing of spoken language. BMC Neuroscience, 8: 89. doi:10.1186/1471-2202-8-89.

    Abstract

    Background: Recently, several studies have shown that people use contextual information to make predictions about the rest of the sentence or story as the text unfolds. Using event-related potentials (ERPs), we tested whether these on-line predictions are based on a message-based representation of the discourse or on simple automatic activation by individual words. Subjects heard short stories that were highly constraining for one specific noun, or stories that were not specifically predictive but contained the same prime words as the predictive stories. To test whether listeners make specific predictions, critical nouns were preceded by an adjective that was inflected according to, or in contrast with, the gender of the expected noun. Results: When the message of the preceding discourse was predictive, adjectives with an unexpected gender inflection evoked a negative deflection over right-frontal electrodes between 300 and 600 ms. This effect was not present in the prime control context, indicating that the prediction mismatch does not hinge on word-based priming but is based on the actual message of the discourse. Conclusions: When listening to a constraining discourse, people rapidly make very specific predictions about the remainder of the story as the story unfolds. These predictions are not simply based on word-based automatic activation, but take into account the actual message of the discourse.
  • Özdemir, R., Roelofs, A., & Levelt, W. J. M. (2007). Perceptual uniqueness point effects in monitoring internal speech. Cognition, 105(2), 457-465. doi:10.1016/j.cognition.2006.10.006.

    Abstract

    Disagreement exists about how speakers monitor their internal speech. Production-based accounts assume that self-monitoring mechanisms exist within the production system, whereas comprehension-based accounts assume that monitoring is achieved through the speech comprehension system. Comprehension-based accounts predict perception-specific effects, like the perceptual uniqueness-point effect, in the monitoring of internal speech. We ran an extensive experiment testing this prediction using internal phoneme monitoring and picture naming tasks. Our results show an effect of the perceptual uniqueness point of a word in internal phoneme monitoring in the absence of such an effect in picture naming. These results support comprehension-based accounts of the monitoring of internal speech.
  • Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of semantic information from speech and gesture: Insights from event-related brain potentials. Journal of Cognitive Neuroscience, 19(4), 605-616. doi:10.1162/jocn.2007.19.4.605.

    Abstract

    During language comprehension, listeners use the global semantic representation from previous sentence or discourse context to immediately integrate the meaning of each upcoming word into the unfolding message-level representation. Here we investigate whether communicative gestures that often spontaneously co-occur with speech are processed in a similar fashion and integrated to previous sentence context in the same way as lexical meaning. Event-related potentials were measured while subjects listened to spoken sentences with a critical verb (e.g., knock), which was accompanied by an iconic co-speech gesture (i.e., KNOCK). Verbal and/or gestural semantic content matched or mismatched the content of the preceding part of the sentence. Despite the difference in the modality and in the specificity of meaning conveyed by spoken words and gestures, the latency, amplitude, and topographical distribution of both word and gesture mismatches are found to be similar, indicating that the brain integrates both types of information simultaneously. This provides evidence for the claim that neural processing in language comprehension involves the simultaneous incorporation of information coming from a broader domain of cognition than only verbal semantics. The neural evidence for similar integration of information from speech and gesture emphasizes the tight interconnection between speech and co-speech gestures.
  • Ozyurek, A., & Kelly, S. D. (2007). Gesture, language, and brain. Brain and Language, 101(3), 181-185. doi:10.1016/j.bandl.2007.03.006.
  • Ozyurek, A., & Trabasso, T. (1997). Evaluation during the understanding of narratives. Discourse Processes, 23(3), 305-337. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=hlh&AN=12673020&site=ehost-live.

    Abstract

    Evaluation plays a role in the telling and understanding of narratives, in communicative interaction, emotional understanding, and in psychological well-being. This article reports a study of evaluation by describing how readers monitor the concerns of characters over the course of a narrative. The main hypothesis is that readers track characters' well-being via the expression of a character's internal states. Reader evaluations were revealed in think-aloud protocols obtained during reading of narrative texts, one sentence at a time. Five kinds of evaluative inferences were found: appraisals (good versus bad), preferences (like versus don't like), emotions (happy versus frustrated), goals (want versus don't want), and purposes (to attain or maintain X versus to prevent or avoid X). Readers evaluated all sentences. The mean rate of evaluation per sentence was 0.55. Positive and negative evaluations over the course of the story indicated that things initially went badly for characters, improved with the formulation and execution of goal plans, declined with goal failure, and improved as characters formulated new goals and succeeded. The kind of evaluation made depended upon the episodic category of the event and the event's temporal location in the story. Evaluations also served to explain or predict events. In making evaluations, readers stayed within the frame of the story and the perspectives of the character or narrator. They also moved out of the narrative frame and addressed evaluations towards the experimenter in a communicative context.
  • Paracchini, S., Thomas, A., Castro, S., Lai, C., Paramasivam, M., Wang, Y., Keating, B. J., Taylor, J. M., Hacking, D. F., Scerri, T., Francks, C., Richardson, A. J., Wade-Martins, R., Stein, J. F., Knight, J. C., Copp, A. J., LoTurco, J., & Monaco, A. P. (2006). The chromosome 6p22 haplotype associated with dyslexia reduces the expression of KIAA0319, a novel gene involved in neuronal migration. Human Molecular Genetics, 15(10), 1659-1666. doi:10.1093/hmg/ddl089.

    Abstract

    Dyslexia is one of the most prevalent childhood cognitive disorders, affecting approximately 5% of school-age children. We have recently identified a risk haplotype associated with dyslexia on chromosome 6p22.2 which spans the TTRAP gene and portions of THEM2 and KIAA0319. Here we show that in the presence of the risk haplotype, the expression of the KIAA0319 gene is reduced but the expression of the other two genes remains unaffected. Using in situ hybridization, we detect a very distinct expression pattern of the KIAA0319 gene in the developing cerebral neocortex of mouse and human fetuses. Moreover, interference with rat Kiaa0319 expression in utero leads to impaired neuronal migration in the developing cerebral neocortex. These data suggest a direct link between a specific genetic background and a biological mechanism leading to the development of dyslexia: the risk haplotype on chromosome 6p22.2 down-regulates the KIAA0319 gene which is required for neuronal migration during the formation of the cerebral neocortex.
  • Parkes, L. M., Bastiaansen, M. C. M., & Norris, D. G. (2006). Combining EEG and fMRI to investigate the postmovement beta rebound. NeuroImage, 29(3), 685-696. doi:10.1016/j.neuroimage.2005.08.018.

    Abstract

    The relationship between synchronous neuronal activity as measured with EEG and the blood oxygenation level dependent (BOLD) signal as measured during fMRI is not clear. This work investigates the relationship by combining EEG and fMRI measures of the strong increase in beta frequency power following movement, the so-called post-movement beta rebound (PMBR). The time course of the PMBR, as measured by EEG, was included as a regressor in the fMRI analysis, allowing identification of a region of associated BOLD signal increase in the sensorimotor cortex, with the most significant region in the post-central sulcus. The increase in the BOLD signal suggests that the number of active neurons and/or their synaptic rate is increased during the PMBR. The duration of the BOLD response curve in the PMBR region is significantly longer than in the activated motor region, and is well fitted by a model including both motor and PMBR regressors. An intersubject correlation between the BOLD signal amplitude associated with the PMBR regressor and the PMBR strength as measured with EEG provides further evidence that this region is a source of the PMBR. There is a strong intra-subject correlation between the BOLD signal amplitude in the sensorimotor cortex during movement and the PMBR strength as measured by EEG, suggesting either that the motor activity itself, or somatosensory inputs associated with the motor activity, influence the PMBR. This work provides further evidence for a BOLD signal change associated with changes in neuronal synchrony, so opening up the possibility of studying other event-related oscillatory changes using fMRI.
  • Pereiro Estevan, Y., Wan, V., & Scharenborg, O. (2007). Finding maximum margin segments in speech. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), Vol. IV, 937-940. doi:10.1109/ICASSP.2007.367225.

    Abstract

    Maximum margin clustering (MMC) is a relatively new and promising kernel method. In this paper, we apply MMC to the task of unsupervised speech segmentation. We present three automatic speech segmentation methods based on MMC, which are tested on TIMIT and evaluated on the level of phoneme boundary detection. The results show that MMC is highly competitive with existing unsupervised methods for the automatic detection of phoneme boundaries. Furthermore, initial analyses show that MMC is a promising method for the automatic detection of sub-phonetic information in the speech signal.
  • Perniss, P. M. (2007). Achieving spatial coherence in German sign language narratives: The use of classifiers and perspective. Lingua, 117(7), 1315-1338. doi:10.1016/j.lingua.2005.06.013.

    Abstract

    Spatial coherence in discourse relies on the use of devices that provide information about where referents are and where events take place. In signed language, two primary devices for achieving and maintaining spatial coherence are the use of classifier forms and signing perspective. This paper gives a unified account of the relationship between perspective and classifiers, and divides the range of possible correspondences between these two devices into prototypical and non-prototypical alignments. An analysis of German Sign Language narratives of complex events investigates the role of different classifier-perspective constructions in encoding spatial information about location, orientation, action and motion, as well as size and shape of referents. In particular, I show how non-prototypical alignments, including simultaneity of perspectives, contribute to the maintenance of spatial coherence, and provide functional explanations in terms of efficiency and informativeness constraints on discourse.
  • Petersson, K. M., Elfgren, C., & Ingvar, M. (1997). A dynamic role of the medial temporal lobe during retrieval of declarative memory in man. NeuroImage, 6, 1-11.

    Abstract

    Understanding the role of the medial temporal lobe (MTL) in learning and memory is an important problem in cognitive neuroscience. Memory and learning processes that depend on the function of the MTL and related diencephalic structures (e.g., the anterior and mediodorsal thalamic nuclei) are defined as declarative. We have studied the MTL activity as indicated by regional cerebral blood flow with positron emission tomography and statistical parametric mapping during recall of abstract designs in a less practiced memory state as well as in a well-practiced (well-encoded) memory state. The results showed an increased activity of the MTL bilaterally (including parahippocampal gyrus extending into hippocampus proper, as well as anterior lingual and anterior fusiform gyri) during retrieval in the less practiced memory state compared to the well-practiced memory state, indicating a dynamic role of the MTL in retrieval during the learning processes. The results also showed that the activation of the MTL decreases as the subjects learn to draw abstract designs from memory, indicating a changing role of the MTL during recall in the earlier stages of acquisition compared to the well-encoded declarative memory state.
  • Petersson, K. M., Gisselgard, J., Gretzer, M., & Ingvar, M. (2006). Interaction between a verbal working memory network and the medial temporal lobe. NeuroImage, 33(4), 1207-1217. doi:10.1016/j.neuroimage.2006.07.042.

    Abstract

    The irrelevant speech effect illustrates that sounds that are irrelevant to a visually presented short-term memory task still interfere with neuronal function. In the present study we explore the functional and effective connectivity of such interference. The functional connectivity analysis suggested an interaction between the level of irrelevant speech and the correlation between, in particular, the left superior temporal region, associated with verbal working memory, and the left medial temporal lobe. Based on this psycho-physiological interaction, and to broaden the understanding of this result, we performed a network analysis, using a simple network model for verbal working memory, to analyze its interaction with the medial temporal lobe memory system. The results showed dissociations in terms of network interactions between frontal as well as parietal and temporal areas in relation to the medial temporal lobe. The results of the present study suggest that a transition from phonological loop processing towards an engagement of episodic processing might take place during the processing of interfering irrelevant sounds. We speculate that, in response to the irrelevant sounds, this reflects a dynamic shift in processing as suggested by a closer interaction between a verbal working memory system and the medial temporal lobe memory system.
  • Petersson, K. M., Silva, C., Castro-Caldas, A., Ingvar, M., & Reis, A. (2007). Literacy: A cultural influence on functional left-right differences in the inferior parietal cortex. European Journal of Neuroscience, 26(3), 791-799. doi:10.1111/j.1460-9568.2007.05701.x.

    Abstract

    The current understanding of hemispheric interaction is limited. Functional hemispheric specialization is likely to depend on both genetic and environmental factors. In the present study we investigated the importance of one factor, literacy, for the functional lateralization in the inferior parietal cortex in two independent samples of literate and illiterate subjects. The results show that the illiterate group are consistently more right-lateralized than their literate controls. In contrast, the two groups showed a similar degree of left-right differences in early speech-related regions of the superior temporal cortex. These results provide evidence suggesting that a cultural factor, literacy, influences the functional hemispheric balance in reading and verbal working memory-related regions. In a third sample, we investigated grey and white matter with voxel-based morphometry. The results showed differences between literacy groups in white matter intensities related to the mid-body region of the corpus callosum and the inferior parietal and parietotemporal regions (literate > illiterate). There were no corresponding differences in the grey matter. This suggests that the influence of literacy on brain structure related to reading and verbal working memory is affecting large-scale brain connectivity more than grey matter per se.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Piekema, C., Kessels, R. P. C., Mars, R. B., Petersson, K. M., & Fernández, G. (2006). The right hippocampus participates in short-term memory maintenance of object–location associations. NeuroImage, 33(1), 374-382. doi:10.1016/j.neuroimage.2006.06.035.

    Abstract

    Doubts have been cast on the strict dissociation between short- and long-term memory systems. Specifically, several neuroimaging studies have shown that the medial temporal lobe, a region almost invariably associated with long-term memory, is involved in active short-term memory maintenance. Furthermore, a recent study in hippocampally lesioned patients has shown that the hippocampus is critically involved in associating objects and their locations, even when the delay period lasts only 8 s. However, the critical feature that causes the medial temporal lobe, and in particular the hippocampus, to participate in active maintenance is still unknown. This study was designed in order to explore hippocampal involvement in active maintenance of spatial and non-spatial associations. Eighteen participants performed a delayed-match-to-sample task in which they had to maintain either object–location associations, color–number association, single colors, or single locations. Whole-brain activity was measured using event-related functional magnetic resonance imaging and analyzed using a random effects model. Right lateralized hippocampal activity was evident when participants had to maintain object–location associations, but not when they had to maintain object–color associations or single items. The present results suggest a hippocampal involvement in active maintenance when feature combinations that include spatial information have to be maintained online.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1997). Stylistic variation at the “single-word” stage: Relations between maternal speech characteristics and children's vocabulary composition and usage. Child Development, 68(5), 807-819. doi:10.1111/j.1467-8624.1997.tb01963.x.

    Abstract

    In this study we test a number of different claims about the nature of stylistic variation at the “single-word” stage by examining the relation between variation in early vocabulary composition, variation in early language use, and variation in the structural and functional properties of mothers' child-directed speech. Maternal-report and observational data were collected for 26 children at 10, 50, and 100 words. These were then correlated with a variety of different measures of maternal speech at 10 words. The results show substantial variation in the percentage of common nouns and unanalyzed phrases in children's vocabularies, and significant relations between this variation and the way in which language is used by the child. They also reveal significant relations between the way in which mothers use language at 10 words and the way in which their children use language at 50 words, and between certain formal properties of mothers' speech at 10 words and the percentage of common nouns and unanalyzed phrases in children's early vocabularies. However, most of these relations disappear when an attempt is made to control for possible effects of the child on the mother at Time 1. The exception is a significant negative correlation between mothers' tendency to produce speech that illustrates word boundaries and the percentage of unanalyzed phrases at 50 and 100 words. This suggests that mothers whose speech provides the child with information about where new words begin and end tend to have children with few unanalyzed phrases in their early vocabularies.
  • Poletiek, F. H. (2006). De dwingende macht van een Goed Verhaal [Book review of Vincent plast op de grond: Nachtmerries in het Nederlands recht by W. A. Wagenaar]. De Psycholoog, 41, 460-462.
  • Poletiek, F. H. (1997). De wet 'bijzondere opnemingen in psychiatrische ziekenhuizen' aan de cijfers getoetst. Maandblad voor Geestelijke Volksgezondheid, 4, 349-361.
  • Poletiek, F. H. (in preparation). Inside the juror: The psychology of juror decision-making [Review of De geest van de jury (1997)].
  • Praamstra, P., Hagoort, P., Maassen, B., & Crul, T. (1991). Word deafness and auditory cortical function: A case history and hypothesis. Brain, 114, 1197-1225. doi:10.1093/brain/114.3.1197.

    Abstract

    A patient who already had Wernicke's aphasia due to a left temporal lobe lesion suffered a severe deterioration specifically of auditory language comprehension, subsequent to right temporal lobe infarction. A detailed comparison of his new condition with his language status before the second stroke revealed that the newly acquired deficit was limited to tasks related to auditory input. Further investigations demonstrated a speech perceptual disorder, which we analysed as due to deficits both at the level of general auditory processes and at the level of phonetic analysis. We discuss some arguments related to hemisphere specialization of phonetic processing and to the disconnection explanation of word deafness that support the hypothesis of word deafness being generally caused by mixed deficits.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L.M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2006). Lexical and default stress assignment in reading Greek. Journal of Research in Reading, 29(4), 418-432. doi:10.1111/j.1467-9817.2006.00316.x.

    Abstract

    Greek is a language with lexical stress that marks stress orthographically with a special diacritic. Thus, the orthography and the lexicon constitute potential sources of stress assignment information in addition to any possible general default metrical pattern. Here, we report two experiments with secondary education children reading aloud pseudo-word stimuli, in which we manipulated the availability of lexical (using stimuli resembling particular words) and visual (existence and placement of the diacritic) information. The reliance on the diacritic was found to be imperfect. Strong lexical effects as well as a default metrical pattern stressing the penultimate syllable were revealed. Reading models must be extended to account for multisyllabic word reading including, in particular, stress assignment based on the interplay among multiple possible sources of information.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operation(s) that support the binding and storing of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation, in posterior midline structures, of the internal representation of the first event, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word- or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent) we could examine to what extent subjects focused on form rather than meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition compared to the neutral and incongruent word conditions. These results suggest that performance on phonological word length comparisons is dependent on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Reis, A., Faísca, L., Ingvar, M., & Petersson, K. M. (2006). Color makes a difference: Two-dimensional object naming in literate and illiterate subjects. Brain and Cognition, 60, 49-54. doi:10.1016/j.bandc.2005.09.012.

    Abstract

    Previous work has shown that illiterate subjects are better at naming two-dimensional representations of real objects when presented as colored photos as compared to black and white drawings. This raises the question of whether color or textural details selectively improve object recognition and naming in illiterate compared to literate subjects. In this study, we investigated whether the surface texture and/or color of objects is used to access stored object knowledge in illiterate subjects. A group of illiterate subjects and a matched literate control group were compared on an immediate object naming task with four conditions: color and black and white (i.e., grey-scaled) photos, as well as color and black and white (i.e., grey-scaled) drawings of common everyday objects. The results show that illiterate subjects perform significantly better when the stimuli are colored and this effect is independent of the photographic detail. In addition, there were significant differences between the literacy groups in the black and white condition for both drawings and photos. These results suggest that color object information contributes to object recognition. This effect was particularly prominent in the illiterate group.
  • Rey, A., & Schiller, N. O. (2006). A case of normal word reading but impaired letter naming. Journal of Neurolinguistics, 19(2), 87-95. doi:10.1016/j.jneuroling.2005.09.003.

    Abstract

    A case of a word/letter dissociation is described. The present patient has a quasi-normal word reading performance (both at the level of speed and accuracy) while he has major problems in nonword and letter reading. More specifically, he has strong difficulties in retrieving letter names but preserved abilities in letter identification. This study complements previous cases reporting a similar word/letter dissociation by focusing more specifically on word reading and letter naming latencies. The results provide new constraints for modeling the role of letter knowledge within reading processes and during reading acquisition or rehabilitation.
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi:10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. 46 5–7 year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language, but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide, for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Robinson, S. (2006). The phoneme inventory of the Aita dialect of Rotokas. Oceanic Linguistics, 45(1), 206-209.

    Abstract

    Rotokas is famous for possessing one of the world’s smallest phoneme inventories. According to one source, the Central dialect of Rotokas possesses only 11 segmental phonemes (five vowels and six consonants) and lacks nasals while the Aita dialect possesses a similar-sized inventory in which nasals replace voiced stops. However, recent fieldwork reveals that the Aita dialect has, in fact, both voiced and nasal stops, making for an inventory of 14 segmental phonemes (five vowels and nine consonants). The correspondences between Central and Aita Rotokas suggest that the former is innovative with respect to its consonant inventory and the latter conservative, and that the small inventory of Central Rotokas arose by collapsing the distinction between voiced and nasal stops.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A. (2006). The influence of spelling on phonological encoding in word reading, object naming, and word generation. Psychonomic Bulletin & Review, 13(1), 33-37.

    Abstract

    Does the spelling of a word mandatorily constrain spoken word production, or does it do so only when spelling is relevant for the production task at hand? Damian and Bowers (2003) reported spelling effects in spoken word production in English using a prompt–response word generation task. Preparation of the response words was disrupted when the responses shared initial phonemes that differed in spelling, suggesting that spelling constrains speech production mandatorily. The present experiments, conducted in Dutch, tested for spelling effects using word production tasks in which spelling was clearly relevant (oral reading in Experiment 1) or irrelevant (object naming and word generation in Experiments 2 and 3, respectively). Response preparation was disrupted by spelling inconsistency only with the word reading, suggesting that the spelling of a word constrains spoken word production in Dutch only when it is relevant for the word production task at hand.
  • Roelofs, A. (2006). Context effects of pictures and words in naming objects, reading words, and generating simple phrases. Quarterly Journal of Experimental Psychology, 59(10), 1764-1784. doi:10.1080/17470210500416052.

    Abstract

    In five language production experiments it was examined which aspects of words are activated in memory by context pictures and words. Context pictures yielded Stroop-like and semantic effects on response times when participants generated gender-marked noun phrases in response to written words (Experiment 1A). However, pictures yielded no such effects when participants simply read aloud the noun phrases (Experiment 2). Moreover, pictures yielded a gender congruency effect in generating gender-marked noun phrases in response to the written words (Experiments 3A and 3B). These findings suggest that context pictures activate lemmas (i.e., representations of syntactic properties), which leads to effects only when lemmas are needed to generate a response (i.e., in Experiments 1A, 3A, and 3B, but not in Experiment 2). Context words yielded Stroop-like and semantic effects in picture naming (Experiment 1B). Moreover, words yielded Stroop-like but no semantic effects in reading nouns (Experiment 4) and in generating noun phrases (Experiment 5). These findings suggest that context words activate the lemmas and forms of their names, which leads to semantic effects when lemmas are required for responding (Experiment 1B) but not when only the forms are required (Experiment 4). WEAVER++ simulations of the results are presented.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Van Turennout, M., & Coles, M. G. H. (2006). Anterior cingulate cortex activity can be independent of response conflict in stroop-like tasks. Proceedings of the National Academy of Sciences of the United States of America, 103(37), 13884-13889. doi:10.1073/pnas.0606265103.

    Abstract

    Cognitive control includes the ability to formulate goals and plans of action and to follow these while facing distraction. Previous neuroimaging studies have shown that the presence of conflicting response alternatives in Stroop-like tasks increases activity in dorsal anterior cingulate cortex (ACC), suggesting that the ACC is involved in cognitive control. However, the exact nature of ACC function is still under debate. The prevailing conflict detection hypothesis maintains that the ACC is involved in performance monitoring. According to this view, ACC activity reflects the detection of response conflict and acts as a signal that engages regulative processes subserved by lateral prefrontal brain regions. Here, we provide evidence from functional MRI that challenges this view and favors an alternative view, according to which the ACC has a role in regulation itself. Using an arrow–word Stroop task, subjects responded to incongruent, congruent, and neutral stimuli. A critical prediction made by the conflict detection hypothesis is that ACC activity should be increased only when conflicting response alternatives are present. Our data show that ACC responses are larger for neutral than for congruent stimuli, in the absence of response conflict. This result demonstrates the engagement of the ACC in regulation itself. A computational model of Stroop-like performance instantiating a version of the regulative hypothesis is shown to account for our findings.
  • Roelofs, A. (2006). Functional architecture of naming dice, digits, and number words. Language and Cognitive Processes, 21(1/2/3), 78-111. doi:10.1080/01690960400001846.

    Abstract

    Five chronometric experiments examined the functional architecture of naming dice, digits, and number words. Speakers named pictured dice, Arabic digits, or written number words, while simultaneously trying to ignore congruent or incongruent dice, digit, or number word distractors presented at various stimulus onset asynchronies (SOAs). Stroop-like interference and facilitation effects were obtained from digits and words on dice naming latencies, but not from dice on digit and word naming latencies. In contrast, words affected digit naming latencies and digits affected word naming latencies to the same extent. The peak of the interference was always around SOA = 0 ms, whereas facilitation was constant across distractor-first SOAs. These results suggest that digit naming is achieved like word naming rather than dice naming. WEAVER++ simulations of the results are reported.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Roelofs, A. (2006). Modeling the control of phonological encoding in bilingual speakers. Bilingualism: Language and Cognition, 9(2), 167-176. doi:10.1017/S1366728906002513.

    Abstract

    Phonological encoding is the process by which speakers retrieve phonemic segments for morphemes from memory and use the segments to assemble phonological representations of words to be spoken. When conversing in one language, bilingual speakers have to resist the temptation of encoding word forms using the phonological rules and representations of the other language. We argue that the activation of phonological representations is not restricted to the target language and that the phonological representations of languages are not separate. We advance a view of bilingual control in which condition-action rules determine what is done with the activated phonological information depending on the target language. This view is computationally implemented in the WEAVER++ model. We present WEAVER++ simulations of the cognate facilitation effect (Costa, Caramazza and Sebastián-Gallés, 2000) and the between-language phonological facilitation effect of spoken distractor words in object naming (Hermans, Bongaerts, de Bot and Schreuder, 1998).
  • Roelofs, A. (1997). The WEAVER model of word-form encoding in speech production. Cognition, 64, 249-284. doi:10.1016/S0010-0277(97)00027-9.

    Abstract

    Lexical access in speaking consists of two major steps: lemma retrieval and word-form encoding. In Roelofs (Roelofs, A. 1992a. Cognition, 42, 107-142; Roelofs, A. 1993. Cognition, 47, 59-87), I described a model of lemma retrieval. The present paper extends this work by presenting a comprehensive model of the second access step, word-form encoding. The model is called WEAVER (Word-form Encoding by Activation and VERification). Unlike other models of word-form generation, WEAVER is able to provide accounts of response time data, particularly from the picture-word interference paradigm and the implicit priming paradigm. Its key features are (1) retrieval by spreading activation, (2) verification of activated information by a production rule, (3) a rightward incremental construction of phonological representations using a principle of active syllabification (syllables are constructed on the fly rather than stored with lexical items), (4) active competitive selection of syllabic motor programs using a mathematical formalism that generates response times, and (5) the association of phonological speech errors with the selection of syllabic motor programs due to the failure of verification.
  • Rohlfing, K., Loehr, D., Duncan, S., Brown, A., Franklin, A., Kimbara, I., Milde, J.-T., Parrill, F., Rose, T., Schmidt, T., Sloetjes, H., Thies, A., & Wellinghof, S. (2006). Comparison of multimodal annotation tools - workshop report. Gesprächsforschung - Online-Zeitschrift zur Verbalen Interaktion, 7, 99-123.
  • Rösler, D., & Skiba, R. (1988). Möglichkeiten für den Einsatz einer Lehrmaterial-Datenbank in der Lehrerfortbildung. Deutsch lernen, 14(1), 24-31.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rowland, C. F., & Fletcher, S. L. (2006). The effect of sampling on estimates of lexical specificity and error rates. Journal of Child Language, 33(4), 859-877. doi:10.1017/S0305000906007537.

    Abstract

    Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as to overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together, however they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker's turn: A cognitive cornerstone of conversation. Language, 82(3), 515-535.

    Abstract

    A key mechanism in the organization of turns at talk in conversation is the ability to anticipate or PROJECT the moment of completion of a current speaker’s turn. Some authors suggest that this is achieved via lexicosyntactic cues, while others argue that projection is based on intonational contours. We tested these hypotheses in an on-line experiment, manipulating the presence of symbolic (lexicosyntactic) content and intonational contour of utterances recorded in natural conversations. When hearing the original recordings, subjects can anticipate turn endings with the same degree of accuracy attested in real conversation. With intonational contour entirely removed (leaving intact words and syntax, with a completely flat pitch), there is no change in subjects’ accuracy of end-of-turn projection. But in the opposite case (with original intonational contour intact, but with no recognizable words), subjects’ performance deteriorates significantly. These results establish that the symbolic (i.e. lexicosyntactic) content of an utterance is necessary (and possibly sufficient) for projecting the moment of its completion, and thus for regulating conversational turn-taking. By contrast, and perhaps surprisingly, intonational contour is neither necessary nor sufficient for end-of-turn projection.
  • De Ruiter, J. P. (2006). Can gesticulation help aphasic people speak, or rather, communicate? Advances in Speech-Language Pathology, 8(2), 124-127. doi:10.1080/14417040600667285.

    Abstract

    As Rose (2006) discusses in the lead article, two camps can be identified in the field of gesture research: those who believe that gesticulation enhances communication by providing extra information to the listener, and those who believe that gesticulation is not communicative, but rather facilitates speaker-internal word-finding processes. I review a number of key studies relevant to this controversy, and conclude that the available empirical evidence supports the notion that gesture is a communicative device which can compensate for problems in speech by providing information in gesture. Following that, I discuss the finding by Rose and Douglas (2001) that making gestures does facilitate word production in some patients with aphasia. I argue that the gestures produced in the experiment by Rose and Douglas are not guaranteed to be of the same kind as the gestures that are produced spontaneously under naturalistic, communicative conditions, which makes it difficult to generalise from that particular study to gesture behavior in general. As a final point, I encourage researchers in the area of aphasia to put more emphasis on communication in naturalistic contexts (e.g., conversation) when testing the capabilities of people with aphasia.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer, Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first-stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs using a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second-stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances, each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM added to the lexicon of the second-stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
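    The two-pass strategy described in the abstract can be sketched roughly as follows. This is a toy illustration that matches phone strings with a generic string-similarity measure; the function names, the OOV threshold, and the example lexica are invented and are not part of the actual SpeM-based system.

        # Toy sketch of the two-pass idea: a small first-stage lexicon, an OOV
        # fallback that shortlists candidates from a large lexicon, and a second
        # recognition pass over the extended lexicon.
        from difflib import SequenceMatcher

        def similarity(a, b):
            return SequenceMatcher(None, a, b).ratio()

        def two_pass_recognize(observed_phones, major_lexicon, fallback_lexicon,
                               n_best=3, oov_threshold=0.8):
            # Pass 1: match against the small first-stage lexicon only.
            best = max(major_lexicon, key=lambda w: similarity(observed_phones, w))
            if similarity(observed_phones, best) >= oov_threshold:
                return best                       # handled by the first stage
            # OOV detected: shortlist N-best candidates from the large fallback
            # lexicon (the role SpeM plays on the phone graph of the OOV region).
            candidates = sorted(fallback_lexicon,
                                key=lambda w: similarity(observed_phones, w),
                                reverse=True)[:n_best]
            # Pass 2: re-recognize with the extended lexicon.
            extended = list(major_lexicon) + candidates
            return max(extended, key=lambda w: similarity(observed_phones, w))

        # Example: a "rare" city name absent from the first-stage lexicon.
        print(two_pass_recognize("zaltbommel",
                                 major_lexicon=["amsterdam", "rotterdam", "utrecht"],
                                 fallback_lexicon=["zaltbommel", "zandvoort", "zutphen"]))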
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer, Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition, that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
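    The 'early recognition' decision can be illustrated with a small sketch. The activation formula and the two predictor thresholds below are invented for illustration; they are not SpeM's actual scoring.

        # Toy illustration of deciding that a word is recognised before its acoustic
        # realisation is complete. The activation measure and both thresholds are
        # placeholders, not SpeM's.
        import math

        def word_activation(neg_log_likelihood, phones_processed):
            # Higher activation for a lower cumulative cost per processed phone.
            return math.exp(-neg_log_likelihood / max(phones_processed, 1))

        def recognised_early(neg_log_likelihood, phones_processed, total_phones,
                             activation_threshold=0.5, min_proportion=0.6):
            # Predictor 1: the word activation must be high enough.
            act = word_activation(neg_log_likelihood, phones_processed)
            # Predictor 2: enough of the word must already have been processed that
            # the risk of a future mismatch is small.
            proportion = phones_processed / total_phones
            return act >= activation_threshold and proportion >= min_proportion

        # After 5 of 7 phones of a candidate word, with a cumulative cost of 2.0:
        print(recognised_early(2.0, phones_processed=5, total_phones=7))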
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences there is, however, lately a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
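    The classifier comparison can be sketched with off-the-shelf tools. This is a minimal illustration on synthetic data; the real study used acoustic features, articulatory feature labels, and its own training regime.

        # Minimal sketch of comparing an SVM and an MLP on a multi-class task, in the
        # spirit of the comparison above. The data are synthetic stand-ins.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.neural_network import MLPClassifier

        X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                                   n_classes=5, n_clusters_per_class=1, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Train the SVM on only a fraction of the training data and the MLP on all of
        # it, echoing the observation that the SVMs needed less training material.
        svm = SVC(kernel="rbf").fit(X_train[:500], y_train[:500])
        mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
        mlp.fit(X_train, y_train)

        print("SVM accuracy:", svm.score(X_test, y_test))
        print("MLP accuracy:", mlp.score(X_test, y_test))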
  • Schiller, N. O., Schuhmann, T., Neyndorff, A. C., & Jansma, B. M. (2006). The influence of semantic category membership on syntactic decisions: A study using event-related brain potentials. Brain Research, 1082(1), 153-164. doi:10.1016/j.brainres.2006.01.087.

    Abstract

    An event-related brain potentials (ERP) experiment was carried out to investigate the influence of semantic category membership on syntactic decision-making. Native speakers of German viewed a series of words that were semantically marked or unmarked for gender and made go/no-go decisions about the grammatical gender of those words. The electrophysiological results indicated that participants could make a gender decision earlier when words were semantically gender-marked than when they were semantically gender-unmarked. Our data provide evidence for the influence of semantic category membership on the decision of the syntactic gender of a visually presented German noun. More specifically, our results support models of language comprehension in which semantic information processing of words is initiated before syntactic information processing is finalized.
  • Schiller, N. O., & Costa, A. (2006). Different selection principles of freestanding and bound morphemes in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(5), 1201-1207. doi:10.1037/0278-7393.32.5.1201.

    Abstract

    Freestanding and bound morphemes differ in many (psycho)linguistic aspects. Some theorists have claimed that the representation and retrieval of freestanding and bound morphemes in the course of language production are governed by similar processing mechanisms. Alternatively, it has been proposed that both types of morphemes may be selected for production in different ways. In this article, the authors first review the available experimental evidence related to this topic and then present new experimental data pointing to the notion that freestanding and bound morphemes are retrieved following distinct processing principles: freestanding morphemes are subject to competition, whereas bound morphemes are not.
  • Schiller, N. O. (2006). Lexical stress encoding in single word production estimated by event-related brain potentials. Brain Research, 1112(1), 201-212. doi:10.1016/j.brainres.2006.07.027.

    Abstract

    An event-related brain potentials (ERPs) experiment was carried out to investigate the time course of lexical stress encoding in language production. Native speakers of Dutch viewed a series of pictures corresponding to bisyllabic names which were either stressed on the first or on the second syllable and made go/no-go decisions on the lexical stress location of those picture names. Behavioral results replicated a pattern that was observed earlier, i.e. faster button-press latencies to initial as compared to final stress targets. The electrophysiological results indicated that participants could make a lexical stress decision significantly earlier when picture names had initial than when they had final stress. Moreover, the present data suggest the time course of lexical stress encoding during single word form formation in language production. When word length is corrected for, the temporal interval for lexical stress encoding specified by the current ERP results falls into the time window previously identified for phonological encoding in language production.
  • Schiller, N. O., Jansma, B. M., Peters, J., & Levelt, W. J. M. (2006). Monitoring metrical stress in polysyllabic words. Language and Cognitive Processes, 21(1/2/3), 112-140. doi:10.1080/01690960400001861.

    Abstract

    This study investigated the monitoring of metrical stress information in internally generated speech. In Experiment 1, Dutch participants were asked to judge whether bisyllabic picture names had initial or final stress. Results showed significantly faster decision times for initially stressed targets (e.g., KAno ‘‘canoe’’) than for targets with final stress (e.g., kaNON ‘‘cannon’’; capital letters indicate stressed syllables). It was demonstrated that monitoring latencies are not a function of the picture naming or object recognition latencies to the same pictures. Experiments 2 and 3 replicated the outcome of the first experiment with trisyllabic picture names. These results are similar to the findings of Wheeldon and Levelt (1995) in a segment monitoring task. The outcome might be interpreted to demonstrate that phonological encoding in speech production is a rightward incremental process. Alternatively, the data might reflect the sequential nature of a perceptual mechanism used to monitor lexical stress.
  • Schiller, N. O., & Caramazza, A. (2006). Grammatical gender selection and the representation of morphemes: The production of Dutch diminutives. Language and Cognitive Processes, 21, 945-973. doi:10.1080/01690960600824344.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners. Pictures of simple objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a noun phrase with the appropriate gender-marked determiner. Auditory (Experiment 1) or visual cues (Experiment 2) indicated whether the noun was to be produced in its standard or diminutive form. Results revealed a cost in naming latencies when target and distractor take different determiner forms independent of whether or not they have the same gender. This replicates earlier results showing that congruency effects are due to competition during the selection of determiner forms rather than gender features. The overall pattern of results supports the view that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from incongruent grammatical features. Selection of the correct determiner form, however, is a competitive process, implying that lexical node and grammatical feature selection operate with distinct principles.
  • Schiller, N. O., Meyer, A. S., & Levelt, W. J. M. (1997). The syllabic structure of spoken words: Evidence from the syllabification of intervocalic consonants. Language and Speech, 40(2), 103-140.

    Abstract

    A series of experiments was carried out to investigate the syllable affiliation of intervocalic consonants following short vowels, long vowels, and schwa in Dutch. Special interest was paid to words such as letter ['leter] ''id.,'' where a short vowel is followed by a single consonant. On phonological grounds one may predict that the first syllable should always be closed, but earlier psycholinguistic research had shown that speakers tend to leave these syllables open. In our experiments, bisyllabic word forms were presented aurally, and participants produced their syllables in reversed order (Experiments 1 through 5), or repeated the words inserting a pause between the syllables (Experiment 6). The results showed that participants generally closed syllables with a short vowel. However, in a significant number of the cases they produced open short vowel syllables. Syllables containing schwa, like syllables with a long vowel, were hardly ever closed. Word stress, the phonetic quality of the vowel in the first syllable, and the experimental context influenced syllabification. Taken together, the experiments show that native speakers syllabify bisyllabic Dutch nouns in accordance with a small set of prosodic output constraints. To account for the variability of the results, we propose that these constraints differ in their probabilities of being applied.
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and both gave strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
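    The contrast between combining at the genotype level (mega-analysis) and combining per-study results (meta-analysis) can be illustrated with a deliberately simple example. The combining rule below is a generic sample-size-weighted mean, not the genome scan meta-analysis method used in the paper, and the numbers are invented.

        # Toy contrast between "mega-analysis" (pool raw data, analyse once) and
        # "meta-analysis" (analyse each sample, then combine summary statistics).
        import statistics

        samples = [
            [1.2, 0.8, 1.5, 1.1],          # study 1 trait scores (toy numbers)
            [0.9, 1.4, 1.0],               # study 2
            [1.3, 1.6, 1.2, 1.8, 1.0],     # study 3
        ]

        # Mega-analysis: combine at the data level, then compute one estimate.
        pooled = [x for s in samples for x in s]
        mega_estimate = statistics.mean(pooled)

        # Meta-analysis: estimate per study, then combine weighted by sample size.
        per_study = [(statistics.mean(s), len(s)) for s in samples]
        meta_estimate = sum(m * n for m, n in per_study) / sum(n for _, n in per_study)

        # Identical for a simple mean; for more complex analyses (such as multipoint
        # linkage) the two routes can diverge, which is why the paper compares them.
        print(mega_estimate, meta_estimate)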
  • Seidl, A., & Johnson, E. K. (2006). Infant word segmentation revisited: Edge alignment facilitates target extraction. Developmental Science, 9(6), 565-573.

    Abstract

    In a landmark study, Jusczyk and Aslin (1995) demonstrated that English-learning infants are able to segment words from continuous speech at 7.5 months of age. In the current study, we explored the possibility that infants segment words from the edges of utterances more readily than the middle of utterances. The same procedure was used as in Jusczyk and Aslin (1995); however, our stimuli were controlled for target word location and infants were given a shorter familiarization time to avoid ceiling effects. Infants were familiarized to one word that always occurred at the edge of an utterance (sentence-initial position for half of the infants and sentence-final position for the other half) and one word that always occurred in sentence-medial position. Our results demonstrate that infants segment words from the edges of an utterance more readily than from the middle of an utterance. In addition, infants segment words from utterance-final position just as readily as they segment words from utterance-initial position. Possible explanations for these results, as well as their implications for current models of the development of word segmentation, are discussed.
  • Sekine, K. (2006). Developmental changes in spatial frame of reference among preschoolers: Spontaneous gestures and speech in route descriptions. The Japanese Journal of Developmental Psychology, 17(3), 263-271.

    Abstract

    This research investigated how spontaneous gestures during speech represent “Frames of Reference” (FoR) among preschool children, and how their FoRs change with age. Four-, five-, and six-year-olds (N=55) described the route from the nursery school to their own homes. Analysis of children’s utterances and gestures showed that mean length of utterance, speech time, and use of landmarks or right/left terms to describe a route all increased with age. Most 4-year-olds made gestures in the direction of the actual route to their homes, and their hands tended to be raised above the shoulder. In contrast, 6-year-olds used gestures to give directions that did not match the actual route, as if they were creating a virtual space in front of the speaker. Some 5- and 6-year-olds produced gestures that represented survey mapping. These results indicated that the development of FoR in childhood may change from an egocentric FoR to a fixed FoR. Verbal encoding skills and commuting experience are also discussed as factors underlying the development of FoR.
  • Senft, G. (2006). Völkerkunde und Linguistik: Ein Plädoyer für interdisziplinäre Kooperation. Zeitschrift für Germanistische Linguistik, 34, 87-104.

    Abstract

    Starting with Hockett’s famous statement on the relationship between linguistics and anthropology - “Linguistics without anthropology is sterile; anthropology without linguistics is blind” - this paper first discusses the historic perspective of the topic. This discussion starts with Herder, Humboldt and Schleiermacher and ends with the present debate on the interrelationship of anthropology and linguistics. Then some excellent examples of interdisciplinary projects within anthropological linguistics (or linguistic anthropology) are presented. Finally, the paper illustrates why Hockett is still right.
  • Senft, G. (1988). A grammar of Manam by Frantisek Lichtenberk [Book review]. Language and linguistics in Melanesia, 18, 169-173.
  • Senft, G. (1991). [Review of the book Einführung in die deskriptive Linguistik by Michael Dürr and Peter Schlobinski]. Linguistics, 29, 722-725.
  • Senft, G. (1988). [Review of the book Functional syntax: Anaphora, discourse and empathy by Susumu Kuno]. Journal of Pragmatics, 12, 396-399. doi:10.1016/0378-2166(88)90040-9.
  • Senft, G. (1997). [Review of the book The design of language: An introduction to descriptive linguistics by Terry Crowley, John Lynch, Jeff Siegel, and Julie Piau]. Linguistics, 35, 781-785.
  • Senft, G. (1991). [Review of the book The sign languages of Aboriginal Australia by Adam Kendon]. Journal of Pragmatics, 15, 400-405. doi:10.1016/0378-2166(91)90040-5.
  • Senft, G. (2006). A biography in the strict sense of the term [Review of the book Malinowski: Odyssee of an anthropologist 1884-1920, vol. 1 by Michael Young]. Journal of Pragmatics, 38(4), 610-637. doi:10.1016/j.pragma.2005.06.012.
  • Senft, G. (2006). [Review of the book Bilder aus der Deutschen Südsee by Hermann Joseph Hiery]. Paideuma: Mitteilungen zur Kulturkunde, 52, 304-308.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2006). [Review of the book Narrative as social practice: Anglo-Western and Australian Aboriginal oral traditions by Danièle M. Klapproth]. Journal of Pragmatics, 38(8), 1326-1331. doi:10.1016/j.pragma.2005.11.001.
  • Senft, G. (2006). [Review of the book Pacific Pidgins and Creoles: Origins, growth and development by Darrell T. Tryon and Jean-Michel Charpentier]. Linguistics, 44(1), 195-200. doi:10.1515/LING.2006.006.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
  • Senft, G. (1997). Magical conversation on the Trobriand Islands. Anthropos, 92, 369-391.
  • Senft, G. (1991). Network models to describe the Kilivila classifier system. Oceanic Linguistics, 30, 131-155. Retrieved from http://www.jstor.org/stable/3623085.
  • Seuren, P. A. M. (2006). The natural logic of language and cognition. Pragmatics, 16(1), 103-138.
  • Seuren, P. A. M. (2007). The theory that dare not speak its name: A rejoinder to Mufwene and Francis. Language Sciences, 29(4), 571-573. doi:10.1016/j.langsci.2007.02.001.
  • Seuren, P. A. M. (1970). A note on descriptive adequacy. Journal of Linguistics, 6(2), 263-266. doi:10.1017/S0022226700002668.
  • Seuren, P. A. M. (1982). De spellingsproblematiek in Suriname: Een inleiding. OSO, 1(1), 71-79.
  • Seuren, P. A. M. (1970). [Review of the book Betekenis en betekenisstructuur. Nagelaten geschriften van Prof. Dr. A. W. de Groot ed by G. F. Bos and H. Roose]. Foundations of Language, 6(2), 282-283.
  • Seuren, P. A. M. (1988). [Review of the book Pidgin and Creole linguistics by P. Mühlhäusler]. Studies in Language, 12(2), 504-513.
  • Seuren, P. A. M. (1997). [Review of the book Schets van de Nederlandse Taal. Grammatica, poëtica en retorica by Adriaen Verwer, Naar de editie van E. van Driel (1783) vertaald door J. Knol. Ed. Th.A.J.M. Janssen & J. Noordegraaf]. Nederlandse Taalkunde, 4, 370-374.
  • Seuren, P. A. M. (1988). [Review of the Collins Cobuild English Language Dictionary (Collins Birmingham University International Language Database)]. Journal of Semantics, 6, 169-174. doi:10.1093/jos/6.1.169.
  • Seuren, P. A. M. (2006). McCawley’s legacy [Review of the book Polymorphous linguistics: Jim McCawley's legacy ed. by Salikoko S. Mufwene, Elaine J. Francis and Rebecca S. Wheeler]. Language Sciences, 28(5), 521-526. doi:10.1016/j.langsci.2006.02.001.
  • Seuren, P. A. M. (1963). Naar aanleiding van Dr. F. Balk-Smit Duyzentkunst "De Grammatische Functie". Levende Talen, 219, 179-186.
  • Seuren, P. A. M. (1991). Grammatika als algorithme: Rekenen met taal. Koninklijke Nederlandse Akademie van Wetenschappen. Mededelingen van de Afdeling Letterkunde, Nieuwe Reeks, 54(2), 25-63.
  • Seuren, P. A. M. (1982). Internal variability in competence. Linguistische Berichte, 77, 1-31.
  • Seuren, P. A. M. (1989). Neue Entwicklungen im Wahrheitsbegriff. Studia Leibnitiana, 21(2), 155-173.
  • Seuren, P. A. M. (1988). Presupposition and negation. Journal of Semantics, 6(3/4), 175-226. doi:10.1093/jos/6.1.175.

    Abstract

    This paper is an attempt to show that, given the available observations on the behaviour of negation and presuppositions, there is no simpler explanation than to assume that natural language has two distinct negation operators, the minimal negation which preserves presuppositions and the radical negation which does not. The three-valued logic emerging from this distinction, and especially its model-theory, are discussed in detail. It is, however, stressed that the logic itself is only epiphenomenal on the structures and processes involved in the interpretation of sentences. Horn (1985) brings new observations to bear, related to metalinguistic uses of negation, and proposes a “pragmatic” ambiguity in negation to the effect that in descriptive (or “straight”) use negation is the classical bivalent operator, whereas in metalinguistic use it is non-truthfunctional but only pragmatic. Van der Sandt (to appear) accepts Horn's observations but proposes a different solution: an ambiguity in the argument clause of the negation operator (which, for him, too, is classical and bivalent), according to whether the negation takes only the strictly asserted proposition or covers also the presuppositions, the (scalar) implicatures and other implications (in particular of style and register) of the sentence expressing that proposition. These theories are discussed at some length. The three-valued analysis is defended on the basis of partly new observations, which do not seem to fit either Horn's or Van der Sandt's solution. It is then placed in the context of incremental discourse semantics, where both negations are seen to do the job of keeping increments out of the discourse domain, though each does so in its own specific way. The metalinguistic character of the radical negation is accounted for in terms of the incremental apparatus. The metalinguistic use of negation in denials of implicatures or implications of style and register is regarded as a particular form of minimal negation, where the negation denies not the proposition itself but the appropriateness of the use of an expression in it. This appropriateness negation is truth-functional and not pragmatic, but it applies to a particular, independently motivated, analysis of the argument clause. The ambiguity of negation in natural language is different from the ordinary type of ambiguity found in the lexicon. Normally, lexical ambiguities are idiosyncratic, highly contingent, and unpredictable from language to language. In the case of negation, however, the two meanings are closely related, both truth-conditionally and incrementally. Moreover, the mechanism of discourse incrementation automatically selects the right meaning. These properties are taken to provide a sufficient basis for discarding the otherwise valid objection that negation is unlikely to be ambiguous because no known language makes a lexical distinction between the two readings.
  • Shatzman, K. B., & McQueen, J. M. (2006). Segment duration as a cue to word boundaries in spoken-word recognition. Perception & Psychophysics, 68(1), 1-16.

    Abstract

    In two eye-tracking experiments, we examined the degree to which listeners use acoustic cues to word boundaries. Dutch participants listened to ambiguous sentences in which stop-initial words (e.g., pot, jar) were preceded by eens (once); the sentences could thus also refer to cluster-initial words (e.g., een spot, a spotlight). The participants made fewer fixations to target pictures (e.g., a jar) when the target and the preceding [s] were replaced by a recording of the cluster-initial word than when they were spliced from another token of the target-bearing sentence (Experiment 1). Although acoustic analyses revealed several differences between the two recordings, only [s] duration correlated with the participants’ fixations (more target fixations for shorter [s]s). Thus, we found that listeners apparently do not use all available acoustic differences equally. In Experiment 2, the participants made more fixations to target pictures when the [s] was shortened than when it was lengthened. Utterance interpretation can therefore be influenced by individual segment duration alone.
  • Shatzman, K. B., & McQueen, J. M. (2006). Prosodic knowledge affects the recognition of newly acquired words. Psychological Science, 17(5), 372-377. doi:10.1111/j.1467-9280.2006.01714.x.

    Abstract

    An eye-tracking study examined the involvement of prosodic knowledge—specifically, the knowledge that monosyllabic words tend to have longer durations than the first syllables of polysyllabic words—in the recognition of newly learned words. Participants learned new spoken words (by associating them to novel shapes): bisyllables and onset-embedded monosyllabic competitors (e.g., baptoe and bap). In the learning phase, the duration of the ambiguous sequence (e.g., bap) was held constant. In the test phase, its duration was longer than, shorter than, or equal to its learning-phase duration. Listeners’ fixations indicated that short syllables tended to be interpreted as the first syllables of the bisyllables, whereas long syllables generated more monosyllabic-word interpretations. Recognition of newly acquired words is influenced by prior prosodic knowledge and is therefore not determined solely on the basis of stored episodes of those words.
  • Shatzman, K. B., & McQueen, J. M. (2006). The modulation of lexical competition by segment duration. Psychonomic Bulletin & Review, 13(6), 966-971.

    Abstract

    In an eye-tracking study, we examined how fine-grained phonetic detail, such as segment duration, influences the lexical competition process during spoken word recognition. Dutch listeners’ eye movements to pictures of four objects were monitored as they heard sentences in which a stop-initial target word (e.g., pijp “pipe”) was preceded by an [s]. The participants made more fixations to pictures of cluster-initial words (e.g., spijker “nail”) when they heard a long [s] (mean duration, 103 msec) than when they heard a short [s] (mean duration, 73 msec). Conversely, the participants made more fixations to pictures of the stop-initial words when they heard a short [s] than when they heard a long [s]. Lexical competition between stop- and cluster-initial words, therefore, is modulated by segment duration differences of only 30 msec.
  • Shi, R., Werker, J. F., & Cutler, A. (2006). Recognition and representation of function words in English-learning infants. Infancy, 10(2), 187-198. doi:10.1207/s15327078in1002_5.

    Abstract

    We examined infants' recognition of functors and the accuracy of the representations that infants construct of the perceived word forms. Auditory stimuli were “Functor + Content Word” versus “Nonsense Functor + Content Word” sequences. Eight-, 11-, and 13-month-old infants heard both real functors and matched nonsense functors (prosodically analogous to their real counterparts but containing a segmental change). Results reveal that 13-month-olds recognized functors with attention to segmental detail. Eight-month-olds did not distinguish real versus nonsense functors. The performance of 11-month-olds fell in between that of the older and younger groups, consistent with an emerging recognition of real functors. The three age groups exhibited a clear developmental trend. We propose that in the earliest stages of vocabulary acquisition, function elements receive no segmentally detailed representations, but such representations are gradually constructed so that once vocabulary growth starts in earnest, fully specified functor representations are in place to support it.
  • Shi, R., Cutler, A., Werker, J., & Cruickshank, M. (2006). Frequency and form as determinants of functor sensitivity in English-acquiring infants. Journal of the Acoustical Society of America, 119(6), EL61-EL67. doi:10.1121/1.2198947.

    Abstract

    High-frequency functors are arguably among the earliest perceived word forms and may assist extraction of initial vocabulary items. Canadian 11- and 8-month-olds were familiarized to pseudo-nouns following either a high-frequency functor the or a low-frequency functor her, versus phonetically similar mispronunciations of each, kuh and ler, and then tested for recognition of the pseudo-nouns. A preceding the (but not kuh, her, or ler) facilitated extraction of the pseudo-nouns for 11-month-olds; the is thus well-specified in form for these infants. However, both the and kuh (but not her or ler) facilitated segmentation for 8-month-olds, suggesting an initial underspecified representation of high-frequency functors.
  • Shopen, T., Reid, N., Shopen, G., & Wilkins, D. G. (1997). Ensuring the survival of Aboriginal and Torres Strait islander languages into the 21st century. Australian Review of Applied Linguistics, 10(1), 143-157.

    Abstract

    Aboriginal languages are threatened by their speakers' poor economic and social conditions; some may survive through support for community development, language maintenance, bilingual education, and the training of Aboriginal teachers and linguists and of non-Aboriginal teachers of Aboriginal and Islander students.
  • Slobin, D. I., & Bowerman, M. (2007). Interfaces between linguistic typology and child language research. Linguistic Typology, 11(1), 213-226. doi:10.1515/LINGTY.2007.015.
  • Smith, M. R., Cutler, A., Butterfield, S., & Nimmo-Smith, I. (1989). The perception of rhythm and word boundaries in noise-masked speech. Journal of Speech and Hearing Research, 32, 912-920.

    Abstract

    The present experiment tested the suggestion that human listeners may exploit durational information in speech to parse continuous utterances into words. Listeners were presented with six-syllable unpredictable utterances under noise-masking, and were required to judge between alternative word strings as to which best matched the rhythm of the masked utterances. For each utterance there were four alternative strings: (a) an exact rhythmic and word boundary match, (b) a rhythmic mismatch, and (c) two utterances with the same rhythm as the masked utterance, but different word boundary locations. Listeners were clearly able to perceive the rhythm of the masked utterances: The rhythmic mismatch was chosen significantly less often than any other alternative. Within the three rhythmically matched alternatives, the exact match was chosen significantly more often than either word boundary mismatch. Thus, listeners both perceived speech rhythm and used durational cues effectively to locate the position of word boundaries.
  • Smits, R., Sereno, J., & Jongman, A. (2006). Categorization of sounds. Journal of Experimental Psychology: Human Perception and Performance, 32(3), 733-754. doi:10.1037/0096-1523.32.3.733.

    Abstract

    The authors conducted 4 experiments to test the decision-bound, prototype, and distribution theories for the categorization of sounds. They used as stimuli sounds varying in either resonance frequency or duration. They created different experimental conditions by varying the variance and overlap of 2 stimulus distributions used in a training phase and varying the size of the stimulus continuum used in the subsequent test phase. When resonance frequency was the stimulus dimension, the pattern of categorization-function slopes was in accordance with the decision-bound theory. When duration was the stimulus dimension, however, the slope pattern gave partial support for the decision-bound and distribution theories. The authors introduce a new categorization model combining aspects of decision-bound and distribution theories that gives a superior account of the slope patterns across the 2 stimulus dimensions.
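    The three classes of model contrasted above can be caricatured for a single stimulus dimension such as duration. The parameter values and the Gaussian assumption below are illustrative only, not the paper's fitted models.

        # Toy sketch of decision-bound, prototype, and distribution categorization
        # rules along one stimulus dimension (e.g., duration in ms). All parameters
        # are invented for illustration.
        from math import exp, sqrt, pi

        def gaussian(x, mean, sd):
            return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

        def decision_bound(x, boundary=100.0):
            # Category depends only on which side of a criterion the stimulus falls.
            return "A" if x < boundary else "B"

        def prototype(x, prototype_a=80.0, prototype_b=120.0):
            # Category is the one whose prototype lies closest to the stimulus.
            return "A" if abs(x - prototype_a) < abs(x - prototype_b) else "B"

        def distribution(x, a=(80.0, 10.0), b=(120.0, 25.0)):
            # Category is the one whose (assumed Gaussian) distribution gives the
            # stimulus the higher likelihood; unequal variances shift the boundary.
            return "A" if gaussian(x, *a) > gaussian(x, *b) else "B"

        for stim in (70.0, 95.0, 105.0, 140.0):
            print(stim, decision_bound(stim), prototype(stim), distribution(stim))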
  • Snijders, T. M., Kooijman, V., Cutler, A., & Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Research, 1178, 106-113. doi:10.1016/j.brainres.2007.07.080.

    Abstract

    Previous studies have shown that segmentation skills are language-specific, making it difficult to segment continuous speech in an unfamiliar language into its component words. Here we present the first study capturing the delay in segmentation and recognition in the foreign listener using ERPs. We compared the ability of Dutch adults and of English adults without knowledge of Dutch (‘foreign listeners’) to segment familiarized words from continuous Dutch speech. We used the known effect of repetition on the event-related potential (ERP) as an index of recognition of words in continuous speech. Our results show that word repetitions in isolation are recognized with equivalent facility by native and foreign listeners, but word repetitions in continuous speech are not. First, words familiarized in isolation are recognized faster by native than by foreign listeners when they are repeated in continuous speech. Second, when words that have previously been heard only in a continuous-speech context re-occur in continuous speech, the repetition is detected by native listeners, but is not detected by foreign listeners. A preceding speech context facilitates word recognition for native listeners, but delays or even inhibits word recognition for foreign listeners. We propose that the apparent difference in segmentation rate between native and foreign listeners is grounded in the difference in language-specific skills available to the listeners.
