Publications

  • Petersson, K. M. (2004). The human brain, language, and implicit learning. Impuls, Tidsskrift for psykologi (Norwegian Journal of Psychology), 58(3), 62-72.
  • Petrovic, P., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Brainstem involvement in the initial response to pain. NeuroImage, 22, 995-1005. doi:10.1016/j.neuroimage.2004.01.046.

    Abstract

    The autonomic responses to acute pain exposure usually habituate rapidly while the subjective ratings of pain remain high for more extended periods of time. Thus, systems involved in the autonomic response to painful stimulation, for example the hypothalamus and the brainstem, would be expected to attenuate the response to pain during prolonged stimulation. This suggestion is in line with the hypothesis that the brainstem is specifically involved in the initial response to pain. To probe this hypothesis, we performed a positron emission tomography (PET) study where we scanned subjects during the first and second minute of a prolonged tonic painful cold stimulation (cold pressor test) and nonpainful cold stimulation. Galvanic skin response (GSR) was recorded during the PET scanning as an index of autonomic sympathetic response. In the main effect of pain, we observed increased activity in the thalamus bilaterally, in the contralateral insula and in the contralateral anterior cingulate cortex but no significant increases in activity in the primary or secondary somatosensory cortex. The autonomic response (GSR) decreased with stimulus duration. Concomitant with the autonomic response, increased activity was observed in brainstem and hypothalamus areas during the initial vs. the late stimulation. This effect was significantly stronger for the painful than for the cold stimulation. Activity in the brainstem showed pain-specific covariation with areas involved in pain processing, indicating an interaction between the brainstem and cortical pain networks. The findings indicate that areas in the brainstem are involved in the initial response to noxious stimulation, which is also characterized by an increased sympathetic response.
  • Petrovic, P., Ingvar, M., Stone-Elander, S., Petersson, K. M., & Hansson, P. (1999). A PET activation study of dynamic mechanical allodynia in patients with mononeuropathy. Pain, 83, 459-470.

    Abstract

    The objective of this study was to investigate the central processing of dynamic mechanical allodynia in patients with mononeuropathy. Regional cerebral blood flow (rCBF), as an indicator of neuronal activity, was measured with positron emission tomography. Paired comparisons were made between three different states: rest, allodynia during brushing of the painful skin area, and brushing of the homologous contralateral area. Bilateral activations were observed in the primary somatosensory cortex (S1) and the secondary somatosensory cortex (S2) during allodynia compared to rest. The S1 activation contralateral to the site of the stimulus was more pronounced during allodynia than during innocuous touch. Significant activations of the contralateral posterior parietal cortex, the periaqueductal gray (PAG), the thalamus bilaterally and motor areas were also observed in the allodynic state compared to both non-allodynic states. In the anterior cingulate cortex (ACC) there was only a suggested activation when the allodynic state was compared with the non-allodynic states. In order to account for the individual variability in the intensity of allodynia and ongoing spontaneous pain, rCBF was regressed on the individually reported pain intensity, and significant covariations were observed in the ACC and the right anterior insula. Significantly decreased regional blood flow was observed bilaterally in the medial and lateral temporal lobe as well as in the occipital and posterior cingulate cortices when the allodynic state was compared to the non-painful conditions. This finding is consistent with previous studies suggesting attentional modulation and a central coping strategy for known and expected painful stimuli. Involvement of the medial pain system has previously been reported in patients with mononeuropathy during ongoing spontaneous pain. This study reveals a bilateral activation of the lateral pain system as well as involvement of the medial pain system during dynamic mechanical allodynia in patients with mononeuropathy.
  • Petrovic, P., Carlsson, K., Petersson, K. M., Hansson, P., & Ingvar, M. (2004). Context-dependent deactivation of the amygdala during pain. Journal of Cognitive Neuroscience, 16, 1289-1301.

    Abstract

    The amygdala has been implicated in fundamental functions for the survival of the organism, such as fear and pain. In accord with this, several studies have shown increased amygdala activity during fear conditioning and the processing of fear-relevant material in human subjects. In contrast, functional neuroimaging studies of pain have shown a decreased amygdala activity. It has previously been proposed that the observed deactivations of the amygdala in these studies indicate a cognitive strategy to adapt to a distressful but, in the experimental setting, unavoidable painful event. In this positron emission tomography study, we show that a simple contextual manipulation, immediately preceding a painful stimulation, that increases the anticipated duration of the painful event leads to a decrease in amygdala activity and modulates the autonomic response during the noxious stimulation. On a behavioral level, 7 of the 10 subjects reported that they used coping strategies more intensely in this context. We suggest that the altered activity in the amygdala may be part of a mechanism to attenuate pain-related stress responses in a context that is perceived as being more aversive. The study also showed an increased activity in the rostral part of the anterior cingulate cortex in the same context in which the amygdala activity decreased, further supporting the idea that this part of the cingulate cortex is involved in the modulation of emotional and pain networks.
  • Pickering, M. J., & Majid, A. (2007). What are implicit causality and consequentiality? Language and Cognitive Processes, 22(5), 780-788. doi:10.1080/01690960601119876.

    Abstract

    Much work in psycholinguistics and social psychology has investigated the notion of implicit causality associated with verbs. Crinean and Garnham (2006) relate implicit causality to another phenomenon, implicit consequentiality. We argue that they and other researchers have confused the meanings of events and the reasons for those events, so that particular thematic roles (e.g., Agent, Patient) are taken to be causes or consequences of those events by definition. In accord with Garvey and Caramazza (1974), we propose that implicit causality and consequentiality are probabilistic notions that are straightforwardly related to the explicit causes and consequences of events and are analogous to other biases investigated in psycholinguistics.
  • Pine, J. M., Lieven, E. V., & Rowland, C. F. (1997). Stylistic variation at the “single-word” stage: Relations between maternal speech characteristics and children's vocabulary composition and usage. Child Development, 68(5), 807-819. doi:10.1111/j.1467-8624.1997.tb01963.x.

    Abstract

    In this study we test a number of different claims about the nature of stylistic variation at the “single-word” stage by examining the relation between variation in early vocabulary composition, variation in early language use, and variation in the structural and functional properties of mothers' child-directed speech. Maternal-report and observational data were collected for 26 children at 10, 50, and 100 words. These were then correlated with a variety of different measures of maternal speech at 10 words. The results show substantial variation in the percentage of common nouns and unanalyzed phrases in children's vocabularies, and significant relations between this variation and the way in which language is used by the child. They also reveal significant relations between the way in which mothers use language at 10 words and the way in which their children use language at 50 words, and between certain formal properties of mothers' speech at 10 words and the percentage of common nouns and unanalyzed phrases in children's early vocabularies. However, most of these relations disappear when an attempt is made to control for possible effects of the child on the mother at Time 1. The exception is a significant negative correlation between mothers' tendency to produce speech that illustrates word boundaries and the percentage of unanalyzed phrases at 50 and 100 words. This suggests that mothers whose speech provides the child with information about where new words begin and end tend to have children with few unanalyzed phrases in their early vocabularies.
  • Poletiek, F. H. (1997). De wet 'bijzondere opnemingen in psychiatrische ziekenhuizen' aan de cijfers getoetst. Maandblad voor Geestelijke Volksgezondheid, 4, 349-361.
  • Poletiek, F. H. (in preparation). Inside the juror: The psychology of juror decision-making [Review of De geest van de jury (1997)].
  • Praamstra, P., Plat, E. M., Meyer, A. S., & Horstink, M. W. I. M. (1999). Motor cortex activation in Parkinson's disease: Dissociation of electrocortical and peripheral measures of response generation. Movement Disorders, 14, 790-799. doi:10.1002/1531-8257(199909)14:5<790:AID-MDS1011>3.0.CO;2-A.

    Abstract

    This study investigated characteristics of motor cortex activation and response generation in Parkinson's disease with measures of electrocortical activity (lateralized readiness potential [LRP]), electromyographic activity (EMG), and isometric force in a noise-compatibility task. When presented with stimuli consisting of incompatible target and distracter elements asking for responses of opposite hands, patients were less able than control subjects to suppress activation of the motor cortex controlling the wrong response hand. This was manifested in the pattern of reaction times and in an incorrect lateralization of the LRP. Onset latency and rise time of the LRP did not differ between patients and control subjects, but EMG and response force developed more slowly in patients. Moreover, in patients but not in control subjects, the rate of development of EMG and response force decreased as reaction time increased. We hypothesize that this dissociation between electrocortical activity and peripheral measures in Parkinson's disease is the result of changes in motor cortex function that alter the relation between signal-related and movement-related neural activity in the motor cortex. In the LRP, this altered balance may obscure an abnormal development of movement-related neural activity.
  • Prieto, P., & Torreira, F. (2007). The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics, 35, 473-500. doi:10.1016/j.wocn.2007.01.001.

    Abstract

    This paper addresses the validity of the segmental anchoring hypothesis for tonal landmarks (henceforth, SAH) as described in recent work by (among others) Ladd, Faulkner, D., Faulkner, H., & Schepman [1999. Constant ‘segmental’ anchoring of f0 movements under changes in speech rate. Journal of the Acoustical Society of America, 106, 1543–1554], Ladd [2003. Phonological conditioning of f0 target alignment. In: M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the XVth international congress of phonetic sciences, Vol. 1, (pp. 249–252). Barcelona: Causal Productions; in press. Segmental anchoring of pitch movements: Autosegmental association or gestural coordination? Italian Journal of Linguistics, 18 (1)]. The alignment of LH* prenuclear peaks with segmental landmarks in controlled speech materials in Peninsular Spanish is analyzed as a function of syllable structure type (open, closed) of the accented syllable, segmental composition, and speaking rate. Contrary to the predictions of the SAH, alignment was affected by syllable structure and speech rate in significant and consistent ways. In CV syllables, the peak was located around the end of the accented vowel, and in CVC syllables around the beginning-mid part of the sonorant coda, but still far from the syllable boundary. With respect to the effects of rate, peaks were located earlier in the syllable as speech rate decreased. The results suggest that the accent gestures under study are synchronized with the syllable unit. In general, the longer the syllable, the longer the rise time. Thus the fundamental idea of the anchoring hypothesis can be taken as still valid. On the other hand, the tonal alignment patterns reported here can be interpreted as the outcome of distinct modes of gestural coordination in syllable-initial vs. syllable-final position: gestures at syllable onsets appear to be more tightly coordinated than gestures at the end of syllables [Browman, C. P., & Goldstein, L.M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252; Browman, C. P., & Goldstein, L. (1988). Some notes on syllable structure in articulatory phonology. Phonetica, 45, 140–155; (1992). Articulatory Phonology: An overview. Phonetica, 49, 155–180; Krakow (1999). Physiological organization of syllables: A review. Journal of Phonetics, 27, 23–54; among others]. Intergestural timing can thus provide a unifying explanation for (1) the contrasting behavior between the precise synchronization of L valleys with the onset of the syllable and the more variable timing of the end of the f0 rise, and, more specifically, for (2) the right-hand tonal pressure effects and ‘undershoot’ patterns displayed by peaks at the ends of syllables and other prosodic domains.
  • Protopapas, A., Gerakaki, S., & Alexandri, S. (2007). Sources of information for stress assignment in reading Greek. Applied Psycholinguistics, 28(4), 695-720. doi:10.1017/S0142716407070373.

    Abstract

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual–orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on stress assignment and have provided evidence for a default pattern. Here we report two experiments with adult readers, in which we disentangle and quantify the effects of these three potential sources using nonword materials. Stimuli either resembled or did not resemble real words, to manipulate availability of lexical information; and they were presented with or without a diacritic, in a word-congruent or word-incongruent position, to contrast the relative importance of the three sources. Dual-task conditions, in which cognitive load during nonword reading was increased with phonological retention carrying a metrical pattern different from the default, did not support the hypothesis that the default arises from cumulative lexical activation in working memory.
  • Qin, S., Piekema, C., Petersson, K. M., Han, B., Luo, J., & Fernández, G. (2007). Probing the transformation of discontinuous associations into episodic memory: An event-related fMRI study. NeuroImage, 38(1), 212-222. doi:10.1016/j.neuroimage.2007.07.020.

    Abstract

    Using event-related functional magnetic resonance imaging, we identified brain regions involved in storing associations of events discontinuous in time into long-term memory. Participants were scanned while memorizing item-triplets including simultaneous and discontinuous associations. Subsequent memory tests showed that participants remembered both types of associations equally well. First, by constructing the contrast between the subsequent memory effects for discontinuous associations and simultaneous associations, we identified the left posterior parahippocampal region, dorsolateral prefrontal cortex, the basal ganglia, posterior midline structures, and the middle temporal gyrus as being specifically involved in transforming discontinuous associations into episodic memory. Second, we replicated that the prefrontal cortex and the medial temporal lobe (MTL), especially the hippocampus, are involved in associative memory formation in general. Our findings provide evidence for distinct neural operations that support the binding and storage of discontinuous associations in memory. We suggest that top-down signals from the prefrontal cortex and MTL may trigger reactivation of the internal representation of the first event in posterior midline structures, thus allowing it to be associated with the second event. The dorsolateral prefrontal cortex together with the basal ganglia may support this encoding operation by executive and binding processes within working memory, and the posterior parahippocampal region may play a role in binding and memory formation.
  • Reis, A., Faísca, L., Mendonça, S., Ingvar, M., & Petersson, K. M. (2007). Semantic interference on a phonological task in illiterate subjects. Scandinavian Journal of Psychology, 48(1), 69-74. doi:10.1111/j.1467-9450.2006.00544.x.

    Abstract

    Previous research suggests that learning an alphabetic written language influences aspects of the auditory-verbal language system. In this study, we examined whether literacy influences the notion of words as phonological units independent of lexical semantics in literate and illiterate subjects. Subjects had to decide which item in a word- or pseudoword pair was phonologically longest. By manipulating the relationship between referent size and phonological length in three word conditions (congruent, neutral, and incongruent) we could examine to what extent subjects focused on form rather than meaning of the stimulus material. Moreover, the pseudoword condition allowed us to examine global phonological awareness independent of lexical semantics. The results showed that literate subjects performed significantly better than illiterate subjects in the neutral and incongruent word conditions as well as in the pseudoword condition. The illiterate group performed least well in the incongruent condition and significantly better in the pseudoword condition compared to the neutral and incongruent word conditions. These results suggest that performance on phonological word length comparisons is dependent on literacy. In addition, the results show that the illiterate participants are able to perceive and process phonological length, albeit less well than the literate subjects, when no semantic interference is present. In conclusion, the present results confirm and extend the finding that illiterate subjects are biased towards semantic-conceptual-pragmatic types of cognitive processing.
  • Rietveld, T., Van Hout, R., & Ernestus, M. (2004). Pitfalls in corpus research. Computers and the Humanities, 38(4), 343-362. doi:10.1007/s10579-004-1919-1.

    Abstract

    This paper discusses some pitfalls in corpus research and suggests solutions on the basis of examples and computer simulations. We first address reliability problems in language transcriptions, agreement between transcribers, and how disagreements can be dealt with. We then show that the frequencies of occurrence obtained from a corpus cannot always be analyzed with the traditional χ² test, as corpus data are often not sequentially independent and unit independent. Next, we stress the relevance of the power of statistical tests, and the sizes of statistically significant effects. Finally, we point out that a t-test based on log odds often provides a better alternative to a χ² analysis based on frequency counts.
  • Roberts, L., Marinis, T., Felser, C., & Clahsen, H. (2007). Antecedent priming at trace positions in children’s sentence processing. Journal of Psycholinguistic Research, 36(2), 175-188. doi: 10.1007/s10936-006-9038-3.

    Abstract

    The present study examines whether children reactivate a moved constituent at its gap position and how children’s more limited working memory span affects the way they process filler-gap dependencies. Forty-six 5- to 7-year-old children and 54 adult controls participated in a cross-modal picture priming experiment and underwent a standardized working memory test. The results revealed a statistically significant interaction between the participants’ working memory span and antecedent reactivation: High-span children (n = 19) and high-span adults (n = 22) showed evidence of antecedent priming at the gap site, while for low-span children and adults, there was no such effect. The antecedent priming effect in the high-span participants indicates that in both children and adults, dislocated arguments access their antecedents at gap positions. The absence of an antecedent reactivation effect in the low-span participants could mean that these participants required more time to integrate the dislocated constituent and reactivated the filler later during the sentence.
  • Roberts, L. (2007). Investigating real-time sentence processing in the second language. Stem-, Spraak- en Taalpathologie, 15, 115-127.

    Abstract

    Second language (L2) acquisition researchers have always been concerned with what L2 learners know about the grammar of the target language, but more recently there has been growing interest in how L2 learners put this knowledge to use in real-time sentence comprehension. In order to investigate real-time L2 sentence processing, the types of constructions studied and the methods used are often borrowed from the field of monolingual processing, but the overall issues are familiar from traditional L2 acquisition research. These cover questions relating to L2 learners’ native-likeness, whether or not L1 transfer is in evidence, and how individual differences such as proficiency and language experience might have an effect. The aim of this paper is to provide, for those unfamiliar with the field, an overview of the findings of a selection of behavioral studies that have investigated such questions, and to offer a picture of how L2 learners and bilinguals may process sentences in real time.
  • Roelofs, A. (2004). Seriality of phonological encoding in naming objects and reading their names. Memory & Cognition, 32(2), 212-222.

    Abstract

    There is a remarkable lack of research bringing together the literatures on oral reading and speaking. As concerns phonological encoding, both models of reading and speaking assume a process of segmental spellout for words, which is followed by serial prosodification in models of speaking (e.g., Levelt, Roelofs, & Meyer, 1999). Thus, a natural place to merge models of reading and speaking would be at the level of segmental spellout. This view predicts similar seriality effects in reading and object naming. Experiment 1 showed that the seriality of encoding inside a syllable revealed in previous studies of speaking is observed for both naming objects and reading their names. Experiment 2 showed that both object naming and reading exhibit the seriality of the encoding of successive syllables previously observed for speaking. Experiment 3 showed that the seriality is also observed when object naming and reading trials are mixed rather than tested separately, as in the first two experiments. These results suggest that a serial phonological encoding mechanism is shared between naming objects and reading their names.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A. (2004). Error biases in spoken word planning and monitoring by aphasic and nonaphasic speakers: Comment on Rapp and Goldrick (2000). Psychological Review, 111(2), 561-572. doi:10.1037/0033-295X.111.2.561.

    Abstract

    B. Rapp and M. Goldrick (2000) claimed that the lexical and mixed error biases in picture naming by aphasic and nonaphasic speakers argue against models that assume a feedforward-only relationship between lexical items and their sounds in spoken word production. The author contests this claim by showing that a feedforward-only model like WEAVER++ (W. J. M. Levelt, A. Roelofs, & A. S. Meyer, 1999b) exhibits the error biases in word planning and self-monitoring. Furthermore, it is argued that extant feedback accounts of the error biases and relevant chronometric effects are incompatible. WEAVER++ simulations with self-monitoring revealed that this model accounts for the chronometric data, the error biases, and the influence of the impairment locus in aphasic speakers.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2004). Comprehension-based versus production-internal feedback in planning spoken words: A rejoinder to Rapp and Goldrick (2004). Psychological Review, 111(2), 579-580. doi:10.1037/0033-295X.111.2.579.

    Abstract

    WEAVER++ has no backward links in its form-production network and yet is able to explain the lexical and mixed error biases and the mixed distractor latency effect. This refutes the claim of B. Rapp and M. Goldrick (2000) that these findings specifically support production-internal feedback. Whether their restricted interaction account model can also provide a unified account of the error biases and latency effect remains to be shown.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Roelofs, A. (1997). The WEAVER model of word-form encoding in speech production. Cognition, 64, 249-284. doi:10.1016/S0010-0277(97)00027-9.

    Abstract

    Lexical access in speaking consists of two major steps: lemma retrieval and word-form encoding. In Roelofs (1992a, Cognition, 42, 107-142; 1993, Cognition, 47, 59-87), I described a model of lemma retrieval. The present paper extends this work by presenting a comprehensive model of the second access step, word-form encoding. The model is called WEAVER (Word-form Encoding by Activation and VERification). Unlike other models of word-form generation, WEAVER is able to provide accounts of response time data, particularly from the picture-word interference paradigm and the implicit priming paradigm. Its key features are (1) retrieval by spreading activation, (2) verification of activated information by a production rule, (3) a rightward incremental construction of phonological representations using a principle of active syllabification (syllables are constructed on the fly rather than stored with lexical items), (4) active competitive selection of syllabic motor programs using a mathematical formalism that generates response times, and (5) the association of phonological speech errors with the selection of syllabic motor programs due to the failure of verification.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together, however they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • Russel, A., & Trilsbeek, P. (2004). ELAN Audio Playback. Language Archive Newsletter, 1(4), 12-13.
  • Russel, A., & Wittenburg, P. (2004). ELAN Native Media Handling. Language Archive Newsletter, 1(3), 12-12.
  • Sach, M., Seitz, R. J., & Indefrey, P. (2004). Unified inflectional processing of regular and irregular verbs: A PET study. NeuroReport, 15(3), 533-537. doi:10.1097/01.wnr.0000113529.32218.92.

    Abstract

    Psycholinguistic theories propose different models of inflectional processing of regular and irregular verbs: dual mechanism models assume separate modules with lexical frequency sensitivity for irregular verbs. In contradistinction, connectionist models propose a unified process in a single module. We conducted a PET study using a 2 x 2 design with verb regularity and frequency. We found significantly shorter voice onset times for regular verbs and high frequency verbs irrespective of regularity. The PET data showed activations in inferior frontal gyrus (BA 45), nucleus lentiformis, thalamus, and superior medial cerebellum for both regular and irregular verbs but no dissociation for verb regularity. Our results support common processing components for regular and irregular verb inflection.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Scerri, T. S., Fisher, S. E., Francks, C., MacPhie, I. L., Paracchini, S., Richardson, A. J., Stein, J. F., & Monaco, A. P. (2004). Putative functional alleles of DYX1C1 are not associated with dyslexia susceptibility in a large sample of sibling pairs from the UK [Letter to JMG]. Journal of Medical Genetics, 41(11), 853-857. doi:10.1136/jmg.2004.018341.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer, Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances, each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer, Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences, and identify the words that were recognised on the basis of a trace back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered as recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases if the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences there is, however, lately a growing interest in possible cross-fertilisation. Researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research and present an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the articulatory feature (AF) values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
  • Schiller, N. O., Fikkert, P., & Levelt, C. C. (2004). Stress priming in picture naming: An SOA study. Brain and Language, 90(1-3), 231-240. doi:10.1016/S0093-934X(03)00436-X.

    Abstract

    This study investigates whether or not the representation of lexical stress information can be primed during speech production. In four experiments, we attempted to prime the stress position of bisyllabic target nouns (picture names) having initial and final stress with auditory prime words having either the same or different stress as the target (e.g., WORtel–MOtor vs. koSTUUM–MOtor; capital letters indicate stressed syllables in prime–target pairs). Furthermore, half of the prime words were semantically related, the other half unrelated. Overall, picture names were not produced faster when the prime word had the same stress as the target than when the prime had different stress, i.e., there was no stress-priming effect in any experiment. This result would not be expected if stress were stored in the lexicon. However, targets with initial stress were responded to faster than final-stress targets. The reason for this effect was neither the quality of the pictures nor frequency of occurrence or voice-key characteristics. We hypothesize here that this stress effect is a genuine encoding effect, i.e., words with stress on the second syllable take longer to be encoded because their stress pattern is irregular with respect to the lexical distribution of bisyllabic stress patterns, even though it can be regular with respect to metrical stress rules in Dutch. The results of the experiments are discussed in the framework of models of phonological encoding.
  • Schiller, N. O., & De Ruiter, J. P. (2004). Some notes on priming, alignment, and self-monitoring [Commentary]. Behavioral and Brain Sciences, 27(2), 208-209. doi:10.1017/S0140525X0441005X.

    Abstract

    Any complete theory of speaking must take the dialogical function of language use into account. Pickering & Garrod (P&G) make some progress on this point. However, we question whether their interactive alignment model is the optimal approach. In this commentary, we specifically criticize (1) their notion of alignment being implemented through priming, and (2) their claim that self-monitoring can occur at all levels of linguistic representation.
  • Schiller, N. O. (2004). The onset effect in word naming. Journal of Memory and Language, 50(4), 477-490. doi:10.1016/j.jml.2004.02.004.

    Abstract

    This study investigates whether or not masked form priming effects in the naming task depend on the number of shared segments between prime and target. Dutch participants named bisyllabic words, which were preceded by visual masked primes. When primes shared the initial segment(s) with the target, naming latencies were shorter than in a control condition (string of percent signs). Onset complexity (singleton vs. complex word onset) did not modulate this priming effect in Dutch. Furthermore, significant priming due to shared final segments was only found when the prime did not contain a mismatching onset, suggesting an interfering role of initial non-target segments. It is concluded that (a) degree of overlap (segmental match vs. mismatch), and (b) position of overlap (initial vs. final) influence the magnitude of the form priming effect in the naming task. A modification of the segmental overlap hypothesis (Schiller, 1998) is proposed to account for the data.
  • Schiller, N. O., Meyer, A. S., & Levelt, W. J. M. (1997). The syllabic structure of spoken words: Evidence from the syllabification of intervocalic consonants. Language and Speech, 40(2), 103-140.

    Abstract

    A series of experiments was carried out to investigate the syllable affiliation of intervocalic consonants following short vowels, long vowels, and schwa in Dutch. Special interest was paid to words such as letter ['leter] 'id.', where a short vowel is followed by a single consonant. On phonological grounds one may predict that the first syllable should always be closed, but earlier psycholinguistic research had shown that speakers tend to leave these syllables open. In our experiments, bisyllabic word forms were presented aurally, and participants produced their syllables in reversed order (Experiments 1 through 5), or repeated the words inserting a pause between the syllables (Experiment 6). The results showed that participants generally closed syllables with a short vowel. However, in a significant number of the cases they produced open short vowel syllables. Syllables containing schwa, like syllables with a long vowel, were hardly ever closed. Word stress, the phonetic quality of the vowel in the first syllable, and the experimental context influenced syllabification. Taken together, the experiments show that native speakers syllabify bisyllabic Dutch nouns in accordance with a small set of prosodic output constraints. To account for the variability of the results, we propose that these constraints differ in their probabilities of being applied.
  • Schmitt, B. M., Meyer, A. S., & Levelt, W. J. M. (1999). Lexical access in the production of pronouns. Cognition, 69(3), 313-335. doi:10.1016/S0010-0277(98)00073-0.

    Abstract

    Speakers can use pronouns when their conceptual referents are accessible from the preceding discourse, as in 'The flower is red. It turns blue'. Theories of language production agree that in order to produce a noun, semantic, syntactic, and phonological information must be accessed. However, little is known about lexical access to pronouns. In this paper, we propose a model of pronoun access in German. Since the forms of German pronouns depend on the grammatical gender of the nouns they replace, the model claims that speakers must access the syntactic representation of the replaced noun (its lemma) to select a pronoun. In two experiments using the lexical decision during naming paradigm [Levelt, W.J.M., Schriefers, H., Vorberg, D., Meyer, A.S., Pechmann, T., Havinga, J., 1991a. The time course of lexical access in speech production: a study of picture naming. Psychological Review 98, 122-142], we investigated whether lemma access automatically entails the activation of the corresponding word form or whether a word form is only activated when the noun itself is produced, but not when it is replaced by a pronoun. Experiment 1 showed that during pronoun production the phonological form of the replaced noun is activated. Experiment 2 demonstrated that this phonological activation was not a residual of the use of the noun in the preceding sentence. Thus, when a pronoun is produced, the lemma and the phonological form of the replaced noun become reactivated.
  • Schwichtenberg, B., & Schiller, N. O. (2004). Semantic gender assignment regularities in German. Brain and Language, 90(1-3), 326-337. doi:10.1016/S0093-934X(03)00445-0.

    Abstract

    Gender assignment relates to a native speaker's knowledge of the structure of the gender system of his/her language, allowing the speaker to select the appropriate gender for each noun. Whereas categorical assignment rules and exceptional gender assignment are well investigated, assignment regularities, i.e., tendencies in the gender distribution identified within the vocabulary of a language, are still controversial. The present study is an empirical contribution trying to shed light on the gender assignment system native German speakers have at their disposal. Participants presented with a category (e.g., predator) and a pair of gender-marked pseudowords (e.g., der Trelle vs. die Stisse) preferentially selected the pseudoword preceded by the gender-marked determiner 'associated' with the category (e.g., masculine). This finding suggests that semantic regularities might be part of the gender assignment system of native speakers.
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
  • Senft, G. (1999). ENTER and EXIT in Kilivila. Studies in Language, 23, 1-23.
  • Senft, G. (1999). [Review of the book Describing morphosyntax: A guide for field linguists by Thomas E. Payne]. Linguistics, 37, 181-187. doi:10.1515/ling.1999.003, 01/01/1999.
  • Senft, G. (1999). [Review of the book Pacific languages - An introduction by John Lynch]. Linguistics, 37, 979-983. doi:10.1515/ling.37.5.961.
  • Senft, G. (1997). [Review of the book The design of language: An introduction to descriptive linguistics by Terry Crowley, John Lynch, Jeff Siegel, and Julie Piau]. Linguistics, 35, 781-785.
  • Senft, G. (1999). A case study from the Trobriand Islands: The presentation of Self in touristic encounters [abstract]. IIAS Newsletter, (19). Retrieved from http://www.iias.nl/iiasn/19/.

    Abstract

    Visiting the Trobriand Islands is advertised as being the highlight of a trip for tourists to Papua New Guinea who want, and can afford, to experience this 'ultimate adventure' with 'expeditionary cruises aboard the luxurious Melanesian Discoverer'. The advertisements also promise that the tourists can 'meet the friendly people' and 'observe their unique culture, dances, and art'. During my research in Kaibola and Nuwebila, two neighbouring villages on the northern tip of Kiriwina Island, I studied and analysed the encounters of tourists with Trobriand Islanders, who sing and dance for the Europeans. The analyses of the islanders' tourist performances are based on Erving Goffman's now classic study The Presentation of Self in Everyday Life, which was first published in 1959. In this study, Goffman analyses the structures of social encounters from the perspective of the dramatic performance. The situational context within which the encounter between tourists and Trobriand Islanders takes place frames the tourists as the audience and the Trobriand Islanders as a team of performers. The inherent structure of the parts of the overall performance presented in the two villages can be summarized - within the framework of Goffman's approach - in analogy with the structure of drama. We find parts that constitute the 'exposition', the 'complication', and the 'resolution' of a drama; we even observe an equivalent to the importance of the 'Second Act Curtain' in modern drama theory. Deeper analyses of this encounter show that the motives of the performers and their 'art of impression management' are to control the impression their audience receives in this encounter situation. This analysis reveals that the Trobriand Islanders sell their customers the expected images of what Malinowski (1929) once termed the '...Life of Savages in North-Western Melanesia' in a staged 'illusion'. With the conscious realization of the part they as performers play in this encounter, the Trobriand Islanders are in a position that is superior to that of their audience. Their merchandise or commodity is 'not real', as it is sold 'out of its true cultural context'. It is staged - and thus cannot be taken by any customer whatsoever because it (re)presents just an 'illusion'. The Trobriand Islanders know that neither they nor the core aspects of their culture will suffer any damage within a tourist encounter that is defined by the structure and the kind of their performance. Their pride and self-confidence enable them to bring their superior position into play in their dealings with tourists. With their indigenous humour, they even use this encounter for ridiculing their visitors. It turns out that the encounter is another manifestation of the Trobriand Islanders' self-consciousness, self-confidence, and pride with which they manage to protect core aspects of their cultural identity, while at the same time using and 'selling' parts of their culture as a kind of commodity to tourists.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
  • Senft, G. (2004). [Review of the book Serial verbs in Oceanic: A descriptive typology by Terry Crowley]. Linguistics, 42(4), 855-859. doi:10.1515/ling.2004.028.
  • Senft, G. (2004). [Review of the book The Oceanic Languages by John Lynch, Malcolm Ross and Terry Crowley]. Linguistics, 42(2), 515-520. doi:10.1515/ling.2004.016.
  • Senft, G. (1997). Magical conversation on the Trobriand Islands. Anthropos, 92, 369-391.
  • Senft, G. (1999). The presentation of self in touristic encounters: A case study from the Trobriand Islands. Anthropos, 94, 21-33.
  • Senft, G. (1999). Weird Papalagi and a Fake Samoan Chief: A footnote to the noble savage myth. Rongorongo Studies: A forum for Polynesian philology, 9(1&2), 23-32, 62-75.
  • Senghas, A., Kita, S., & Ozyurek, A. (2004). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science, 305(5691), 1779-1782. doi:10.1126/science.1100199.

    Abstract

    A new sign language has been created by deaf Nicaraguans over the past 25 years, providing an opportunity to observe the inception of universal hallmarks of language. We found that in their initial creation of the language, children analyzed complex events into basic elements and sequenced these elements into hierarchically structured expressions according to principles not observed in gestures accompanying speech in the surrounding language. Successive cohorts of learners extended this procedure, transforming Nicaraguan signing from its early gestural form into a linguistic system. We propose that this early segmentation and recombination reflect mechanisms with which children learn, and thereby perpetuate, language. Thus, children naturally possess learning abilities capable of giving language its fundamental structure.
  • Seuren, P. A. M. (2007). The theory that dare not speak its name: A rejoinder to Mufwene and Francis. Language Sciences, 29(4), 571-573. doi:10.1016/j.langsci.2007.02.001.
  • Seuren, P. A. M. (2004). The importance of being modular. Journal of Linguistics, 40(3), 593-635. doi:10.1017/S0022226704002786.
  • Seuren, P. A. M. (1982). De spellingsproblematiek in Suriname: Een inleiding. OSO, 1(1), 71-79.
  • Seuren, P. A. M. (1979). [Review of the book Approaches to natural language ed. by K. Hintikka, J. Moravcsik and P. Suppes]. Leuvense Bijdragen, 68, 163-168.
  • Seuren, P. A. M. (1997). [Review of the book Schets van de Nederlandse Taal. Grammatica, poëtica en retorica by Adriaen Verwer, Naar de editie van E. van Driel (1783) vertaald door J. Knol. Ed. Th.A.J.M. Janssen & J. Noordegraaf]. Nederlandse Taalkunde, 4, 370-374.
  • Seuren, P. A. M. (2004). [Review of the book A short history of Structural linguistics by Peter Matthews]. Linguistics, 42(1), 235-236. doi:10.1515/ling.2004.005.
  • Seuren, P. A. M. (1979). Meer over minder dan hoeft. De Nieuwe Taalgids, 72(3), 236-239.
  • Seuren, P. A. M. (1982). Internal variability in competence. Linguistische Berichte, 77, 1-31.
  • Seuren, P. A. M. (1999). Vertakkingsrichting als parameter in de grammatica. Verslagen en Mededelingen van de Koninklijke Academie voor Nederlandse Taal- en Letterkunde, 109(2-3), 149-166.
  • Shatzman, K. B., & Schiller, N. O. (2004). The word frequency effect in picture naming: Contrasting two hypotheses using homonym pictures. Brain and Language, 90(1-3), 160-169. doi:10.1016/S0093-934X(03)00429-2.

    Abstract

    Models of speech production disagree on whether or not homonyms have a shared word-form representation. To investigate this issue, a picture-naming experiment was carried out using Dutch homonyms of which both meanings could be presented as a picture. Naming latencies for the low-frequency meanings of homonyms were slower than for those of the high-frequency meanings. However, no frequency effect was found for control words, which matched the frequency of the homonyms' meanings. Subsequent control experiments indicated that the difference in naming latencies for the homonyms could be attributed to processes earlier than word-form retrieval. Specifically, it appears that low name agreement slowed down the naming of the low-frequency homonym pictures.
  • Shopen, T., Reid, N., Shopen, G., & Wilkins, D. G. (1997). Ensuring the survival of Aboriginal and Torres Strait Islander languages into the 21st century. Australian Review of Applied Linguistics, 10(1), 143-157.

    Abstract

    Aboriginal languages are threatened by their speakers' poor economic and social conditions; some may survive through support for community development, language maintenance, bilingual education, the training of Aboriginal teachers and linguists, and the training of non-Aboriginal teachers of Aboriginal and Islander students.
  • Skiba, R., Wittenburg, F., & Trilsbeek, P. (2004). New DoBeS web site: Contents & functions. Language Archive Newsletter, 1(2), 4-4.
  • Slobin, D. I., & Bowerman, M. (2007). Interfaces between linguistic typology and child language research. Linguistic Typology, 11(1), 213-226. doi:10.1515/LINGTY.2007.015.
  • Snijders, T. M., Kooijman, V., Cutler, A., & Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Research, 1178, 106-113. doi:10.1016/j.brainres.2007.07.080.

    Abstract

    Previous studies have shown that segmentation skills are language-specific, making it difficult to segment continuous speech in an unfamiliar language into its component words. Here we present the first study capturing the delay in segmentation and recognition in the foreign listener using ERPs. We compared the ability of Dutch adults and of English adults without knowledge of Dutch (‘foreign listeners’) to segment familiarized words from continuous Dutch speech. We used the known effect of repetition on the event-related potential (ERP) as an index of recognition of words in continuous speech. Our results show that word repetitions in isolation are recognized with equivalent facility by native and foreign listeners, but word repetitions in continuous speech are not. First, words familiarized in isolation are recognized faster by native than by foreign listeners when they are repeated in continuous speech. Second, when words that have previously been heard only in a continuous-speech context re-occur in continuous speech, the repetition is detected by native listeners, but is not detected by foreign listeners. A preceding speech context facilitates word recognition for native listeners, but delays or even inhibits word recognition for foreign listeners. We propose that the apparent difference in segmentation rate between native and foreign listeners is grounded in the difference in language-specific skills available to the listeners.
  • Snowdon, C. T., & Cronin, K. A. (2007). Cooperative breeders do cooperate. Behavioural Processes, 76, 138-141. doi:10.1016/j.beproc.2007.01.016.

    Abstract

    Bergmüller et al. (2007) make an important contribution to studies of cooperative breeding and provide a theoretical basis for linking the evolution of cooperative breeding with cooperative behavior. We have long been involved in empirical research on the only family of nonhuman primates to exhibit cooperative breeding, the Callitrichidae, which includes marmosets and tamarins, with studies in both field and captive contexts. In this paper we expand on three themes from Bergmüller et al. (2007) with empirical data. First we provide data in support of the importance of helpers and the specific benefits that helpers can gain in terms of fitness. Second, we suggest that mechanisms of rewarding helpers are more common and more effective in maintaining cooperative breeding than punishments. Third, we present a summary of our own research on cooperative behavior in cotton-top tamarins (Saguinus oedipus) where we find greater success in cooperative problem solving than has been reported for non-cooperatively breeding species.
  • Sonnenstuhl, I., Eisenbeiss, S., & Clahsen, H. (1999). Morphological priming in the German mental lexicon. Cognition, 72(3), 203-236. doi:10.1016/S0010-0277(99)00033-5.

    Abstract

    We present results from cross-modal priming experiments on German participles and noun plurals. The experiments produced parallel results for both inflectional systems. Regular inflection exhibits full priming whereas irregularly inflected word forms show only partial priming: after hearing regularly inflected words (-t participles and -s plurals), lexical decision times on morphologically related word forms (presented visually) were similar to reaction times for a base-line condition in which prime and target were identical, but significantly shorter than in a control condition where prime and target were unrelated. In contrast, prior presentation of irregular words (-n participles and -er plurals) led to significantly longer response times on morphologically related word forms than the prior presentation of the target itself. Hence, there are clear priming differences between regularly and irregularly inflected German words. We compare the findings on German with experimental results on regular and irregular inflection in English and Italian, and discuss theoretical implications for single versus dual-mechanism models of inflection.
  • Spiteri, E., Konopka, G., Coppola, G., Bomar, J., Oldham, M., Ou, J., Vernes, S. C., Fisher, S. E., Ren, B., & Geschwind, D. (2007). Identification of the transcriptional targets of FOXP2, a gene linked to speech and language, in developing human brain. American Journal of Human Genetics, 81(6), 1144-1157. doi:10.1086/522237.

    Abstract

    Mutations in FOXP2, a member of the forkhead family of transcription factor genes, are the only known cause of developmental speech and language disorders in humans. To date, there are no known targets of human FOXP2 in the nervous system. The identification of FOXP2 targets in the developing human brain, therefore, provides a unique tool with which to explore the development of human language and speech. Here, we define FOXP2 targets in human basal ganglia (BG) and inferior frontal cortex (IFC) by use of chromatin immunoprecipitation followed by microarray analysis (ChIP-chip) and validate the functional regulation of targets in vitro. ChIP-chip identified 285 FOXP2 targets in fetal human brain; statistically significant overlap of targets in BG and IFC indicates a core set of 34 transcriptional targets of FOXP2. We identified targets specific to IFC or BG that were not observed in lung, suggesting important regional and tissue differences in FOXP2 activity. Many target genes are known to play critical roles in specific aspects of central nervous system patterning or development, such as neurite outgrowth, as well as plasticity. Subsets of the FOXP2 transcriptional targets are either under positive selection in humans or differentially expressed between human and chimpanzee brain. This is the first ChIP-chip study to use human brain tissue, making the FOXP2-target genes identified in these studies important to understanding the pathways regulating speech and language in the developing human brain. These data provide the first insight into the functional network of genes directly regulated by FOXP2 in human brain and by evolutionary comparisons, highlighting genes likely to be involved in the development of human higher-order cognitive processes.
  • Stewart, A., Holler, J., & Kidd, E. (2007). Shallow processing of ambiguous pronouns: Evidence for delay. Quarterly Journal of Experimental Psychology, 60, 1680-1696. doi:10.1080/17470210601160807.
  • Stivers, T. (2004). Potilaan vastarinta: Keino vaikuttaa lääkärin hoitopäätökseen. Sosiaalilääketieteellinen Aikakauslehti, 41, 199-213.
  • Stivers, T., & Majid, A. (2007). Questioning children: Interactional evidence of implicit bias in medical interviews. Social Psychology Quarterly, 70(4), 424-441.

    Abstract

    Social psychologists have shown experimentally that implicit race bias can influence an individual's behavior. Implicit bias has been suggested to be more subtle and less subject to cognitive control than more explicit forms of racial prejudice. Little is known about how implicit bias is manifest in naturally occurring social interaction. This study examines the factors associated with physicians selecting children rather than parents to answer questions in pediatric interviews about routine childhood illnesses. Analysis of the data using a Generalized Linear Latent and Mixed Model demonstrates a significant effect of parent race and education on whether physicians select children to answer questions. Black children and Latino children of low-education parents are less likely to be selected to answer questions than their same-aged white peers, irrespective of education. One way that implicit bias manifests itself in naturally occurring interaction may be through the process of speaker selection during questioning.
  • Stivers, T. (2004). "No no no" and other types of multiple sayings in social interaction. Human Communication Research, 30(2), 260-293. doi:10.1111/j.1468-2958.2004.tb00733.x.

    Abstract

    Relying on the methodology of conversation analysis, this article examines a practice in ordinary conversation characterized by the resaying of a word, phrase, or sentence. The article shows that multiple sayings such as "No no no" or "Alright alright alright" are systematic in both their positioning relative to the interlocutor's talk and in their function. Specifically, the findings are that multiple sayings are a resource speakers have to display that their turn is addressing an in-progress course of action rather than only the just-prior utterance. Speakers of multiple sayings communicate their stance that the prior speaker has persisted unnecessarily in the prior course of action and should properly halt that course of action.
  • Suomi, K., McQueen, J. M., & Cutler, A. (1997). Vowel harmony and speech segmentation in Finnish. Journal of Memory and Language, 36, 422-444. doi:10.1006/jmla.1996.2495.

    Abstract

    Finnish vowel harmony rules require that if the vowel in the first syllable of a word belongs to one of two vowel sets, then all subsequent vowels in that word must belong either to the same set or to a neutral set. A harmony mismatch between two syllables containing vowels from the opposing sets thus signals a likely word boundary. We report five experiments showing that Finnish listeners can exploit this information in an on-line speech segmentation task. Listeners found it easier to detect words like hymy at the end of the nonsense string puhymy (where there is a harmony mismatch between the first two syllables) than in the string pyhymy (where there is no mismatch). There was no such effect, however, when the target words appeared at the beginning of the nonsense string (e.g., hymypu vs. hymypy). Stronger harmony effects were found for targets containing front harmony vowels (e.g., hymy) than for targets containing back harmony vowels (e.g., palo in kypalo and kupalo). The same pattern of results appeared whether target position within the string was predictable or unpredictable. Harmony mismatch thus appears to provide a useful segmentation cue for the detection of word onsets in Finnish speech.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1997). Spoken sentence comprehension in aphasia: Event-related potential evidence for a lexical integration deficit. Journal of Cognitive Neuroscience, 9(1), 39-66.

    Abstract

    In this study the N400 component of the event-related potential was used to investigate spoken sentence understanding in Broca's and Wernicke's aphasics. The aim of the study was to determine whether spoken sentence comprehension problems in these patients might result from a deficit in the on-line integration of lexical information. Subjects listened to sentences spoken at a normal rate. In half of these sentences, the meaning of the final word of the sentence matched the semantic specifications of the preceding sentence context. In the other half of the sentences, the sentence-final word was anomalous with respect to the preceding sentence context. The N400 was measured to the sentence-final words in both conditions. The results for the aphasic patients (n = 14) were analyzed according to the severity of their comprehension deficit and compared to a group of 12 neurologically unimpaired age-matched controls, as well as a group of 6 nonaphasic patients with a lesion in the right hemisphere. The nonaphasic brain damaged patients and the aphasic patients with a light comprehension deficit (high comprehenders, n = 7) showed an N400 effect that was comparable to that of the neurologically unimpaired subjects. In the aphasic patients with a moderate to severe comprehension deficit (low comprehenders, n = 7), a reduction and delay of the N400 effect was obtained. In addition, the P300 component was measured in a classical oddball paradigm, in which subjects were asked to count infrequent low tones in a random series of high and low tones. No correlation was found between the occurrence of N400 and P300 effects, indicating that changes in the N400 results were related to the patients' language deficit. Overall, the pattern of results was compatible with the idea that aphasic patients with moderate to severe comprehension problems are impaired in the integration of lexical information into a higher order representation of the preceding sentence context.
  • Swingley, D., & Aslin, R. N. (2007). Lexical competition in young children's word learning. Cognitive Psychology, 54(2), 99-132.

    Abstract

    In two experiments, 1.5-year-olds were taught novel words whose sound patterns were phonologically similar to familiar words (novel neighbors) or were not (novel nonneighbors). Learning was tested using a picture-fixation task. In both experiments, children learned the novel nonneighbors but not the novel neighbors. In addition, exposure to the novel neighbors impaired recognition performance on familiar neighbors. Finally, children did not spontaneously use phonological differences to infer that a novel word referred to a novel object. Thus, lexical competition—inhibitory interaction among words in speech comprehension—can prevent children from using their full phonological sensitivity in judging words as novel. These results suggest that word learning in young children, as in adults, relies not only on the discrimination and identification of phonetic categories, but also on evaluating the likelihood that an utterance conveys a new word.
  • Swingley, D. (2007). Lexical exposure and word-form encoding in 1.5-year-olds. Developmental Psychology, 43(2), 454-464. doi:10.1037/0012-1649.43.2.454.

    Abstract

    In this study, 1.5-year-olds were taught a novel word. Some children were familiarized with the word's phonological form before learning the word's meaning. Fidelity of phonological encoding was tested in a picture-fixation task using correctly pronounced and mispronounced stimuli. Only children with additional exposure in familiarization showed reduced recognition performance given slight mispronunciations relative to correct pronunciations; children with fewer exposures did not. Mathematical modeling of vocabulary exposure indicated that children may hear thousands of words frequently enough for accurate encoding. The results provide evidence compatible with partial failure of phonological encoding at 19 months of age, demonstrate that this limitation in learning does not always hinder word recognition, and show the value of infants' word-form encoding in early lexical development.
  • Swinney, D. A., & Cutler, A. (1979). The access and processing of idiomatic expressions. Journal of Verbal Learning and Verbal Behavior, 18, 523-534. doi:10.1016/S0022-5371(79)90284-6.

    Abstract

    Two experiments examined the nature of access, storage, and comprehension of idiomatic phrases. In both studies a Phrase Classification Task was utilized. In this, reaction times to determine whether or not word strings constituted acceptable English phrases were measured. Classification times were significantly faster to idiom than to matched control phrases. This effect held under conditions involving different categories of idioms, different transitional probabilities among words in the phrases, and different levels of awareness of the presence of idioms in the materials. The data support a Lexical Representation Hypothesis for the processing of idioms.
  • Takashima, A., Nieuwenhuis, I. L. C., Rijpkema, M., Petersson, K. M., Jensen, O., & Fernández, G. (2007). Memory trace stabilization leads to large-scale changes in the retrieval network: A functional MRI study on associative memory. Learning & Memory, 14, 472-479. doi:10.1101/lm.605607.

    Abstract

    Spaced learning with time to consolidate leads to more stable memory traces. However, little is known about the neural correlates of trace stabilization, especially in humans. The present fMRI study contrasted retrieval activity of two well-learned sets of face-location associations, one learned in a massed style and tested on the day of learning (i.e., labile condition) and another learned in a spaced scheme over the course of one week (i.e., stabilized condition). Both sets of associations were retrieved equally well, but the retrieval of stabilized associations was faster and accompanied by large-scale changes in the network supporting retrieval. Cued recall of stabilized as compared with labile associations was accompanied by increased activity in the precuneus, the ventromedial prefrontal cortex, the bilateral temporal pole, and the left temporo-parietal junction. Conversely, memory representational areas such as the fusiform gyrus for faces and the posterior parietal cortex for locations did not change their activity with stabilization. The changes in activation in the precuneus, which also showed increased connectivity with the fusiform area, are likely to be related to the spatial nature of our task. The activation increase in the ventromedial prefrontal cortex, on the other hand, might reflect a general function in stabilized memory retrieval. This area might succeed the hippocampus in linking distributed neocortical representations.
  • Tanaka, K., Fisher, S. E., & Craig, I. W. (1999). Characterization of novel promoter and enhancer elements of the mouse homologue of the Dent disease gene, CLCN5, implicated in X-linked hereditary nephrolithiasis. Genomics, 58, 281-292. doi:10.1006/geno.1999.5839.

    Abstract

    The murine homologue of the human chloride channel gene, CLCN5, defects in which are responsible for Dent disease, has been cloned and characterized. We isolated the entire coding region of mouse Clcn5 cDNA and approximately 45 kb of genomic sequence embracing the gene. To study its transcriptional control, the 5' upstream sequences of the mouse Clcn5 gene were cloned into a luciferase reporter vector. Deletion analysis of 1.5 kb of the 5' flanking sequence defined an active promoter region within 128 bp of the putative transcription start site, which is associated with a TATA motif but lacks a CAAT consensus. Within this sequence, there is a motif with homology to a purine-rich sequence responsible for the kidney-specific promoter activity of the rat CLC-K1 gene, another member of the chloride-channel gene family expressed in kidney. An enhancer element that confers a 10- to 20-fold increase in the promoter activity of the mouse Clcn5 gene was found within the first intron. The organization of the human CLCN5 and mouse Clcn5 gene structures is highly conserved, and the sequence of the murine protein is 98% similar to that of human, with its highest expression seen in the kidney. This study thus provides the first identification of the transcriptional control region of, and the basis for an understanding of the regulatory mechanism that controls, this kidney-specific, chloride-channel gene.
  • Tendolkar, I., Arnold, J., Petersson, K. M., Weis, S., Brockhaus-Dumke, A., Van Eijndhoven, P., Buitelaar, J., & Fernández, G. (2007). Probing the neural correlates of associative memory formation: A parametrically analyzed event-related functional MRI study. Brain Research, 1142, 159-168. doi:10.1016/j.brainres.2007.01.040.

    Abstract

    The medial temporal lobe (MTL) is crucial for declarative memory formation, but the function of its subcomponents in associative memory formation remains controversial. Most functional imaging studies on this topic are based on a stepwise approach comparing a condition with and one without associative encoding. Extending this approach we applied additionally a parametric analysis by varying the amount of associative memory formation. We found a hippocampal subsequent memory effect of almost similar magnitude regardless of the amount of associations formed. By contrast, subsequent memory effects in rhinal and parahippocampal cortices were parametrically and positively modulated by the amount of associations formed. Our results indicate that the parahippocampal region supports associative memory formation as tested here and the hippocampus adds a general mnemonic operation. This pattern of results might suggest a new interpretation. Instead of having either a fixed division of labor between the hippocampus (associative memory formation) and the rhinal cortex (non-associative memory formation) or a functionally unitary MTL system, in which all substructures are contributing to memory formation in a similar way, we propose that the location where associations are formed within the MTL depends on the kind of associations bound: If visual single-dimension associations, as used here, can already be integrated within the parahippocampal region, the hippocampus might add a general purpose mnemonic operation only. In contrast, if associations have to be formed across widely distributed neocortical representations, the hippocampus may provide a binding operation in order to establish a coherent memory.
  • Ter Keurs, M., Brown, C. M., Hagoort, P., & Stegeman, D. F. (1999). Electrophysiological manifestations of open- and closed-class words in patients with Broca's aphasia with agrammatic comprehension: An event-related brain potential study. Brain, 122, 839-854. doi:10.1093/brain/122.5.839.

    Abstract

    This paper presents electrophysiological data on the on-line processing of open- and closed-class words in patients with Broca’s aphasia with agrammatic comprehension. Event-related brain potentials were recorded from the scalp when Broca patients and non-aphasic control subjects were visually presented with a story in which the words appeared one at a time on the screen. Separate waveforms were computed for open- and closed-class words. The non-aphasic control subjects showed clear differences between the processing of open- and closed-class words in an early (210-375 ms) and a late (400-700 ms) time-window. The early electrophysiological differences reflect the first manifestation of the availability of word-category information from the mental lexicon. The late differences presumably relate to post-lexical semantic and syntactic processing. In contrast to the control subjects, the Broca patients showed no early vocabulary-class effect and only a limited late effect. The results suggest that an important factor in the agrammatic comprehension deficit of Broca’s aphasics is a delayed and/or incomplete availability of word-class information.
  • Terrill, A. (2007). [Review of the book Papuan pasts: Cultural, linguistic and biological histories of Papuan-speaking people ed. by Andrew Pawley, Robert Attenborough, Jack Golson, and Robin Hide]. Oceanic Linguistics, 46(1), 313-321. doi:10.1353/ol.2007.0025.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2004). Semantic generality, input frequency and the acquisition of syntax. Journal of Child Language, 31(1), 61-99. doi:10.1017/S0305000903005956.

    Abstract

    In many areas of language acquisition, researchers have suggested that semantic generality plays an important role in determining the order of acquisition of particular lexical forms. However, generality is typically confounded with the effects of input frequency and it is therefore unclear to what extent semantic generality or input frequency determines the early acquisition of particular lexical items. The present study evaluates the relative influence of semantic status and properties of the input on the acquisition of verbs and their argument structures in the early speech of 9 English-speaking children from 2;0 to 3;0. The children's early verb utterances are examined with respect to (1) the order of acquisition of particular verbs in three different constructions, (2) the syntactic diversity of use of individual verbs, (3) the relative proportional use of semantically general verbs as a function of total verb use, and (4) their grammatical accuracy. The data suggest that although measures of semantic generality correlate with various measures of early verb use, once the effects of verb use in the input are removed, semantic generality is not a significant predictor of early verb use. The implications of these results for semantic-based theories of verb argument structure acquisition are discussed.
  • Tomasello, M., Carpenter, M., & Liszkowski, U. (2007). A new look at infant pointing. Child Development, 78, 705-722. doi:10.1111/j.1467-8624.2007.01025.x.

    Abstract

    The current article proposes a new theory of infant pointing involving multiple layers of intentionality and shared intentionality. In the context of this theory, evidence is presented for a rich interpretation of prelinguistic communication, that is, one that posits that when 12-month-old infants point for an adult they are in some sense trying to influence her mental states. Moreover, evidence is also presented for a deeply social view in which infant pointing is best understood—on many levels and in many ways—as depending on uniquely human skills and motivations for cooperation and shared intentionality (e.g., joint intentions and attention with others). Children's early linguistic skills are built on this already existing platform of prelinguistic communication.
  • Trilsbeek, P. (2004). Report from DoBeS training week. Language Archive Newsletter, 1(3), 12-12.
  • Trilsbeek, P. (2004). DoBeS Training Course. Language Archive Newsletter, 1(2), 6-6.
  • Van den Brink, D., & Hagoort, P. (2004). The influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension as revealed by ERPs. Journal of Cognitive Neuroscience, 16(6), 1068-1084. doi:10.1162/0898929041502670.

    Abstract

    An event-related brain potential experiment was carried out to investigate the influence of semantic and syntactic context constraints on lexical selection and integration in spoken-word comprehension. Subjects were presented with constraining spoken sentences that contained a critical word that was either (a) congruent, (b) semantically and syntactically incongruent, but beginning with the same initial phonemes as the congruent critical word, or (c) semantically and syntactically incongruent, beginning with phonemes that differed from the congruent critical word. Relative to the congruent condition, an N200 effect reflecting difficulty in the lexical selection process was obtained in the semantically and syntactically incongruent condition where word onset differed from that of the congruent critical word. Both incongruent conditions elicited a large N400 followed by a left anterior negativity (LAN) time-locked to the moment of word category violation and a P600 effect. These results would best fit within a cascaded model of spoken-word processing, which assumes optimal use of contextual information during spoken-word identification by allowing for semantic and syntactic processing to take place in parallel after bottom-up activation of a set of candidates, and lexical integration to proceed with a limited number of candidates that still match the acoustic input.
  • van Kuijk, D., & Boves, L. (1999). Acoustic characteristics of lexical stress in continuous telephone speech. Speech Communication, 27(2), 95-111. doi:10.1016/S0167-6393(98)00069-7.

    Abstract

    In this paper we investigate acoustic differences between vowels in syllables that do or do not carry lexical stress. In doing so, we concentrated on segmental acoustic phonetic features that are conventionally assumed to differ between stressed and unstressed syllables, viz. Duration, Energy and Spectral Tilt. The speech material in this study differs from the type of material used in previous research: instead of specially constructed sentences we used phonetically rich sentences from the Dutch POLYPHONE corpus. Most of the Duration, Energy and Spectral Tilt features that we used in the investigation show statistically significant differences for the population means of stressed and unstressed vowels. However, it also appears that the distributions overlap to such an extent that automatic detection of stressed and unstressed syllables yields correct classifications of 72.6% at best. It is argued that this result is due to the large variety in the ways in which the abstract linguistic feature `lexical stress' is realized in the acoustic speech signal. Our findings suggest that a lexical stress detector has little use for a single pass decoder in an automatic speech recognition (ASR) system, but could still play a useful role as an additional knowledge source in a multi-pass decoder.
  • Van Wijk, C., & Kempen, G. (1982). De ontwikkeling van syntactische formuleervaardigheid bij kinderen van 9 tot 16 jaar. Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden, 37(8), 491-509.

    Abstract

    An essential phenomenon in the development towards syntactic maturity after early childhood is the increasing use of so-called sentence-combining transformations. Especially by using subordination, complex sentences are produced. The research reported here is an attempt to arrive at a more adequate characterization and explanation. Our starting point was an analysis of 280 texts written by Dutch-speaking pupils of the two highest grades of the primary school and the four lowest grades of three different types of secondary education. It was examined whether systematic shifts in the use of certain groups of so-called function words could be traced. We concluded that the development of the syntactic formulating ability can be characterized as an increase in connectivity: the use of all kinds of function words which explicitly mark logico-semantic relations between propositions. This development starts by inserting special adverbs and coordinating conjunctions resulting in various types of coordination. In a later stage, the syntactic patterning of the sentence is affected as well (various types of subordination). The increase in sentence complexity is only one aspect of the entire development. An explanation for the increase in connectivity is offered based upon a distinction between narrative and expository language use. The latter, but not the former, is characterized by frequent occurrence of connectives. The development in syntactic formulating ability includes a high level of skill in expository language use. Speed of development is determined by intensity of training, e.g. in scholastic and occupational settings.
  • Van Alphen, P. M., De Bree, E., Gerrits, E., De Jong, J., Wilsenach, C., & Wijnen, F. (2004). Early language development in children with a genetic risk of dyslexia. Dyslexia, 10, 265-288. doi:10.1002/dys.272.

    Abstract

    We report on a prospective longitudinal research programme exploring the connection between language acquisition deficits and dyslexia. The language development profile of children at-risk for dyslexia is compared to that of age-matched controls as well as of children who have been diagnosed with specific language impairment (SLI). The experiments described concern the perception and production of grammatical morphology, categorical perception of speech sounds, phonological processing (non-word repetition), mispronunciation detection, and rhyme detection. The results of each of these indicate that the at-risk children as a group underperform in comparison to the controls, and that, in most cases, they approach the SLI group. It can be concluded that dyslexia most likely has precursors in language development, also in domains other than those traditionally considered conditional for the acquisition of literacy skills. The dyslexia-SLI connection awaits further, particularly qualitative, analyses.
  • Van Berkum, J. J. A., Brown, C. M., & Hagoort, P. (1999). Early referential context effects in sentence processing: Evidence from event-related brain potentials. Journal of Memory and Language, 41(2), 147-182. doi:10.1006/jmla.1999.2641.

    Abstract

    An event-related brain potentials experiment was carried out to examine the interplay of referential and structural factors during sentence processing in discourse. Subjects read (Dutch) sentences beginning like “David told the girl that … ” in short story contexts that had introduced either one or two referents for a critical singular noun phrase (“the girl”). The waveforms showed that within 280 ms after onset of the critical noun the reader had already determined whether the noun phrase had a unique referent in earlier discourse. Furthermore, this referential information was immediately used in parsing the rest of the sentence, which was briefly ambiguous between a complement clause (“ … that there would be some visitors”) and a relative clause (“ … that had been on the phone to hang up”). A consistent pattern of P600/SPS effects elicited by various subsequent disambiguations revealed that a two-referent discourse context had led the parser to initially pursue the relative-clause alternative to a larger extent than a one-referent context. Together, the results suggest that during the processing of sentences in discourse, structural and referential sources of information interact on a word-by-word basis.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1997). Electrophysiological evidence on the time course of semantic and phonological processes in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(4), 787-806.

    Abstract

    The temporal properties of semantic and phonological processes in speech production were investigated in a new experimental paradigm using movement-related brain potentials. The main experimental task was picture naming. In addition, a 2-choice reaction go/no-go procedure was included, involving a semantic and a phonological categorization of the picture name. Lateralized readiness potentials (LRPs) were derived to test whether semantic and phonological information activated motor processes at separate moments in time. An LRP was only observed on no-go trials when the semantic (not the phonological) decision determined the response hand. Varying the position of the critical phoneme in the picture name did not affect the onset of the LRP but rather influenced when the LRP began to differ on go and no-go trials and allowed the duration of phonological encoding of a word to be estimated. These results provide electrophysiological evidence for early semantic activation and later phonological encoding.
  • Van Berkum, J. J. A., Koornneef, A. W., Otten, M., & Nieuwland, M. S. (2007). Establishing reference in language comprehension: An electrophysiological perspective. Brain Research, 1146, 158-171. doi:10.1016/j.brainres.2006.06.091.

    Abstract

    The electrophysiology of language comprehension has long been dominated by research on syntactic and semantic integration. However, to understand expressions like "he did it" or "the little girl", combining word meanings in accordance with semantic and syntactic constraints is not enough: readers and listeners also need to work out what or who is being referred to. We review our event-related brain potential research on the processes involved in establishing reference, and present a new experiment in which we examine when and how the implicit causality associated with specific interpersonal verbs affects the interpretation of a referentially ambiguous pronoun. The evidence suggests that upon encountering a singular noun or pronoun, readers and listeners immediately inspect their situation model for a suitable discourse entity, such that they can discriminate between having too many, too few, or exactly the right number of referents within at most half a second. Furthermore, our implicit causality findings indicate that a fragment like "David praised Linda because..." can immediately foreground a particular referent, to the extent that a subsequent "he" is at least initially construed as a syntactic error. In all, our brain potential findings suggest that referential processing is highly incremental, and not necessarily contingent upon the syntax. In addition, they demonstrate that we can use ERPs to relatively selectively keep track of how readers and listeners establish reference.
  • Van Alphen, P. M., & Smits, R. (2004). Acoustical and perceptual analysis of the voicing distinction in Dutch initial plosives: The role of prevoicing. Journal of Phonetics, 32(4), 455-491. doi:10.1016/j.wocn.2004.05.001.

    Abstract

    Three experiments investigated the voicing distinction in Dutch initial labial and alveolar plosives. The difference between voiced and voiceless Dutch plosives is generally described in terms of the presence or absence of prevoicing (negative voice onset time). Experiment 1 showed, however, that prevoicing was absent in 25% of voiced plosive productions across 10 speakers. The production of prevoicing was influenced by place of articulation of the plosive, by whether the plosive occurred in a consonant cluster or not, and by speaker sex. Experiment 2 was a detailed acoustic analysis of the voicing distinction, which identified several acoustic correlates of voicing. Prevoicing appeared to be by far the best predictor. Perceptual classification data revealed that prevoicing was indeed the strongest cue that listeners use when classifying plosives as voiced or voiceless. In the cases where prevoicing was absent, other acoustic cues influenced classification, such that some of these tokens were still perceived as being voiced. These secondary cues were different for the two places of articulation. We discuss the paradox raised by these findings: although prevoicing is the most reliable cue to the voicing distinction for listeners, it is not reliably produced by speakers.
