Publications

  • Robinson, S. (2006). The phoneme inventory of the Aita dialect of Rotokas. Oceanic Linguistics, 45(1), 206-209.

    Abstract

    Rotokas is famous for possessing one of the world’s smallest phoneme inventories. According to one source, the Central dialect of Rotokas possesses only 11 segmental phonemes (five vowels and six consonants) and lacks nasals while the Aita dialect possesses a similar-sized inventory in which nasals replace voiced stops. However, recent fieldwork reveals that the Aita dialect has, in fact, both voiced and nasal stops, making for an inventory of 14 segmental phonemes (five vowels and nine consonants). The correspondences between Central and Aita Rotokas suggest that the former is innovative with respect to its consonant inventory and the latter conservative, and that the small inventory of Central Rotokas arose by collapsing the distinction between voiced and nasal stops.
  • Roelofs, A. (2007). On the modelling of spoken word planning: Rejoinder to La Heij, Starreveld, and Kuipers (2007). Language and Cognitive Processes, 22(8), 1281-1286. doi:10.1080/01690960701462291.

    Abstract

    The author contests several claims of La Heij, Starreveld, and Kuipers (this issue) concerning the modelling of spoken word planning. The claims are about the relevance of error findings, the interaction between semantic and phonological factors, the explanation of word-word findings, the semantic relatedness paradox, and production rules.
  • Roelofs, A. (2006). The influence of spelling on phonological encoding in word reading, object naming, and word generation. Psychonomic Bulletin & Review, 13(1), 33-37.

    Abstract

    Does the spelling of a word mandatorily constrain spoken word production, or does it do so only when spelling is relevant for the production task at hand? Damian and Bowers (2003) reported spelling effects in spoken word production in English using a prompt–response word generation task. Preparation of the response words was disrupted when the responses shared initial phonemes that differed in spelling, suggesting that spelling constrains speech production mandatorily. The present experiments, conducted in Dutch, tested for spelling effects using word production tasks in which spelling was clearly relevant (oral reading in Experiment 1) or irrelevant (object naming and word generation in Experiments 2 and 3, respectively). Response preparation was disrupted by spelling inconsistency only in word reading, suggesting that the spelling of a word constrains spoken word production in Dutch only when it is relevant for the word production task at hand.
  • Roelofs, A. (2006). Context effects of pictures and words in naming objects, reading words, and generating simple phrases. Quarterly Journal of Experimental Psychology, 59(10), 1764-1784. doi:10.1080/17470210500416052.

    Abstract

    Five language production experiments examined which aspects of words are activated in memory by context pictures and words. Context pictures yielded Stroop-like and semantic effects on response times when participants generated gender-marked noun phrases in response to written words (Experiment 1A). However, pictures yielded no such effects when participants simply read aloud the noun phrases (Experiment 2). Moreover, pictures yielded a gender congruency effect in generating gender-marked noun phrases in response to the written words (Experiments 3A and 3B). These findings suggest that context pictures activate lemmas (i.e., representations of syntactic properties), which leads to effects only when lemmas are needed to generate a response (i.e., in Experiments 1A, 3A, and 3B, but not in Experiment 2). Context words yielded Stroop-like and semantic effects in picture naming (Experiment 1B). Moreover, words yielded Stroop-like but no semantic effects in reading nouns (Experiment 4) and in generating noun phrases (Experiment 5). These findings suggest that context words activate the lemmas and forms of their names, which leads to semantic effects when lemmas are required for responding (Experiment 1B) but not when only the forms are required (Experiment 4). WEAVER++ simulations of the results are presented.
  • Roelofs, A. (2007). A critique of simple name-retrieval models of spoken word planning. Language and Cognitive Processes, 22(8), 1237-1260. doi:10.1080/01690960701461582.

    Abstract

    Simple name-retrieval models of spoken word planning (Bloem & La Heij, 2003; Starreveld & La Heij, 1996) maintain (1) that there are two levels in word planning, a conceptual and a lexical phonological level, and (2) that planning a word in both object naming and oral reading involves the selection of a lexical phonological representation. Here, the name-retrieval models are compared to more complex models with respect to their ability to account for relevant data. It appears that the name-retrieval models cannot easily account for several relevant findings, including some speech error biases, types of morpheme errors, and context effects on the latencies of responding to pictures and words. New analyses of the latency distributions in previous studies also pose a challenge. More complex models account for all these findings. It is concluded that the name-retrieval models are too simple and that the greater complexity of the other models is warranted.
  • Roelofs, A. (2007). Attention and gaze control in picture naming, word reading, and word categorizing. Journal of Memory and Language, 57(2), 232-251. doi:10.1016/j.jml.2006.10.001.

    Abstract

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture–word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to manually indicate its direction. The experiments showed that the temporal coordination of vocal responding and gaze shifting depends on the vocal task and, to a lesser extent, on the type of relationship between picture and word. There was a close temporal link between gaze shifting and manual responding, suggesting that the gaze shifts indexed shifts of attention between the vocal and manual tasks. Computer simulations showed that a simple extension of WEAVER++ [Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42, 107–142.; Roelofs, A. (2003). Goal-referenced selection of verbal action: modeling attentional control in the Stroop task. Psychological Review, 110, 88–125.] with assumptions about attentional control in the coordination of vocal responding, gaze shifting, and manual responding quantitatively accounts for the key findings.
  • Roelofs, A., Van Turennout, M., & Coles, M. G. H. (2006). Anterior cingulate cortex activity can be independent of response conflict in stroop-like tasks. Proceedings of the National Academy of Sciences of the United States of America, 103(37), 13884-13889. doi:10.1073/pnas.0606265103.

    Abstract

    Cognitive control includes the ability to formulate goals and plans of action and to follow these while facing distraction. Previous neuroimaging studies have shown that the presence of conflicting response alternatives in Stroop-like tasks increases activity in dorsal anterior cingulate cortex (ACC), suggesting that the ACC is involved in cognitive control. However, the exact nature of ACC function is still under debate. The prevailing conflict detection hypothesis maintains that the ACC is involved in performance monitoring. According to this view, ACC activity reflects the detection of response conflict and acts as a signal that engages regulative processes subserved by lateral prefrontal brain regions. Here, we provide evidence from functional MRI that challenges this view and favors an alternative view, according to which the ACC has a role in regulation itself. Using an arrow–word Stroop task, subjects responded to incongruent, congruent, and neutral stimuli. A critical prediction made by the conflict detection hypothesis is that ACC activity should be increased only when conflicting response alternatives are present. Our data show that ACC responses are larger for neutral than for congruent stimuli, in the absence of response conflict. This result demonstrates the engagement of the ACC in regulation itself. A computational model of Stroop-like performance instantiating a version of the regulative hypothesis is shown to account for our findings.
  • Roelofs, A. (2006). Functional architecture of naming dice, digits, and number words. Language and Cognitive Processes, 21(1/2/3), 78-111. doi:10.1080/01690960400001846.

    Abstract

    Five chronometric experiments examined the functional architecture of naming dice, digits, and number words. Speakers named pictured dice, Arabic digits, or written number words, while simultaneously trying to ignore congruent or incongruent dice, digit, or number word distractors presented at various stimulus onset asynchronies (SOAs). Stroop-like interference and facilitation effects were obtained from digits and words on dice naming latencies, but not from dice on digit and word naming latencies. In contrast, words affected digit naming latencies and digits affected word naming latencies to the same extent. The peak of the interference was always around SOA = 0 ms, whereas facilitation was constant across distractor-first SOAs. These results suggest that digit naming is achieved like word naming rather than dice naming. WEAVER++ simulations of the results are reported.
  • Roelofs, A., Özdemir, R., & Levelt, W. J. M. (2007). Influences of spoken word planning on speech recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(5), 900-913. doi:10.1037/0278-7393.33.5.900.

    Abstract

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway.
  • Roelofs, A. (2006). Modeling the control of phonological encoding in bilingual speakers. Bilingualism: Language and Cognition, 9(2), 167-176. doi:10.1017/S1366728906002513.

    Abstract

    Phonological encoding is the process by which speakers retrieve phonemic segments for morphemes from memory and use the segments to assemble phonological representations of words to be spoken. When conversing in one language, bilingual speakers have to resist the temptation of encoding word forms using the phonological rules and representations of the other language. We argue that the activation of phonological representations is not restricted to the target language and that the phonological representations of languages are not separate. We advance a view of bilingual control in which condition-action rules determine what is done with the activated phonological information depending on the target language. This view is computationally implemented in the WEAVER++ model. We present WEAVER++ simulations of the cognate facilitation effect (Costa, Caramazza and Sebastián-Gallés, 2000) and the between-language phonological facilitation effect of spoken distractor words in object naming (Hermans, Bongaerts, de Bot and Schreuder, 1998).
  • Roelofs, A. (1997). The WEAVER model of word-form encoding in speech production. Cognition, 64, 249-284. doi:10.1016/S0010-0277(97)00027-9.

    Abstract

    Lexical access in speaking consists of two major steps: lemma retrieval and word-form encoding. In earlier work (Roelofs, A. (1992a). Cognition, 42, 107-142; Roelofs, A. (1993). Cognition, 47, 59-87), I described a model of lemma retrieval. The present paper extends this work by presenting a comprehensive model of the second access step, word-form encoding. The model is called WEAVER (Word-form Encoding by Activation and VERification). Unlike other models of word-form generation, WEAVER is able to provide accounts of response time data, particularly from the picture-word interference paradigm and the implicit priming paradigm. Its key features are (1) retrieval by spreading activation, (2) verification of activated information by a production rule, (3) a rightward incremental construction of phonological representations using a principle of active syllabification (syllables are constructed on the fly rather than stored with lexical items), (4) active competitive selection of syllabic motor programs using a mathematical formalism that generates response times, and (5) the association of phonological speech errors with the selection of syllabic motor programs due to the failure of verification.
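The combination of retrieval by spreading activation and verification by a production rule, as listed in the abstract, can be illustrated with a toy sketch. This is not the WEAVER implementation: the network, node names, spread rate, and decay rate below are all invented for the example.

```python
# Toy sketch of retrieval by spreading activation with verification,
# loosely in the spirit of WEAVER (all numbers and names are invented).

NETWORK = {
    # node -> list of (neighbour, link_weight)
    "cat<word>": [("/k/", 0.3), ("/ae/", 0.3), ("/t/", 0.3)],
    "/k/": [], "/ae/": [], "/t/": [],
}

def spread(activation, steps=3, rate=0.5, decay=0.4):
    """Each step: decay every node, then send a fraction `rate` of each
    node's activation along its weighted links."""
    for _ in range(steps):
        nxt = {node: a * (1 - decay) for node, a in activation.items()}
        for node, a in activation.items():
            for neighbour, w in NETWORK.get(node, []):
                nxt[neighbour] = nxt.get(neighbour, 0.0) + rate * w * a
        activation = nxt
    return activation

def verify(node, target_word):
    """Production-rule check: accept a segment only if it is actually
    linked to the target word, however active it is."""
    return any(n == node for n, _ in NETWORK.get(target_word, []))

act = spread({"cat<word>": 1.0, "/k/": 0.0, "/ae/": 0.0, "/t/": 0.0})
segments = [n for n in act if n != "cat<word>" and verify(n, "cat<word>")]
```

The verification step is what distinguishes this scheme from plain activation maximisation: a spuriously activated node that is not linked to the target word would be rejected.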
  • Rohlfing, K., Loehr, D., Duncan, S., Brown, A., Franklin, A., Kimbara, I., Milde, J.-T., Parrill, F., Rose, T., Schmidt, T., Sloetjes, H., Thies, A., & Wellinghof, S. (2006). Comparison of multimodal annotation tools - workshop report. Gesprächsforschung - Online-Zeitschrift zur verbalen Interaktion, 7, 99-123.
  • Rowland, C. F. (2007). Explaining errors in children’s questions. Cognition, 104(1), 106-134. doi:10.1016/j.cognition.2006.05.011.

    Abstract

    The ability to explain the occurrence of errors in children’s speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection. Journal of Child Language, 29, 813–842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children’s speech, and that errors occur when children resort to other operations to produce questions [e.g. Dąbrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83–102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157–181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.
  • Rowland, C. F., & Fletcher, S. L. (2006). The effect of sampling on estimates of lexical specificity and error rates. Journal of Child Language, 33(4), 859-877. doi:10.1017/S0305000906007537.

    Abstract

    Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.
  • Rubio-Fernández, P. (2007). Suppression in metaphor interpretation: Differences between meaning selection and meaning construction. Journal of Semantics, 24(4), 345-371. doi:10.1093/jos/ffm006.

    Abstract

    Various accounts of metaphor interpretation propose that it involves constructing an ad hoc concept on the basis of the concept encoded by the metaphor vehicle (i.e. the expression used for conveying the metaphor). This paper discusses some of the differences between these theories and investigates their main empirical prediction: that metaphor interpretation involves enhancing properties of the metaphor vehicle that are relevant for interpretation, while suppressing those that are irrelevant. This hypothesis was tested in a cross-modal lexical priming study adapted from early studies on lexical ambiguity. The different patterns of suppression of irrelevant meanings observed in disambiguation studies and in the experiment on metaphor reported here are discussed in terms of differences between meaning selection and meaning construction.
  • De Ruiter, J. P. (2007). Postcards from the mind: The relationship between speech, imagistic gesture and thought. Gesture, 7(1), 21-38.

    Abstract

    In this paper, I compare three different assumptions about the relationship between speech, thought and gesture. These assumptions have profound consequences for theories about the representations and processing involved in gesture and speech production. I associate these assumptions with three simplified processing architectures. In the Window Architecture, gesture provides us with a 'window into the mind'. In the Language Architecture, properties of language have an influence on gesture. In the Postcard Architecture, gesture and speech are planned by a single process to become one multimodal message. The popular Window Architecture is based on the assumption that gestures come, as it were, straight out of the mind. I argue that during the creation of overt imagistic gestures, many processes, especially those related to (a) recipient design, and (b) effects of language structure, cause an observable gesture to be very different from the original thought that it expresses. The Language Architecture and the Postcard Architecture differ from the Window Architecture in that they both incorporate a central component which plans gesture and speech together, however they differ from each other in the way they align gesture and speech. The Postcard Architecture assumes that the process creating a multimodal message involving both gesture and speech has access to the concepts that are available in speech, while the Language Architecture relies on interprocess communication to resolve potential conflicts between the content of gesture and speech.
  • De Ruiter, J. P., Mitterer, H., & Enfield, N. J. (2006). Projecting the end of a speaker's turn: A cognitive cornerstone of conversation. Language, 82(3), 515-535.

    Abstract

    A key mechanism in the organization of turns at talk in conversation is the ability to anticipate or PROJECT the moment of completion of a current speaker’s turn. Some authors suggest that this is achieved via lexicosyntactic cues, while others argue that projection is based on intonational contours. We tested these hypotheses in an on-line experiment, manipulating the presence of symbolic (lexicosyntactic) content and intonational contour of utterances recorded in natural conversations. When hearing the original recordings, subjects can anticipate turn endings with the same degree of accuracy attested in real conversation. With intonational contour entirely removed (leaving intact words and syntax, with a completely flat pitch), there is no change in subjects’ accuracy of end-of-turn projection. But in the opposite case (with original intonational contour intact, but with no recognizable words), subjects’ performance deteriorates significantly. These results establish that the symbolic (i.e. lexicosyntactic) content of an utterance is necessary (and possibly sufficient) for projecting the moment of its completion, and thus for regulating conversational turn-taking. By contrast, and perhaps surprisingly, intonational contour is neither necessary nor sufficient for end-of-turn projection.
  • De Ruiter, J. P. (2006). Can gesticulation help aphasic people speak, or rather, communicate? Advances in Speech-Language Pathology, 8(2), 124-127. doi:10.1080/14417040600667285.

    Abstract

    As Rose (2006) discusses in the lead article, two camps can be identified in the field of gesture research: those who believe that gesticulation enhances communication by providing extra information to the listener, and those who believe that gesticulation is not communicative, but rather facilitates speaker-internal word-finding processes. I review a number of key studies relevant to this controversy and conclude that the available empirical evidence supports the notion that gesture is a communicative device which can compensate for problems in speech by providing information in gesture. Following that, I discuss the finding by Rose and Douglas (2001) that making gestures does facilitate word production in some patients with aphasia. I argue that the gestures produced in the experiment by Rose and Douglas are not guaranteed to be of the same kind as the gestures produced spontaneously under naturalistic, communicative conditions, which makes it difficult to generalise from that particular study to gesture behaviour in general. As a final point, I encourage researchers in the area of aphasia to put more emphasis on communication in naturalistic contexts (e.g., conversation) when testing the capabilities of people with aphasia.
  • Salverda, A. P., Dahan, D., Tanenhaus, M. K., Crosswhite, K., Masharov, M., & McDonough, J. (2007). Effects of prosodically modulated sub-phonetic variation on lexical competition. Cognition, 105(2), 466-476. doi:10.1016/j.cognition.2006.10.008.

    Abstract

    Eye movements were monitored as participants followed spoken instructions to manipulate one of four objects pictured on a computer screen. Target words occurred in utterance-medial (e.g., Put the cap next to the square) or utterance-final position (e.g., Now click on the cap). Displays consisted of the target picture (e.g., a cap), a monosyllabic competitor picture (e.g., a cat), a polysyllabic competitor picture (e.g., a captain) and a distractor (e.g., a beaker). The relative proportion of fixations to the two types of competitor pictures changed as a function of the position of the target word in the utterance, demonstrating that lexical competition is modulated by prosodically conditioned phonetic variation.
  • Sauter, D., & Scott, S. K. (2007). More than one kind of happiness: Can we recognize vocal expressions of different positive states? Motivation and Emotion, 31(3), 192-199.

    Abstract

    Several theorists have proposed that distinctions are needed between different positive emotional states, and that these discriminations may be particularly useful in the domain of vocal signals (Ekman, 1992b, Cognition and Emotion, 6, 169–200; Scherer, 1986, Psychological Bulletin, 99, 143–165). We report an investigation into the hypothesis that positive basic emotions have distinct vocal expressions (Ekman, 1992b, Cognition and Emotion, 6, 169–200). Non-verbal vocalisations are used that map onto five putative positive emotions: Achievement/Triumph, Amusement, Contentment, Sensual Pleasure, and Relief. Data from categorisation and rating tasks indicate that each vocal expression is accurately categorised and consistently rated as expressing the intended emotion. This pattern is replicated across two language groups. These data, we conclude, provide evidence for the existence of robustly recognisable expressions of distinct positive emotions.
  • Scharenborg, O., Seneff, S., & Boves, L. (2007). A two-pass approach for handling out-of-vocabulary words in a large vocabulary recognition task. Computer Speech & Language, 21, 206-218. doi:10.1016/j.csl.2006.03.003.

    Abstract

    This paper addresses the problem of recognizing a vocabulary of over 50,000 city names in a telephone access spoken dialogue system. We adopt a two-stage framework in which only major cities are represented in the first stage lexicon. We rely on an unknown word model encoded as a phone loop to detect OOV city names (referred to as ‘rare city’ names). We use SpeM, a tool that can extract words and word-initial cohorts from phone graphs from a large fallback lexicon, to provide an N-best list of promising city name hypotheses on the basis of the phone graph corresponding to the OOV. This N-best list is then inserted into the second stage lexicon for a subsequent recognition pass. Experiments were conducted on a set of spontaneous telephone-quality utterances, each containing one rare city name. It appeared that SpeM was able to include nearly 75% of the correct city names in an N-best hypothesis list of 3000 city names. With the names found by SpeM to extend the lexicon of the second stage recognizer, a word accuracy of 77.3% could be obtained. The best one-stage system yielded a word accuracy of 72.6%. The absolute number of correctly recognized rare city names almost doubled, from 62 for the best one-stage system to 102 for the best two-stage system. However, even the best two-stage system recognized only about one-third of the rare city names retrieved by SpeM. The paper discusses ways for improving the overall performance in the context of an application.
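The two-stage control flow described in this abstract can be caricatured in a few lines of Python. The lexicons are toy examples and a shared-prefix score stands in for SpeM's phone-graph search; none of the names below come from the paper.

```python
# Toy sketch of a two-pass OOV strategy: pass 1 knows only a small lexicon
# and emits an OOV marker otherwise; pass 2 re-recognises with an N-best
# shortlist drawn from a large fallback lexicon. (All names are invented,
# and a shared-prefix matcher stands in for the real phone-graph search.)

MAJOR_CITIES = {"amsterdam", "rotterdam", "utrecht"}
FALLBACK_LEXICON = {"amsterdam", "rotterdam", "utrecht",
                    "appingedam", "amstelveen", "apeldoorn"}

def shared_prefix(a, b):
    """Length of the common prefix of two strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def first_pass(spoken):
    """Stage 1: recognise against the small lexicon, else flag an OOV."""
    return spoken if spoken in MAJOR_CITIES else "<OOV>"

def two_pass(spoken, n=3):
    """Stage 2: on an OOV, extend the lexicon with an N-best shortlist
    and run a second recognition pass over it."""
    hyp = first_pass(spoken)
    if hyp != "<OOV>":
        return hyp
    nbest = sorted(FALLBACK_LEXICON,
                   key=lambda w: -shared_prefix(w, spoken))[:n]
    return spoken if spoken in nbest else nbest[0]
```

The design point the paper exploits is that the second pass only has to discriminate among the N-best candidates, not the full 50,000-name vocabulary.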
  • Scharenborg, O., Ten Bosch, L., & Boves, L. (2007). 'Early recognition' of polysyllabic words in continuous speech. Computer Speech & Language, 21, 54-71. doi:10.1016/j.csl.2005.12.001.

    Abstract

    Humans are able to recognise a word before its acoustic realisation is complete. This is in contrast to conventional automatic speech recognition (ASR) systems, which compute the likelihood of a number of hypothesised word sequences and identify the recognised words from a trace-back of the hypothesis with the highest eventual score, in order to maximise efficiency and performance. In the present paper, we present an ASR system, SpeM, based on principles known from the field of human word recognition, that is able to model the human capability of ‘early recognition’ by computing word activation scores (based on negative log likelihood scores) during the speech recognition process. Experiments on 1463 polysyllabic words in 885 utterances showed that 64.0% (936) of these polysyllabic words were recognised correctly at the end of the utterance. For 81.1% of the 936 correctly recognised polysyllabic words, the local word activation allowed us to identify the word before its last phone was available, and 64.1% of those words were already identified one phone after their lexical uniqueness point. We investigated two types of predictors for deciding whether a word is considered recognised before the end of its acoustic realisation. The first type is related to the absolute and relative values of the word activation, which trade false acceptances for false rejections. The second type of predictor is related to the number of phones of the word that have already been processed and the number of phones that remain until the end of the word. The results showed that SpeM’s performance increases as the amount of acoustic evidence in support of a word increases and the risk of future mismatches decreases.
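The first type of predictor, combining an absolute activation threshold with a relative margin over the runner-up, can be sketched as a simple decision rule. The activation traces and threshold values below are hypothetical; SpeM's actual activations are derived from negative log likelihoods and its decision machinery is more elaborate.

```python
# Sketch of an 'early recognition' decision rule: accept the leading word
# before its acoustic offset once its activation passes an absolute
# threshold AND leads the runner-up by a relative margin.
# (Traces and thresholds are invented for illustration.)

def early_recognition_point(traces, abs_thresh=0.6, rel_margin=0.2):
    """traces: {word: [activation after phone 1, phone 2, ...]}.
    Return (word, phone_index) at the first phone where one word
    satisfies both predictors, or None if no early decision is made."""
    n_phones = max(len(t) for t in traces.values())
    for i in range(n_phones):
        scored = sorted(
            ((t[i] if i < len(t) else t[-1]), w) for w, t in traces.items()
        )
        best, runner_up = scored[-1], scored[-2]
        if best[0] >= abs_thresh and best[0] - runner_up[0] >= rel_margin:
            return best[1], i
    return None

# Hypothetical activation traces for three lexical candidates:
traces = {
    "captain": [0.2, 0.4, 0.7, 0.9],
    "capital": [0.2, 0.4, 0.3, 0.1],
    "cap":     [0.3, 0.2, 0.1, 0.0],
}
```

Raising `abs_thresh` or `rel_margin` trades false acceptances for false rejections, which is exactly the trade-off the abstract attributes to this predictor type.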
  • Scharenborg, O. (2007). Reaching over the gap: A review of efforts to link human and automatic speech recognition research. Speech Communication, 49, 336-347. doi:10.1016/j.specom.2007.01.009.

    Abstract

    The fields of human speech recognition (HSR) and automatic speech recognition (ASR) both investigate parts of the speech recognition process and have word recognition as their central issue. Although the research fields appear closely related, their aims and research methods are quite different. Despite these differences, there has lately been a growing interest in possible cross-fertilisation: researchers from both ASR and HSR are realising the potential benefit of looking at the research field on the other side of the ‘gap’. In this paper, we provide an overview of past and present efforts to link human and automatic speech recognition research, together with an overview of the literature describing the performance difference between machines and human listeners. The focus of the paper is on the mutual benefits to be derived from establishing closer collaborations and knowledge interchange between ASR and HSR. The paper ends with an argument for more and closer collaborations between researchers of ASR and HSR to further improve research in both fields.
  • Scharenborg, O., Wan, V., & Moore, R. K. (2007). Towards capturing fine phonetic variation in speech using articulatory features. Speech Communication, 49, 811-826. doi:10.1016/j.specom.2007.01.005.

    Abstract

    The ultimate goal of our research is to develop a computational model of human speech recognition that is able to capture the effects of fine-grained acoustic variation on speech recognition behaviour. As part of this work we are investigating automatic feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. In the experiments reported here, we analysed the classification results from support vector machines (SVMs) and multilayer perceptrons (MLPs). MLPs have been widely and successfully used for the task of multi-value articulatory feature classification, while (to the best of our knowledge) SVMs have not. This paper compares the performance of the two classifiers and analyses the results in order to better understand the articulatory representations. It was found that the SVMs outperformed the MLPs for five out of the seven articulatory feature classes we investigated, while using only 8.8–44.2% of the training material used for training the MLPs. The structure in the misclassifications of the SVMs and MLPs suggested that there might be a mismatch between the characteristics of the classification systems and the characteristics of the description of the AF values themselves. The analyses showed that some of the misclassified features are inherently confusable given the acoustic space. We concluded that, in order to come to a feature set that can be used for a reliable and accurate automatic description of the speech signal, it could be beneficial to move away from quantised representations.
  • Schiller, N. O., Schuhmann, T., Neyndorff, A. C., & Jansma, B. M. (2006). The influence of semantic category membership on syntactic decisions: A study using event-related brain potentials. Brain Research, 1082(1), 153-164. doi:10.1016/j.brainres.2006.01.087.

    Abstract

    An event-related brain potentials (ERP) experiment was carried out to investigate the influence of semantic category membership on syntactic decision-making. Native speakers of German viewed a series of words that were semantically marked or unmarked for gender and made go/no-go decisions about the grammatical gender of those words. The electrophysiological results indicated that participants could make a gender decision earlier when words were semantically gender-marked than when they were semantically gender-unmarked. Our data provide evidence for the influence of semantic category membership on the decision about the syntactic gender of a visually presented German noun. More specifically, our results support models of language comprehension in which semantic information processing of words is initiated before syntactic information processing is finalized.
  • Schiller, N. O., & Costa, A. (2006). Different selection principles of freestanding and bound morphemes in language production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(5), 1201-1207. doi:10.1037/0278-7393.32.5.1201.

    Abstract

    Freestanding and bound morphemes differ in many (psycho)linguistic aspects. Some theorists have claimed that the representation and retrieval of freestanding and bound morphemes in the course of language production are governed by similar processing mechanisms. Alternatively, it has been proposed that both types of morphemes may be selected for production in different ways. In this article, the authors first review the available experimental evidence related to this topic and then present new experimental data pointing to the notion that freestanding and bound morphemes are retrieved following distinct processing principles: freestanding morphemes are subject to competition, whereas bound morphemes are not.
  • Schiller, N. O. (2006). Lexical stress encoding in single word production estimated by event-related brain potentials. Brain Research, 1112(1), 201-212. doi:10.1016/j.brainres.2006.07.027.

    Abstract

    An event-related brain potentials (ERPs) experiment was carried out to investigate the time course of lexical stress encoding in language production. Native speakers of Dutch viewed a series of pictures corresponding to bisyllabic names which were stressed either on the first or on the second syllable and made go/no-go decisions on the lexical stress location of those picture names. Behavioral results replicated a pattern that was observed earlier, i.e. faster button-press latencies to initial as compared to final stress targets. The electrophysiological results indicated that participants could make a lexical stress decision significantly earlier when picture names had initial than when they had final stress. Moreover, the present data provide an estimate of the time course of lexical stress encoding during single word form formation in language production. When word length is corrected for, the temporal interval for lexical stress encoding specified by the current ERP results falls into the time window previously identified for phonological encoding in language production.
  • Schiller, N. O., Jansma, B. M., Peters, J., & Levelt, W. J. M. (2006). Monitoring metrical stress in polysyllabic words. Language and Cognitive Processes, 21(1/2/3), 112-140. doi:10.1080/01690960400001861.

    Abstract

    This study investigated the monitoring of metrical stress information in internally generated speech. In Experiment 1, Dutch participants were asked to judge whether bisyllabic picture names had initial or final stress. Results showed significantly faster decision times for initially stressed targets (e.g., KAno "canoe") than for targets with final stress (e.g., kaNON "cannon"; capital letters indicate stressed syllables). It was demonstrated that monitoring latencies are not a function of the picture naming or object recognition latencies to the same pictures. Experiments 2 and 3 replicated the outcome of the first experiment with trisyllabic picture names. These results are similar to the findings of Wheeldon and Levelt (1995) in a segment monitoring task. The outcome might be interpreted to demonstrate that phonological encoding in speech production is a rightward incremental process. Alternatively, the data might reflect the sequential nature of a perceptual mechanism used to monitor lexical stress.
  • Schiller, N. O., & Caramazza, A. (2006). Grammatical gender selection and the representation of morphemes: The production of Dutch diminutives. Language and Cognitive Processes, 21, 945-973. doi:10.1080/01690960600824344.

    Abstract

    In this study, we investigated grammatical feature selection during noun phrase production in Dutch. More specifically, we studied the conditions under which different grammatical genders select either the same or different determiners. Pictures of simple objects paired with a gender-congruent or a gender-incongruent distractor word were presented. Participants named the pictures using a noun phrase with the appropriate gender-marked determiner. Auditory (Experiment 1) or visual cues (Experiment 2) indicated whether the noun was to be produced in its standard or diminutive form. Results revealed a cost in naming latencies when target and distractor take different determiner forms independent of whether or not they have the same gender. This replicates earlier results showing that congruency effects are due to competition during the selection of determiner forms rather than gender features. The overall pattern of results supports the view that grammatical feature selection is an automatic consequence of lexical node selection and therefore not subject to interference from incongruent grammatical features. Selection of the correct determiner form, however, is a competitive process, implying that lexical node and grammatical feature selection operate with distinct principles.
  • Schiller, N. O., Meyer, A. S., & Levelt, W. J. M. (1997). The syllabic structure of spoken words: Evidence from the syllabification of intervocalic consonants. Language and Speech, 40(2), 103-140.

    Abstract

    A series of experiments was carried out to investigate the syllable affiliation of intervocalic consonants following short vowels, long vowels, and schwa in Dutch. Special interest was paid to words such as letter ['leter] "id.", where a short vowel is followed by a single consonant. On phonological grounds one may predict that the first syllable should always be closed, but earlier psycholinguistic research had shown that speakers tend to leave these syllables open. In our experiments, bisyllabic word forms were presented aurally, and participants produced their syllables in reversed order (Experiments 1 through 5), or repeated the words inserting a pause between the syllables (Experiment 6). The results showed that participants generally closed syllables with a short vowel. However, in a significant number of the cases they produced open short vowel syllables. Syllables containing schwa, like syllables with a long vowel, were hardly ever closed. Word stress, the phonetic quality of the vowel in the first syllable, and the experimental context influenced syllabification. Taken together, the experiments show that native speakers syllabify bisyllabic Dutch nouns in accordance with a small set of prosodic output constraints. To account for the variability of the results, we propose that these constraints differ in their probabilities of being applied.
  • Segurado, R., Hamshere, M. L., Glaser, B., Nikolov, I., Moskvina, V., & Holmans, P. A. (2007). Combining linkage data sets for meta-analysis and mega-analysis: the GAW15 rheumatoid arthritis data set. BMC Proceedings, 1(Suppl 1): S104.

    Abstract

    We have used the genome-wide marker genotypes from Genetic Analysis Workshop 15 Problem 2 to explore joint evidence for genetic linkage to rheumatoid arthritis across several samples. The data consisted of four high-density genome scans on samples selected for rheumatoid arthritis. We cleaned the data, removed intermarker linkage disequilibrium, and assembled the samples onto a common genetic map using genome sequence positions as a reference for map interpolation. The individual studies were combined first at the genotype level (mega-analysis) prior to a multipoint linkage analysis on the combined sample, and second using the genome scan meta-analysis method after linkage analysis of each sample. The two approaches were compared, and both give strong support to the HLA locus on chromosome 6 as a susceptibility locus. Other regions of interest include loci on chromosomes 11, 2, and 12.
  • Seidl, A., & Johnson, E. K. (2006). Infant word segmentation revisited: Edge alignment facilitates target extraction. Developmental Science, 9(6), 565-573.

    Abstract

    In a landmark study, Jusczyk and Aslin (1995) demonstrated that English-learning infants are able to segment words from continuous speech at 7.5 months of age. In the current study, we explored the possibility that infants segment words from the edges of utterances more readily than the middle of utterances. The same procedure was used as in Jusczyk and Aslin (1995); however, our stimuli were controlled for target word location and infants were given a shorter familiarization time to avoid ceiling effects. Infants were familiarized to one word that always occurred at the edge of an utterance (sentence-initial position for half of the infants and sentence-final position for the other half) and one word that always occurred in sentence-medial position. Our results demonstrate that infants segment words from the edges of an utterance more readily than from the middle of an utterance. In addition, infants segment words from utterance-final position just as readily as they segment words from utterance-initial position. Possible explanations for these results, as well as their implications for current models of the development of word segmentation, are discussed.
  • Sekine, K. (2006). Developmental changes in spatial frame of reference among preschoolers: Spontaneous gestures and speech in route descriptions. The Japanese journal of developmental psychology, 17(3), 263-271.

    Abstract

    This research investigated how spontaneous gestures during speech represent “Frames of Reference” (FoR) among preschool children, and how their FoRs change with age. Four-, five-, and six-year-olds (N=55) described the route from the nursery school to their own homes. Analysis of children’s utterances and gestures showed that mean length of utterance, speech time, and use of landmarks or right/left terms to describe a route all increased with age. Most 4-year-olds made gestures in the direction of the actual route to their homes, and their hands tended to be raised above the shoulder. In contrast, 6-year-olds used gestures to give directions that did not match the actual route, as if they were creating a virtual space in front of the speaker. Some 5- and 6-year-olds produced gestures that represented survey mapping. These results indicated that the development of FoR in childhood may change from an egocentric FoR to a fixed FoR. Verbal encoding skills and commuting experience were also discussed as factors underlying the development of FoR.
  • Senft, G. (2006). Völkerkunde und Linguistik: Ein Plädoyer für interdisziplinäre Kooperation. Zeitschrift für Germanistische Linguistik, 34, 87-104.

    Abstract

    Starting with Hockett’s famous statement on the relationship between linguistics and anthropology - "Linguistics without anthropology is sterile; anthropology without linguistics is blind” - this paper first discusses the historical perspective of the topic. This discussion starts with Herder, Humboldt and Schleiermacher and ends with the present debate on the interrelationship of anthropology and linguistics. Then some excellent examples of interdisciplinary projects within anthropological linguistics (or linguistic anthropology) are presented. And finally it is illustrated why Hockett is still right.
  • Senft, G. (1985). Emic or etic or just another catch 22? A repartee to Hartmut Haberland. Journal of Pragmatics, 9, 845.
  • Senft, G. (1997). [Review of the book The design of language: An introduction to descriptive linguistics by Terry Crowley, John Lynch, Jeff Siegel, and Julie Piau]. Linguistics, 35, 781-785.
  • Senft, G. (2006). A biography in the strict sense of the term [Review of the book Malinowski: Odyssey of an anthropologist 1884-1920, vol. 1 by Michael Young]. Journal of Pragmatics, 38(4), 610-637. doi:10.1016/j.pragma.2005.06.012.
  • Senft, G. (2006). [Review of the book Bilder aus der Deutschen Südsee by Hermann Joseph Hiery]. Paideuma: Mitteilungen zur Kulturkunde, 52, 304-308.
  • Senft, G. (2007). [Review of the book Bislama reference grammar by Terry Crowley]. Linguistics, 45(1), 235-239.
  • Senft, G. (2006). [Review of the book Narrative as social practice: Anglo-Western and Australian Aboriginal oral traditions by Danièle M. Klapproth]. Journal of Pragmatics, 38(8), 1326-1331. doi:10.1016/j.pragma.2005.11.001.
  • Senft, G. (2006). [Review of the book Pacific Pidgins and Creoles: Origins, growth and development by Darrell T. Tryon and Jean-Michel Charpentier]. Linguistics, 44(1), 195-200. doi:10.1515/LING.2006.006.
  • Senft, G. (2007). [Review of the book Serial verb constructions - A cross-linguistic typology by Alexandra Y. Aikhenvald and Robert M. W. Dixon]. Linguistics, 45(4), 833-840. doi:10.1515/LING.2007.024.
  • Senft, G. (1985). How to tell - and understand - a 'dirty' joke in Kilivila. Journal of Pragmatics, 9, 815-834.
  • Senft, G. (1997). Magical conversation on the Trobriand Islands. Anthropos, 92, 369-391.
  • Senft, G. (1985). Kilivila: Die Sprache der Trobriander. Studium Linguistik, 17/18, 127-138.
  • Senft, G. (1985). Klassifikationspartikel im Kilivila: Glossen zu ihrer morphologischen Rolle, ihrem Inventar und ihrer Funktion in Satz und Diskurs. Linguistische Berichte, 99, 373-393.
  • Senft, G. (1985). Weyeis Wettermagie: Eine ethnolinguistische Untersuchung von fünf magischen Formeln eines Wettermagiers auf den Trobriand Inseln. Zeitschrift für Ethnologie, 110(2), 67-90.
  • Senft, G. (1985). Trauer auf Trobriand: Eine ethnologisch/-linguistische Fallstudie. Anthropos, 80, 471-492.
  • Seuren, P. A. M. (2006). The natural logic of language and cognition. Pragmatics, 16(1), 103-138.
  • Seuren, P. A. M. (2007). The theory that dare not speak its name: A rejoinder to Mufwene and Francis. Language Sciences, 29(4), 571-573. doi:10.1016/j.langsci.2007.02.001.
  • Seuren, P. A. M. (1982). De spellingsproblematiek in Suriname: Een inleiding. OSO, 1(1), 71-79.
  • Seuren, P. A. M. (1971). Chomsky, man en werk. De Gids, 134, 298-308.
  • Seuren, P. A. M. (1979). [Review of the book Approaches to natural language ed. by K. Hintikka, J. Moravcsik and P. Suppes]. Leuvense Bijdragen, 68, 163-168.
  • Seuren, P. A. M. (1971). [Review of the book Introduction à la grammaire générative by Nicolas Ruwet]. Linguistics, 10(78), 111-120. doi:10.1515/ling.1972.10.78.72.
  • Seuren, P. A. M. (1971). [Review of the book La linguistique synchronique by Andre Martinet]. Linguistics, 10(78), 109-111. doi:10.1515/ling.1972.10.78.72.
  • Seuren, P. A. M. (1997). [Review of the book Schets van de Nederlandse Taal. Grammatica, poëtica en retorica by Adriaen Verwer, Naar de editie van E. van Driel (1783) vertaald door J. Knol. Ed. Th.A.J.M. Janssen & J. Noordegraaf]. Nederlandse Taalkunde, 4, 370-374.
  • Seuren, P. A. M. (1971). [Review of the book Syntaxis by A. Kraak and W. Klooster]. Foundations of Language, 7(3), 441-445.
  • Seuren, P. A. M. (2006). McCawley’s legacy [Review of the book Polymorphous linguistics: Jim McCawley's legacy ed. by Salikoko S. Mufwene, Elaine J. Francis and Rebecca S. Wheeler]. Language Sciences, 28(5), 521-526. doi:10.1016/j.langsci.2006.02.001.
  • Seuren, P. A. M. (1979). Meer over minder dan hoeft. De Nieuwe Taalgids, 72(3), 236-239.
  • Seuren, P. A. M. (1982). Internal variability in competence. Linguistische Berichte, 77, 1-31.
  • Shatzman, K. B., & McQueen, J. M. (2006). Segment duration as a cue to word boundaries in spoken-word recognition. Perception & Psychophysics, 68(1), 1-16.

    Abstract

    In two eye-tracking experiments, we examined the degree to which listeners use acoustic cues to word boundaries. Dutch participants listened to ambiguous sentences in which stop-initial words (e.g., pot, jar) were preceded by eens (once); the sentences could thus also refer to cluster-initial words (e.g., een spot, a spotlight). The participants made fewer fixations to target pictures (e.g., a jar) when the target and the preceding [s] were replaced by a recording of the cluster-initial word than when they were spliced from another token of the target-bearing sentence (Experiment 1). Although acoustic analyses revealed several differences between the two recordings, only [s] duration correlated with the participants’ fixations (more target fixations for shorter [s]s). Thus, we found that listeners apparently do not use all available acoustic differences equally. In Experiment 2, the participants made more fixations to target pictures when the [s] was shortened than when it was lengthened. Utterance interpretation can therefore be influenced by individual segment duration alone.
  • Shatzman, K. B., & McQueen, J. M. (2006). Prosodic knowledge affects the recognition of newly acquired words. Psychological Science, 17(5), 372-377. doi:10.1111/j.1467-9280.2006.01714.x.

    Abstract

    An eye-tracking study examined the involvement of prosodic knowledge—specifically, the knowledge that monosyllabic words tend to have longer durations than the first syllables of polysyllabic words—in the recognition of newly learned words. Participants learned new spoken words (by associating them to novel shapes): bisyllables and onset-embedded monosyllabic competitors (e.g., baptoe and bap). In the learning phase, the duration of the ambiguous sequence (e.g., bap) was held constant. In the test phase, its duration was longer than, shorter than, or equal to its learning-phase duration. Listeners’ fixations indicated that short syllables tended to be interpreted as the first syllables of the bisyllables, whereas long syllables generated more monosyllabic-word interpretations. Recognition of newly acquired words is influenced by prior prosodic knowledge and is therefore not determined solely on the basis of stored episodes of those words.
  • Shatzman, K. B., & McQueen, J. M. (2006). The modulation of lexical competition by segment duration. Psychonomic Bulletin & Review, 13(6), 966-971.

    Abstract

    In an eye-tracking study, we examined how fine-grained phonetic detail, such as segment duration, influences the lexical competition process during spoken word recognition. Dutch listeners’ eye movements to pictures of four objects were monitored as they heard sentences in which a stop-initial target word (e.g., pijp “pipe”) was preceded by an [s]. The participants made more fixations to pictures of cluster-initial words (e.g., spijker “nail”) when they heard a long [s] (mean duration, 103 msec) than when they heard a short [s] (mean duration, 73 msec). Conversely, the participants made more fixations to pictures of the stop-initial words when they heard a short [s] than when they heard a long [s]. Lexical competition between stop- and cluster-initial words, therefore, is modulated by segment duration differences of only 30 msec.
  • Shi, R., Werker, J. F., & Cutler, A. (2006). Recognition and representation of function words in English-learning infants. Infancy, 10(2), 187-198. doi:10.1207/s15327078in1002_5.

    Abstract

    We examined infants' recognition of functors and the accuracy of the representations that infants construct of the perceived word forms. Auditory stimuli were “Functor + Content Word” versus “Nonsense Functor + Content Word” sequences. Eight-, 11-, and 13-month-old infants heard both real functors and matched nonsense functors (prosodically analogous to their real counterparts but containing a segmental change). Results reveal that 13-month-olds recognized functors with attention to segmental detail. Eight-month-olds did not distinguish real versus nonsense functors. The performance of 11-month-olds fell in between that of the older and younger groups, consistent with an emerging recognition of real functors. The three age groups exhibited a clear developmental trend. We propose that in the earliest stages of vocabulary acquisition, function elements receive no segmentally detailed representations, but such representations are gradually constructed so that once vocabulary growth starts in earnest, fully specified functor representations are in place to support it.
  • Shi, R., Cutler, A., Werker, J., & Cruickshank, M. (2006). Frequency and form as determinants of functor sensitivity in English-acquiring infants. Journal of the Acoustical Society of America, 119(6), EL61-EL67. doi:10.1121/1.2198947.

    Abstract

    High-frequency functors are arguably among the earliest perceived word forms and may assist extraction of initial vocabulary items. Canadian 11- and 8-month-olds were familiarized to pseudo-nouns following either a high-frequency functor the or a low-frequency functor her versus phonetically similar mispronunciations of each, kuh and ler, and then tested for recognition of the pseudo-nouns. A preceding the (but not kuh, her, or ler) facilitated extraction of the pseudo-nouns for 11-month-olds; the is thus well-specified in form for these infants. However, both the and kuh (but not her or ler) facilitated segmentation for 8-month-olds, suggesting an initial underspecified representation of high-frequency functors.
  • Shopen, T., Reid, N., Shopen, G., & Wilkins, D. G. (1997). Ensuring the survival of Aboriginal and Torres Strait islander languages into the 21st century. Australian Review of Applied Linguistics, 10(1), 143-157.

    Abstract

    Aboriginal languages are threatened by speakers’ poor economic and social conditions; some may survive through support for community development, language maintenance, bilingual education, the training of Aboriginal teachers and linguists, and non-Aboriginal teachers of Aboriginal and Islander students.
  • Slobin, D. I., & Bowerman, M. (2007). Interfaces between linguistic typology and child language research. Linguistic Typology, 11(1), 213-226. doi:10.1515/LINGTY.2007.015.
  • Smits, R., Sereno, J., & Jongman, A. (2006). Categorization of sounds. Journal of Experimental Psychology: Human Perception and Performance, 32(3), 733-754. doi:10.1037/0096-1523.32.3.733.

    Abstract

    The authors conducted 4 experiments to test the decision-bound, prototype, and distribution theories for the categorization of sounds. They used as stimuli sounds varying in either resonance frequency or duration. They created different experimental conditions by varying the variance and overlap of 2 stimulus distributions used in a training phase and varying the size of the stimulus continuum used in the subsequent test phase. When resonance frequency was the stimulus dimension, the pattern of categorization-function slopes was in accordance with the decision-bound theory. When duration was the stimulus dimension, however, the slope pattern gave partial support for the decision-bound and distribution theories. The authors introduce a new categorization model combining aspects of decision-bound and distribution theories that gives a superior account of the slope patterns across the 2 stimulus dimensions.
  • Snijders, T. M., Kooijman, V., Cutler, A., & Hagoort, P. (2007). Neurophysiological evidence of delayed segmentation in a foreign language. Brain Research, 1178, 106-113. doi:10.1016/j.brainres.2007.07.080.

    Abstract

    Previous studies have shown that segmentation skills are language-specific, making it difficult to segment continuous speech in an unfamiliar language into its component words. Here we present the first study capturing the delay in segmentation and recognition in the foreign listener using ERPs. We compared the ability of Dutch adults and of English adults without knowledge of Dutch (‘foreign listeners’) to segment familiarized words from continuous Dutch speech. We used the known effect of repetition on the event-related potential (ERP) as an index of recognition of words in continuous speech. Our results show that word repetitions in isolation are recognized with equivalent facility by native and foreign listeners, but word repetitions in continuous speech are not. First, words familiarized in isolation are recognized faster by native than by foreign listeners when they are repeated in continuous speech. Second, when words that have previously been heard only in a continuous-speech context re-occur in continuous speech, the repetition is detected by native listeners, but is not detected by foreign listeners. A preceding speech context facilitates word recognition for native listeners, but delays or even inhibits word recognition for foreign listeners. We propose that the apparent difference in segmentation rate between native and foreign listeners is grounded in the difference in language-specific skills available to the listeners.
  • Snowdon, C. T., & Cronin, K. A. (2007). Cooperative breeders do cooperate. Behavioural Processes, 76, 138-141. doi:10.1016/j.beproc.2007.01.016.

    Abstract

    Bergmüller et al. (2007) make an important contribution to studies of cooperative breeding and provide a theoretical basis for linking the evolution of cooperative breeding with cooperative behavior. We have long been involved in empirical research on the only family of nonhuman primates to exhibit cooperative breeding, the Callitrichidae, which includes marmosets and tamarins, with studies in both field and captive contexts. In this paper we expand on three themes from Bergmüller et al. (2007) with empirical data. First we provide data in support of the importance of helpers and the specific benefits that helpers can gain in terms of fitness. Second, we suggest that mechanisms of rewarding helpers are more common and more effective in maintaining cooperative breeding than punishments. Third, we present a summary of our own research on cooperative behavior in cotton-top tamarins (Saguinus oedipus) where we find greater success in cooperative problem solving than has been reported for non-cooperatively breeding species.
  • Spiteri, E., Konopka, G., Coppola, G., Bomar, J., Oldham, M., Ou, J., Vernes, S. C., Fisher, S. E., Ren, B., & Geschwind, D. (2007). Identification of the transcriptional targets of FOXP2, a gene linked to speech and language, in developing human brain. American Journal of Human Genetics, 81(6), 1144-1157. doi:10.1086/522237.

    Abstract

    Mutations in FOXP2, a member of the forkhead family of transcription factor genes, are the only known cause of developmental speech and language disorders in humans. To date, there are no known targets of human FOXP2 in the nervous system. The identification of FOXP2 targets in the developing human brain, therefore, provides a unique tool with which to explore the development of human language and speech. Here, we define FOXP2 targets in human basal ganglia (BG) and inferior frontal cortex (IFC) by use of chromatin immunoprecipitation followed by microarray analysis (ChIP-chip) and validate the functional regulation of targets in vitro. ChIP-chip identified 285 FOXP2 targets in fetal human brain; statistically significant overlap of targets in BG and IFC indicates a core set of 34 transcriptional targets of FOXP2. We identified targets specific to IFC or BG that were not observed in lung, suggesting important regional and tissue differences in FOXP2 activity. Many target genes are known to play critical roles in specific aspects of central nervous system patterning or development, such as neurite outgrowth, as well as plasticity. Subsets of the FOXP2 transcriptional targets are either under positive selection in humans or differentially expressed between human and chimpanzee brain. This is the first ChIP-chip study to use human brain tissue, making the FOXP2-target genes identified in these studies important to understanding the pathways regulating speech and language in the developing human brain. These data provide the first insight into the functional network of genes directly regulated by FOXP2 in human brain and by evolutionary comparisons, highlighting genes likely to be involved in the development of human higher-order cognitive processes.
  • Sprenger, S. A., Levelt, W. J. M., & Kempen, G. (2006). Lexical access during the production of idiomatic phrases. Journal of Memory and Language, 54(2), 161-184. doi:10.1016/j.jml.2005.11.001.

    Abstract

    In three experiments we test the assumption that idioms have their own lexical entry, which is linked to its constituent lemmas (Cutting & Bock, 1997). Speakers produced idioms or literal phrases (Experiment 1), completed idioms (Experiment 2), or switched between idiom completion and naming (Experiment 3). The results of Experiment 1 show that identity priming speeds up idiom production more effectively than literal phrase production, indicating a hybrid representation of idioms. In Experiment 2, we find effects of both phonological and semantic priming. Thus, elements of an idiom can be primed not only via their word form, but also via the conceptual level. The results of Experiment 3 show that preparing the last word of an idiom primes naming of both phonologically and semantically related targets, indicating that literal word meanings become active during idiom production. The results are discussed within the framework of the hybrid model of idiom representation.
  • Stewart, A., Holler, J., & Kidd, E. (2007). Shallow processing of ambiguous pronouns: Evidence for delay. Quarterly Journal of Experimental Psychology, 60, 1680-1696. doi:10.1080/17470210601160807.
  • Stivers, T., & Majid, A. (2007). Questioning children: Interactional evidence of implicit bias in medical interviews. Social Psychology Quarterly, 70(4), 424-441.

    Abstract

    Social psychologists have shown experimentally that implicit race bias can influence an individual's behavior. Implicit bias has been suggested to be more subtle and less subject to cognitive control than more explicit forms of racial prejudice. Little is known about how implicit bias is manifest in naturally occurring social interaction. This study examines the factors associated with physicians selecting children rather than parents to answer questions in pediatric interviews about routine childhood illnesses. Analysis of the data using a Generalized Linear Latent and Mixed Model demonstrates a significant effect of parent race and education on whether physicians select children to answer questions. Black children and Latino children of low-education parents are less likely to be selected to answer questions than their same aged white peers irrespective of education. One way that implicit bias manifests itself in naturally occurring interaction may be through the process of speaker selection during questioning.
  • Stivers, T., & Robinson, J. D. (2006). A preference for progressivity in interaction. Language in Society, 35(3), 367-392. doi:10.1017/S0047404506060179.

    Abstract

    This article investigates two types of preference organization in interaction: in response to a question that selects a next speaker in multi-party interaction, the preference for answers over non-answer responses as a category of response; and the preference for selected next speakers to respond. It is asserted that the turn allocation rule specified by Sacks, Schegloff & Jefferson (1974), which states that a response by the selected next speaker is relevant at the transition relevance place, is affected by these two preferences once beyond a normal transition space. It is argued that a “second-order” organization is present such that interactants prioritize a preference for answers over a preference for a response by the selected next speaker. This analysis reveals an observable preference for progressivity in interaction.
  • Suomi, K., McQueen, J. M., & Cutler, A. (1997). Vowel harmony and speech segmentation in Finnish. Journal of Memory and Language, 36, 422-444. doi:10.1006/jmla.1996.2495.

    Abstract

    Finnish vowel harmony rules require that if the vowel in the first syllable of a word belongs to one of two vowel sets, then all subsequent vowels in that word must belong either to the same set or to a neutral set. A harmony mismatch between two syllables containing vowels from the opposing sets thus signals a likely word boundary. We report five experiments showing that Finnish listeners can exploit this information in an on-line speech segmentation task. Listeners found it easier to detect words like hymy at the end of the nonsense string puhymy (where there is a harmony mismatch between the first two syllables) than in the string pyhymy (where there is no mismatch). There was no such effect, however, when the target words appeared at the beginning of the nonsense string (e.g., hymypu vs. hymypy). Stronger harmony effects were found for targets containing front harmony vowels (e.g., hymy) than for targets containing back harmony vowels (e.g., palo in kypalo and kupalo). The same pattern of results appeared whether target position within the string was predictable or unpredictable. Harmony mismatch thus appears to provide a useful segmentation cue for the detection of word onsets in Finnish speech.
  • Swaab, T. Y., Brown, C. M., & Hagoort, P. (1997). Spoken sentence comprehension in aphasia: Event-related potential evidence for a lexical integration deficit. Journal of Cognitive Neuroscience, 9(1), 39-66.

    Abstract

    In this study the N400 component of the event-related potential was used to investigate spoken sentence understanding in Broca's and Wernicke's aphasics. The aim of the study was to determine whether spoken sentence comprehension problems in these patients might result from a deficit in the on-line integration of lexical information. Subjects listened to sentences spoken at a normal rate. In half of these sentences, the meaning of the final word of the sentence matched the semantic specifications of the preceding sentence context. In the other half of the sentences, the sentence-final word was anomalous with respect to the preceding sentence context. The N400 was measured to the sentence-final words in both conditions. The results for the aphasic patients (n = 14) were analyzed according to the severity of their comprehension deficit and compared to a group of 12 neurologically unimpaired age-matched controls, as well as a group of 6 nonaphasic patients with a lesion in the right hemisphere. The nonaphasic brain damaged patients and the aphasic patients with a light comprehension deficit (high comprehenders, n = 7) showed an N400 effect that was comparable to that of the neurologically unimpaired subjects. In the aphasic patients with a moderate to severe comprehension deficit (low comprehenders, n = 7), a reduction and delay of the N400 effect was obtained. In addition, the P300 component was measured in a classical oddball paradigm, in which subjects were asked to count infrequent low tones in a random series of high and low tones. No correlation was found between the occurrence of N400 and P300 effects, indicating that changes in the N400 results were related to the patients' language deficit. Overall, the pattern of results was compatible with the idea that aphasic patients with moderate to severe comprehension problems are impaired in the integration of lexical information into a higher order representation of the preceding sentence context.
  • Swingley, D., & Aslin, R. N. (2007). Lexical competition in young children's word learning. Cognitive Psychology, 54(2), 99-132.

    Abstract

    In two experiments, 1.5-year-olds were taught novel words whose sound patterns were phonologically similar to familiar words (novel neighbors) or were not (novel nonneighbors). Learning was tested using a picture-fixation task. In both experiments, children learned the novel nonneighbors but not the novel neighbors. In addition, exposure to the novel neighbors impaired recognition performance on familiar neighbors. Finally, children did not spontaneously use phonological differences to infer that a novel word referred to a novel object. Thus, lexical competition—inhibitory interaction among words in speech comprehension—can prevent children from using their full phonological sensitivity in judging words as novel. These results suggest that word learning in young children, as in adults, relies not only on the discrimination and identification of phonetic categories, but also on evaluating the likelihood that an utterance conveys a new word.
  • Swingley, D. (2007). Lexical exposure and word-form encoding in 1.5-year-olds. Developmental Psychology, 43(2), 454-464. doi:10.1037/0012-1649.43.2.454.

    Abstract

    In this study, 1.5-year-olds were taught a novel word. Some children were familiarized with the word's phonological form before learning the word's meaning. Fidelity of phonological encoding was tested in a picture-fixation task using correctly pronounced and mispronounced stimuli. Only children with additional exposure in familiarization showed reduced recognition performance given slight mispronunciations relative to correct pronunciations; children with fewer exposures did not. Mathematical modeling of vocabulary exposure indicated that children may hear thousands of words frequently enough for accurate encoding. The results provide evidence compatible with partial failure of phonological encoding at 19 months of age, demonstrate that this limitation in learning does not always hinder word recognition, and show the value of infants' word-form encoding in early lexical development.
  • Swinney, D. A., & Cutler, A. (1979). The access and processing of idiomatic expressions. Journal of Verbal Learning and Verbal Behavior, 18, 523-534. doi:10.1016/S0022-5371(79)90284-6.

    Abstract

    Two experiments examined the nature of access, storage, and comprehension of idiomatic phrases. In both studies a Phrase Classification Task was utilized. In this, reaction times to determine whether or not word strings constituted acceptable English phrases were measured. Classification times were significantly faster to idiom than to matched control phrases. This effect held under conditions involving different categories of idioms, different transitional probabilities among words in the phrases, and different levels of awareness of the presence of idioms in the materials. The data support a Lexical Representation Hypothesis for the processing of idioms.
  • Takashima, A., Petersson, K. M., Rutters, F., Tendolkar, I., Jensen, O., Zwarts, M. J., McNaughton, B. L., & Fernández, G. (2006). Declarative memory consolidation in humans: A prospective functional magnetic resonance imaging study. Proceedings of the National Academy of Sciences of the United States of America [PNAS], 103(3), 756-761.

    Abstract

    Retrieval of recently acquired declarative memories depends on the hippocampus, but with time, retrieval is increasingly sustainable by neocortical representations alone. This process has been conceptualized as system-level consolidation. Using functional magnetic resonance imaging, we assessed over the course of three months how consolidation affects the neural correlates of memory retrieval. The duration of slow-wave sleep during a nap/rest period after the initial study session and before the first scan session on day 1 correlated positively with recognition memory performance for items studied before the nap and negatively with hippocampal activity associated with correct confident recognition. Over the course of the entire study, hippocampal activity for correct confident recognition continued to decrease, whereas activity in a ventral medial prefrontal region increased. These findings, together with data obtained in rodents, may prompt a revision of classical consolidation theory, incorporating a transfer of putative linking nodes from hippocampal to prelimbic prefrontal areas.
  • Takashima, A., Nieuwenhuis, I. L. C., Rijpkema, M., Petersson, K. M., Jensen, O., & Fernández, G. (2007). Memory trace stabilization leads to large-scale changes in the retrieval network: A functional MRI study on associative memory. Learning & Memory, 14, 472-479. doi:10.1101/lm.605607.

    Abstract

    Spaced learning with time to consolidate leads to more stable memory traces. However, little is known about the neural correlates of trace stabilization, especially in humans. The present fMRI study contrasted retrieval activity of two well-learned sets of face-location associations, one learned in a massed style and tested on the day of learning (i.e., labile condition) and another learned in a spaced scheme over the course of one week (i.e., stabilized condition). Both sets of associations were retrieved equally well, but the retrieval of stabilized associations was faster and accompanied by large-scale changes in the network supporting retrieval. Cued recall of stabilized as compared with labile associations was accompanied by increased activity in the precuneus, the ventromedial prefrontal cortex, the bilateral temporal pole, and the left temporo-parietal junction. Conversely, memory representational areas such as the fusiform gyrus for faces and the posterior parietal cortex for locations did not change their activity with stabilization. The changes in activation in the precuneus, which also showed increased connectivity with the fusiform area, are likely to be related to the spatial nature of our task. The activation increase in the ventromedial prefrontal cortex, on the other hand, might reflect a general function in stabilized memory retrieval. This area might succeed the hippocampus in linking distributed neocortical representations.
  • Tendolkar, I., Arnold, J., Petersson, K. M., Weis, S., Brockhaus-Dumke, A., Van Eijndhoven, P., Buitelaar, J., & Fernández, G. (2007). Probing the neural correlates of associative memory formation: A parametrically analyzed event-related functional MRI study. Brain Research, 1142, 159-168. doi:10.1016/j.brainres.2007.01.040.

    Abstract

    The medial temporal lobe (MTL) is crucial for declarative memory formation, but the function of its subcomponents in associative memory formation remains controversial. Most functional imaging studies on this topic are based on a stepwise approach comparing a condition with and one without associative encoding. Extending this approach we applied additionally a parametric analysis by varying the amount of associative memory formation. We found a hippocampal subsequent memory effect of almost similar magnitude regardless of the amount of associations formed. By contrast, subsequent memory effects in rhinal and parahippocampal cortices were parametrically and positively modulated by the amount of associations formed. Our results indicate that the parahippocampal region supports associative memory formation as tested here and the hippocampus adds a general mnemonic operation. This pattern of results might suggest a new interpretation. Instead of having either a fixed division of labor between the hippocampus (associative memory formation) and the rhinal cortex (non-associative memory formation) or a functionally unitary MTL system, in which all substructures are contributing to memory formation in a similar way, we propose that the location where associations are formed within the MTL depends on the kind of associations bound: If visual single-dimension associations, as used here, can already be integrated within the parahippocampal region, the hippocampus might add a general purpose mnemonic operation only. In contrast, if associations have to be formed across widely distributed neocortical representations, the hippocampus may provide a binding operation in order to establish a coherent memory.
  • Terrill, A. (2007). [Review of the book Papuan pasts: Cultural, linguistic and biological histories of Papuan-speaking peoples, edited by Andrew Pawley, Robert Attenborough, Jack Golson, and Robin Hide (2005)]. Oceanic Linguistics, 46(1), 313-321. doi:10.1353/ol.2007.0025.
  • Terrill, A. (2006). Body part terms in Lavukaleve, a Papuan language of the Solomon Islands. Language Sciences, 28(2-3), 304-322. doi:10.1016/j.langsci.2005.11.008.

    Abstract

    This paper explores body part terms in Lavukaleve, a Papuan isolate spoken in the Solomon Islands. The full set of body part terms collected so far is presented, and their grammatical properties are explained. It is argued that Lavukaleve body part terms do not enter into partonomic relations with each other, and that a hierarchical structure of body part terms does not apply for Lavukaleve. It is shown too that some universal claims which have been made about the expression of terms relating to limbs are contradicted in Lavukaleve, which has only one general term covering arm, hand, leg and (for some people) foot.
  • Theakston, A. L., Lieven, E. V., Pine, J. M., & Rowland, C. F. (2006). Note of clarification on the coding of light verbs in ‘Semantic generality, input frequency and the acquisition of syntax’ (Journal of Child Language 31, 61–99). Journal of Child Language, 33(1), 191-197. doi:10.1017/S0305000905007178.

    Abstract

    In our recent paper, ‘Semantic generality, input frequency and the acquisition of syntax’ (Journal of Child Language 31, 61–99), we presented data from two-year-old children to examine the question of whether the semantic generality of verbs contributed to their ease and stage of acquisition over and above the effects of their typically high frequency in the language to which children are exposed. We adopted two different categorization schemes to determine whether individual verbs should be considered to be semantically general, or ‘light’, or whether they encoded more specific semantics. These categorization schemes were based on previous work in the literature on the role of semantically general verbs in early verb acquisition, and were designed, in the first case, to be a conservative estimate of semantic generality, including only verbs designated as semantically general by a number of other researchers (e.g. Clark, 1978; Pinker, 1989; Goldberg, 1998), and, in the second case, to be a more inclusive estimate of semantic generality based on Ninio's (1999a,b) suggestion that grammaticalizing verbs encode the semantics associated with semantically general verbs. Under this categorization scheme, a much larger number of verbs were included as semantically general verbs.
  • Tomasello, M., Carpenter, M., & Liszkowski, U. (2007). A new look at infant pointing. Child Development, 78, 705-722. doi:10.1111/j.1467-8624.2007.01025.x.

    Abstract

    The current article proposes a new theory of infant pointing involving multiple layers of intentionality and shared intentionality. In the context of this theory, evidence is presented for a rich interpretation of prelinguistic communication, that is, one that posits that when 12-month-old infants point for an adult they are in some sense trying to influence her mental states. Moreover, evidence is also presented for a deeply social view in which infant pointing is best understood—on many levels and in many ways—as depending on uniquely human skills and motivations for cooperation and shared intentionality (e.g., joint intentions and attention with others). Children's early linguistic skills are built on this already existing platform of prelinguistic communication.
  • Van Alphen, P. M., & McQueen, J. M. (2006). The effect of voice onset time differences on lexical access in Dutch. Journal of Experimental Psychology: Human Perception and Performance, 32(1), 178-196. doi:10.1037/0096-1523.32.1.178.

    Abstract

    Effects on spoken-word recognition of prevoicing differences in Dutch initial voiced plosives were examined. In 2 cross-modal identity-priming experiments, participants heard prime words and nonwords beginning with voiced plosives with 12, 6, or 0 periods of prevoicing or matched items beginning with voiceless plosives and made lexical decisions to visual tokens of those items. Six-period primes had the same effect as 12-period primes. Zero-period primes had a different effect, but only when their voiceless counterparts were real words. Listeners could nevertheless discriminate the 6-period primes from the 12- and 0-period primes. Phonetic detail appears to influence lexical access only to the extent that it is useful: In Dutch, presence versus absence of prevoicing is more informative than amount of prevoicing.
  • Van den Brink, D., Brown, C. M., & Hagoort, P. (2006). The cascaded nature of lexical selection and integration in auditory sentence processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(3), 364-372. doi:10.1037/0278-7393.32.3.364.

    Abstract

    An event-related brain potential experiment was carried out to investigate the temporal relationship between lexical selection and semantic integration in auditory sentence processing. Participants were presented with spoken sentences that ended with a word that was either semantically congruent or anomalous. Information about the moment at which a sentence-final word could uniquely be identified, its isolation point (IP), was compared with the onset of the elicited N400 congruity effect, reflecting semantic integration processing. The results revealed that the onset of the N400 effect occurred prior to the IP of the sentence-final words. Moreover, the factor early or late IP did not affect the onset of the N400. These findings indicate that lexical selection and semantic integration are cascading processes, in that semantic integration processing can start before the acoustic information allows the selection of a unique candidate and seems to be attempted in parallel for multiple candidates that are still compatible with the bottom-up acoustic input.
  • Van Wijk, C., & Kempen, G. (1982). De ontwikkeling van syntactische formuleervaardigheid bij kinderen van 9 tot 16 jaar. Nederlands Tijdschrift voor de Psychologie en haar Grensgebieden, 37(8), 491-509.

    Abstract

    An essential phenomenon in the development towards syntactic maturity after early childhood is the increasing use of so-called sentence-combining transformations. Especially by using subordination, complex sentences are produced. The research reported here is an attempt to arrive at a more adequate characterization and explanation. Our starting point was an analysis of 280 texts written by Dutch-speaking pupils of the two highest grades of the primary school and the four lowest grades of three different types of secondary education. It was examined whether systematic shifts in the use of certain groups of so-called function words could be traced. We concluded that the development of the syntactic formulating ability can be characterized as an increase in connectivity: the use of all kinds of function words which explicitly mark logico-semantic relations between propositions. This development starts by inserting special adverbs and coordinating conjunctions resulting in various types of coordination. In a later stage, the syntactic patterning of the sentence is affected as well (various types of subordination). The increase in sentence complexity is only one aspect of the entire development. An explanation for the increase in connectivity is offered based upon a distinction between narrative and expository language use. The latter, but not the former, is characterized by frequent occurrence of connectives. The development in syntactic formulating ability includes a high level of skill in expository language use. Speed of development is determined by intensity of training, e.g. in scholastic and occupational settings.
  • Van Turennout, M., Hagoort, P., & Brown, C. M. (1997). Electrophysiological evidence on the time course of semantic and phonological processes in speech production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(4), 787-806.

    Abstract

    The temporal properties of semantic and phonological processes in speech production were investigated in a new experimental paradigm using movement-related brain potentials. The main experimental task was picture naming. In addition, a 2-choice reaction go/no-go procedure was included, involving a semantic and a phonological categorization of the picture name. Lateralized readiness potentials (LRPs) were derived to test whether semantic and phonological information activated motor processes at separate moments in time. An LRP was only observed on no-go trials when the semantic (not the phonological) decision determined the response hand. Varying the position of the critical phoneme in the picture name did not affect the onset of the LRP but rather influenced when the LRP began to differ on go and no-go trials and allowed the duration of phonological encoding of a word to be estimated. These results provide electrophysiological evidence for early semantic activation and later phonological encoding.
  • Van Berkum, J. J. A., Koornneef, A. W., Otten, M., & Nieuwland, M. S. (2007). Establishing reference in language comprehension: An electrophysiological perspective. Brain Research, 1146, 158-171. doi:10.1016/j.brainres.2006.06.091.

    Abstract

    The electrophysiology of language comprehension has long been dominated by research on syntactic and semantic integration. However, to understand expressions like "he did it" or "the little girl", combining word meanings in accordance with semantic and syntactic constraints is not enough: readers and listeners also need to work out what or who is being referred to. We review our event-related brain potential research on the processes involved in establishing reference, and present a new experiment in which we examine when and how the implicit causality associated with specific interpersonal verbs affects the interpretation of a referentially ambiguous pronoun. The evidence suggests that upon encountering a singular noun or pronoun, readers and listeners immediately inspect their situation model for a suitable discourse entity, such that they can discriminate between having too many, too few, or exactly the right number of referents within at most half a second. Furthermore, our implicit causality findings indicate that a fragment like "David praised Linda because..." can immediately foreground a particular referent, to the extent that a subsequent "he" is at least initially construed as a syntactic error. In all, our brain potential findings suggest that referential processing is highly incremental, and not necessarily contingent upon the syntax. In addition, they demonstrate that we can use ERPs to relatively selectively keep track of how readers and listeners establish reference.
  • Van Staden, M., & Majid, A. (2006). Body colouring task. Language Sciences, 28(2-3), 158-161. doi:10.1016/j.langsci.2005.11.004.

    Abstract

    This paper outlines a method for collecting information on the extensional meanings of body part terms using a colouring in task.
  • Van Wingen, G., Van Broekhoven, F., Verkes, R. J., Petersson, K. M., Bäckström, T., Buitelaar, J., & Fernández, G. (2007). How progesterone impairs memory for biologically salient stimuli in healthy young women. Journal of Neuroscience, 27(42), 11416-11423. doi:10.1523/JNEUROSCI.1715-07.2007.

    Abstract

    Progesterone, or rather its neuroactive metabolite allopregnanolone, modulates amygdala activity and thereby influences anxiety. Cognition and, in particular, memory are also altered by allopregnanolone. In the present study, we investigated whether allopregnanolone modulates memory for biologically salient stimuli by influencing amygdala activity, which in turn may affect neural processes in other brain regions. A single progesterone dose was administered orally to healthy young women in a double-blind, placebo-controlled, crossover design, and participants were asked to memorize and recognize faces while undergoing functional magnetic resonance imaging. Progesterone decreased recognition accuracy without affecting reaction times. The imaging results show that the amygdala, hippocampus, and fusiform gyrus supported memory formation. Importantly, progesterone decreased responses to faces in the amygdala and fusiform gyrus during memory encoding, whereas it increased hippocampal responses. The progesterone-induced decrease in neural activity in the amygdala and fusiform gyrus predicted the decrease in memory performance across subjects. However, progesterone did not modulate the differential activation between subsequently remembered and subsequently forgotten faces in these areas. A similar pattern of results was observed in the fusiform gyrus and prefrontal cortex during memory retrieval. These results suggest that allopregnanolone impairs memory by reducing the recruitment of those brain regions that support memory formation and retrieval. Given the important role of the amygdala in the modulation of memory, these results suggest that allopregnanolone alters memory by influencing amygdala activity, which in turn may affect memory processes in other brain regions.
  • van de Beek, D., Weisfelt, M., Hoogman, M., de Gans, J., & Schmand, B. (2006). Neuropsychological sequelae of bacterial meningitis: The influence of alcoholism and adjunctive dexamethasone therapy [Letter to the editor]. Brain, 129, E46. doi:10.1093/brain/awl052.

    Abstract

    The article by Schmidt and colleagues (2006) reported neuropsychological sequelae of bacterial and viral meningitis. In a retrospective study, they carefully selected patients and excluded those with concomitant conditions such as alcoholism after Streptococcus pneumoniae meningitis (Schmidt et al., 2006). The authors should be complimented for their solid work; however, some questions can be raised.
  • Van Valin Jr., R. D. (2007). Some thoughts on the reason for the lesser status of typology in the USA as opposed to Europe. Linguistic Typology, 11(1), 253-257. doi:10.1515/LINGTY.2007.019.

    Abstract

    This article addresses the issue of the different status that typology has in American linguistics as opposed to European linguistics. The historical roots of the difference lie in both structural and generative linguistics, in the contrasts between post-Bloomfieldian structuralism in the US vs. Praguean structuralism in Europe, and in the extent of the influence of generative grammar on the two continents.
  • Van Berkum, J. J. A. (1997). Syntactic processes in speech production: The retrieval of grammatical gender. Cognition, 64(2), 115-152. doi:10.1016/S0010-0277(97)00026-7.

    Abstract

    Jescheniak and Levelt (Jescheniak, J.-D., Levelt, W.J.M. 1994. Journal of Experimental Psychology: Learning, Memory and Cognition 20 (4), 824–843) have suggested that the speed with which native speakers of a gender-marking language retrieve the grammatical gender of a noun from their mental lexicon may depend on the recency of earlier access to that same noun's gender, as the result of a mechanism that is dedicated to facilitate gender-marked anaphoric reference to recently introduced discourse entities. This hypothesis was tested in two picture naming experiments. Recent gender access did not facilitate the production of gender-marked adjective noun phrases (Experiment 1), nor that of gender-marked definite article noun phrases (Experiment 2), even though naming times for the latter utterances were sensitive to the gender of a written distractor word superimposed on the picture to be named. This last result replicates and extends earlier gender-specific picture-word interference results (Schriefers, H. 1993. Journal of Experimental Psychology: Learning, Memory, and Cognition 19 (4), 841–850), showing that one can selectively tap into the production of grammatical gender agreement during speaking. The findings are relevant to theories of speech production and the representation of grammatical gender for that process.
  • Van Wijk, C., & Kempen, G. (1982). Syntactische formuleervaardigheid en het schrijven van opstellen. Pedagogische Studiën, 59, 126-136.

    Abstract

    Several attempts have been made to measure syntactic formulating ability directly and objectively on the basis of spoken or written texts. The starting point was usually the syntactic complexity of the produced utterances. However, this has not led to a plausible, clearly defined, and practically usable index. Following a critical discussion of the notion of complexity, this article proposes a new criterion: the connectivity of the utterances, i.e. the explicit marking of logico-semantic relations between propositions. Connectivity is easy to score on the basis of function words that mark various forms of coordinate and subordinate clause linkage. This new index avoids the criticisms that could be raised against complexity, clearly discriminates between groups of pupils who differ in age and educational level, and connects with recent psycholinguistic and sociolinguistic theory. Finally, some educational implications are outlined.
  • Van Uytvanck, D., Dukers, A., Ringersma, J., & Wittenburg, P. (2007). Using Google Earth to access language resources. Language Archive Newsletter, (9), 4-7.

    Abstract

    Over the past ten years Geographic Information Systems (GIS) have evolved from a highly specialised niche technology to one that is used daily by a wide range of people. This article describes geographic browsing of language archives, which provides intuitive exploration of resources and permits integration and correlation of information from different archives, even across different research disciplines. In order to facilitate both exploration and management of resources, digital language archives are organised according to criteria such as language name, research topic, project information, researchers, countries, or genres. A set of such criteria can form a tree-like classification scheme, such as in the MPI-IMDI archive, which in turn forms the main method of searching and querying the archive resources. Searching for information can be difficult for occasional users because effective use of these search-fields typically requires specialised knowledge. We assume that many non-specialist users of language resources will search by language name, language family, or geographic area, so that geographic navigation would offer a very powerful search method. We also assume that such users are familiar with maps, and that geographic browsing is more intuitive than browsing classification trees, so these users would prefer to start with a large scale map and then zoom in to find the data that interests them. Therefore, classification trees and geographic maps provide complementary methods for accessing language resources to meet the needs of different user groups. We selected Google Earth (GE) as a geographic browsing system and overlaid it with linguistic information. GE was chosen because it is available via the web, it has good navigation controls, it is familiar to many web users, and because the overlaid linguistic information can be formulated in XML, making it comparatively easy to interchange with other geographic systems.
  • Vernes, S. C., Nicod, J., Elahi, F. M., Coventry, J. A., Kenny, N., Coupe, A.-M., Bird, L. E., Davies, K. E., & Fisher, S. E. (2006). Functional genetic analysis of mutations implicated in a human speech and language disorder. Human Molecular Genetics, 15(21), 3154-3167. doi:10.1093/hmg/ddl392.

    Abstract

    Mutations in the FOXP2 gene cause a severe communication disorder involving speech deficits (developmental verbal dyspraxia), accompanied by wide-ranging impairments in expressive and receptive language. The protein encoded by FOXP2 belongs to a divergent subgroup of forkhead-box transcription factors, with a distinctive DNA-binding domain and motifs that mediate hetero- and homodimerization. Here we report the first direct functional genetic investigation of missense and nonsense mutations in FOXP2 using human cell-lines, including a well-established neuronal model system. We focused on three unusual FOXP2 coding variants, uniquely identified in cases of verbal dyspraxia, assessing expression, subcellular localization, DNA-binding and transactivation properties. Analysis of the R553H forkhead-box substitution, found in all affected members of a large three-generation family, indicated that it severely affects FOXP2 function, chiefly by disrupting nuclear localization and DNA-binding properties. The R328X truncation mutation, segregating with speech/language disorder in a second family, yields an unstable, predominantly cytoplasmic product that lacks transactivation capacity. A third coding variant (Q17L) observed in a single affected child did not have any detectable functional effect in the present study. In addition, we used the same systems to explore the properties of different isoforms of FOXP2, resulting from alternative splicing in human brain. Notably, one such isoform, FOXP2.10+, contains dimerization domains, but no DNA-binding domain, and displayed increased cytoplasmic localization, coupled with aggresome formation. We hypothesize that expression of alternative isoforms of FOXP2 may provide mechanisms for post-translational regulation of transcription factor function.