Publications

  • Abma, R., Breeuwsma, G., & Poletiek, F. H. (2001). Toetsen in het onderwijs ('Testing in education'). De Psycholoog, 36, 638-639.
  • Adank, P., & McQueen, J. M. (2007). The effect of an unfamiliar regional accent on spoken-word comprehension. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1925-1928). Dudweiler: Pirrot.

    Abstract

    This study aimed first to determine whether there is a delay associated with processing words in an unfamiliar regional accent compared to words in a familiar regional accent, and second to establish whether short-term exposure to an unfamiliar accent affects the speed and accuracy of comprehension of words spoken in that accent. Listeners performed an animacy decision task for words spoken in their own and in an unfamiliar accent. Next, they were exposed to approximately 20 minutes of speech in one of these two accents. After exposure, they repeated the animacy decision task. Results showed a considerable delay in word processing for the unfamiliar accent, but no effect of short-term exposure.
  • Alibali, M. W., Kita, S., Bigelow, L. J., Wolfman, C. M., & Klein, S. M. (2001). Gesture plays a role in thinking for speaking. In C. Cavé, I. Guaïtella, & S. Santi (Eds.), Oralité et gestualité: Interactions et comportements multimodaux dans la communication. Actes du colloque ORAGE 2001 (pp. 407-410). Paris, France: Éditions L'Harmattan.
  • Allen, S., Ozyurek, A., Kita, S., Brown, A., Furman, R., Ishizuka, T., & Fujii, M. (2007). Language-specific and universal influences in children's syntactic packaging of manner and path: A comparison of English, Japanese, and Turkish. Cognition, 102, 16-48. doi:10.1016/j.cognition.2005.12.006.

    Abstract

    Different languages map semantic elements of spatial relations onto different lexical and syntactic units. These crosslinguistic differences raise important questions for language development in terms of how this variation is learned by children. We investigated how Turkish-, English-, and Japanese-speaking children (mean age 3;8) package the semantic elements of Manner and Path onto syntactic units when both the Manner and the Path of the moving Figure occur simultaneously and are salient in the event depicted. Both universal and language-specific patterns were evident in our data. Children used the semantic-syntactic mappings preferred by adult speakers of their own languages, and even expressed subtle syntactic differences that encode different relations between Manner and Path in the same way as their adult counterparts (i.e., Manner causing vs. incidental to Path). However, not all types of semantics-syntax mappings were easy for children to learn (e.g., expressing Manner and Path elements in two verbal clauses). In such cases, Turkish- and Japanese-speaking children frequently used syntactic patterns that were not typical in the target language but were similar to patterns used by English-speaking children, suggesting some universal influence. Thus, both language-specific and universal tendencies guide the development of complex spatial expressions.
  • Allerhand, M., Butterfield, S., Cutler, A., & Patterson, R. (1992). Assessing syllable strength via an auditory model. In Proceedings of the Institute of Acoustics: Vol. 14 Part 6 (pp. 297-304). St. Albans, Herts: Institute of Acoustics.
  • Ameka, F. K. (2007). The coding of topological relations in verbs: The case of Likpe (Sɛkpɛle). Linguistics, 45(5), 1065-1104. doi:10.1515/LING.2007.032.

    Abstract

    This article examines the grammar, use and meaning of fifteen verbs used in the Basic Locative Construction (BLC) of Likpe — a Ghana-Togo-Mountain language. The verbs fall into four semantic subclasses: (a) basic topological relations: t 'be.at', tk 'be.on', kpé 'be.in', and fi 'be.near'; (b) postural verbs: sí 'sit', ny 'stand', fáka 'hang', yóma 'hang', kps 'lean', fus 'squat', and labe 'lie'; (c) “distribution” verbs: kpó 'be spread, heaped', and tí 'be covered'; and (d) “adhesion” verbs: má 'be gripped, be fixed', mánkla 'be stuck to'. Likpe locative predications reflect an ontological commitment to the overall topological relation between Figure and Ground and are not focused just on the Figure or the Ground. Various factors determine the choice of “competing” verbs for particular scenarios: animacy, nonindividuation of the Figure, permanency of the configuration, and the speaker's desire to be referentially precise or to present stereotypical information. It is demonstrated that in situations where there is a choice, speakers tend to use the more general verbs (stereotype information). The implications of this tendency for the development of a language from a multiverb language using several verbs (e.g., 15) in its BLC to using only a small set of verbs in its BLC, just as some of Likpe's neighbors have done, are considered.
  • Ameka, F. K., & Levinson, S. C. (Eds.). (2007). The typology and semantics of locative predication: Posturals, positionals and other beasts [Special Issue]. Linguistics, 45(5).

    Abstract

    This special issue is devoted to a relatively neglected topic in linguistics, namely the verbal component of locative statements. English tends, of course, to use a simple copula in utterances like “The cup is on the table”, but many languages, perhaps as many as half of the world's languages, have a set of alternate verbs, or alternate verbal affixes, which contrast in this slot. Often these are classificatory verbs of ‘sitting’, ‘standing’ and ‘lying’. For this reason, perhaps, Aristotle listed position among his basic (“noncomposite”) categories.
  • Ameka, F. K., & Essegbey, J. (2007). Cut and break verbs in Ewe and the causative alternation construction. Cognitive Linguistics, 18(2), 241-250. doi:10.1515/COG.2007.011.

    Abstract

    Ewe verbs covering the cutting and breaking domain divide into four morpho-syntactic classes that can be ranked according to agentivity. We demonstrate that the highly non-agentive break verbs participate in the causative-inchoative alternation while the highly agentive cut verbs do not, as expected from Guerssel et al.'s (1985) hypothesis. However, four verbs tso 'cut with precision', 'cut', 'snap-off', and dze 'split', are used transitively when an instrument is required for the severance to be effected, and intransitively when not. We reject a lexicalist analysis that would postulate polysemy for these verbs and argue for a construction approach.
  • Ameka, F. K., & Levinson, S. C. (2007). Introduction: The typology and semantics of locative predicates: Posturals, positionals and other beasts. Linguistics, 45(5), 847-872. doi:10.1515/LING.2007.025.

    Abstract

    This special issue is devoted to a relatively neglected topic in linguistics, namely the verbal component of locative statements. English tends, of course, to use a simple copula in utterances like “The cup is on the table”, but many languages, perhaps as many as half of the world's languages, have a set of alternate verbs, or alternate verbal affixes, which contrast in this slot. Often these are classificatory verbs of 'sitting', 'standing' and 'lying'. For this reason, perhaps, Aristotle listed position among his basic (“noncomposite”) categories.
  • Ameka, F. K. (1992). Interjections: The universal yet neglected part of speech. Journal of Pragmatics, 18(2/3), 101-118. doi:10.1016/0378-2166(92)90048-G.
  • Ameka, F. K., & Dorvlo, K. (2007). The Ewe language. Verba Africana series - Video documentation and Digital Materials, 1.
  • Ameka, F. K. (1992). The meaning of phatic and conative interjections. Journal of Pragmatics, 18(2/3), 245-271. doi:10.1016/0378-2166(92)90054-F.

    Abstract

    The purpose of this paper is to investigate the meanings of the members of two subclasses of interjections in Ewe: the conative/volitive, which are directed at an auditor, and the phatic, which are used in the maintenance of social and communicative contact. It is demonstrated that interjections, like other linguistic signs, have meanings which can be rigorously stated. In addition, the paper explores the differences and similarities between the semantic structures of interjections on the one hand and formulaic words on the other. This is done through a comparison of the semantics and pragmatics of an interjection and a formulaic word which are used for welcoming people in Ewe. It is contended that formulaic words are speech acts qua speech acts, while interjections are not fully fledged speech acts because they lack an illocutionary dictum in their semantic structure.
  • Andics, A., McQueen, J. M., & Van Turennout, M. (2007). Phonetic content influences voice discriminability. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1829-1832). Dudweiler: Pirrot.

    Abstract

    We present results from an experiment which shows that voice perception is influenced by the phonetic content of speech. Dutch listeners were presented with thirteen speakers pronouncing CVC words with systematically varying segmental content, and they had to discriminate the speakers’ voices. Results show that certain segments help listeners discriminate voices more than other segments do. Voice information can be extracted from every segmental position of a monosyllabic word and is processed rapidly. We also show that although relative discriminability within a closed set of voices appears to be a stable property of a voice, it is also influenced by segmental cues – that is, perceived uniqueness of a voice depends on what that voice says.
  • Baayen, H., Levelt, W. J. M., Schreuder, R., & Ernestus, M. (2007). Paradigmatic structure in speech production. Proceedings from the Annual Meeting of the Chicago Linguistic Society, 43(1), 1-29.

    Abstract

    The main goal of the present study is to trace the consequences of local and global markedness for the processing of singular and plural nouns. Decompositional models such as those proposed by Pinker (1997, 1999) and Levelt et al. (1999) predict a lexeme frequency effect and no effects of the frequencies of the singular and the plural forms. Experiments 1 and 4 reveal the expected lexeme frequency effect. Furthermore, in these experiments there are no clear independent effects of the frequencies of the inflected forms. However, the effects of Entropy and Relative Entropy that emerge from these experiments show that, in production, knowledge of the probabilities of the individual inflected forms does play a role, albeit indirectly. These entropy effects bear witness to the importance of paradigmatic organization of inflected forms in the mental lexicon, both at the level of individual lexemes (Entropy) and at the general level of the class of nouns (Relative Entropy).
  • Bastiaansen, M. C. M., Böcker, K. B. E., Brunia, C. H. M., De Munck, J. C., & Spekreijse, H. (2001). Desynchronization during anticipatory attention for an upcoming stimulus: A comparative EEG/MEG study. Clinical Neurophysiology, 112, 393-403.

    Abstract

    Objectives: Our neurophysiological model of anticipatory behaviour (e.g. Acta Psychol 101 (1999) 213; Bastiaansen et al., 1999a) predicts an activation of (primary) sensory cortex during anticipatory attention for an upcoming stimulus. In this paper we attempt to demonstrate this by means of event-related desynchronization (ERD). Methods: Five subjects performed a time estimation task, and were informed about the quality of their time estimation by either visual or auditory stimuli providing Knowledge of Results (KR). EEG and MEG were recorded in separate sessions, and ERD was computed in the 8–10 and 10–12 Hz frequency bands for both datasets. Results: Both in the EEG and the MEG we found an occipitally maximal ERD preceding the visual KR for all subjects. Preceding the auditory KR, no ERD was present in the EEG, whereas in the MEG we found an ERD over the temporal cortex in two of the five subjects. These subjects were also found to have higher levels of absolute power over temporal recording sites in the MEG than the other subjects, which we consider to be an indication of the presence of a 'tau' rhythm (e.g. Neurosci Lett 222 (1997) 111). Conclusions: It is concluded that the results are in line with the predictions of our neurophysiological model.
  • Bastiaansen, M. C. M., & Brunia, C. H. M. (2001). Anticipatory attention: An event-related desynchronization approach. International Journal of Psychophysiology, 43, 91-107.

    Abstract

    This paper addresses the question of whether anticipatory attention - i.e. attention directed towards an upcoming stimulus in order to facilitate its processing - is realized at the neurophysiological level by a pre-stimulus desynchronization of the sensory cortex corresponding to the modality of the anticipated stimulus, reflecting the opening of a thalamocortical gate in the relevant sensory modality. It is argued that a technique called Event-Related Desynchronization (ERD) of rhythmic 10-Hz activity is well suited to study the thalamocortical processes that are thought to mediate anticipatory attention. In a series of experiments, ERD was computed on EEG and MEG data, recorded while subjects performed a time estimation task and were informed about the quality of their time estimation by stimuli providing Knowledge of Results (KR). The modality of the KR stimuli (auditory, visual, or somatosensory) was manipulated both within and between experiments. The results indicate to varying degrees that preceding the presentation of the KR stimuli, ERD is present over the sensory cortex, which corresponds to the modality of the KR stimulus. The general pattern of results supports the notion that a thalamocortical gating mechanism forms the neurophysiological basis of anticipatory attention. Furthermore, the results support the notion that Event-Related Potential (ERP) and ERD measures reflect fundamentally different neurophysiological processes.
  • Bauer, B. L. M. (1992). Du latin au français: Le passage d'une langue SOV à une langue SVO ('From Latin to French: The shift from an SOV language to an SVO language'). PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bauer, B. L. M. (2007). Report on the XVIth International Conference on Historical Linguistics. General Linguistics, 43, 145-149.
  • Belke, E., & Meyer, A. S. (2007). Single and multiple object naming in healthy ageing. Language and Cognitive Processes, 22, 1178-1211. doi:10.1080/01690960701461541.

    Abstract

    We compared the performance of young (college-aged) and older (50+ years) speakers in a single object and a multiple object naming task and assessed their susceptibility to semantic and phonological context effects when producing words amidst semantically or phonologically similar or dissimilar words. In single object naming, there were no performance differences between the age groups. In multiple object naming, we observed significant age-related slowing, expressed in longer gazes to the objects and slower speech. In addition, the direction of the phonological context effects differed for the two groups. The results of a supplementary experiment showed that young speakers, when adopting a slow speech rate, coordinated their eye movements and speech differently from the older speakers. Our results imply that age-related slowing in connected speech is not a direct consequence of a slowing of lexical retrieval processes. Instead, older speakers might allocate more processing capacity to speech monitoring processes, which would slow down their concurrent speech planning processes.
  • Bien, H. (2007). On the production of morphologically complex words with special attention to effects of frequency. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bock, K., Eberhard, K. M., Cutting, J. C., Meyer, A. S., & Schriefers, H. (2001). Some attractions of verb agreement. Cognitive Psychology, 43(2), 83-128. doi:10.1006/cogp.2001.0753.

    Abstract

    In English, words like scissors are grammatically plural but conceptually singular, while words like suds are both grammatically and conceptually plural. Words like army can be construed plurally, despite being grammatically singular. To explore whether and how congruence between grammatical and conceptual number affected the production of subject-verb number agreement in English, we elicited sentence completions for complex subject noun phrases like The advertisement for the scissors. In these phrases, singular subject nouns were followed by distractor words whose grammatical and conceptual numbers varied. The incidence of plural attraction (the use of plural verbs after plural distractors) increased only when distractors were grammatically plural, and revealed no influence from the distractors' number meanings. Companion experiments in Dutch offered converging support for this account and suggested that similar agreement processes operate in that language. The findings argue for a component of agreement that is sensitive primarily to the grammatical reflections of number. Together with other results, the evidence indicates that the implementation of agreement in languages like English and Dutch involves separable processes of number marking and number morphing, in which number meaning plays different parts.

  • Bohnemeyer, J., & Brown, P. (2007). Standing divided: Dispositional verbs and locative predications in two Mayan languages. Linguistics, 45(5), 1105-1151. doi:10.1515/LING.2007.033.

    Abstract

    The Mayan languages Tzeltal and Yucatec have large form classes of “dispositional” roots which lexicalize spatial properties such as orientation, support/suspension/blockage of motion, and configurations of parts of an entity with respect to other parts. But speakers of the two languages deploy this common lexical resource quite differently. The roots are used in both languages to convey dispositional information (e.g., answering “how” questions), but Tzeltal speakers also use them in canonical locative descriptions (e.g., answering “where” questions), whereas Yucatec speakers only use dispositionals in locative predications when prompted by the context to focus on dispositional properties. We describe the constructions used in locative and dispositional descriptions in response to two different picture stimuli sets. Evidence against the proposal that Tzeltal uses dispositionals to compensate for its single, semantically generic preposition (Brown 1994; Grinevald 2006) comes from the finding that Tzeltal speakers use relational spatial nominals in the “Ground phrase” — the expression of the place at which an entity is located — about as frequently as Yucatec speakers. We consider several alternative hypotheses, including a possible larger typological difference that leads Tzeltal speakers, but not Yucatec speakers, to prefer “theme-specific” verbs not just in locative predications, but in any predication involving a theme argument.
  • Bohnemeyer, J., Enfield, N. J., Essegbey, J., Ibarretxe-Antuñano, I., Kita, S., Lüpke, F., & Ameka, F. K. (2007). Principles of event segmentation in language: The case of motion events. Language, 83(3), 495-532. doi:10.1353/lan.2007.0116.

    Abstract

    We examine universals and crosslinguistic variation in constraints on event segmentation. Previous typological studies have focused on segmentation into syntactic (Pawley 1987) or intonational units (Givón 1991). We argue that the correlation between such units and semantic/conceptual event representations is language-specific. As an alternative, we introduce the MACRO-EVENT PROPERTY (MEP): a construction has the MEP if it packages event representations such that temporal operators necessarily have scope over all subevents. A case study on the segmentation of motion events into macro-event expressions in eighteen genetically and typologically diverse languages has produced evidence of two types of design principles that impact motion-event segmentation: language-specific lexicalization patterns and universal constraints on form-to-meaning mapping.
  • Bowerman, M. (1976). Commentary on M.D.S. Braine, “Children's first word combinations”. Monographs of the Society for Research in Child Development, 41(1), 98-104. Retrieved from http://www.jstor.org/stable/1165959.
  • Bowerman, M. (1973). [Review of Lois Bloom, Language development: Form and function in emerging grammars (MIT Press 1970)]. American Scientist, 61(3), 369-370.
  • Bramão, I., Mendonça, A., Faísca, L., Ingvar, M., Petersson, K. M., & Reis, A. (2007). The impact of reading and writing skills on a visuo-motor integration task: A comparison between illiterate and literate subjects. Journal of the International Neuropsychological Society, 13(2), 359-364. doi:10.1017/S1355617707070440.

    Abstract

    Previous studies have shown a significant association between reading skills and the performance on visuo-motor tasks. In order to clarify whether reading and writing skills modulate non-linguistic domains, we investigated the performance of two literacy groups on a visuo-motor integration task with non-linguistic stimuli. Twenty-one illiterate participants and twenty matched literate controls were included in the experiment. Subjects were instructed to use the right or the left index finger to point to and touch a randomly presented target on the right or left side of a touch screen. The results showed that the literate subjects were significantly faster in detecting and touching targets on the left compared to the right side of the screen. In contrast, the presentation side did not affect the performance of the illiterate group. These results lend support to the idea that having acquired reading and writing skills, and thus a preferred left-to-right reading direction, influences visual scanning.
  • Braun, B. (2007). Effects of dialect and context on the realisation of German prenuclear accents. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 961-964). Dudweiler: Pirrot.

    Abstract

    We investigated whether alignment differences reported for Southern and Northern German speakers (Southerners align peaks in prenuclear accents later than Northerners) are carried over to the production of different functional categories such as contrast. To this end, the realisation of non-contrastive theme accents is compared with those in contrastive theme-rheme pairs such as ‘Sam rented a truck and Johanna rented a car.’
    We found that when producing this ‘double-contrast’, speakers mark contrast both phonetically by delaying and raising the peak of the theme accent (‘Johanna’) and/or phonologically by a change in rheme accent type (from high to falling ‘car’).
    The effect of dialect is complex: a) only in non-contrastive contexts produced with a high rheme accent do Southerners align peaks later than Northerners; b) peak delay as a means to signal functional contrast is not used uniformly by the two varieties. Dialect clearly affects the realisation of prenuclear accents, but its effect is conditioned by the pragmatic and intonational context.
  • De Bree, E., Janse, E., & Van de Zande, A. M. (2007). Stress assignment in aphasia: Word and non-word reading and non-word repetition. Brain and Language, 103, 264-275. doi:10.1016/j.bandl.2007.07.003.

    Abstract

    This paper investigates stress assignment in Dutch aphasic patients in non-word repetition, as well as in real-word and non-word reading. Performance on the non-word reading task was similar for the aphasic patients and the control group, as mainly regular stress was assigned to the targets. However, there were group differences on the real-word reading and non-word repetition tasks. Unlike the non-brain-damaged group, the patients showed a strong regularization tendency in their repetition of irregular patterns. The patients’ stress error patterns suggest an impairment in retention or retrieval of targets with irregular stress patterns. Limited verbal short-term memory is proposed as a possible underlying cause for the stress difficulties.
  • Broersma, M. (2007). Why the 'president' does not excite the 'press': The limits of spurious lexical activation in L2 listening. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1909-1912). Dudweiler: Pirrot.

    Abstract

    Two Cross-Modal Priming experiments assessed lexical activation of unintended words for nonnative (Dutch) and English native listeners. Stimuli mismatched words in final voicing, which in earlier studies caused spurious lexical activation for Dutch listeners. The stimuli were embedded in or cut out of a carrier (PRESident). The presence of a longer lexical competitor in the signal or as a possible continuation of it prevented spurious lexical activation of mismatching words (press).
  • Broersma, M., & De Bot, K. (2001). De triggertheorie voor codewisseling: De oorspronkelijke en een aangepaste versie (‘The trigger theory for codeswitching: The original and an adjusted version’). Toegepaste Taalwetenschap in Artikelen, 65(1), 41-54.
  • Broersma, M., & Van de Ven, M. (2007). More flexible use of perceptual cues in nonnative than in native listening: Preceding vowel duration as a cue for final /v/-/f/. In Proceedings of the Fifth International Symposium on the Acquisition of Second Language Speech (New Sounds 2007).

    Abstract

    Three 2AFC experiments investigated Dutch and English listeners’ use of preceding vowel duration for the English final /v/-/f/ contrast. Dutch listeners used vowel duration more flexibly than English listeners did: they could use vowel duration as accurately as native listeners, but were better at ignoring it when it was misleading.
  • Broersma, M. (2007). Kettle hinders cat, shadow does not hinder shed: Activation of 'almost embedded' words in nonnative listening. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1893-1896). Adelaide: Causal Productions.

    Abstract

    A Cross-Modal Priming experiment investigated Dutch listeners’ perception of English words. Target words were embedded in a carrier word (e.g., cat in catalogue) or ‘almost embedded’ in a carrier word except for a mismatch in the perceptually difficult /æ/-/ε/ contrast (e.g., cat in kettle). Previous results showed a bias towards perception of /ε/ over /æ/. The present study shows that presentation of carrier words either containing an /æ/ or an /ε/ led to long lasting inhibition of embedded or ‘almost embedded’ words with an /æ/, but not of words with an /ε/. Thus, both catalogue and kettle hindered recognition of cat, whereas neither schedule nor shadow hindered recognition of shed.
  • Brown, P., & Levinson, S. C. (1992). 'Left' and 'right' in Tenejapa: Investigating a linguistic and conceptual gap. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 45(6), 590-611.

    Abstract

    From the perspective of a Kantian belief in the fundamental human tendency to cleave space along the three planes of the human body, Tenejapan Tzeltal exhibits a linguistic gap: there are no linguistic expressions that designate regions (as in English to my left) or describe the visual field (as in to the left of the tree) on the basis of a plane bisecting the body into a left and right side. Tenejapans have expressions for left and right hands (xin k'ab and wa'el k'ab), but these are basically body-part terms; they are not generalized to form a division of space. This paper describes the results of various elicited production tasks in which concepts of left and right would provide a simple solution, showing that Tenejapan consultants use other notions even when the relevant linguistic distinctions could be made in Tzeltal (e.g. describing the position of one's limbs, or describing rotation of one's body). Instead of using the left-hand/right-hand distinction to construct a division of space, Tenejapans utilize a number of other systems: (i) an absolute, 'cardinal direction' system, supplemented by reference to other geographic or landmark directions, (ii) a generative segmentation of objects and places into analogic body-parts or other kinds of parts, and (iii) a rich system of positional adjectives to describe the exact disposition of things. These systems work conjointly to specify locations with precision and elegance. The overall system is not primarily egocentric, and it makes no essential reference to planes through the human body.
  • Brown, A. (2007). Crosslinguistic influence in first and second languages: Convergence in speech and gesture. PhD Thesis, Boston University, Boston.

    Abstract

    Research on second language acquisition typically focuses on how a first language (L1) influences a second language (L2) in different linguistic domains and across modalities. This dissertation, in contrast, explores interactions between languages in the mind of a language learner by asking 1) can an emerging L2 influence an established L1? 2) if so, how is such influence realized? 3) are there parallel influences of the L1 on the L2? These questions were investigated for the expression of Manner (e.g. climb, roll) and Path (e.g. up, down) of motion, areas where substantial crosslinguistic differences exist in speech and co-speech gesture. Japanese and English are typologically distinct in this domain; therefore, narrative descriptions of four motion events were elicited from monolingual Japanese speakers (n=16), monolingual English speakers (n=13), and native Japanese speakers with intermediate knowledge of English (narratives elicited in both their L1 and L2, n=28). Ways in which Path and Manner were expressed at the lexical, syntactic, and gestural levels were analyzed in monolingual and non-monolingual production. Results suggest mutual crosslinguistic influences. In their L1, native Japanese speakers with knowledge of English displayed both Japanese- and English-like use of morphosyntactic elements to express Path and Manner (i.e. a combination of verbs and other constructions). Consequently, non-monolingual L1 discourse contained significantly more Path expressions per clause, with significantly greater mention of Goal of motion than monolingual Japanese and English discourse. Furthermore, the gestures of non-monolingual speakers diverged from their monolingual counterparts with differences in depiction of Manner and gesture perspective (character versus observer). Importantly, non-monolingual production in the L1 was not ungrammatical, but simply reflected altered preferences. As for L2 production, many effects of L1 influence were seen, crucially in areas parallel to those described above. Overall, production by native Japanese speakers who knew English differed from that of monolingual Japanese and English speakers. But L1 and L2 production within non-monolingual individuals was similar. These findings imply a convergence of L1-L2 linguistic systems within the mind of a language learner. Theoretical and methodological implications for SLA research and language assessment with respect to the 'native speaker standard language' are discussed.
  • Brown, P. (2007). 'She had just cut/broken off her head': Cutting and breaking verbs in Tzeltal. Cognitive Linguistics, 18(2), 319-330. doi:10.1515/COG.2007.019.

    Abstract

    This paper describes the lexical resources for expressing events of cutting and breaking (C&B hereafter) in the Mayan language Tzeltal. This notional set of verbs is not a class in any grammatical sense; C&B verbs are formally indistinguishable from many other transitive state-change verbs. But they nicely reveal the characteristic specificity of Tzeltal verb semantics: C&B actions are finely differentiated according to the spatial and textural properties of the theme object, with no superordinate term meaning either 'cut in general' or 'break in general'. The paper characterizes the semantics of these verbs and shows that in the great majority of cases it does not predict their argument structure.
  • Brown, P. (1976). Women and politeness: A new perspective on language and society. Reviews in Anthropology, 3, 240-249.
  • Butterfield, S., & Cutler, A. (1988). Segmentation errors by human listeners: Evidence for a prosodic segmentation strategy. In W. Ainsworth, & J. Holmes (Eds.), Proceedings of SPEECH ’88: Seventh Symposium of the Federation of Acoustic Societies of Europe: Vol. 3 (pp. 827-833). Edinburgh: Institute of Acoustics.
  • Cablitz, G., Ringersma, J., & Kemps-Snijders, M. (2007). Visualizing endangered indigenous languages of French Polynesia with LEXUS. In Proceedings of the 11th International Conference Information Visualization (IV07) (pp. 409-414). IEEE Computer Society.

    Abstract

    This paper reports on the first results of the DOBES project ‘Towards a multimedia dictionary of the Marquesan and Tuamotuan languages of French Polynesia’. Within the framework of this project we are building a digital multimedia encyclopedic lexicon of the endangered Marquesan and Tuamotuan languages using a new tool, LEXUS. LEXUS is a web-based lexicon tool, targeted at linguists involved in language documentation. LEXUS offers the possibility to visualize language. It provides functionalities to include audio, video and still images to the lexical entries of the dictionary, as well as relational linking for the creation of a semantic network knowledge base. Further activities aim at the development of (1) an improved user interface in close cooperation with the speech community and (2) a collaborative workspace functionality which will allow the speech community to actively participate in the creation of lexica.
  • Cameron-Faulkner, T., & Kidd, E. (2007). I'm are what I'm are: The acquisition of first-person singular present BE. Cognitive Linguistics, 18(1), 1-22. doi:10.1515/COG.2007.001.

    Abstract

    The present study investigates the development of am in the speech of one English-speaking child, Scarlett (aged 4;6–5;6). We show that am is infrequent in the speech addressed to children; the acquisition of this form of BE presents a unique insight into the processes underlying language development because children have little evidence regarding its correct use. Scarlett produced a pervasive error where she overextended are to first-person singular contexts where am was required (e.g., I'm are trying, When are I'm finished?). Am gradually emerged in her speech on what appears to be a construction-specific basis. The findings of the study are used in support of a usage-based, constructivist approach to language development.
  • Chen, A., Den Os, E., & De Ruiter, J. P. (2007). Pitch accent type matters for online processing of information status: Evidence from natural and synthetic speech. The Linguistic Review, 24(2), 317-344. doi:10.1515/TLR.2007.012.

    Abstract

    Adopting an eyetracking paradigm, we investigated the role of H*L, L*HL, L*H, H*LH, and deaccentuation at the intonational phrase-final position in online processing of information status in British English in natural speech. The role of H*L, L*H and deaccentuation was also examined in diphone-synthetic speech. It was found that H*L and L*HL create a strong bias towards newness, whereas L*H, like deaccentuation, creates a strong bias towards givenness. In synthetic speech, the same effect was found for H*L, L*H and deaccentuation, but it was delayed. The delay may not be caused entirely by the difference in the segmental quality between synthetic and natural speech. The pitch accent H*LH, however, appears to bias participants' interpretation to the target word, independent of its information status. This finding was explained in the light of the effect of durational information at the segmental level on word recognition.
  • Chen, X. S., Rozhdestvensky, T. S., Collins, L. J., Schmitz, J., & Penny, D. (2007). Combined experimental and computational approach to identify non-protein-coding RNAs in the deep-branching eukaryote Giardia intestinalis. Nucleic Acids Research, 35, 4619-4628. doi:10.1093/nar/gkm474.

    Abstract

    Non-protein-coding RNAs represent a large proportion of transcribed sequences in eukaryotes. These RNAs often function in large RNA–protein complexes, which are catalysts in various RNA-processing pathways. As RNA processing has become an increasingly important area of research, numerous non-messenger RNAs have been uncovered in all the model eukaryotic organisms. However, knowledge on RNA processing in deep-branching eukaryotes is still limited. This study focuses on the identification of non-protein-coding RNAs from the diplomonad parasite Giardia intestinalis, showing that a combined experimental and computational search strategy is a fast method of screening reduced or compact genomes. The analysis of our Giardia cDNA library has uncovered 31 novel candidates, including C/D-box and H/ACA box snoRNAs, as well as an unusual transcript of RNase P, and double-stranded RNAs. Subsequent computational analysis has revealed additional putative C/D-box snoRNAs. Our results will lead towards a future understanding of RNA metabolism in the deep-branching eukaryote Giardia, as more ncRNAs are characterized.
  • Chen, J. (2007). 'He cut-break the rope': Encoding and categorizing cutting and breaking events in Mandarin. Cognitive Linguistics, 18(2), 273-285. doi:10.1515/COG.2007.015.

    Abstract

    Mandarin categorizes cutting and breaking events on the basis of fine semantic distinctions in the causal action and the caused result. I demonstrate the semantics of Mandarin C&B verbs from the perspective of event encoding and categorization as well as argument structure alternations. Three semantically different types of predicates can be identified: verbs denoting the C&B action subevent, verbs encoding the C&B result subevent, and resultative verb compounds (RVC) that encode both the action and the result subevents. The first verb of an RVC is basically dyadic, whereas the second is monadic. RVCs as a whole are also basically dyadic, and do not undergo detransitivization.
  • Chen, A., & Fikkert, P. (2007). Intonation of early two-word utterances in Dutch. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 315-320). Dudweiler: Pirrot.

    Abstract

    We analysed intonation contours of two-word utterances from three monolingual Dutch children aged between 1;4 and 2;1 in the autosegmental-metrical framework. Our data show that children have mastered the inventory of the boundary tones and nuclear pitch accent types (except for L*HL and L*!HL) at the 160-word level, and the set of non-downstepped pre-nuclear pitch accents (except for L*) at the 230-word level, contra previous claims on the mastery of adult-like intonation contours before or at the onset of first words. Further, there is evidence that intonational development is correlated with an increase in vocabulary size. Moreover, we found that children show a preference for falling contours, as predicted on the basis of universal production mechanisms. In addition, the utterances are mostly spoken with both words accented, independent of semantic relations expressed and information status of each word across developmental stages, contra prior work. Our study suggests a number of topics for further research.
  • Chen, A. (2007). Intonational realisation of topic and focus by Dutch-acquiring 4- to 5-year-olds. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1553-1556). Dudweiler: Pirrot.

    Abstract

    This study examined how Dutch-acquiring 4- to 5-year-olds use different pitch accent types and deaccentuation to mark topic and focus at the sentence level and how they differ from adults. The topic and focus were non-contrastive and realised as full noun phrases. It was found that children realise topic and focus similarly frequently with H*L, whereas adults use H*L noticeably more frequently in focus than in topic in sentence-initial position and nearly only in focus in sentence-final position. Further, children frequently realise the topic with an accent, whereas adults mostly deaccent the sentence-final topic and use H*L and H* to realise the sentence-initial topic because of rhythmic motivation. These results show that 4- and 5-year-olds have not acquired H*L as the typical focus accent and deaccentuation as the typical topic intonation yet. Possibly, frequent use of H*L in sentence-initial topic in adult Dutch has made it difficult to extract the functions of H*L and deaccentuation from the input.
  • Chen, A., Rietveld, T., & Gussenhoven, C. (2001). Language-specific effects of pitch range on the perception of universal intonational meaning. In P. Dalsgaard, B. Lindberg, & H. Benner (Eds.), Proceedings of the 7th European Conference on Speech Communication and Technology, II (pp. 1403-1406). Aalborg: University of Aalborg.

    Abstract

    Two groups of listeners, with Dutch and British English as their native language, judged stimuli in Dutch and British English, respectively, on the scales CONFIDENT vs. NOT CONFIDENT and FRIENDLY vs. NOT FRIENDLY, two meanings derived from Ohala's universal Frequency Code. The stimuli, which were lexically equivalent, were varied in pitch contour and pitch range. In both languages, the perceived degree of confidence decreases and that of friendliness increases when the pitch range is raised, as predicted by the Frequency Code. However, at identical pitch ranges, British English is perceived as more confident and more friendly than Dutch. We argue that this difference in degree of the use of the Frequency Code is due to the difference in the standard pitch ranges of Dutch and British English.
  • Cho, T., McQueen, J. M., & Cox, E. A. (2007). Prosodically driven phonetic detail in speech processing: The case of domain-initial strengthening in English. Journal of Phonetics, 35(2), 210-243. doi:10.1016/j.wocn.2006.03.003.

    Abstract

    We explore the role of the acoustic consequences of domain-initial strengthening in spoken-word recognition. In two cross-modal identity-priming experiments, listeners heard sentences and made lexical decisions to visual targets, presented at the onset of the second word in two-word sequences containing lexical ambiguities (e.g., bus tickets, with the competitor bust). These sequences contained Intonational Phrase (IP) or Prosodic Word (Wd) boundaries, and the second word's initial Consonant and Vowel (CV, e.g., [tI]) was spliced from another token of the sequence in IP- or Wd-initial position. Acoustic analyses showed that IP-initial consonants were articulated more strongly than Wd-initial consonants. In Experiment 1, related targets were post-boundary words (e.g., tickets). No strengthening effect was observed (i.e., identity priming effects did not vary across splicing conditions). In Experiment 2, related targets were pre-boundary words (e.g., bus). There was a strengthening effect (stronger priming when the post-boundary CVs were spliced from IP-initial than from Wd-initial position), but only in Wd-boundary contexts. These were the conditions where phonetic detail associated with domain-initial strengthening could assist listeners most in lexical disambiguation. We discuss how speakers may strengthen domain-initial segments during production and how listeners may use the resulting acoustic correlates of prosodic strengthening during word recognition.
  • Christoffels, I. K., Formisano, E., & Schiller, N. O. (2007). The neural correlates of verbal feedback processing: An fMRI study employing overt speech. Human Brain Mapping, 28(9), 868-879. doi:10.1002/hbm.20315.

    Abstract

    Speakers use external auditory feedback to monitor their own speech. Feedback distortion has been found to increase activity in the superior temporal areas. Using fMRI, the present study investigates the neural correlates of processing verbal feedback without distortion. In a blocked design, the following conditions were presented: (1) overt picture-naming, (2) overt picture-naming while pink noise was presented to mask external feedback, (3) covert picture-naming, (4) listening to the picture names (previously recorded from participants' own voices), and (5) listening to pink noise. The results show that auditory feedback processing involves a network of different areas related to general performance monitoring and speech-motor control. These include the cingulate cortex and the bilateral insula, supplementary motor area, bilateral motor areas, cerebellum, thalamus and basal ganglia. Our findings suggest that the anterior cingulate cortex, which is often implicated in error-processing and conflict-monitoring, is also engaged in ongoing speech monitoring. Furthermore, in the superior temporal gyrus, we found a reduced response to speaking under normal feedback conditions. This finding is interpreted in the framework of a forward model according to which, during speech production, the sensory consequence of the speech-motor act is predicted to attenuate the sensitivity of the auditory cortex.
  • Christoffels, I. K., Firk, C., & Schiller, N. O. (2007). Bilingual language control: An event-related brain potential study. Brain Research, 1147, 192-208. doi:10.1016/j.brainres.2007.01.137.

    Abstract

    This study addressed how bilingual speakers switch between their first and second language when speaking. Event-related brain potentials (ERPs) and naming latencies were measured while unbalanced German (L1)-Dutch (L2) speakers performed a picture-naming task. Participants named pictures either in their L1 or in their L2 (blocked language conditions), or participants switched between their first and second language unpredictably (mixed language condition). Furthermore, form similarity between translation equivalents (cognate status) was manipulated. A cognate facilitation effect was found for L1 and L2 indicating phonological activation of the non-response language in blocked and mixed language conditions. The ERP data also revealed small but reliable effects of cognate status. Language switching resulted in equal switching costs for both languages and was associated with a modulation in the ERP waveforms (time windows 275-375 ms and 375-475 ms). Mixed language context affected especially the L1, both in ERPs and in latencies, which became slower in L1 than L2. It is suggested that sustained and transient components of language control should be distinguished. Results are discussed in relation to current theories of bilingual language processing.
  • Clahsen, H., Eisenbeiss, S., Hadler, M., & Sonnenstuhl, I. (2001). The mental representation of inflected words: An experimental study of adjectives and verbs in German. Language, 77(3), 510-534. doi:10.1353/lan.2001.0140.

    Abstract

    The authors investigate how morphological relationships between inflected word forms are represented in the mental lexicon, focusing on paradigmatic relations between regularly inflected word forms and relationships between different stem forms of the same lexeme. We present results from a series of psycholinguistic experiments investigating German adjectives (which are inflected for case, number, and gender) and the so-called strong verbs of German, which have different stem forms when inflected for person, number, tense, or mood. Evidence from three lexical-decision experiments indicates that regular affixes are stripped off from their stems for processing purposes. It will be shown that this holds for both unmarked and marked stem forms. Another set of experiments revealed priming effects between different paradigmatically related affixes and between different stem forms of the same lexeme. We will show that associative models of inflection do not capture these findings, and we explain our results in terms of combinatorial models of inflection in which regular affixes are represented in inflectional paradigms and stem variants are represented in structured lexical entries. We will also argue that the morphosyntactic features of stems and affixes form abstract underspecified entries. The experimental results indicate that the human language processor makes use of these representations.

  • Cutler, A. (1992). Cross-linguistic differences in speech segmentation. MRC News, 56, 8-9.
  • Cutler, A., & Norris, D. (1992). Detection of vowels and consonants with minimal acoustic variation. Speech Communication, 11, 101-108. doi:10.1016/0167-6393(92)90004-Q.

    Abstract

    Previous research has shown that, in a phoneme detection task, vowels produce longer reaction times than consonants, suggesting that they are harder to perceive. One possible explanation for this difference is based upon their respective acoustic/articulatory characteristics. Another way of accounting for the findings would be to relate them to the differential functioning of vowels and consonants in the syllabic structure of words. In this experiment, we examined the second possibility. Targets were two pairs of phonemes, each containing a vowel and a consonant with similar phonetic characteristics. Subjects heard lists of English words and had to press a response key upon detecting the occurrence of a pre-specified target. This time, the phonemes which functioned as vowels in syllabic structure yielded shorter reaction times than those which functioned as consonants. This rules out an explanation of the response time difference between vowels and consonants in terms of function in syllable structure. Instead, we propose that consonantal and vocalic segments differ with respect to variability of tokens, both in the acoustic realisation of targets and in the representation of targets by listeners.
  • Cutler, A., Wales, R., Cooper, N., & Janssen, J. (2007). Dutch listeners' use of suprasegmental cues to English stress. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 1913-1916). Dudweiler: Pirrot.

    Abstract

    Dutch listeners outperform native listeners in identifying syllable stress in English. This is because lexical stress is more useful in recognition of spoken words of Dutch than of English, so that Dutch listeners pay greater attention to stress in general. We examined Dutch listeners’ use of the acoustic correlates of English stress. Primary- and secondary-stressed syllables differ significantly on acoustic measures, and some differences, in F0 especially, correlate with data of earlier listening experiments. The correlations found in the Dutch responses were not paralleled in data from native listeners. Thus the acoustic cues which distinguish English primary versus secondary stress are better exploited by Dutch than by native listeners.
  • Cutler, A., & Weber, A. (2007). Listening experience and phonetic-to-lexical mapping in L2. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhS 2007) (pp. 43-48). Dudweiler: Pirrot.

    Abstract

    In contrast to initial L1 vocabularies, which of necessity depend largely on heard exemplars, L2 vocabulary construction can draw on a variety of knowledge sources. This can lead to richer stored knowledge about the phonology of the L2 than the listener's prelexical phonetic processing capacity can support, and thus to mismatch between the level of detail required for accurate lexical mapping and the level of detail delivered by the prelexical processor. Experiments on spoken word recognition in L2 have shown that phonetic contrasts which are not reliably perceived are represented in the lexicon nonetheless. This lexical representation of contrast must be based on abstract knowledge, not on veridical representation of heard exemplars. New experiments confirm that provision of abstract knowledge (in the form of spelling) can induce lexical representation of a contrast which is not reliably perceived; but also that experience (in the form of frequency of occurrence) modulates the mismatch of phonetic and lexical processing. We conclude that a correct account of word recognition in L2 (as indeed in L1) requires consideration of both abstract and episodic information.
  • Cutler, A., Cooke, M., Garcia-Lecumberri, M. L., & Pasveer, D. (2007). L2 consonant identification in noise: Cross-language comparisons. In H. van Hamme, & R. van Son (Eds.), Proceedings of Interspeech 2007 (pp. 1585-1588). Adelaide: Causal Productions.

    Abstract

    The difficulty of listening to speech in noise is exacerbated when the speech is in the listener’s L2 rather than L1. In this study, Spanish and Dutch users of English as an L2 identified American English consonants in a constant intervocalic context. Their performance was compared with that of L1 (British English) listeners, under quiet conditions and when the speech was masked by speech from another talker or by noise. Masking affected performance more for the Spanish listeners than for the L1 listeners, but not for the Dutch listeners, whose performance was worse than the L1 case to about the same degree in all conditions. There were, however, large differences in the pattern of results across individual consonants, which were consistent with differences in how consonants are identified in the respective L1s.
  • Cutler, A. (1976). High-stress words are easier to perceive than low-stress words, even when they are equally stressed. Texas Linguistic Forum, 2, 53-57.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1988). Limits on bilingualism [Letters to Nature]. Nature, 340, 229-230. doi:10.1038/340229a0.

    Abstract

    Speech, in any language, is continuous; speakers provide few reliable cues to the boundaries of words, phrases, or other meaningful units. To understand speech, listeners must divide the continuous speech stream into portions that correspond to such units. This segmentation process is so basic to human language comprehension that psycholinguists long assumed that all speakers would do it in the same way. In previous research [1, 2], however, we reported that segmentation routines can be language-specific: speakers of French process spoken words syllable by syllable, but speakers of English do not. French has relatively clear syllable boundaries and syllable-based timing patterns, whereas English has relatively unclear syllable boundaries and stress-based timing; thus syllabic segmentation would work more efficiently in the comprehension of French than in the comprehension of English. Our present study suggests that at this level of language processing, there are limits to bilingualism: a bilingual speaker has one and only one basic language.
  • Cutler, A., Kearns, R., Norris, D., & Scott, D. (1992). Listeners’ responses to extraneous signals coincident with English and French speech. In J. Pittam (Ed.), Proceedings of the 4th Australian International Conference on Speech Science and Technology (pp. 666-671). Canberra: Australian Speech Science and Technology Association.

    Abstract

    English and French listeners performed two tasks - click location and speeded click detection - with both English and French sentences, closely matched for syntactic and phonological structure. Clicks were located more accurately in open- than in closed-class words in both English and French; they were detected more rapidly in open- than in closed-class words in English, but not in French. The two listener groups produced the same pattern of responses, suggesting that higher-level linguistic processing was not involved in these tasks.
  • Cutler, A. (2001). Listening to a second language through the ears of a first. Interpreting, 5, 1-23.
  • Cutler, A. (1976). Phoneme-monitoring reaction time as a function of preceding intonation contour. Perception and Psychophysics, 20, 55-60. Retrieved from http://www.psychonomic.org/search/view.cgi?id=18194.

    Abstract

    An acoustically invariant one-word segment occurred in two versions of one syntactic context. In one version, the preceding intonation contour indicated that a stress would fall at the point where this word occurred. In the other version, the preceding contour predicted reduced stress at that point. Reaction time to the initial phoneme of the word was faster in the former case, despite the fact that no acoustic correlates of stress were present. It is concluded that a part of the sentence comprehension process is the prediction of upcoming sentence accents.
  • Cutler, A. (1992). Proceedings with confidence. New Scientist, (1825), 54.
  • Cutler, A., & Robinson, T. (1992). Response time as a metric for comparison of speech recognition by humans and machines. In J. Ohala, T. Neary, & B. Derwing (Eds.), Proceedings of the Second International Conference on Spoken Language Processing: Vol. 1 (pp. 189-192). Alberta: University of Alberta.

    Abstract

    The performance of automatic speech recognition systems is usually assessed in terms of error rate. Human speech recognition produces few errors, but relative difficulty of processing can be assessed via response time techniques. We report the construction of a measure analogous to response time in a machine recognition system. This measure may be compared directly with human response times. We conducted a trial comparison of this type at the phoneme level, including both tense and lax vowels and a variety of consonant classes. The results suggested similarities between human and machine processing in the case of consonants, but differences in the case of vowels.
  • Cutler, A., & Butterfield, S. (1992). Rhythmic cues to speech segmentation: Evidence from juncture misperception. Journal of Memory and Language, 31, 218-236. doi:10.1016/0749-596X(92)90012-M.

    Abstract

    Segmentation of continuous speech into its component words is a nontrivial task for listeners. Previous work has suggested that listeners develop heuristic segmentation procedures based on experience with the structure of their language; for English, the heuristic is that strong syllables (containing full vowels) are most likely to be the initial syllables of lexical words, whereas weak syllables (containing central, or reduced, vowels) are nonword-initial, or, if word-initial, are grammatical words. This hypothesis is here tested against natural and laboratory-induced missegmentations of continuous speech. Precisely the expected pattern is found: listeners erroneously insert boundaries before strong syllables but delete them before weak syllables; boundaries inserted before strong syllables produce lexical words, while boundaries inserted before weak syllables produce grammatical words.
  • Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14, 113-121. doi:10.1037/0096-1523.14.1.113.

    Abstract

    A model of speech segmentation in a stress language is proposed, according to which the occurrence of a strong syllable triggers segmentation of the speech signal, whereas occurrence of a weak syllable does not trigger segmentation. We report experiments in which listeners detected words embedded in nonsense bisyllables more slowly when the bisyllable had two strong syllables than when it had a strong and a weak syllable; mint was detected more slowly in mintayve than in mintesh. According to our proposed model, this result is an effect of segmentation: When the second syllable is strong, it is segmented from the first syllable, and successful detection of the embedded word therefore requires assembly of speech material across a segmentation position. Speech recognition models involving phonemic or syllabic recoding, or based on strictly left-to-right processes, do not predict this result. It is argued that segmentation at strong syllables in continuous speech recognition serves the purpose of detecting the most efficient locations at which to initiate lexical access.
  • Cutler, A., Mehler, J., Norris, D., & Segui, J. (1992). The monolingual nature of speech segmentation by bilinguals. Cognitive Psychology, 24, 381-410.

    Abstract

    Monolingual French speakers employ a syllable-based procedure in speech segmentation; monolingual English speakers use a stress-based segmentation procedure and do not use the syllable-based procedure. In the present study French-English bilinguals participated in segmentation experiments with English and French materials. Their results as a group did not simply mimic the performance of English monolinguals with English language materials and of French monolinguals with French language materials. Instead, the bilinguals formed two groups, defined by forced choice of a dominant language. Only the French-dominant group showed syllabic segmentation and only with French language materials. The English-dominant group showed no syllabic segmentation in either language. However, the English-dominant group showed stress-based segmentation with English language materials; the French-dominant group did not. We argue that rhythmically based segmentation procedures are mutually exclusive, as a consequence of which speech segmentation by bilinguals is, in one respect at least, functionally monolingual.
  • Cutler, A., & Van Donselaar, W. (2001). Voornaam is not a homophone: Lexical prosody and lexical access in Dutch. Language and Speech, 44, 171-195. doi:10.1177/00238309010440020301.

    Abstract

    Four experiments examined Dutch listeners’ use of suprasegmental information in spoken-word recognition. Isolated syllables excised from minimal stress pairs such as VOORnaam/voorNAAM could be reliably assigned to their source words. In lexical decision, no priming was observed from one member of minimal stress pairs to the other, suggesting that the pairs’ segmental ambiguity was removed by suprasegmental information. Words embedded in nonsense strings were harder to detect if the nonsense string itself formed the beginning of a competing word, but a suprasegmental mismatch to the competing word significantly reduced this inhibition. The same nonsense strings facilitated recognition of the longer words of which they constituted the beginning, but again the facilitation was significantly reduced by suprasegmental mismatch. Together these results indicate that Dutch listeners effectively exploit suprasegmental cues in recognizing spoken words. Nonetheless, suprasegmental mismatch appears to be somewhat less effective in constraining activation than segmental mismatch.
  • Dahan, D., & Gaskell, M. G. (2007). The temporal dynamics of ambiguity resolution: Evidence from spoken-word recognition. Journal of Memory and Language, 57(4), 483-501. doi:10.1016/j.jml.2007.01.001.

    Abstract

    Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures, were displayed. A gating task, in which participants identified the picture associated with gradually lengthening fragments of spoken words, examined the availability of discriminating cues in the speech waveforms for these pairs. There was a clear frequency bias in participants’ responses to short, ambiguous fragments, followed by a temporal window in which discriminating information gradually became available. A visual-world experiment examined speech-contingent eye movements. Fixation analyses suggested that frequency influences lexical competition well beyond the point in the speech signal at which the spoken word has been fully discriminated from its competitor (as identified using gating). Taken together, these data support models in which the processing dynamics of lexical activation are a limiting factor on recognition speed, over and above the temporal unfolding of the speech signal.
  • Damian, M. F., Vigliocco, G., & Levelt, W. J. M. (2001). Effects of semantic context in the naming of pictures and words. Cognition, 81, B77-B86. doi:10.1016/S0010-0277(01)00135-4.

    Abstract

    Two experiments investigated whether lexical retrieval for speaking can be characterized as a competitive process by assessing the effects of semantic context on picture and word naming in German. In Experiment 1 we demonstrated that pictures are named slower in the context of same-category items than in the context of items from various semantic categories, replicating findings by Kroll and Stewart (Journal of Memory and Language, 33 (1994) 149). In Experiment 2 we used words instead of pictures. Participants either named the words in the context of same- or different-category items, or produced the words together with their corresponding determiner. While in the former condition words were named faster in the context of same-category items than of different-category items, the opposite pattern was obtained for the latter condition. These findings confirm the claim that the interfering effect of semantic context reflects competition in the retrieval of lexical entries in speaking.
  • Davidson, D. J., & Indefrey, P. (2007). An inverse relation between event-related and time–frequency violation responses in sentence processing. Brain Research, 1158, 81-92. doi:10.1016/j.brainres.2007.04.082.

    Abstract

    The relationship between semantic and grammatical processing in sentence comprehension was investigated by examining event-related potential (ERP) and event-related power changes in response to semantic and grammatical violations. Sentences with semantic, phrase structure, or number violations and matched controls were presented serially (1.25 words/s) to 20 participants while EEG was recorded. Semantic violations were associated with an N400 effect and a theta band increase in power, while grammatical violations were associated with a P600 effect and an alpha/beta band decrease in power. A quartile analysis showed that for both types of violations, larger average violation effects were associated with lower relative amplitudes of oscillatory activity, implying an inverse relation between ERP amplitude and event-related power magnitude change in sentence processing.
  • Dediu, D. (2007). Non-spurious correlations between genetic and linguistic diversities in the context of human evolution. PhD Thesis, University of Edinburgh, Edinburgh, UK.
  • Dediu, D., & Ladd, D. R. (2007). Linguistic tone is related to the population frequency of the adaptive haplogroups of two brain size genes, ASPM and Microcephalin. Proceedings of the National Academy of Sciences of the United States of America, 104, 10944-10949. doi:10.1073/pnas.0610848104.

    Abstract

    The correlations between interpopulation genetic and linguistic diversities are mostly noncausal (spurious), being due to historical processes and geographical factors that shape them in similar ways. Studies of such correlations usually consider allele frequencies and linguistic groupings (dialects, languages, linguistic families or phyla), sometimes controlling for geographic, topographic, or ecological factors. Here, we consider the relation between allele frequencies and linguistic typological features. Specifically, we focus on the derived haplogroups of the brain growth and development-related genes ASPM and Microcephalin, which show signs of natural selection and a marked geographic structure, and on linguistic tone, the use of voice pitch to convey lexical or grammatical distinctions. We hypothesize that there is a relationship between the population frequency of these two alleles and the presence of linguistic tone and test this hypothesis relative to a large database (983 alleles and 26 linguistic features in 49 populations), showing that it is not due to the usual explanatory factors represented by geography and history. The relationship between genetic and linguistic diversity in this case may be causal: certain alleles can bias language acquisition or processing and thereby influence the trajectory of language change through iterated cultural transmission.
  • Dietrich, C., Swingley, D., & Werker, J. F. (2007). Native language governs interpretation of salient speech sound differences at 18 months. Proceedings of the National Academy of Sciences of the USA, 104(41), 16027-16031.

    Abstract

    One of the first steps infants take in learning their native language is to discover its set of speech-sound categories. This early development is shown when infants begin to lose the ability to differentiate some of the speech sounds their language does not use, while retaining or improving discrimination of language-relevant sounds. However, this aspect of early phonological tuning is not sufficient for language learning. Children must also discover which of the phonetic cues that are used in their language serve to signal lexical distinctions. Phonetic variation that is readily discriminable to all children may indicate two different words in one language but only one word in another. Here, we provide evidence that the language background of 1.5-year-olds affects their interpretation of phonetic variation in word learning, and we show that young children interpret salient phonetic variation in language-specific ways. Three experiments with a total of 104 children compared Dutch- and English-learning 18-month-olds' responses to novel words varying in vowel duration or vowel quality. Dutch learners interpreted vowel duration as lexically contrastive, but English learners did not, in keeping with properties of Dutch and English. Both groups performed equivalently when differentiating words varying in vowel quality. Thus, at one and a half years, children's phonological knowledge already guides their interpretation of salient phonetic variation. We argue that early phonological learning is not just a matter of maintaining the ability to distinguish language-relevant phonetic cues. Learning also requires phonological interpretation at appropriate levels of linguistic analysis.
  • Dimroth, C., & Klein, W. (2007). Den Erwachsenen überlegen: Kinder entwickeln beim Sprachenlernen besondere Techniken und sind erfolgreicher als ältere Menschen. Tagesspiegel, 19737, B6-B6.

    Abstract

    The younger, the better? This paper discusses second language learning at different ages and takes a critical look at generalizations of the kind ‘the younger, the better’. It is argued that these generalizations do not apply across the board. Age-related differences, such as the amount of linguistic knowledge, prior experience as a language user, or more or less advanced communicative needs, affect different components of the language system to different degrees, and can even be an advantage for the early development of simple communicative systems.
  • Dobel, C. E., Meyer, A. S., & Levelt, W. J. M. (2001). Registrierung von Augenbewegungen bei Studien zur Sprachproduktion. In A. Zimmer (Ed.), Experimentelle Psychologie. Proceedings of 43. Tagung experimentell arbeitender Psychologen (pp. 116-122). Lengerich, Germany: Pabst Science Publishers.
  • Dobel, C., Pulvermüller, F., Härle, M., Cohen, R., Köbbel, P., Schönle, P. W., & Rockstroh, B. (2001). Syntactic and semantic processing in the healthy and aphasic human brain. Experimental Brain Research, 140(1), 77-85. doi:10.1007/s002210100794.

    Abstract

    A syntactic and a semantic task were performed by German-speaking healthy subjects and aphasics with lesions in the dominant left hemisphere. In both tasks, pictures of objects were presented that had to be classified by pressing buttons. The classification was into grammatical gender in the syntactic task (masculine or feminine gender?) and into semantic category in the semantic task (man- or nature-made?). Behavioral data revealed a significant Group by Task interaction, with aphasics showing the most pronounced problems with syntax. Brain event-related potentials 300–600 ms following picture onset showed different task-dependent laterality patterns in the two groups. In controls, the syntax task induced a left-lateralized negative ERP, whereas the semantic task produced more symmetric responses over the hemispheres. The opposite was the case in the patients, where, paradoxically, stronger laterality of physiological brain responses emerged in the semantic task than in the syntactic task. We interpret these data based on neuro-psycholinguistic models of word processing and current theories about the roles of the hemispheres in language recovery.
  • Drude, S. (2001). Entschlüsselung einer unbekannten Indianersprache: Ein Projekt zur Dokumentation der bedrohten brasilianischen Indianersprache Awetí. Fundiert: Das Wissenschaftsmagazin der Freien Universität Berlin, 2, 112-121. Retrieved from http://www.elfenbeinturm.net/archiv/2001/lust3.html.

    Abstract

    The Awetí are a small indigenous group in central Brazil that has so far had little contact with whites. As part of a Volkswagen Foundation programme for the documentation of endangered languages, our author will visit the Awetí again and reports, as the “younger brother of the chief”, on his efforts to record the Awetí language for future generations.
  • Duffield, N., Matsuo, A., & Roberts, L. (2007). Acceptable ungrammaticality in sentence matching. Second Language Research, 23(2), 155-177. doi:10.1177/0267658307076544.

    Abstract

    This paper presents results from a new set of experiments using the sentence matching paradigm (Forster, 1979; Freedman & Forster, 1985; see also Bley-Vroman & Masterson, 1989), investigating native speakers’ and L2 learners’ knowledge of constraints on clitic placement in French. Our purpose is three-fold: (i) to shed more light on the contrasts between native speakers and L2 learners observed in previous experiments, especially Duffield & White (1999) and Duffield, White, Bruhn de Garavito, Montrul & Prévost (2002); (ii) to address specific criticisms of the sentence-matching paradigm leveled by Gass (2001); and (iii) to provide a firm empirical basis for follow-up experiments with L2 learners.
  • Dunn, M., Foley, R., Levinson, S. C., Reesink, G., & Terrill, A. (2007). Statistical reasoning in the evaluation of typological diversity in Island Melanesia. Oceanic Linguistics, 46(2), 388-403.

    Abstract

    This paper builds on previous work in which we attempted to retrieve a phylogenetic signal using abstract structural features alone, as opposed to cognate sets, drawn from a sample of Island Melanesian languages, both Oceanic (Austronesian) and (non-Austronesian) Papuan (Science 2005, 309: 2072-75). Here we clarify a number of misunderstandings of this approach, referring particularly to the critique by Mark Donohue and Simon Musgrave (in this same issue of Oceanic Linguistics), in which they fail to appreciate the statistical principles underlying computational phylogenetic methods. We also present new analyses that provide stronger evidence supporting the hypotheses put forward in our original paper: a reanalysis using Bayesian phylogenetic inference demonstrates the robustness of the data and methods, and provides a substantial improvement over the parsimony method used in our earlier paper. We further demonstrate, using the technique of spatial autocorrelation, that neither proximity nor Oceanic contact can be a major determinant of the pattern of structural variation of the Papuan languages, and thus that the phylogenetic relatedness of the Papuan languages remains a serious hypothesis.
  • Dunn, M., Margetts, A., Meira, S., & Terrill, A. (2007). Four languages from the lower end of the typology of locative predication. Linguistics, 45, 873-892. doi:10.1515/LING.2007.026.

    Abstract

    As proposed by Ameka and Levinson (this issue), locative verb systems can be classified into four types according to the number of verbs distinguished. This article addresses the lower extreme of this typology: languages which offer no choice of verb in the basic locative function (BLF). These languages have either a single locative verb, or do not use verbs at all in the basic locative construction (BLC, the construction used to encode the BLF). A close analysis is presented of the behavior of BLF predicate types in four genetically diverse languages: Chukchi (Chukotko-Kamchatkan, Russian Arctic) and Lavukaleve (Papuan isolate, Solomon Islands), which have a BLC with the language’s normal copula/existential verb; Tiriyó (Cariban/Taranoan, Brazil), which has an optional copula in the BLC; and Saliba (Austronesian/Western Oceanic, Papua New Guinea), a language with a verbless clause as the BLC. The status of these languages in the typology of positional verb systems is reviewed, and other relevant typological generalizations are discussed.
  • Dunn, M., & Ross, M. (2007). Is Kazukuru really non-Austronesian? Oceanic Linguistics, 46(1), 210-231. doi:10.1353/ol.2007.0018.

    Abstract

    Kazukuru is an extinct language, originally spoken in the inland of the western part of the island of New Georgia, Solomon Islands, and attested by very limited historical sources. Kazukuru has generally been considered to be a Papuan, that is, non-Austronesian, language, mostly on the basis of its lexicon. Reevaluation of the available data suggests a high likelihood that Kazukuru was in fact an Oceanic Austronesian language. Pronominal paradigms are clearly of Austronesian origin, and many other aspects of language structure retrievable from the limited data are also congruent with regional Oceanic Austronesian typology. The extent and possible causes of Kazukuru lexical deviations from the Austronesian norm are evaluated and discussed.
  • Enfield, N. J., Kita, S., & De Ruiter, J. P. (2007). Primary and secondary pragmatic functions of pointing gestures. Journal of Pragmatics, 39(10), 1722-1741. doi:10.1016/j.pragma.2007.03.001.

    Abstract

    This article presents a study of a set of pointing gestures produced together with speech in a corpus of video-recorded “locality description” interviews in rural Laos. In a restricted set of the observed gestures (we did not consider gestures with special hand shapes, gestures with arc/tracing motion, or gestures directed at referents within physical reach), two basic formal types of pointing gesture are observed: B-points (large movement, full arm, eye gaze often aligned) and S-points (small movement, hand only, casual articulation). Taking the approach that speech and gesture are structurally integrated in composite utterances, we observe that these types of pointing gesture have distinct pragmatic functions at the utterance level. One type of gesture (usually “big” in form) carries primary, informationally foregrounded information (for saying “where” or “which one”). Infants perform this type of gesture long before they can talk. The second type of gesture (usually “small” in form) carries secondary, informationally backgrounded information which responds to a possible but uncertain lack of referential common ground. We propose that the packaging of the extra locational information into a casual gesture is a way of adding extra information to an utterance without it being on-record that the added information was necessary. This is motivated by the conflict between two general imperatives of communication in social interaction: a social-affiliational imperative not to provide more information than necessary (“Don’t over-tell”), and an informational imperative not to provide less information than necessary (“Don’t under-tell”).
  • Enfield, N. J. (2001). ‘Lip-pointing’: A discussion of form and function with reference to data from Laos. Gesture, 1(2), 185-211. doi:10.1075/gest.1.2.06enf.

    Abstract

    ‘Lip-pointing’ is a widespread but little-documented form of deictic gesture, which may involve not just protruding one or both lips, but also raising the head, sticking out the chin, lifting the eyebrows, among other things. This paper discusses form and function of lip-pointing with reference to a set of examples collected on video in Laos. There are various parameters with respect to which the conventional form of a lip-pointing gesture may vary. There is also a range of ways in which lip-pointing gestures can be coordinated with other kinds of deictic gesture such as various forms of hand pointing. The attested coordinating/sequencing possibilities can be related to specific functional properties of lip-pointing among Lao speakers, particularly in the context of other forms of deictic gesture, which have different functional properties. It is argued that the ‘vector’ of lip-pointing is in fact defined by gaze, and that the lip-pointing action itself (like other kinds of ‘pointing’ involving the head area) is a ‘gaze-switch’, i.e. it indicates that the speaker is now pointing out something with his or her gaze. Finally, I consider the position of lip-pointing in the broader deictic gesture system of Lao speakers, firstly as a ‘lower register’ form, and secondly as a form of deictic gesture which may contrast with forms of hand pointing.
  • Enfield, N. J. (2007). Encoding three-participant events in the Lao clause. Linguistics, 45(3), 509-538. doi:10.1515/LING.2007.016.

    Abstract

    Any language will have a range of predicates that specify three core participants (e.g. 'put', 'show', 'give'), and will conventionally provide a range of constructional types for the expression of these three participants in a structured single-clause or single-sentence event description. This article examines the clausal encoding of three-participant events in Lao, a Tai language of Southeast Asia. There is no possibility in Lao for expression of three full arguments in the core of a single-verb clause (although it is possible to have a third argument in a noncore slot, marked as oblique with a prepositionlike element). Available alternatives include extraposing an argument using a topic-comment construction, incorporating an argument into the verb phrase, and ellipsing one or more contextually retrievable arguments. A more common strategy is verb serialization, for example, where a three-place verb (e.g. 'put') is assisted by an additional verb (typically a verb of handling such as 'carry') that provides a slot for the theme argument (e.g. the transferred object in a putting scene). The event construal encoded by this type of structure decomposes the event into a first stage in which the agent comes into control over a theme, and a second in which the agent performs a controlled action (e.g. of transfer) with respect to that theme and a goal (and/or source). The particular set of strategies that Lao offers for encoding three-participant events — notably, topic-comment strategy, ellipsis strategy, serial verb strategy — conforms with (and is presumably motivated by) the general typological profile of the language. The typological features of Lao are typical for the mainland Southeast Asia area (isolating, topic-prominent, verb-serializing, widespread nominal ellipsis).
  • Enfield, N. J. (2007). [Comment on 'Agency' by Paul Kockelman]. Current Anthropology, 48(3), 392-392. doi:10.1086/512998.
  • Enfield, N. J. (2007). [Review of the book Ethnopragmatics: Understanding discourse in cultural context ed. by Cliff Goddard]. Intercultural Pragmatics, 4(3), 419-433. doi:10.1515/IP.2007.021.
  • Enfield, N. J. (2007). Lao separation verbs and the logic of linguistic event categorization. Cognitive Linguistics, 18(2), 287-296. doi:10.1515/COG.2007.016.

    Abstract

    While there are infinite conceivable events of material separation, those actually encoded in the conventions of a given language's verb semantics number only a few. Furthermore, there appear to be crosslinguistic parallels in the native verbal analysis of this conceptual domain. What are the operative distinctions, and why these? This article analyses a key subset of the bivalent (transitive) verbs of cutting and breaking in Lao. I present a decompositional analysis of the verbs glossed 'cut (off)', 'cut.into.with.placed.blade', 'cut.into.with.moving.blade', and 'snap', pursuing the idea that the attested combinations of sub-events have a natural logic to them. Consideration of the nature of linguistic categories, as distinct from categories in general, suggests that the attested distinctions must have ethnographic and social interactional significance, raising new lines of research for cognitive semantics.
  • Enfield, N. J. (2001). Remarks on John Haiman, 1999. ‘Auxiliation in Khmer: the case of baan.’ Studies in Language 23:1. Studies in Language, 25(1), 115-124. doi:10.1075/sl.25.1.05enf.
  • Ernestus, M., Van Mulken, M., & Baayen, R. H. (2007). Ridders en heiligen in tijd en ruimte: Moderne stylometrische technieken toegepast op Oud-Franse teksten. Taal en Tongval, 58, 1-83.

    Abstract

    This article shows that Old-French literary texts differ systematically in their relative frequencies of syntactic constructions. These frequencies reflect differences in register (poetry versus prose), region (Picardy, Champagne, and Eastern France), time period (until 1250, 1251 – 1300, 1301 – 1350), and genre (hagiography, romance of chivalry, or other).
  • Ernestus, M., & Baayen, R. H. (2007). Paradigmatic effects in auditory word recognition: The case of alternating voice in Dutch. Language and Cognitive Processes, 22(1), 1-24. doi:10.1080/01690960500268303.

    Abstract

    Two lexical decision experiments addressed the role of paradigmatic effects in auditory word recognition. Experiment 1 showed that listeners classified a form with an incorrectly voiced final obstruent more readily as a word if the obstruent is realised as voiced in other forms of that word's morphological paradigm. Moreover, if such was the case, the exact probability of paradigmatic voicing emerged as a significant predictor of the response latencies. A greater probability of voicing correlated with longer response latencies for words correctly realised with voiceless final obstruents. A similar effect of this probability was observed in Experiment 2 for words with completely voiceless or weakly voiced (incompletely neutralised) final obstruents. These data demonstrate the relevance of paradigmatically related complex words for the processing of morphologically simple words in auditory word recognition.
  • Ernestus, M., & Baayen, R. H. (2007). The comprehension of acoustically reduced morphologically complex words: The roles of deletion, duration, and frequency of occurrence. In J. Trouvain, & W. J. Barry (Eds.), Proceedings of the 16th International Congress of Phonetic Sciences (ICPhs 2007) (pp. 773-776). Dudweiler: Pirrot.

    Abstract

    This study addresses the roles of segment deletion, durational reduction, and frequency of use in the comprehension of morphologically complex words. We report two auditory lexical decision experiments with reduced and unreduced prefixed Dutch words. We found that segment deletions as such delayed comprehension. Simultaneously, however, longer durations of the different parts of the words appeared to increase lexical competition, either from the word’s stem (Experiment 1) or from the word’s morphological continuation forms (Experiment 2). Increased lexical competition especially slowed down the comprehension of low-frequency words, which shows that speakers do not try to meet listeners’ needs when they reduce especially high-frequency words.
  • Essegbey, J., & Ameka, F. K. (2007). "Cut" and "break" verbs in Gbe and Sranan. Journal of Pidgin and Creole Languages, 22(1), 37-55. doi:10.1075/jpcl.22.1.04ess.

    Abstract

    This paper compares “cut” and “break” verbs in four variants of Gbe, namely Anfoe, Anlo, Fon and Ayizo, with those of Sranan. “Cut” verbs are change-of-state verbs that co-lexicalize the type of action that brings about a change, the type of instrument or instrument part, and the manner in which a change occurs. By contrast, “break” verbs co-lexicalize either the type of object or the type of change. It has been hypothesized that “cut” verbs are unergative while “break” verbs are unaccusative; for example, “break” verbs participate in causative alternation constructions but “cut” verbs do not. We show that although there are some differences in the meanings of “cut” and “break” verbs across the Gbe languages, significant generalizations can be made with regard to their lexicalization patterns. By contrast, the meanings of “cut” and “break” verbs in Sranan are closer to those of their etymons in English and Dutch. However, despite the differences in the meanings of “cut” and “break” verbs between the Gbe languages and Sranan, the syntax of the verbs in Sranan is similar to that of the Eastern Gbe variants, namely Fon and Ayizo. We look at the implications of our findings for the relexification hypothesis.
  • Eysenck, M. W., & Van Berkum, J. J. A. (1992). Trait anxiety, defensiveness, and the structure of worry. Personality and Individual Differences, 13(12), 1285-1290. Retrieved from http://www.sciencedirect.com/science//journal/01918869.

    Abstract

    A principal components analysis of the ten scales of the Worry Questionnaire revealed the existence of major worry factors or domains of social evaluation and physical threat, and these factors were confirmed in a subsequent item analysis. Those high in trait anxiety had much higher scores on the Worry Questionnaire than those low in trait anxiety, especially on those scales relating to social evaluation. Scores on the Marlowe-Crowne Social Desirability Scale were negatively related to worry frequency. However, groups of low-anxious and repressed individuals, formed on the basis of their trait anxiety and social desirability scores, did not differ in worry. It was concluded that worry, especially in the social evaluation domain, is of fundamental importance to trait anxiety.
  • Felser, C., & Roberts, L. (2007). Processing wh-dependencies in a second language: A cross-modal priming study. Second Language Research, 23(1), 9-36. doi:10.1177/0267658307071600.

    Abstract

    This study investigates the real-time processing of wh-dependencies by advanced Greek-speaking learners of English using a cross-modal picture priming task. Participants were asked to respond to different types of picture target presented either at structurally defined gap positions, or at pre-gap control positions, while listening to sentences containing indirect-object relative clauses. Our results indicate that the learners processed the experimental sentences differently from both adult native speakers of English and monolingual English-speaking children. Contrary to what has been found for native speakers, the learners' response pattern was not influenced by individual working memory differences. Adult second language learners differed from native speakers with a relatively high reading or listening span in that they did not show any evidence of structurally based antecedent reactivation at the point of the indirect object gap. They also differed from low-span native speakers, however, in that they showed evidence of maintained antecedent activation during the processing of the experimental sentences. Whereas the localized priming effect observed in the high-span controls is indicative of trace-based antecedent reactivation in native sentence processing, the results from the Greek-speaking learners support the hypothesis that the mental representations built during non-native language processing lack abstract linguistic structure such as movement traces.
  • Fernald, A., Swingley, D., & Pinto, J. P. (2001). When half a word is enough: infants can recognize spoken words using partial phonetic information. Child Development, 72, 1003-1015. doi:10.1111/1467-8624.00331.

    Abstract

    Adults process speech incrementally, rapidly identifying spoken words on the basis of initial phonetic information sufficient to distinguish them from alternatives. In this study, infants in the second year also made use of word-initial information to understand fluent speech. The time course of comprehension was examined by tracking infants' eye movements as they looked at pictures in response to familiar spoken words, presented both as whole words in intact form and as partial words in which only the first 300 ms of the word was heard. In Experiment 1, 21-month-old infants (N = 32) recognized partial words as quickly and reliably as they recognized whole words; in Experiment 2, these findings were replicated with 18-month-old infants (N = 32). Combining the data from both experiments, efficiency in spoken word recognition was examined in relation to level of lexical development. Infants with more than 100 words in their productive vocabulary were more accurate in identifying familiar words than were infants with less than 60 words. Grouped by response speed, infants with faster mean reaction times were more accurate in word recognition and also had larger productive vocabularies than infants with slower response latencies. These results show that infants in the second year are capable of incremental speech processing even before entering the vocabulary spurt, and that lexical growth is associated with increased speed and efficiency in understanding spoken language.
  • Fisher, S. E. (2007). Molecular windows into speech and language disorders. Folia Phoniatrica et Logopaedica, 59, 130-140. doi:10.1159/000101771.

    Abstract

    Why do some children fail to acquire speech and language skills despite adequate environmental input and overtly normal neurological and anatomical development? It has been suspected for several decades, based on indirect evidence, that the human genome might hold some answers to this enigma. These suspicions have recently received dramatic confirmation with the discovery of specific genetic changes which appear sufficient to derail speech and language development. Indeed, researchers are already using information from genetic studies to aid early diagnosis and to shed light on the neural pathways that are perturbed in these inherited forms of speech and language disorder. Thus, we have entered an exciting era for dissecting the neural bases of human communication, one which takes genes and molecules as a starting point. In the current article I explain how this recent paradigm shift has occurred and describe the new vistas that have opened up. I demonstrate ways of bridging the gaps between molecules, neurons and the brain, which will provide a new understanding of the aetiology of speech and language impairments.
  • Fitz, H. (2001). Church's Thesis: A philosophical critique of modern computability theory. Master Thesis, Freie Universität Berlin.
  • FitzPatrick, I. (2007). Effects of sentence context in L2 natural speech comprehension. Nijmegen CNS, 2, 43-56.

    Abstract

    Electrophysiological studies consistently find N400 effects of semantic incongruity in non-native written language comprehension. Typically these N400 effects are later than N400 effects in native comprehension, suggesting that semantic processing in one’s second language (L2) may be delayed compared to one’s first language (L1). In this study we were firstly interested in replicating the semantic incongruity effect using natural auditory speech, which poses strong demands on the speed of processing. Secondly, we wished to investigate whether a possible delay in semantic processing might be due to bilinguals accessing lexical items from both their L1 and L2 (a more extensive lexical search). We recorded EEG from 30 Dutch-English bilinguals who listened to English sentences in which the sentence-final word was: (1) semantically fitting, (2) semantically incongruent, (3) initially congruent: semantically incongruent, but sharing initial phonemes with the most probable sentence completion within the L2, (4) semantically incongruent, but sharing initial phonemes with the L1 translation equivalent of the most probable sentence completion. We found an N400 effect in each of the semantically incongruent conditions. This N400 effect was significantly delayed to L2 words that were initially congruent with the sentence context. We found no effect of initial overlap with L1 translation equivalents. Taken together these findings firstly demonstrate that non-native listeners are sensitive to semantic incongruity in natural speech, secondly indicate that semantic integration in non-native listening can start on the basis of word initial phonemes, and finally suggest that during L2 sentence processing listeners do not access the L1 lexicon.
  • Flecken, M., & Schmiedtova, B. (2007). The expression of simultaneity in L1 Dutch. Toegepaste Taalwetenschap in Artikelen, 77(1), 67-78.
  • Floyd, S. (2007). Changing times and local terms on the Rio Negro, Brazil: Amazonian ways of depolarizing epistemology, chronology and cultural change. Latin American and Caribbean Ethnic Studies, 2(2), 111-140. doi:10.1080/17442220701489548.

    Abstract

    Partway along the vast waterways of Brazil's middle Rio Negro, upstream from urban Manaus and downstream from the ethnographically famous Northwest Amazon region, is the town of Castanheiro, whose inhabitants skillfully negotiate a space between the polar extremes of 'traditional' and 'acculturated.' This paper takes an ethnographic look at the non-polarizing terms that these rural Amazonian people use for talking about cultural change. While popular and academic discourses alike have often framed cultural change in the Amazon as a linear process, Amazonian discourse provides resources for describing change as situated in shifting fields of knowledge of the social and physical environments, better capturing its non-linear complexity and ambiguity.
  • Francks, C., Maegawa, S., Laurén, J., Abrahams, B. S., Velayos-Baeza, A., Medland, S. E., Colella, S., Groszer, M., McAuley, E. Z., Caffrey, T. M., Timmusk, T., Pruunsild, P., Koppel, I., Lind, P. A., Matsumoto-Itaba, N., Nicod, J., Xiong, L., Joober, R., Enard, W., Krinsky, B., Nanba, E., Richardson, A. J., Riley, B. P., Martin, N. G., Strittmatter, S. M., Möller, H.-J., Rujescu, D., St Clair, D., Muglia, P., Roos, J. L., Fisher, S. E., Wade-Martins, R., Rouleau, G. A., Stein, J. F., Karayiorgou, M., Geschwind, D. H., Ragoussis, J., Kendler, K. S., Airaksinen, M. S., Oshimura, M., DeLisi, L. E., & Monaco, A. P. (2007). LRRTM1 on chromosome 2p12 is a maternally suppressed gene that is associated paternally with handedness and schizophrenia. Molecular Psychiatry, 12, 1129-1139. doi:10.1038/sj.mp.4002053.

    Abstract

    Left-right asymmetrical brain function underlies much of human cognition, behavior and emotion. Abnormalities of cerebral asymmetry are associated with schizophrenia and other neuropsychiatric disorders. The molecular, developmental and evolutionary origins of human brain asymmetry are unknown. We found significant association of a haplotype upstream of the gene LRRTM1 (Leucine-rich repeat transmembrane neuronal 1) with a quantitative measure of human handedness in a set of dyslexic siblings, when the haplotype was inherited paternally (P=0.00002). While we were unable to find this effect in an epidemiological set of twin-based sibships, we did find that the same haplotype is overtransmitted paternally to individuals with schizophrenia/schizoaffective disorder in a study of 1002 affected families (P=0.0014). We then found direct confirmatory evidence that LRRTM1 is an imprinted gene in humans that shows a variable pattern of maternal downregulation. We also showed that LRRTM1 is expressed during the development of specific forebrain structures, and thus could influence neuronal differentiation and connectivity. This is the first potential genetic influence on human handedness to be identified, and the first putative genetic effect on variability in human brain asymmetry. LRRTM1 is a candidate gene for involvement in several common neurodevelopmental disorders, and may have played a role in human cognitive and behavioral evolution.
